Mirror of https://github.com/optim-enterprises-bv/kubernetes.git (synced 2025-11-02 03:08:15 +00:00)
vendor: bump runc to v1.2.1
For one thing, this release decouples device management from libcontainer/cgroups. You can see the result of this in a dropped cilium/ebpf dependency (which is only needed for device management).

NOTE that due to an issue with go mod / go list, github.com/opencontainers/runc had to be added to hack/unwanted-dependencies.json under x/exp. This is bogus because opencontainers/runc does not use x/exp directly, only via the cilium/ebpf dependency (which is not vendored here).

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
vendor/github.com/cilium/ebpf/.clang-format (19 lines deleted; generated, vendored)
@@ -1,19 +0,0 @@
---
Language: Cpp
BasedOnStyle: LLVM
AlignAfterOpenBracket: DontAlign
AlignConsecutiveAssignments: true
AlignEscapedNewlines: DontAlign
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortFunctionsOnASingleLine: false
BreakBeforeBraces: Attach
IndentWidth: 4
KeepEmptyLinesAtTheStartOfBlocks: false
TabWidth: 4
UseTab: ForContinuationAndIndentation
ColumnLimit: 1000
# Go compiler comments need to stay unindented.
CommentPragmas: '^go:.*'
...
vendor/github.com/cilium/ebpf/.gitignore (14 lines deleted; generated, vendored)
@@ -1,14 +0,0 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
*.o
!*_bpf*.o

# Test binary, build with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out
vendor/github.com/cilium/ebpf/.golangci.yaml (26 lines deleted; generated, vendored)
@@ -1,26 +0,0 @@
---
issues:
  exclude-rules:
    # syscall param structs will have unused fields in Go code.
    - path: syscall.*.go
      linters:
        - structcheck

linters:
  disable-all: true
  enable:
    - errcheck
    - goimports
    - gosimple
    - govet
    - ineffassign
    - misspell
    - staticcheck
    - typecheck
    - unused
    - gofmt

  # Could be enabled later:
  # - gocyclo
  # - maligned
  # - gosec
vendor/github.com/cilium/ebpf/ARCHITECTURE.md (92 lines deleted; generated, vendored)
@@ -1,92 +0,0 @@
Architecture of the library
===

```mermaid
graph RL
    Program --> ProgramSpec --> ELF
    btf.Spec --> ELF
    Map --> MapSpec --> ELF
    Links --> Map & Program
    ProgramSpec -.-> btf.Spec
    MapSpec -.-> btf.Spec
    subgraph Collection
        Program & Map
    end
    subgraph CollectionSpec
        ProgramSpec & MapSpec & btf.Spec
    end
```

ELF
---

BPF is usually produced by using Clang to compile a subset of C. Clang outputs
an ELF file which contains program byte code (aka BPF), but also metadata for
maps used by the program. The metadata follows the conventions set by libbpf
shipped with the kernel. Certain ELF sections have special meaning
and contain structures defined by libbpf. Newer versions of clang emit
additional metadata in [BPF Type Format](#BTF).

The library aims to be compatible with libbpf so that moving from a C toolchain
to a Go one creates little friction. To that end, the [ELF reader](elf_reader.go)
is tested against the Linux selftests and avoids introducing custom behaviour
if possible.

The output of the ELF reader is a `CollectionSpec` which encodes
all of the information contained in the ELF in a form that is easy to work with
in Go. The returned `CollectionSpec` should be deterministic: reading the same ELF
file on different systems must produce the same output.
As a corollary, any changes that depend on the runtime environment like the
current kernel version must happen when creating [Objects](#Objects).

Specifications
---

`CollectionSpec` is a very simple container for `ProgramSpec`, `MapSpec` and
`btf.Spec`. Avoid adding functionality to it if possible.

`ProgramSpec` and `MapSpec` are blueprints for in-kernel
objects and contain everything necessary to execute the relevant `bpf(2)`
syscalls. They refer to `btf.Spec` for type information such as `Map` key and
value types.

The [asm](asm/) package provides an assembler that can be used to generate
`ProgramSpec` on the fly.

Objects
---

`Program` and `Map` are the result of loading specifications into the kernel.
Features that depend on knowledge of the current system (e.g kernel version)
are implemented at this point.

Sometimes loading a spec will fail because the kernel is too old, or a feature is not
enabled. There are multiple ways the library deals with that:

* Fallback: older kernels don't allow naming programs and maps. The library
  automatically detects support for names, and omits them during load if
  necessary. This works since name is primarily a debug aid.

* Sentinel error: sometimes it's possible to detect that a feature isn't available.
  In that case the library will return an error wrapping `ErrNotSupported`.
  This is also useful to skip tests that can't run on the current kernel.

Once program and map objects are loaded they expose the kernel's low-level API,
e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer
wrappers on top of the low-level API, like `MapIterator`. The low-level API is
useful when our higher-level API doesn't support a particular use case.

Links
---

Programs can be attached to many different points in the kernel and newer BPF hooks
tend to use bpf_link to do so. Older hooks unfortunately use a combination of
syscalls, netlink messages, etc. Adding support for a new link type should not
pull in large dependencies like netlink, so XDP programs or tracepoints are
out of scope.

Each bpf_link_type has one corresponding Go type, e.g. `link.tracing` corresponds
to BPF_LINK_TRACING. In general, these types should be unexported as long as they
don't export methods outside of the Link interface. Each Go type may have multiple
exported constructors. For example `AttachTracing` and `AttachLSM` create a
tracing link, but are distinct functions since they may require different arguments.
vendor/github.com/cilium/ebpf/CODE_OF_CONDUCT.md (46 lines deleted; generated, vendored)
@@ -1,46 +0,0 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at nathanjsweet at gmail dot com or i at lmb dot io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/
vendor/github.com/cilium/ebpf/CONTRIBUTING.md (48 lines deleted; generated, vendored)
@@ -1,48 +0,0 @@
# How to contribute

Development is on [GitHub](https://github.com/cilium/ebpf) and contributions in
the form of pull requests and issues reporting bugs or suggesting new features
are welcome. Please take a look at [the architecture](ARCHITECTURE.md) to get
a better understanding for the high-level goals.

## Adding a new feature

1. [Join](https://ebpf.io/slack) the
   [#ebpf-go](https://cilium.slack.com/messages/ebpf-go) channel to discuss your requirements and how the feature can be implemented. The most important part is figuring out how much new exported API is necessary. **The less new API is required the easier it will be to land the feature.**
2. (*optional*) Create a draft PR if you want to discuss the implementation or have hit a problem. It's fine if this doesn't compile or contains debug statements.
3. Create a PR that is ready to merge. This must pass CI and have tests.

### API stability

The library doesn't guarantee the stability of its API at the moment.

1. If possible avoid breakage by introducing new API and deprecating the old one
   at the same time. If an API was deprecated in v0.x it can be removed in v0.x+1.
2. Breaking API in a way that causes compilation failures is acceptable but must
   have good reasons.
3. Changing the semantics of the API without causing compilation failures is
   heavily discouraged.

## Running the tests

Many of the tests require privileges to set resource limits and load eBPF code.
The easiest way to obtain these is to run the tests with `sudo`.

To test the current package with your local kernel you can simply run:
```
go test -exec sudo ./...
```

To test the current package with a different kernel version you can use the [run-tests.sh](run-tests.sh) script.
It requires [virtme](https://github.com/amluto/virtme) and qemu to be installed.

Examples:

```bash
# Run all tests on a 5.4 kernel
./run-tests.sh 5.4

# Run a subset of tests:
./run-tests.sh 5.4 ./link
```

vendor/github.com/cilium/ebpf/LICENSE (23 lines deleted; generated, vendored)
@@ -1,23 +0,0 @@
MIT License

Copyright (c) 2017 Nathan Sweet
Copyright (c) 2018, 2019 Cloudflare
Copyright (c) 2019 Authors of Cilium

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
vendor/github.com/cilium/ebpf/MAINTAINERS.md (3 lines deleted; generated, vendored)
@@ -1,3 +0,0 @@
# Maintainers

Maintainers can be found in the [Cilium Maintainers file](https://github.com/cilium/community/blob/main/roles/Maintainers.md)
vendor/github.com/cilium/ebpf/Makefile (115 lines deleted; generated, vendored)
@@ -1,115 +0,0 @@
# The development version of clang is distributed as the 'clang' binary,
# while stable/released versions have a version number attached.
# Pin the default clang to a stable version.
CLANG ?= clang-14
STRIP ?= llvm-strip-14
OBJCOPY ?= llvm-objcopy-14
CFLAGS := -O2 -g -Wall -Werror $(CFLAGS)

CI_KERNEL_URL ?= https://github.com/cilium/ci-kernels/raw/master/

# Obtain an absolute path to the directory of the Makefile.
# Assume the Makefile is in the root of the repository.
REPODIR := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
UIDGID := $(shell stat -c '%u:%g' ${REPODIR})

# Prefer podman if installed, otherwise use docker.
# Note: Setting the var at runtime will always override.
CONTAINER_ENGINE ?= $(if $(shell command -v podman), podman, docker)
CONTAINER_RUN_ARGS ?= $(if $(filter ${CONTAINER_ENGINE}, podman), --log-driver=none, --user "${UIDGID}")

IMAGE := $(shell cat ${REPODIR}/testdata/docker/IMAGE)
VERSION := $(shell cat ${REPODIR}/testdata/docker/VERSION)


# clang <8 doesn't tag relocs properly (STT_NOTYPE)
# clang 9 is the first version emitting BTF
TARGETS := \
	testdata/loader-clang-7 \
	testdata/loader-clang-9 \
	testdata/loader-$(CLANG) \
	testdata/manyprogs \
	testdata/btf_map_init \
	testdata/invalid_map \
	testdata/raw_tracepoint \
	testdata/invalid_map_static \
	testdata/invalid_btf_map_init \
	testdata/strings \
	testdata/freplace \
	testdata/iproute2_map_compat \
	testdata/map_spin_lock \
	testdata/subprog_reloc \
	testdata/fwd_decl \
	testdata/kconfig \
	testdata/kconfig_config \
	testdata/kfunc \
	testdata/invalid-kfunc \
	testdata/kfunc-kmod \
	btf/testdata/relocs \
	btf/testdata/relocs_read \
	btf/testdata/relocs_read_tgt \
	cmd/bpf2go/testdata/minimal

.PHONY: all clean container-all container-shell generate

.DEFAULT_TARGET = container-all

# Build all ELF binaries using a containerized LLVM toolchain.
container-all:
	+${CONTAINER_ENGINE} run --rm -ti ${CONTAINER_RUN_ARGS} \
		-v "${REPODIR}":/ebpf -w /ebpf --env MAKEFLAGS \
		--env CFLAGS="-fdebug-prefix-map=/ebpf=." \
		--env HOME="/tmp" \
		"${IMAGE}:${VERSION}" \
		make all

# (debug) Drop the user into a shell inside the container as root.
container-shell:
	${CONTAINER_ENGINE} run --rm -ti \
		-v "${REPODIR}":/ebpf -w /ebpf \
		"${IMAGE}:${VERSION}"

clean:
	-$(RM) testdata/*.elf
	-$(RM) btf/testdata/*.elf

format:
	find . -type f -name "*.c" | xargs clang-format -i

all: format $(addsuffix -el.elf,$(TARGETS)) $(addsuffix -eb.elf,$(TARGETS)) generate
	ln -srf testdata/loader-$(CLANG)-el.elf testdata/loader-el.elf
	ln -srf testdata/loader-$(CLANG)-eb.elf testdata/loader-eb.elf

# $BPF_CLANG is used in go:generate invocations.
generate: export BPF_CLANG := $(CLANG)
generate: export BPF_CFLAGS := $(CFLAGS)
generate:
	go generate ./...

testdata/loader-%-el.elf: testdata/loader.c
	$* $(CFLAGS) -target bpfel -c $< -o $@
	$(STRIP) -g $@

testdata/loader-%-eb.elf: testdata/loader.c
	$* $(CFLAGS) -target bpfeb -c $< -o $@
	$(STRIP) -g $@

%-el.elf: %.c
	$(CLANG) $(CFLAGS) -target bpfel -c $< -o $@
	$(STRIP) -g $@

%-eb.elf : %.c
	$(CLANG) $(CFLAGS) -target bpfeb -c $< -o $@
	$(STRIP) -g $@

.PHONY: generate-btf
generate-btf: KERNEL_VERSION?=5.19
generate-btf:
	$(eval TMP := $(shell mktemp -d))
	curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION).bz" -o "$(TMP)/bzImage"
	/lib/modules/$(uname -r)/build/scripts/extract-vmlinux "$(TMP)/bzImage" > "$(TMP)/vmlinux"
	$(OBJCOPY) --dump-section .BTF=/dev/stdout "$(TMP)/vmlinux" /dev/null | gzip > "btf/testdata/vmlinux.btf.gz"
	curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION)-selftests-bpf.tgz" -o "$(TMP)/selftests.tgz"
	tar -xf "$(TMP)/selftests.tgz" --to-stdout tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.ko | \
		$(OBJCOPY) --dump-section .BTF="btf/testdata/btf_testmod.btf" - /dev/null
	$(RM) -r "$(TMP)"
vendor/github.com/cilium/ebpf/README.md (82 lines deleted; generated, vendored)
@@ -1,82 +0,0 @@
# eBPF

[](https://pkg.go.dev/github.com/cilium/ebpf)



ebpf-go is a pure Go library that provides utilities for loading, compiling, and
debugging eBPF programs. It has minimal external dependencies and is intended to
be used in long running processes.

See [ebpf.io](https://ebpf.io) for complementary projects from the wider eBPF
ecosystem.

## Getting Started

A small collection of Go and eBPF programs that serve as examples for building
your own tools can be found under [examples/](examples/).

[Contributions](CONTRIBUTING.md) are highly encouraged, as they highlight certain use cases of
eBPF and the library, and help shape the future of the project.

## Getting Help

The community actively monitors our [GitHub Discussions](https://github.com/cilium/ebpf/discussions) page.
Please search for existing threads before starting a new one. Refrain from
opening issues on the bug tracker if you're just starting out or if you're not
sure if something is a bug in the library code.

Alternatively, [join](https://ebpf.io/slack) the
[#ebpf-go](https://cilium.slack.com/messages/ebpf-go) channel on Slack if you
have other questions regarding the project. Note that this channel is ephemeral
and has its history erased past a certain point, which is less helpful for
others running into the same problem later.

## Packages

This library includes the following packages:

* [asm](https://pkg.go.dev/github.com/cilium/ebpf/asm) contains a basic
  assembler, allowing you to write eBPF assembly instructions directly
  within your Go code. (You don't need to use this if you prefer to write your eBPF program in C.)
* [cmd/bpf2go](https://pkg.go.dev/github.com/cilium/ebpf/cmd/bpf2go) allows
  compiling and embedding eBPF programs written in C within Go code. As well as
  compiling the C code, it auto-generates Go code for loading and manipulating
  the eBPF program and map objects.
* [link](https://pkg.go.dev/github.com/cilium/ebpf/link) allows attaching eBPF
  to various hooks
* [perf](https://pkg.go.dev/github.com/cilium/ebpf/perf) allows reading from a
  `PERF_EVENT_ARRAY`
* [ringbuf](https://pkg.go.dev/github.com/cilium/ebpf/ringbuf) allows reading from a
  `BPF_MAP_TYPE_RINGBUF` map
* [features](https://pkg.go.dev/github.com/cilium/ebpf/features) implements the equivalent
  of `bpftool feature probe` for discovering BPF-related kernel features using native Go.
* [rlimit](https://pkg.go.dev/github.com/cilium/ebpf/rlimit) provides a convenient API to lift
  the `RLIMIT_MEMLOCK` constraint on kernels before 5.11.
* [btf](https://pkg.go.dev/github.com/cilium/ebpf/btf) allows reading the BPF Type Format.

## Requirements

* A version of Go that is [supported by
  upstream](https://golang.org/doc/devel/release.html#policy)
* Linux >= 4.9. CI is run against kernel.org LTS releases. 4.4 should work but is
  not tested against.

## Regenerating Testdata

Run `make` in the root of this repository to rebuild testdata in all
subpackages. This requires Docker, as it relies on a standardized build
environment to keep the build output stable.

It is possible to regenerate data using Podman by overriding the `CONTAINER_*`
variables: `CONTAINER_ENGINE=podman CONTAINER_RUN_ARGS= make`.

The toolchain image build files are kept in [testdata/docker/](testdata/docker/).

## License

MIT

### eBPF Gopher

The eBPF honeygopher is based on the Go gopher designed by Renee French.
vendor/github.com/cilium/ebpf/asm/alu.go (149 lines deleted; generated, vendored)
@@ -1,149 +0,0 @@
package asm

//go:generate stringer -output alu_string.go -type=Source,Endianness,ALUOp

// Source of ALU / ALU64 / Branch operations
//
//	msb      lsb
//	+----+-+---+
//	|op  |S|cls|
//	+----+-+---+
type Source uint8

const sourceMask OpCode = 0x08

// Source bitmask
const (
	// InvalidSource is returned by getters when invoked
	// on non ALU / branch OpCodes.
	InvalidSource Source = 0xff
	// ImmSource src is from constant
	ImmSource Source = 0x00
	// RegSource src is from register
	RegSource Source = 0x08
)

// The Endianness of a byte swap instruction.
type Endianness uint8

const endianMask = sourceMask

// Endian flags
const (
	InvalidEndian Endianness = 0xff
	// Convert to little endian
	LE Endianness = 0x00
	// Convert to big endian
	BE Endianness = 0x08
)

// ALUOp are ALU / ALU64 operations
//
//	msb      lsb
//	+----+-+---+
//	|OP  |s|cls|
//	+----+-+---+
type ALUOp uint8

const aluMask OpCode = 0xf0

const (
	// InvalidALUOp is returned by getters when invoked
	// on non ALU OpCodes
	InvalidALUOp ALUOp = 0xff
	// Add - addition
	Add ALUOp = 0x00
	// Sub - subtraction
	Sub ALUOp = 0x10
	// Mul - multiplication
	Mul ALUOp = 0x20
	// Div - division
	Div ALUOp = 0x30
	// Or - bitwise or
	Or ALUOp = 0x40
	// And - bitwise and
	And ALUOp = 0x50
	// LSh - bitwise shift left
	LSh ALUOp = 0x60
	// RSh - bitwise shift right
	RSh ALUOp = 0x70
	// Neg - sign/unsign signing bit
	Neg ALUOp = 0x80
	// Mod - modulo
	Mod ALUOp = 0x90
	// Xor - bitwise xor
	Xor ALUOp = 0xa0
	// Mov - move value from one place to another
	Mov ALUOp = 0xb0
	// ArSh - arithmatic shift
	ArSh ALUOp = 0xc0
	// Swap - endian conversions
	Swap ALUOp = 0xd0
)

// HostTo converts from host to another endianness.
func HostTo(endian Endianness, dst Register, size Size) Instruction {
	var imm int64
	switch size {
	case Half:
		imm = 16
	case Word:
		imm = 32
	case DWord:
		imm = 64
	default:
		return Instruction{OpCode: InvalidOpCode}
	}

	return Instruction{
		OpCode:   OpCode(ALUClass).SetALUOp(Swap).SetSource(Source(endian)),
		Dst:      dst,
		Constant: imm,
	}
}

// Op returns the OpCode for an ALU operation with a given source.
func (op ALUOp) Op(source Source) OpCode {
	return OpCode(ALU64Class).SetALUOp(op).SetSource(source)
}

// Reg emits `dst (op) src`.
func (op ALUOp) Reg(dst, src Register) Instruction {
	return Instruction{
		OpCode: op.Op(RegSource),
		Dst:    dst,
		Src:    src,
	}
}

// Imm emits `dst (op) value`.
func (op ALUOp) Imm(dst Register, value int32) Instruction {
	return Instruction{
		OpCode:   op.Op(ImmSource),
		Dst:      dst,
		Constant: int64(value),
	}
}

// Op32 returns the OpCode for a 32-bit ALU operation with a given source.
func (op ALUOp) Op32(source Source) OpCode {
	return OpCode(ALUClass).SetALUOp(op).SetSource(source)
}

// Reg32 emits `dst (op) src`, zeroing the upper 32 bit of dst.
func (op ALUOp) Reg32(dst, src Register) Instruction {
	return Instruction{
		OpCode: op.Op32(RegSource),
		Dst:    dst,
		Src:    src,
	}
}

// Imm32 emits `dst (op) value`, zeroing the upper 32 bit of dst.
func (op ALUOp) Imm32(dst Register, value int32) Instruction {
	return Instruction{
		OpCode:   op.Op32(ImmSource),
		Dst:      dst,
		Constant: int64(value),
	}
}
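The deleted alu.go builds an instruction's opcode byte by OR-ing together a class, a source flag, and an operation, exactly the bit layout the comments above draw (`|op|S|cls|`). A standalone sketch of that encoding follows; the numeric constants come from the BPF instruction set, while the identifier names here are mine, not the package's:

```go
package main

import "fmt"

// Fields of a BPF opcode byte: the low 3 bits select the instruction
// class, bit 3 selects the operand source, the high 4 bits the operation.
const (
	classALU   = 0x04 // 32-bit ALU class
	classALU64 = 0x07 // 64-bit ALU class

	srcImm = 0x00 // operand is an immediate (BPF_K)
	srcReg = 0x08 // operand is a register (BPF_X)

	opAdd = 0x00
	opSub = 0x10
	opMov = 0xb0
)

// encode combines the three fields into one opcode byte, mirroring what
// OpCode.SetALUOp and OpCode.SetSource do in the deleted file.
func encode(class, op, src byte) byte {
	return class | op | src
}

func main() {
	// BPF_ALU64 | BPF_ADD | BPF_X == 0x0f, i.e. "dst += src" on 64 bits.
	fmt.Printf("0x%02x\n", encode(classALU64, opAdd, srcReg)) // prints "0x0f"
}
```

This is why `sourceMask` is `0x08` and `aluMask` is `0xf0` above: each getter masks out the other fields of the same byte.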
vendor/github.com/cilium/ebpf/asm/alu_string.go (107 lines deleted; generated, vendored)
@@ -1,107 +0,0 @@
// Code generated by "stringer -output alu_string.go -type=Source,Endianness,ALUOp"; DO NOT EDIT.

package asm

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidSource-255]
	_ = x[ImmSource-0]
	_ = x[RegSource-8]
}

const (
	_Source_name_0 = "ImmSource"
	_Source_name_1 = "RegSource"
	_Source_name_2 = "InvalidSource"
)

func (i Source) String() string {
	switch {
	case i == 0:
		return _Source_name_0
	case i == 8:
		return _Source_name_1
	case i == 255:
		return _Source_name_2
	default:
		return "Source(" + strconv.FormatInt(int64(i), 10) + ")"
	}
}
func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidEndian-255]
	_ = x[LE-0]
	_ = x[BE-8]
}

const (
	_Endianness_name_0 = "LE"
	_Endianness_name_1 = "BE"
	_Endianness_name_2 = "InvalidEndian"
)

func (i Endianness) String() string {
	switch {
	case i == 0:
		return _Endianness_name_0
	case i == 8:
		return _Endianness_name_1
	case i == 255:
		return _Endianness_name_2
	default:
		return "Endianness(" + strconv.FormatInt(int64(i), 10) + ")"
	}
}
func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidALUOp-255]
	_ = x[Add-0]
	_ = x[Sub-16]
	_ = x[Mul-32]
	_ = x[Div-48]
	_ = x[Or-64]
	_ = x[And-80]
	_ = x[LSh-96]
	_ = x[RSh-112]
	_ = x[Neg-128]
	_ = x[Mod-144]
	_ = x[Xor-160]
	_ = x[Mov-176]
	_ = x[ArSh-192]
	_ = x[Swap-208]
}

const _ALUOp_name = "AddSubMulDivOrAndLShRShNegModXorMovArShSwapInvalidALUOp"

var _ALUOp_map = map[ALUOp]string{
	0:   _ALUOp_name[0:3],
	16:  _ALUOp_name[3:6],
	32:  _ALUOp_name[6:9],
	48:  _ALUOp_name[9:12],
	64:  _ALUOp_name[12:14],
	80:  _ALUOp_name[14:17],
	96:  _ALUOp_name[17:20],
	112: _ALUOp_name[20:23],
	128: _ALUOp_name[23:26],
	144: _ALUOp_name[26:29],
	160: _ALUOp_name[29:32],
	176: _ALUOp_name[32:35],
	192: _ALUOp_name[35:39],
	208: _ALUOp_name[39:43],
	255: _ALUOp_name[43:55],
}

func (i ALUOp) String() string {
	if str, ok := _ALUOp_map[i]; ok {
		return str
	}
	return "ALUOp(" + strconv.FormatInt(int64(i), 10) + ")"
}
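The alu_string.go file above is `stringer` output: all constant names are concatenated into one backing string (`_ALUOp_name`) and each value maps to a slice of it, avoiding one allocation per name. A minimal hand-written sketch of the same trick, using toy values rather than the real tables:

```go
package main

import (
	"fmt"
	"strconv"
)

type ALUOp uint8

// One backing string holds every name; the map stores index ranges into it:
// Add = [0:3], Sub = [3:6], Mul = [6:9].
const _name = "AddSubMul"

var _map = map[ALUOp]string{
	0x00: _name[0:3],
	0x10: _name[3:6],
	0x20: _name[6:9],
}

func (op ALUOp) String() string {
	if s, ok := _map[op]; ok {
		return s
	}
	// Unknown values fall back to a numeric rendering, like stringer's
	// default branch.
	return "ALUOp(" + strconv.FormatInt(int64(op), 10) + ")"
}

func main() {
	fmt.Println(ALUOp(0x10)) // prints "Sub"
	fmt.Println(ALUOp(0x99)) // prints "ALUOp(153)"
}
```

The index-check function `func _()` in the generated file exists only to break the build if the constant values drift from the generated table, forcing a re-run of `stringer`.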
vendor/github.com/cilium/ebpf/asm/doc.go (2 lines deleted; generated, vendored)
@@ -1,2 +0,0 @@
// Package asm is an assembler for eBPF bytecode.
package asm
250
vendor/github.com/cilium/ebpf/asm/func.go
generated
vendored
@@ -1,250 +0,0 @@
package asm

//go:generate stringer -output func_string.go -type=BuiltinFunc

// BuiltinFunc is a built-in eBPF function.
type BuiltinFunc int32

func (_ BuiltinFunc) Max() BuiltinFunc {
	return maxBuiltinFunc - 1
}

// eBPF built-in functions
//
// You can regenerate this list using the following gawk script:
//
//	/FN\(.+\),/ {
//	    match($1, /\(([a-z_0-9]+),/, r)
//	    split(r[1], p, "_")
//	    printf "Fn"
//	    for (i in p) {
//	        printf "%s%s", toupper(substr(p[i], 1, 1)), substr(p[i], 2)
//	    }
//	    print ""
//	}
//
// The script expects include/uapi/linux/bpf.h as it's input.
const (
	FnUnspec BuiltinFunc = iota
	FnMapLookupElem
	FnMapUpdateElem
	FnMapDeleteElem
	FnProbeRead
	FnKtimeGetNs
	FnTracePrintk
	FnGetPrandomU32
	FnGetSmpProcessorId
	FnSkbStoreBytes
	FnL3CsumReplace
	FnL4CsumReplace
	FnTailCall
	FnCloneRedirect
	FnGetCurrentPidTgid
	FnGetCurrentUidGid
	FnGetCurrentComm
	FnGetCgroupClassid
	FnSkbVlanPush
	FnSkbVlanPop
	FnSkbGetTunnelKey
	FnSkbSetTunnelKey
	FnPerfEventRead
	FnRedirect
	FnGetRouteRealm
	FnPerfEventOutput
	FnSkbLoadBytes
	FnGetStackid
	FnCsumDiff
	FnSkbGetTunnelOpt
	FnSkbSetTunnelOpt
	FnSkbChangeProto
	FnSkbChangeType
	FnSkbUnderCgroup
	FnGetHashRecalc
	FnGetCurrentTask
	FnProbeWriteUser
	FnCurrentTaskUnderCgroup
	FnSkbChangeTail
	FnSkbPullData
	FnCsumUpdate
	FnSetHashInvalid
	FnGetNumaNodeId
	FnSkbChangeHead
	FnXdpAdjustHead
	FnProbeReadStr
	FnGetSocketCookie
	FnGetSocketUid
	FnSetHash
	FnSetsockopt
	FnSkbAdjustRoom
	FnRedirectMap
	FnSkRedirectMap
	FnSockMapUpdate
	FnXdpAdjustMeta
	FnPerfEventReadValue
	FnPerfProgReadValue
	FnGetsockopt
	FnOverrideReturn
	FnSockOpsCbFlagsSet
	FnMsgRedirectMap
	FnMsgApplyBytes
	FnMsgCorkBytes
	FnMsgPullData
	FnBind
	FnXdpAdjustTail
	FnSkbGetXfrmState
	FnGetStack
	FnSkbLoadBytesRelative
	FnFibLookup
	FnSockHashUpdate
	FnMsgRedirectHash
	FnSkRedirectHash
	FnLwtPushEncap
	FnLwtSeg6StoreBytes
	FnLwtSeg6AdjustSrh
	FnLwtSeg6Action
	FnRcRepeat
	FnRcKeydown
	FnSkbCgroupId
	FnGetCurrentCgroupId
	FnGetLocalStorage
	FnSkSelectReuseport
	FnSkbAncestorCgroupId
	FnSkLookupTcp
	FnSkLookupUdp
	FnSkRelease
	FnMapPushElem
	FnMapPopElem
	FnMapPeekElem
	FnMsgPushData
	FnMsgPopData
	FnRcPointerRel
	FnSpinLock
	FnSpinUnlock
	FnSkFullsock
	FnTcpSock
	FnSkbEcnSetCe
	FnGetListenerSock
	FnSkcLookupTcp
	FnTcpCheckSyncookie
	FnSysctlGetName
	FnSysctlGetCurrentValue
	FnSysctlGetNewValue
	FnSysctlSetNewValue
	FnStrtol
	FnStrtoul
	FnSkStorageGet
	FnSkStorageDelete
	FnSendSignal
	FnTcpGenSyncookie
	FnSkbOutput
	FnProbeReadUser
	FnProbeReadKernel
	FnProbeReadUserStr
	FnProbeReadKernelStr
	FnTcpSendAck
	FnSendSignalThread
	FnJiffies64
	FnReadBranchRecords
	FnGetNsCurrentPidTgid
	FnXdpOutput
	FnGetNetnsCookie
	FnGetCurrentAncestorCgroupId
	FnSkAssign
	FnKtimeGetBootNs
	FnSeqPrintf
	FnSeqWrite
	FnSkCgroupId
	FnSkAncestorCgroupId
	FnRingbufOutput
	FnRingbufReserve
	FnRingbufSubmit
	FnRingbufDiscard
	FnRingbufQuery
	FnCsumLevel
	FnSkcToTcp6Sock
	FnSkcToTcpSock
	FnSkcToTcpTimewaitSock
	FnSkcToTcpRequestSock
	FnSkcToUdp6Sock
	FnGetTaskStack
	FnLoadHdrOpt
	FnStoreHdrOpt
	FnReserveHdrOpt
	FnInodeStorageGet
	FnInodeStorageDelete
	FnDPath
	FnCopyFromUser
	FnSnprintfBtf
	FnSeqPrintfBtf
	FnSkbCgroupClassid
	FnRedirectNeigh
	FnPerCpuPtr
	FnThisCpuPtr
	FnRedirectPeer
	FnTaskStorageGet
	FnTaskStorageDelete
	FnGetCurrentTaskBtf
	FnBprmOptsSet
	FnKtimeGetCoarseNs
	FnImaInodeHash
	FnSockFromFile
	FnCheckMtu
	FnForEachMapElem
	FnSnprintf
	FnSysBpf
	FnBtfFindByNameKind
	FnSysClose
	FnTimerInit
	FnTimerSetCallback
	FnTimerStart
	FnTimerCancel
	FnGetFuncIp
	FnGetAttachCookie
	FnTaskPtRegs
	FnGetBranchSnapshot
	FnTraceVprintk
	FnSkcToUnixSock
	FnKallsymsLookupName
	FnFindVma
	FnLoop
	FnStrncmp
	FnGetFuncArg
	FnGetFuncRet
	FnGetFuncArgCnt
	FnGetRetval
	FnSetRetval
	FnXdpGetBuffLen
	FnXdpLoadBytes
	FnXdpStoreBytes
	FnCopyFromUserTask
	FnSkbSetTstamp
	FnImaFileHash
	FnKptrXchg
	FnMapLookupPercpuElem
	FnSkcToMptcpSock
	FnDynptrFromMem
	FnRingbufReserveDynptr
	FnRingbufSubmitDynptr
	FnRingbufDiscardDynptr
	FnDynptrRead
	FnDynptrWrite
	FnDynptrData
	FnTcpRawGenSyncookieIpv4
	FnTcpRawGenSyncookieIpv6
	FnTcpRawCheckSyncookieIpv4
	FnTcpRawCheckSyncookieIpv6
	FnKtimeGetTaiNs
	FnUserRingbufDrain
	FnCgrpStorageGet
	FnCgrpStorageDelete

	maxBuiltinFunc
)

// Call emits a function call.
func (fn BuiltinFunc) Call() Instruction {
	return Instruction{
		OpCode:   OpCode(JumpClass).SetJumpOp(Call),
		Constant: int64(fn),
	}
}
235
vendor/github.com/cilium/ebpf/asm/func_string.go
generated
vendored
@@ -1,235 +0,0 @@
// Code generated by "stringer -output func_string.go -type=BuiltinFunc"; DO NOT EDIT.

package asm

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[FnUnspec-0]
	_ = x[FnMapLookupElem-1]
	_ = x[FnMapUpdateElem-2]
	_ = x[FnMapDeleteElem-3]
	_ = x[FnProbeRead-4]
	_ = x[FnKtimeGetNs-5]
	_ = x[FnTracePrintk-6]
	_ = x[FnGetPrandomU32-7]
	_ = x[FnGetSmpProcessorId-8]
	_ = x[FnSkbStoreBytes-9]
	_ = x[FnL3CsumReplace-10]
	_ = x[FnL4CsumReplace-11]
	_ = x[FnTailCall-12]
	_ = x[FnCloneRedirect-13]
	_ = x[FnGetCurrentPidTgid-14]
	_ = x[FnGetCurrentUidGid-15]
	_ = x[FnGetCurrentComm-16]
	_ = x[FnGetCgroupClassid-17]
	_ = x[FnSkbVlanPush-18]
	_ = x[FnSkbVlanPop-19]
	_ = x[FnSkbGetTunnelKey-20]
	_ = x[FnSkbSetTunnelKey-21]
	_ = x[FnPerfEventRead-22]
	_ = x[FnRedirect-23]
	_ = x[FnGetRouteRealm-24]
	_ = x[FnPerfEventOutput-25]
	_ = x[FnSkbLoadBytes-26]
	_ = x[FnGetStackid-27]
	_ = x[FnCsumDiff-28]
	_ = x[FnSkbGetTunnelOpt-29]
	_ = x[FnSkbSetTunnelOpt-30]
	_ = x[FnSkbChangeProto-31]
	_ = x[FnSkbChangeType-32]
	_ = x[FnSkbUnderCgroup-33]
	_ = x[FnGetHashRecalc-34]
	_ = x[FnGetCurrentTask-35]
	_ = x[FnProbeWriteUser-36]
	_ = x[FnCurrentTaskUnderCgroup-37]
	_ = x[FnSkbChangeTail-38]
	_ = x[FnSkbPullData-39]
	_ = x[FnCsumUpdate-40]
	_ = x[FnSetHashInvalid-41]
	_ = x[FnGetNumaNodeId-42]
	_ = x[FnSkbChangeHead-43]
	_ = x[FnXdpAdjustHead-44]
	_ = x[FnProbeReadStr-45]
	_ = x[FnGetSocketCookie-46]
	_ = x[FnGetSocketUid-47]
	_ = x[FnSetHash-48]
	_ = x[FnSetsockopt-49]
	_ = x[FnSkbAdjustRoom-50]
	_ = x[FnRedirectMap-51]
	_ = x[FnSkRedirectMap-52]
	_ = x[FnSockMapUpdate-53]
	_ = x[FnXdpAdjustMeta-54]
	_ = x[FnPerfEventReadValue-55]
	_ = x[FnPerfProgReadValue-56]
	_ = x[FnGetsockopt-57]
	_ = x[FnOverrideReturn-58]
	_ = x[FnSockOpsCbFlagsSet-59]
	_ = x[FnMsgRedirectMap-60]
	_ = x[FnMsgApplyBytes-61]
	_ = x[FnMsgCorkBytes-62]
	_ = x[FnMsgPullData-63]
	_ = x[FnBind-64]
	_ = x[FnXdpAdjustTail-65]
	_ = x[FnSkbGetXfrmState-66]
	_ = x[FnGetStack-67]
	_ = x[FnSkbLoadBytesRelative-68]
	_ = x[FnFibLookup-69]
	_ = x[FnSockHashUpdate-70]
	_ = x[FnMsgRedirectHash-71]
	_ = x[FnSkRedirectHash-72]
	_ = x[FnLwtPushEncap-73]
	_ = x[FnLwtSeg6StoreBytes-74]
	_ = x[FnLwtSeg6AdjustSrh-75]
	_ = x[FnLwtSeg6Action-76]
	_ = x[FnRcRepeat-77]
	_ = x[FnRcKeydown-78]
	_ = x[FnSkbCgroupId-79]
	_ = x[FnGetCurrentCgroupId-80]
	_ = x[FnGetLocalStorage-81]
	_ = x[FnSkSelectReuseport-82]
	_ = x[FnSkbAncestorCgroupId-83]
	_ = x[FnSkLookupTcp-84]
	_ = x[FnSkLookupUdp-85]
	_ = x[FnSkRelease-86]
	_ = x[FnMapPushElem-87]
	_ = x[FnMapPopElem-88]
	_ = x[FnMapPeekElem-89]
	_ = x[FnMsgPushData-90]
	_ = x[FnMsgPopData-91]
	_ = x[FnRcPointerRel-92]
	_ = x[FnSpinLock-93]
	_ = x[FnSpinUnlock-94]
	_ = x[FnSkFullsock-95]
	_ = x[FnTcpSock-96]
	_ = x[FnSkbEcnSetCe-97]
	_ = x[FnGetListenerSock-98]
	_ = x[FnSkcLookupTcp-99]
	_ = x[FnTcpCheckSyncookie-100]
	_ = x[FnSysctlGetName-101]
	_ = x[FnSysctlGetCurrentValue-102]
	_ = x[FnSysctlGetNewValue-103]
	_ = x[FnSysctlSetNewValue-104]
	_ = x[FnStrtol-105]
	_ = x[FnStrtoul-106]
	_ = x[FnSkStorageGet-107]
	_ = x[FnSkStorageDelete-108]
	_ = x[FnSendSignal-109]
	_ = x[FnTcpGenSyncookie-110]
	_ = x[FnSkbOutput-111]
	_ = x[FnProbeReadUser-112]
	_ = x[FnProbeReadKernel-113]
	_ = x[FnProbeReadUserStr-114]
	_ = x[FnProbeReadKernelStr-115]
	_ = x[FnTcpSendAck-116]
	_ = x[FnSendSignalThread-117]
	_ = x[FnJiffies64-118]
	_ = x[FnReadBranchRecords-119]
	_ = x[FnGetNsCurrentPidTgid-120]
	_ = x[FnXdpOutput-121]
	_ = x[FnGetNetnsCookie-122]
	_ = x[FnGetCurrentAncestorCgroupId-123]
	_ = x[FnSkAssign-124]
	_ = x[FnKtimeGetBootNs-125]
	_ = x[FnSeqPrintf-126]
	_ = x[FnSeqWrite-127]
	_ = x[FnSkCgroupId-128]
	_ = x[FnSkAncestorCgroupId-129]
	_ = x[FnRingbufOutput-130]
	_ = x[FnRingbufReserve-131]
	_ = x[FnRingbufSubmit-132]
	_ = x[FnRingbufDiscard-133]
	_ = x[FnRingbufQuery-134]
	_ = x[FnCsumLevel-135]
	_ = x[FnSkcToTcp6Sock-136]
	_ = x[FnSkcToTcpSock-137]
	_ = x[FnSkcToTcpTimewaitSock-138]
	_ = x[FnSkcToTcpRequestSock-139]
	_ = x[FnSkcToUdp6Sock-140]
	_ = x[FnGetTaskStack-141]
	_ = x[FnLoadHdrOpt-142]
	_ = x[FnStoreHdrOpt-143]
	_ = x[FnReserveHdrOpt-144]
	_ = x[FnInodeStorageGet-145]
	_ = x[FnInodeStorageDelete-146]
	_ = x[FnDPath-147]
	_ = x[FnCopyFromUser-148]
	_ = x[FnSnprintfBtf-149]
	_ = x[FnSeqPrintfBtf-150]
	_ = x[FnSkbCgroupClassid-151]
	_ = x[FnRedirectNeigh-152]
	_ = x[FnPerCpuPtr-153]
	_ = x[FnThisCpuPtr-154]
	_ = x[FnRedirectPeer-155]
	_ = x[FnTaskStorageGet-156]
	_ = x[FnTaskStorageDelete-157]
	_ = x[FnGetCurrentTaskBtf-158]
	_ = x[FnBprmOptsSet-159]
	_ = x[FnKtimeGetCoarseNs-160]
	_ = x[FnImaInodeHash-161]
	_ = x[FnSockFromFile-162]
	_ = x[FnCheckMtu-163]
	_ = x[FnForEachMapElem-164]
	_ = x[FnSnprintf-165]
	_ = x[FnSysBpf-166]
	_ = x[FnBtfFindByNameKind-167]
	_ = x[FnSysClose-168]
	_ = x[FnTimerInit-169]
	_ = x[FnTimerSetCallback-170]
	_ = x[FnTimerStart-171]
	_ = x[FnTimerCancel-172]
	_ = x[FnGetFuncIp-173]
	_ = x[FnGetAttachCookie-174]
	_ = x[FnTaskPtRegs-175]
	_ = x[FnGetBranchSnapshot-176]
	_ = x[FnTraceVprintk-177]
	_ = x[FnSkcToUnixSock-178]
	_ = x[FnKallsymsLookupName-179]
	_ = x[FnFindVma-180]
	_ = x[FnLoop-181]
	_ = x[FnStrncmp-182]
	_ = x[FnGetFuncArg-183]
	_ = x[FnGetFuncRet-184]
	_ = x[FnGetFuncArgCnt-185]
	_ = x[FnGetRetval-186]
	_ = x[FnSetRetval-187]
	_ = x[FnXdpGetBuffLen-188]
	_ = x[FnXdpLoadBytes-189]
	_ = x[FnXdpStoreBytes-190]
	_ = x[FnCopyFromUserTask-191]
	_ = x[FnSkbSetTstamp-192]
	_ = x[FnImaFileHash-193]
	_ = x[FnKptrXchg-194]
	_ = x[FnMapLookupPercpuElem-195]
	_ = x[FnSkcToMptcpSock-196]
	_ = x[FnDynptrFromMem-197]
	_ = x[FnRingbufReserveDynptr-198]
	_ = x[FnRingbufSubmitDynptr-199]
	_ = x[FnRingbufDiscardDynptr-200]
	_ = x[FnDynptrRead-201]
	_ = x[FnDynptrWrite-202]
	_ = x[FnDynptrData-203]
	_ = x[FnTcpRawGenSyncookieIpv4-204]
	_ = x[FnTcpRawGenSyncookieIpv6-205]
	_ = x[FnTcpRawCheckSyncookieIpv4-206]
	_ = x[FnTcpRawCheckSyncookieIpv6-207]
	_ = x[FnKtimeGetTaiNs-208]
	_ = x[FnUserRingbufDrain-209]
	_ = x[FnCgrpStorageGet-210]
	_ = x[FnCgrpStorageDelete-211]
	_ = x[maxBuiltinFunc-212]
}

const _BuiltinFunc_name = "FnUnspecFnMapLookupElemFnMapUpdateElemFnMapDeleteElemFnProbeReadFnKtimeGetNsFnTracePrintkFnGetPrandomU32FnGetSmpProcessorIdFnSkbStoreBytesFnL3CsumReplaceFnL4CsumReplaceFnTailCallFnCloneRedirectFnGetCurrentPidTgidFnGetCurrentUidGidFnGetCurrentCommFnGetCgroupClassidFnSkbVlanPushFnSkbVlanPopFnSkbGetTunnelKeyFnSkbSetTunnelKeyFnPerfEventReadFnRedirectFnGetRouteRealmFnPerfEventOutputFnSkbLoadBytesFnGetStackidFnCsumDiffFnSkbGetTunnelOptFnSkbSetTunnelOptFnSkbChangeProtoFnSkbChangeTypeFnSkbUnderCgroupFnGetHashRecalcFnGetCurrentTaskFnProbeWriteUserFnCurrentTaskUnderCgroupFnSkbChangeTailFnSkbPullDataFnCsumUpdateFnSetHashInvalidFnGetNumaNodeIdFnSkbChangeHeadFnXdpAdjustHeadFnProbeReadStrFnGetSocketCookieFnGetSocketUidFnSetHashFnSetsockoptFnSkbAdjustRoomFnRedirectMapFnSkRedirectMapFnSockMapUpdateFnXdpAdjustMetaFnPerfEventReadValueFnPerfProgReadValueFnGetsockoptFnOverrideReturnFnSockOpsCbFlagsSetFnMsgRedirectMapFnMsgApplyBytesFnMsgCorkBytesFnMsgPullDataFnBindFnXdpAdjustTailFnSkbGetXfrmStateFnGetStackFnSkbLoadBytesRelativeFnFibLookupFnSockHashUpdateFnMsgRedirectHashFnSkRedirectHashFnLwtPushEncapFnLwtSeg6StoreBytesFnLwtSeg6AdjustSrhFnLwtSeg6ActionFnRcRepeatFnRcKeydownFnSkbCgroupIdFnGetCurrentCgroupIdFnGetLocalStorageFnSkSelectReuseportFnSkbAncestorCgroupIdFnSkLookupTcpFnSkLookupUdpFnSkReleaseFnMapPushElemFnMapPopElemFnMapPeekElemFnMsgPushDataFnMsgPopDataFnRcPointerRelFnSpinLockFnSpinUnlockFnSkFullsockFnTcpSockFnSkbEcnSetCeFnGetListenerSockFnSkcLookupTcpFnTcpCheckSyncookieFnSysctlGetNameFnSysctlGetCurrentValueFnSysctlGetNewValueFnSysctlSetNewValueFnStrtolFnStrtoulFnSkStorageGetFnSkStorageDeleteFnSendSignalFnTcpGenSyncookieFnSkbOutputFnProbeReadUserFnProbeReadKernelFnProbeReadUserStrFnProbeReadKernelStrFnTcpSendAckFnSendSignalThreadFnJiffies64FnReadBranchRecordsFnGetNsCurrentPidTgidFnXdpOutputFnGetNetnsCookieFnGetCurrentAncestorCgroupIdFnSkAssignFnKtimeGetBootNsFnSeqPrintfFnSeqWriteFnSkCgroupIdFnSkAncestorCgroupIdFnRingbufOutputFnRingbufReserveFnRingbufSubmitFnRingbufDiscardFnRingbufQueryFnCsumLevelFnSkcToTcp6SockFnSkcToTcpSockFnSkcToTcpTimewaitSockFnSkcToTcpRequestSockFnSkcToUdp6SockFnGetTaskStackFnLoadHdrOptFnStoreHdrOptFnReserveHdrOptFnInodeStorageGetFnInodeStorageDeleteFnDPathFnCopyFromUserFnSnprintfBtfFnSeqPrintfBtfFnSkbCgroupClassidFnRedirectNeighFnPerCpuPtrFnThisCpuPtrFnRedirectPeerFnTaskStorageGetFnTaskStorageDeleteFnGetCurrentTaskBtfFnBprmOptsSetFnKtimeGetCoarseNsFnImaInodeHashFnSockFromFileFnCheckMtuFnForEachMapElemFnSnprintfFnSysBpfFnBtfFindByNameKindFnSysCloseFnTimerInitFnTimerSetCallbackFnTimerStartFnTimerCancelFnGetFuncIpFnGetAttachCookieFnTaskPtRegsFnGetBranchSnapshotFnTraceVprintkFnSkcToUnixSockFnKallsymsLookupNameFnFindVmaFnLoopFnStrncmpFnGetFuncArgFnGetFuncRetFnGetFuncArgCntFnGetRetvalFnSetRetvalFnXdpGetBuffLenFnXdpLoadBytesFnXdpStoreBytesFnCopyFromUserTaskFnSkbSetTstampFnImaFileHashFnKptrXchgFnMapLookupPercpuElemFnSkcToMptcpSockFnDynptrFromMemFnRingbufReserveDynptrFnRingbufSubmitDynptrFnRingbufDiscardDynptrFnDynptrReadFnDynptrWriteFnDynptrDataFnTcpRawGenSyncookieIpv4FnTcpRawGenSyncookieIpv6FnTcpRawCheckSyncookieIpv4FnTcpRawCheckSyncookieIpv6FnKtimeGetTaiNsFnUserRingbufDrainFnCgrpStorageGetFnCgrpStorageDeletemaxBuiltinFunc"

var _BuiltinFunc_index = [...]uint16{0, 8, 23, 38, 53, 64, 76, 89, 104, 123, 138, 153, 168, 178, 193, 212, 230, 246, 264, 277, 289, 306, 323, 338, 348, 363, 380, 394, 406, 416, 433, 450, 466, 481, 497, 512, 528, 544, 568, 583, 596, 608, 624, 639, 654, 669, 683, 700, 714, 723, 735, 750, 763, 778, 793, 808, 828, 847, 859, 875, 894, 910, 925, 939, 952, 958, 973, 990, 1000, 1022, 1033, 1049, 1066, 1082, 1096, 1115, 1133, 1148, 1158, 1169, 1182, 1202, 1219, 1238, 1259, 1272, 1285, 1296, 1309, 1321, 1334, 1347, 1359, 1373, 1383, 1395, 1407, 1416, 1429, 1446, 1460, 1479, 1494, 1517, 1536, 1555, 1563, 1572, 1586, 1603, 1615, 1632, 1643, 1658, 1675, 1693, 1713, 1725, 1743, 1754, 1773, 1794, 1805, 1821, 1849, 1859, 1875, 1886, 1896, 1908, 1928, 1943, 1959, 1974, 1990, 2004, 2015, 2030, 2044, 2066, 2087, 2102, 2116, 2128, 2141, 2156, 2173, 2193, 2200, 2214, 2227, 2241, 2259, 2274, 2285, 2297, 2311, 2327, 2346, 2365, 2378, 2396, 2410, 2424, 2434, 2450, 2460, 2468, 2487, 2497, 2508, 2526, 2538, 2551, 2562, 2579, 2591, 2610, 2624, 2639, 2659, 2668, 2674, 2683, 2695, 2707, 2722, 2733, 2744, 2759, 2773, 2788, 2806, 2820, 2833, 2843, 2864, 2880, 2895, 2917, 2938, 2960, 2972, 2985, 2997, 3021, 3045, 3071, 3097, 3112, 3130, 3146, 3165, 3179}

func (i BuiltinFunc) String() string {
	if i < 0 || i >= BuiltinFunc(len(_BuiltinFunc_index)-1) {
		return "BuiltinFunc(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _BuiltinFunc_name[_BuiltinFunc_index[i]:_BuiltinFunc_index[i+1]]
}
877
vendor/github.com/cilium/ebpf/asm/instruction.go
generated
vendored
@@ -1,877 +0,0 @@
package asm

import (
	"crypto/sha1"
	"encoding/binary"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"math"
	"sort"
	"strings"

	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

// InstructionSize is the size of a BPF instruction in bytes
const InstructionSize = 8

// RawInstructionOffset is an offset in units of raw BPF instructions.
type RawInstructionOffset uint64

var ErrUnreferencedSymbol = errors.New("unreferenced symbol")
var ErrUnsatisfiedMapReference = errors.New("unsatisfied map reference")
var ErrUnsatisfiedProgramReference = errors.New("unsatisfied program reference")

// Bytes returns the offset of an instruction in bytes.
func (rio RawInstructionOffset) Bytes() uint64 {
	return uint64(rio) * InstructionSize
}

// Instruction is a single eBPF instruction.
type Instruction struct {
	OpCode   OpCode
	Dst      Register
	Src      Register
	Offset   int16
	Constant int64

	// Metadata contains optional metadata about this instruction.
	Metadata Metadata
}

// Unmarshal decodes a BPF instruction.
func (ins *Instruction) Unmarshal(r io.Reader, bo binary.ByteOrder) (uint64, error) {
	data := make([]byte, InstructionSize)
	if _, err := io.ReadFull(r, data); err != nil {
		return 0, err
	}

	ins.OpCode = OpCode(data[0])

	regs := data[1]
	switch bo {
	case binary.LittleEndian:
		ins.Dst, ins.Src = Register(regs&0xF), Register(regs>>4)
	case binary.BigEndian:
		ins.Dst, ins.Src = Register(regs>>4), Register(regs&0xf)
	}

	ins.Offset = int16(bo.Uint16(data[2:4]))
	// Convert to int32 before widening to int64
	// to ensure the signed bit is carried over.
	ins.Constant = int64(int32(bo.Uint32(data[4:8])))

	if !ins.OpCode.IsDWordLoad() {
		return InstructionSize, nil
	}

	// Pull another instruction from the stream to retrieve the second
	// half of the 64-bit immediate value.
	if _, err := io.ReadFull(r, data); err != nil {
		// No Wrap, to avoid io.EOF clash
		return 0, errors.New("64bit immediate is missing second half")
	}

	// Require that all fields other than the value are zero.
	if bo.Uint32(data[0:4]) != 0 {
		return 0, errors.New("64bit immediate has non-zero fields")
	}

	cons1 := uint32(ins.Constant)
	cons2 := int32(bo.Uint32(data[4:8]))
	ins.Constant = int64(cons2)<<32 | int64(cons1)

	return 2 * InstructionSize, nil
}

// Marshal encodes a BPF instruction.
func (ins Instruction) Marshal(w io.Writer, bo binary.ByteOrder) (uint64, error) {
	if ins.OpCode == InvalidOpCode {
		return 0, errors.New("invalid opcode")
	}

	isDWordLoad := ins.OpCode.IsDWordLoad()

	cons := int32(ins.Constant)
	if isDWordLoad {
		// Encode least significant 32bit first for 64bit operations.
		cons = int32(uint32(ins.Constant))
	}

	regs, err := newBPFRegisters(ins.Dst, ins.Src, bo)
	if err != nil {
		return 0, fmt.Errorf("can't marshal registers: %s", err)
	}

	data := make([]byte, InstructionSize)
	data[0] = byte(ins.OpCode)
	data[1] = byte(regs)
	bo.PutUint16(data[2:4], uint16(ins.Offset))
	bo.PutUint32(data[4:8], uint32(cons))
	if _, err := w.Write(data); err != nil {
		return 0, err
	}

	if !isDWordLoad {
		return InstructionSize, nil
	}

	// The first half of the second part of a double-wide instruction
	// must be zero. The second half carries the value.
	bo.PutUint32(data[0:4], 0)
	bo.PutUint32(data[4:8], uint32(ins.Constant>>32))
	if _, err := w.Write(data); err != nil {
		return 0, err
	}

	return 2 * InstructionSize, nil
}

// AssociateMap associates a Map with this Instruction.
//
// Implicitly clears the Instruction's Reference field.
//
// Returns an error if the Instruction is not a map load.
func (ins *Instruction) AssociateMap(m FDer) error {
	if !ins.IsLoadFromMap() {
		return errors.New("not a load from a map")
	}

	ins.Metadata.Set(referenceMeta{}, nil)
	ins.Metadata.Set(mapMeta{}, m)

	return nil
}

// RewriteMapPtr changes an instruction to use a new map fd.
//
// Returns an error if the instruction doesn't load a map.
//
// Deprecated: use AssociateMap instead. If you cannot provide a Map,
// wrap an fd in a type implementing FDer.
func (ins *Instruction) RewriteMapPtr(fd int) error {
	if !ins.IsLoadFromMap() {
		return errors.New("not a load from a map")
	}

	ins.encodeMapFD(fd)

	return nil
}

func (ins *Instruction) encodeMapFD(fd int) {
	// Preserve the offset value for direct map loads.
	offset := uint64(ins.Constant) & (math.MaxUint32 << 32)
	rawFd := uint64(uint32(fd))
	ins.Constant = int64(offset | rawFd)
}

// MapPtr returns the map fd for this instruction.
//
// The result is undefined if the instruction is not a load from a map,
// see IsLoadFromMap.
//
// Deprecated: use Map() instead.
func (ins *Instruction) MapPtr() int {
	// If there is a map associated with the instruction, return its FD.
	if fd := ins.Metadata.Get(mapMeta{}); fd != nil {
		return fd.(FDer).FD()
	}

	// Fall back to the fd stored in the Constant field
	return ins.mapFd()
}

// mapFd returns the map file descriptor stored in the 32 least significant
// bits of ins' Constant field.
func (ins *Instruction) mapFd() int {
	return int(int32(ins.Constant))
}

// RewriteMapOffset changes the offset of a direct load from a map.
//
// Returns an error if the instruction is not a direct load.
func (ins *Instruction) RewriteMapOffset(offset uint32) error {
	if !ins.OpCode.IsDWordLoad() {
		return fmt.Errorf("%s is not a 64 bit load", ins.OpCode)
	}

	if ins.Src != PseudoMapValue {
		return errors.New("not a direct load from a map")
	}

	fd := uint64(ins.Constant) & math.MaxUint32
	ins.Constant = int64(uint64(offset)<<32 | fd)
	return nil
}

func (ins *Instruction) mapOffset() uint32 {
	return uint32(uint64(ins.Constant) >> 32)
}

// IsLoadFromMap returns true if the instruction loads from a map.
//
// This covers both loading the map pointer and direct map value loads.
func (ins *Instruction) IsLoadFromMap() bool {
	return ins.OpCode == LoadImmOp(DWord) && (ins.Src == PseudoMapFD || ins.Src == PseudoMapValue)
}

// IsFunctionCall returns true if the instruction calls another BPF function.
//
// This is not the same thing as a BPF helper call.
func (ins *Instruction) IsFunctionCall() bool {
	return ins.OpCode.JumpOp() == Call && ins.Src == PseudoCall
}

// IsKfuncCall returns true if the instruction calls a kfunc.
//
// This is not the same thing as a BPF helper call.
func (ins *Instruction) IsKfuncCall() bool {
	return ins.OpCode.JumpOp() == Call && ins.Src == PseudoKfuncCall
}

// IsLoadOfFunctionPointer returns true if the instruction loads a function pointer.
func (ins *Instruction) IsLoadOfFunctionPointer() bool {
	return ins.OpCode.IsDWordLoad() && ins.Src == PseudoFunc
}

// IsFunctionReference returns true if the instruction references another BPF
// function, either by invoking a Call jump operation or by loading a function
// pointer.
func (ins *Instruction) IsFunctionReference() bool {
	return ins.IsFunctionCall() || ins.IsLoadOfFunctionPointer()
}

// IsBuiltinCall returns true if the instruction is a built-in call, i.e. BPF helper call.
func (ins *Instruction) IsBuiltinCall() bool {
	return ins.OpCode.JumpOp() == Call && ins.Src == R0 && ins.Dst == R0
}

// IsConstantLoad returns true if the instruction loads a constant of the
// given size.
func (ins *Instruction) IsConstantLoad(size Size) bool {
	return ins.OpCode == LoadImmOp(size) && ins.Src == R0 && ins.Offset == 0
}

// Format implements fmt.Formatter.
func (ins Instruction) Format(f fmt.State, c rune) {
	if c != 'v' {
		fmt.Fprintf(f, "{UNRECOGNIZED: %c}", c)
		return
	}

	op := ins.OpCode

	if op == InvalidOpCode {
		fmt.Fprint(f, "INVALID")
		return
	}

	// Omit trailing space for Exit
	if op.JumpOp() == Exit {
		fmt.Fprint(f, op)
		return
	}

	if ins.IsLoadFromMap() {
		fd := ins.mapFd()
		m := ins.Map()
		switch ins.Src {
		case PseudoMapFD:
			if m != nil {
				fmt.Fprintf(f, "LoadMapPtr dst: %s map: %s", ins.Dst, m)
			} else {
				fmt.Fprintf(f, "LoadMapPtr dst: %s fd: %d", ins.Dst, fd)
			}

		case PseudoMapValue:
			if m != nil {
				fmt.Fprintf(f, "LoadMapValue dst: %s, map: %s off: %d", ins.Dst, m, ins.mapOffset())
			} else {
				fmt.Fprintf(f, "LoadMapValue dst: %s, fd: %d off: %d", ins.Dst, fd, ins.mapOffset())
			}
		}

		goto ref
	}

	fmt.Fprintf(f, "%v ", op)
	switch cls := op.Class(); {
	case cls.isLoadOrStore():
		switch op.Mode() {
		case ImmMode:
			fmt.Fprintf(f, "dst: %s imm: %d", ins.Dst, ins.Constant)
		case AbsMode:
			fmt.Fprintf(f, "imm: %d", ins.Constant)
		case IndMode:
			fmt.Fprintf(f, "dst: %s src: %s imm: %d", ins.Dst, ins.Src, ins.Constant)
		case MemMode:
			fmt.Fprintf(f, "dst: %s src: %s off: %d imm: %d", ins.Dst, ins.Src, ins.Offset, ins.Constant)
		case XAddMode:
			fmt.Fprintf(f, "dst: %s src: %s", ins.Dst, ins.Src)
		}

	case cls.IsALU():
		fmt.Fprintf(f, "dst: %s ", ins.Dst)
		if op.ALUOp() == Swap || op.Source() == ImmSource {
			fmt.Fprintf(f, "imm: %d", ins.Constant)
		} else {
			fmt.Fprintf(f, "src: %s", ins.Src)
		}

	case cls.IsJump():
		switch jop := op.JumpOp(); jop {
		case Call:
			switch ins.Src {
			case PseudoCall:
				// bpf-to-bpf call
				fmt.Fprint(f, ins.Constant)
			case PseudoKfuncCall:
				// kfunc call
				fmt.Fprintf(f, "Kfunc(%d)", ins.Constant)
			default:
				fmt.Fprint(f, BuiltinFunc(ins.Constant))
			}

		default:
			fmt.Fprintf(f, "dst: %s off: %d ", ins.Dst, ins.Offset)
			if op.Source() == ImmSource {
				fmt.Fprintf(f, "imm: %d", ins.Constant)
			} else {
				fmt.Fprintf(f, "src: %s", ins.Src)
			}
		}
	}

ref:
	if ins.Reference() != "" {
		fmt.Fprintf(f, " <%s>", ins.Reference())
	}
}

func (ins Instruction) equal(other Instruction) bool {
	return ins.OpCode == other.OpCode &&
		ins.Dst == other.Dst &&
		ins.Src == other.Src &&
		ins.Offset == other.Offset &&
		ins.Constant == other.Constant
}

// Size returns the amount of bytes ins would occupy in binary form.
func (ins Instruction) Size() uint64 {
	return uint64(InstructionSize * ins.OpCode.rawInstructions())
}

// WithMetadata sets the given Metadata on the Instruction. e.g. to copy
// Metadata from another Instruction when replacing it.
func (ins Instruction) WithMetadata(meta Metadata) Instruction {
	ins.Metadata = meta
	return ins
}

type symbolMeta struct{}

// WithSymbol marks the Instruction as a Symbol, which other Instructions
// can point to using corresponding calls to WithReference.
func (ins Instruction) WithSymbol(name string) Instruction {
	ins.Metadata.Set(symbolMeta{}, name)
	return ins
}

// Sym creates a symbol.
//
// Deprecated: use WithSymbol instead.
func (ins Instruction) Sym(name string) Instruction {
	return ins.WithSymbol(name)
}

// Symbol returns the value ins has been marked with using WithSymbol,
// otherwise returns an empty string. A symbol is often an Instruction
// at the start of a function body.
func (ins Instruction) Symbol() string {
	sym, _ := ins.Metadata.Get(symbolMeta{}).(string)
	return sym
}

type referenceMeta struct{}

// WithReference makes ins reference another Symbol or map by name.
func (ins Instruction) WithReference(ref string) Instruction {
	ins.Metadata.Set(referenceMeta{}, ref)
	return ins
}

// Reference returns the Symbol or map name referenced by ins, if any.
func (ins Instruction) Reference() string {
	ref, _ := ins.Metadata.Get(referenceMeta{}).(string)
	return ref
}

type mapMeta struct{}

// Map returns the Map referenced by ins, if any.
// An Instruction will contain a Map if e.g. it references an existing,
// pinned map that was opened during ELF loading.
func (ins Instruction) Map() FDer {
	fd, _ := ins.Metadata.Get(mapMeta{}).(FDer)
	return fd
}

type sourceMeta struct{}

// WithSource adds source information about the Instruction.
func (ins Instruction) WithSource(src fmt.Stringer) Instruction {
	ins.Metadata.Set(sourceMeta{}, src)
	return ins
}

// Source returns source information about the Instruction. The field is
// present when the compiler emits BTF line info about the Instruction and
// usually contains the line of source code responsible for it.
func (ins Instruction) Source() fmt.Stringer {
	str, _ := ins.Metadata.Get(sourceMeta{}).(fmt.Stringer)
	return str
}

// A Comment can be passed to Instruction.WithSource to add a comment
// to an instruction.
type Comment string

func (s Comment) String() string {
	return string(s)
}

// FDer represents a resource tied to an underlying file descriptor.
// Used as a stand-in for e.g. ebpf.Map since that type cannot be
// imported here and FD() is the only method we rely on.
type FDer interface {
	FD() int
}

// Instructions is an eBPF program.
type Instructions []Instruction

// Unmarshal unmarshals an Instructions from a binary instruction stream.
// All instructions in insns are replaced by instructions decoded from r.
func (insns *Instructions) Unmarshal(r io.Reader, bo binary.ByteOrder) error {
	if len(*insns) > 0 {
		*insns = nil
	}

	var offset uint64
	for {
		var ins Instruction
		n, err := ins.Unmarshal(r, bo)
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			return fmt.Errorf("offset %d: %w", offset, err)
		}

		*insns = append(*insns, ins)
		offset += n
	}

	return nil
|
||||
}
|
||||
|
||||
// Name returns the name of the function insns belongs to, if any.
|
||||
func (insns Instructions) Name() string {
|
||||
if len(insns) == 0 {
|
||||
return ""
|
||||
}
|
||||
return insns[0].Symbol()
|
||||
}
|
||||
|
||||
func (insns Instructions) String() string {
|
||||
return fmt.Sprint(insns)
|
||||
}
|
||||
|
||||
// Size returns the amount of bytes insns would occupy in binary form.
|
||||
func (insns Instructions) Size() uint64 {
|
||||
var sum uint64
|
||||
for _, ins := range insns {
|
||||
sum += ins.Size()
|
||||
}
|
||||
return sum
|
||||
}
|
||||
|
||||
// AssociateMap updates all Instructions that Reference the given symbol
|
||||
// to point to an existing Map m instead.
|
||||
//
|
||||
// Returns ErrUnreferencedSymbol error if no references to symbol are found
|
||||
// in insns. If symbol is anything else than the symbol name of map (e.g.
|
||||
// a bpf2bpf subprogram), an error is returned.
|
||||
func (insns Instructions) AssociateMap(symbol string, m FDer) error {
|
||||
if symbol == "" {
|
||||
return errors.New("empty symbol")
|
||||
}
|
||||
|
||||
var found bool
|
||||
for i := range insns {
|
||||
ins := &insns[i]
|
||||
if ins.Reference() != symbol {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := ins.AssociateMap(m); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
found = true
|
||||
}
|
||||
|
||||
if !found {
|
||||
return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// RewriteMapPtr rewrites all loads of a specific map pointer to a new fd.
|
||||
//
|
||||
// Returns ErrUnreferencedSymbol if the symbol isn't used.
|
||||
//
|
||||
// Deprecated: use AssociateMap instead.
|
||||
func (insns Instructions) RewriteMapPtr(symbol string, fd int) error {
|
||||
if symbol == "" {
|
||||
return errors.New("empty symbol")
|
||||
}
|
||||
|
||||
var found bool
|
||||
for i := range insns {
|
||||
ins := &insns[i]
|
||||
if ins.Reference() != symbol {
|
||||
continue
|
||||
}
|
||||
|
||||
if !ins.IsLoadFromMap() {
|
||||
return errors.New("not a load from a map")
|
||||
}
|
||||
|
||||
ins.encodeMapFD(fd)
|
||||
|
||||
found = true
|
||||
}
|
||||
|
||||
if !found {
|
||||
return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// SymbolOffsets returns the set of symbols and their offset in
|
||||
// the instructions.
|
||||
func (insns Instructions) SymbolOffsets() (map[string]int, error) {
|
||||
offsets := make(map[string]int)
|
||||
|
||||
for i, ins := range insns {
|
||||
if ins.Symbol() == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
if _, ok := offsets[ins.Symbol()]; ok {
|
||||
return nil, fmt.Errorf("duplicate symbol %s", ins.Symbol())
|
||||
}
|
||||
|
||||
offsets[ins.Symbol()] = i
|
||||
}
|
||||
|
||||
return offsets, nil
|
||||
}
|
||||
|
||||
// FunctionReferences returns a set of symbol names these Instructions make
|
||||
// bpf-to-bpf calls to.
|
||||
func (insns Instructions) FunctionReferences() []string {
|
||||
calls := make(map[string]struct{})
|
||||
for _, ins := range insns {
|
||||
if ins.Constant != -1 {
|
||||
// BPF-to-BPF calls have -1 constants.
|
||||
continue
|
||||
}
|
||||
|
||||
if ins.Reference() == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
if !ins.IsFunctionReference() {
|
||||
continue
|
||||
}
|
||||
|
||||
calls[ins.Reference()] = struct{}{}
|
||||
}
|
||||
|
||||
result := make([]string, 0, len(calls))
|
||||
for call := range calls {
|
||||
result = append(result, call)
|
||||
}
|
||||
|
||||
sort.Strings(result)
|
||||
return result
|
||||
}
|
||||
|
||||
// ReferenceOffsets returns the set of references and their offset in
|
||||
// the instructions.
|
||||
func (insns Instructions) ReferenceOffsets() map[string][]int {
|
||||
offsets := make(map[string][]int)
|
||||
|
||||
for i, ins := range insns {
|
||||
if ins.Reference() == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
offsets[ins.Reference()] = append(offsets[ins.Reference()], i)
|
||||
}
|
||||
|
||||
return offsets
|
||||
}
|
||||
|
||||
// Format implements fmt.Formatter.
|
||||
//
|
||||
// You can control indentation of symbols by
|
||||
// specifying a width. Setting a precision controls the indentation of
|
||||
// instructions.
|
||||
// The default character is a tab, which can be overridden by specifying
|
||||
// the ' ' space flag.
|
||||
func (insns Instructions) Format(f fmt.State, c rune) {
|
||||
if c != 's' && c != 'v' {
|
||||
fmt.Fprintf(f, "{UNKNOWN FORMAT '%c'}", c)
|
||||
return
|
||||
}
|
||||
|
||||
// Precision is better in this case, because it allows
|
||||
// specifying 0 padding easily.
|
||||
padding, ok := f.Precision()
|
||||
if !ok {
|
||||
padding = 1
|
||||
}
|
||||
|
||||
indent := strings.Repeat("\t", padding)
|
||||
if f.Flag(' ') {
|
||||
indent = strings.Repeat(" ", padding)
|
||||
}
|
||||
|
||||
symPadding, ok := f.Width()
|
||||
if !ok {
|
||||
symPadding = padding - 1
|
||||
}
|
||||
if symPadding < 0 {
|
||||
symPadding = 0
|
||||
}
|
||||
|
||||
symIndent := strings.Repeat("\t", symPadding)
|
||||
if f.Flag(' ') {
|
||||
symIndent = strings.Repeat(" ", symPadding)
|
||||
}
|
||||
|
||||
// Guess how many digits we need at most, by assuming that all instructions
|
||||
// are double wide.
|
||||
highestOffset := len(insns) * 2
|
||||
offsetWidth := int(math.Ceil(math.Log10(float64(highestOffset))))
|
||||
|
||||
iter := insns.Iterate()
|
||||
for iter.Next() {
|
||||
if iter.Ins.Symbol() != "" {
|
||||
fmt.Fprintf(f, "%s%s:\n", symIndent, iter.Ins.Symbol())
|
||||
}
|
||||
if src := iter.Ins.Source(); src != nil {
|
||||
line := strings.TrimSpace(src.String())
|
||||
if line != "" {
|
||||
fmt.Fprintf(f, "%s%*s; %s\n", indent, offsetWidth, " ", line)
|
||||
}
|
||||
}
|
||||
fmt.Fprintf(f, "%s%*d: %v\n", indent, offsetWidth, iter.Offset, iter.Ins)
|
||||
}
|
||||
}
|
||||
|
||||
// Marshal encodes a BPF program into the kernel format.
|
||||
//
|
||||
// insns may be modified if there are unresolved jumps or bpf2bpf calls.
|
||||
//
|
||||
// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction
|
||||
// without a matching Symbol Instruction within insns.
|
||||
func (insns Instructions) Marshal(w io.Writer, bo binary.ByteOrder) error {
|
||||
if err := insns.encodeFunctionReferences(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := insns.encodeMapPointers(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for i, ins := range insns {
|
||||
if _, err := ins.Marshal(w, bo); err != nil {
|
||||
return fmt.Errorf("instruction %d: %w", i, err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Tag calculates the kernel tag for a series of instructions.
|
||||
//
|
||||
// It mirrors bpf_prog_calc_tag in the kernel and so can be compared
|
||||
// to ProgramInfo.Tag to figure out whether a loaded program matches
|
||||
// certain instructions.
|
||||
func (insns Instructions) Tag(bo binary.ByteOrder) (string, error) {
|
||||
h := sha1.New()
|
||||
for i, ins := range insns {
|
||||
if ins.IsLoadFromMap() {
|
||||
ins.Constant = 0
|
||||
}
|
||||
_, err := ins.Marshal(h, bo)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("instruction %d: %w", i, err)
|
||||
}
|
||||
}
|
||||
return hex.EncodeToString(h.Sum(nil)[:unix.BPF_TAG_SIZE]), nil
|
||||
}
|
||||
|
||||
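The Tag method above mirrors the kernel's bpf_prog_calc_tag: SHA-1 the marshaled instruction stream, keep the first unix.BPF_TAG_SIZE bytes, and hex-encode the result. A minimal self-contained sketch of that truncation scheme (the tag size of 8 bytes and the sample input bytes are assumptions for illustration, not taken from this file):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// tagSize stands in for unix.BPF_TAG_SIZE (assumed to be 8 here).
const tagSize = 8

// tagOf hashes a raw instruction byte stream and keeps the first
// tagSize bytes, hex-encoded, like Instructions.Tag does.
func tagOf(insnBytes []byte) string {
	sum := sha1.Sum(insnBytes)
	return hex.EncodeToString(sum[:tagSize])
}

func main() {
	// Arbitrary stand-in for marshaled instructions.
	tag := tagOf([]byte{0xb7, 0x00, 0x00, 0x00, 0x95, 0x00, 0x00, 0x00})
	fmt.Println(len(tag)) // 16: an 8-byte tag is 16 hex characters
}
```

Note that Tag zeroes the Constant of map loads before hashing, so two programs that only differ in map file descriptors still produce the same tag.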
// encodeFunctionReferences populates the Offset (or Constant, depending on
// the instruction type) field of instructions with a Reference field to point
// to the offset of the corresponding instruction with a matching Symbol field.
//
// Only Reference Instructions that are either jumps or BPF function references
// (calls or function pointer loads) are populated.
//
// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction
// without at least one corresponding Symbol Instruction within insns.
func (insns Instructions) encodeFunctionReferences() error {
	// Index the offsets of instructions tagged as a symbol.
	symbolOffsets := make(map[string]RawInstructionOffset)
	iter := insns.Iterate()
	for iter.Next() {
		ins := iter.Ins

		if ins.Symbol() == "" {
			continue
		}

		if _, ok := symbolOffsets[ins.Symbol()]; ok {
			return fmt.Errorf("duplicate symbol %s", ins.Symbol())
		}

		symbolOffsets[ins.Symbol()] = iter.Offset
	}

	// Find all instructions tagged as references to other symbols.
	// Depending on the instruction type, populate their constant or offset
	// fields to point to the symbol they refer to within the insn stream.
	iter = insns.Iterate()
	for iter.Next() {
		i := iter.Index
		offset := iter.Offset
		ins := iter.Ins

		if ins.Reference() == "" {
			continue
		}

		switch {
		case ins.IsFunctionReference() && ins.Constant == -1:
			symOffset, ok := symbolOffsets[ins.Reference()]
			if !ok {
				return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference)
			}

			ins.Constant = int64(symOffset - offset - 1)

		case ins.OpCode.Class().IsJump() && ins.Offset == -1:
			symOffset, ok := symbolOffsets[ins.Reference()]
			if !ok {
				return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference)
			}

			ins.Offset = int16(symOffset - offset - 1)
		}
	}

	return nil
}

// encodeMapPointers finds all Map Instructions and encodes their FDs
// into their Constant fields.
func (insns Instructions) encodeMapPointers() error {
	iter := insns.Iterate()
	for iter.Next() {
		ins := iter.Ins

		if !ins.IsLoadFromMap() {
			continue
		}

		m := ins.Map()
		if m == nil {
			continue
		}

		fd := m.FD()
		if fd < 0 {
			return fmt.Errorf("map %s: %w", m, sys.ErrClosedFd)
		}

		ins.encodeMapFD(m.FD())
	}

	return nil
}

// Iterate allows iterating a BPF program while keeping track of
// various offsets.
//
// Modifying the instruction slice will lead to undefined behaviour.
func (insns Instructions) Iterate() *InstructionIterator {
	return &InstructionIterator{insns: insns}
}

// InstructionIterator iterates over a BPF program.
type InstructionIterator struct {
	insns Instructions
	// The instruction in question.
	Ins *Instruction
	// The index of the instruction in the original instruction slice.
	Index int
	// The offset of the instruction in raw BPF instructions. This accounts
	// for double-wide instructions.
	Offset RawInstructionOffset
}

// Next returns true as long as there are any instructions remaining.
func (iter *InstructionIterator) Next() bool {
	if len(iter.insns) == 0 {
		return false
	}

	if iter.Ins != nil {
		iter.Index++
		iter.Offset += RawInstructionOffset(iter.Ins.OpCode.rawInstructions())
	}
	iter.Ins = &iter.insns[0]
	iter.insns = iter.insns[1:]
	return true
}

type bpfRegisters uint8

func newBPFRegisters(dst, src Register, bo binary.ByteOrder) (bpfRegisters, error) {
	switch bo {
	case binary.LittleEndian:
		return bpfRegisters((src << 4) | (dst & 0xF)), nil
	case binary.BigEndian:
		return bpfRegisters((dst << 4) | (src & 0xF)), nil
	default:
		return 0, fmt.Errorf("unrecognized ByteOrder %T", bo)
	}
}

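newBPFRegisters packs two 4-bit register numbers into the single register byte of a BPF instruction, with the nibble order flipped between little- and big-endian targets. A self-contained sketch of that packing (plain uint8 register numbers stand in for the asm.Register type):

```go
package main

import "fmt"

// packRegisters mirrors newBPFRegisters: on little-endian targets the
// source register occupies the high nibble, on big-endian targets the
// destination does.
func packRegisters(dst, src uint8, littleEndian bool) uint8 {
	if littleEndian {
		return (src << 4) | (dst & 0xF)
	}
	return (dst << 4) | (src & 0xF)
}

func main() {
	// dst=R1, src=R2
	fmt.Printf("%#02x\n", packRegisters(1, 2, true))  // 0x21
	fmt.Printf("%#02x\n", packRegisters(1, 2, false)) // 0x12
}
```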
// IsUnreferencedSymbol returns true if err was caused by
// an unreferenced symbol.
//
// Deprecated: use errors.Is(err, asm.ErrUnreferencedSymbol).
func IsUnreferencedSymbol(err error) bool {
	return errors.Is(err, ErrUnreferencedSymbol)
}
127
vendor/github.com/cilium/ebpf/asm/jump.go
generated
vendored
@@ -1,127 +0,0 @@
package asm

//go:generate stringer -output jump_string.go -type=JumpOp

// JumpOp affects control flow.
//
// msb      lsb
// +----+-+---+
// |OP  |s|cls|
// +----+-+---+
type JumpOp uint8

const jumpMask OpCode = aluMask

const (
	// InvalidJumpOp is returned by getters when invoked
	// on non branch OpCodes
	InvalidJumpOp JumpOp = 0xff
	// Ja jumps by offset unconditionally
	Ja JumpOp = 0x00
	// JEq jumps by offset if r == imm
	JEq JumpOp = 0x10
	// JGT jumps by offset if r > imm
	JGT JumpOp = 0x20
	// JGE jumps by offset if r >= imm
	JGE JumpOp = 0x30
	// JSet jumps by offset if r & imm
	JSet JumpOp = 0x40
	// JNE jumps by offset if r != imm
	JNE JumpOp = 0x50
	// JSGT jumps by offset if signed r > signed imm
	JSGT JumpOp = 0x60
	// JSGE jumps by offset if signed r >= signed imm
	JSGE JumpOp = 0x70
	// Call builtin or user defined function from imm
	Call JumpOp = 0x80
	// Exit ends execution, with value in r0
	Exit JumpOp = 0x90
	// JLT jumps by offset if r < imm
	JLT JumpOp = 0xa0
	// JLE jumps by offset if r <= imm
	JLE JumpOp = 0xb0
	// JSLT jumps by offset if signed r < signed imm
	JSLT JumpOp = 0xc0
	// JSLE jumps by offset if signed r <= signed imm
	JSLE JumpOp = 0xd0
)

// Return emits an exit instruction.
//
// Requires a return value in R0.
func Return() Instruction {
	return Instruction{
		OpCode: OpCode(JumpClass).SetJumpOp(Exit),
	}
}

// Op returns the OpCode for a given jump source.
func (op JumpOp) Op(source Source) OpCode {
	return OpCode(JumpClass).SetJumpOp(op).SetSource(source)
}

// Imm compares 64 bit dst to 64 bit value (sign extended), and adjusts PC by offset if the condition is fulfilled.
func (op JumpOp) Imm(dst Register, value int32, label string) Instruction {
	return Instruction{
		OpCode:   op.opCode(JumpClass, ImmSource),
		Dst:      dst,
		Offset:   -1,
		Constant: int64(value),
	}.WithReference(label)
}

// Imm32 compares 32 bit dst to 32 bit value, and adjusts PC by offset if the condition is fulfilled.
// Requires kernel 5.1.
func (op JumpOp) Imm32(dst Register, value int32, label string) Instruction {
	return Instruction{
		OpCode:   op.opCode(Jump32Class, ImmSource),
		Dst:      dst,
		Offset:   -1,
		Constant: int64(value),
	}.WithReference(label)
}

// Reg compares 64 bit dst to 64 bit src, and adjusts PC by offset if the condition is fulfilled.
func (op JumpOp) Reg(dst, src Register, label string) Instruction {
	return Instruction{
		OpCode: op.opCode(JumpClass, RegSource),
		Dst:    dst,
		Src:    src,
		Offset: -1,
	}.WithReference(label)
}

// Reg32 compares 32 bit dst to 32 bit src, and adjusts PC by offset if the condition is fulfilled.
// Requires kernel 5.1.
func (op JumpOp) Reg32(dst, src Register, label string) Instruction {
	return Instruction{
		OpCode: op.opCode(Jump32Class, RegSource),
		Dst:    dst,
		Src:    src,
		Offset: -1,
	}.WithReference(label)
}

func (op JumpOp) opCode(class Class, source Source) OpCode {
	if op == Exit || op == Call || op == Ja {
		return InvalidOpCode
	}

	return OpCode(class).SetJumpOp(op).SetSource(source)
}

// Label adjusts PC to the address of the label.
func (op JumpOp) Label(label string) Instruction {
	if op == Call {
		return Instruction{
			OpCode:   OpCode(JumpClass).SetJumpOp(Call),
			Src:      PseudoCall,
			Constant: -1,
		}.WithReference(label)
	}

	return Instruction{
		OpCode: OpCode(JumpClass).SetJumpOp(op),
		Offset: -1,
	}.WithReference(label)
}
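JumpOp.Op composes an opcode byte from three bit fields: the low three bits hold the instruction class, bit 3 selects the comparison source, and the high nibble holds the jump operation. A self-contained sketch with the constant values from this file (the RegSource value of 0x08 is defined elsewhere in the package and is an assumption here, matching the kernel's BPF_X):

```go
package main

import "fmt"

const (
	jumpClass = 0x05 // JumpClass: 64-bit jump instructions
	regSource = 0x08 // assumed RegSource value (kernel BPF_X); ImmSource is 0x00
	jEq       = 0x10 // JEq: jump if equal
)

func main() {
	// Equivalent in spirit to JEq.Op(RegSource): class | source | operation.
	op := jumpClass | regSource | jEq
	fmt.Printf("%#02x\n", op) // 0x1d, i.e. "jeq dst, src, +off"
}
```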
53
vendor/github.com/cilium/ebpf/asm/jump_string.go
generated
vendored
@@ -1,53 +0,0 @@
// Code generated by "stringer -output jump_string.go -type=JumpOp"; DO NOT EDIT.

package asm

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidJumpOp-255]
	_ = x[Ja-0]
	_ = x[JEq-16]
	_ = x[JGT-32]
	_ = x[JGE-48]
	_ = x[JSet-64]
	_ = x[JNE-80]
	_ = x[JSGT-96]
	_ = x[JSGE-112]
	_ = x[Call-128]
	_ = x[Exit-144]
	_ = x[JLT-160]
	_ = x[JLE-176]
	_ = x[JSLT-192]
	_ = x[JSLE-208]
}

const _JumpOp_name = "JaJEqJGTJGEJSetJNEJSGTJSGECallExitJLTJLEJSLTJSLEInvalidJumpOp"

var _JumpOp_map = map[JumpOp]string{
	0:   _JumpOp_name[0:2],
	16:  _JumpOp_name[2:5],
	32:  _JumpOp_name[5:8],
	48:  _JumpOp_name[8:11],
	64:  _JumpOp_name[11:15],
	80:  _JumpOp_name[15:18],
	96:  _JumpOp_name[18:22],
	112: _JumpOp_name[22:26],
	128: _JumpOp_name[26:30],
	144: _JumpOp_name[30:34],
	160: _JumpOp_name[34:37],
	176: _JumpOp_name[37:40],
	192: _JumpOp_name[40:44],
	208: _JumpOp_name[44:48],
	255: _JumpOp_name[48:61],
}

func (i JumpOp) String() string {
	if str, ok := _JumpOp_map[i]; ok {
		return str
	}
	return "JumpOp(" + strconv.FormatInt(int64(i), 10) + ")"
}
204
vendor/github.com/cilium/ebpf/asm/load_store.go
generated
vendored
@@ -1,204 +0,0 @@
package asm

//go:generate stringer -output load_store_string.go -type=Mode,Size

// Mode for load and store operations
//
// msb      lsb
// +---+--+---+
// |MDE|sz|cls|
// +---+--+---+
type Mode uint8

const modeMask OpCode = 0xe0

const (
	// InvalidMode is returned by getters when invoked
	// on non load / store OpCodes
	InvalidMode Mode = 0xff
	// ImmMode - immediate value
	ImmMode Mode = 0x00
	// AbsMode - immediate value + offset
	AbsMode Mode = 0x20
	// IndMode - indirect (imm+src)
	IndMode Mode = 0x40
	// MemMode - load from memory
	MemMode Mode = 0x60
	// XAddMode - add atomically across processors.
	XAddMode Mode = 0xc0
)

// Size of load and store operations
//
// msb      lsb
// +---+--+---+
// |mde|SZ|cls|
// +---+--+---+
type Size uint8

const sizeMask OpCode = 0x18

const (
	// InvalidSize is returned by getters when invoked
	// on non load / store OpCodes
	InvalidSize Size = 0xff
	// DWord - double word; 64 bits
	DWord Size = 0x18
	// Word - word; 32 bits
	Word Size = 0x00
	// Half - half-word; 16 bits
	Half Size = 0x08
	// Byte - byte; 8 bits
	Byte Size = 0x10
)

// Sizeof returns the size in bytes.
func (s Size) Sizeof() int {
	switch s {
	case DWord:
		return 8
	case Word:
		return 4
	case Half:
		return 2
	case Byte:
		return 1
	default:
		return -1
	}
}

// LoadMemOp returns the OpCode to load a value of given size from memory.
func LoadMemOp(size Size) OpCode {
	return OpCode(LdXClass).SetMode(MemMode).SetSize(size)
}

// LoadMem emits `dst = *(size *)(src + offset)`.
func LoadMem(dst, src Register, offset int16, size Size) Instruction {
	return Instruction{
		OpCode: LoadMemOp(size),
		Dst:    dst,
		Src:    src,
		Offset: offset,
	}
}

// LoadImmOp returns the OpCode to load an immediate of given size.
//
// As of kernel 4.20, only DWord size is accepted.
func LoadImmOp(size Size) OpCode {
	return OpCode(LdClass).SetMode(ImmMode).SetSize(size)
}

// LoadImm emits `dst = (size)value`.
//
// As of kernel 4.20, only DWord size is accepted.
func LoadImm(dst Register, value int64, size Size) Instruction {
	return Instruction{
		OpCode:   LoadImmOp(size),
		Dst:      dst,
		Constant: value,
	}
}

// LoadMapPtr stores a pointer to a map in dst.
func LoadMapPtr(dst Register, fd int) Instruction {
	if fd < 0 {
		return Instruction{OpCode: InvalidOpCode}
	}

	return Instruction{
		OpCode:   LoadImmOp(DWord),
		Dst:      dst,
		Src:      PseudoMapFD,
		Constant: int64(uint32(fd)),
	}
}

// LoadMapValue stores a pointer to the value at a certain offset of a map.
func LoadMapValue(dst Register, fd int, offset uint32) Instruction {
	if fd < 0 {
		return Instruction{OpCode: InvalidOpCode}
	}

	fdAndOffset := (uint64(offset) << 32) | uint64(uint32(fd))
	return Instruction{
		OpCode:   LoadImmOp(DWord),
		Dst:      dst,
		Src:      PseudoMapValue,
		Constant: int64(fdAndOffset),
	}
}

// LoadIndOp returns the OpCode for loading a value of given size from an sk_buff.
func LoadIndOp(size Size) OpCode {
	return OpCode(LdClass).SetMode(IndMode).SetSize(size)
}

// LoadInd emits `dst = ntoh(*(size *)(((sk_buff *)R6)->data + src + offset))`.
func LoadInd(dst, src Register, offset int32, size Size) Instruction {
	return Instruction{
		OpCode:   LoadIndOp(size),
		Dst:      dst,
		Src:      src,
		Constant: int64(offset),
	}
}

// LoadAbsOp returns the OpCode for loading a value of given size from an sk_buff.
func LoadAbsOp(size Size) OpCode {
	return OpCode(LdClass).SetMode(AbsMode).SetSize(size)
}

// LoadAbs emits `r0 = ntoh(*(size *)(((sk_buff *)R6)->data + offset))`.
func LoadAbs(offset int32, size Size) Instruction {
	return Instruction{
		OpCode:   LoadAbsOp(size),
		Dst:      R0,
		Constant: int64(offset),
	}
}

// StoreMemOp returns the OpCode for storing a register of given size in memory.
func StoreMemOp(size Size) OpCode {
	return OpCode(StXClass).SetMode(MemMode).SetSize(size)
}

// StoreMem emits `*(size *)(dst + offset) = src`
func StoreMem(dst Register, offset int16, src Register, size Size) Instruction {
	return Instruction{
		OpCode: StoreMemOp(size),
		Dst:    dst,
		Src:    src,
		Offset: offset,
	}
}

// StoreImmOp returns the OpCode for storing an immediate of given size in memory.
func StoreImmOp(size Size) OpCode {
	return OpCode(StClass).SetMode(MemMode).SetSize(size)
}

// StoreImm emits `*(size *)(dst + offset) = value`.
func StoreImm(dst Register, offset int16, value int64, size Size) Instruction {
	return Instruction{
		OpCode:   StoreImmOp(size),
		Dst:      dst,
		Offset:   offset,
		Constant: value,
	}
}

// StoreXAddOp returns the OpCode to atomically add a register to a value in memory.
func StoreXAddOp(size Size) OpCode {
	return OpCode(StXClass).SetMode(XAddMode).SetSize(size)
}

// StoreXAdd atomically adds src to *dst.
func StoreXAdd(dst, src Register, size Size) Instruction {
	return Instruction{
		OpCode: StoreXAddOp(size),
		Dst:    dst,
		Src:    src,
	}
}
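LoadMapValue packs a map file descriptor and a value offset into the single 64-bit Constant of a double-wide load: the offset lands in the upper 32 bits, the fd in the lower 32. A self-contained sketch of that packing and how the two halves are recovered:

```go
package main

import "fmt"

// packFDAndOffset mirrors the fdAndOffset expression in LoadMapValue.
func packFDAndOffset(fd int, offset uint32) int64 {
	return int64((uint64(offset) << 32) | uint64(uint32(fd)))
}

func main() {
	c := packFDAndOffset(3, 16)
	fd := uint32(c)               // lower 32 bits
	off := uint32(uint64(c) >> 32) // upper 32 bits
	fmt.Printf("fd=%d offset=%d\n", fd, off) // fd=3 offset=16
}
```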
80
vendor/github.com/cilium/ebpf/asm/load_store_string.go
generated
vendored
@@ -1,80 +0,0 @@
// Code generated by "stringer -output load_store_string.go -type=Mode,Size"; DO NOT EDIT.

package asm

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidMode-255]
	_ = x[ImmMode-0]
	_ = x[AbsMode-32]
	_ = x[IndMode-64]
	_ = x[MemMode-96]
	_ = x[XAddMode-192]
}

const (
	_Mode_name_0 = "ImmMode"
	_Mode_name_1 = "AbsMode"
	_Mode_name_2 = "IndMode"
	_Mode_name_3 = "MemMode"
	_Mode_name_4 = "XAddMode"
	_Mode_name_5 = "InvalidMode"
)

func (i Mode) String() string {
	switch {
	case i == 0:
		return _Mode_name_0
	case i == 32:
		return _Mode_name_1
	case i == 64:
		return _Mode_name_2
	case i == 96:
		return _Mode_name_3
	case i == 192:
		return _Mode_name_4
	case i == 255:
		return _Mode_name_5
	default:
		return "Mode(" + strconv.FormatInt(int64(i), 10) + ")"
	}
}
func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[InvalidSize-255]
	_ = x[DWord-24]
	_ = x[Word-0]
	_ = x[Half-8]
	_ = x[Byte-16]
}

const (
	_Size_name_0 = "Word"
	_Size_name_1 = "Half"
	_Size_name_2 = "Byte"
	_Size_name_3 = "DWord"
	_Size_name_4 = "InvalidSize"
)

func (i Size) String() string {
	switch {
	case i == 0:
		return _Size_name_0
	case i == 8:
		return _Size_name_1
	case i == 16:
		return _Size_name_2
	case i == 24:
		return _Size_name_3
	case i == 255:
		return _Size_name_4
	default:
		return "Size(" + strconv.FormatInt(int64(i), 10) + ")"
	}
}
80
vendor/github.com/cilium/ebpf/asm/metadata.go
generated
vendored
@@ -1,80 +0,0 @@
package asm

// Metadata contains metadata about an instruction.
type Metadata struct {
	head *metaElement
}

type metaElement struct {
	next       *metaElement
	key, value interface{}
}

// Find the element containing key.
//
// Returns nil if there is no such element.
func (m *Metadata) find(key interface{}) *metaElement {
	for e := m.head; e != nil; e = e.next {
		if e.key == key {
			return e
		}
	}
	return nil
}

// Remove an element from the linked list.
//
// Copies as many elements of the list as necessary to remove r, but doesn't
// perform a full copy.
func (m *Metadata) remove(r *metaElement) {
	current := &m.head
	for e := m.head; e != nil; e = e.next {
		if e == r {
			// We've found the element we want to remove.
			*current = e.next

			// No need to copy the tail.
			return
		}

		// There is another element in front of the one we want to remove.
		// We have to copy it to be able to change metaElement.next.
		cpy := &metaElement{key: e.key, value: e.value}
		*current = cpy
		current = &cpy.next
	}
}

// Set a key to a value.
//
// If value is nil, the key is removed. Avoids modifying old metadata by
// copying if necessary.
func (m *Metadata) Set(key, value interface{}) {
	if e := m.find(key); e != nil {
		if e.value == value {
			// Key is present and the value is the same. Nothing to do.
			return
		}

		// Key is present with a different value. Create a copy of the list
		// which doesn't have the element in it.
		m.remove(e)
	}

	// m.head is now a linked list that doesn't contain key.
	if value == nil {
		return
	}

	m.head = &metaElement{key: key, value: value, next: m.head}
}

// Get the value of a key.
//
// Returns nil if no value with the given key is present.
func (m *Metadata) Get(key interface{}) interface{} {
	if e := m.find(key); e != nil {
		return e.value
	}
	return nil
}
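The copy-on-write behaviour that Set and remove implement means two Metadata values can safely share a list tail: setting a key on one never mutates the other. A self-contained sketch using a simplified copy of the list type (the removal step is omitted, so this version assumes each key is only set once):

```go
package main

import "fmt"

// Simplified copies of the types above, enough to show the sharing semantics.
type metaElement struct {
	next       *metaElement
	key, value interface{}
}

type Metadata struct{ head *metaElement }

func (m *Metadata) Get(key interface{}) interface{} {
	for e := m.head; e != nil; e = e.next {
		if e.key == key {
			return e.value
		}
	}
	return nil
}

// Set prepends a new element; it assumes key is not already present,
// so the copy-on-remove step of the real implementation is skipped.
func (m *Metadata) Set(key, value interface{}) {
	m.head = &metaElement{key: key, value: value, next: m.head}
}

func main() {
	var a Metadata
	a.Set("sym", "main")

	b := a // Metadata is a value; a and b now share the same tail.
	b.Set("ref", "helper")

	fmt.Println(a.Get("ref")) // <nil>: a is unaffected by b's Set
	fmt.Println(b.Get("sym")) // main: the shared tail stays reachable from b
}
```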
271
vendor/github.com/cilium/ebpf/asm/opcode.go
generated
vendored
@@ -1,271 +0,0 @@
package asm

import (
	"fmt"
	"strings"
)

//go:generate stringer -output opcode_string.go -type=Class

// Class of operations
//
//	msb      lsb
//	+---+--+---+
//	|  ??  |CLS|
//	+---+--+---+
type Class uint8

const classMask OpCode = 0x07

const (
	// LdClass loads immediate values into registers.
	// Also used for non-standard load operations from cBPF.
	LdClass Class = 0x00
	// LdXClass loads memory into registers.
	LdXClass Class = 0x01
	// StClass stores immediate values to memory.
	StClass Class = 0x02
	// StXClass stores registers to memory.
	StXClass Class = 0x03
	// ALUClass describes arithmetic operators.
	ALUClass Class = 0x04
	// JumpClass describes jump operators.
	JumpClass Class = 0x05
	// Jump32Class describes jump operators with 32-bit comparisons.
	// Requires kernel 5.1.
	Jump32Class Class = 0x06
	// ALU64Class describes arithmetic operators in 64-bit mode.
	ALU64Class Class = 0x07
)

// IsLoad checks if this is either LdClass or LdXClass.
func (cls Class) IsLoad() bool {
	return cls == LdClass || cls == LdXClass
}

// IsStore checks if this is either StClass or StXClass.
func (cls Class) IsStore() bool {
	return cls == StClass || cls == StXClass
}

func (cls Class) isLoadOrStore() bool {
	return cls.IsLoad() || cls.IsStore()
}

// IsALU checks if this is either ALUClass or ALU64Class.
func (cls Class) IsALU() bool {
	return cls == ALUClass || cls == ALU64Class
}

// IsJump checks if this is either JumpClass or Jump32Class.
func (cls Class) IsJump() bool {
	return cls == JumpClass || cls == Jump32Class
}

func (cls Class) isJumpOrALU() bool {
	return cls.IsJump() || cls.IsALU()
}

// OpCode is a packed eBPF opcode.
//
// Its encoding is defined by a Class value:
//
//	msb      lsb
//	+----+-+---+
//	| ???? |CLS|
//	+----+-+---+
type OpCode uint8

// InvalidOpCode is returned by setters on OpCode.
const InvalidOpCode OpCode = 0xff

// rawInstructions returns the number of BPF instructions required
// to encode this opcode.
func (op OpCode) rawInstructions() int {
	if op.IsDWordLoad() {
		return 2
	}
	return 1
}

// IsDWordLoad checks if the opcode is a 64-bit immediate load.
func (op OpCode) IsDWordLoad() bool {
	return op == LoadImmOp(DWord)
}

// Class returns the class of operation.
func (op OpCode) Class() Class {
	return Class(op & classMask)
}

// Mode returns the mode for load and store operations.
func (op OpCode) Mode() Mode {
	if !op.Class().isLoadOrStore() {
		return InvalidMode
	}
	return Mode(op & modeMask)
}

// Size returns the size for load and store operations.
func (op OpCode) Size() Size {
	if !op.Class().isLoadOrStore() {
		return InvalidSize
	}
	return Size(op & sizeMask)
}

// Source returns the source for branch and ALU operations.
func (op OpCode) Source() Source {
	if !op.Class().isJumpOrALU() || op.ALUOp() == Swap {
		return InvalidSource
	}
	return Source(op & sourceMask)
}

// ALUOp returns the ALUOp.
func (op OpCode) ALUOp() ALUOp {
	if !op.Class().IsALU() {
		return InvalidALUOp
	}
	return ALUOp(op & aluMask)
}

// Endianness returns the Endianness for a byte swap instruction.
func (op OpCode) Endianness() Endianness {
	if op.ALUOp() != Swap {
		return InvalidEndian
	}
	return Endianness(op & endianMask)
}

// JumpOp returns the JumpOp.
// Returns InvalidJumpOp if it doesn't encode a jump.
func (op OpCode) JumpOp() JumpOp {
	if !op.Class().IsJump() {
		return InvalidJumpOp
	}

	jumpOp := JumpOp(op & jumpMask)

	// Some JumpOps are only supported by JumpClass, not Jump32Class.
	if op.Class() == Jump32Class && (jumpOp == Exit || jumpOp == Call || jumpOp == Ja) {
		return InvalidJumpOp
	}

	return jumpOp
}

// SetMode sets the mode on load and store operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetMode(mode Mode) OpCode {
	if !op.Class().isLoadOrStore() || !valid(OpCode(mode), modeMask) {
		return InvalidOpCode
	}
	return (op & ^modeMask) | OpCode(mode)
}

// SetSize sets the size on load and store operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetSize(size Size) OpCode {
	if !op.Class().isLoadOrStore() || !valid(OpCode(size), sizeMask) {
		return InvalidOpCode
	}
	return (op & ^sizeMask) | OpCode(size)
}

// SetSource sets the source on jump and ALU operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetSource(source Source) OpCode {
	if !op.Class().isJumpOrALU() || !valid(OpCode(source), sourceMask) {
		return InvalidOpCode
	}
	return (op & ^sourceMask) | OpCode(source)
}

// SetALUOp sets the ALUOp on ALU operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetALUOp(alu ALUOp) OpCode {
	if !op.Class().IsALU() || !valid(OpCode(alu), aluMask) {
		return InvalidOpCode
	}
	return (op & ^aluMask) | OpCode(alu)
}

// SetJumpOp sets the JumpOp on jump operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetJumpOp(jump JumpOp) OpCode {
	if !op.Class().IsJump() || !valid(OpCode(jump), jumpMask) {
		return InvalidOpCode
	}

	newOp := (op & ^jumpMask) | OpCode(jump)

	// Check newOp is legal.
	if newOp.JumpOp() == InvalidJumpOp {
		return InvalidOpCode
	}

	return newOp
}

func (op OpCode) String() string {
	var f strings.Builder

	switch class := op.Class(); {
	case class.isLoadOrStore():
		f.WriteString(strings.TrimSuffix(class.String(), "Class"))

		mode := op.Mode()
		f.WriteString(strings.TrimSuffix(mode.String(), "Mode"))

		switch op.Size() {
		case DWord:
			f.WriteString("DW")
		case Word:
			f.WriteString("W")
		case Half:
			f.WriteString("H")
		case Byte:
			f.WriteString("B")
		}

	case class.IsALU():
		f.WriteString(op.ALUOp().String())

		if op.ALUOp() == Swap {
			// Width for Endian is controlled by Constant
			f.WriteString(op.Endianness().String())
		} else {
			if class == ALUClass {
				f.WriteString("32")
			}

			f.WriteString(strings.TrimSuffix(op.Source().String(), "Source"))
		}

	case class.IsJump():
		f.WriteString(op.JumpOp().String())

		if class == Jump32Class {
			f.WriteString("32")
		}

		if jop := op.JumpOp(); jop != Exit && jop != Call {
			f.WriteString(strings.TrimSuffix(op.Source().String(), "Source"))
		}

	default:
		fmt.Fprintf(&f, "OpCode(%#x)", uint8(op))
	}

	return f.String()
}

// valid returns true if all bits in value are covered by mask.
func valid(value, mask OpCode) bool {
	return value & ^mask == 0
}
30
vendor/github.com/cilium/ebpf/asm/opcode_string.go
generated
vendored
@@ -1,30 +0,0 @@
// Code generated by "stringer -output opcode_string.go -type=Class"; DO NOT EDIT.

package asm

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[LdClass-0]
	_ = x[LdXClass-1]
	_ = x[StClass-2]
	_ = x[StXClass-3]
	_ = x[ALUClass-4]
	_ = x[JumpClass-5]
	_ = x[Jump32Class-6]
	_ = x[ALU64Class-7]
}

const _Class_name = "LdClassLdXClassStClassStXClassALUClassJumpClassJump32ClassALU64Class"

var _Class_index = [...]uint8{0, 7, 15, 22, 30, 38, 47, 58, 68}

func (i Class) String() string {
	if i >= Class(len(_Class_index)-1) {
		return "Class(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _Class_name[_Class_index[i]:_Class_index[i+1]]
}
51
vendor/github.com/cilium/ebpf/asm/register.go
generated
vendored
@@ -1,51 +0,0 @@
package asm

import (
	"fmt"
)

// Register is the source or destination of most operations.
type Register uint8

// R0 contains return values.
const R0 Register = 0

// Registers for function arguments.
const (
	R1 Register = R0 + 1 + iota
	R2
	R3
	R4
	R5
)

// Callee saved registers preserved by function calls.
const (
	R6 Register = R5 + 1 + iota
	R7
	R8
	R9
)

// Read-only frame pointer to access stack.
const (
	R10 Register = R9 + 1
	RFP          = R10
)

// Pseudo registers used by 64bit loads and jumps
const (
	PseudoMapFD     = R1 // BPF_PSEUDO_MAP_FD
	PseudoMapValue  = R2 // BPF_PSEUDO_MAP_VALUE
	PseudoCall      = R1 // BPF_PSEUDO_CALL
	PseudoFunc      = R4 // BPF_PSEUDO_FUNC
	PseudoKfuncCall = R2 // BPF_PSEUDO_KFUNC_CALL
)

func (r Register) String() string {
	v := uint8(r)
	if v == 10 {
		return "rfp"
	}
	return fmt.Sprintf("r%d", v)
}
66
vendor/github.com/cilium/ebpf/attachtype_string.go
generated
vendored
@@ -1,66 +0,0 @@
// Code generated by "stringer -type AttachType -trimprefix Attach"; DO NOT EDIT.

package ebpf

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[AttachNone-0]
	_ = x[AttachCGroupInetIngress-0]
	_ = x[AttachCGroupInetEgress-1]
	_ = x[AttachCGroupInetSockCreate-2]
	_ = x[AttachCGroupSockOps-3]
	_ = x[AttachSkSKBStreamParser-4]
	_ = x[AttachSkSKBStreamVerdict-5]
	_ = x[AttachCGroupDevice-6]
	_ = x[AttachSkMsgVerdict-7]
	_ = x[AttachCGroupInet4Bind-8]
	_ = x[AttachCGroupInet6Bind-9]
	_ = x[AttachCGroupInet4Connect-10]
	_ = x[AttachCGroupInet6Connect-11]
	_ = x[AttachCGroupInet4PostBind-12]
	_ = x[AttachCGroupInet6PostBind-13]
	_ = x[AttachCGroupUDP4Sendmsg-14]
	_ = x[AttachCGroupUDP6Sendmsg-15]
	_ = x[AttachLircMode2-16]
	_ = x[AttachFlowDissector-17]
	_ = x[AttachCGroupSysctl-18]
	_ = x[AttachCGroupUDP4Recvmsg-19]
	_ = x[AttachCGroupUDP6Recvmsg-20]
	_ = x[AttachCGroupGetsockopt-21]
	_ = x[AttachCGroupSetsockopt-22]
	_ = x[AttachTraceRawTp-23]
	_ = x[AttachTraceFEntry-24]
	_ = x[AttachTraceFExit-25]
	_ = x[AttachModifyReturn-26]
	_ = x[AttachLSMMac-27]
	_ = x[AttachTraceIter-28]
	_ = x[AttachCgroupInet4GetPeername-29]
	_ = x[AttachCgroupInet6GetPeername-30]
	_ = x[AttachCgroupInet4GetSockname-31]
	_ = x[AttachCgroupInet6GetSockname-32]
	_ = x[AttachXDPDevMap-33]
	_ = x[AttachCgroupInetSockRelease-34]
	_ = x[AttachXDPCPUMap-35]
	_ = x[AttachSkLookup-36]
	_ = x[AttachXDP-37]
	_ = x[AttachSkSKBVerdict-38]
	_ = x[AttachSkReuseportSelect-39]
	_ = x[AttachSkReuseportSelectOrMigrate-40]
	_ = x[AttachPerfEvent-41]
	_ = x[AttachTraceKprobeMulti-42]
}

const _AttachType_name = "NoneCGroupInetEgressCGroupInetSockCreateCGroupSockOpsSkSKBStreamParserSkSKBStreamVerdictCGroupDeviceSkMsgVerdictCGroupInet4BindCGroupInet6BindCGroupInet4ConnectCGroupInet6ConnectCGroupInet4PostBindCGroupInet6PostBindCGroupUDP4SendmsgCGroupUDP6SendmsgLircMode2FlowDissectorCGroupSysctlCGroupUDP4RecvmsgCGroupUDP6RecvmsgCGroupGetsockoptCGroupSetsockoptTraceRawTpTraceFEntryTraceFExitModifyReturnLSMMacTraceIterCgroupInet4GetPeernameCgroupInet6GetPeernameCgroupInet4GetSocknameCgroupInet6GetSocknameXDPDevMapCgroupInetSockReleaseXDPCPUMapSkLookupXDPSkSKBVerdictSkReuseportSelectSkReuseportSelectOrMigratePerfEventTraceKprobeMulti"

var _AttachType_index = [...]uint16{0, 4, 20, 40, 53, 70, 88, 100, 112, 127, 142, 160, 178, 197, 216, 233, 250, 259, 272, 284, 301, 318, 334, 350, 360, 371, 381, 393, 399, 408, 430, 452, 474, 496, 505, 526, 535, 543, 546, 558, 575, 601, 610, 626}

func (i AttachType) String() string {
	if i >= AttachType(len(_AttachType_index)-1) {
		return "AttachType(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _AttachType_name[_AttachType_index[i]:_AttachType_index[i+1]]
}
869
vendor/github.com/cilium/ebpf/btf/btf.go
generated
vendored
@@ -1,869 +0,0 @@
package btf

import (
	"bufio"
	"debug/elf"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math"
	"os"
	"reflect"
	"sync"

	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

const btfMagic = 0xeB9F

// Errors returned by BTF functions.
var (
	ErrNotSupported    = internal.ErrNotSupported
	ErrNotFound        = errors.New("not found")
	ErrNoExtendedInfo  = errors.New("no extended info")
	ErrMultipleMatches = errors.New("multiple matching types")
)

// ID represents the unique ID of a BTF object.
type ID = sys.BTFID

// Spec allows querying a set of Types and loading the set into the
// kernel.
type Spec struct {
	// All types contained by the spec, not including types from the base in
	// case the spec was parsed from split BTF.
	types []Type

	// Type IDs indexed by type.
	typeIDs map[Type]TypeID

	// The ID of the first type in types.
	firstTypeID TypeID

	// Types indexed by essential name.
	// Includes all struct flavors and types with the same name.
	namedTypes map[essentialName][]Type

	// String table from ELF, may be nil.
	strings *stringTable

	// Byte order of the ELF we decoded the spec from, may be nil.
	byteOrder binary.ByteOrder
}

var btfHeaderLen = binary.Size(&btfHeader{})

type btfHeader struct {
	Magic   uint16
	Version uint8
	Flags   uint8
	HdrLen  uint32

	TypeOff   uint32
	TypeLen   uint32
	StringOff uint32
	StringLen uint32
}

// typeStart returns the offset from the beginning of the .BTF section
// to the start of its type entries.
func (h *btfHeader) typeStart() int64 {
	return int64(h.HdrLen + h.TypeOff)
}

// stringStart returns the offset from the beginning of the .BTF section
// to the start of its string table.
func (h *btfHeader) stringStart() int64 {
	return int64(h.HdrLen + h.StringOff)
}

// newSpec creates a Spec containing only Void.
func newSpec() *Spec {
	return &Spec{
		[]Type{(*Void)(nil)},
		map[Type]TypeID{(*Void)(nil): 0},
		0,
		make(map[essentialName][]Type),
		nil,
		nil,
	}
}

// LoadSpec opens file and calls LoadSpecFromReader on it.
func LoadSpec(file string) (*Spec, error) {
	fh, err := os.Open(file)
	if err != nil {
		return nil, err
	}
	defer fh.Close()

	return LoadSpecFromReader(fh)
}

// LoadSpecFromReader reads from an ELF or a raw BTF blob.
//
// Returns ErrNotFound if reading from an ELF which contains no BTF. ExtInfos
// may be nil.
func LoadSpecFromReader(rd io.ReaderAt) (*Spec, error) {
	file, err := internal.NewSafeELFFile(rd)
	if err != nil {
		if bo := guessRawBTFByteOrder(rd); bo != nil {
			return loadRawSpec(io.NewSectionReader(rd, 0, math.MaxInt64), bo, nil)
		}

		return nil, err
	}

	return loadSpecFromELF(file)
}

// LoadSpecAndExtInfosFromReader reads from an ELF.
//
// ExtInfos may be nil if the ELF doesn't contain section metadata.
// Returns ErrNotFound if the ELF contains no BTF.
func LoadSpecAndExtInfosFromReader(rd io.ReaderAt) (*Spec, *ExtInfos, error) {
	file, err := internal.NewSafeELFFile(rd)
	if err != nil {
		return nil, nil, err
	}

	spec, err := loadSpecFromELF(file)
	if err != nil {
		return nil, nil, err
	}

	extInfos, err := loadExtInfosFromELF(file, spec)
	if err != nil && !errors.Is(err, ErrNotFound) {
		return nil, nil, err
	}

	return spec, extInfos, nil
}

// symbolOffsets extracts all symbol offsets from an ELF and indexes them by
// section and variable name.
//
// References to variables in BTF data sections carry unsigned 32-bit offsets.
// Some ELF symbols (e.g. in vmlinux) may point to virtual memory that is well
// beyond this range. Since these symbols cannot be described by BTF info,
// ignore them here.
func symbolOffsets(file *internal.SafeELFFile) (map[symbol]uint32, error) {
	symbols, err := file.Symbols()
	if err != nil {
		return nil, fmt.Errorf("can't read symbols: %v", err)
	}

	offsets := make(map[symbol]uint32)
	for _, sym := range symbols {
		if idx := sym.Section; idx >= elf.SHN_LORESERVE && idx <= elf.SHN_HIRESERVE {
			// Ignore things like SHN_ABS
			continue
		}

		if sym.Value > math.MaxUint32 {
			// VarSecinfo offset is u32, cannot reference symbols in higher regions.
			continue
		}

		if int(sym.Section) >= len(file.Sections) {
			return nil, fmt.Errorf("symbol %s: invalid section %d", sym.Name, sym.Section)
		}

		secName := file.Sections[sym.Section].Name
		offsets[symbol{secName, sym.Name}] = uint32(sym.Value)
	}

	return offsets, nil
}

func loadSpecFromELF(file *internal.SafeELFFile) (*Spec, error) {
	var (
		btfSection   *elf.Section
		sectionSizes = make(map[string]uint32)
	)

	for _, sec := range file.Sections {
		switch sec.Name {
		case ".BTF":
			btfSection = sec
		default:
			if sec.Type != elf.SHT_PROGBITS && sec.Type != elf.SHT_NOBITS {
				break
			}

			if sec.Size > math.MaxUint32 {
				return nil, fmt.Errorf("section %s exceeds maximum size", sec.Name)
			}

			sectionSizes[sec.Name] = uint32(sec.Size)
		}
	}

	if btfSection == nil {
		return nil, fmt.Errorf("btf: %w", ErrNotFound)
	}

	offsets, err := symbolOffsets(file)
	if err != nil {
		return nil, err
	}

	if btfSection.ReaderAt == nil {
		return nil, fmt.Errorf("compressed BTF is not supported")
	}

	spec, err := loadRawSpec(btfSection.ReaderAt, file.ByteOrder, nil)
	if err != nil {
		return nil, err
	}

	err = fixupDatasec(spec.types, sectionSizes, offsets)
	if err != nil {
		return nil, err
	}

	return spec, nil
}

func loadRawSpec(btf io.ReaderAt, bo binary.ByteOrder, base *Spec) (*Spec, error) {
	var (
		baseStrings *stringTable
		firstTypeID TypeID
		err         error
	)

	if base != nil {
		if base.firstTypeID != 0 {
			return nil, fmt.Errorf("can't use split BTF as base")
		}

		if base.strings == nil {
			return nil, fmt.Errorf("parse split BTF: base must be loaded from an ELF")
		}

		baseStrings = base.strings

		firstTypeID, err = base.nextTypeID()
		if err != nil {
			return nil, err
		}
	}

	rawTypes, rawStrings, err := parseBTF(btf, bo, baseStrings)
	if err != nil {
		return nil, err
	}

	types, err := inflateRawTypes(rawTypes, rawStrings, base)
	if err != nil {
		return nil, err
	}

	typeIDs, typesByName := indexTypes(types, firstTypeID)

	return &Spec{
		namedTypes:  typesByName,
		typeIDs:     typeIDs,
		types:       types,
		firstTypeID: firstTypeID,
		strings:     rawStrings,
		byteOrder:   bo,
	}, nil
}

func indexTypes(types []Type, firstTypeID TypeID) (map[Type]TypeID, map[essentialName][]Type) {
	namedTypes := 0
	for _, typ := range types {
		if typ.TypeName() != "" {
			// Do a pre-pass to figure out how big types by name has to be.
			// Most types have unique names, so it's OK to ignore essentialName
			// here.
			namedTypes++
		}
	}

	typeIDs := make(map[Type]TypeID, len(types))
	typesByName := make(map[essentialName][]Type, namedTypes)

	for i, typ := range types {
		if name := newEssentialName(typ.TypeName()); name != "" {
			typesByName[name] = append(typesByName[name], typ)
		}
		typeIDs[typ] = firstTypeID + TypeID(i)
	}

	return typeIDs, typesByName
}

// LoadKernelSpec returns the current kernel's BTF information.
//
// Defaults to /sys/kernel/btf/vmlinux and falls back to scanning the file system
// for vmlinux ELFs. Returns an error wrapping ErrNotSupported if BTF is not enabled.
func LoadKernelSpec() (*Spec, error) {
	spec, _, err := kernelSpec()
	if err != nil {
		return nil, err
	}
	return spec.Copy(), nil
}

var kernelBTF struct {
	sync.RWMutex
	spec *Spec
	// True if the spec was read from an ELF instead of raw BTF in /sys.
	fallback bool
}

// FlushKernelSpec removes any cached kernel type information.
func FlushKernelSpec() {
	kernelBTF.Lock()
	defer kernelBTF.Unlock()

	kernelBTF.spec, kernelBTF.fallback = nil, false
}

func kernelSpec() (*Spec, bool, error) {
	kernelBTF.RLock()
	spec, fallback := kernelBTF.spec, kernelBTF.fallback
	kernelBTF.RUnlock()

	if spec == nil {
		kernelBTF.Lock()
		defer kernelBTF.Unlock()

		spec, fallback = kernelBTF.spec, kernelBTF.fallback
	}

	if spec != nil {
		return spec, fallback, nil
	}

	spec, fallback, err := loadKernelSpec()
	if err != nil {
		return nil, false, err
	}

	kernelBTF.spec, kernelBTF.fallback = spec, fallback
	return spec, fallback, nil
}

func loadKernelSpec() (_ *Spec, fallback bool, _ error) {
	fh, err := os.Open("/sys/kernel/btf/vmlinux")
	if err == nil {
		defer fh.Close()

		spec, err := loadRawSpec(fh, internal.NativeEndian, nil)
		return spec, false, err
	}

	file, err := findVMLinux()
	if err != nil {
		return nil, false, err
	}
	defer file.Close()

	spec, err := loadSpecFromELF(file)
	return spec, true, err
}

// findVMLinux scans multiple well-known paths for vmlinux kernel images.
func findVMLinux() (*internal.SafeELFFile, error) {
	release, err := internal.KernelRelease()
	if err != nil {
		return nil, err
	}

	// use same list of locations as libbpf
	// https://github.com/libbpf/libbpf/blob/9a3a42608dbe3731256a5682a125ac1e23bced8f/src/btf.c#L3114-L3122
	locations := []string{
		"/boot/vmlinux-%s",
		"/lib/modules/%s/vmlinux-%[1]s",
		"/lib/modules/%s/build/vmlinux",
		"/usr/lib/modules/%s/kernel/vmlinux",
		"/usr/lib/debug/boot/vmlinux-%s",
		"/usr/lib/debug/boot/vmlinux-%s.debug",
		"/usr/lib/debug/lib/modules/%s/vmlinux",
	}

	for _, loc := range locations {
		file, err := internal.OpenSafeELFFile(fmt.Sprintf(loc, release))
		if errors.Is(err, os.ErrNotExist) {
			continue
		}
		return file, err
	}

	return nil, fmt.Errorf("no BTF found for kernel version %s: %w", release, internal.ErrNotSupported)
}

// parseBTFHeader parses the header of the .BTF section.
func parseBTFHeader(r io.Reader, bo binary.ByteOrder) (*btfHeader, error) {
	var header btfHeader
	if err := binary.Read(r, bo, &header); err != nil {
		return nil, fmt.Errorf("can't read header: %v", err)
	}

	if header.Magic != btfMagic {
		return nil, fmt.Errorf("incorrect magic value %v", header.Magic)
	}

	if header.Version != 1 {
		return nil, fmt.Errorf("unexpected version %v", header.Version)
	}

	if header.Flags != 0 {
		return nil, fmt.Errorf("unsupported flags %v", header.Flags)
	}

	remainder := int64(header.HdrLen) - int64(binary.Size(&header))
	if remainder < 0 {
		return nil, errors.New("header length shorter than btfHeader size")
	}

	if _, err := io.CopyN(internal.DiscardZeroes{}, r, remainder); err != nil {
		return nil, fmt.Errorf("header padding: %v", err)
	}

	return &header, nil
}

func guessRawBTFByteOrder(r io.ReaderAt) binary.ByteOrder {
	buf := new(bufio.Reader)
	for _, bo := range []binary.ByteOrder{
		binary.LittleEndian,
		binary.BigEndian,
	} {
		buf.Reset(io.NewSectionReader(r, 0, math.MaxInt64))
		if _, err := parseBTFHeader(buf, bo); err == nil {
			return bo
		}
	}

	return nil
}

// parseBTF reads a .BTF section into memory and parses it into a list of
// raw types and a string table.
func parseBTF(btf io.ReaderAt, bo binary.ByteOrder, baseStrings *stringTable) ([]rawType, *stringTable, error) {
	buf := internal.NewBufferedSectionReader(btf, 0, math.MaxInt64)
	header, err := parseBTFHeader(buf, bo)
	if err != nil {
		return nil, nil, fmt.Errorf("parsing .BTF header: %v", err)
	}

	rawStrings, err := readStringTable(io.NewSectionReader(btf, header.stringStart(), int64(header.StringLen)),
		baseStrings)
	if err != nil {
		return nil, nil, fmt.Errorf("can't read type names: %w", err)
	}

	buf.Reset(io.NewSectionReader(btf, header.typeStart(), int64(header.TypeLen)))
	rawTypes, err := readTypes(buf, bo, header.TypeLen)
	if err != nil {
		return nil, nil, fmt.Errorf("can't read types: %w", err)
	}

	return rawTypes, rawStrings, nil
}

type symbol struct {
	section string
	name    string
}

// fixupDatasec attempts to patch up missing info in Datasecs and their members
// by supplementing them with information from the ELF headers and symbol table.
func fixupDatasec(types []Type, sectionSizes map[string]uint32, offsets map[symbol]uint32) error {
	for _, typ := range types {
		ds, ok := typ.(*Datasec)
		if !ok {
			continue
		}

		name := ds.Name

		// Some Datasecs are virtual and don't have corresponding ELF sections.
		switch name {
		case ".ksyms":
			// .ksyms describes forward declarations of kfunc signatures.
			// Nothing to fix up, all sizes and offsets are 0.
			for _, vsi := range ds.Vars {
				_, ok := vsi.Type.(*Func)
				if !ok {
					// Only Funcs are supported in the .ksyms Datasec.
					return fmt.Errorf("data section %s: expected *btf.Func, not %T: %w", name, vsi.Type, ErrNotSupported)
				}
			}

			continue
		case ".kconfig":
			// .kconfig has a size of 0 and has all members' offsets set to 0.
			// Fix up all offsets and set the Datasec's size.
			if err := fixupDatasecLayout(ds); err != nil {
				return err
			}

			// Fix up extern to global linkage to avoid a BTF verifier error.
			for _, vsi := range ds.Vars {
				vsi.Type.(*Var).Linkage = GlobalVar
			}

			continue
		}

		if ds.Size != 0 {
			continue
		}

		ds.Size, ok = sectionSizes[name]
		if !ok {
			return fmt.Errorf("data section %s: missing size", name)
		}

		for i := range ds.Vars {
			symName := ds.Vars[i].Type.TypeName()
			ds.Vars[i].Offset, ok = offsets[symbol{name, symName}]
			if !ok {
				return fmt.Errorf("data section %s: missing offset for symbol %s", name, symName)
			}
		}
	}

	return nil
}

// fixupDatasecLayout populates ds.Vars[].Offset according to var sizes and
// alignment. Calculates and sets ds.Size.
func fixupDatasecLayout(ds *Datasec) error {
	var off uint32

	for i, vsi := range ds.Vars {
		v, ok := vsi.Type.(*Var)
		if !ok {
			return fmt.Errorf("member %d: unsupported type %T", i, vsi.Type)
		}

		size, err := Sizeof(v.Type)
		if err != nil {
			return fmt.Errorf("variable %s: getting size: %w", v.Name, err)
		}
		align, err := alignof(v.Type)
		if err != nil {
			return fmt.Errorf("variable %s: getting alignment: %w", v.Name, err)
		}

		// Align the current member based on the offset of the end of the previous
		// member and the alignment of the current member.
		off = internal.Align(off, uint32(align))

		ds.Vars[i].Offset = off

		off += uint32(size)
	}

	ds.Size = off

	return nil
}

// Copy creates a copy of Spec.
func (s *Spec) Copy() *Spec {
	types := copyTypes(s.types, nil)
	typeIDs, typesByName := indexTypes(types, s.firstTypeID)

	// NB: Other parts of spec are not copied since they are immutable.
	return &Spec{
		types,
		typeIDs,
		s.firstTypeID,
		typesByName,
		s.strings,
		s.byteOrder,
	}
}

type sliceWriter []byte

func (sw sliceWriter) Write(p []byte) (int, error) {
	if len(p) != len(sw) {
		return 0, errors.New("size doesn't match")
	}

	return copy(sw, p), nil
}

// nextTypeID returns the next unallocated type ID or an error if there are no
// more type IDs.
func (s *Spec) nextTypeID() (TypeID, error) {
	id := s.firstTypeID + TypeID(len(s.types))
	if id < s.firstTypeID {
		return 0, fmt.Errorf("no more type IDs")
	}
	return id, nil
}

// TypeByID returns the BTF Type with the given type ID.
//
// Returns an error wrapping ErrNotFound if a Type with the given ID
// does not exist in the Spec.
func (s *Spec) TypeByID(id TypeID) (Type, error) {
	if id < s.firstTypeID {
		return nil, fmt.Errorf("look up type with ID %d (first ID is %d): %w", id, s.firstTypeID, ErrNotFound)
	}

	index := int(id - s.firstTypeID)
	if index >= len(s.types) {
		return nil, fmt.Errorf("look up type with ID %d: %w", id, ErrNotFound)
	}

	return s.types[index], nil
}

// TypeID returns the ID for a given Type.
//
// Returns an error wrapping ErrNotFound if the type isn't part of the Spec.
func (s *Spec) TypeID(typ Type) (TypeID, error) {
	if _, ok := typ.(*Void); ok {
		// Equality is weird for void, since it is a zero sized type.
		return 0, nil
	}

	id, ok := s.typeIDs[typ]
	if !ok {
		return 0, fmt.Errorf("no ID for type %s: %w", typ, ErrNotFound)
	}

	return id, nil
}

// AnyTypesByName returns a list of BTF Types with the given name.
//
// If the BTF blob describes multiple compilation units like vmlinux, multiple
// Types with the same name and kind can exist, but might not describe the same
// data structure.
//
// Returns an error wrapping ErrNotFound if no matching Type exists in the Spec.
func (s *Spec) AnyTypesByName(name string) ([]Type, error) {
	types := s.namedTypes[newEssentialName(name)]
	if len(types) == 0 {
		return nil, fmt.Errorf("type name %s: %w", name, ErrNotFound)
	}

	// Return a copy to prevent changes to namedTypes.
	result := make([]Type, 0, len(types))
	for _, t := range types {
		// Match against the full name, not just the essential one
		// in case the type being looked up is a struct flavor.
		if t.TypeName() == name {
			result = append(result, t)
		}
	}
	return result, nil
}

// AnyTypeByName returns a Type with the given name.
//
// Returns an error if multiple types of that name exist.
func (s *Spec) AnyTypeByName(name string) (Type, error) {
	types, err := s.AnyTypesByName(name)
	if err != nil {
		return nil, err
	}
|
||||
|
||||
if len(types) > 1 {
|
||||
return nil, fmt.Errorf("found multiple types: %v", types)
|
||||
}
|
||||
|
||||
return types[0], nil
|
||||
}
|
||||
|
||||
// TypeByName searches for a Type with a specific name. Since multiple Types
|
||||
// with the same name can exist, the parameter typ is taken to narrow down the
|
||||
// search in case of a clash.
|
||||
//
|
||||
// typ must be a non-nil pointer to an implementation of a Type. On success, the
|
||||
// address of the found Type will be copied to typ.
|
||||
//
|
||||
// Returns an error wrapping ErrNotFound if no matching Type exists in the Spec.
|
||||
// Returns an error wrapping ErrMultipleTypes if multiple candidates are found.
|
||||
func (s *Spec) TypeByName(name string, typ interface{}) error {
|
||||
typeInterface := reflect.TypeOf((*Type)(nil)).Elem()
|
||||
|
||||
// typ may be **T or *Type
|
||||
typValue := reflect.ValueOf(typ)
|
||||
if typValue.Kind() != reflect.Ptr {
|
||||
return fmt.Errorf("%T is not a pointer", typ)
|
||||
}
|
||||
|
||||
typPtr := typValue.Elem()
|
||||
if !typPtr.CanSet() {
|
||||
return fmt.Errorf("%T cannot be set", typ)
|
||||
}
|
||||
|
||||
wanted := typPtr.Type()
|
||||
if wanted == typeInterface {
|
||||
// This is *Type. Unwrap the value's type.
|
||||
wanted = typPtr.Elem().Type()
|
||||
}
|
||||
|
||||
if !wanted.AssignableTo(typeInterface) {
|
||||
return fmt.Errorf("%T does not satisfy Type interface", typ)
|
||||
}
|
||||
|
||||
types, err := s.AnyTypesByName(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var candidate Type
|
||||
for _, typ := range types {
|
||||
if reflect.TypeOf(typ) != wanted {
|
||||
continue
|
||||
}
|
||||
|
||||
if candidate != nil {
|
||||
return fmt.Errorf("type %s(%T): %w", name, typ, ErrMultipleMatches)
|
||||
}
|
||||
|
||||
candidate = typ
|
||||
}
|
||||
|
||||
if candidate == nil {
|
||||
return fmt.Errorf("%s %s: %w", wanted, name, ErrNotFound)
|
||||
}
|
||||
|
||||
typPtr.Set(reflect.ValueOf(candidate))
|
||||
|
||||
return nil
|
||||
}
|
||||

// LoadSplitSpecFromReader loads split BTF from a reader.
//
// Types from base are used to resolve references in the split BTF.
// The returned Spec only contains types from the split BTF, not from the base.
func LoadSplitSpecFromReader(r io.ReaderAt, base *Spec) (*Spec, error) {
	return loadRawSpec(r, internal.NativeEndian, base)
}

// TypesIterator iterates over types of a given spec.
type TypesIterator struct {
	types []Type
	index int
	// The last visited type in the spec.
	Type Type
}

// Iterate returns the types iterator.
func (s *Spec) Iterate() *TypesIterator {
	// We share the backing array of types with the Spec. This is safe since
	// we don't allow deletion or shuffling of types.
	return &TypesIterator{types: s.types, index: 0}
}

// Next returns true as long as there are any remaining types.
func (iter *TypesIterator) Next() bool {
	if len(iter.types) <= iter.index {
		return false
	}

	iter.Type = iter.types[iter.index]
	iter.index++
	return true
}

// haveBTF attempts to load a BTF blob containing an Int. It should pass on any
// kernel that supports BPF_BTF_LOAD.
var haveBTF = internal.NewFeatureTest("BTF", "4.18", func() error {
	// 0-length anonymous integer
	err := probeBTF(&Int{})
	if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) {
		return internal.ErrNotSupported
	}
	return err
})

// haveMapBTF attempts to load a minimal BTF blob containing a Var. It is
// used as a proxy for .bss, .data and .rodata map support, which generally
// come with a Var and Datasec. These were introduced in Linux 5.2.
var haveMapBTF = internal.NewFeatureTest("Map BTF (Var/Datasec)", "5.2", func() error {
	if err := haveBTF(); err != nil {
		return err
	}

	v := &Var{
		Name: "a",
		Type: &Pointer{(*Void)(nil)},
	}

	err := probeBTF(v)
	if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) {
		// Treat both EINVAL and EPERM as not supported: creating the map may still
		// succeed without Btf* attrs.
		return internal.ErrNotSupported
	}
	return err
})

// haveProgBTF attempts to load a BTF blob containing a Func and FuncProto. It
// is used as a proxy for ext_info (func_info) support, which depends on
// Func(Proto) by definition.
var haveProgBTF = internal.NewFeatureTest("Program BTF (func/line_info)", "5.0", func() error {
	if err := haveBTF(); err != nil {
		return err
	}

	fn := &Func{
		Name: "a",
		Type: &FuncProto{Return: (*Void)(nil)},
	}

	err := probeBTF(fn)
	if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) {
		return internal.ErrNotSupported
	}
	return err
})

var haveFuncLinkage = internal.NewFeatureTest("BTF func linkage", "5.6", func() error {
	if err := haveProgBTF(); err != nil {
		return err
	}

	fn := &Func{
		Name:    "a",
		Type:    &FuncProto{Return: (*Void)(nil)},
		Linkage: GlobalFunc,
	}

	err := probeBTF(fn)
	if errors.Is(err, unix.EINVAL) {
		return internal.ErrNotSupported
	}
	return err
})

func probeBTF(typ Type) error {
	b, err := NewBuilder([]Type{typ})
	if err != nil {
		return err
	}

	buf, err := b.Marshal(nil, nil)
	if err != nil {
		return err
	}

	fd, err := sys.BtfLoad(&sys.BtfLoadAttr{
		Btf:     sys.NewSlicePointer(buf),
		BtfSize: uint32(len(buf)),
	})

	if err == nil {
		fd.Close()
	}

	return err
}
371
vendor/github.com/cilium/ebpf/btf/btf_types.go
generated
vendored
@@ -1,371 +0,0 @@
package btf

import (
	"encoding/binary"
	"fmt"
	"io"
	"unsafe"
)

//go:generate stringer -linecomment -output=btf_types_string.go -type=FuncLinkage,VarLinkage,btfKind

// btfKind describes a Type.
type btfKind uint8

// Equivalents of the BTF_KIND_* constants.
const (
	kindUnknown btfKind = iota // Unknown
	kindInt                    // Int
	kindPointer                // Pointer
	kindArray                  // Array
	kindStruct                 // Struct
	kindUnion                  // Union
	kindEnum                   // Enum
	kindForward                // Forward
	kindTypedef                // Typedef
	kindVolatile               // Volatile
	kindConst                  // Const
	kindRestrict               // Restrict
	// Added ~4.20
	kindFunc      // Func
	kindFuncProto // FuncProto
	// Added ~5.1
	kindVar     // Var
	kindDatasec // Datasec
	// Added ~5.13
	kindFloat // Float
	// Added 5.16
	kindDeclTag // DeclTag
	kindTypeTag // TypeTag
	// Added 6.0
	kindEnum64 // Enum64
)

// FuncLinkage describes BTF function linkage metadata.
type FuncLinkage int

// Equivalent of enum btf_func_linkage.
const (
	StaticFunc FuncLinkage = iota // static
	GlobalFunc                    // global
	ExternFunc                    // extern
)

// VarLinkage describes BTF variable linkage metadata.
type VarLinkage int

const (
	StaticVar VarLinkage = iota // static
	GlobalVar                   // global
	ExternVar                   // extern
)

const (
	btfTypeKindShift     = 24
	btfTypeKindLen       = 5
	btfTypeVlenShift     = 0
	btfTypeVlenMask      = 16
	btfTypeKindFlagShift = 31
	btfTypeKindFlagMask  = 1
)

var btfTypeLen = binary.Size(btfType{})

// btfType is equivalent to struct btf_type in Documentation/bpf/btf.rst.
type btfType struct {
	NameOff uint32
	/* "info" bits arrangement
	 * bits 0-15: vlen (e.g. # of struct's members), linkage
	 * bits 16-23: unused
	 * bits 24-28: kind (e.g. int, ptr, array...etc)
	 * bits 29-30: unused
	 * bit 31: kind_flag, currently used by
	 *	struct, union and fwd
	 */
	Info uint32
	/* "size" is used by INT, ENUM, STRUCT and UNION.
	 * "size" tells the size of the type it is describing.
	 *
	 * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
	 * FUNC and FUNC_PROTO.
	 * "type" is a type_id referring to another type.
	 */
	SizeType uint32
}

func mask(len uint32) uint32 {
	return (1 << len) - 1
}

func readBits(value, len, shift uint32) uint32 {
	return (value >> shift) & mask(len)
}

func writeBits(value, len, shift, new uint32) uint32 {
	value &^= mask(len) << shift
	value |= (new & mask(len)) << shift
	return value
}

func (bt *btfType) info(len, shift uint32) uint32 {
	return readBits(bt.Info, len, shift)
}

func (bt *btfType) setInfo(value, len, shift uint32) {
	bt.Info = writeBits(bt.Info, len, shift, value)
}

func (bt *btfType) Kind() btfKind {
	return btfKind(bt.info(btfTypeKindLen, btfTypeKindShift))
}

func (bt *btfType) SetKind(kind btfKind) {
	bt.setInfo(uint32(kind), btfTypeKindLen, btfTypeKindShift)
}

func (bt *btfType) Vlen() int {
	return int(bt.info(btfTypeVlenMask, btfTypeVlenShift))
}

func (bt *btfType) SetVlen(vlen int) {
	bt.setInfo(uint32(vlen), btfTypeVlenMask, btfTypeVlenShift)
}

func (bt *btfType) kindFlagBool() bool {
	return bt.info(btfTypeKindFlagMask, btfTypeKindFlagShift) == 1
}

func (bt *btfType) setKindFlagBool(set bool) {
	var value uint32
	if set {
		value = 1
	}
	bt.setInfo(value, btfTypeKindFlagMask, btfTypeKindFlagShift)
}

// Bitfield returns true if the struct or union contain a bitfield.
func (bt *btfType) Bitfield() bool {
	return bt.kindFlagBool()
}

func (bt *btfType) SetBitfield(isBitfield bool) {
	bt.setKindFlagBool(isBitfield)
}

func (bt *btfType) FwdKind() FwdKind {
	return FwdKind(bt.info(btfTypeKindFlagMask, btfTypeKindFlagShift))
}

func (bt *btfType) SetFwdKind(kind FwdKind) {
	bt.setInfo(uint32(kind), btfTypeKindFlagMask, btfTypeKindFlagShift)
}

func (bt *btfType) Signed() bool {
	return bt.kindFlagBool()
}

func (bt *btfType) SetSigned(signed bool) {
	bt.setKindFlagBool(signed)
}

func (bt *btfType) Linkage() FuncLinkage {
	return FuncLinkage(bt.info(btfTypeVlenMask, btfTypeVlenShift))
}

func (bt *btfType) SetLinkage(linkage FuncLinkage) {
	bt.setInfo(uint32(linkage), btfTypeVlenMask, btfTypeVlenShift)
}

func (bt *btfType) Type() TypeID {
	// TODO: Panic here if wrong kind?
	return TypeID(bt.SizeType)
}

func (bt *btfType) SetType(id TypeID) {
	bt.SizeType = uint32(id)
}

func (bt *btfType) Size() uint32 {
	// TODO: Panic here if wrong kind?
	return bt.SizeType
}

func (bt *btfType) SetSize(size uint32) {
	bt.SizeType = size
}

func (bt *btfType) Marshal(w io.Writer, bo binary.ByteOrder) error {
	buf := make([]byte, unsafe.Sizeof(*bt))
	bo.PutUint32(buf[0:], bt.NameOff)
	bo.PutUint32(buf[4:], bt.Info)
	bo.PutUint32(buf[8:], bt.SizeType)
	_, err := w.Write(buf)
	return err
}

type rawType struct {
	btfType
	data interface{}
}

func (rt *rawType) Marshal(w io.Writer, bo binary.ByteOrder) error {
	if err := rt.btfType.Marshal(w, bo); err != nil {
		return err
	}

	if rt.data == nil {
		return nil
	}

	return binary.Write(w, bo, rt.data)
}

// btfInt encodes additional data for integers.
//
// ? ? ? ? e e e e o o o o o o o o ? ? ? ? ? ? ? ? b b b b b b b b
// ? = undefined
// e = encoding
// o = offset (bitfields?)
// b = bits (bitfields)
type btfInt struct {
	Raw uint32
}

const (
	btfIntEncodingLen   = 4
	btfIntEncodingShift = 24
	btfIntOffsetLen     = 8
	btfIntOffsetShift   = 16
	btfIntBitsLen       = 8
	btfIntBitsShift     = 0
)

func (bi btfInt) Encoding() IntEncoding {
	return IntEncoding(readBits(bi.Raw, btfIntEncodingLen, btfIntEncodingShift))
}

func (bi *btfInt) SetEncoding(e IntEncoding) {
	bi.Raw = writeBits(uint32(bi.Raw), btfIntEncodingLen, btfIntEncodingShift, uint32(e))
}

func (bi btfInt) Offset() Bits {
	return Bits(readBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift))
}

func (bi *btfInt) SetOffset(offset uint32) {
	bi.Raw = writeBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift, offset)
}

func (bi btfInt) Bits() Bits {
	return Bits(readBits(bi.Raw, btfIntBitsLen, btfIntBitsShift))
}

func (bi *btfInt) SetBits(bits byte) {
	bi.Raw = writeBits(bi.Raw, btfIntBitsLen, btfIntBitsShift, uint32(bits))
}

type btfArray struct {
	Type      TypeID
	IndexType TypeID
	Nelems    uint32
}

type btfMember struct {
	NameOff uint32
	Type    TypeID
	Offset  uint32
}

type btfVarSecinfo struct {
	Type   TypeID
	Offset uint32
	Size   uint32
}

type btfVariable struct {
	Linkage uint32
}

type btfEnum struct {
	NameOff uint32
	Val     uint32
}

type btfEnum64 struct {
	NameOff uint32
	ValLo32 uint32
	ValHi32 uint32
}

type btfParam struct {
	NameOff uint32
	Type    TypeID
}

type btfDeclTag struct {
	ComponentIdx uint32
}

func readTypes(r io.Reader, bo binary.ByteOrder, typeLen uint32) ([]rawType, error) {
	var header btfType
	// because of the interleaving between types and struct members it is difficult to
	// precompute the numbers of raw types this will parse
	// this "guess" is a good first estimation
	sizeOfbtfType := uintptr(btfTypeLen)
	tyMaxCount := uintptr(typeLen) / sizeOfbtfType / 2
	types := make([]rawType, 0, tyMaxCount)

	for id := TypeID(1); ; id++ {
		if err := binary.Read(r, bo, &header); err == io.EOF {
			return types, nil
		} else if err != nil {
			return nil, fmt.Errorf("can't read type info for id %v: %v", id, err)
		}

		var data interface{}
		switch header.Kind() {
		case kindInt:
			data = new(btfInt)
		case kindPointer:
		case kindArray:
			data = new(btfArray)
		case kindStruct:
			fallthrough
		case kindUnion:
			data = make([]btfMember, header.Vlen())
		case kindEnum:
			data = make([]btfEnum, header.Vlen())
		case kindForward:
		case kindTypedef:
		case kindVolatile:
		case kindConst:
		case kindRestrict:
		case kindFunc:
		case kindFuncProto:
			data = make([]btfParam, header.Vlen())
		case kindVar:
			data = new(btfVariable)
		case kindDatasec:
			data = make([]btfVarSecinfo, header.Vlen())
		case kindFloat:
		case kindDeclTag:
			data = new(btfDeclTag)
		case kindTypeTag:
		case kindEnum64:
			data = make([]btfEnum64, header.Vlen())
		default:
			return nil, fmt.Errorf("type id %v: unknown kind: %v", id, header.Kind())
		}

		if data == nil {
			types = append(types, rawType{header, nil})
			continue
		}

		if err := binary.Read(r, bo, data); err != nil {
			return nil, fmt.Errorf("type id %d: kind %v: can't read %T: %v", id, header.Kind(), data, err)
		}

		types = append(types, rawType{header, data})
	}
}
80
vendor/github.com/cilium/ebpf/btf/btf_types_string.go
generated
vendored
@@ -1,80 +0,0 @@
// Code generated by "stringer -linecomment -output=btf_types_string.go -type=FuncLinkage,VarLinkage,btfKind"; DO NOT EDIT.

package btf

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[StaticFunc-0]
	_ = x[GlobalFunc-1]
	_ = x[ExternFunc-2]
}

const _FuncLinkage_name = "staticglobalextern"

var _FuncLinkage_index = [...]uint8{0, 6, 12, 18}

func (i FuncLinkage) String() string {
	if i < 0 || i >= FuncLinkage(len(_FuncLinkage_index)-1) {
		return "FuncLinkage(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _FuncLinkage_name[_FuncLinkage_index[i]:_FuncLinkage_index[i+1]]
}
func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[StaticVar-0]
	_ = x[GlobalVar-1]
	_ = x[ExternVar-2]
}

const _VarLinkage_name = "staticglobalextern"

var _VarLinkage_index = [...]uint8{0, 6, 12, 18}

func (i VarLinkage) String() string {
	if i < 0 || i >= VarLinkage(len(_VarLinkage_index)-1) {
		return "VarLinkage(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _VarLinkage_name[_VarLinkage_index[i]:_VarLinkage_index[i+1]]
}
func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[kindUnknown-0]
	_ = x[kindInt-1]
	_ = x[kindPointer-2]
	_ = x[kindArray-3]
	_ = x[kindStruct-4]
	_ = x[kindUnion-5]
	_ = x[kindEnum-6]
	_ = x[kindForward-7]
	_ = x[kindTypedef-8]
	_ = x[kindVolatile-9]
	_ = x[kindConst-10]
	_ = x[kindRestrict-11]
	_ = x[kindFunc-12]
	_ = x[kindFuncProto-13]
	_ = x[kindVar-14]
	_ = x[kindDatasec-15]
	_ = x[kindFloat-16]
	_ = x[kindDeclTag-17]
	_ = x[kindTypeTag-18]
	_ = x[kindEnum64-19]
}

const _btfKind_name = "UnknownIntPointerArrayStructUnionEnumForwardTypedefVolatileConstRestrictFuncFuncProtoVarDatasecFloatDeclTagTypeTagEnum64"

var _btfKind_index = [...]uint8{0, 7, 10, 17, 22, 28, 33, 37, 44, 51, 59, 64, 72, 76, 85, 88, 95, 100, 107, 114, 120}

func (i btfKind) String() string {
	if i >= btfKind(len(_btfKind_index)-1) {
		return "btfKind(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _btfKind_name[_btfKind_index[i]:_btfKind_index[i+1]]
}
1011
vendor/github.com/cilium/ebpf/btf/core.go
generated
vendored
File diff suppressed because it is too large
5
vendor/github.com/cilium/ebpf/btf/doc.go
generated
vendored
@@ -1,5 +0,0 @@
// Package btf handles data encoded according to the BPF Type Format.
//
// The canonical documentation lives in the Linux kernel repository and is
// available at https://www.kernel.org/doc/html/latest/bpf/btf.html
package btf
768
vendor/github.com/cilium/ebpf/btf/ext_info.go
generated
vendored
@@ -1,768 +0,0 @@
package btf

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math"
	"sort"

	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/internal"
)

// ExtInfos contains ELF section metadata.
type ExtInfos struct {
	// The slices are sorted by offset in ascending order.
	funcInfos       map[string][]funcInfo
	lineInfos       map[string][]lineInfo
	relocationInfos map[string][]coreRelocationInfo
}

// loadExtInfosFromELF parses ext infos from the .BTF.ext section in an ELF.
//
// Returns an error wrapping ErrNotFound if no ext infos are present.
func loadExtInfosFromELF(file *internal.SafeELFFile, spec *Spec) (*ExtInfos, error) {
	section := file.Section(".BTF.ext")
	if section == nil {
		return nil, fmt.Errorf("btf ext infos: %w", ErrNotFound)
	}

	if section.ReaderAt == nil {
		return nil, fmt.Errorf("compressed ext_info is not supported")
	}

	return loadExtInfos(section.ReaderAt, file.ByteOrder, spec, spec.strings)
}

// loadExtInfos parses bare ext infos.
func loadExtInfos(r io.ReaderAt, bo binary.ByteOrder, spec *Spec, strings *stringTable) (*ExtInfos, error) {
	// Open unbuffered section reader. binary.Read() calls io.ReadFull on
	// the header structs, resulting in one syscall per header.
	headerRd := io.NewSectionReader(r, 0, math.MaxInt64)
	extHeader, err := parseBTFExtHeader(headerRd, bo)
	if err != nil {
		return nil, fmt.Errorf("parsing BTF extension header: %w", err)
	}

	coreHeader, err := parseBTFExtCOREHeader(headerRd, bo, extHeader)
	if err != nil {
		return nil, fmt.Errorf("parsing BTF CO-RE header: %w", err)
	}

	buf := internal.NewBufferedSectionReader(r, extHeader.funcInfoStart(), int64(extHeader.FuncInfoLen))
	btfFuncInfos, err := parseFuncInfos(buf, bo, strings)
	if err != nil {
		return nil, fmt.Errorf("parsing BTF function info: %w", err)
	}

	funcInfos := make(map[string][]funcInfo, len(btfFuncInfos))
	for section, bfis := range btfFuncInfos {
		funcInfos[section], err = newFuncInfos(bfis, spec)
		if err != nil {
			return nil, fmt.Errorf("section %s: func infos: %w", section, err)
		}
	}

	buf = internal.NewBufferedSectionReader(r, extHeader.lineInfoStart(), int64(extHeader.LineInfoLen))
	btfLineInfos, err := parseLineInfos(buf, bo, strings)
	if err != nil {
		return nil, fmt.Errorf("parsing BTF line info: %w", err)
	}

	lineInfos := make(map[string][]lineInfo, len(btfLineInfos))
	for section, blis := range btfLineInfos {
		lineInfos[section], err = newLineInfos(blis, strings)
		if err != nil {
			return nil, fmt.Errorf("section %s: line infos: %w", section, err)
		}
	}

	if coreHeader == nil || coreHeader.COREReloLen == 0 {
		return &ExtInfos{funcInfos, lineInfos, nil}, nil
	}

	var btfCORERelos map[string][]bpfCORERelo
	buf = internal.NewBufferedSectionReader(r, extHeader.coreReloStart(coreHeader), int64(coreHeader.COREReloLen))
	btfCORERelos, err = parseCORERelos(buf, bo, strings)
	if err != nil {
		return nil, fmt.Errorf("parsing CO-RE relocation info: %w", err)
	}

	coreRelos := make(map[string][]coreRelocationInfo, len(btfCORERelos))
	for section, brs := range btfCORERelos {
		coreRelos[section], err = newRelocationInfos(brs, spec, strings)
		if err != nil {
			return nil, fmt.Errorf("section %s: CO-RE relocations: %w", section, err)
		}
	}

	return &ExtInfos{funcInfos, lineInfos, coreRelos}, nil
}

type funcInfoMeta struct{}
type coreRelocationMeta struct{}

// Assign per-section metadata from BTF to a section's instructions.
func (ei *ExtInfos) Assign(insns asm.Instructions, section string) {
	funcInfos := ei.funcInfos[section]
	lineInfos := ei.lineInfos[section]
	reloInfos := ei.relocationInfos[section]

	iter := insns.Iterate()
	for iter.Next() {
		if len(funcInfos) > 0 && funcInfos[0].offset == iter.Offset {
			*iter.Ins = WithFuncMetadata(*iter.Ins, funcInfos[0].fn)
			funcInfos = funcInfos[1:]
		}

		if len(lineInfos) > 0 && lineInfos[0].offset == iter.Offset {
			*iter.Ins = iter.Ins.WithSource(lineInfos[0].line)
			lineInfos = lineInfos[1:]
		}

		if len(reloInfos) > 0 && reloInfos[0].offset == iter.Offset {
			iter.Ins.Metadata.Set(coreRelocationMeta{}, reloInfos[0].relo)
			reloInfos = reloInfos[1:]
		}
	}
}

// MarshalExtInfos encodes function and line info embedded in insns into kernel
// wire format.
//
// Returns ErrNotSupported if the kernel doesn't support BTF-associated programs.
func MarshalExtInfos(insns asm.Instructions) (_ *Handle, funcInfos, lineInfos []byte, _ error) {
	// Bail out early if the kernel doesn't support Func(Proto). If this is the
	// case, func_info will also be unsupported.
	if err := haveProgBTF(); err != nil {
		return nil, nil, nil, err
	}

	iter := insns.Iterate()
	for iter.Next() {
		_, ok := iter.Ins.Source().(*Line)
		fn := FuncMetadata(iter.Ins)
		if ok || fn != nil {
			goto marshal
		}
	}

	return nil, nil, nil, nil

marshal:
	var b Builder
	var fiBuf, liBuf bytes.Buffer
	for {
		if fn := FuncMetadata(iter.Ins); fn != nil {
			fi := &funcInfo{
				fn:     fn,
				offset: iter.Offset,
			}
			if err := fi.marshal(&fiBuf, &b); err != nil {
				return nil, nil, nil, fmt.Errorf("write func info: %w", err)
			}
		}

		if line, ok := iter.Ins.Source().(*Line); ok {
			li := &lineInfo{
				line:   line,
				offset: iter.Offset,
			}
			if err := li.marshal(&liBuf, &b); err != nil {
				return nil, nil, nil, fmt.Errorf("write line info: %w", err)
			}
		}

		if !iter.Next() {
			break
		}
	}

	handle, err := NewHandle(&b)
	return handle, fiBuf.Bytes(), liBuf.Bytes(), err
}

// btfExtHeader is found at the start of the .BTF.ext section.
type btfExtHeader struct {
	Magic   uint16
	Version uint8
	Flags   uint8

	// HdrLen is larger than the size of struct btfExtHeader when it is
	// immediately followed by a btfExtCOREHeader.
	HdrLen uint32

	FuncInfoOff uint32
	FuncInfoLen uint32
	LineInfoOff uint32
	LineInfoLen uint32
}

// parseBTFExtHeader parses the header of the .BTF.ext section.
func parseBTFExtHeader(r io.Reader, bo binary.ByteOrder) (*btfExtHeader, error) {
	var header btfExtHeader
	if err := binary.Read(r, bo, &header); err != nil {
		return nil, fmt.Errorf("can't read header: %v", err)
	}

	if header.Magic != btfMagic {
		return nil, fmt.Errorf("incorrect magic value %v", header.Magic)
	}

	if header.Version != 1 {
		return nil, fmt.Errorf("unexpected version %v", header.Version)
	}

	if header.Flags != 0 {
		return nil, fmt.Errorf("unsupported flags %v", header.Flags)
	}

	if int64(header.HdrLen) < int64(binary.Size(&header)) {
		return nil, fmt.Errorf("header length shorter than btfExtHeader size")
	}

	return &header, nil
}

// funcInfoStart returns the offset from the beginning of the .BTF.ext section
// to the start of its func_info entries.
func (h *btfExtHeader) funcInfoStart() int64 {
	return int64(h.HdrLen + h.FuncInfoOff)
}

// lineInfoStart returns the offset from the beginning of the .BTF.ext section
// to the start of its line_info entries.
func (h *btfExtHeader) lineInfoStart() int64 {
	return int64(h.HdrLen + h.LineInfoOff)
}

// coreReloStart returns the offset from the beginning of the .BTF.ext section
// to the start of its CO-RE relocation entries.
func (h *btfExtHeader) coreReloStart(ch *btfExtCOREHeader) int64 {
	return int64(h.HdrLen + ch.COREReloOff)
}

// btfExtCOREHeader is found right after the btfExtHeader when its HdrLen
// field is larger than its size.
type btfExtCOREHeader struct {
	COREReloOff uint32
	COREReloLen uint32
}

// parseBTFExtCOREHeader parses the tail of the .BTF.ext header. If additional
// header bytes are present, extHeader.HdrLen will be larger than the struct,
// indicating the presence of a CO-RE extension header.
func parseBTFExtCOREHeader(r io.Reader, bo binary.ByteOrder, extHeader *btfExtHeader) (*btfExtCOREHeader, error) {
	extHdrSize := int64(binary.Size(&extHeader))
	remainder := int64(extHeader.HdrLen) - extHdrSize

	if remainder == 0 {
		return nil, nil
	}

	var coreHeader btfExtCOREHeader
	if err := binary.Read(r, bo, &coreHeader); err != nil {
		return nil, fmt.Errorf("can't read header: %v", err)
	}

	return &coreHeader, nil
}

type btfExtInfoSec struct {
	SecNameOff uint32
	NumInfo    uint32
}

// parseExtInfoSec parses a btf_ext_info_sec header within .BTF.ext,
// appearing within func_info and line_info sub-sections.
// These headers appear once for each program section in the ELF and are
// followed by one or more func/line_info records for the section.
func parseExtInfoSec(r io.Reader, bo binary.ByteOrder, strings *stringTable) (string, *btfExtInfoSec, error) {
	var infoHeader btfExtInfoSec
	if err := binary.Read(r, bo, &infoHeader); err != nil {
		return "", nil, fmt.Errorf("read ext info header: %w", err)
	}

	secName, err := strings.Lookup(infoHeader.SecNameOff)
	if err != nil {
		return "", nil, fmt.Errorf("get section name: %w", err)
	}
	if secName == "" {
		return "", nil, fmt.Errorf("extinfo header refers to empty section name")
	}

	if infoHeader.NumInfo == 0 {
		return "", nil, fmt.Errorf("section %s has zero records", secName)
	}

	return secName, &infoHeader, nil
}

// parseExtInfoRecordSize parses the uint32 at the beginning of a func_infos
// or line_infos segment that describes the length of all extInfoRecords in
// that segment.
func parseExtInfoRecordSize(r io.Reader, bo binary.ByteOrder) (uint32, error) {
	const maxRecordSize = 256

	var recordSize uint32
	if err := binary.Read(r, bo, &recordSize); err != nil {
		return 0, fmt.Errorf("can't read record size: %v", err)
	}

	if recordSize < 4 {
		// Need at least InsnOff worth of bytes per record.
		return 0, errors.New("record size too short")
	}
	if recordSize > maxRecordSize {
		return 0, fmt.Errorf("record size %v exceeds %v", recordSize, maxRecordSize)
	}
return recordSize, nil
|
||||
}
// The size of a FuncInfo in BTF wire format.
var FuncInfoSize = uint32(binary.Size(bpfFuncInfo{}))

type funcInfo struct {
	fn     *Func
	offset asm.RawInstructionOffset
}

type bpfFuncInfo struct {
	// Instruction offset of the function within an ELF section.
	InsnOff uint32
	TypeID  TypeID
}

func newFuncInfo(fi bpfFuncInfo, spec *Spec) (*funcInfo, error) {
	typ, err := spec.TypeByID(fi.TypeID)
	if err != nil {
		return nil, err
	}

	fn, ok := typ.(*Func)
	if !ok {
		return nil, fmt.Errorf("type ID %d is a %T, but expected a Func", fi.TypeID, typ)
	}

	// C doesn't have anonymous functions, but check just in case.
	if fn.Name == "" {
		return nil, fmt.Errorf("func with type ID %d doesn't have a name", fi.TypeID)
	}

	return &funcInfo{
		fn,
		asm.RawInstructionOffset(fi.InsnOff),
	}, nil
}

func newFuncInfos(bfis []bpfFuncInfo, spec *Spec) ([]funcInfo, error) {
	fis := make([]funcInfo, 0, len(bfis))
	for _, bfi := range bfis {
		fi, err := newFuncInfo(bfi, spec)
		if err != nil {
			return nil, fmt.Errorf("offset %d: %w", bfi.InsnOff, err)
		}
		fis = append(fis, *fi)
	}
	sort.Slice(fis, func(i, j int) bool {
		return fis[i].offset <= fis[j].offset
	})
	return fis, nil
}

// marshal into the BTF wire format.
func (fi *funcInfo) marshal(w *bytes.Buffer, b *Builder) error {
	id, err := b.Add(fi.fn)
	if err != nil {
		return err
	}
	bfi := bpfFuncInfo{
		InsnOff: uint32(fi.offset),
		TypeID:  id,
	}
	buf := make([]byte, FuncInfoSize)
	internal.NativeEndian.PutUint32(buf, bfi.InsnOff)
	internal.NativeEndian.PutUint32(buf[4:], uint32(bfi.TypeID))
	_, err = w.Write(buf)
	return err
}
// parseFuncInfos parses a func_info sub-section within .BTF.ext into a map of
// func infos indexed by section name.
func parseFuncInfos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfFuncInfo, error) {
	recordSize, err := parseExtInfoRecordSize(r, bo)
	if err != nil {
		return nil, err
	}

	result := make(map[string][]bpfFuncInfo)
	for {
		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
		if errors.Is(err, io.EOF) {
			return result, nil
		}
		if err != nil {
			return nil, err
		}

		records, err := parseFuncInfoRecords(r, bo, recordSize, infoHeader.NumInfo)
		if err != nil {
			return nil, fmt.Errorf("section %v: %w", secName, err)
		}

		result[secName] = records
	}
}

// parseFuncInfoRecords parses a stream of func_infos into a funcInfos.
// These records appear after a btf_ext_info_sec header in the func_info
// sub-section of .BTF.ext.
func parseFuncInfoRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfFuncInfo, error) {
	var out []bpfFuncInfo
	var fi bpfFuncInfo

	if exp, got := FuncInfoSize, recordSize; exp != got {
		// BTF blob's record size is longer than we know how to parse.
		return nil, fmt.Errorf("expected FuncInfo record size %d, but BTF blob contains %d", exp, got)
	}

	for i := uint32(0); i < recordNum; i++ {
		if err := binary.Read(r, bo, &fi); err != nil {
			return nil, fmt.Errorf("can't read function info: %v", err)
		}

		if fi.InsnOff%asm.InstructionSize != 0 {
			return nil, fmt.Errorf("offset %v is not aligned with instruction size", fi.InsnOff)
		}

		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
		// Convert as early as possible.
		fi.InsnOff /= asm.InstructionSize

		out = append(out, fi)
	}

	return out, nil
}
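The byte-to-instruction conversion above relies on every (non-wide) BPF instruction being 8 bytes, which is what asm.InstructionSize encodes. A self-contained sketch of the alignment check and division, with an assumed constant:

```go
package main

import "fmt"

// bpfInsnSize is the size of a single BPF instruction in bytes,
// matching asm.InstructionSize in cilium/ebpf.
const bpfInsnSize = 8

// byteOffToInsnOff converts an ELF byte offset into the instruction
// offset the kernel expects, rejecting unaligned values.
func byteOffToInsnOff(off uint32) (uint32, error) {
	if off%bpfInsnSize != 0 {
		return 0, fmt.Errorf("offset %d is not aligned with instruction size", off)
	}
	return off / bpfInsnSize, nil
}

func main() {
	insn, err := byteOffToInsnOff(24)
	fmt.Println(insn, err) // 3 <nil>
}
```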
var LineInfoSize = uint32(binary.Size(bpfLineInfo{}))

// Line represents the location and contents of a single line of source
// code a BPF ELF was compiled from.
type Line struct {
	fileName   string
	line       string
	lineNumber uint32
	lineColumn uint32
}

func (li *Line) FileName() string {
	return li.fileName
}

func (li *Line) Line() string {
	return li.line
}

func (li *Line) LineNumber() uint32 {
	return li.lineNumber
}

func (li *Line) LineColumn() uint32 {
	return li.lineColumn
}

func (li *Line) String() string {
	return li.line
}

type lineInfo struct {
	line   *Line
	offset asm.RawInstructionOffset
}

// Constants for the format of bpfLineInfo.LineCol.
const (
	bpfLineShift = 10
	bpfLineMax   = (1 << (32 - bpfLineShift)) - 1
	bpfColumnMax = (1 << bpfLineShift) - 1
)
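These constants describe how a line/column pair is packed into one uint32: the upper 22 bits hold the line number, the lower 10 the column. A standalone sketch of the packing and unpacking (illustrative names):

```go
package main

import "fmt"

// lineShift and columnMax mirror bpfLineShift and bpfColumnMax above:
// LineCol = line<<10 | column.
const lineShift = 10
const columnMax = (1 << lineShift) - 1

func packLineCol(line, col uint32) uint32 {
	return line<<lineShift | col
}

func unpackLineCol(lc uint32) (line, col uint32) {
	return lc >> lineShift, lc & columnMax
}

func main() {
	lc := packLineCol(42, 7)
	line, col := unpackLineCol(lc)
	fmt.Println(lc, line, col) // 43015 42 7
}
```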
type bpfLineInfo struct {
	// Instruction offset of the line within the whole instruction stream, in instructions.
	InsnOff     uint32
	FileNameOff uint32
	LineOff     uint32
	LineCol     uint32
}

func newLineInfo(li bpfLineInfo, strings *stringTable) (*lineInfo, error) {
	line, err := strings.Lookup(li.LineOff)
	if err != nil {
		return nil, fmt.Errorf("lookup of line: %w", err)
	}

	fileName, err := strings.Lookup(li.FileNameOff)
	if err != nil {
		return nil, fmt.Errorf("lookup of filename: %w", err)
	}

	lineNumber := li.LineCol >> bpfLineShift
	lineColumn := li.LineCol & bpfColumnMax

	return &lineInfo{
		&Line{
			fileName,
			line,
			lineNumber,
			lineColumn,
		},
		asm.RawInstructionOffset(li.InsnOff),
	}, nil
}

func newLineInfos(blis []bpfLineInfo, strings *stringTable) ([]lineInfo, error) {
	lis := make([]lineInfo, 0, len(blis))
	for _, bli := range blis {
		li, err := newLineInfo(bli, strings)
		if err != nil {
			return nil, fmt.Errorf("offset %d: %w", bli.InsnOff, err)
		}
		lis = append(lis, *li)
	}
	sort.Slice(lis, func(i, j int) bool {
		return lis[i].offset <= lis[j].offset
	})
	return lis, nil
}

// marshal writes the binary representation of the LineInfo to w.
func (li *lineInfo) marshal(w *bytes.Buffer, b *Builder) error {
	line := li.line
	if line.lineNumber > bpfLineMax {
		return fmt.Errorf("line %d exceeds %d", line.lineNumber, bpfLineMax)
	}

	if line.lineColumn > bpfColumnMax {
		return fmt.Errorf("column %d exceeds %d", line.lineColumn, bpfColumnMax)
	}

	fileNameOff, err := b.addString(line.fileName)
	if err != nil {
		return fmt.Errorf("file name %q: %w", line.fileName, err)
	}

	lineOff, err := b.addString(line.line)
	if err != nil {
		return fmt.Errorf("line %q: %w", line.line, err)
	}

	bli := bpfLineInfo{
		uint32(li.offset),
		fileNameOff,
		lineOff,
		(line.lineNumber << bpfLineShift) | line.lineColumn,
	}

	buf := make([]byte, LineInfoSize)
	internal.NativeEndian.PutUint32(buf, bli.InsnOff)
	internal.NativeEndian.PutUint32(buf[4:], bli.FileNameOff)
	internal.NativeEndian.PutUint32(buf[8:], bli.LineOff)
	internal.NativeEndian.PutUint32(buf[12:], bli.LineCol)
	_, err = w.Write(buf)
	return err
}
// parseLineInfos parses a line_info sub-section within .BTF.ext into a map of
// line infos indexed by section name.
func parseLineInfos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfLineInfo, error) {
	recordSize, err := parseExtInfoRecordSize(r, bo)
	if err != nil {
		return nil, err
	}

	result := make(map[string][]bpfLineInfo)
	for {
		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
		if errors.Is(err, io.EOF) {
			return result, nil
		}
		if err != nil {
			return nil, err
		}

		records, err := parseLineInfoRecords(r, bo, recordSize, infoHeader.NumInfo)
		if err != nil {
			return nil, fmt.Errorf("section %v: %w", secName, err)
		}

		result[secName] = records
	}
}

// parseLineInfoRecords parses a stream of line_infos into a lineInfos.
// These records appear after a btf_ext_info_sec header in the line_info
// sub-section of .BTF.ext.
func parseLineInfoRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfLineInfo, error) {
	var out []bpfLineInfo
	var li bpfLineInfo

	if exp, got := uint32(binary.Size(li)), recordSize; exp != got {
		// BTF blob's record size is longer than we know how to parse.
		return nil, fmt.Errorf("expected LineInfo record size %d, but BTF blob contains %d", exp, got)
	}

	for i := uint32(0); i < recordNum; i++ {
		if err := binary.Read(r, bo, &li); err != nil {
			return nil, fmt.Errorf("can't read line info: %v", err)
		}

		if li.InsnOff%asm.InstructionSize != 0 {
			return nil, fmt.Errorf("offset %v is not aligned with instruction size", li.InsnOff)
		}

		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
		// Convert as early as possible.
		li.InsnOff /= asm.InstructionSize

		out = append(out, li)
	}

	return out, nil
}
// bpfCORERelo matches the kernel's struct bpf_core_relo.
type bpfCORERelo struct {
	InsnOff      uint32
	TypeID       TypeID
	AccessStrOff uint32
	Kind         coreKind
}

type CORERelocation struct {
	// The local type of the relocation, stripped of typedefs and qualifiers.
	typ      Type
	accessor coreAccessor
	kind     coreKind
	// The ID of the local type in the source BTF.
	id TypeID
}

func (cr *CORERelocation) String() string {
	return fmt.Sprintf("CORERelocation(%s, %s[%s], local_id=%d)", cr.kind, cr.typ, cr.accessor, cr.id)
}

func CORERelocationMetadata(ins *asm.Instruction) *CORERelocation {
	relo, _ := ins.Metadata.Get(coreRelocationMeta{}).(*CORERelocation)
	return relo
}

type coreRelocationInfo struct {
	relo   *CORERelocation
	offset asm.RawInstructionOffset
}

func newRelocationInfo(relo bpfCORERelo, spec *Spec, strings *stringTable) (*coreRelocationInfo, error) {
	typ, err := spec.TypeByID(relo.TypeID)
	if err != nil {
		return nil, err
	}

	accessorStr, err := strings.Lookup(relo.AccessStrOff)
	if err != nil {
		return nil, err
	}

	accessor, err := parseCOREAccessor(accessorStr)
	if err != nil {
		return nil, fmt.Errorf("accessor %q: %s", accessorStr, err)
	}

	return &coreRelocationInfo{
		&CORERelocation{
			typ,
			accessor,
			relo.Kind,
			relo.TypeID,
		},
		asm.RawInstructionOffset(relo.InsnOff),
	}, nil
}

func newRelocationInfos(brs []bpfCORERelo, spec *Spec, strings *stringTable) ([]coreRelocationInfo, error) {
	rs := make([]coreRelocationInfo, 0, len(brs))
	for _, br := range brs {
		relo, err := newRelocationInfo(br, spec, strings)
		if err != nil {
			return nil, fmt.Errorf("offset %d: %w", br.InsnOff, err)
		}
		rs = append(rs, *relo)
	}
	sort.Slice(rs, func(i, j int) bool {
		return rs[i].offset < rs[j].offset
	})
	return rs, nil
}

var extInfoReloSize = binary.Size(bpfCORERelo{})

// parseCORERelos parses a core_relos sub-section within .BTF.ext into a map of
// CO-RE relocations indexed by section name.
func parseCORERelos(r io.Reader, bo binary.ByteOrder, strings *stringTable) (map[string][]bpfCORERelo, error) {
	recordSize, err := parseExtInfoRecordSize(r, bo)
	if err != nil {
		return nil, err
	}

	if recordSize != uint32(extInfoReloSize) {
		return nil, fmt.Errorf("expected record size %d, got %d", extInfoReloSize, recordSize)
	}

	result := make(map[string][]bpfCORERelo)
	for {
		secName, infoHeader, err := parseExtInfoSec(r, bo, strings)
		if errors.Is(err, io.EOF) {
			return result, nil
		}
		if err != nil {
			return nil, err
		}

		records, err := parseCOREReloRecords(r, bo, recordSize, infoHeader.NumInfo)
		if err != nil {
			return nil, fmt.Errorf("section %v: %w", secName, err)
		}

		result[secName] = records
	}
}

// parseCOREReloRecords parses a stream of CO-RE relocation entries into a
// coreRelos. These records appear after a btf_ext_info_sec header in the
// core_relos sub-section of .BTF.ext.
func parseCOREReloRecords(r io.Reader, bo binary.ByteOrder, recordSize uint32, recordNum uint32) ([]bpfCORERelo, error) {
	var out []bpfCORERelo

	var relo bpfCORERelo
	for i := uint32(0); i < recordNum; i++ {
		if err := binary.Read(r, bo, &relo); err != nil {
			return nil, fmt.Errorf("can't read CO-RE relocation: %v", err)
		}

		if relo.InsnOff%asm.InstructionSize != 0 {
			return nil, fmt.Errorf("offset %v is not aligned with instruction size", relo.InsnOff)
		}

		// ELF tracks offset in bytes, the kernel expects raw BPF instructions.
		// Convert as early as possible.
		relo.InsnOff /= asm.InstructionSize

		out = append(out, relo)
	}

	return out, nil
}
344
vendor/github.com/cilium/ebpf/btf/format.go
generated
vendored
@@ -1,344 +0,0 @@
package btf

import (
	"errors"
	"fmt"
	"strings"
)

var errNestedTooDeep = errors.New("nested too deep")

// GoFormatter converts a Type to Go syntax.
//
// A zero GoFormatter is valid to use.
type GoFormatter struct {
	w strings.Builder

	// Types present in this map are referred to using the given name if they
	// are encountered when outputting another type.
	Names map[Type]string

	// Identifier is called for each field of struct-like types. By default the
	// field name is used as is.
	Identifier func(string) string

	// EnumIdentifier is called for each element of an enum. By default the
	// name of the enum type is concatenated with Identifier(element).
	EnumIdentifier func(name, element string) string
}

// TypeDeclaration generates a Go type declaration for a BTF type.
func (gf *GoFormatter) TypeDeclaration(name string, typ Type) (string, error) {
	gf.w.Reset()
	if err := gf.writeTypeDecl(name, typ); err != nil {
		return "", err
	}
	return gf.w.String(), nil
}

func (gf *GoFormatter) identifier(s string) string {
	if gf.Identifier != nil {
		return gf.Identifier(s)
	}

	return s
}

func (gf *GoFormatter) enumIdentifier(name, element string) string {
	if gf.EnumIdentifier != nil {
		return gf.EnumIdentifier(name, element)
	}

	return name + gf.identifier(element)
}

// writeTypeDecl outputs a declaration of the given type.
//
// It encodes https://golang.org/ref/spec#Type_declarations:
//
//	type foo struct { bar uint32; }
//	type bar int32
func (gf *GoFormatter) writeTypeDecl(name string, typ Type) error {
	if name == "" {
		return fmt.Errorf("need a name for type %s", typ)
	}

	typ = skipQualifiers(typ)
	fmt.Fprintf(&gf.w, "type %s ", name)
	if err := gf.writeTypeLit(typ, 0); err != nil {
		return err
	}

	e, ok := typ.(*Enum)
	if !ok || len(e.Values) == 0 {
		return nil
	}

	gf.w.WriteString("; const ( ")
	for _, ev := range e.Values {
		id := gf.enumIdentifier(name, ev.Name)
		fmt.Fprintf(&gf.w, "%s %s = %d; ", id, name, ev.Value)
	}
	gf.w.WriteString(")")

	return nil
}

// writeType outputs the name of a named type or a literal describing the type.
//
// It encodes https://golang.org/ref/spec#Types.
//
//	foo (if foo is a named type)
//	uint32
func (gf *GoFormatter) writeType(typ Type, depth int) error {
	typ = skipQualifiers(typ)

	name := gf.Names[typ]
	if name != "" {
		gf.w.WriteString(name)
		return nil
	}

	return gf.writeTypeLit(typ, depth)
}

// writeTypeLit outputs a literal describing the type.
//
// The function ignores named types.
//
// It encodes https://golang.org/ref/spec#TypeLit.
//
//	struct { bar uint32; }
//	uint32
func (gf *GoFormatter) writeTypeLit(typ Type, depth int) error {
	depth++
	if depth > maxTypeDepth {
		return errNestedTooDeep
	}

	var err error
	switch v := skipQualifiers(typ).(type) {
	case *Int:
		err = gf.writeIntLit(v)

	case *Enum:
		if !v.Signed {
			gf.w.WriteRune('u')
		}
		switch v.Size {
		case 1:
			gf.w.WriteString("int8")
		case 2:
			gf.w.WriteString("int16")
		case 4:
			gf.w.WriteString("int32")
		case 8:
			gf.w.WriteString("int64")
		default:
			err = fmt.Errorf("invalid enum size %d", v.Size)
		}

	case *Typedef:
		err = gf.writeType(v.Type, depth)

	case *Array:
		fmt.Fprintf(&gf.w, "[%d]", v.Nelems)
		err = gf.writeType(v.Type, depth)

	case *Struct:
		err = gf.writeStructLit(v.Size, v.Members, depth)

	case *Union:
		// Always choose the first member to represent the union in Go.
		err = gf.writeStructLit(v.Size, v.Members[:1], depth)

	case *Datasec:
		err = gf.writeDatasecLit(v, depth)

	default:
		return fmt.Errorf("type %T: %w", v, ErrNotSupported)
	}

	if err != nil {
		return fmt.Errorf("%s: %w", typ, err)
	}

	return nil
}

func (gf *GoFormatter) writeIntLit(i *Int) error {
	bits := i.Size * 8
	switch i.Encoding {
	case Bool:
		if i.Size != 1 {
			return fmt.Errorf("bool with size %d", i.Size)
		}
		gf.w.WriteString("bool")
	case Char:
		if i.Size != 1 {
			return fmt.Errorf("char with size %d", i.Size)
		}
		// BTF doesn't have a way to specify the signedness of a char. Assume
		// we are dealing with unsigned, since this works nicely with []byte
		// in Go code.
		fallthrough
	case Unsigned, Signed:
		stem := "uint"
		if i.Encoding == Signed {
			stem = "int"
		}
		if i.Size > 8 {
			fmt.Fprintf(&gf.w, "[%d]byte /* %s%d */", i.Size, stem, i.Size*8)
		} else {
			fmt.Fprintf(&gf.w, "%s%d", stem, bits)
		}
	default:
		return fmt.Errorf("can't encode %s", i.Encoding)
	}
	return nil
}
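The core rule of writeIntLit is: a size-N integer becomes intN/uintN, with signedness picking the stem, and anything wider than 8 bytes falling back to a byte array with a comment. A simplified standalone sketch of just that mapping (bool/char handling omitted; names are illustrative):

```go
package main

import "fmt"

// goIntName mirrors writeIntLit's integer rule: size in bytes plus
// signedness selects a Go builtin, or a byte array for wide integers.
func goIntName(size uint32, signed bool) string {
	stem := "uint"
	if signed {
		stem = "int"
	}
	if size > 8 {
		return fmt.Sprintf("[%d]byte /* %s%d */", size, stem, size*8)
	}
	return fmt.Sprintf("%s%d", stem, size*8)
}

func main() {
	fmt.Println(goIntName(4, true))   // int32
	fmt.Println(goIntName(16, false)) // [16]byte /* uint128 */
}
```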
func (gf *GoFormatter) writeStructLit(size uint32, members []Member, depth int) error {
	gf.w.WriteString("struct { ")

	prevOffset := uint32(0)
	skippedBitfield := false
	for i, m := range members {
		if m.BitfieldSize > 0 {
			skippedBitfield = true
			continue
		}

		offset := m.Offset.Bytes()
		if n := offset - prevOffset; skippedBitfield && n > 0 {
			fmt.Fprintf(&gf.w, "_ [%d]byte /* unsupported bitfield */; ", n)
		} else {
			gf.writePadding(n)
		}

		fieldSize, err := Sizeof(m.Type)
		if err != nil {
			return fmt.Errorf("field %d: %w", i, err)
		}

		prevOffset = offset + uint32(fieldSize)
		if prevOffset > size {
			return fmt.Errorf("field %d of size %d exceeds type size %d", i, fieldSize, size)
		}

		if err := gf.writeStructField(m, depth); err != nil {
			return fmt.Errorf("field %d: %w", i, err)
		}
	}

	gf.writePadding(size - prevOffset)
	gf.w.WriteString("}")
	return nil
}

func (gf *GoFormatter) writeStructField(m Member, depth int) error {
	if m.BitfieldSize > 0 {
		return fmt.Errorf("bitfields are not supported")
	}
	if m.Offset%8 != 0 {
		return fmt.Errorf("unsupported offset %d", m.Offset)
	}

	if m.Name == "" {
		// Special case a nested anonymous union like
		//     struct foo { union { int bar; int baz }; }
		// by replacing the whole union with its first member.
		union, ok := m.Type.(*Union)
		if !ok {
			return fmt.Errorf("anonymous fields are not supported")
		}

		if len(union.Members) == 0 {
			return errors.New("empty anonymous union")
		}

		depth++
		if depth > maxTypeDepth {
			return errNestedTooDeep
		}

		m := union.Members[0]
		size, err := Sizeof(m.Type)
		if err != nil {
			return err
		}

		if err := gf.writeStructField(m, depth); err != nil {
			return err
		}

		gf.writePadding(union.Size - uint32(size))
		return nil
	}

	fmt.Fprintf(&gf.w, "%s ", gf.identifier(m.Name))

	if err := gf.writeType(m.Type, depth); err != nil {
		return err
	}

	gf.w.WriteString("; ")
	return nil
}

func (gf *GoFormatter) writeDatasecLit(ds *Datasec, depth int) error {
	gf.w.WriteString("struct { ")

	prevOffset := uint32(0)
	for i, vsi := range ds.Vars {
		v, ok := vsi.Type.(*Var)
		if !ok {
			return fmt.Errorf("can't format %s as part of data section", vsi.Type)
		}

		if v.Linkage != GlobalVar {
			// Ignore static, extern, etc. for now.
			continue
		}

		if v.Name == "" {
			return fmt.Errorf("variable %d: empty name", i)
		}

		gf.writePadding(vsi.Offset - prevOffset)
		prevOffset = vsi.Offset + vsi.Size

		fmt.Fprintf(&gf.w, "%s ", gf.identifier(v.Name))

		if err := gf.writeType(v.Type, depth); err != nil {
			return fmt.Errorf("variable %d: %w", i, err)
		}

		gf.w.WriteString("; ")
	}

	gf.writePadding(ds.Size - prevOffset)
	gf.w.WriteString("}")
	return nil
}

func (gf *GoFormatter) writePadding(bytes uint32) {
	if bytes > 0 {
		fmt.Fprintf(&gf.w, "_ [%d]byte; ", bytes)
	}
}

func skipQualifiers(typ Type) Type {
	result := typ
	for depth := 0; depth <= maxTypeDepth; depth++ {
		switch v := (result).(type) {
		case qualifier:
			result = v.qualify()
		default:
			return result
		}
	}
	return &cycle{typ}
}
287
vendor/github.com/cilium/ebpf/btf/handle.go
generated
vendored
@@ -1,287 +0,0 @@
package btf

import (
	"bytes"
	"errors"
	"fmt"
	"math"
	"os"

	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

// Handle is a reference to BTF loaded into the kernel.
type Handle struct {
	fd *sys.FD

	// Size of the raw BTF in bytes.
	size uint32

	needsKernelBase bool
}

// NewHandle loads the contents of a [Builder] into the kernel.
//
// Returns an error wrapping ErrNotSupported if the kernel doesn't support BTF.
func NewHandle(b *Builder) (*Handle, error) {
	small := getByteSlice()
	defer putByteSlice(small)

	buf, err := b.Marshal(*small, KernelMarshalOptions())
	if err != nil {
		return nil, fmt.Errorf("marshal BTF: %w", err)
	}

	return NewHandleFromRawBTF(buf)
}

// NewHandleFromRawBTF loads raw BTF into the kernel.
//
// Returns an error wrapping ErrNotSupported if the kernel doesn't support BTF.
func NewHandleFromRawBTF(btf []byte) (*Handle, error) {
	if uint64(len(btf)) > math.MaxUint32 {
		return nil, errors.New("BTF exceeds the maximum size")
	}

	attr := &sys.BtfLoadAttr{
		Btf:     sys.NewSlicePointer(btf),
		BtfSize: uint32(len(btf)),
	}

	fd, err := sys.BtfLoad(attr)
	if err == nil {
		return &Handle{fd, attr.BtfSize, false}, nil
	}

	if err := haveBTF(); err != nil {
		return nil, err
	}

	logBuf := make([]byte, 64*1024)
	attr.BtfLogBuf = sys.NewSlicePointer(logBuf)
	attr.BtfLogSize = uint32(len(logBuf))
	attr.BtfLogLevel = 1

	// Up until at least kernel 6.0, the BTF verifier does not return ENOSPC
	// if there are other verification errors. ENOSPC is only returned when
	// the BTF blob is correct, a log was requested, and the provided buffer
	// is too small.
	_, ve := sys.BtfLoad(attr)
	return nil, internal.ErrorWithLog("load btf", err, logBuf, errors.Is(ve, unix.ENOSPC))
}

// NewHandleFromID returns the BTF handle for a given id.
//
// Prefer calling [ebpf.Program.Handle] or [ebpf.Map.Handle] if possible.
//
// Returns ErrNotExist, if there is no BTF with the given id.
//
// Requires CAP_SYS_ADMIN.
func NewHandleFromID(id ID) (*Handle, error) {
	fd, err := sys.BtfGetFdById(&sys.BtfGetFdByIdAttr{
		Id: uint32(id),
	})
	if err != nil {
		return nil, fmt.Errorf("get FD for ID %d: %w", id, err)
	}

	info, err := newHandleInfoFromFD(fd)
	if err != nil {
		_ = fd.Close()
		return nil, err
	}

	return &Handle{fd, info.size, info.IsModule()}, nil
}

// Spec parses the kernel BTF into Go types.
//
// base must contain type information for vmlinux if the handle is for
// a kernel module. It may be nil otherwise.
func (h *Handle) Spec(base *Spec) (*Spec, error) {
	var btfInfo sys.BtfInfo
	btfBuffer := make([]byte, h.size)
	btfInfo.Btf, btfInfo.BtfSize = sys.NewSlicePointerLen(btfBuffer)

	if err := sys.ObjInfo(h.fd, &btfInfo); err != nil {
		return nil, err
	}

	if h.needsKernelBase && base == nil {
		return nil, fmt.Errorf("missing base types")
	}

	return loadRawSpec(bytes.NewReader(btfBuffer), internal.NativeEndian, base)
}

// Close destroys the handle.
//
// Subsequent calls to FD will return an invalid value.
func (h *Handle) Close() error {
	if h == nil {
		return nil
	}

	return h.fd.Close()
}

// FD returns the file descriptor for the handle.
func (h *Handle) FD() int {
	return h.fd.Int()
}

// Info returns metadata about the handle.
func (h *Handle) Info() (*HandleInfo, error) {
	return newHandleInfoFromFD(h.fd)
}

// HandleInfo describes a Handle.
type HandleInfo struct {
	// ID of this handle in the kernel. The ID is only valid as long as the
	// associated handle is kept alive.
	ID ID

	// Name is an identifying name for the BTF, currently only used by the
	// kernel.
	Name string

	// IsKernel is true if the BTF originated with the kernel and not
	// userspace.
	IsKernel bool

	// Size of the raw BTF in bytes.
	size uint32
}

func newHandleInfoFromFD(fd *sys.FD) (*HandleInfo, error) {
	// We invoke the syscall once with empty BTF and name buffers to get size
	// information to allocate buffers. Then we invoke it a second time with
	// buffers to receive the data.
	var btfInfo sys.BtfInfo
	if err := sys.ObjInfo(fd, &btfInfo); err != nil {
		return nil, fmt.Errorf("get BTF info for fd %s: %w", fd, err)
	}

	if btfInfo.NameLen > 0 {
		// NameLen doesn't account for the terminating NUL.
		btfInfo.NameLen++
	}

	// Don't pull raw BTF by default, since it may be quite large.
	btfSize := btfInfo.BtfSize
	btfInfo.BtfSize = 0

	nameBuffer := make([]byte, btfInfo.NameLen)
	btfInfo.Name, btfInfo.NameLen = sys.NewSlicePointerLen(nameBuffer)
	if err := sys.ObjInfo(fd, &btfInfo); err != nil {
		return nil, err
	}

	return &HandleInfo{
		ID:       ID(btfInfo.Id),
		Name:     unix.ByteSliceToString(nameBuffer),
		IsKernel: btfInfo.KernelBtf != 0,
		size:     btfSize,
	}, nil
}

// IsVmlinux returns true if the BTF is for the kernel itself.
func (i *HandleInfo) IsVmlinux() bool {
	return i.IsKernel && i.Name == "vmlinux"
}

// IsModule returns true if the BTF is for a kernel module.
func (i *HandleInfo) IsModule() bool {
	return i.IsKernel && i.Name != "vmlinux"
}
// HandleIterator allows enumerating BTF blobs loaded into the kernel.
|
||||
type HandleIterator struct {
|
||||
// The ID of the current handle. Only valid after a call to Next.
|
||||
ID ID
|
||||
// The current Handle. Only valid until a call to Next.
|
||||
// See Take if you want to retain the handle.
|
||||
Handle *Handle
|
||||
err error
|
||||
}
|
||||
|
||||
// Next retrieves a handle for the next BTF object.
|
||||
//
|
||||
// Returns true if another BTF object was found. Call [HandleIterator.Err] after
|
||||
// the function returns false.
|
||||
func (it *HandleIterator) Next() bool {
|
||||
id := it.ID
|
||||
for {
|
||||
attr := &sys.BtfGetNextIdAttr{Id: id}
|
||||
err := sys.BtfGetNextId(attr)
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
// There are no more BTF objects.
|
||||
break
|
||||
} else if err != nil {
|
||||
it.err = fmt.Errorf("get next BTF ID: %w", err)
|
||||
break
|
||||
}
|
||||
|
||||
id = attr.NextId
|
||||
handle, err := NewHandleFromID(id)
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
// Try again with the next ID.
|
||||
continue
|
||||
} else if err != nil {
|
||||
it.err = fmt.Errorf("retrieve handle for ID %d: %w", id, err)
|
||||
break
|
||||
}
|
||||
|
||||
it.Handle.Close()
|
||||
it.ID, it.Handle = id, handle
|
||||
return true
|
||||
}
|
||||
|
||||
// No more handles or we encountered an error.
|
||||
it.Handle.Close()
|
||||
it.Handle = nil
|
||||
return false
|
||||
}
|
||||
|
||||
// Take the ownership of the current handle.
|
||||
//
|
||||
// It's the callers responsibility to close the handle.
|
||||
func (it *HandleIterator) Take() *Handle {
|
||||
handle := it.Handle
|
||||
it.Handle = nil
|
||||
return handle
|
||||
}
|
||||
|
||||
// Err returns an error if iteration failed for some reason.
|
||||
func (it *HandleIterator) Err() error {
|
||||
return it.err
|
||||
}
|
||||
|
||||
// FindHandle returns the first handle for which predicate returns true.
|
||||
//
|
||||
// Requires CAP_SYS_ADMIN.
|
||||
//
|
||||
// Returns an error wrapping ErrNotFound if predicate never returns true or if
|
||||
// there is no BTF loaded into the kernel.
|
||||
func FindHandle(predicate func(info *HandleInfo) bool) (*Handle, error) {
|
||||
it := new(HandleIterator)
|
||||
defer it.Handle.Close()
|
||||
|
||||
for it.Next() {
|
||||
info, err := it.Handle.Info()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("info for ID %d: %w", it.ID, err)
|
||||
}
|
||||
|
||||
if predicate(info) {
|
||||
return it.Take(), nil
|
||||
}
|
||||
}
|
||||
if err := it.Err(); err != nil {
|
||||
return nil, fmt.Errorf("iterate handles: %w", err)
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("find handle: %w", ErrNotFound)
|
||||
}
|
||||
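The vmlinux/module distinction above boils down to two predicates over a handle's metadata, and FindHandle is a linear scan applying one of them. The sketch below replays that logic on mock data; `handleInfo` and `findFirst` are hypothetical stand-ins (the real code needs a kernel with BTF and CAP_SYS_ADMIN), not the package's API.

```go
package main

import "fmt"

// handleInfo is a hypothetical stand-in for the exported fields of HandleInfo.
type handleInfo struct {
    Name     string
    IsKernel bool
}

// isVmlinux and isModule reproduce the predicates above: kernel-originated
// BTF named "vmlinux" is the core kernel; any other kernel BTF is a module.
func isVmlinux(i handleInfo) bool { return i.IsKernel && i.Name == "vmlinux" }
func isModule(i handleInfo) bool  { return i.IsKernel && i.Name != "vmlinux" }

// findFirst scans infos the way FindHandle scans live handles: it returns
// the first entry matching the predicate, or false if none does.
func findFirst(infos []handleInfo, pred func(handleInfo) bool) (handleInfo, bool) {
    for _, info := range infos {
        if pred(info) {
            return info, true
        }
    }
    return handleInfo{}, false
}

func main() {
    infos := []handleInfo{
        {Name: "my_prog", IsKernel: false}, // userspace-loaded BTF
        {Name: "vmlinux", IsKernel: true},  // core kernel BTF
        {Name: "nf_nat", IsKernel: true},   // kernel module BTF
    }

    mod, ok := findFirst(infos, isModule)
    fmt.Println(ok, mod.Name) // skips userspace and vmlinux entries
}
```

Note that, like FindHandle, the scan stops at the first match; callers wanting a specific module filter on `Name` inside the predicate.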
543
vendor/github.com/cilium/ebpf/btf/marshal.go
generated
vendored
@@ -1,543 +0,0 @@
package btf

import (
    "bytes"
    "encoding/binary"
    "errors"
    "fmt"
    "math"
    "sync"

    "github.com/cilium/ebpf/internal"

    "golang.org/x/exp/slices"
)

type MarshalOptions struct {
    // Target byte order. Defaults to the system's native endianness.
    Order binary.ByteOrder
    // Remove function linkage information for compatibility with <5.6 kernels.
    StripFuncLinkage bool
}

// KernelMarshalOptions will generate BTF suitable for the current kernel.
func KernelMarshalOptions() *MarshalOptions {
    return &MarshalOptions{
        Order:            internal.NativeEndian,
        StripFuncLinkage: haveFuncLinkage() != nil,
    }
}

// encoder turns Types into raw BTF.
type encoder struct {
    MarshalOptions

    pending internal.Deque[Type]
    buf     *bytes.Buffer
    strings *stringTableBuilder
    ids     map[Type]TypeID
    lastID  TypeID
}

var bufferPool = sync.Pool{
    New: func() any {
        buf := make([]byte, btfHeaderLen+128)
        return &buf
    },
}

func getByteSlice() *[]byte {
    return bufferPool.Get().(*[]byte)
}

func putByteSlice(buf *[]byte) {
    *buf = (*buf)[:0]
    bufferPool.Put(buf)
}

// Builder turns Types into raw BTF.
//
// The default value may be used and represents an empty BTF blob. Void is
// added implicitly if necessary.
type Builder struct {
    // Explicitly added types.
    types []Type
    // IDs for all added types which the user knows about.
    stableIDs map[Type]TypeID
    // Explicitly added strings.
    strings *stringTableBuilder
}

// NewBuilder creates a Builder from a list of types.
//
// It is more efficient than calling [Add] individually.
//
// Returns an error if adding any of the types fails.
func NewBuilder(types []Type) (*Builder, error) {
    b := &Builder{
        make([]Type, 0, len(types)),
        make(map[Type]TypeID, len(types)),
        nil,
    }

    for _, typ := range types {
        _, err := b.Add(typ)
        if err != nil {
            return nil, fmt.Errorf("add %s: %w", typ, err)
        }
    }

    return b, nil
}

// Add a Type and allocate a stable ID for it.
//
// Adding the identical Type multiple times is valid and will return the same ID.
//
// See [Type] for details on identity.
func (b *Builder) Add(typ Type) (TypeID, error) {
    if b.stableIDs == nil {
        b.stableIDs = make(map[Type]TypeID)
    }

    if _, ok := typ.(*Void); ok {
        // Equality is weird for void, since it is a zero sized type.
        return 0, nil
    }

    if ds, ok := typ.(*Datasec); ok {
        if err := datasecResolveWorkaround(b, ds); err != nil {
            return 0, err
        }
    }

    id, ok := b.stableIDs[typ]
    if ok {
        return id, nil
    }

    b.types = append(b.types, typ)

    id = TypeID(len(b.types))
    if int(id) != len(b.types) {
        return 0, fmt.Errorf("no more type IDs")
    }

    b.stableIDs[typ] = id
    return id, nil
}

// Marshal encodes all types in the Marshaler into BTF wire format.
//
// opts may be nil.
func (b *Builder) Marshal(buf []byte, opts *MarshalOptions) ([]byte, error) {
    stb := b.strings
    if stb == nil {
        // Assume that most types are named. This makes encoding large BTF like
        // vmlinux a lot cheaper.
        stb = newStringTableBuilder(len(b.types))
    } else {
        // Avoid modifying the Builder's string table.
        stb = b.strings.Copy()
    }

    if opts == nil {
        opts = &MarshalOptions{Order: internal.NativeEndian}
    }

    // Reserve space for the BTF header.
    buf = slices.Grow(buf, btfHeaderLen)[:btfHeaderLen]

    w := internal.NewBuffer(buf)
    defer internal.PutBuffer(w)

    e := encoder{
        MarshalOptions: *opts,
        buf:            w,
        strings:        stb,
        lastID:         TypeID(len(b.types)),
        ids:            make(map[Type]TypeID, len(b.types)),
    }

    // Ensure that types are marshaled in the exact order they were Add()ed.
    // Otherwise the ID returned from Add() won't match.
    e.pending.Grow(len(b.types))
    for _, typ := range b.types {
        e.pending.Push(typ)
        e.ids[typ] = b.stableIDs[typ]
    }

    if err := e.deflatePending(); err != nil {
        return nil, err
    }

    length := e.buf.Len()
    typeLen := uint32(length - btfHeaderLen)

    stringLen := e.strings.Length()
    buf = e.strings.AppendEncoded(e.buf.Bytes())

    // Fill out the header, and write it out.
    header := &btfHeader{
        Magic:     btfMagic,
        Version:   1,
        Flags:     0,
        HdrLen:    uint32(btfHeaderLen),
        TypeOff:   0,
        TypeLen:   typeLen,
        StringOff: typeLen,
        StringLen: uint32(stringLen),
    }

    err := binary.Write(sliceWriter(buf[:btfHeaderLen]), e.Order, header)
    if err != nil {
        return nil, fmt.Errorf("write header: %v", err)
    }

    return buf, nil
}

// addString adds a string to the resulting BTF.
//
// Adding the same string multiple times will return the same result.
//
// Returns an identifier into the string table or an error if the string
// contains invalid characters.
func (b *Builder) addString(str string) (uint32, error) {
    if b.strings == nil {
        b.strings = newStringTableBuilder(0)
    }

    return b.strings.Add(str)
}

func (e *encoder) allocateID(typ Type) error {
    id := e.lastID + 1
    if id < e.lastID {
        return errors.New("type ID overflow")
    }

    e.pending.Push(typ)
    e.ids[typ] = id
    e.lastID = id
    return nil
}

// id returns the ID for the given type or panics with an error.
func (e *encoder) id(typ Type) TypeID {
    if _, ok := typ.(*Void); ok {
        return 0
    }

    id, ok := e.ids[typ]
    if !ok {
        panic(fmt.Errorf("no ID for type %v", typ))
    }

    return id
}

func (e *encoder) deflatePending() error {
    // Declare root outside of the loop to avoid repeated heap allocations.
    var root Type
    skip := func(t Type) (skip bool) {
        if t == root {
            // Force descending into the current root type even if it already
            // has an ID. Otherwise we miss children of types that have their
            // ID pre-allocated via Add.
            return false
        }

        _, isVoid := t.(*Void)
        _, alreadyEncoded := e.ids[t]
        return isVoid || alreadyEncoded
    }

    for !e.pending.Empty() {
        root = e.pending.Shift()

        // Allocate IDs for all children of typ, including transitive dependencies.
        iter := postorderTraversal(root, skip)
        for iter.Next() {
            if iter.Type == root {
                // The iterator yields root at the end, do not allocate another ID.
                break
            }

            if err := e.allocateID(iter.Type); err != nil {
                return err
            }
        }

        if err := e.deflateType(root); err != nil {
            id := e.ids[root]
            return fmt.Errorf("deflate %v with ID %d: %w", root, id, err)
        }
    }

    return nil
}

func (e *encoder) deflateType(typ Type) (err error) {
    defer func() {
        if r := recover(); r != nil {
            var ok bool
            err, ok = r.(error)
            if !ok {
                panic(r)
            }
        }
    }()

    var raw rawType
    raw.NameOff, err = e.strings.Add(typ.TypeName())
    if err != nil {
        return err
    }

    switch v := typ.(type) {
    case *Void:
        return errors.New("Void is implicit in BTF wire format")

    case *Int:
        raw.SetKind(kindInt)
        raw.SetSize(v.Size)

        var bi btfInt
        bi.SetEncoding(v.Encoding)
        // We need to set bits in addition to size, since btf_type_int_is_regular
        // otherwise flags this as a bitfield.
        bi.SetBits(byte(v.Size) * 8)
        raw.data = bi

    case *Pointer:
        raw.SetKind(kindPointer)
        raw.SetType(e.id(v.Target))

    case *Array:
        raw.SetKind(kindArray)
        raw.data = &btfArray{
            e.id(v.Type),
            e.id(v.Index),
            v.Nelems,
        }

    case *Struct:
        raw.SetKind(kindStruct)
        raw.SetSize(v.Size)
        raw.data, err = e.convertMembers(&raw.btfType, v.Members)

    case *Union:
        raw.SetKind(kindUnion)
        raw.SetSize(v.Size)
        raw.data, err = e.convertMembers(&raw.btfType, v.Members)

    case *Enum:
        raw.SetSize(v.size())
        raw.SetVlen(len(v.Values))
        raw.SetSigned(v.Signed)

        if v.has64BitValues() {
            raw.SetKind(kindEnum64)
            raw.data, err = e.deflateEnum64Values(v.Values)
        } else {
            raw.SetKind(kindEnum)
            raw.data, err = e.deflateEnumValues(v.Values)
        }

    case *Fwd:
        raw.SetKind(kindForward)
        raw.SetFwdKind(v.Kind)

    case *Typedef:
        raw.SetKind(kindTypedef)
        raw.SetType(e.id(v.Type))

    case *Volatile:
        raw.SetKind(kindVolatile)
        raw.SetType(e.id(v.Type))

    case *Const:
        raw.SetKind(kindConst)
        raw.SetType(e.id(v.Type))

    case *Restrict:
        raw.SetKind(kindRestrict)
        raw.SetType(e.id(v.Type))

    case *Func:
        raw.SetKind(kindFunc)
        raw.SetType(e.id(v.Type))
        if !e.StripFuncLinkage {
            raw.SetLinkage(v.Linkage)
        }

    case *FuncProto:
        raw.SetKind(kindFuncProto)
        raw.SetType(e.id(v.Return))
        raw.SetVlen(len(v.Params))
        raw.data, err = e.deflateFuncParams(v.Params)

    case *Var:
        raw.SetKind(kindVar)
        raw.SetType(e.id(v.Type))
        raw.data = btfVariable{uint32(v.Linkage)}

    case *Datasec:
        raw.SetKind(kindDatasec)
        raw.SetSize(v.Size)
        raw.SetVlen(len(v.Vars))
        raw.data = e.deflateVarSecinfos(v.Vars)

    case *Float:
        raw.SetKind(kindFloat)
        raw.SetSize(v.Size)

    case *declTag:
        raw.SetKind(kindDeclTag)
        raw.SetType(e.id(v.Type))
        raw.data = &btfDeclTag{uint32(v.Index)}
        raw.NameOff, err = e.strings.Add(v.Value)

    case *typeTag:
        raw.SetKind(kindTypeTag)
        raw.SetType(e.id(v.Type))
        raw.NameOff, err = e.strings.Add(v.Value)

    default:
        return fmt.Errorf("don't know how to deflate %T", v)
    }

    if err != nil {
        return err
    }

    return raw.Marshal(e.buf, e.Order)
}

func (e *encoder) convertMembers(header *btfType, members []Member) ([]btfMember, error) {
    bms := make([]btfMember, 0, len(members))
    isBitfield := false
    for _, member := range members {
        isBitfield = isBitfield || member.BitfieldSize > 0

        offset := member.Offset
        if isBitfield {
            offset = member.BitfieldSize<<24 | (member.Offset & 0xffffff)
        }

        nameOff, err := e.strings.Add(member.Name)
        if err != nil {
            return nil, err
        }

        bms = append(bms, btfMember{
            nameOff,
            e.id(member.Type),
            uint32(offset),
        })
    }

    header.SetVlen(len(members))
    header.SetBitfield(isBitfield)
    return bms, nil
}

func (e *encoder) deflateEnumValues(values []EnumValue) ([]btfEnum, error) {
    bes := make([]btfEnum, 0, len(values))
    for _, value := range values {
        nameOff, err := e.strings.Add(value.Name)
        if err != nil {
            return nil, err
        }

        if value.Value > math.MaxUint32 {
            return nil, fmt.Errorf("value of enum %q exceeds 32 bits", value.Name)
        }

        bes = append(bes, btfEnum{
            nameOff,
            uint32(value.Value),
        })
    }

    return bes, nil
}

func (e *encoder) deflateEnum64Values(values []EnumValue) ([]btfEnum64, error) {
    bes := make([]btfEnum64, 0, len(values))
    for _, value := range values {
        nameOff, err := e.strings.Add(value.Name)
        if err != nil {
            return nil, err
        }

        bes = append(bes, btfEnum64{
            nameOff,
            uint32(value.Value),
            uint32(value.Value >> 32),
        })
    }

    return bes, nil
}

func (e *encoder) deflateFuncParams(params []FuncParam) ([]btfParam, error) {
    bps := make([]btfParam, 0, len(params))
    for _, param := range params {
        nameOff, err := e.strings.Add(param.Name)
        if err != nil {
            return nil, err
        }

        bps = append(bps, btfParam{
            nameOff,
            e.id(param.Type),
        })
    }
    return bps, nil
}

func (e *encoder) deflateVarSecinfos(vars []VarSecinfo) []btfVarSecinfo {
    vsis := make([]btfVarSecinfo, 0, len(vars))
    for _, v := range vars {
        vsis = append(vsis, btfVarSecinfo{
            e.id(v.Type),
            v.Offset,
            v.Size,
        })
    }
    return vsis
}

// MarshalMapKV creates a BTF object containing a map key and value.
//
// The function is intended for the use of the ebpf package and may be removed
// at any point in time.
func MarshalMapKV(key, value Type) (_ *Handle, keyID, valueID TypeID, err error) {
    var b Builder

    if key != nil {
        keyID, err = b.Add(key)
        if err != nil {
            return nil, 0, 0, fmt.Errorf("add key type: %w", err)
        }
    }

    if value != nil {
        valueID, err = b.Add(value)
        if err != nil {
            return nil, 0, 0, fmt.Errorf("add value type: %w", err)
        }
    }

    handle, err := NewHandle(&b)
    if err != nil {
        // Check for 'full' map BTF support, since kernels between 4.18 and 5.2
        // already support BTF blobs for maps without Var or Datasec just fine.
        if err := haveMapBTF(); err != nil {
            return nil, 0, 0, err
        }
    }
    return handle, keyID, valueID, err
}
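Two bit-level encodings in the marshaling code above are easy to get wrong: deflateEnum64Values splits a 64-bit enum value into two 32-bit halves, and convertMembers packs a bitfield's size into the top byte of the member offset. The helper names below (`splitEnum64`, `packBitfieldOffset`) are illustrative, not part of the package; the sketch just replays the arithmetic.

```go
package main

import "fmt"

// splitEnum64 mirrors deflateEnum64Values: a 64-bit enum value is stored in
// BTF wire format as a low and a high 32-bit half.
func splitEnum64(v uint64) (lo, hi uint32) {
    return uint32(v), uint32(v >> 32)
}

// joinEnum64 reassembles the halves, which is what a BTF reader does.
func joinEnum64(lo, hi uint32) uint64 {
    return uint64(hi)<<32 | uint64(lo)
}

// packBitfieldOffset mirrors convertMembers: once any member is a bitfield,
// offsets are encoded with the bitfield size in the top byte and the bit
// offset in the low 24 bits.
func packBitfieldOffset(bitfieldSize, bitOffset uint32) uint32 {
    return bitfieldSize<<24 | (bitOffset & 0xffffff)
}

func main() {
    lo, hi := splitEnum64(0x0102030405060708)
    fmt.Printf("lo=%#x hi=%#x round-trip=%#x\n", lo, hi, joinEnum64(lo, hi))

    // A 4-bit bitfield starting at bit offset 96.
    fmt.Printf("packed=%#x\n", packBitfieldOffset(4, 96))
}
```

The 24-bit mask also explains the `value exceeds 32 bits` check in deflateEnumValues: the narrow encodings simply have no room for wider values.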
214
vendor/github.com/cilium/ebpf/btf/strings.go
generated
vendored
@@ -1,214 +0,0 @@
package btf

import (
    "bufio"
    "bytes"
    "errors"
    "fmt"
    "io"
    "strings"

    "golang.org/x/exp/maps"
)

type stringTable struct {
    base    *stringTable
    offsets []uint32
    strings []string
}

// sizedReader is implemented by bytes.Reader, io.SectionReader, strings.Reader, etc.
type sizedReader interface {
    io.Reader
    Size() int64
}

func readStringTable(r sizedReader, base *stringTable) (*stringTable, error) {
    // When parsing split BTF's string table, the first entry offset is derived
    // from the last entry offset of the base BTF.
    firstStringOffset := uint32(0)
    if base != nil {
        idx := len(base.offsets) - 1
        firstStringOffset = base.offsets[idx] + uint32(len(base.strings[idx])) + 1
    }

    // Derived from vmlinux BTF.
    const averageStringLength = 16

    n := int(r.Size() / averageStringLength)
    offsets := make([]uint32, 0, n)
    strings := make([]string, 0, n)

    offset := firstStringOffset
    scanner := bufio.NewScanner(r)
    scanner.Split(splitNull)
    for scanner.Scan() {
        str := scanner.Text()
        offsets = append(offsets, offset)
        strings = append(strings, str)
        offset += uint32(len(str)) + 1
    }
    if err := scanner.Err(); err != nil {
        return nil, err
    }

    if len(strings) == 0 {
        return nil, errors.New("string table is empty")
    }

    if firstStringOffset == 0 && strings[0] != "" {
        return nil, errors.New("first item in string table is non-empty")
    }

    return &stringTable{base, offsets, strings}, nil
}

func splitNull(data []byte, atEOF bool) (advance int, token []byte, err error) {
    i := bytes.IndexByte(data, 0)
    if i == -1 {
        if atEOF && len(data) > 0 {
            return 0, nil, errors.New("string table isn't null terminated")
        }
        return 0, nil, nil
    }

    return i + 1, data[:i], nil
}

func (st *stringTable) Lookup(offset uint32) (string, error) {
    if st.base != nil && offset <= st.base.offsets[len(st.base.offsets)-1] {
        return st.base.lookup(offset)
    }
    return st.lookup(offset)
}

func (st *stringTable) lookup(offset uint32) (string, error) {
    i := search(st.offsets, offset)
    if i == len(st.offsets) || st.offsets[i] != offset {
        return "", fmt.Errorf("offset %d isn't start of a string", offset)
    }

    return st.strings[i], nil
}

func (st *stringTable) Marshal(w io.Writer) error {
    for _, str := range st.strings {
        _, err := io.WriteString(w, str)
        if err != nil {
            return err
        }
        _, err = w.Write([]byte{0})
        if err != nil {
            return err
        }
    }
    return nil
}

// Num returns the number of strings in the table.
func (st *stringTable) Num() int {
    return len(st.strings)
}

// search is a copy of sort.Search specialised for uint32.
//
// Licensed under https://go.dev/LICENSE
func search(ints []uint32, needle uint32) int {
    // Define f(-1) == false and f(n) == true.
    // Invariant: f(i-1) == false, f(j) == true.
    i, j := 0, len(ints)
    for i < j {
        h := int(uint(i+j) >> 1) // avoid overflow when computing h
        // i ≤ h < j
        if !(ints[h] >= needle) {
            i = h + 1 // preserves f(i-1) == false
        } else {
            j = h // preserves f(j) == true
        }
    }
    // i == j, f(i-1) == false, and f(j) (= f(i)) == true => answer is i.
    return i
}

// stringTableBuilder builds BTF string tables.
type stringTableBuilder struct {
    length  uint32
    strings map[string]uint32
}

// newStringTableBuilder creates a builder with the given capacity.
//
// capacity may be zero.
func newStringTableBuilder(capacity int) *stringTableBuilder {
    var stb stringTableBuilder

    if capacity == 0 {
        // Use the runtime's small default size.
        stb.strings = make(map[string]uint32)
    } else {
        stb.strings = make(map[string]uint32, capacity)
    }

    // Ensure that the empty string is at index 0.
    stb.append("")
    return &stb
}

// Add a string to the table.
//
// Adding the same string multiple times will only store it once.
func (stb *stringTableBuilder) Add(str string) (uint32, error) {
    if strings.IndexByte(str, 0) != -1 {
        return 0, fmt.Errorf("string contains null: %q", str)
    }

    offset, ok := stb.strings[str]
    if ok {
        return offset, nil
    }

    return stb.append(str), nil
}

func (stb *stringTableBuilder) append(str string) uint32 {
    offset := stb.length
    stb.length += uint32(len(str)) + 1
    stb.strings[str] = offset
    return offset
}

// Lookup finds the offset of a string in the table.
//
// Returns an error if str hasn't been added yet.
func (stb *stringTableBuilder) Lookup(str string) (uint32, error) {
    offset, ok := stb.strings[str]
    if !ok {
        return 0, fmt.Errorf("string %q is not in table", str)
    }

    return offset, nil
}

// Length returns the length in bytes.
func (stb *stringTableBuilder) Length() int {
    return int(stb.length)
}

// AppendEncoded appends the string table to the end of the provided buffer.
func (stb *stringTableBuilder) AppendEncoded(buf []byte) []byte {
    n := len(buf)
    buf = append(buf, make([]byte, stb.Length())...)
    strings := buf[n:]
    for str, offset := range stb.strings {
        copy(strings[offset:], str)
    }
    return buf
}

// Copy the string table builder.
func (stb *stringTableBuilder) Copy() *stringTableBuilder {
    return &stringTableBuilder{
        stb.length,
        maps.Clone(stb.strings),
    }
}
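The string table's layout is the key invariant here: strings are stored back to back, NUL-terminated, so a string's offset is the running byte length when it was appended, the empty string always sits at offset 0, and duplicates are deduplicated to the first offset. The sketch below (with a hypothetical `miniStringTable` type, not the package's) replays that arithmetic.

```go
package main

import "fmt"

// miniStringTable sketches stringTableBuilder's offset scheme.
type miniStringTable struct {
    length  uint32
    offsets map[string]uint32
}

func newMiniStringTable() *miniStringTable {
    st := &miniStringTable{offsets: map[string]uint32{}}
    st.add("") // offset 0, like newStringTableBuilder
    return st
}

// add returns the existing offset for a known string, or appends it at the
// current running length, accounting for the NUL terminator.
func (st *miniStringTable) add(s string) uint32 {
    if off, ok := st.offsets[s]; ok {
        return off // deduplicated, like Add
    }
    off := st.length
    st.length += uint32(len(s)) + 1 // +1 for the NUL terminator
    st.offsets[s] = off
    return off
}

func main() {
    st := newMiniStringTable()
    fmt.Println(st.add("int"))  // right after the empty string's NUL
    fmt.Println(st.add("char")) // 1 + len("int") + 1
    fmt.Println(st.add("int"))  // same offset again: stored once
    fmt.Println(st.length)      // total encoded size in bytes
}
```

This is also why `lookup` can binary-search the sorted offsets: an offset that does not land exactly on a recorded start is, by construction, the middle of some string.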
141
vendor/github.com/cilium/ebpf/btf/traversal.go
generated
vendored
@@ -1,141 +0,0 @@
package btf

import (
    "fmt"

    "github.com/cilium/ebpf/internal"
)

// Functions to traverse a cyclic graph of types. The below was very useful:
// https://eli.thegreenplace.net/2015/directed-graph-traversal-orderings-and-applications-to-data-flow-analysis/#post-order-and-reverse-post-order

type postorderIterator struct {
    // Iteration skips types for which this function returns true.
    skip func(Type) bool
    // The root type. May be nil if skip(root) is true.
    root Type

    // Contains types which need to be either walked or yielded.
    types typeDeque
    // Contains a boolean whether the type has been walked or not.
    walked internal.Deque[bool]
    // The set of types which have been pushed onto types.
    pushed map[Type]struct{}

    // The current type. Only valid after a call to Next().
    Type Type
}

// postorderTraversal iterates all types reachable from root by visiting the
// leaves of the graph first.
//
// Types for which skip returns true are ignored. skip may be nil.
func postorderTraversal(root Type, skip func(Type) (skip bool)) postorderIterator {
    // Avoid allocations for the common case of a skipped root.
    if skip != nil && skip(root) {
        return postorderIterator{}
    }

    po := postorderIterator{root: root, skip: skip}
    walkType(root, po.push)

    return po
}

func (po *postorderIterator) push(t *Type) {
    if _, ok := po.pushed[*t]; ok || *t == po.root {
        return
    }

    if po.skip != nil && po.skip(*t) {
        return
    }

    if po.pushed == nil {
        // Lazily allocate pushed to avoid an allocation for Types without children.
        po.pushed = make(map[Type]struct{})
    }

    po.pushed[*t] = struct{}{}
    po.types.Push(t)
    po.walked.Push(false)
}

// Next returns true if there is another Type to traverse.
func (po *postorderIterator) Next() bool {
    for !po.types.Empty() {
        t := po.types.Pop()

        if !po.walked.Pop() {
            // Push the type again, so that we re-evaluate it in done state
            // after all children have been handled.
            po.types.Push(t)
            po.walked.Push(true)

            // Add all direct children to todo.
            walkType(*t, po.push)
        } else {
            // We've walked this type previously, so we now know that all
            // children have been handled.
            po.Type = *t
            return true
        }
    }

    // Only return root once.
    po.Type, po.root = po.root, nil
    return po.Type != nil
}

// walkType calls fn on each child of typ.
func walkType(typ Type, fn func(*Type)) {
    // Explicitly type switch on the most common types to allow the inliner to
    // do its work. This avoids allocating intermediate slices from walk() on
    // the heap.
    switch v := typ.(type) {
    case *Void, *Int, *Enum, *Fwd, *Float:
        // No children to traverse.
    case *Pointer:
        fn(&v.Target)
    case *Array:
        fn(&v.Index)
        fn(&v.Type)
    case *Struct:
        for i := range v.Members {
            fn(&v.Members[i].Type)
        }
    case *Union:
        for i := range v.Members {
            fn(&v.Members[i].Type)
        }
    case *Typedef:
        fn(&v.Type)
    case *Volatile:
        fn(&v.Type)
    case *Const:
        fn(&v.Type)
    case *Restrict:
        fn(&v.Type)
    case *Func:
        fn(&v.Type)
    case *FuncProto:
        fn(&v.Return)
        for i := range v.Params {
            fn(&v.Params[i].Type)
        }
    case *Var:
        fn(&v.Type)
    case *Datasec:
        for i := range v.Vars {
            fn(&v.Vars[i].Type)
        }
    case *declTag:
        fn(&v.Type)
    case *typeTag:
        fn(&v.Type)
    case *cycle:
        // cycle has children, but we ignore them deliberately.
    default:
        panic(fmt.Sprintf("don't know how to walk Type %T", v))
    }
}
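The essential property of the traversal above is that children are yielded before their parent, each reachable type is yielded at most once, and cycles (common in kernel types such as `struct list_head`) terminate because already-pushed types are skipped. Below is a minimal recursive sketch of that contract; it uses a hypothetical `node` type and plain recursion rather than the package's allocation-conscious iterative deque scheme.

```go
package main

import "fmt"

// node is a hypothetical stand-in for a btf.Type: a named vertex with children.
type node struct {
    name     string
    children []*node
}

// postorder yields children before parents, visits each node once, and
// terminates on cycles by skipping nodes already seen.
func postorder(root *node, out *[]string, seen map[*node]bool) {
    if seen[root] {
        return
    }
    seen[root] = true
    for _, c := range root.children {
        postorder(c, out, seen)
    }
    *out = append(*out, root.name)
}

func main() {
    // struct list_head { struct list_head *next; } — a cycle through a pointer.
    listHead := &node{name: "list_head"}
    ptr := &node{name: "ptr_to_list_head", children: []*node{listHead}}
    listHead.children = []*node{ptr}

    root := &node{name: "root", children: []*node{listHead}}

    var order []string
    postorder(root, &order, map[*node]bool{})
    fmt.Println(order)
}
```

The encoder relies on exactly this ordering when it allocates type IDs: by the time a type is deflated, every child already has an ID that `e.id` can look up.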
1258
vendor/github.com/cilium/ebpf/btf/types.go
generated
vendored
File diff suppressed because it is too large
26
vendor/github.com/cilium/ebpf/btf/workarounds.go
generated
vendored
@@ -1,26 +0,0 @@
package btf

// datasecResolveWorkaround ensures that certain vars in a Datasec are added
// to a Spec before the Datasec. This avoids a bug in kernel BTF validation.
//
// See https://lore.kernel.org/bpf/20230302123440.1193507-1-lmb@isovalent.com/
func datasecResolveWorkaround(b *Builder, ds *Datasec) error {
    for _, vsi := range ds.Vars {
        v, ok := vsi.Type.(*Var)
        if !ok {
            continue
        }

        switch v.Type.(type) {
        case *Typedef, *Volatile, *Const, *Restrict, *typeTag:
            // NB: We must never call Add on a Datasec, otherwise we risk
            // infinite recursion.
            _, err := b.Add(v.Type)
            if err != nil {
                return err
            }
        }
    }

    return nil
}
841
vendor/github.com/cilium/ebpf/collection.go
generated
vendored
@@ -1,841 +0,0 @@
package ebpf

import (
	"encoding/binary"
	"errors"
	"fmt"
	"reflect"
	"strings"

	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/kconfig"
)

// CollectionOptions control loading a collection into the kernel.
//
// Maps and Programs are passed to NewMapWithOptions and NewProgramsWithOptions.
type CollectionOptions struct {
	Maps     MapOptions
	Programs ProgramOptions

	// MapReplacements takes a set of Maps that will be used instead of
	// creating new ones when loading the CollectionSpec.
	//
	// For each given Map, there must be a corresponding MapSpec in
	// CollectionSpec.Maps, and its type, key/value size, max entries and flags
	// must match the values of the MapSpec.
	//
	// The given Maps are Clone()d before being used in the Collection, so the
	// caller can Close() them freely when they are no longer needed.
	MapReplacements map[string]*Map
}

// CollectionSpec describes a collection.
type CollectionSpec struct {
	Maps     map[string]*MapSpec
	Programs map[string]*ProgramSpec

	// Types holds type information about Maps and Programs.
	// Modifications to Types are currently undefined behaviour.
	Types *btf.Spec

	// ByteOrder specifies whether the ELF was compiled for
	// big-endian or little-endian architectures.
	ByteOrder binary.ByteOrder
}

// Copy returns a recursive copy of the spec.
func (cs *CollectionSpec) Copy() *CollectionSpec {
	if cs == nil {
		return nil
	}

	cpy := CollectionSpec{
		Maps:      make(map[string]*MapSpec, len(cs.Maps)),
		Programs:  make(map[string]*ProgramSpec, len(cs.Programs)),
		ByteOrder: cs.ByteOrder,
		Types:     cs.Types,
	}

	for name, spec := range cs.Maps {
		cpy.Maps[name] = spec.Copy()
	}

	for name, spec := range cs.Programs {
		cpy.Programs[name] = spec.Copy()
	}

	return &cpy
}

// RewriteMaps replaces all references to specific maps.
//
// Use this function to use pre-existing maps instead of creating new ones
// when calling NewCollection. Any named maps are removed from CollectionSpec.Maps.
//
// Returns an error if a named map isn't used in at least one program.
//
// Deprecated: Pass CollectionOptions.MapReplacements when loading the Collection
// instead.
func (cs *CollectionSpec) RewriteMaps(maps map[string]*Map) error {
	for symbol, m := range maps {
		// have we seen a program that uses this symbol / map
		seen := false
		for progName, progSpec := range cs.Programs {
			err := progSpec.Instructions.AssociateMap(symbol, m)

			switch {
			case err == nil:
				seen = true

			case errors.Is(err, asm.ErrUnreferencedSymbol):
				// Not all programs need to use the map

			default:
				return fmt.Errorf("program %s: %w", progName, err)
			}
		}

		if !seen {
			return fmt.Errorf("map %s not referenced by any programs", symbol)
		}

		// Prevent NewCollection from creating rewritten maps
		delete(cs.Maps, symbol)
	}

	return nil
}

// MissingConstantsError is returned by [CollectionSpec.RewriteConstants].
type MissingConstantsError struct {
	// The constants missing from .rodata.
	Constants []string
}

func (m *MissingConstantsError) Error() string {
	return fmt.Sprintf("some constants are missing from .rodata: %s", strings.Join(m.Constants, ", "))
}

// RewriteConstants replaces the value of multiple constants.
//
// The constant must be defined like so in the C program:
//
//	volatile const type foobar;
//	volatile const type foobar = default;
//
// Replacement values must be of the same length as the C sizeof(type).
// If necessary, they are marshalled according to the same rules as
// map values.
//
// From Linux 5.5 the verifier will use constants to eliminate dead code.
//
// Returns an error wrapping [MissingConstantsError] if a constant doesn't exist.
func (cs *CollectionSpec) RewriteConstants(consts map[string]interface{}) error {
	replaced := make(map[string]bool)

	for name, spec := range cs.Maps {
		if !strings.HasPrefix(name, ".rodata") {
			continue
		}

		b, ds, err := spec.dataSection()
		if errors.Is(err, errMapNoBTFValue) {
			// Data sections without a BTF Datasec are valid, but don't support
			// constant replacements.
			continue
		}
		if err != nil {
			return fmt.Errorf("map %s: %w", name, err)
		}

		// MapSpec.Copy() performs a shallow copy. Fully copy the byte slice
		// to avoid any changes affecting other copies of the MapSpec.
		cpy := make([]byte, len(b))
		copy(cpy, b)

		for _, v := range ds.Vars {
			vname := v.Type.TypeName()
			replacement, ok := consts[vname]
			if !ok {
				continue
			}

			if _, ok := v.Type.(*btf.Var); !ok {
				return fmt.Errorf("section %s: unexpected type %T for variable %s", name, v.Type, vname)
			}

			if replaced[vname] {
				return fmt.Errorf("section %s: duplicate variable %s", name, vname)
			}

			if int(v.Offset+v.Size) > len(cpy) {
				return fmt.Errorf("section %s: offset %d(+%d) for variable %s is out of bounds", name, v.Offset, v.Size, vname)
			}

			b, err := marshalBytes(replacement, int(v.Size))
			if err != nil {
				return fmt.Errorf("marshaling constant replacement %s: %w", vname, err)
			}

			copy(cpy[v.Offset:v.Offset+v.Size], b)

			replaced[vname] = true
		}

		spec.Contents[0] = MapKV{Key: uint32(0), Value: cpy}
	}

	var missing []string
	for c := range consts {
		if !replaced[c] {
			missing = append(missing, c)
		}
	}

	if len(missing) != 0 {
		return fmt.Errorf("rewrite constants: %w", &MissingConstantsError{Constants: missing})
	}

	return nil
}
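The core of RewriteConstants is a bounds-checked byte patch: the `.rodata` section is copied, and each replacement value is marshalled into the copy at the variable's BTF offset. A minimal stdlib-only sketch of that mechanism (the `patchConst` helper is a hypothetical stand-in; the real code resolves offsets and sizes from BTF and marshals arbitrary Go values):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// patchConst copies the section and writes val (little-endian uint32) at off,
// mirroring how RewriteConstants bounds-checks and patches a .rodata copy
// instead of mutating the shared MapSpec contents in place.
func patchConst(section []byte, off int, val uint32) ([]byte, error) {
	if off+4 > len(section) {
		return nil, fmt.Errorf("offset %d(+4) is out of bounds", off)
	}
	cpy := make([]byte, len(section))
	copy(cpy, section)
	binary.LittleEndian.PutUint32(cpy[off:off+4], val)
	return cpy, nil
}

func main() {
	rodata := make([]byte, 8) // pretend .rodata holding two 4-byte constants
	patched, err := patchConst(rodata, 4, 42)
	if err != nil {
		panic(err)
	}
	fmt.Println(patched[4:8]) // [42 0 0 0]
	fmt.Println(rodata[4:8])  // original stays untouched: [0 0 0 0]
}
```

The deep copy matters because MapSpec.Copy() is shallow; patching in place would leak the rewrite into every other copy of the spec.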

// Assign the contents of a CollectionSpec to a struct.
//
// This function is a shortcut to manually checking the presence
// of maps and programs in a CollectionSpec. Consider using bpf2go
// if this sounds useful.
//
// 'to' must be a pointer to a struct. A field of the
// struct is updated with values from Programs or Maps if it
// has an `ebpf` tag and its type is *ProgramSpec or *MapSpec.
// The tag's value specifies the name of the program or map as
// found in the CollectionSpec.
//
//	struct {
//		Foo *ebpf.ProgramSpec `ebpf:"xdp_foo"`
//		Bar *ebpf.MapSpec     `ebpf:"bar_map"`
//		Ignored int
//	}
//
// Returns an error if any of the eBPF objects can't be found, or
// if the same MapSpec or ProgramSpec is assigned multiple times.
func (cs *CollectionSpec) Assign(to interface{}) error {
	// Assign() only supports assigning ProgramSpecs and MapSpecs,
	// so doesn't load any resources into the kernel.
	getValue := func(typ reflect.Type, name string) (interface{}, error) {
		switch typ {

		case reflect.TypeOf((*ProgramSpec)(nil)):
			if p := cs.Programs[name]; p != nil {
				return p, nil
			}
			return nil, fmt.Errorf("missing program %q", name)

		case reflect.TypeOf((*MapSpec)(nil)):
			if m := cs.Maps[name]; m != nil {
				return m, nil
			}
			return nil, fmt.Errorf("missing map %q", name)

		default:
			return nil, fmt.Errorf("unsupported type %s", typ)
		}
	}

	return assignValues(to, getValue)
}

// LoadAndAssign loads Maps and Programs into the kernel and assigns them
// to a struct.
//
// Omitting Map/Program.Close() during application shutdown is an error.
// See the package documentation for details around Map and Program lifecycle.
//
// This function is a shortcut to manually checking the presence
// of maps and programs in a CollectionSpec. Consider using bpf2go
// if this sounds useful.
//
// 'to' must be a pointer to a struct. A field of the struct is updated with
// a Program or Map if it has an `ebpf` tag and its type is *Program or *Map.
// The tag's value specifies the name of the program or map as found in the
// CollectionSpec. Before updating the struct, the requested objects and their
// dependent resources are loaded into the kernel and populated with values if
// specified.
//
//	struct {
//		Foo *ebpf.Program `ebpf:"xdp_foo"`
//		Bar *ebpf.Map     `ebpf:"bar_map"`
//		Ignored int
//	}
//
// opts may be nil.
//
// Returns an error if any of the fields can't be found, or
// if the same Map or Program is assigned multiple times.
func (cs *CollectionSpec) LoadAndAssign(to interface{}, opts *CollectionOptions) error {
	loader, err := newCollectionLoader(cs, opts)
	if err != nil {
		return err
	}
	defer loader.close()

	// Support assigning Programs and Maps, lazy-loading the required objects.
	assignedMaps := make(map[string]bool)
	assignedProgs := make(map[string]bool)

	getValue := func(typ reflect.Type, name string) (interface{}, error) {
		switch typ {

		case reflect.TypeOf((*Program)(nil)):
			assignedProgs[name] = true
			return loader.loadProgram(name)

		case reflect.TypeOf((*Map)(nil)):
			assignedMaps[name] = true
			return loader.loadMap(name)

		default:
			return nil, fmt.Errorf("unsupported type %s", typ)
		}
	}

	// Load the Maps and Programs requested by the annotated struct.
	if err := assignValues(to, getValue); err != nil {
		return err
	}

	// Populate the requested maps. Has a chance of lazy-loading other dependent maps.
	if err := loader.populateMaps(); err != nil {
		return err
	}

	// Evaluate the loader's objects after all (lazy)loading has taken place.
	for n, m := range loader.maps {
		switch m.typ {
		case ProgramArray:
			// Require all lazy-loaded ProgramArrays to be assigned to the given object.
			// The kernel empties a ProgramArray once the last user space reference
			// to it closes, which leads to failed tail calls. Combined with the library
			// closing map fds via GC finalizers this can lead to surprising behaviour.
			// Only allow unassigned ProgramArrays when the library hasn't pre-populated
			// any entries from static value declarations. At this point, we know the map
			// is empty and there's no way for the caller to interact with the map going
			// forward.
			if !assignedMaps[n] && len(cs.Maps[n].Contents) > 0 {
				return fmt.Errorf("ProgramArray %s must be assigned to prevent missed tail calls", n)
			}
		}
	}

	// Prevent loader.cleanup() from closing assigned Maps and Programs.
	for m := range assignedMaps {
		delete(loader.maps, m)
	}
	for p := range assignedProgs {
		delete(loader.programs, p)
	}

	return nil
}

// Collection is a collection of Programs and Maps associated
// with their symbols.
type Collection struct {
	Programs map[string]*Program
	Maps     map[string]*Map
}

// NewCollection creates a Collection from the given spec, creating and
// loading its declared resources into the kernel.
//
// Omitting Collection.Close() during application shutdown is an error.
// See the package documentation for details around Map and Program lifecycle.
func NewCollection(spec *CollectionSpec) (*Collection, error) {
	return NewCollectionWithOptions(spec, CollectionOptions{})
}

// NewCollectionWithOptions creates a Collection from the given spec using
// options, creating and loading its declared resources into the kernel.
//
// Omitting Collection.Close() during application shutdown is an error.
// See the package documentation for details around Map and Program lifecycle.
func NewCollectionWithOptions(spec *CollectionSpec, opts CollectionOptions) (*Collection, error) {
	loader, err := newCollectionLoader(spec, &opts)
	if err != nil {
		return nil, err
	}
	defer loader.close()

	// Create maps first, as their fds need to be linked into programs.
	for mapName := range spec.Maps {
		if _, err := loader.loadMap(mapName); err != nil {
			return nil, err
		}
	}

	for progName, prog := range spec.Programs {
		if prog.Type == UnspecifiedProgram {
			continue
		}

		if _, err := loader.loadProgram(progName); err != nil {
			return nil, err
		}
	}

	// Maps can contain Program and Map stubs, so populate them after
	// all Maps and Programs have been successfully loaded.
	if err := loader.populateMaps(); err != nil {
		return nil, err
	}

	// Prevent loader.cleanup from closing maps and programs.
	maps, progs := loader.maps, loader.programs
	loader.maps, loader.programs = nil, nil

	return &Collection{
		progs,
		maps,
	}, nil
}

type collectionLoader struct {
	coll     *CollectionSpec
	opts     *CollectionOptions
	maps     map[string]*Map
	programs map[string]*Program
}

func newCollectionLoader(coll *CollectionSpec, opts *CollectionOptions) (*collectionLoader, error) {
	if opts == nil {
		opts = &CollectionOptions{}
	}

	// Check for existing MapSpecs in the CollectionSpec for all provided replacement maps.
	for name, m := range opts.MapReplacements {
		spec, ok := coll.Maps[name]
		if !ok {
			return nil, fmt.Errorf("replacement map %s not found in CollectionSpec", name)
		}

		if err := spec.Compatible(m); err != nil {
			return nil, fmt.Errorf("using replacement map %s: %w", spec.Name, err)
		}
	}

	return &collectionLoader{
		coll,
		opts,
		make(map[string]*Map),
		make(map[string]*Program),
	}, nil
}

// close all resources left over in the collectionLoader.
func (cl *collectionLoader) close() {
	for _, m := range cl.maps {
		m.Close()
	}
	for _, p := range cl.programs {
		p.Close()
	}
}

func (cl *collectionLoader) loadMap(mapName string) (*Map, error) {
	if m := cl.maps[mapName]; m != nil {
		return m, nil
	}

	mapSpec := cl.coll.Maps[mapName]
	if mapSpec == nil {
		return nil, fmt.Errorf("missing map %s", mapName)
	}

	if replaceMap, ok := cl.opts.MapReplacements[mapName]; ok {
		// Clone the map to avoid closing user's map later on.
		m, err := replaceMap.Clone()
		if err != nil {
			return nil, err
		}

		cl.maps[mapName] = m
		return m, nil
	}

	m, err := newMapWithOptions(mapSpec, cl.opts.Maps)
	if err != nil {
		return nil, fmt.Errorf("map %s: %w", mapName, err)
	}

	cl.maps[mapName] = m
	return m, nil
}

func (cl *collectionLoader) loadProgram(progName string) (*Program, error) {
	if prog := cl.programs[progName]; prog != nil {
		return prog, nil
	}

	progSpec := cl.coll.Programs[progName]
	if progSpec == nil {
		return nil, fmt.Errorf("unknown program %s", progName)
	}

	// Bail out early if we know the kernel is going to reject the program.
	// This skips loading map dependencies, saving some cleanup work later.
	if progSpec.Type == UnspecifiedProgram {
		return nil, fmt.Errorf("cannot load program %s: program type is unspecified", progName)
	}

	progSpec = progSpec.Copy()

	// Rewrite any reference to a valid map in the program's instructions,
	// which includes all of its dependencies.
	for i := range progSpec.Instructions {
		ins := &progSpec.Instructions[i]

		if !ins.IsLoadFromMap() || ins.Reference() == "" {
			continue
		}

		// Don't overwrite map loads containing non-zero map fd's,
		// they can be manually included by the caller.
		// Map FDs/IDs are placed in the lower 32 bits of Constant.
		if int32(ins.Constant) > 0 {
			continue
		}

		m, err := cl.loadMap(ins.Reference())
		if err != nil {
			return nil, fmt.Errorf("program %s: %w", progName, err)
		}

		if err := ins.AssociateMap(m); err != nil {
			return nil, fmt.Errorf("program %s: map %s: %w", progName, ins.Reference(), err)
		}
	}

	prog, err := newProgramWithOptions(progSpec, cl.opts.Programs)
	if err != nil {
		return nil, fmt.Errorf("program %s: %w", progName, err)
	}

	cl.programs[progName] = prog
	return prog, nil
}

func (cl *collectionLoader) populateMaps() error {
	for mapName, m := range cl.maps {
		mapSpec, ok := cl.coll.Maps[mapName]
		if !ok {
			return fmt.Errorf("missing map spec %s", mapName)
		}

		// MapSpecs that refer to inner maps or programs within the same
		// CollectionSpec do so using strings. These strings are used as the key
		// to look up the respective object in the Maps or Programs fields.
		// Resolve those references to actual Map or Program resources that
		// have been loaded into the kernel.
		if mapSpec.Type.canStoreMap() || mapSpec.Type.canStoreProgram() {
			mapSpec = mapSpec.Copy()

			for i, kv := range mapSpec.Contents {
				objName, ok := kv.Value.(string)
				if !ok {
					continue
				}

				switch t := mapSpec.Type; {
				case t.canStoreProgram():
					// loadProgram is idempotent and could return an existing Program.
					prog, err := cl.loadProgram(objName)
					if err != nil {
						return fmt.Errorf("loading program %s, for map %s: %w", objName, mapName, err)
					}
					mapSpec.Contents[i] = MapKV{kv.Key, prog}

				case t.canStoreMap():
					// loadMap is idempotent and could return an existing Map.
					innerMap, err := cl.loadMap(objName)
					if err != nil {
						return fmt.Errorf("loading inner map %s, for map %s: %w", objName, mapName, err)
					}
					mapSpec.Contents[i] = MapKV{kv.Key, innerMap}
				}
			}
		}

		// Populate and freeze the map if specified.
		if err := m.finalize(mapSpec); err != nil {
			return fmt.Errorf("populating map %s: %w", mapName, err)
		}
	}

	return nil
}

// resolveKconfig resolves all variables declared in .kconfig and populates
// m.Contents. Does nothing if the given m.Contents is non-empty.
func resolveKconfig(m *MapSpec) error {
	ds, ok := m.Value.(*btf.Datasec)
	if !ok {
		return errors.New("map value is not a Datasec")
	}

	type configInfo struct {
		offset uint32
		typ    btf.Type
	}

	configs := make(map[string]configInfo)

	data := make([]byte, ds.Size)
	for _, vsi := range ds.Vars {
		v := vsi.Type.(*btf.Var)
		n := v.TypeName()

		switch n {
		case "LINUX_KERNEL_VERSION":
			if integer, ok := v.Type.(*btf.Int); !ok || integer.Size != 4 {
				return fmt.Errorf("variable %s must be a 32-bit integer, got %s", n, v.Type)
			}

			kv, err := internal.KernelVersion()
			if err != nil {
				return fmt.Errorf("getting kernel version: %w", err)
			}
			internal.NativeEndian.PutUint32(data[vsi.Offset:], kv.Kernel())

		case "LINUX_HAS_SYSCALL_WRAPPER":
			if integer, ok := v.Type.(*btf.Int); !ok || integer.Size != 4 {
				return fmt.Errorf("variable %s must be a 32-bit integer, got %s", n, v.Type)
			}
			var value uint32 = 1
			if err := haveSyscallWrapper(); errors.Is(err, ErrNotSupported) {
				value = 0
			} else if err != nil {
				return fmt.Errorf("unable to derive a value for LINUX_HAS_SYSCALL_WRAPPER: %w", err)
			}

			internal.NativeEndian.PutUint32(data[vsi.Offset:], value)

		default: // Catch CONFIG_*.
			configs[n] = configInfo{
				offset: vsi.Offset,
				typ:    v.Type,
			}
		}
	}

	// We only parse the kconfig file if a CONFIG_* variable was found.
	if len(configs) > 0 {
		f, err := kconfig.Find()
		if err != nil {
			return fmt.Errorf("cannot find a kconfig file: %w", err)
		}
		defer f.Close()

		filter := make(map[string]struct{}, len(configs))
		for config := range configs {
			filter[config] = struct{}{}
		}

		kernelConfig, err := kconfig.Parse(f, filter)
		if err != nil {
			return fmt.Errorf("cannot parse kconfig file: %w", err)
		}

		for n, info := range configs {
			value, ok := kernelConfig[n]
			if !ok {
				return fmt.Errorf("config option %q does not exist for this kernel", n)
			}

			err := kconfig.PutValue(data[info.offset:], info.typ, value)
			if err != nil {
				return fmt.Errorf("problem adding value for %s: %w", n, err)
			}
		}
	}

	m.Contents = []MapKV{{uint32(0), data}}

	return nil
}
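The CONFIG_* branch above relies on kconfig.Parse to scan the kernel's config file and return only the keys requested by the filter. A simplified, stdlib-only sketch of that scan (the `parseKconfig` helper and its exact behaviour are illustrative assumptions, not the library's implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseKconfig is a hypothetical, simplified stand-in for kconfig.Parse:
// it scans "CONFIG_FOO=value" lines and keeps only keys present in filter,
// skipping comments such as "# CONFIG_X is not set".
func parseKconfig(contents string, filter map[string]struct{}) map[string]string {
	out := make(map[string]string)
	s := bufio.NewScanner(strings.NewReader(contents))
	for s.Scan() {
		line := strings.TrimSpace(s.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		if _, want := filter[key]; want {
			out[key] = value
		}
	}
	return out
}

func main() {
	cfg := "# comment\nCONFIG_BPF=y\nCONFIG_HZ=250\nCONFIG_OTHER=m\n"
	got := parseKconfig(cfg, map[string]struct{}{"CONFIG_BPF": {}, "CONFIG_HZ": {}})
	fmt.Println(got["CONFIG_BPF"], got["CONFIG_HZ"]) // y 250
}
```

Filtering at parse time keeps the map small: only the variables actually declared in the BPF object's .kconfig section are resolved.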

// LoadCollection reads an object file and creates and loads its declared
// resources into the kernel.
//
// Omitting Collection.Close() during application shutdown is an error.
// See the package documentation for details around Map and Program lifecycle.
func LoadCollection(file string) (*Collection, error) {
	spec, err := LoadCollectionSpec(file)
	if err != nil {
		return nil, err
	}
	return NewCollection(spec)
}

// Close frees all maps and programs associated with the collection.
//
// The collection mustn't be used afterwards.
func (coll *Collection) Close() {
	for _, prog := range coll.Programs {
		prog.Close()
	}
	for _, m := range coll.Maps {
		m.Close()
	}
}

// DetachMap removes the named map from the Collection.
//
// This means that a later call to Close() will not affect this map.
//
// Returns nil if no map of that name exists.
func (coll *Collection) DetachMap(name string) *Map {
	m := coll.Maps[name]
	delete(coll.Maps, name)
	return m
}

// DetachProgram removes the named program from the Collection.
//
// This means that a later call to Close() will not affect this program.
//
// Returns nil if no program of that name exists.
func (coll *Collection) DetachProgram(name string) *Program {
	p := coll.Programs[name]
	delete(coll.Programs, name)
	return p
}

// structField represents a struct field containing the ebpf struct tag.
type structField struct {
	reflect.StructField
	value reflect.Value
}

// ebpfFields extracts field names tagged with 'ebpf' from a struct type.
// Keep track of visited types to avoid infinite recursion.
func ebpfFields(structVal reflect.Value, visited map[reflect.Type]bool) ([]structField, error) {
	if visited == nil {
		visited = make(map[reflect.Type]bool)
	}

	structType := structVal.Type()
	if structType.Kind() != reflect.Struct {
		return nil, fmt.Errorf("%s is not a struct", structType)
	}

	if visited[structType] {
		return nil, fmt.Errorf("recursion on type %s", structType)
	}

	fields := make([]structField, 0, structType.NumField())
	for i := 0; i < structType.NumField(); i++ {
		field := structField{structType.Field(i), structVal.Field(i)}

		// If the field is tagged, gather it and move on.
		name := field.Tag.Get("ebpf")
		if name != "" {
			fields = append(fields, field)
			continue
		}

		// If the field does not have an ebpf tag, but is a struct or a pointer
		// to a struct, attempt to gather its fields as well.
		var v reflect.Value
		switch field.Type.Kind() {
		case reflect.Ptr:
			if field.Type.Elem().Kind() != reflect.Struct {
				continue
			}

			if field.value.IsNil() {
				return nil, fmt.Errorf("nil pointer to %s", structType)
			}

			// Obtain the destination type of the pointer.
			v = field.value.Elem()

		case reflect.Struct:
			// Reference the value's type directly.
			v = field.value

		default:
			continue
		}

		inner, err := ebpfFields(v, visited)
		if err != nil {
			return nil, fmt.Errorf("field %s: %w", field.Name, err)
		}

		fields = append(fields, inner...)
	}

	return fields, nil
}

// assignValues attempts to populate all fields of 'to' tagged with 'ebpf'.
//
// getValue is called for every tagged field of 'to' and must return the value
// to be assigned to the field with the given typ and name.
func assignValues(to interface{},
	getValue func(typ reflect.Type, name string) (interface{}, error)) error {

	toValue := reflect.ValueOf(to)
	if toValue.Type().Kind() != reflect.Ptr {
		return fmt.Errorf("%T is not a pointer to struct", to)
	}

	if toValue.IsNil() {
		return fmt.Errorf("nil pointer to %T", to)
	}

	fields, err := ebpfFields(toValue.Elem(), nil)
	if err != nil {
		return err
	}

	type elem struct {
		// Either *Map or *Program
		typ  reflect.Type
		name string
	}

	assigned := make(map[elem]string)
	for _, field := range fields {
		// Get string value the field is tagged with.
		tag := field.Tag.Get("ebpf")
		if strings.Contains(tag, ",") {
			return fmt.Errorf("field %s: ebpf tag contains a comma", field.Name)
		}

		// Check if the eBPF object with the requested
		// type and tag was already assigned elsewhere.
		e := elem{field.Type, tag}
		if af := assigned[e]; af != "" {
			return fmt.Errorf("field %s: object %q was already assigned to %s", field.Name, tag, af)
		}

		// Get the eBPF object referred to by the tag.
		value, err := getValue(field.Type, tag)
		if err != nil {
			return fmt.Errorf("field %s: %w", field.Name, err)
		}

		if !field.value.CanSet() {
			return fmt.Errorf("field %s: can't set value", field.Name)
		}
		field.value.Set(reflect.ValueOf(value))

		assigned[e] = field.Name
	}

	return nil
}
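The tag-driven assignment that Assign and LoadAndAssign build on can be reproduced with stdlib reflect alone. A minimal sketch, assuming a hypothetical `obj` tag and string values in place of the library's `ebpf` tag and *Map/*Program values (the `assignByTag` helper is illustrative, not the library's API):

```go
package main

import (
	"fmt"
	"reflect"
)

// assignByTag mirrors the ebpfFields/assignValues mechanism: for every
// exported field tagged `obj:"name"`, look up "name" in vals and set the
// field. Untagged fields are skipped, and a missing object is an error.
func assignByTag(to interface{}, vals map[string]string) error {
	v := reflect.ValueOf(to)
	if v.Kind() != reflect.Ptr || v.IsNil() {
		return fmt.Errorf("%T is not a non-nil pointer to struct", to)
	}
	sv := v.Elem()
	st := sv.Type()
	for i := 0; i < st.NumField(); i++ {
		tag := st.Field(i).Tag.Get("obj")
		if tag == "" {
			continue // untagged fields are ignored, like the Ignored int above
		}
		val, ok := vals[tag]
		if !ok {
			return fmt.Errorf("field %s: missing object %q", st.Field(i).Name, tag)
		}
		sv.Field(i).SetString(val)
	}
	return nil
}

func main() {
	var objs struct {
		Foo     string `obj:"xdp_foo"`
		Bar     string `obj:"bar_map"`
		Ignored int
	}
	err := assignByTag(&objs, map[string]string{"xdp_foo": "prog", "bar_map": "map"})
	fmt.Println(err, objs.Foo, objs.Bar) // <nil> prog map
}
```

The real assignValues adds what this sketch omits: recursion into embedded structs, duplicate-assignment detection, and a pluggable getValue so the same walker serves both spec-only Assign and kernel-loading LoadAndAssign.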
25
vendor/github.com/cilium/ebpf/doc.go
generated
vendored
25
vendor/github.com/cilium/ebpf/doc.go
generated
vendored
@@ -1,25 +0,0 @@
// Package ebpf is a toolkit for working with eBPF programs.
//
// eBPF programs are small snippets of code which are executed directly
// in a VM in the Linux kernel, which makes them very fast and flexible.
// Many Linux subsystems now accept eBPF programs. This makes it possible
// to implement highly application specific logic inside the kernel,
// without having to modify the actual kernel itself.
//
// This package is designed for long-running processes which
// want to use eBPF to implement part of their application logic. It has no
// run-time dependencies outside of the library and the Linux kernel itself.
// eBPF code should be compiled ahead of time using clang, and shipped with
// your application as any other resource.
//
// Use the link subpackage to attach a loaded program to a hook in the kernel.
//
// Note that losing all references to Map and Program resources will cause
// their underlying file descriptors to be closed, potentially removing those
// objects from the kernel. Always retain a reference by e.g. deferring a
// Close() of a Collection or LoadAndAssign object until application exit.
//
// Special care needs to be taken when handling maps of type ProgramArray,
// as the kernel erases its contents when the last userspace or bpffs
// reference disappears, regardless of the map being in active use.
package ebpf
1314
vendor/github.com/cilium/ebpf/elf_reader.go
generated
vendored
1314
vendor/github.com/cilium/ebpf/elf_reader.go
generated
vendored
File diff suppressed because it is too large
Load Diff
373
vendor/github.com/cilium/ebpf/info.go
generated
vendored
373
vendor/github.com/cilium/ebpf/info.go
generated
vendored
@@ -1,373 +0,0 @@
package ebpf

import (
	"bufio"
	"bytes"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"os"
	"strings"
	"syscall"
	"time"
	"unsafe"

	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

// MapInfo describes a map.
type MapInfo struct {
	Type       MapType
	id         MapID
	KeySize    uint32
	ValueSize  uint32
	MaxEntries uint32
	Flags      uint32
	// Name as supplied by user space at load time. Available from 4.15.
	Name string
}

func newMapInfoFromFd(fd *sys.FD) (*MapInfo, error) {
	var info sys.MapInfo
	err := sys.ObjInfo(fd, &info)
	if errors.Is(err, syscall.EINVAL) {
		return newMapInfoFromProc(fd)
	}
	if err != nil {
		return nil, err
	}

	return &MapInfo{
		MapType(info.Type),
		MapID(info.Id),
		info.KeySize,
		info.ValueSize,
		info.MaxEntries,
		uint32(info.MapFlags),
		unix.ByteSliceToString(info.Name[:]),
	}, nil
}

func newMapInfoFromProc(fd *sys.FD) (*MapInfo, error) {
	var mi MapInfo
	err := scanFdInfo(fd, map[string]interface{}{
		"map_type":    &mi.Type,
		"key_size":    &mi.KeySize,
		"value_size":  &mi.ValueSize,
		"max_entries": &mi.MaxEntries,
		"map_flags":   &mi.Flags,
	})
	if err != nil {
		return nil, err
	}
	return &mi, nil
}
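The /proc fallback above works because each line of /proc/self/fdinfo/&lt;fd&gt; is a "key:\tvalue" pair. A stdlib-only sketch of that key/value scan (the `scanKV` helper is a hypothetical simplification of scanFdInfo, which also supports non-integer fields and reports missing keys):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// scanKV parses "key:\tvalue" lines; every key present in fields has its
// value parsed as a uint32 into the corresponding pointer. Unknown keys
// are ignored, mirroring how fdinfo carries more fields than we need.
func scanKV(r string, fields map[string]*uint32) error {
	s := bufio.NewScanner(strings.NewReader(r))
	for s.Scan() {
		key, value, ok := strings.Cut(s.Text(), ":")
		if !ok {
			continue
		}
		dst, want := fields[key]
		if !want {
			continue
		}
		n, err := strconv.ParseUint(strings.TrimSpace(value), 10, 32)
		if err != nil {
			return fmt.Errorf("field %s: %w", key, err)
		}
		*dst = uint32(n)
	}
	return s.Err()
}

func main() {
	fdinfo := "pos:\t0\nmap_type:\t1\nkey_size:\t4\nvalue_size:\t8\n"
	var keySize, valueSize uint32
	err := scanKV(fdinfo, map[string]*uint32{"key_size": &keySize, "value_size": &valueSize})
	fmt.Println(err, keySize, valueSize) // <nil> 4 8
}
```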

// ID returns the map ID.
//
// Available from 4.13.
//
// The bool return value indicates whether this optional field is available.
func (mi *MapInfo) ID() (MapID, bool) {
	return mi.id, mi.id > 0
}

// programStats holds statistics of a program.
type programStats struct {
	// Total accumulated runtime of the program in ns.
	runtime time.Duration
	// Total number of times the program was called.
	runCount uint64
}

// ProgramInfo describes a program.
type ProgramInfo struct {
	Type ProgramType
	id   ProgramID
	// Truncated hash of the BPF bytecode. Available from 4.13.
	Tag string
	// Name as supplied by user space at load time. Available from 4.15.
	Name string

	createdByUID     uint32
	haveCreatedByUID bool
	btf              btf.ID
	stats            *programStats

	maps  []MapID
	insns []byte
}

func newProgramInfoFromFd(fd *sys.FD) (*ProgramInfo, error) {
	var info sys.ProgInfo
	err := sys.ObjInfo(fd, &info)
	if errors.Is(err, syscall.EINVAL) {
		return newProgramInfoFromProc(fd)
	}
	if err != nil {
		return nil, err
	}

	pi := ProgramInfo{
		Type: ProgramType(info.Type),
		id:   ProgramID(info.Id),
		Tag:  hex.EncodeToString(info.Tag[:]),
		Name: unix.ByteSliceToString(info.Name[:]),
		btf:  btf.ID(info.BtfId),
		stats: &programStats{
			runtime:  time.Duration(info.RunTimeNs),
			runCount: info.RunCnt,
		},
	}

	// Start with a clean struct for the second call, otherwise we may get EFAULT.
	var info2 sys.ProgInfo

	if info.NrMapIds > 0 {
		pi.maps = make([]MapID, info.NrMapIds)
		info2.NrMapIds = info.NrMapIds
		info2.MapIds = sys.NewPointer(unsafe.Pointer(&pi.maps[0]))
	} else if haveProgramInfoMapIDs() == nil {
		// This program really has no associated maps.
		pi.maps = make([]MapID, 0)
	} else {
		// The kernel doesn't report associated maps.
		pi.maps = nil
	}

	// createdByUID and NrMapIds were introduced in the same kernel version.
	if pi.maps != nil {
		pi.createdByUID = info.CreatedByUid
		pi.haveCreatedByUID = true
	}

	if info.XlatedProgLen > 0 {
		pi.insns = make([]byte, info.XlatedProgLen)
		info2.XlatedProgLen = info.XlatedProgLen
		info2.XlatedProgInsns = sys.NewSlicePointer(pi.insns)
	}

	if info.NrMapIds > 0 || info.XlatedProgLen > 0 {
		if err := sys.ObjInfo(fd, &info2); err != nil {
			return nil, err
		}
	}

	return &pi, nil
}

func newProgramInfoFromProc(fd *sys.FD) (*ProgramInfo, error) {
	var info ProgramInfo
	err := scanFdInfo(fd, map[string]interface{}{
		"prog_type": &info.Type,
		"prog_tag":  &info.Tag,
	})
	if errors.Is(err, errMissingFields) {
		return nil, &internal.UnsupportedFeatureError{
			Name:           "reading program info from /proc/self/fdinfo",
			MinimumVersion: internal.Version{4, 10, 0},
		}
	}
	if err != nil {
		return nil, err
	}

	return &info, nil
}

// ID returns the program ID.
//
// Available from 4.13.
//
// The bool return value indicates whether this optional field is available.
func (pi *ProgramInfo) ID() (ProgramID, bool) {
	return pi.id, pi.id > 0
}

// CreatedByUID returns the Uid that created the program.
//
// Available from 4.15.
//
// The bool return value indicates whether this optional field is available.
func (pi *ProgramInfo) CreatedByUID() (uint32, bool) {
	return pi.createdByUID, pi.haveCreatedByUID
}

// BTFID returns the BTF ID associated with the program.
//
// The ID is only valid as long as the associated program is kept alive.
|
||||
// Available from 5.0.
|
||||
//
|
||||
// The bool return value indicates whether this optional field is available and
|
||||
// populated. (The field may be available but not populated if the kernel
|
||||
// supports the field but the program was loaded without BTF information.)
|
||||
func (pi *ProgramInfo) BTFID() (btf.ID, bool) {
|
||||
return pi.btf, pi.btf > 0
|
||||
}
|
||||
|
||||
// RunCount returns the total number of times the program was called.
|
||||
//
|
||||
// Can return 0 if the collection of statistics is not enabled. See EnableStats().
|
||||
// The bool return value indicates whether this optional field is available.
|
||||
func (pi *ProgramInfo) RunCount() (uint64, bool) {
|
||||
if pi.stats != nil {
|
||||
return pi.stats.runCount, true
|
||||
}
|
||||
return 0, false
|
||||
}
|
||||
|
||||
// Runtime returns the total accumulated runtime of the program.
|
||||
//
|
||||
// Can return 0 if the collection of statistics is not enabled. See EnableStats().
|
||||
// The bool return value indicates whether this optional field is available.
|
||||
func (pi *ProgramInfo) Runtime() (time.Duration, bool) {
|
||||
if pi.stats != nil {
|
||||
return pi.stats.runtime, true
|
||||
}
|
||||
return time.Duration(0), false
|
||||
}
|
||||
|
||||
// Instructions returns the 'xlated' instruction stream of the program
|
||||
// after it has been verified and rewritten by the kernel. These instructions
|
||||
// cannot be loaded back into the kernel as-is, this is mainly used for
|
||||
// inspecting loaded programs for troubleshooting, dumping, etc.
|
||||
//
|
||||
// For example, map accesses are made to reference their kernel map IDs,
|
||||
// not the FDs they had when the program was inserted. Note that before
|
||||
// the introduction of bpf_insn_prepare_dump in kernel 4.16, xlated
|
||||
// instructions were not sanitized, making the output even less reusable
|
||||
// and less likely to round-trip or evaluate to the same program Tag.
|
||||
//
|
||||
// The first instruction is marked as a symbol using the Program's name.
|
||||
//
|
||||
// Available from 4.13. Requires CAP_BPF or equivalent.
|
||||
func (pi *ProgramInfo) Instructions() (asm.Instructions, error) {
|
||||
// If the calling process is not BPF-capable or if the kernel doesn't
|
||||
// support getting xlated instructions, the field will be zero.
|
||||
if len(pi.insns) == 0 {
|
||||
return nil, fmt.Errorf("insufficient permissions or unsupported kernel: %w", ErrNotSupported)
|
||||
}
|
||||
|
||||
r := bytes.NewReader(pi.insns)
|
||||
var insns asm.Instructions
|
||||
if err := insns.Unmarshal(r, internal.NativeEndian); err != nil {
|
||||
return nil, fmt.Errorf("unmarshaling instructions: %w", err)
|
||||
}
|
||||
|
||||
// Tag the first instruction with the name of the program, if available.
|
||||
insns[0] = insns[0].WithSymbol(pi.Name)
|
||||
|
||||
return insns, nil
|
||||
}
|
||||
|
||||
// MapIDs returns the maps related to the program.
|
||||
//
|
||||
// Available from 4.15.
|
||||
//
|
||||
// The bool return value indicates whether this optional field is available.
|
||||
func (pi *ProgramInfo) MapIDs() ([]MapID, bool) {
|
||||
return pi.maps, pi.maps != nil
|
||||
}
|
||||
|
||||
func scanFdInfo(fd *sys.FD, fields map[string]interface{}) error {
|
||||
fh, err := os.Open(fmt.Sprintf("/proc/self/fdinfo/%d", fd.Int()))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer fh.Close()
|
||||
|
||||
if err := scanFdInfoReader(fh, fields); err != nil {
|
||||
return fmt.Errorf("%s: %w", fh.Name(), err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var errMissingFields = errors.New("missing fields")
|
||||
|
||||
func scanFdInfoReader(r io.Reader, fields map[string]interface{}) error {
|
||||
var (
|
||||
scanner = bufio.NewScanner(r)
|
||||
scanned int
|
||||
)
|
||||
|
||||
for scanner.Scan() {
|
||||
parts := strings.SplitN(scanner.Text(), "\t", 2)
|
||||
if len(parts) != 2 {
|
||||
continue
|
||||
}
|
||||
|
||||
name := strings.TrimSuffix(parts[0], ":")
|
||||
field, ok := fields[string(name)]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
if n, err := fmt.Sscanln(parts[1], field); err != nil || n != 1 {
|
||||
return fmt.Errorf("can't parse field %s: %v", name, err)
|
||||
}
|
||||
|
||||
scanned++
|
||||
}
|
||||
|
||||
if err := scanner.Err(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(fields) > 0 && scanned == 0 {
|
||||
return ErrNotSupported
|
||||
}
|
||||
|
||||
if scanned != len(fields) {
|
||||
return errMissingFields
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// EnableStats starts the measuring of the runtime
|
||||
// and run counts of eBPF programs.
|
||||
//
|
||||
// Collecting statistics can have an impact on the performance.
|
||||
//
|
||||
// Requires at least 5.8.
|
||||
func EnableStats(which uint32) (io.Closer, error) {
|
||||
fd, err := sys.EnableStats(&sys.EnableStatsAttr{
|
||||
Type: which,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return fd, nil
|
||||
}
|
||||
|
||||
var haveProgramInfoMapIDs = internal.NewFeatureTest("map IDs in program info", "4.15", func() error {
|
||||
prog, err := progLoad(asm.Instructions{
|
||||
asm.LoadImm(asm.R0, 0, asm.DWord),
|
||||
asm.Return(),
|
||||
}, SocketFilter, "MIT")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer prog.Close()
|
||||
|
||||
err = sys.ObjInfo(prog, &sys.ProgInfo{
|
||||
// NB: Don't need to allocate MapIds since the program isn't using
|
||||
// any maps.
|
||||
NrMapIds: 1,
|
||||
})
|
||||
if errors.Is(err, unix.EINVAL) {
|
||||
// Most likely the syscall doesn't exist.
|
||||
return internal.ErrNotSupported
|
||||
}
|
||||
if errors.Is(err, unix.E2BIG) {
|
||||
// We've hit check_uarg_tail_zero on older kernels.
|
||||
return internal.ErrNotSupported
|
||||
}
|
||||
|
||||
return err
|
||||
})
|
||||
8
vendor/github.com/cilium/ebpf/internal/align.go
generated
vendored
@@ -1,8 +0,0 @@
package internal

import "golang.org/x/exp/constraints"

// Align returns 'n' updated to 'alignment' boundary.
func Align[I constraints.Integer](n, alignment I) I {
	return (n + alignment - 1) / alignment * alignment
}
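Align rounds n up to the next multiple of alignment using integer division. A quick standalone check of the arithmetic (illustrative copy, without the generic constraint):

```go
package main

import "fmt"

// align mirrors the round-up formula used by internal.Align.
func align(n, alignment int) int {
	return (n + alignment - 1) / alignment * alignment
}

func main() {
	// 13 rounds up to 16; 16 is already aligned; 0 stays 0.
	fmt.Println(align(13, 8), align(16, 8), align(0, 8)) // 16 16 0
}
```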
31
vendor/github.com/cilium/ebpf/internal/buffer.go
generated
vendored
@@ -1,31 +0,0 @@
package internal

import (
	"bytes"
	"sync"
)

var bytesBufferPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

// NewBuffer retrieves a [bytes.Buffer] from a pool and re-initialises it.
//
// The returned buffer should be passed to [PutBuffer].
func NewBuffer(buf []byte) *bytes.Buffer {
	wr := bytesBufferPool.Get().(*bytes.Buffer)
	// Reinitialize the Buffer with a new backing slice since it is returned to
	// the caller by wr.Bytes() below. Pooling is faster despite calling
	// NewBuffer. The pooled alloc is still reused, it only needs to be zeroed.
	*wr = *bytes.NewBuffer(buf)
	return wr
}

// PutBuffer releases a buffer to the pool.
func PutBuffer(buf *bytes.Buffer) {
	// Release reference to the backing buffer.
	*buf = *bytes.NewBuffer(nil)
	bytesBufferPool.Put(buf)
}
51
vendor/github.com/cilium/ebpf/internal/cpu.go
generated
vendored
@@ -1,51 +0,0 @@
package internal

import (
	"fmt"
	"os"
	"strings"
)

// PossibleCPUs returns the max number of CPUs a system may possibly have.
// Logical CPU numbers must be of the form 0-n.
var PossibleCPUs = Memoize(func() (int, error) {
	return parseCPUsFromFile("/sys/devices/system/cpu/possible")
})

func parseCPUsFromFile(path string) (int, error) {
	spec, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}

	n, err := parseCPUs(string(spec))
	if err != nil {
		return 0, fmt.Errorf("can't parse %s: %v", path, err)
	}

	return n, nil
}

// parseCPUs parses the number of cpus from a string produced
// by bitmap_list_string() in the Linux kernel.
// Multiple ranges are rejected, since they can't be unified
// into a single number.
// This is the format of /sys/devices/system/cpu/possible, it
// is not suitable for /sys/devices/system/cpu/online, etc.
func parseCPUs(spec string) (int, error) {
	if strings.Trim(spec, "\n") == "0" {
		return 1, nil
	}

	var low, high int
	n, err := fmt.Sscanf(spec, "%d-%d\n", &low, &high)
	if n != 2 || err != nil {
		return 0, fmt.Errorf("invalid format: %s", spec)
	}
	if low != 0 {
		return 0, fmt.Errorf("CPU spec doesn't start at zero: %s", spec)
	}

	// cpus is 0 indexed
	return high + 1, nil
}
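The single-range format accepted here ("0-n", or the bare "0" for a one-CPU system) can be exercised with a standalone copy of the parsing logic:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePossible is an illustrative copy of the parseCPUs logic for the
// single-range format of /sys/devices/system/cpu/possible.
func parsePossible(spec string) (int, error) {
	// A lone "0" means exactly one CPU.
	if strings.Trim(spec, "\n") == "0" {
		return 1, nil
	}
	var low, high int
	n, err := fmt.Sscanf(spec, "%d-%d\n", &low, &high)
	if n != 2 || err != nil {
		return 0, fmt.Errorf("invalid format: %s", spec)
	}
	if low != 0 {
		return 0, fmt.Errorf("CPU spec doesn't start at zero: %s", spec)
	}
	// CPU numbers are zero-indexed, so the count is high+1.
	return high + 1, nil
}

func main() {
	n, _ := parsePossible("0-7\n")
	fmt.Println(n) // 8

	_, err := parsePossible("2-7\n")
	fmt.Println(err != nil) // true: range must start at zero
}
```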
91
vendor/github.com/cilium/ebpf/internal/deque.go
generated
vendored
@@ -1,91 +0,0 @@
package internal

import "math/bits"

// Deque implements a double ended queue.
type Deque[T any] struct {
	elems       []T
	read, write uint64
	mask        uint64
}

// Reset clears the contents of the deque while retaining the backing buffer.
func (dq *Deque[T]) Reset() {
	var zero T

	for i := dq.read; i < dq.write; i++ {
		dq.elems[i&dq.mask] = zero
	}

	dq.read, dq.write = 0, 0
}

func (dq *Deque[T]) Empty() bool {
	return dq.read == dq.write
}

// Push adds an element to the end.
func (dq *Deque[T]) Push(e T) {
	dq.Grow(1)
	dq.elems[dq.write&dq.mask] = e
	dq.write++
}

// Shift returns the first element or the zero value.
func (dq *Deque[T]) Shift() T {
	var zero T

	if dq.Empty() {
		return zero
	}

	index := dq.read & dq.mask
	t := dq.elems[index]
	dq.elems[index] = zero
	dq.read++
	return t
}

// Pop returns the last element or the zero value.
func (dq *Deque[T]) Pop() T {
	var zero T

	if dq.Empty() {
		return zero
	}

	dq.write--
	index := dq.write & dq.mask
	t := dq.elems[index]
	dq.elems[index] = zero
	return t
}

// Grow the deque's capacity, if necessary, to guarantee space for another n
// elements.
func (dq *Deque[T]) Grow(n int) {
	have := dq.write - dq.read
	need := have + uint64(n)
	if need < have {
		panic("overflow")
	}
	if uint64(len(dq.elems)) >= need {
		return
	}

	// Round up to the new power of two which is at least 8.
	// See https://jameshfisher.com/2018/03/30/round-up-power-2/
	capacity := 1 << (64 - bits.LeadingZeros64(need-1))
	if capacity < 8 {
		capacity = 8
	}

	elems := make([]T, have, capacity)
	pivot := dq.read & dq.mask
	copied := copy(elems, dq.elems[pivot:])
	copy(elems[copied:], dq.elems[:pivot])

	dq.elems = elems[:capacity]
	dq.mask = uint64(capacity) - 1
	dq.read, dq.write = 0, have
}
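Grow keeps the backing slice a power of two so that `index & mask` (with `mask = capacity-1`) replaces a modulo. The capacity choice, pulled out as a standalone sketch:

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextCap mirrors Deque.Grow's capacity choice: the next power of two
// that fits need, but at least 8.
func nextCap(need uint64) uint64 {
	// Round need up to a power of two via the bit length of need-1.
	capacity := uint64(1) << (64 - bits.LeadingZeros64(need-1))
	if capacity < 8 {
		capacity = 8
	}
	return capacity
}

func main() {
	fmt.Println(nextCap(1), nextCap(8), nextCap(9), nextCap(100)) // 8 8 16 128
}
```

Exact powers of two are preserved (8 stays 8), while 9 jumps to 16, because the rounding works on `need-1`.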
102
vendor/github.com/cilium/ebpf/internal/elf.go
generated
vendored
@@ -1,102 +0,0 @@
package internal

import (
	"debug/elf"
	"fmt"
	"io"
)

type SafeELFFile struct {
	*elf.File
}

// NewSafeELFFile reads an ELF safely.
//
// Any panic during parsing is turned into an error. This is necessary since
// there are a bunch of unfixed bugs in debug/elf.
//
// https://github.com/golang/go/issues?q=is%3Aissue+is%3Aopen+debug%2Felf+in%3Atitle
func NewSafeELFFile(r io.ReaderAt) (safe *SafeELFFile, err error) {
	defer func() {
		r := recover()
		if r == nil {
			return
		}

		safe = nil
		err = fmt.Errorf("reading ELF file panicked: %s", r)
	}()

	file, err := elf.NewFile(r)
	if err != nil {
		return nil, err
	}

	return &SafeELFFile{file}, nil
}

// OpenSafeELFFile reads an ELF from a file.
//
// It works like NewSafeELFFile, with the exception that safe.Close will
// close the underlying file.
func OpenSafeELFFile(path string) (safe *SafeELFFile, err error) {
	defer func() {
		r := recover()
		if r == nil {
			return
		}

		safe = nil
		err = fmt.Errorf("reading ELF file panicked: %s", r)
	}()

	file, err := elf.Open(path)
	if err != nil {
		return nil, err
	}

	return &SafeELFFile{file}, nil
}

// Symbols is the safe version of elf.File.Symbols.
func (se *SafeELFFile) Symbols() (syms []elf.Symbol, err error) {
	defer func() {
		r := recover()
		if r == nil {
			return
		}

		syms = nil
		err = fmt.Errorf("reading ELF symbols panicked: %s", r)
	}()

	syms, err = se.File.Symbols()
	return
}

// DynamicSymbols is the safe version of elf.File.DynamicSymbols.
func (se *SafeELFFile) DynamicSymbols() (syms []elf.Symbol, err error) {
	defer func() {
		r := recover()
		if r == nil {
			return
		}

		syms = nil
		err = fmt.Errorf("reading ELF dynamic symbols panicked: %s", r)
	}()

	syms, err = se.File.DynamicSymbols()
	return
}

// SectionsByType returns all sections in the file with the specified section type.
func (se *SafeELFFile) SectionsByType(typ elf.SectionType) []*elf.Section {
	sections := make([]*elf.Section, 0, 1)
	for _, section := range se.Sections {
		if section.Type == typ {
			sections = append(sections, section)
		}
	}
	return sections
}
12
vendor/github.com/cilium/ebpf/internal/endian_be.go
generated
vendored
@@ -1,12 +0,0 @@
//go:build armbe || arm64be || mips || mips64 || mips64p32 || ppc64 || s390 || s390x || sparc || sparc64

package internal

import "encoding/binary"

// NativeEndian is set to either binary.BigEndian or binary.LittleEndian,
// depending on the host's endianness.
var NativeEndian binary.ByteOrder = binary.BigEndian

// ClangEndian is set to either "el" or "eb" depending on the host's endianness.
const ClangEndian = "eb"
12
vendor/github.com/cilium/ebpf/internal/endian_le.go
generated
vendored
@@ -1,12 +0,0 @@
//go:build 386 || amd64 || amd64p32 || arm || arm64 || loong64 || mipsle || mips64le || mips64p32le || ppc64le || riscv64

package internal

import "encoding/binary"

// NativeEndian is set to either binary.BigEndian or binary.LittleEndian,
// depending on the host's endianness.
var NativeEndian binary.ByteOrder = binary.LittleEndian

// ClangEndian is set to either "el" or "eb" depending on the host's endianness.
const ClangEndian = "el"
198
vendor/github.com/cilium/ebpf/internal/errors.go
generated
vendored
@@ -1,198 +0,0 @@
package internal

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// ErrorWithLog wraps err in a VerifierError that includes the parsed verifier
// log buffer.
//
// The default error output is a summary of the full log. The latter can be
// accessed via VerifierError.Log or by formatting the error, see Format.
func ErrorWithLog(source string, err error, log []byte, truncated bool) *VerifierError {
	const whitespace = "\t\r\v\n "

	// Convert verifier log C string by truncating it on the first 0 byte
	// and trimming trailing whitespace before interpreting as a Go string.
	if i := bytes.IndexByte(log, 0); i != -1 {
		log = log[:i]
	}

	log = bytes.Trim(log, whitespace)
	if len(log) == 0 {
		return &VerifierError{source, err, nil, truncated}
	}

	logLines := bytes.Split(log, []byte{'\n'})
	lines := make([]string, 0, len(logLines))
	for _, line := range logLines {
		// Don't remove leading white space on individual lines. We rely on it
		// when outputting logs.
		lines = append(lines, string(bytes.TrimRight(line, whitespace)))
	}

	return &VerifierError{source, err, lines, truncated}
}

// VerifierError includes information from the eBPF verifier.
//
// It summarises the log output, see Format if you want to output the full contents.
type VerifierError struct {
	source string
	// The error which caused this error.
	Cause error
	// The verifier output split into lines.
	Log []string
	// Whether the log output is truncated, based on several heuristics.
	Truncated bool
}

func (le *VerifierError) Unwrap() error {
	return le.Cause
}

func (le *VerifierError) Error() string {
	log := le.Log
	if n := len(log); n > 0 && strings.HasPrefix(log[n-1], "processed ") {
		// Get rid of "processed 39 insns (limit 1000000) ..." from summary.
		log = log[:n-1]
	}

	var b strings.Builder
	fmt.Fprintf(&b, "%s: %s", le.source, le.Cause.Error())

	n := len(log)
	if n == 0 {
		return b.String()
	}

	lines := log[n-1:]
	if n >= 2 && (includePreviousLine(log[n-1]) || le.Truncated) {
		// Add one more line of context if it aids understanding the error.
		lines = log[n-2:]
	}

	for _, line := range lines {
		b.WriteString(": ")
		b.WriteString(strings.TrimSpace(line))
	}

	omitted := len(le.Log) - len(lines)
	if omitted == 0 && !le.Truncated {
		return b.String()
	}

	b.WriteString(" (")
	if le.Truncated {
		b.WriteString("truncated")
	}

	if omitted > 0 {
		if le.Truncated {
			b.WriteString(", ")
		}
		fmt.Fprintf(&b, "%d line(s) omitted", omitted)
	}
	b.WriteString(")")

	return b.String()
}

// includePreviousLine returns true if the given line likely is better
// understood with additional context from the preceding line.
func includePreviousLine(line string) bool {
	// We need to find a good trade off between understandable error messages
	// and too much complexity here. Checking the string prefix is ok, requiring
	// regular expressions to do it is probably overkill.

	if strings.HasPrefix(line, "\t") {
		// [13] STRUCT drm_rect size=16 vlen=4
		// \tx1 type_id=2
		return true
	}

	if len(line) >= 2 && line[0] == 'R' && line[1] >= '0' && line[1] <= '9' {
		// 0: (95) exit
		// R0 !read_ok
		return true
	}

	if strings.HasPrefix(line, "invalid bpf_context access") {
		// 0: (79) r6 = *(u64 *)(r1 +0)
		// func '__x64_sys_recvfrom' arg0 type FWD is not a struct
		// invalid bpf_context access off=0 size=8
		return true
	}

	return false
}

// Format the error.
//
// Understood verbs are %s and %v, which are equivalent to calling Error(). %v
// allows outputting additional information using the following flags:
//
//	%+<width>v: Output the first <width> lines, or all lines if no width is given.
//	%-<width>v: Output the last <width> lines, or all lines if no width is given.
//
// Use width to specify how many lines to output. Use the '-' flag to output
// lines from the end of the log instead of the beginning.
func (le *VerifierError) Format(f fmt.State, verb rune) {
	switch verb {
	case 's':
		_, _ = io.WriteString(f, le.Error())

	case 'v':
		n, haveWidth := f.Width()
		if !haveWidth || n > len(le.Log) {
			n = len(le.Log)
		}

		if !f.Flag('+') && !f.Flag('-') {
			if haveWidth {
				_, _ = io.WriteString(f, "%!v(BADWIDTH)")
				return
			}

			_, _ = io.WriteString(f, le.Error())
			return
		}

		if f.Flag('+') && f.Flag('-') {
			_, _ = io.WriteString(f, "%!v(BADFLAG)")
			return
		}

		fmt.Fprintf(f, "%s: %s:", le.source, le.Cause.Error())

		omitted := len(le.Log) - n
		lines := le.Log[:n]
		if f.Flag('-') {
			// Print last instead of first lines.
			lines = le.Log[len(le.Log)-n:]
			if omitted > 0 {
				fmt.Fprintf(f, "\n\t(%d line(s) omitted)", omitted)
			}
		}

		for _, line := range lines {
			fmt.Fprintf(f, "\n\t%s", line)
		}

		if !f.Flag('-') {
			if omitted > 0 {
				fmt.Fprintf(f, "\n\t(%d line(s) omitted)", omitted)
			}
		}

		if le.Truncated {
			fmt.Fprintf(f, "\n\t(truncated)")
		}

	default:
		fmt.Fprintf(f, "%%!%c(BADVERB)", verb)
	}
}
184
vendor/github.com/cilium/ebpf/internal/feature.go
generated
vendored
@@ -1,184 +0,0 @@
package internal

import (
	"errors"
	"fmt"
	"sync"
)

// ErrNotSupported indicates that a feature is not supported by the current kernel.
var ErrNotSupported = errors.New("not supported")

// UnsupportedFeatureError is returned by FeatureTest() functions.
type UnsupportedFeatureError struct {
	// The minimum Linux mainline version required for this feature.
	// Used for the error string, and for sanity checking during testing.
	MinimumVersion Version

	// The name of the feature that isn't supported.
	Name string
}

func (ufe *UnsupportedFeatureError) Error() string {
	if ufe.MinimumVersion.Unspecified() {
		return fmt.Sprintf("%s not supported", ufe.Name)
	}
	return fmt.Sprintf("%s not supported (requires >= %s)", ufe.Name, ufe.MinimumVersion)
}

// Is indicates that UnsupportedFeatureError is ErrNotSupported.
func (ufe *UnsupportedFeatureError) Is(target error) bool {
	return target == ErrNotSupported
}

// FeatureTest caches the result of a [FeatureTestFn].
//
// Fields should not be modified after creation.
type FeatureTest struct {
	// The name of the feature being detected.
	Name string
	// Version in the form Major.Minor[.Patch].
	Version string
	// The feature test itself.
	Fn FeatureTestFn

	mu     sync.RWMutex
	done   bool
	result error
}

// FeatureTestFn is used to determine whether the kernel supports
// a certain feature.
//
// The return values have the following semantics:
//
//	err == ErrNotSupported: the feature is not available
//	err == nil: the feature is available
//	err != nil: the test couldn't be executed
type FeatureTestFn func() error

// NewFeatureTest is a convenient way to create a single [FeatureTest].
func NewFeatureTest(name, version string, fn FeatureTestFn) func() error {
	ft := &FeatureTest{
		Name:    name,
		Version: version,
		Fn:      fn,
	}

	return ft.execute
}

// execute the feature test.
//
// The result is cached if the test is conclusive.
//
// See [FeatureTestFn] for the meaning of the returned error.
func (ft *FeatureTest) execute() error {
	ft.mu.RLock()
	result, done := ft.result, ft.done
	ft.mu.RUnlock()

	if done {
		return result
	}

	ft.mu.Lock()
	defer ft.mu.Unlock()

	// The test may have been executed by another caller while we were
	// waiting to acquire ft.mu.
	if ft.done {
		return ft.result
	}

	err := ft.Fn()
	if err == nil {
		ft.done = true
		return nil
	}

	if errors.Is(err, ErrNotSupported) {
		var v Version
		if ft.Version != "" {
			v, err = NewVersion(ft.Version)
			if err != nil {
				return fmt.Errorf("feature %s: %w", ft.Name, err)
			}
		}

		ft.done = true
		ft.result = &UnsupportedFeatureError{
			MinimumVersion: v,
			Name:           ft.Name,
		}

		return ft.result
	}

	// We couldn't execute the feature test to a point
	// where it could make a determination.
	// Don't cache the result, just return it.
	return fmt.Errorf("detect support for %s: %w", ft.Name, err)
}

// FeatureMatrix groups multiple related feature tests into a map.
//
// Useful when there is a small number of discrete features which are known
// at compile time.
//
// It must not be modified concurrently with calling [FeatureMatrix.Result].
type FeatureMatrix[K comparable] map[K]*FeatureTest

// Result returns the outcome of the feature test for the given key.
//
// It's safe to call this function concurrently.
func (fm FeatureMatrix[K]) Result(key K) error {
	ft, ok := fm[key]
	if !ok {
		return fmt.Errorf("no feature probe for %v", key)
	}

	return ft.execute()
}

// FeatureCache caches a potentially unlimited number of feature probes.
//
// Useful when there is a high cardinality for a feature test.
type FeatureCache[K comparable] struct {
	mu       sync.RWMutex
	newTest  func(K) *FeatureTest
	features map[K]*FeatureTest
}

func NewFeatureCache[K comparable](newTest func(K) *FeatureTest) *FeatureCache[K] {
	return &FeatureCache[K]{
		newTest:  newTest,
		features: make(map[K]*FeatureTest),
	}
}

func (fc *FeatureCache[K]) Result(key K) error {
	// NB: Executing the feature test happens without fc.mu taken.
	return fc.retrieve(key).execute()
}

func (fc *FeatureCache[K]) retrieve(key K) *FeatureTest {
	fc.mu.RLock()
	ft := fc.features[key]
	fc.mu.RUnlock()

	if ft != nil {
		return ft
	}

	fc.mu.Lock()
	defer fc.mu.Unlock()

	if ft := fc.features[key]; ft != nil {
		return ft
	}

	ft = fc.newTest(key)
	fc.features[key] = ft
	return ft
}
128
vendor/github.com/cilium/ebpf/internal/io.go
generated
vendored
@@ -1,128 +0,0 @@
package internal

import (
	"bufio"
	"bytes"
	"compress/gzip"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sync"
)

// NewBufferedSectionReader wraps an io.ReaderAt in an appropriately-sized
// buffered reader. It is a convenience function for reading subsections of
// ELF sections while minimizing the amount of read() syscalls made.
//
// Syscall overhead is non-negligible in continuous integration context
// where ELFs might be accessed over virtual filesystems with poor random
// access performance. Buffering reads makes sense because (sub)sections
// end up being read completely anyway.
//
// Use instead of the r.Seek() + io.LimitReader() pattern.
func NewBufferedSectionReader(ra io.ReaderAt, off, n int64) *bufio.Reader {
	// Clamp the size of the buffer to one page to avoid slurping large parts
	// of a file into memory. bufio.NewReader uses a hardcoded default buffer
	// of 4096. Allow arches with larger pages to allocate more, but don't
	// allocate a fixed 4k buffer if we only need to read a small segment.
	buf := n
	if ps := int64(os.Getpagesize()); n > ps {
		buf = ps
	}

	return bufio.NewReaderSize(io.NewSectionReader(ra, off, n), int(buf))
}

// DiscardZeroes makes sure that all written bytes are zero
// before discarding them.
type DiscardZeroes struct{}

func (DiscardZeroes) Write(p []byte) (int, error) {
	for _, b := range p {
		if b != 0 {
			return 0, errors.New("encountered non-zero byte")
		}
	}
	return len(p), nil
}

// ReadAllCompressed decompresses a gzipped file into memory.
func ReadAllCompressed(file string) ([]byte, error) {
	fh, err := os.Open(file)
	if err != nil {
		return nil, err
	}
	defer fh.Close()

	gz, err := gzip.NewReader(fh)
	if err != nil {
		return nil, err
	}
	defer gz.Close()

	return io.ReadAll(gz)
}

// ReadUint64FromFile reads a uint64 from a file.
//
// format specifies the contents of the file in fmt.Scanf syntax.
func ReadUint64FromFile(format string, path ...string) (uint64, error) {
	filename := filepath.Join(path...)
	data, err := os.ReadFile(filename)
	if err != nil {
		return 0, fmt.Errorf("reading file %q: %w", filename, err)
	}

	var value uint64
	n, err := fmt.Fscanf(bytes.NewReader(data), format, &value)
	if err != nil {
		return 0, fmt.Errorf("parsing file %q: %w", filename, err)
	}
	if n != 1 {
		return 0, fmt.Errorf("parsing file %q: expected 1 item, got %d", filename, n)
	}

	return value, nil
}

type uint64FromFileKey struct {
	format, path string
}

var uint64FromFileCache = struct {
	sync.RWMutex
	values map[uint64FromFileKey]uint64
}{
	values: map[uint64FromFileKey]uint64{},
}

// ReadUint64FromFileOnce is like ReadUint64FromFile but memoizes the result.
func ReadUint64FromFileOnce(format string, path ...string) (uint64, error) {
	filename := filepath.Join(path...)
	key := uint64FromFileKey{format, filename}

	uint64FromFileCache.RLock()
	if value, ok := uint64FromFileCache.values[key]; ok {
		uint64FromFileCache.RUnlock()
		return value, nil
	}
	uint64FromFileCache.RUnlock()

	value, err := ReadUint64FromFile(format, filename)
	if err != nil {
		return 0, err
	}

	uint64FromFileCache.Lock()
	defer uint64FromFileCache.Unlock()

	if value, ok := uint64FromFileCache.values[key]; ok {
		// Someone else got here before us, use what is cached.
		return value, nil
|
||||
}
|
||||
|
||||
uint64FromFileCache.values[key] = value
|
||||
return value, nil
|
||||
}
|
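The buffer-sizing rule in NewBufferedSectionReader above (size the buffer to the requested read, but clamp it to one page) can be sketched in isolation. `clampToPageSize` is a hypothetical helper name introduced here for illustration, not part of the library:

```go
package main

import "fmt"

// clampToPageSize mirrors the sizing logic: use the requested length n as the
// buffer size, but never allocate more than one page.
func clampToPageSize(n, pageSize int64) int64 {
	buf := n
	if n > pageSize {
		buf = pageSize
	}
	return buf
}

func main() {
	// A 100-byte section gets a 100-byte buffer; a 1 MiB section is capped
	// at the page size.
	fmt.Println(clampToPageSize(100, 4096), clampToPageSize(1<<20, 4096))
}
```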
267 vendor/github.com/cilium/ebpf/internal/kconfig/kconfig.go generated vendored
@@ -1,267 +0,0 @@
package kconfig

import (
	"bufio"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"math"
	"os"
	"strconv"
	"strings"

	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal"
)

// Find finds a kconfig file on the host.
// It first reads from /boot/config- of the current running kernel and tries
// /proc/config.gz if nothing was found in /boot.
// If neither file provides a kconfig, it returns an error.
func Find() (*os.File, error) {
	kernelRelease, err := internal.KernelRelease()
	if err != nil {
		return nil, fmt.Errorf("cannot get kernel release: %w", err)
	}

	path := "/boot/config-" + kernelRelease
	f, err := os.Open(path)
	if err == nil {
		return f, nil
	}

	f, err = os.Open("/proc/config.gz")
	if err == nil {
		return f, nil
	}

	return nil, fmt.Errorf("neither %s nor /proc/config.gz provide a kconfig", path)
}

// Parse parses the kconfig file for which a reader is given.
// All the CONFIG_* which are in filter and which are set will be
// put in the returned map as key with their corresponding value as map value.
// If filter is nil, no filtering will occur.
// If the kconfig file is not valid, an error will be returned.
func Parse(source io.ReaderAt, filter map[string]struct{}) (map[string]string, error) {
	var r io.Reader
	zr, err := gzip.NewReader(io.NewSectionReader(source, 0, math.MaxInt64))
	if err != nil {
		r = io.NewSectionReader(source, 0, math.MaxInt64)
	} else {
		// Source is gzip compressed, transparently decompress.
		r = zr
	}

	ret := make(map[string]string, len(filter))

	s := bufio.NewScanner(r)

	for s.Scan() {
		line := s.Bytes()
		err = processKconfigLine(line, ret, filter)
		if err != nil {
			return nil, fmt.Errorf("cannot parse line: %w", err)
		}

		if filter != nil && len(ret) == len(filter) {
			break
		}
	}

	if err := s.Err(); err != nil {
		return nil, fmt.Errorf("cannot parse: %w", err)
	}

	if zr != nil {
		return ret, zr.Close()
	}

	return ret, nil
}

// Golang translation of libbpf bpf_object__process_kconfig_line():
// https://github.com/libbpf/libbpf/blob/fbd60dbff51c870f5e80a17c4f2fd639eb80af90/src/libbpf.c#L1874
// It does the same checks but does not put the data inside the BPF map.
func processKconfigLine(line []byte, m map[string]string, filter map[string]struct{}) error {
	// Ignore empty lines and "# CONFIG_* is not set".
	if !bytes.HasPrefix(line, []byte("CONFIG_")) {
		return nil
	}

	key, value, found := bytes.Cut(line, []byte{'='})
	if !found {
		return fmt.Errorf("line %q does not contain separator '='", line)
	}

	if len(value) == 0 {
		return fmt.Errorf("line %q has no value", line)
	}

	if filter != nil {
		// NB: map[string(key)] gets special optimisation help from the compiler
		// and doesn't allocate. Don't turn this into a variable.
		_, ok := filter[string(key)]
		if !ok {
			return nil
		}
	}

	// This can seem odd, but libbpf only sets the value the first time the key is
	// met:
	// https://github.com/torvalds/linux/blob/0d85b27b0cc6/tools/lib/bpf/libbpf.c#L1906-L1908
	_, ok := m[string(key)]
	if !ok {
		m[string(key)] = string(value)
	}

	return nil
}

// PutValue translates the value given as parameter depending on the BTF
// type; the translated value is then written to the byte array.
func PutValue(data []byte, typ btf.Type, value string) error {
	typ = btf.UnderlyingType(typ)

	switch value {
	case "y", "n", "m":
		return putValueTri(data, typ, value)
	default:
		if strings.HasPrefix(value, `"`) {
			return putValueString(data, typ, value)
		}
		return putValueNumber(data, typ, value)
	}
}

// Golang translation of the libbpf_tristate enum:
// https://github.com/libbpf/libbpf/blob/fbd60dbff51c870f5e80a17c4f2fd639eb80af90/src/bpf_helpers.h#L169
type triState int

const (
	TriNo     triState = 0
	TriYes    triState = 1
	TriModule triState = 2
)

func putValueTri(data []byte, typ btf.Type, value string) error {
	switch v := typ.(type) {
	case *btf.Int:
		if v.Encoding != btf.Bool {
			return fmt.Errorf("cannot add tri value, expected btf.Bool, got: %v", v.Encoding)
		}

		if v.Size != 1 {
			return fmt.Errorf("cannot add tri value, expected size of 1 byte, got: %d", v.Size)
		}

		switch value {
		case "y":
			data[0] = 1
		case "n":
			data[0] = 0
		default:
			return fmt.Errorf("cannot use %q for btf.Bool", value)
		}
	case *btf.Enum:
		if v.Name != "libbpf_tristate" {
			return fmt.Errorf("cannot use enum %q, only libbpf_tristate is supported", v.Name)
		}

		var tri triState
		switch value {
		case "y":
			tri = TriYes
		case "m":
			tri = TriModule
		case "n":
			tri = TriNo
		default:
			return fmt.Errorf("value %q is not supported for libbpf_tristate", value)
		}

		internal.NativeEndian.PutUint64(data, uint64(tri))
	default:
		return fmt.Errorf("cannot add number value, expected btf.Int or btf.Enum, got: %T", v)
	}

	return nil
}

func putValueString(data []byte, typ btf.Type, value string) error {
	array, ok := typ.(*btf.Array)
	if !ok {
		return fmt.Errorf("cannot add string value, expected btf.Array, got %T", array)
	}

	contentType, ok := btf.UnderlyingType(array.Type).(*btf.Int)
	if !ok {
		return fmt.Errorf("cannot add string value, expected array of btf.Int, got %T", contentType)
	}

	// Any Int, which is not bool, of one byte could be used to store char:
	// https://github.com/torvalds/linux/blob/1a5304fecee5/tools/lib/bpf/libbpf.c#L3637-L3638
	if contentType.Size != 1 && contentType.Encoding != btf.Bool {
		return fmt.Errorf("cannot add string value, expected array of btf.Int of size 1, got array of btf.Int of size: %v", contentType.Size)
	}

	if !strings.HasPrefix(value, `"`) || !strings.HasSuffix(value, `"`) {
		return fmt.Errorf(`value %q must start and finish with '"'`, value)
	}

	str := strings.Trim(value, `"`)

	// We need to trim the string if the BPF array is smaller.
	if uint32(len(str)) >= array.Nelems {
		str = str[:array.Nelems]
	}

	// Write the string content to .kconfig.
	copy(data, str)

	return nil
}

func putValueNumber(data []byte, typ btf.Type, value string) error {
	integer, ok := typ.(*btf.Int)
	if !ok {
		return fmt.Errorf("cannot add number value, expected *btf.Int, got: %T", integer)
	}

	size := integer.Size
	sizeInBits := size * 8

	var n uint64
	var err error
	if integer.Encoding == btf.Signed {
		parsed, e := strconv.ParseInt(value, 0, int(sizeInBits))

		n = uint64(parsed)
		err = e
	} else {
		parsed, e := strconv.ParseUint(value, 0, int(sizeInBits))

		n = uint64(parsed)
		err = e
	}

	if err != nil {
		return fmt.Errorf("cannot parse value: %w", err)
	}

	switch size {
	case 1:
		data[0] = byte(n)
	case 2:
		internal.NativeEndian.PutUint16(data, uint16(n))
	case 4:
		internal.NativeEndian.PutUint32(data, uint32(n))
	case 8:
		internal.NativeEndian.PutUint64(data, uint64(n))
	default:
		return fmt.Errorf("size (%d) is not valid, expected: 1, 2, 4 or 8", size)
	}

	return nil
}
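A minimal standalone sketch of the per-line rules in processKconfigLine above (ignore lines that don't start with CONFIG_, split on '=', keep only the first value seen for a key). Unlike the library function, this sketch silently skips malformed lines instead of returning an error; `parseKconfig` is a hypothetical name:

```go
package main

import (
	"bytes"
	"fmt"
)

// parseKconfig applies the same per-line rules as processKconfigLine:
// non-CONFIG_ lines (including "# CONFIG_* is not set" comments) are ignored,
// and only the first value for a key is kept.
func parseKconfig(src []byte) map[string]string {
	m := make(map[string]string)
	for _, line := range bytes.Split(src, []byte{'\n'}) {
		if !bytes.HasPrefix(line, []byte("CONFIG_")) {
			continue
		}
		key, value, found := bytes.Cut(line, []byte{'='})
		if !found || len(value) == 0 {
			continue
		}
		if _, ok := m[string(key)]; !ok {
			m[string(key)] = string(value)
		}
	}
	return m
}

func main() {
	src := []byte("# CONFIG_FOO is not set\nCONFIG_BPF=y\nCONFIG_BPF=n\n")
	fmt.Println(parseKconfig(src))
}
```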
26 vendor/github.com/cilium/ebpf/internal/memoize.go generated vendored
@@ -1,26 +0,0 @@
package internal

import (
	"sync"
)

type memoizedFunc[T any] struct {
	once   sync.Once
	fn     func() (T, error)
	result T
	err    error
}

func (mf *memoizedFunc[T]) do() (T, error) {
	mf.once.Do(func() {
		mf.result, mf.err = mf.fn()
	})
	return mf.result, mf.err
}

// Memoize the result of a function call.
//
// fn is only ever called once, even if it returns an error.
func Memoize[T any](fn func() (T, error)) func() (T, error) {
	return (&memoizedFunc[T]{fn: fn}).do
}
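The generic above lives in an internal package; a self-contained sketch of the same sync.Once pattern, with a counter to show that fn runs exactly once even when it returns an error:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// memoize returns a function that calls fn at most once and replays the
// cached result (and error) on every later call.
func memoize[T any](fn func() (T, error)) func() (T, error) {
	var (
		once   sync.Once
		result T
		err    error
	)
	return func() (T, error) {
		once.Do(func() { result, err = fn() })
		return result, err
	}
}

var calls int

func flaky() (int, error) {
	calls++
	return 0, errors.New("boom")
}

func main() {
	f := memoize(flaky)
	f() // runs flaky, caches the error
	f() // replays the cached error without calling flaky again
	fmt.Println(calls)
}
```

Note that, as in the vendored code, a failed call is cached too: memoization is not a retry mechanism.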
97 vendor/github.com/cilium/ebpf/internal/output.go generated vendored
@@ -1,97 +0,0 @@
package internal

import (
	"bytes"
	"errors"
	"go/format"
	"go/scanner"
	"io"
	"reflect"
	"strings"
	"unicode"
)

// Identifier turns a C style type or field name into an exportable Go equivalent.
func Identifier(str string) string {
	prev := rune(-1)
	return strings.Map(func(r rune) rune {
		// See https://golang.org/ref/spec#Identifiers
		switch {
		case unicode.IsLetter(r):
			if prev == -1 {
				r = unicode.ToUpper(r)
			}

		case r == '_':
			switch {
			// The previous rune was deleted, or we are at the
			// beginning of the string.
			case prev == -1:
				fallthrough

			// The previous rune is a lower case letter or a digit.
			case unicode.IsDigit(prev) || (unicode.IsLetter(prev) && unicode.IsLower(prev)):
				// delete the current rune, and force the
				// next character to be uppercased.
				r = -1
			}

		case unicode.IsDigit(r):

		default:
			// Delete the current rune. prev is unchanged.
			return -1
		}

		prev = r
		return r
	}, str)
}

// WriteFormatted outputs a formatted src into out.
//
// If formatting fails it returns an informative error message.
func WriteFormatted(src []byte, out io.Writer) error {
	formatted, err := format.Source(src)
	if err == nil {
		_, err = out.Write(formatted)
		return err
	}

	var el scanner.ErrorList
	if !errors.As(err, &el) {
		return err
	}

	var nel scanner.ErrorList
	for _, err := range el {
		if !err.Pos.IsValid() {
			nel = append(nel, err)
			continue
		}

		buf := src[err.Pos.Offset:]
		nl := bytes.IndexRune(buf, '\n')
		if nl == -1 {
			nel = append(nel, err)
			continue
		}

		err.Msg += ": " + string(buf[:nl])
		nel = append(nel, err)
	}

	return nel
}

// GoTypeName is like %T, but elides the package name.
//
// Pointers to a type are peeled off.
func GoTypeName(t any) string {
	rT := reflect.TypeOf(t)
	for rT.Kind() == reflect.Pointer {
		rT = rT.Elem()
	}
	// Doesn't return the correct Name for generic types due to https://github.com/golang/go/issues/55924
	return rT.Name()
}
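The rune-mapping rules in Identifier are easiest to verify on concrete inputs. This sketch repeats the same strings.Map logic under the hypothetical name `identifier`, with a small driver (returning -1 from the mapping function drops the rune):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// identifier turns a C style name into an exported Go identifier, following
// the same rules as Identifier above: uppercase at the start and after a
// deleted '_', keep digits, drop anything else.
func identifier(str string) string {
	prev := rune(-1)
	return strings.Map(func(r rune) rune {
		switch {
		case unicode.IsLetter(r):
			if prev == -1 {
				r = unicode.ToUpper(r)
			}
		case r == '_':
			switch {
			case prev == -1:
				fallthrough
			case unicode.IsDigit(prev) || (unicode.IsLetter(prev) && unicode.IsLower(prev)):
				r = -1 // delete, and uppercase the next letter
			}
		case unicode.IsDigit(r):
		default:
			return -1 // delete; prev is unchanged
		}
		prev = r
		return r
	}, str)
}

func main() {
	fmt.Println(identifier("foo_bar"), identifier("uint64_t"))
	// → FooBar Uint64T
}
```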
65 vendor/github.com/cilium/ebpf/internal/pinning.go generated vendored
@@ -1,65 +0,0 @@
package internal

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"runtime"

	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

func Pin(currentPath, newPath string, fd *sys.FD) error {
	if newPath == "" {
		return errors.New("given pinning path cannot be empty")
	}
	if currentPath == newPath {
		return nil
	}

	fsType, err := FSType(filepath.Dir(newPath))
	if err != nil {
		return err
	}
	if fsType != unix.BPF_FS_MAGIC {
		return fmt.Errorf("%s is not on a bpf filesystem", newPath)
	}

	defer runtime.KeepAlive(fd)

	if currentPath == "" {
		return sys.ObjPin(&sys.ObjPinAttr{
			Pathname: sys.NewStringPointer(newPath),
			BpfFd:    fd.Uint(),
		})
	}

	// Renameat2 is used instead of os.Rename to disallow the new path replacing
	// an existing path.
	err = unix.Renameat2(unix.AT_FDCWD, currentPath, unix.AT_FDCWD, newPath, unix.RENAME_NOREPLACE)
	if err == nil {
		// Object is now moved to the new pinning path.
		return nil
	}
	if !os.IsNotExist(err) {
		return fmt.Errorf("unable to move pinned object to new path %v: %w", newPath, err)
	}
	// Internal state not in sync with the file system so let's fix it.
	return sys.ObjPin(&sys.ObjPinAttr{
		Pathname: sys.NewStringPointer(newPath),
		BpfFd:    fd.Uint(),
	})
}

func Unpin(pinnedPath string) error {
	if pinnedPath == "" {
		return nil
	}
	err := os.Remove(pinnedPath)
	if err == nil || os.IsNotExist(err) {
		return nil
	}
	return err
}
43 vendor/github.com/cilium/ebpf/internal/platform.go generated vendored
@@ -1,43 +0,0 @@
package internal

import (
	"runtime"
)

// PlatformPrefix returns the platform-dependent syscall wrapper prefix used by
// the linux kernel.
//
// Based on https://github.com/golang/go/blob/master/src/go/build/syslist.go
// and https://github.com/libbpf/libbpf/blob/master/src/libbpf.c#L10047
func PlatformPrefix() string {
	switch runtime.GOARCH {
	case "386":
		return "__ia32_"
	case "amd64", "amd64p32":
		return "__x64_"

	case "arm", "armbe":
		return "__arm_"
	case "arm64", "arm64be":
		return "__arm64_"

	case "mips", "mipsle", "mips64", "mips64le", "mips64p32", "mips64p32le":
		return "__mips_"

	case "s390":
		return "__s390_"
	case "s390x":
		return "__s390x_"

	case "riscv", "riscv64":
		return "__riscv_"

	case "ppc":
		return "__powerpc_"
	case "ppc64", "ppc64le":
		return "__powerpc64_"

	default:
		return ""
	}
}
11 vendor/github.com/cilium/ebpf/internal/prog.go generated vendored
@@ -1,11 +0,0 @@
package internal

// EmptyBPFContext is the smallest-possible BPF input context to be used for
// invoking `Program.{Run,Benchmark,Test}`.
//
// Programs require a context input buffer of at least 15 bytes. Looking in
// net/bpf/test_run.c, bpf_test_init() requires that the input is at least
// ETH_HLEN (14) bytes. As of Linux commit fd18942 ("bpf: Don't redirect packets
// with invalid pkt_len"), it also requires the skb to be non-empty after
// removing the Layer 2 header.
var EmptyBPFContext = make([]byte, 15)
23 vendor/github.com/cilium/ebpf/internal/statfs.go generated vendored
@@ -1,23 +0,0 @@
package internal

import (
	"unsafe"

	"github.com/cilium/ebpf/internal/unix"
)

func FSType(path string) (int64, error) {
	var statfs unix.Statfs_t
	if err := unix.Statfs(path, &statfs); err != nil {
		return 0, err
	}

	fsType := int64(statfs.Type)
	if unsafe.Sizeof(statfs.Type) == 4 {
		// We're on a 32 bit arch, where statfs.Type is int32. bpfFSType is a
		// negative number when interpreted as int32 so we need to cast via
		// uint32 to avoid sign extension.
		fsType = int64(uint32(statfs.Type))
	}
	return fsType, nil
}
6 vendor/github.com/cilium/ebpf/internal/sys/doc.go generated vendored
@@ -1,6 +0,0 @@
// Package sys contains bindings for the BPF syscall.
package sys

// Regenerate types.go by invoking go generate in the current directory.

//go:generate go run github.com/cilium/ebpf/internal/cmd/gentypes ../../btf/testdata/vmlinux.btf.gz
133 vendor/github.com/cilium/ebpf/internal/sys/fd.go generated vendored
@@ -1,133 +0,0 @@
package sys

import (
	"fmt"
	"math"
	"os"
	"runtime"
	"strconv"

	"github.com/cilium/ebpf/internal/unix"
)

var ErrClosedFd = unix.EBADF

type FD struct {
	raw int
}

func newFD(value int) *FD {
	if onLeakFD != nil {
		// Attempt to store the caller's stack for the given fd value.
		// Panic if fds contains an existing stack for the fd.
		old, exist := fds.LoadOrStore(value, callersFrames())
		if exist {
			f := old.(*runtime.Frames)
			panic(fmt.Sprintf("found existing stack for fd %d:\n%s", value, FormatFrames(f)))
		}
	}

	fd := &FD{value}
	runtime.SetFinalizer(fd, (*FD).finalize)
	return fd
}

// finalize is set as the FD's runtime finalizer and
// sends a leak trace before calling FD.Close().
func (fd *FD) finalize() {
	if fd.raw < 0 {
		return
	}

	// Invoke the fd leak callback. Calls LoadAndDelete to guarantee the callback
	// is invoked at most once for one sys.FD allocation, runtime.Frames can only
	// be unwound once.
	f, ok := fds.LoadAndDelete(fd.Int())
	if ok && onLeakFD != nil {
		onLeakFD(f.(*runtime.Frames))
	}

	_ = fd.Close()
}

// NewFD wraps a raw fd with a finalizer.
//
// You must not use the raw fd after calling this function, since the underlying
// file descriptor number may change. This is because the BPF UAPI assumes that
// zero is not a valid fd value.
func NewFD(value int) (*FD, error) {
	if value < 0 {
		return nil, fmt.Errorf("invalid fd %d", value)
	}

	fd := newFD(value)
	if value != 0 {
		return fd, nil
	}

	dup, err := fd.Dup()
	_ = fd.Close()
	return dup, err
}

func (fd *FD) String() string {
	return strconv.FormatInt(int64(fd.raw), 10)
}

func (fd *FD) Int() int {
	return fd.raw
}

func (fd *FD) Uint() uint32 {
	if fd.raw < 0 || int64(fd.raw) > math.MaxUint32 {
		// Best effort: this is the number most likely to be an invalid file
		// descriptor. It is equal to -1 (on two's complement arches).
		return math.MaxUint32
	}
	return uint32(fd.raw)
}

func (fd *FD) Close() error {
	if fd.raw < 0 {
		return nil
	}

	return unix.Close(fd.disown())
}

func (fd *FD) disown() int {
	value := int(fd.raw)
	fds.Delete(int(value))
	fd.raw = -1

	runtime.SetFinalizer(fd, nil)
	return value
}

func (fd *FD) Dup() (*FD, error) {
	if fd.raw < 0 {
		return nil, ErrClosedFd
	}

	// Always require the fd to be larger than zero: the BPF API treats the value
	// as "no argument provided".
	dup, err := unix.FcntlInt(uintptr(fd.raw), unix.F_DUPFD_CLOEXEC, 1)
	if err != nil {
		return nil, fmt.Errorf("can't dup fd: %v", err)
	}

	return newFD(dup), nil
}

// File takes ownership of FD and turns it into an [*os.File].
//
// You must not use the FD after the call returns.
//
// Returns nil if the FD is not valid.
func (fd *FD) File(name string) *os.File {
	if fd.raw < 0 {
		return nil
	}

	return os.NewFile(uintptr(fd.disown()), name)
}
93 vendor/github.com/cilium/ebpf/internal/sys/fd_trace.go generated vendored
@@ -1,93 +0,0 @@
package sys

import (
	"bytes"
	"fmt"
	"runtime"
	"sync"
)

// OnLeakFD controls tracing [FD] lifetime to detect resources that are not
// closed by Close().
//
// If fn is not nil, tracing is enabled for all FDs created going forward. fn is
// invoked for all FDs that are closed by the garbage collector instead of an
// explicit Close() by a caller. Calling OnLeakFD twice with a non-nil fn
// (without disabling tracing in the meantime) will cause a panic.
//
// If fn is nil, tracing will be disabled. Any FDs that have not been closed are
// considered to be leaked, fn will be invoked for them, and the process will be
// terminated.
//
// fn will be invoked at most once for every unique sys.FD allocation since a
// runtime.Frames can only be unwound once.
func OnLeakFD(fn func(*runtime.Frames)) {
	// Enable leak tracing if new fn is provided.
	if fn != nil {
		if onLeakFD != nil {
			panic("OnLeakFD called twice with non-nil fn")
		}

		onLeakFD = fn
		return
	}

	// fn is nil past this point.

	if onLeakFD == nil {
		return
	}

	// Call onLeakFD for all open fds.
	if fs := flushFrames(); len(fs) != 0 {
		for _, f := range fs {
			onLeakFD(f)
		}
	}

	onLeakFD = nil
}

var onLeakFD func(*runtime.Frames)

// fds is a registry of all file descriptors wrapped into sys.fds that were
// created while an fd tracer was active.
var fds sync.Map // map[int]*runtime.Frames

// flushFrames removes all elements from fds and returns them as a slice. This
// deals with the fact that a runtime.Frames can only be unwound once using
// Next().
func flushFrames() []*runtime.Frames {
	var frames []*runtime.Frames
	fds.Range(func(key, value any) bool {
		frames = append(frames, value.(*runtime.Frames))
		fds.Delete(key)
		return true
	})
	return frames
}

func callersFrames() *runtime.Frames {
	c := make([]uintptr, 32)

	// Skip runtime.Callers and this function.
	i := runtime.Callers(2, c)
	if i == 0 {
		return nil
	}

	return runtime.CallersFrames(c)
}

// FormatFrames formats a runtime.Frames as a human-readable string.
func FormatFrames(fs *runtime.Frames) string {
	var b bytes.Buffer
	for {
		f, more := fs.Next()
		b.WriteString(fmt.Sprintf("\t%s+%#x\n\t\t%s:%d\n", f.Function, f.PC-f.Entry, f.File, f.Line))
		if !more {
			break
		}
	}
	return b.String()
}
49 vendor/github.com/cilium/ebpf/internal/sys/mapflags_string.go generated vendored
@@ -1,49 +0,0 @@
// Code generated by "stringer -type MapFlags"; DO NOT EDIT.

package sys

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[BPF_F_NO_PREALLOC-1]
	_ = x[BPF_F_NO_COMMON_LRU-2]
	_ = x[BPF_F_NUMA_NODE-4]
	_ = x[BPF_F_RDONLY-8]
	_ = x[BPF_F_WRONLY-16]
	_ = x[BPF_F_STACK_BUILD_ID-32]
	_ = x[BPF_F_ZERO_SEED-64]
	_ = x[BPF_F_RDONLY_PROG-128]
	_ = x[BPF_F_WRONLY_PROG-256]
	_ = x[BPF_F_CLONE-512]
	_ = x[BPF_F_MMAPABLE-1024]
	_ = x[BPF_F_PRESERVE_ELEMS-2048]
	_ = x[BPF_F_INNER_MAP-4096]
}

const _MapFlags_name = "BPF_F_NO_PREALLOCBPF_F_NO_COMMON_LRUBPF_F_NUMA_NODEBPF_F_RDONLYBPF_F_WRONLYBPF_F_STACK_BUILD_IDBPF_F_ZERO_SEEDBPF_F_RDONLY_PROGBPF_F_WRONLY_PROGBPF_F_CLONEBPF_F_MMAPABLEBPF_F_PRESERVE_ELEMSBPF_F_INNER_MAP"

var _MapFlags_map = map[MapFlags]string{
	1:    _MapFlags_name[0:17],
	2:    _MapFlags_name[17:36],
	4:    _MapFlags_name[36:51],
	8:    _MapFlags_name[51:63],
	16:   _MapFlags_name[63:75],
	32:   _MapFlags_name[75:95],
	64:   _MapFlags_name[95:110],
	128:  _MapFlags_name[110:127],
	256:  _MapFlags_name[127:144],
	512:  _MapFlags_name[144:155],
	1024: _MapFlags_name[155:169],
	2048: _MapFlags_name[169:189],
	4096: _MapFlags_name[189:204],
}

func (i MapFlags) String() string {
	if str, ok := _MapFlags_map[i]; ok {
		return str
	}
	return "MapFlags(" + strconv.FormatInt(int64(i), 10) + ")"
}
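The generated stringer above avoids one string allocation per constant by slicing a single concatenated backing string. The same trick in miniature, with a hypothetical `Flag` type and made-up values:

```go
package main

import "fmt"

type Flag uint32

// One backing string plus index ranges into it, as `stringer` generates:
// each name is a substring of _Flag_name rather than its own allocation.
const _Flag_name = "FlagAFlagBFlagC"

var _Flag_map = map[Flag]string{
	1: _Flag_name[0:5],
	2: _Flag_name[5:10],
	4: _Flag_name[10:15],
}

func (f Flag) String() string {
	if s, ok := _Flag_map[f]; ok {
		return s
	}
	return fmt.Sprintf("Flag(%d)", uint32(f))
}

func main() {
	fmt.Println(Flag(2), Flag(8))
	// → FlagB Flag(8)
}
```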
52 vendor/github.com/cilium/ebpf/internal/sys/ptr.go generated vendored
@@ -1,52 +0,0 @@
package sys

import (
	"unsafe"

	"github.com/cilium/ebpf/internal/unix"
)

// NewPointer creates a 64-bit pointer from an unsafe Pointer.
func NewPointer(ptr unsafe.Pointer) Pointer {
	return Pointer{ptr: ptr}
}

// NewSlicePointer creates a 64-bit pointer from a byte slice.
func NewSlicePointer(buf []byte) Pointer {
	if len(buf) == 0 {
		return Pointer{}
	}

	return Pointer{ptr: unsafe.Pointer(&buf[0])}
}

// NewSlicePointerLen creates a 64-bit pointer from a byte slice.
//
// Useful to assign both the pointer and the length in one go.
func NewSlicePointerLen(buf []byte) (Pointer, uint32) {
	return NewSlicePointer(buf), uint32(len(buf))
}

// NewStringPointer creates a 64-bit pointer from a string.
func NewStringPointer(str string) Pointer {
	p, err := unix.BytePtrFromString(str)
	if err != nil {
		return Pointer{}
	}

	return Pointer{ptr: unsafe.Pointer(p)}
}

// NewStringSlicePointer allocates an array of Pointers to each string in the
// given slice of strings and returns a 64-bit pointer to the start of the
// resulting array.
//
// Use this function to pass arrays of strings as syscall arguments.
func NewStringSlicePointer(strings []string) Pointer {
	sp := make([]Pointer, 0, len(strings))
	for _, s := range strings {
		sp = append(sp, NewStringPointer(s))
	}

	return Pointer{ptr: unsafe.Pointer(&sp[0])}
}
14 vendor/github.com/cilium/ebpf/internal/sys/ptr_32_be.go generated vendored
@@ -1,14 +0,0 @@
//go:build armbe || mips || mips64p32

package sys

import (
	"unsafe"
)

// Pointer wraps an unsafe.Pointer to be 64bit to
// conform to the syscall specification.
type Pointer struct {
	pad uint32
	ptr unsafe.Pointer
}
14 vendor/github.com/cilium/ebpf/internal/sys/ptr_32_le.go generated vendored
@@ -1,14 +0,0 @@
//go:build 386 || amd64p32 || arm || mipsle || mips64p32le

package sys

import (
	"unsafe"
)

// Pointer wraps an unsafe.Pointer to be 64bit to
// conform to the syscall specification.
type Pointer struct {
	ptr unsafe.Pointer
	pad uint32
}
13 vendor/github.com/cilium/ebpf/internal/sys/ptr_64.go generated vendored
@@ -1,13 +0,0 @@
//go:build !386 && !amd64p32 && !arm && !mipsle && !mips64p32le && !armbe && !mips && !mips64p32

package sys

import (
	"unsafe"
)

// Pointer wraps an unsafe.Pointer to be 64bit to
// conform to the syscall specification.
type Pointer struct {
	ptr unsafe.Pointer
}
||||
83
vendor/github.com/cilium/ebpf/internal/sys/signals.go
generated
vendored
83
vendor/github.com/cilium/ebpf/internal/sys/signals.go
generated
vendored
@@ -1,83 +0,0 @@
package sys

import (
	"fmt"
	"runtime"
	"unsafe"

	"github.com/cilium/ebpf/internal/unix"
)

// A sigset containing only SIGPROF.
var profSet unix.Sigset_t

func init() {
	// See sigsetAdd for details on the implementation. Open coded here so
	// that the compiler will check the constant calculations for us.
	profSet.Val[sigprofBit/wordBits] |= 1 << (sigprofBit % wordBits)
}

// maskProfilerSignal locks the calling goroutine to its underlying OS thread
// and adds SIGPROF to the thread's signal mask. This prevents pprof from
// interrupting expensive syscalls like e.g. BPF_PROG_LOAD.
//
// The caller must defer unmaskProfilerSignal() to reverse the operation.
func maskProfilerSignal() {
	runtime.LockOSThread()

	if err := unix.PthreadSigmask(unix.SIG_BLOCK, &profSet, nil); err != nil {
		runtime.UnlockOSThread()
		panic(fmt.Errorf("masking profiler signal: %w", err))
	}
}

// unmaskProfilerSignal removes SIGPROF from the underlying thread's signal
// mask, allowing it to be interrupted for profiling once again.
//
// It also unlocks the current goroutine from its underlying OS thread.
func unmaskProfilerSignal() {
	defer runtime.UnlockOSThread()

	if err := unix.PthreadSigmask(unix.SIG_UNBLOCK, &profSet, nil); err != nil {
		panic(fmt.Errorf("unmasking profiler signal: %w", err))
	}
}

const (
	// Signal is the nth bit in the bitfield.
	sigprofBit = int(unix.SIGPROF - 1)
	// The number of bits in one Sigset_t word.
	wordBits = int(unsafe.Sizeof(unix.Sigset_t{}.Val[0])) * 8
)

// sigsetAdd adds signal to set.
//
// Note: Sigset_t.Val's value type is uint32 or uint64 depending on the arch.
// This function must be able to deal with both and so must avoid any direct
// references to u32 or u64 types.
func sigsetAdd(set *unix.Sigset_t, signal unix.Signal) error {
	if signal < 1 {
		return fmt.Errorf("signal %d must be larger than 0", signal)
	}

	// For amd64, runtime.sigaddset() performs the following operation:
	// set[(signal-1)/32] |= 1 << ((uint32(signal) - 1) & 31)
	//
	// This trick depends on sigset being two u32's, causing a signal in the
	// bottom 31 bits to be written to the low word if bit 32 is low, or the
	// high word if bit 32 is high.

	// Signal is the nth bit in the bitfield.
	bit := int(signal - 1)
	// Word within the sigset the bit needs to be written to.
	word := bit / wordBits

	if word >= len(set.Val) {
		return fmt.Errorf("signal %d does not fit within unix.Sigset_t", signal)
	}

	// Write the signal bit into its corresponding word at the corrected offset.
	set.Val[word] |= 1 << (bit % wordBits)

	return nil
}
178
vendor/github.com/cilium/ebpf/internal/sys/syscall.go
generated
vendored
@@ -1,178 +0,0 @@
package sys

import (
	"runtime"
	"syscall"
	"unsafe"

	"github.com/cilium/ebpf/internal/unix"
)

// ENOTSUPP is a Linux internal error code that has leaked into UAPI.
//
// It is not the same as ENOTSUP or EOPNOTSUPP.
var ENOTSUPP = syscall.Errno(524)

// BPF wraps SYS_BPF.
//
// Any pointers contained in attr must use the Pointer type from this package.
func BPF(cmd Cmd, attr unsafe.Pointer, size uintptr) (uintptr, error) {
	// Prevent the Go profiler from repeatedly interrupting the verifier,
	// which could otherwise lead to a livelock due to receiving EAGAIN.
	if cmd == BPF_PROG_LOAD || cmd == BPF_PROG_RUN {
		maskProfilerSignal()
		defer unmaskProfilerSignal()
	}

	for {
		r1, _, errNo := unix.Syscall(unix.SYS_BPF, uintptr(cmd), uintptr(attr), size)
		runtime.KeepAlive(attr)

		// As of ~4.20 the verifier can be interrupted by a signal,
		// and returns EAGAIN in that case.
		if errNo == unix.EAGAIN && cmd == BPF_PROG_LOAD {
			continue
		}

		var err error
		if errNo != 0 {
			err = wrappedErrno{errNo}
		}

		return r1, err
	}
}

// Info is implemented by all structs that can be passed to the ObjInfo syscall.
//
// MapInfo
// ProgInfo
// LinkInfo
// BtfInfo
type Info interface {
	info() (unsafe.Pointer, uint32)
}

var _ Info = (*MapInfo)(nil)

func (i *MapInfo) info() (unsafe.Pointer, uint32) {
	return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i))
}

var _ Info = (*ProgInfo)(nil)

func (i *ProgInfo) info() (unsafe.Pointer, uint32) {
	return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i))
}

var _ Info = (*LinkInfo)(nil)

func (i *LinkInfo) info() (unsafe.Pointer, uint32) {
	return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i))
}

var _ Info = (*BtfInfo)(nil)

func (i *BtfInfo) info() (unsafe.Pointer, uint32) {
	return unsafe.Pointer(i), uint32(unsafe.Sizeof(*i))
}

// ObjInfo retrieves information about a BPF Fd.
//
// info may be one of MapInfo, ProgInfo, LinkInfo and BtfInfo.
func ObjInfo(fd *FD, info Info) error {
	ptr, len := info.info()
	err := ObjGetInfoByFd(&ObjGetInfoByFdAttr{
		BpfFd:   fd.Uint(),
		InfoLen: len,
		Info:    NewPointer(ptr),
	})
	runtime.KeepAlive(fd)
	return err
}

// ObjName is a null-terminated string made up of
// 'A-Za-z0-9_' characters.
type ObjName [unix.BPF_OBJ_NAME_LEN]byte

// NewObjName truncates the result if it is too long.
func NewObjName(name string) ObjName {
	var result ObjName
	copy(result[:unix.BPF_OBJ_NAME_LEN-1], name)
	return result
}

// LogLevel controls the verbosity of the kernel's eBPF program verifier.
type LogLevel uint32

const (
	BPF_LOG_LEVEL1 LogLevel = 1 << iota
	BPF_LOG_LEVEL2
	BPF_LOG_STATS
)

// LinkID uniquely identifies a bpf_link.
type LinkID uint32

// BTFID uniquely identifies a BTF blob loaded into the kernel.
type BTFID uint32

// TypeID identifies a type in a BTF blob.
type TypeID uint32

// MapFlags control map behaviour.
type MapFlags uint32

//go:generate stringer -type MapFlags

const (
	BPF_F_NO_PREALLOC MapFlags = 1 << iota
	BPF_F_NO_COMMON_LRU
	BPF_F_NUMA_NODE
	BPF_F_RDONLY
	BPF_F_WRONLY
	BPF_F_STACK_BUILD_ID
	BPF_F_ZERO_SEED
	BPF_F_RDONLY_PROG
	BPF_F_WRONLY_PROG
	BPF_F_CLONE
	BPF_F_MMAPABLE
	BPF_F_PRESERVE_ELEMS
	BPF_F_INNER_MAP
)

// wrappedErrno wraps syscall.Errno to prevent direct comparisons with
// syscall.E* or unix.E* constants.
//
// You should never export an error of this type.
type wrappedErrno struct {
	syscall.Errno
}

func (we wrappedErrno) Unwrap() error {
	return we.Errno
}

func (we wrappedErrno) Error() string {
	if we.Errno == ENOTSUPP {
		return "operation not supported"
	}
	return we.Errno.Error()
}

type syscallError struct {
	error
	errno syscall.Errno
}

func Error(err error, errno syscall.Errno) error {
	return &syscallError{err, errno}
}

func (se *syscallError) Is(target error) bool {
	return target == se.error
}

func (se *syscallError) Unwrap() error {
	return se.errno
}
1117
vendor/github.com/cilium/ebpf/internal/sys/types.go
generated
vendored
File diff suppressed because it is too large
359
vendor/github.com/cilium/ebpf/internal/tracefs/kprobe.go
generated
vendored
@@ -1,359 +0,0 @@
package tracefs

import (
	"crypto/rand"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"syscall"

	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/unix"
)

var (
	ErrInvalidInput = errors.New("invalid input")

	ErrInvalidMaxActive = errors.New("can only set maxactive on kretprobes")
)

//go:generate stringer -type=ProbeType -linecomment

type ProbeType uint8

const (
	Kprobe ProbeType = iota // kprobe
	Uprobe                  // uprobe
)

func (pt ProbeType) eventsFile() (*os.File, error) {
	path, err := sanitizeTracefsPath(fmt.Sprintf("%s_events", pt.String()))
	if err != nil {
		return nil, err
	}

	return os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0666)
}

type ProbeArgs struct {
	Type                         ProbeType
	Symbol, Group, Path          string
	Offset, RefCtrOffset, Cookie uint64
	Pid, RetprobeMaxActive       int
	Ret                          bool
}

// RandomGroup generates a pseudorandom string for use as a tracefs group name.
// Returns an error when the output string would exceed 63 characters (kernel
// limitation), when rand.Read() fails or when prefix contains characters not
// allowed by validIdentifier.
func RandomGroup(prefix string) (string, error) {
	if !validIdentifier(prefix) {
		return "", fmt.Errorf("prefix '%s' must be alphanumeric or underscore: %w", prefix, ErrInvalidInput)
	}

	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		return "", fmt.Errorf("reading random bytes: %w", err)
	}

	group := fmt.Sprintf("%s_%x", prefix, b)
	if len(group) > 63 {
		return "", fmt.Errorf("group name '%s' cannot be longer than 63 characters: %w", group, ErrInvalidInput)
	}

	return group, nil
}

// validIdentifier implements the equivalent of a regex match
// against "^[a-zA-Z_][0-9a-zA-Z_]*$".
//
// Trace event groups, names and kernel symbols must adhere to this set
// of characters. Non-empty, first character must not be a number, all
// characters must be alphanumeric or underscore.
func validIdentifier(s string) bool {
	if len(s) < 1 {
		return false
	}
	for i, c := range []byte(s) {
		switch {
		case c >= 'a' && c <= 'z':
		case c >= 'A' && c <= 'Z':
		case c == '_':
		case i > 0 && c >= '0' && c <= '9':

		default:
			return false
		}
	}

	return true
}

func sanitizeTracefsPath(path ...string) (string, error) {
	base, err := getTracefsPath()
	if err != nil {
		return "", err
	}
	l := filepath.Join(path...)
	p := filepath.Join(base, l)
	if !strings.HasPrefix(p, base) {
		return "", fmt.Errorf("path '%s' attempts to escape base path '%s': %w", l, base, ErrInvalidInput)
	}
	return p, nil
}

// getTracefsPath will return a correct path to the tracefs mount point.
// Since kernel 4.1 tracefs should be mounted by default at /sys/kernel/tracing,
// but may also be available at /sys/kernel/debug/tracing if debugfs is mounted.
// The available tracefs paths will depend on distribution choices.
var getTracefsPath = internal.Memoize(func() (string, error) {
	for _, p := range []struct {
		path   string
		fsType int64
	}{
		{"/sys/kernel/tracing", unix.TRACEFS_MAGIC},
		{"/sys/kernel/debug/tracing", unix.TRACEFS_MAGIC},
		// RHEL/CentOS
		{"/sys/kernel/debug/tracing", unix.DEBUGFS_MAGIC},
	} {
		if fsType, err := internal.FSType(p.path); err == nil && fsType == p.fsType {
			return p.path, nil
		}
	}

	return "", errors.New("neither debugfs nor tracefs are mounted")
})

// sanitizeIdentifier replaces every invalid character for the tracefs api with an underscore.
//
// It is equivalent to calling regexp.MustCompile("[^a-zA-Z0-9]+").ReplaceAllString(s, "_").
func sanitizeIdentifier(s string) string {
	var skip bool
	return strings.Map(func(c rune) rune {
		switch {
		case c >= 'a' && c <= 'z',
			c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9':
			skip = false
			return c

		case skip:
			return -1

		default:
			skip = true
			return '_'
		}
	}, s)
}

// EventID reads a trace event's ID from tracefs given its group and name.
// The kernel requires group and name to be alphanumeric or underscore.
func EventID(group, name string) (uint64, error) {
	if !validIdentifier(group) {
		return 0, fmt.Errorf("invalid tracefs group: %q", group)
	}

	if !validIdentifier(name) {
		return 0, fmt.Errorf("invalid tracefs name: %q", name)
	}

	path, err := sanitizeTracefsPath("events", group, name, "id")
	if err != nil {
		return 0, err
	}
	tid, err := internal.ReadUint64FromFile("%d\n", path)
	if errors.Is(err, os.ErrNotExist) {
		return 0, err
	}
	if err != nil {
		return 0, fmt.Errorf("reading trace event ID of %s/%s: %w", group, name, err)
	}

	return tid, nil
}

func probePrefix(ret bool, maxActive int) string {
	if ret {
		if maxActive > 0 {
			return fmt.Sprintf("r%d", maxActive)
		}
		return "r"
	}
	return "p"
}

// Event represents an entry in a tracefs probe events file.
type Event struct {
	typ         ProbeType
	group, name string
	// event id allocated by the kernel. 0 if the event has already been removed.
	id uint64
}

// NewEvent creates a new ephemeral trace event.
//
// Returns os.ErrNotExist if symbol is not a valid
// kernel symbol, or if it is not traceable with kprobes. Returns os.ErrExist
// if a probe with the same group and symbol already exists. Returns an error if
// args.RetprobeMaxActive is used on non kprobe types. Returns ErrNotSupported if
// the kernel is too old to support kretprobe maxactive.
func NewEvent(args ProbeArgs) (*Event, error) {
	// Before attempting to create a trace event through tracefs,
	// check if an event with the same group and name already exists.
	// Kernels 4.x and earlier don't return os.ErrExist on writing a duplicate
	// entry, so we need to rely on reads for detecting uniqueness.
	eventName := sanitizeIdentifier(args.Symbol)
	_, err := EventID(args.Group, eventName)
	if err == nil {
		return nil, fmt.Errorf("trace event %s/%s: %w", args.Group, eventName, os.ErrExist)
	}
	if err != nil && !errors.Is(err, os.ErrNotExist) {
		return nil, fmt.Errorf("checking trace event %s/%s: %w", args.Group, eventName, err)
	}

	// Open the kprobe_events file in tracefs.
	f, err := args.Type.eventsFile()
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var pe, token string
	switch args.Type {
	case Kprobe:
		// The kprobe_events syntax is as follows (see Documentation/trace/kprobetrace.txt):
		// p[:[GRP/]EVENT] [MOD:]SYM[+offs]|MEMADDR [FETCHARGS] : Set a probe
		// r[MAXACTIVE][:[GRP/]EVENT] [MOD:]SYM[+0] [FETCHARGS] : Set a return probe
		// -:[GRP/]EVENT : Clear a probe
		//
		// Some examples:
		// r:ebpf_1234/r_my_kretprobe nf_conntrack_destroy
		// p:ebpf_5678/p_my_kprobe __x64_sys_execve
		//
		// Leaving the kretprobe's MAXACTIVE set to 0 (or absent) will make the
		// kernel default to NR_CPUS. This is desired in most eBPF cases since
		// subsampling or rate limiting logic can be more accurately implemented in
		// the eBPF program itself.
		// See Documentation/kprobes.txt for more details.
		if args.RetprobeMaxActive != 0 && !args.Ret {
			return nil, ErrInvalidMaxActive
		}
		token = KprobeToken(args)
		pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(args.Ret, args.RetprobeMaxActive), args.Group, eventName, token)
	case Uprobe:
		// The uprobe_events syntax is as follows:
		// p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a probe
		// r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a return probe
		// -:[GRP/]EVENT : Clear a probe
		//
		// Some examples:
		// r:ebpf_1234/readline /bin/bash:0x12345
		// p:ebpf_5678/main_mySymbol /bin/mybin:0x12345(0x123)
		//
		// See Documentation/trace/uprobetracer.txt for more details.
		if args.RetprobeMaxActive != 0 {
			return nil, ErrInvalidMaxActive
		}
		token = UprobeToken(args)
		pe = fmt.Sprintf("%s:%s/%s %s", probePrefix(args.Ret, 0), args.Group, eventName, token)
	}
	_, err = f.WriteString(pe)

	// Since commit 97c753e62e6c, ENOENT is correctly returned instead of EINVAL
	// when trying to create a retprobe for a missing symbol.
	if errors.Is(err, os.ErrNotExist) {
		return nil, fmt.Errorf("token %s: not found: %w", token, err)
	}
	// Since commit ab105a4fb894, EILSEQ is returned when a kprobe sym+offset is resolved
	// to an invalid insn boundary. The exact conditions that trigger this error are
	// arch specific however.
	if errors.Is(err, syscall.EILSEQ) {
		return nil, fmt.Errorf("token %s: bad insn boundary: %w", token, os.ErrNotExist)
	}
	// ERANGE is returned when the `SYM[+offs]` token is too big and cannot
	// be resolved.
	if errors.Is(err, syscall.ERANGE) {
		return nil, fmt.Errorf("token %s: offset too big: %w", token, os.ErrNotExist)
	}

	if err != nil {
		return nil, fmt.Errorf("token %s: writing '%s': %w", token, pe, err)
	}

	// Get the newly-created trace event's id.
	tid, err := EventID(args.Group, eventName)
	if args.RetprobeMaxActive != 0 && errors.Is(err, os.ErrNotExist) {
		// Kernels < 4.12 don't support maxactive and therefore auto generate
		// group and event names from the symbol and offset. The symbol is used
		// without any sanitization.
		// See https://elixir.bootlin.com/linux/v4.10/source/kernel/trace/trace_kprobe.c#L712
		event := fmt.Sprintf("kprobes/r_%s_%d", args.Symbol, args.Offset)
		if err := removeEvent(args.Type, event); err != nil {
			return nil, fmt.Errorf("failed to remove spurious maxactive event: %s", err)
		}
		return nil, fmt.Errorf("create trace event with non-default maxactive: %w", internal.ErrNotSupported)
	}
	if err != nil {
		return nil, fmt.Errorf("get trace event id: %w", err)
	}

	evt := &Event{args.Type, args.Group, eventName, tid}
	runtime.SetFinalizer(evt, (*Event).Close)
	return evt, nil
}

// Close removes the event from tracefs.
//
// Returns os.ErrClosed if the event has already been closed before.
func (evt *Event) Close() error {
	if evt.id == 0 {
		return os.ErrClosed
	}

	evt.id = 0
	runtime.SetFinalizer(evt, nil)
	pe := fmt.Sprintf("%s/%s", evt.group, evt.name)
	return removeEvent(evt.typ, pe)
}

func removeEvent(typ ProbeType, pe string) error {
	f, err := typ.eventsFile()
	if err != nil {
		return err
	}
	defer f.Close()

	// See [k,u]probe_events syntax above. The probe type does not need to be specified
	// for removals.
	if _, err = f.WriteString("-:" + pe); err != nil {
		return fmt.Errorf("remove event %q from %s: %w", pe, f.Name(), err)
	}

	return nil
}

// ID returns the tracefs ID associated with the event.
func (evt *Event) ID() uint64 {
	return evt.id
}

// Group returns the tracefs group used by the event.
func (evt *Event) Group() string {
	return evt.group
}

// KprobeToken creates the SYM[+offs] token for the tracefs api.
func KprobeToken(args ProbeArgs) string {
	po := args.Symbol

	if args.Offset != 0 {
		po += fmt.Sprintf("+%#x", args.Offset)
	}

	return po
}
24
vendor/github.com/cilium/ebpf/internal/tracefs/probetype_string.go
generated
vendored
@@ -1,24 +0,0 @@
// Code generated by "stringer -type=ProbeType -linecomment"; DO NOT EDIT.

package tracefs

import "strconv"

func _() {
	// An "invalid array index" compiler error signifies that the constant values have changed.
	// Re-run the stringer command to generate them again.
	var x [1]struct{}
	_ = x[Kprobe-0]
	_ = x[Uprobe-1]
}

const _ProbeType_name = "kprobeuprobe"

var _ProbeType_index = [...]uint8{0, 6, 12}

func (i ProbeType) String() string {
	if i >= ProbeType(len(_ProbeType_index)-1) {
		return "ProbeType(" + strconv.FormatInt(int64(i), 10) + ")"
	}
	return _ProbeType_name[_ProbeType_index[i]:_ProbeType_index[i+1]]
}
16
vendor/github.com/cilium/ebpf/internal/tracefs/uprobe.go
generated
vendored
@@ -1,16 +0,0 @@
package tracefs

import "fmt"

// UprobeToken creates the PATH:OFFSET(REF_CTR_OFFSET) token for the tracefs api.
func UprobeToken(args ProbeArgs) string {
	po := fmt.Sprintf("%s:%#x", args.Path, args.Offset)

	if args.RefCtrOffset != 0 {
		// This is not documented in Documentation/trace/uprobetracer.txt.
		// elixir.bootlin.com/linux/v5.15-rc7/source/kernel/trace/trace.c#L5564
		po += fmt.Sprintf("(%#x)", args.RefCtrOffset)
	}

	return po
}
11
vendor/github.com/cilium/ebpf/internal/unix/doc.go
generated
vendored
@@ -1,11 +0,0 @@
// Package unix re-exports Linux specific parts of golang.org/x/sys/unix.
//
// It avoids breaking compilation on other OS by providing stubs as follows:
//   - Invoking a function always returns an error.
//   - Errnos have distinct, non-zero values.
//   - Constants have distinct but meaningless values.
//   - Types use the same names for members, but may or may not follow the
//     Linux layout.
package unix

// Note: please don't add any custom API to this package. Use internal/sys instead.
202
vendor/github.com/cilium/ebpf/internal/unix/types_linux.go
generated
vendored
@@ -1,202 +0,0 @@
//go:build linux

package unix

import (
	"syscall"

	linux "golang.org/x/sys/unix"
)

const (
	ENOENT     = linux.ENOENT
	EEXIST     = linux.EEXIST
	EAGAIN     = linux.EAGAIN
	ENOSPC     = linux.ENOSPC
	EINVAL     = linux.EINVAL
	EPOLLIN    = linux.EPOLLIN
	EINTR      = linux.EINTR
	EPERM      = linux.EPERM
	ESRCH      = linux.ESRCH
	ENODEV     = linux.ENODEV
	EBADF      = linux.EBADF
	E2BIG      = linux.E2BIG
	EFAULT     = linux.EFAULT
	EACCES     = linux.EACCES
	EILSEQ     = linux.EILSEQ
	EOPNOTSUPP = linux.EOPNOTSUPP
)

const (
	BPF_F_NO_PREALLOC         = linux.BPF_F_NO_PREALLOC
	BPF_F_NUMA_NODE           = linux.BPF_F_NUMA_NODE
	BPF_F_RDONLY              = linux.BPF_F_RDONLY
	BPF_F_WRONLY              = linux.BPF_F_WRONLY
	BPF_F_RDONLY_PROG         = linux.BPF_F_RDONLY_PROG
	BPF_F_WRONLY_PROG         = linux.BPF_F_WRONLY_PROG
	BPF_F_SLEEPABLE           = linux.BPF_F_SLEEPABLE
	BPF_F_XDP_HAS_FRAGS       = linux.BPF_F_XDP_HAS_FRAGS
	BPF_F_MMAPABLE            = linux.BPF_F_MMAPABLE
	BPF_F_INNER_MAP           = linux.BPF_F_INNER_MAP
	BPF_F_KPROBE_MULTI_RETURN = linux.BPF_F_KPROBE_MULTI_RETURN
	BPF_OBJ_NAME_LEN          = linux.BPF_OBJ_NAME_LEN
	BPF_TAG_SIZE              = linux.BPF_TAG_SIZE
	BPF_RINGBUF_BUSY_BIT      = linux.BPF_RINGBUF_BUSY_BIT
	BPF_RINGBUF_DISCARD_BIT   = linux.BPF_RINGBUF_DISCARD_BIT
	BPF_RINGBUF_HDR_SZ        = linux.BPF_RINGBUF_HDR_SZ
	SYS_BPF                   = linux.SYS_BPF
	F_DUPFD_CLOEXEC           = linux.F_DUPFD_CLOEXEC
	EPOLL_CTL_ADD             = linux.EPOLL_CTL_ADD
	EPOLL_CLOEXEC             = linux.EPOLL_CLOEXEC
	O_CLOEXEC                 = linux.O_CLOEXEC
	O_NONBLOCK                = linux.O_NONBLOCK
	PROT_NONE                 = linux.PROT_NONE
	PROT_READ                 = linux.PROT_READ
	PROT_WRITE                = linux.PROT_WRITE
	MAP_ANON                  = linux.MAP_ANON
	MAP_SHARED                = linux.MAP_SHARED
	MAP_PRIVATE               = linux.MAP_PRIVATE
	PERF_ATTR_SIZE_VER1       = linux.PERF_ATTR_SIZE_VER1
	PERF_TYPE_SOFTWARE        = linux.PERF_TYPE_SOFTWARE
	PERF_TYPE_TRACEPOINT      = linux.PERF_TYPE_TRACEPOINT
	PERF_COUNT_SW_BPF_OUTPUT  = linux.PERF_COUNT_SW_BPF_OUTPUT
	PERF_EVENT_IOC_DISABLE    = linux.PERF_EVENT_IOC_DISABLE
	PERF_EVENT_IOC_ENABLE     = linux.PERF_EVENT_IOC_ENABLE
	PERF_EVENT_IOC_SET_BPF    = linux.PERF_EVENT_IOC_SET_BPF
	PerfBitWatermark          = linux.PerfBitWatermark
	PerfBitWriteBackward      = linux.PerfBitWriteBackward
	PERF_SAMPLE_RAW           = linux.PERF_SAMPLE_RAW
	PERF_FLAG_FD_CLOEXEC      = linux.PERF_FLAG_FD_CLOEXEC
	RLIM_INFINITY             = linux.RLIM_INFINITY
	RLIMIT_MEMLOCK            = linux.RLIMIT_MEMLOCK
	BPF_STATS_RUN_TIME        = linux.BPF_STATS_RUN_TIME
	PERF_RECORD_LOST          = linux.PERF_RECORD_LOST
	PERF_RECORD_SAMPLE        = linux.PERF_RECORD_SAMPLE
	AT_FDCWD                  = linux.AT_FDCWD
	RENAME_NOREPLACE          = linux.RENAME_NOREPLACE
	SO_ATTACH_BPF             = linux.SO_ATTACH_BPF
	SO_DETACH_BPF             = linux.SO_DETACH_BPF
	SOL_SOCKET                = linux.SOL_SOCKET
	SIGPROF                   = linux.SIGPROF
	SIG_BLOCK                 = linux.SIG_BLOCK
	SIG_UNBLOCK               = linux.SIG_UNBLOCK
	EM_NONE                   = linux.EM_NONE
	EM_BPF                    = linux.EM_BPF
	BPF_FS_MAGIC              = linux.BPF_FS_MAGIC
	TRACEFS_MAGIC             = linux.TRACEFS_MAGIC
	DEBUGFS_MAGIC             = linux.DEBUGFS_MAGIC
)

type Statfs_t = linux.Statfs_t
type Stat_t = linux.Stat_t
type Rlimit = linux.Rlimit
type Signal = linux.Signal
type Sigset_t = linux.Sigset_t
type PerfEventMmapPage = linux.PerfEventMmapPage
type EpollEvent = linux.EpollEvent
type PerfEventAttr = linux.PerfEventAttr
type Utsname = linux.Utsname

func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) {
	return linux.Syscall(trap, a1, a2, a3)
}

func PthreadSigmask(how int, set, oldset *Sigset_t) error {
	return linux.PthreadSigmask(how, set, oldset)
}

func FcntlInt(fd uintptr, cmd, arg int) (int, error) {
	return linux.FcntlInt(fd, cmd, arg)
}

func IoctlSetInt(fd int, req uint, value int) error {
	return linux.IoctlSetInt(fd, req, value)
}

func Statfs(path string, buf *Statfs_t) (err error) {
	return linux.Statfs(path, buf)
}

func Close(fd int) (err error) {
	return linux.Close(fd)
}

func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) {
	return linux.EpollWait(epfd, events, msec)
}

func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) {
	return linux.EpollCtl(epfd, op, fd, event)
}

func Eventfd(initval uint, flags int) (fd int, err error) {
	return linux.Eventfd(initval, flags)
}

func Write(fd int, p []byte) (n int, err error) {
	return linux.Write(fd, p)
}

func EpollCreate1(flag int) (fd int, err error) {
	return linux.EpollCreate1(flag)
}

func SetNonblock(fd int, nonblocking bool) (err error) {
	return linux.SetNonblock(fd, nonblocking)
}

func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) {
	return linux.Mmap(fd, offset, length, prot, flags)
}

func Munmap(b []byte) (err error) {
	return linux.Munmap(b)
}

func PerfEventOpen(attr *PerfEventAttr, pid int, cpu int, groupFd int, flags int) (fd int, err error) {
	return linux.PerfEventOpen(attr, pid, cpu, groupFd, flags)
}

func Uname(buf *Utsname) (err error) {
	return linux.Uname(buf)
}

func Getpid() int {
	return linux.Getpid()
}

func Gettid() int {
	return linux.Gettid()
}

func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) {
	return linux.Tgkill(tgid, tid, sig)
}

func BytePtrFromString(s string) (*byte, error) {
	return linux.BytePtrFromString(s)
}

func ByteSliceToString(s []byte) string {
	return linux.ByteSliceToString(s)
}

func Renameat2(olddirfd int, oldpath string, newdirfd int, newpath string, flags uint) error {
	return linux.Renameat2(olddirfd, oldpath, newdirfd, newpath, flags)
}

func Prlimit(pid, resource int, new, old *Rlimit) error {
	return linux.Prlimit(pid, resource, new, old)
}

func Open(path string, mode int, perm uint32) (int, error) {
	return linux.Open(path, mode, perm)
}

func Fstat(fd int, stat *Stat_t) error {
	return linux.Fstat(fd, stat)
}

func SetsockoptInt(fd, level, opt, value int) error {
	return linux.SetsockoptInt(fd, level, opt, value)
}
294
vendor/github.com/cilium/ebpf/internal/unix/types_other.go
generated
vendored
@@ -1,294 +0,0 @@
//go:build !linux

package unix

import (
	"fmt"
	"runtime"
	"syscall"
)

var errNonLinux = fmt.Errorf("unsupported platform %s/%s", runtime.GOOS, runtime.GOARCH)

// Errnos are distinct and non-zero.
const (
	ENOENT syscall.Errno = iota + 1
	EEXIST
	EAGAIN
	ENOSPC
	EINVAL
	EINTR
	EPERM
	ESRCH
	ENODEV
	EBADF
	E2BIG
	EFAULT
	EACCES
	EILSEQ
	EOPNOTSUPP
)

// Constants are distinct to avoid breaking switch statements.
const (
	BPF_F_NO_PREALLOC = iota
	BPF_F_NUMA_NODE
	BPF_F_RDONLY
	BPF_F_WRONLY
	BPF_F_RDONLY_PROG
	BPF_F_WRONLY_PROG
	BPF_F_SLEEPABLE
	BPF_F_MMAPABLE
	BPF_F_INNER_MAP
	BPF_F_KPROBE_MULTI_RETURN
	BPF_F_XDP_HAS_FRAGS
	BPF_OBJ_NAME_LEN
	BPF_TAG_SIZE
	BPF_RINGBUF_BUSY_BIT
	BPF_RINGBUF_DISCARD_BIT
	BPF_RINGBUF_HDR_SZ
	SYS_BPF
	F_DUPFD_CLOEXEC
	EPOLLIN
	EPOLL_CTL_ADD
	EPOLL_CLOEXEC
	O_CLOEXEC
	O_NONBLOCK
	PROT_NONE
	PROT_READ
	PROT_WRITE
	MAP_ANON
	MAP_SHARED
	MAP_PRIVATE
	PERF_ATTR_SIZE_VER1
	PERF_TYPE_SOFTWARE
	PERF_TYPE_TRACEPOINT
	PERF_COUNT_SW_BPF_OUTPUT
	PERF_EVENT_IOC_DISABLE
	PERF_EVENT_IOC_ENABLE
	PERF_EVENT_IOC_SET_BPF
	PerfBitWatermark
	PerfBitWriteBackward
	PERF_SAMPLE_RAW
	PERF_FLAG_FD_CLOEXEC
	RLIM_INFINITY
	RLIMIT_MEMLOCK
	BPF_STATS_RUN_TIME
	PERF_RECORD_LOST
	PERF_RECORD_SAMPLE
	AT_FDCWD
	RENAME_NOREPLACE
	SO_ATTACH_BPF
	SO_DETACH_BPF
	SOL_SOCKET
	SIGPROF
	SIG_BLOCK
	SIG_UNBLOCK
	EM_NONE
	EM_BPF
	BPF_FS_MAGIC
	TRACEFS_MAGIC
	DEBUGFS_MAGIC
)

type Statfs_t struct {
	Type    int64
	Bsize   int64
	Blocks  uint64
	Bfree   uint64
	Bavail  uint64
	Files   uint64
	Ffree   uint64
	Fsid    [2]int32
	Namelen int64
	Frsize  int64
	Flags   int64
	Spare   [4]int64
}

type Stat_t struct {
	Dev     uint64
	Ino     uint64
	Nlink   uint64
	Mode    uint32
	Uid     uint32
	Gid     uint32
	_       int32
	Rdev    uint64
	Size    int64
	Blksize int64
	Blocks  int64
}

type Rlimit struct {
	Cur uint64
	Max uint64
}

type Signal int

type Sigset_t struct {
	Val [4]uint64
}

func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) {
	return 0, 0, syscall.ENOTSUP
}

func PthreadSigmask(how int, set, oldset *Sigset_t) error {
	return errNonLinux
}

func FcntlInt(fd uintptr, cmd, arg int) (int, error) {
	return -1, errNonLinux
}

func IoctlSetInt(fd int, req uint, value int) error {
	return errNonLinux
}

func Statfs(path string, buf *Statfs_t) error {
	return errNonLinux
}

func Close(fd int) (err error) {
	return errNonLinux
}

type EpollEvent struct {
	Events uint32
	Fd     int32
	Pad    int32
}

func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) {
	return 0, errNonLinux
}

func EpollCtl(epfd int, op int, fd int, event *EpollEvent) (err error) {
	return errNonLinux
}

func Eventfd(initval uint, flags int) (fd int, err error) {
	return 0, errNonLinux
}

func Write(fd int, p []byte) (n int, err error) {
	return 0, errNonLinux
}

func EpollCreate1(flag int) (fd int, err error) {
	return 0, errNonLinux
}

type PerfEventMmapPage struct {
	Version        uint32
	Compat_version uint32
	Lock           uint32
	Index          uint32
	Offset         int64
	Time_enabled   uint64
	Time_running   uint64
	Capabilities   uint64
	Pmc_width      uint16
	Time_shift     uint16
	Time_mult      uint32
	Time_offset    uint64
	Time_zero      uint64
	Size           uint32

	Data_head   uint64
	Data_tail   uint64
	Data_offset uint64
	Data_size   uint64
	Aux_head    uint64
	Aux_tail    uint64
	Aux_offset  uint64
	Aux_size    uint64
}

func SetNonblock(fd int, nonblocking bool) (err error) {
	return errNonLinux
}

func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, err error) {
	return []byte{}, errNonLinux
}

func Munmap(b []byte) (err error) {
	return errNonLinux
}

type PerfEventAttr struct {
	Type               uint32
	Size               uint32
	Config             uint64
	Sample             uint64
	Sample_type        uint64
	Read_format        uint64
	Bits               uint64
	Wakeup             uint32
	Bp_type            uint32
	Ext1               uint64
	Ext2               uint64
	Branch_sample_type uint64
	Sample_regs_user   uint64
	Sample_stack_user  uint32
	Clockid            int32
	Sample_regs_intr   uint64
	Aux_watermark      uint32
	Sample_max_stack   uint16
}

func PerfEventOpen(attr *PerfEventAttr, pid int, cpu int, groupFd int, flags int) (fd int, err error) {
	return 0, errNonLinux
}

type Utsname struct {
	Release [65]byte
	Version [65]byte
}

func Uname(buf *Utsname) (err error) {
	return errNonLinux
}

func Getpid() int {
	return -1
}

func Gettid() int {
	return -1
}

func Tgkill(tgid int, tid int, sig syscall.Signal) (err error) {
	return errNonLinux
}

func BytePtrFromString(s string) (*byte, error) {
	return nil, errNonLinux
}

func ByteSliceToString(s []byte) string {
	return ""
}

func Renameat2(olddirfd int, oldpath string, newdirfd int, newpath string, flags uint) error {
	return errNonLinux
}

func Prlimit(pid, resource int, new, old *Rlimit) error {
	return errNonLinux
}

func Open(path string, mode int, perm uint32) (int, error) {
	return -1, errNonLinux
}

func Fstat(fd int, stat *Stat_t) error {
	return errNonLinux
}

func SetsockoptInt(fd, level, opt, value int) error {
	return errNonLinux
}
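The non-Linux stubs above keep their errnos distinct and non-zero via `iota + 1`, so error comparisons still behave sensibly even though no real syscalls are made. A minimal standalone sketch of that pattern (a subset of the constants, outside the vendored file):

```go
package main

import (
	"fmt"
	"syscall"
)

// Stub errnos in the style of types_other.go: starting iota at 1 keeps
// every value distinct and non-zero, so equality checks against these
// constants remain meaningful on platforms without real errno values.
const (
	ENOENT syscall.Errno = iota + 1
	EEXIST
	EAGAIN
)

func main() {
	fmt.Println(uint(ENOENT), uint(EEXIST), uint(EAGAIN))
}
```

A plain `iota` (starting at 0) would make the first constant compare equal to the zero `syscall.Errno`, which is why the offset matters.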
153
vendor/github.com/cilium/ebpf/internal/vdso.go
generated
vendored
@@ -1,153 +0,0 @@
package internal

import (
	"debug/elf"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math"
	"os"

	"github.com/cilium/ebpf/internal/unix"
)

var (
	errAuxvNoVDSO = errors.New("no vdso address found in auxv")
)

// vdsoVersion returns the LINUX_VERSION_CODE embedded in the vDSO library
// linked into the current process image.
func vdsoVersion() (uint32, error) {
	// Read data from the auxiliary vector, which is normally passed directly
	// to the process. Go does not expose that data, so we must read it from procfs.
	// https://man7.org/linux/man-pages/man3/getauxval.3.html
	av, err := os.Open("/proc/self/auxv")
	if errors.Is(err, unix.EACCES) {
		return 0, fmt.Errorf("opening auxv: %w (process may not be dumpable due to file capabilities)", err)
	}
	if err != nil {
		return 0, fmt.Errorf("opening auxv: %w", err)
	}
	defer av.Close()

	vdsoAddr, err := vdsoMemoryAddress(av)
	if err != nil {
		return 0, fmt.Errorf("finding vDSO memory address: %w", err)
	}

	// Use /proc/self/mem rather than unsafe.Pointer tricks.
	mem, err := os.Open("/proc/self/mem")
	if err != nil {
		return 0, fmt.Errorf("opening mem: %w", err)
	}
	defer mem.Close()

	// Open ELF at provided memory address, as offset into /proc/self/mem.
	c, err := vdsoLinuxVersionCode(io.NewSectionReader(mem, int64(vdsoAddr), math.MaxInt64))
	if err != nil {
		return 0, fmt.Errorf("reading linux version code: %w", err)
	}

	return c, nil
}

// vdsoMemoryAddress returns the memory address of the vDSO library
// linked into the current process image. r is an io.Reader into an auxv blob.
func vdsoMemoryAddress(r io.Reader) (uint64, error) {
	const (
		_AT_NULL         = 0  // End of vector
		_AT_SYSINFO_EHDR = 33 // Offset to vDSO blob in process image
	)

	// Loop through all tag/value pairs in auxv until we find `AT_SYSINFO_EHDR`,
	// the address of a page containing the virtual Dynamic Shared Object (vDSO).
	aux := struct{ Tag, Val uint64 }{}
	for {
		if err := binary.Read(r, NativeEndian, &aux); err != nil {
			return 0, fmt.Errorf("reading auxv entry: %w", err)
		}

		switch aux.Tag {
		case _AT_SYSINFO_EHDR:
			if aux.Val != 0 {
				return aux.Val, nil
			}
			return 0, fmt.Errorf("invalid vDSO address in auxv")
		// _AT_NULL is always the last tag/val pair in the aux vector
		// and can be treated like EOF.
		case _AT_NULL:
			return 0, errAuxvNoVDSO
		}
	}
}

// format described at https://www.man7.org/linux/man-pages/man5/elf.5.html in section 'Notes (Nhdr)'
type elfNoteHeader struct {
	NameSize int32
	DescSize int32
	Type     int32
}

// vdsoLinuxVersionCode returns the LINUX_VERSION_CODE embedded in
// the ELF notes section of the binary provided by the reader.
func vdsoLinuxVersionCode(r io.ReaderAt) (uint32, error) {
	hdr, err := NewSafeELFFile(r)
	if err != nil {
		return 0, fmt.Errorf("reading vDSO ELF: %w", err)
	}

	sections := hdr.SectionsByType(elf.SHT_NOTE)
	if len(sections) == 0 {
		return 0, fmt.Errorf("no note section found in vDSO ELF")
	}

	for _, sec := range sections {
		sr := sec.Open()
		var n elfNoteHeader

		// Read notes until we find one named 'Linux'.
		for {
			if err := binary.Read(sr, hdr.ByteOrder, &n); err != nil {
				if errors.Is(err, io.EOF) {
					// We looked at all the notes in this section
					break
				}
				return 0, fmt.Errorf("reading note header: %w", err)
			}

			// If a note name is defined, it follows the note header.
			var name string
			if n.NameSize > 0 {
				// Read the note name, aligned to 4 bytes.
				buf := make([]byte, Align(n.NameSize, 4))
				if err := binary.Read(sr, hdr.ByteOrder, &buf); err != nil {
					return 0, fmt.Errorf("reading note name: %w", err)
				}

				// Read nul-terminated string.
				name = unix.ByteSliceToString(buf[:n.NameSize])
			}

			// If a note descriptor is defined, it follows the name.
			// It is possible for a note to have a descriptor but not a name.
			if n.DescSize > 0 {
				// LINUX_VERSION_CODE is a uint32 value.
				if name == "Linux" && n.DescSize == 4 && n.Type == 0 {
					var version uint32
					if err := binary.Read(sr, hdr.ByteOrder, &version); err != nil {
						return 0, fmt.Errorf("reading note descriptor: %w", err)
					}
					return version, nil
				}

				// Discard the note descriptor if it exists but we're not interested in it.
				if _, err := io.CopyN(io.Discard, sr, int64(Align(n.DescSize, 4))); err != nil {
					return 0, err
				}
			}
		}
	}

	return 0, fmt.Errorf("no Linux note in ELF")
}
106
vendor/github.com/cilium/ebpf/internal/version.go
generated
vendored
@@ -1,106 +0,0 @@
package internal

import (
	"fmt"

	"github.com/cilium/ebpf/internal/unix"
)

const (
	// Version constant used in ELF binaries indicating that the loader needs to
	// substitute the eBPF program's version with the value of the kernel's
	// KERNEL_VERSION compile-time macro. Used for compatibility with BCC, gobpf
	// and RedSift.
	MagicKernelVersion = 0xFFFFFFFE
)

// A Version in the form Major.Minor.Patch.
type Version [3]uint16

// NewVersion creates a version from a string like "Major.Minor.Patch".
//
// Patch is optional.
func NewVersion(ver string) (Version, error) {
	var major, minor, patch uint16
	n, _ := fmt.Sscanf(ver, "%d.%d.%d", &major, &minor, &patch)
	if n < 2 {
		return Version{}, fmt.Errorf("invalid version: %s", ver)
	}
	return Version{major, minor, patch}, nil
}

// NewVersionFromCode creates a version from a LINUX_VERSION_CODE.
func NewVersionFromCode(code uint32) Version {
	return Version{
		uint16(uint8(code >> 16)),
		uint16(uint8(code >> 8)),
		uint16(uint8(code)),
	}
}

func (v Version) String() string {
	if v[2] == 0 {
		return fmt.Sprintf("v%d.%d", v[0], v[1])
	}
	return fmt.Sprintf("v%d.%d.%d", v[0], v[1], v[2])
}

// Less returns true if the version is less than another version.
func (v Version) Less(other Version) bool {
	for i, a := range v {
		if a == other[i] {
			continue
		}
		return a < other[i]
	}
	return false
}

// Unspecified returns true if the version is all zero.
func (v Version) Unspecified() bool {
	return v[0] == 0 && v[1] == 0 && v[2] == 0
}

// Kernel implements the kernel's KERNEL_VERSION macro from linux/version.h.
// It represents the kernel version and patch level as a single value.
func (v Version) Kernel() uint32 {
	// Kernels 4.4 and 4.9 have their SUBLEVEL clamped to 255 to avoid
	// overflowing into PATCHLEVEL.
	// See kernel commit 9b82f13e7ef3 ("kbuild: clamp SUBLEVEL to 255").
	s := v[2]
	if s > 255 {
		s = 255
	}

	// Truncate members to uint8 to prevent them from spilling over into
	// each other when overflowing 8 bits.
	return uint32(uint8(v[0]))<<16 | uint32(uint8(v[1]))<<8 | uint32(uint8(s))
}

// KernelVersion returns the version of the currently running kernel.
var KernelVersion = Memoize(func() (Version, error) {
	return detectKernelVersion()
})

// detectKernelVersion returns the version of the running kernel.
func detectKernelVersion() (Version, error) {
	vc, err := vdsoVersion()
	if err != nil {
		return Version{}, err
	}
	return NewVersionFromCode(vc), nil
}

// KernelRelease returns the release string of the running kernel.
// Its format depends on the Linux distribution and corresponds to directory
// names in /lib/modules by convention. Some examples are 5.15.17-1-lts and
// 4.19.0-16-amd64.
func KernelRelease() (string, error) {
	var uname unix.Utsname
	if err := unix.Uname(&uname); err != nil {
		return "", fmt.Errorf("uname failed: %w", err)
	}

	return unix.ByteSliceToString(uname.Release[:]), nil
}
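Version.Kernel() above packs major/minor/sublevel into the kernel's KERNEL_VERSION layout: major<<16 | minor<<8 | sublevel, with the sublevel clamped to 255. A standalone sketch of that arithmetic (hypothetical helper name, not part of the vendored file):

```go
package main

import "fmt"

// kernelVersionCode mirrors the KERNEL_VERSION packing shown above:
// major occupies bits 16-23, minor bits 8-15, and the sublevel the low
// byte, clamped to 255 so it cannot spill into the minor field.
func kernelVersionCode(major, minor, sublevel uint16) uint32 {
	s := sublevel
	if s > 255 {
		s = 255
	}
	return uint32(uint8(major))<<16 | uint32(uint8(minor))<<8 | uint32(uint8(s))
}

func main() {
	// 5.15.17 packs to 0x050f11; 4.4.302 clamps its sublevel to 255.
	fmt.Printf("0x%06x 0x%06x\n", kernelVersionCode(5, 15, 17), kernelVersionCode(4, 4, 302))
}
```

The clamp exists because long-lived stable series (4.4, 4.9) exceeded 255 point releases, which would otherwise corrupt the minor byte.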
190
vendor/github.com/cilium/ebpf/link/cgroup.go
generated
vendored
@@ -1,190 +0,0 @@
package link

import (
	"errors"
	"fmt"
	"os"

	"github.com/cilium/ebpf"
)

type cgroupAttachFlags uint32

const (
	// Allow programs attached to sub-cgroups to override the verdict of this
	// program.
	flagAllowOverride cgroupAttachFlags = 1 << iota
	// Allow attaching multiple programs to the cgroup. Only works if the cgroup
	// has zero or more programs attached using the Multi flag. Implies override.
	flagAllowMulti
	// Set automatically by progAttachCgroup.Update(). Used for updating a
	// specific given program attached in multi-mode.
	flagReplace
)

type CgroupOptions struct {
	// Path to a cgroupv2 folder.
	Path string
	// One of the AttachCgroup* constants
	Attach ebpf.AttachType
	// Program must be of type CGroup*, and the attach type must match Attach.
	Program *ebpf.Program
}

// AttachCgroup links a BPF program to a cgroup.
//
// If the running kernel doesn't support bpf_link, attempts to emulate its
// semantics using the legacy PROG_ATTACH mechanism. If bpf_link is not
// available, the returned [Link] will not support pinning to bpffs.
//
// If you need more control over attachment flags or the attachment mechanism
// used, look at [RawAttachProgram] and [AttachRawLink] instead.
func AttachCgroup(opts CgroupOptions) (cg Link, err error) {
	cgroup, err := os.Open(opts.Path)
	if err != nil {
		return nil, fmt.Errorf("can't open cgroup: %s", err)
	}
	defer func() {
		if _, ok := cg.(*progAttachCgroup); ok {
			// Skip closing the cgroup handle if we return a valid progAttachCgroup,
			// where the handle is retained to implement Update().
			return
		}
		cgroup.Close()
	}()

	cg, err = newLinkCgroup(cgroup, opts.Attach, opts.Program)
	if err == nil {
		return cg, nil
	}

	if errors.Is(err, ErrNotSupported) {
		cg, err = newProgAttachCgroup(cgroup, opts.Attach, opts.Program, flagAllowMulti)
	}
	if errors.Is(err, ErrNotSupported) {
		cg, err = newProgAttachCgroup(cgroup, opts.Attach, opts.Program, flagAllowOverride)
	}
	if err != nil {
		return nil, err
	}

	return cg, nil
}

type progAttachCgroup struct {
	cgroup     *os.File
	current    *ebpf.Program
	attachType ebpf.AttachType
	flags      cgroupAttachFlags
}

var _ Link = (*progAttachCgroup)(nil)

func (cg *progAttachCgroup) isLink() {}

// newProgAttachCgroup attaches prog to cgroup using BPF_PROG_ATTACH.
// cgroup and prog are retained by [progAttachCgroup].
func newProgAttachCgroup(cgroup *os.File, attach ebpf.AttachType, prog *ebpf.Program, flags cgroupAttachFlags) (*progAttachCgroup, error) {
	if flags&flagAllowMulti > 0 {
		if err := haveProgAttachReplace(); err != nil {
			return nil, fmt.Errorf("can't support multiple programs: %w", err)
		}
	}

	// Use a program handle that cannot be closed by the caller.
	clone, err := prog.Clone()
	if err != nil {
		return nil, err
	}

	err = RawAttachProgram(RawAttachProgramOptions{
		Target:  int(cgroup.Fd()),
		Program: clone,
		Flags:   uint32(flags),
		Attach:  attach,
	})
	if err != nil {
		clone.Close()
		return nil, fmt.Errorf("cgroup: %w", err)
	}

	return &progAttachCgroup{cgroup, clone, attach, flags}, nil
}

func (cg *progAttachCgroup) Close() error {
	defer cg.cgroup.Close()
	defer cg.current.Close()

	err := RawDetachProgram(RawDetachProgramOptions{
		Target:  int(cg.cgroup.Fd()),
		Program: cg.current,
		Attach:  cg.attachType,
	})
	if err != nil {
		return fmt.Errorf("close cgroup: %s", err)
	}
	return nil
}

func (cg *progAttachCgroup) Update(prog *ebpf.Program) error {
	new, err := prog.Clone()
	if err != nil {
		return err
	}

	args := RawAttachProgramOptions{
		Target:  int(cg.cgroup.Fd()),
		Program: prog,
		Attach:  cg.attachType,
		Flags:   uint32(cg.flags),
	}

	if cg.flags&flagAllowMulti > 0 {
		// Atomically replacing multiple programs requires at least
		// 5.5 (commit 7dd68b3279f17921 "bpf: Support replacing cgroup-bpf
		// program in MULTI mode")
		args.Flags |= uint32(flagReplace)
		args.Replace = cg.current
	}

	if err := RawAttachProgram(args); err != nil {
		new.Close()
		return fmt.Errorf("can't update cgroup: %s", err)
	}

	cg.current.Close()
	cg.current = new
	return nil
}

func (cg *progAttachCgroup) Pin(string) error {
	return fmt.Errorf("can't pin cgroup: %w", ErrNotSupported)
}

func (cg *progAttachCgroup) Unpin() error {
	return fmt.Errorf("can't unpin cgroup: %w", ErrNotSupported)
}

func (cg *progAttachCgroup) Info() (*Info, error) {
	return nil, fmt.Errorf("can't get cgroup info: %w", ErrNotSupported)
}

type linkCgroup struct {
	RawLink
}

var _ Link = (*linkCgroup)(nil)

// newLinkCgroup attaches prog to cgroup using BPF_LINK_CREATE.
func newLinkCgroup(cgroup *os.File, attach ebpf.AttachType, prog *ebpf.Program) (*linkCgroup, error) {
	link, err := AttachRawLink(RawLinkOptions{
		Target:  int(cgroup.Fd()),
		Program: prog,
		Attach:  attach,
	})
	if err != nil {
		return nil, err
	}

	return &linkCgroup{*link}, err
}
2
vendor/github.com/cilium/ebpf/link/doc.go
generated
vendored
@@ -1,2 +0,0 @@
// Package link allows attaching eBPF programs to various kernel hooks.
package link
85
vendor/github.com/cilium/ebpf/link/iter.go
generated
vendored
@@ -1,85 +0,0 @@
package link

import (
	"fmt"
	"io"
	"unsafe"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/sys"
)

type IterOptions struct {
	// Program must be of type Tracing with attach type
	// AttachTraceIter. The kind of iterator to attach to is
	// determined at load time via the AttachTo field.
	//
	// AttachTo requires the kernel to include BTF of itself,
	// and it to be compiled with a recent pahole (>= 1.16).
	Program *ebpf.Program

	// Map specifies the target map for bpf_map_elem and sockmap iterators.
	// It may be nil.
	Map *ebpf.Map
}

// AttachIter attaches a BPF seq_file iterator.
func AttachIter(opts IterOptions) (*Iter, error) {
	if err := haveBPFLink(); err != nil {
		return nil, err
	}

	progFd := opts.Program.FD()
	if progFd < 0 {
		return nil, fmt.Errorf("invalid program: %s", sys.ErrClosedFd)
	}

	var info bpfIterLinkInfoMap
	if opts.Map != nil {
		mapFd := opts.Map.FD()
		if mapFd < 0 {
			return nil, fmt.Errorf("invalid map: %w", sys.ErrClosedFd)
		}
		info.map_fd = uint32(mapFd)
	}

	attr := sys.LinkCreateIterAttr{
		ProgFd:      uint32(progFd),
		AttachType:  sys.AttachType(ebpf.AttachTraceIter),
		IterInfo:    sys.NewPointer(unsafe.Pointer(&info)),
		IterInfoLen: uint32(unsafe.Sizeof(info)),
	}

	fd, err := sys.LinkCreateIter(&attr)
	if err != nil {
		return nil, fmt.Errorf("can't link iterator: %w", err)
	}

	return &Iter{RawLink{fd, ""}}, err
}

// Iter represents an attached bpf_iter.
type Iter struct {
	RawLink
}

// Open creates a new instance of the iterator.
//
// Reading from the returned reader triggers the BPF program.
func (it *Iter) Open() (io.ReadCloser, error) {
	attr := &sys.IterCreateAttr{
		LinkFd: it.fd.Uint(),
	}

	fd, err := sys.IterCreate(attr)
	if err != nil {
		return nil, fmt.Errorf("can't create iterator: %w", err)
	}

	return fd.File("bpf_iter"), nil
}

// union bpf_iter_link_info.map
type bpfIterLinkInfoMap struct {
	map_fd uint32
}
357
vendor/github.com/cilium/ebpf/link/kprobe.go
generated
vendored
@@ -1,357 +0,0 @@
package link

import (
	"errors"
	"fmt"
	"os"
	"runtime"
	"strings"
	"unsafe"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/tracefs"
	"github.com/cilium/ebpf/internal/unix"
)

// KprobeOptions defines additional parameters that will be used
// when loading Kprobes.
type KprobeOptions struct {
	// Arbitrary value that can be fetched from an eBPF program
	// via `bpf_get_attach_cookie()`.
	//
	// Needs kernel 5.15+.
	Cookie uint64
	// Offset of the kprobe relative to the traced symbol.
	// Can be used to insert kprobes at arbitrary offsets in kernel functions,
	// e.g. in places where functions have been inlined.
	Offset uint64
	// Increase the maximum number of concurrent invocations of a kretprobe.
	// Required when tracing some long running functions in the kernel.
	//
	// Deprecated: this setting forces the use of an outdated kernel API and is not portable
	// across kernel versions.
	RetprobeMaxActive int
	// Prefix used for the event name if the kprobe must be attached using tracefs.
	// The group name will be formatted as `<prefix>_<randomstr>`.
	// The default empty string is equivalent to "ebpf" as the prefix.
	TraceFSPrefix string
}

func (ko *KprobeOptions) cookie() uint64 {
	if ko == nil {
		return 0
	}
	return ko.Cookie
}

// Kprobe attaches the given eBPF program to a perf event that fires when the
// given kernel symbol starts executing. See /proc/kallsyms for available
// symbols. For example, printk():
//
//	kp, err := Kprobe("printk", prog, nil)
//
// Losing the reference to the resulting Link (kp) will close the Kprobe
// and prevent further execution of prog. The Link must be Closed during
// program shutdown to avoid leaking system resources.
//
// If attaching to symbol fails, automatically retries with the running
// platform's syscall prefix (e.g. __x64_) to support attaching to syscalls
// in a portable fashion.
func Kprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions) (Link, error) {
	k, err := kprobe(symbol, prog, opts, false)
	if err != nil {
		return nil, err
	}

	lnk, err := attachPerfEvent(k, prog, opts.cookie())
	if err != nil {
		k.Close()
		return nil, err
	}

	return lnk, nil
}

// Kretprobe attaches the given eBPF program to a perf event that fires right
// before the given kernel symbol exits, with the function stack left intact.
// See /proc/kallsyms for available symbols. For example, printk():
//
//	kp, err := Kretprobe("printk", prog, nil)
//
// Losing the reference to the resulting Link (kp) will close the Kretprobe
// and prevent further execution of prog. The Link must be Closed during
// program shutdown to avoid leaking system resources.
//
// If attaching to symbol fails, automatically retries with the running
// platform's syscall prefix (e.g. __x64_) to support attaching to syscalls
// in a portable fashion.
//
// On kernels 5.10 and earlier, setting a kretprobe on a nonexistent symbol
// incorrectly returns unix.EINVAL instead of os.ErrNotExist.
func Kretprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions) (Link, error) {
	k, err := kprobe(symbol, prog, opts, true)
	if err != nil {
		return nil, err
	}

	lnk, err := attachPerfEvent(k, prog, opts.cookie())
	if err != nil {
		k.Close()
		return nil, err
	}

	return lnk, nil
}

// isValidKprobeSymbol implements the equivalent of a regex match
// against "^[a-zA-Z_][0-9a-zA-Z_.]*$".
func isValidKprobeSymbol(s string) bool {
	if len(s) < 1 {
		return false
	}

	for i, c := range []byte(s) {
		switch {
		case c >= 'a' && c <= 'z':
		case c >= 'A' && c <= 'Z':
		case c == '_':
		case i > 0 && c >= '0' && c <= '9':

		// Allow `.` in symbol name. GCC-compiled kernel may change symbol name
		// to have a `.isra.$n` suffix, like `udp_send_skb.isra.52`.
		// See: https://gcc.gnu.org/gcc-10/changes.html
		case i > 0 && c == '.':

		default:
			return false
		}
	}

	return true
}

// kprobe opens a perf event on the given symbol and attaches prog to it.
// If ret is true, create a kretprobe.
func kprobe(symbol string, prog *ebpf.Program, opts *KprobeOptions, ret bool) (*perfEvent, error) {
	if symbol == "" {
		return nil, fmt.Errorf("symbol name cannot be empty: %w", errInvalidInput)
	}
	if prog == nil {
		return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput)
	}
	if !isValidKprobeSymbol(symbol) {
		return nil, fmt.Errorf("symbol '%s' must be a valid symbol in /proc/kallsyms: %w", symbol, errInvalidInput)
	}
	if prog.Type() != ebpf.Kprobe {
		return nil, fmt.Errorf("eBPF program type %s is not a Kprobe: %w", prog.Type(), errInvalidInput)
	}

	args := tracefs.ProbeArgs{
		Type:   tracefs.Kprobe,
		Pid:    perfAllThreads,
		Symbol: symbol,
		Ret:    ret,
	}

	if opts != nil {
		args.RetprobeMaxActive = opts.RetprobeMaxActive
		args.Cookie = opts.Cookie
		args.Offset = opts.Offset
		args.Group = opts.TraceFSPrefix
	}

	// Use kprobe PMU if the kernel has it available.
	tp, err := pmuProbe(args)
	if errors.Is(err, os.ErrNotExist) || errors.Is(err, unix.EINVAL) {
		if prefix := internal.PlatformPrefix(); prefix != "" {
			args.Symbol = prefix + symbol
			tp, err = pmuProbe(args)
		}
	}
	if err == nil {
		return tp, nil
	}
	if err != nil && !errors.Is(err, ErrNotSupported) {
		return nil, fmt.Errorf("creating perf_kprobe PMU (arch-specific fallback for %q): %w", symbol, err)
	}

	// Use tracefs if kprobe PMU is missing.
	args.Symbol = symbol
	tp, err = tracefsProbe(args)
	if errors.Is(err, os.ErrNotExist) || errors.Is(err, unix.EINVAL) {
		if prefix := internal.PlatformPrefix(); prefix != "" {
			args.Symbol = prefix + symbol
			tp, err = tracefsProbe(args)
		}
	}
	if err != nil {
		return nil, fmt.Errorf("creating tracefs event (arch-specific fallback for %q): %w", symbol, err)
	}

	return tp, nil
}

// pmuProbe opens a perf event based on a Performance Monitoring Unit.
//
// Requires at least a 4.17 kernel.
// e12f03d7031a "perf/core: Implement the 'perf_kprobe' PMU"
// 33ea4b24277b "perf/core: Implement the 'perf_uprobe' PMU"
//
// Returns ErrNotSupported if the kernel doesn't support perf_[k,u]probe PMU
func pmuProbe(args tracefs.ProbeArgs) (*perfEvent, error) {
	// Getting the PMU type will fail if the kernel doesn't support
	// the perf_[k,u]probe PMU.
	eventType, err := internal.ReadUint64FromFileOnce("%d\n", "/sys/bus/event_source/devices", args.Type.String(), "type")
	if errors.Is(err, os.ErrNotExist) {
		return nil, fmt.Errorf("%s: %w", args.Type, ErrNotSupported)
	}
	if err != nil {
		return nil, err
	}

	// Use tracefs if we want to set kretprobe's retprobeMaxActive.
	if args.RetprobeMaxActive != 0 {
		return nil, fmt.Errorf("pmu probe: non-zero retprobeMaxActive: %w", ErrNotSupported)
	}

	var config uint64
	if args.Ret {
		bit, err := internal.ReadUint64FromFileOnce("config:%d\n", "/sys/bus/event_source/devices", args.Type.String(), "/format/retprobe")
		if err != nil {
			return nil, err
		}
		config |= 1 << bit
	}

	var (
		attr  unix.PerfEventAttr
		sp    unsafe.Pointer
		token string
	)
	switch args.Type {
	case tracefs.Kprobe:
		// Create a pointer to a NUL-terminated string for the kernel.
		sp, err = unsafeStringPtr(args.Symbol)
		if err != nil {
			return nil, err
		}

		token = tracefs.KprobeToken(args)

		attr = unix.PerfEventAttr{
			// The minimum size required for PMU kprobes is PERF_ATTR_SIZE_VER1,
			// since it added the config2 (Ext2) field. Use Ext2 as probe_offset.
			Size:   unix.PERF_ATTR_SIZE_VER1,
			Type:   uint32(eventType),   // PMU event type read from sysfs
			Ext1:   uint64(uintptr(sp)), // Kernel symbol to trace
			Ext2:   args.Offset,         // Kernel symbol offset
			Config: config,              // Retprobe flag
		}
	case tracefs.Uprobe:
		sp, err = unsafeStringPtr(args.Path)
		if err != nil {
			return nil, err
		}

		if args.RefCtrOffset != 0 {
			config |= args.RefCtrOffset << uprobeRefCtrOffsetShift
		}

		token = tracefs.UprobeToken(args)

		attr = unix.PerfEventAttr{
			// The minimum size required for PMU uprobes is PERF_ATTR_SIZE_VER1,
			// since it added the config2 (Ext2) field. The Size field controls the
			// size of the internal buffer the kernel allocates for reading the
			// perf_event_attr argument from userspace.
			Size:   unix.PERF_ATTR_SIZE_VER1,
			Type:   uint32(eventType),   // PMU event type read from sysfs
			Ext1:   uint64(uintptr(sp)), // Uprobe path
			Ext2:   args.Offset,         // Uprobe offset
			Config: config,              // RefCtrOffset, Retprobe flag
		}
	}

	rawFd, err := unix.PerfEventOpen(&attr, args.Pid, 0, -1, unix.PERF_FLAG_FD_CLOEXEC)

	// On some old kernels, kprobe PMU doesn't allow `.` in symbol names and
	// return -EINVAL. Return ErrNotSupported to allow falling back to tracefs.
|
||||
// https://github.com/torvalds/linux/blob/94710cac0ef4/kernel/trace/trace_kprobe.c#L340-L343
|
||||
if errors.Is(err, unix.EINVAL) && strings.Contains(args.Symbol, ".") {
|
||||
return nil, fmt.Errorf("token %s: older kernels don't accept dots: %w", token, ErrNotSupported)
|
||||
}
|
||||
// Since commit 97c753e62e6c, ENOENT is correctly returned instead of EINVAL
|
||||
// when trying to create a retprobe for a missing symbol.
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
return nil, fmt.Errorf("token %s: not found: %w", token, err)
|
||||
}
|
||||
// Since commit ab105a4fb894, EILSEQ is returned when a kprobe sym+offset is resolved
|
||||
// to an invalid insn boundary. The exact conditions that trigger this error are
|
||||
// arch specific however.
|
||||
if errors.Is(err, unix.EILSEQ) {
|
||||
return nil, fmt.Errorf("token %s: bad insn boundary: %w", token, os.ErrNotExist)
|
||||
}
|
||||
// Since at least commit cb9a19fe4aa51, ENOTSUPP is returned
|
||||
// when attempting to set a uprobe on a trap instruction.
|
||||
if errors.Is(err, sys.ENOTSUPP) {
|
||||
return nil, fmt.Errorf("token %s: failed setting uprobe on offset %#x (possible trap insn): %w", token, args.Offset, err)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("token %s: opening perf event: %w", token, err)
|
||||
}
|
||||
|
||||
// Ensure the string pointer is not collected before PerfEventOpen returns.
|
||||
runtime.KeepAlive(sp)
|
||||
|
||||
fd, err := sys.NewFD(rawFd)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Kernel has perf_[k,u]probe PMU available, initialize perf event.
|
||||
return newPerfEvent(fd, nil), nil
|
||||
}
|
||||
|
||||
// tracefsProbe creates a trace event by writing an entry to <tracefs>/[k,u]probe_events.
|
||||
// A new trace event group name is generated on every call to support creating
|
||||
// multiple trace events for the same kernel or userspace symbol.
|
||||
// Path and offset are only set in the case of uprobe(s) and are used to set
|
||||
// the executable/library path on the filesystem and the offset where the probe is inserted.
|
||||
// A perf event is then opened on the newly-created trace event and returned to the caller.
|
||||
func tracefsProbe(args tracefs.ProbeArgs) (*perfEvent, error) {
|
||||
groupPrefix := "ebpf"
|
||||
if args.Group != "" {
|
||||
groupPrefix = args.Group
|
||||
}
|
||||
|
||||
// Generate a random string for each trace event we attempt to create.
|
||||
// This value is used as the 'group' token in tracefs to allow creating
|
||||
// multiple kprobe trace events with the same name.
|
||||
group, err := tracefs.RandomGroup(groupPrefix)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("randomizing group name: %w", err)
|
||||
}
|
||||
args.Group = group
|
||||
|
||||
// Create the [k,u]probe trace event using tracefs.
|
||||
evt, err := tracefs.NewEvent(args)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("creating probe entry on tracefs: %w", err)
|
||||
}
|
||||
|
||||
// Kprobes are ephemeral tracepoints and share the same perf event type.
|
||||
fd, err := openTracepointPerfEvent(evt.ID(), args.Pid)
|
||||
if err != nil {
|
||||
// Make sure we clean up the created tracefs event when we return error.
|
||||
// If a livepatch handler is already active on the symbol, the write to
|
||||
// tracefs will succeed, a trace event will show up, but creating the
|
||||
// perf event will fail with EBUSY.
|
||||
_ = evt.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newPerfEvent(fd, evt), nil
|
||||
}
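As an aside on the retprobe handling in pmuProbe above: the kernel exposes the position of the retprobe flag via the PMU's sysfs format file (a line like `config:0`), and the probe code sets that bit in `perf_event_attr.Config`. A minimal self-contained sketch of that computation, with `parseRetprobeBit` as a hypothetical helper standing in for `internal.ReadUint64FromFileOnce`:

```go
package main

import "fmt"

// parseRetprobeBit mimics reading "<pmu>/format/retprobe", whose contents
// look like "config:0\n": the number is the bit position of the retprobe
// flag inside perf_event_attr.Config.
func parseRetprobeBit(format string) (uint64, error) {
	var bit uint64
	if _, err := fmt.Sscanf(format, "config:%d\n", &bit); err != nil {
		return 0, err
	}
	return bit, nil
}

func main() {
	var config uint64
	bit, err := parseRetprobeBit("config:0\n")
	if err != nil {
		panic(err)
	}
	config |= 1 << bit // same operation pmuProbe performs for kretprobes
	fmt.Println(config)
}
```

On most x86 kernels the retprobe bit is 0, so the resulting config is simply 1; the indirection exists because other PMUs may place the flag elsewhere.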
180
vendor/github.com/cilium/ebpf/link/kprobe_multi.go
generated
vendored
@@ -1,180 +0,0 @@
package link

import (
	"errors"
	"fmt"
	"os"
	"unsafe"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

// KprobeMultiOptions defines additional parameters that will be used
// when opening a KprobeMulti Link.
type KprobeMultiOptions struct {
	// Symbols takes a list of kernel symbol names to attach an ebpf program to.
	//
	// Mutually exclusive with Addresses.
	Symbols []string

	// Addresses takes a list of kernel symbol addresses in case they can not
	// be referred to by name.
	//
	// Note that only start addresses can be specified, since the fprobe API
	// limits the attach point to the function entry or return.
	//
	// Mutually exclusive with Symbols.
	Addresses []uintptr

	// Cookies specifies arbitrary values that can be fetched from an eBPF
	// program via `bpf_get_attach_cookie()`.
	//
	// If set, its length should be equal to the length of Symbols or Addresses.
	// Each Cookie is assigned to the Symbol or Address specified at the
	// corresponding slice index.
	Cookies []uint64
}

// KprobeMulti attaches the given eBPF program to the entry point of a given set
// of kernel symbols.
//
// The difference with Kprobe() is that multi-kprobe accomplishes this in a
// single system call, making it significantly faster than attaching many
// probes one at a time.
//
// Requires at least Linux 5.18.
func KprobeMulti(prog *ebpf.Program, opts KprobeMultiOptions) (Link, error) {
	return kprobeMulti(prog, opts, 0)
}

// KretprobeMulti attaches the given eBPF program to the return point of a given
// set of kernel symbols.
//
// The difference with Kretprobe() is that multi-kprobe accomplishes this in a
// single system call, making it significantly faster than attaching many
// probes one at a time.
//
// Requires at least Linux 5.18.
func KretprobeMulti(prog *ebpf.Program, opts KprobeMultiOptions) (Link, error) {
	return kprobeMulti(prog, opts, unix.BPF_F_KPROBE_MULTI_RETURN)
}

func kprobeMulti(prog *ebpf.Program, opts KprobeMultiOptions, flags uint32) (Link, error) {
	if prog == nil {
		return nil, errors.New("cannot attach a nil program")
	}

	syms := uint32(len(opts.Symbols))
	addrs := uint32(len(opts.Addresses))
	cookies := uint32(len(opts.Cookies))

	if syms == 0 && addrs == 0 {
		return nil, fmt.Errorf("one of Symbols or Addresses is required: %w", errInvalidInput)
	}
	if syms != 0 && addrs != 0 {
		return nil, fmt.Errorf("Symbols and Addresses are mutually exclusive: %w", errInvalidInput)
	}
	if cookies > 0 && cookies != syms && cookies != addrs {
		return nil, fmt.Errorf("Cookies must be exactly Symbols or Addresses in length: %w", errInvalidInput)
	}

	if err := haveBPFLinkKprobeMulti(); err != nil {
		return nil, err
	}

	attr := &sys.LinkCreateKprobeMultiAttr{
		ProgFd:           uint32(prog.FD()),
		AttachType:       sys.BPF_TRACE_KPROBE_MULTI,
		KprobeMultiFlags: flags,
	}

	switch {
	case syms != 0:
		attr.Count = syms
		attr.Syms = sys.NewStringSlicePointer(opts.Symbols)

	case addrs != 0:
		attr.Count = addrs
		attr.Addrs = sys.NewPointer(unsafe.Pointer(&opts.Addresses[0]))
	}

	if cookies != 0 {
		attr.Cookies = sys.NewPointer(unsafe.Pointer(&opts.Cookies[0]))
	}

	fd, err := sys.LinkCreateKprobeMulti(attr)
	if errors.Is(err, unix.ESRCH) {
		return nil, fmt.Errorf("couldn't find one or more symbols: %w", os.ErrNotExist)
	}
	if errors.Is(err, unix.EINVAL) {
		return nil, fmt.Errorf("%w (missing kernel symbol or prog's AttachType not AttachTraceKprobeMulti?)", err)
	}
	if err != nil {
		return nil, err
	}

	return &kprobeMultiLink{RawLink{fd, ""}}, nil
}

type kprobeMultiLink struct {
	RawLink
}

var _ Link = (*kprobeMultiLink)(nil)

func (kml *kprobeMultiLink) Update(prog *ebpf.Program) error {
	return fmt.Errorf("update kprobe_multi: %w", ErrNotSupported)
}

func (kml *kprobeMultiLink) Pin(string) error {
	return fmt.Errorf("pin kprobe_multi: %w", ErrNotSupported)
}

func (kml *kprobeMultiLink) Unpin() error {
	return fmt.Errorf("unpin kprobe_multi: %w", ErrNotSupported)
}

var haveBPFLinkKprobeMulti = internal.NewFeatureTest("bpf_link_kprobe_multi", "5.18", func() error {
	prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Name: "probe_kpm_link",
		Type: ebpf.Kprobe,
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0),
			asm.Return(),
		},
		AttachType: ebpf.AttachTraceKprobeMulti,
		License:    "MIT",
	})
	if errors.Is(err, unix.E2BIG) {
		// Kernel doesn't support AttachType field.
		return internal.ErrNotSupported
	}
	if err != nil {
		return err
	}
	defer prog.Close()

	fd, err := sys.LinkCreateKprobeMulti(&sys.LinkCreateKprobeMultiAttr{
		ProgFd:     uint32(prog.FD()),
		AttachType: sys.BPF_TRACE_KPROBE_MULTI,
		Count:      1,
		Syms:       sys.NewStringSlicePointer([]string{"vprintk"}),
	})
	switch {
	case errors.Is(err, unix.EINVAL):
		return internal.ErrNotSupported
	// If CONFIG_FPROBE isn't set.
	case errors.Is(err, unix.EOPNOTSUPP):
		return internal.ErrNotSupported
	case err != nil:
		return err
	}

	fd.Close()

	return nil
})
336
vendor/github.com/cilium/ebpf/link/link.go
generated
vendored
@@ -1,336 +0,0 @@
package link

import (
	"bytes"
	"encoding/binary"
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
)

var ErrNotSupported = internal.ErrNotSupported

// Link represents a Program attached to a BPF hook.
type Link interface {
	// Replace the current program with a new program.
	//
	// Passing a nil program is an error. May return an error wrapping ErrNotSupported.
	Update(*ebpf.Program) error

	// Persist a link by pinning it into a bpffs.
	//
	// May return an error wrapping ErrNotSupported.
	Pin(string) error

	// Undo a previous call to Pin.
	//
	// May return an error wrapping ErrNotSupported.
	Unpin() error

	// Close frees resources.
	//
	// The link will be broken unless it has been successfully pinned.
	// A link may continue past the lifetime of the process if Close is
	// not called.
	Close() error

	// Info returns metadata on a link.
	//
	// May return an error wrapping ErrNotSupported.
	Info() (*Info, error)

	// Prevent external users from implementing this interface.
	isLink()
}

// NewLinkFromFD creates a link from a raw fd.
//
// You should not use fd after calling this function.
func NewLinkFromFD(fd int) (Link, error) {
	sysFD, err := sys.NewFD(fd)
	if err != nil {
		return nil, err
	}

	return wrapRawLink(&RawLink{fd: sysFD})
}

// LoadPinnedLink loads a link that was persisted into a bpffs.
func LoadPinnedLink(fileName string, opts *ebpf.LoadPinOptions) (Link, error) {
	raw, err := loadPinnedRawLink(fileName, opts)
	if err != nil {
		return nil, err
	}

	return wrapRawLink(raw)
}

// wrap a RawLink in a more specific type if possible.
//
// The function takes ownership of raw and closes it on error.
func wrapRawLink(raw *RawLink) (_ Link, err error) {
	defer func() {
		if err != nil {
			raw.Close()
		}
	}()

	info, err := raw.Info()
	if err != nil {
		return nil, err
	}

	switch info.Type {
	case RawTracepointType:
		return &rawTracepoint{*raw}, nil
	case TracingType:
		return &tracing{*raw}, nil
	case CgroupType:
		return &linkCgroup{*raw}, nil
	case IterType:
		return &Iter{*raw}, nil
	case NetNsType:
		return &NetNsLink{*raw}, nil
	case KprobeMultiType:
		return &kprobeMultiLink{*raw}, nil
	case PerfEventType:
		return nil, fmt.Errorf("recovering perf event fd: %w", ErrNotSupported)
	default:
		return raw, nil
	}
}

// ID uniquely identifies a BPF link.
type ID = sys.LinkID

// RawLinkOptions control the creation of a raw link.
type RawLinkOptions struct {
	// File descriptor to attach to. This differs for each attach type.
	Target int
	// Program to attach.
	Program *ebpf.Program
	// Attach must match the attach type of Program.
	Attach ebpf.AttachType
	// BTF is the BTF of the attachment target.
	BTF btf.TypeID
	// Flags control the attach behaviour.
	Flags uint32
}

// Info contains metadata on a link.
type Info struct {
	Type    Type
	ID      ID
	Program ebpf.ProgramID
	extra   interface{}
}

type TracingInfo sys.TracingLinkInfo
type CgroupInfo sys.CgroupLinkInfo
type NetNsInfo sys.NetNsLinkInfo
type XDPInfo sys.XDPLinkInfo

// Tracing returns tracing type-specific link info.
//
// Returns nil if the type-specific link info isn't available.
func (r Info) Tracing() *TracingInfo {
	e, _ := r.extra.(*TracingInfo)
	return e
}

// Cgroup returns cgroup type-specific link info.
//
// Returns nil if the type-specific link info isn't available.
func (r Info) Cgroup() *CgroupInfo {
	e, _ := r.extra.(*CgroupInfo)
	return e
}

// NetNs returns netns type-specific link info.
//
// Returns nil if the type-specific link info isn't available.
func (r Info) NetNs() *NetNsInfo {
	e, _ := r.extra.(*NetNsInfo)
	return e
}

// XDP returns XDP type-specific link info.
//
// Returns nil if the type-specific link info isn't available.
func (r Info) XDP() *XDPInfo {
	e, _ := r.extra.(*XDPInfo)
	return e
}

// RawLink is the low-level API to bpf_link.
//
// You should consider using the higher level interfaces in this
// package instead.
type RawLink struct {
	fd         *sys.FD
	pinnedPath string
}

// AttachRawLink creates a raw link.
func AttachRawLink(opts RawLinkOptions) (*RawLink, error) {
	if err := haveBPFLink(); err != nil {
		return nil, err
	}

	if opts.Target < 0 {
		return nil, fmt.Errorf("invalid target: %s", sys.ErrClosedFd)
	}

	progFd := opts.Program.FD()
	if progFd < 0 {
		return nil, fmt.Errorf("invalid program: %s", sys.ErrClosedFd)
	}

	attr := sys.LinkCreateAttr{
		TargetFd:    uint32(opts.Target),
		ProgFd:      uint32(progFd),
		AttachType:  sys.AttachType(opts.Attach),
		TargetBtfId: opts.BTF,
		Flags:       opts.Flags,
	}
	fd, err := sys.LinkCreate(&attr)
	if err != nil {
		return nil, fmt.Errorf("create link: %w", err)
	}

	return &RawLink{fd, ""}, nil
}

func loadPinnedRawLink(fileName string, opts *ebpf.LoadPinOptions) (*RawLink, error) {
	fd, err := sys.ObjGet(&sys.ObjGetAttr{
		Pathname:  sys.NewStringPointer(fileName),
		FileFlags: opts.Marshal(),
	})
	if err != nil {
		return nil, fmt.Errorf("load pinned link: %w", err)
	}

	return &RawLink{fd, fileName}, nil
}

func (l *RawLink) isLink() {}

// FD returns the raw file descriptor.
func (l *RawLink) FD() int {
	return l.fd.Int()
}

// Close breaks the link.
//
// Use Pin if you want to make the link persistent.
func (l *RawLink) Close() error {
	return l.fd.Close()
}

// Pin persists a link past the lifetime of the process.
//
// Calling Close on a pinned Link will not break the link
// until the pin is removed.
func (l *RawLink) Pin(fileName string) error {
	if err := internal.Pin(l.pinnedPath, fileName, l.fd); err != nil {
		return err
	}
	l.pinnedPath = fileName
	return nil
}

// Unpin implements the Link interface.
func (l *RawLink) Unpin() error {
	if err := internal.Unpin(l.pinnedPath); err != nil {
		return err
	}
	l.pinnedPath = ""
	return nil
}

// IsPinned returns true if the Link has a non-empty pinned path.
func (l *RawLink) IsPinned() bool {
	return l.pinnedPath != ""
}

// Update implements the Link interface.
func (l *RawLink) Update(new *ebpf.Program) error {
	return l.UpdateArgs(RawLinkUpdateOptions{
		New: new,
	})
}

// RawLinkUpdateOptions control the behaviour of RawLink.UpdateArgs.
type RawLinkUpdateOptions struct {
	New   *ebpf.Program
	Old   *ebpf.Program
	Flags uint32
}

// UpdateArgs updates a link based on args.
func (l *RawLink) UpdateArgs(opts RawLinkUpdateOptions) error {
	newFd := opts.New.FD()
	if newFd < 0 {
		return fmt.Errorf("invalid program: %s", sys.ErrClosedFd)
	}

	var oldFd int
	if opts.Old != nil {
		oldFd = opts.Old.FD()
		if oldFd < 0 {
			return fmt.Errorf("invalid replacement program: %s", sys.ErrClosedFd)
		}
	}

	attr := sys.LinkUpdateAttr{
		LinkFd:    l.fd.Uint(),
		NewProgFd: uint32(newFd),
		OldProgFd: uint32(oldFd),
		Flags:     opts.Flags,
	}
	return sys.LinkUpdate(&attr)
}

// Info returns metadata about the link.
func (l *RawLink) Info() (*Info, error) {
	var info sys.LinkInfo

	if err := sys.ObjInfo(l.fd, &info); err != nil {
		return nil, fmt.Errorf("link info: %s", err)
	}

	var extra interface{}
	switch info.Type {
	case CgroupType:
		extra = &CgroupInfo{}
	case NetNsType:
		extra = &NetNsInfo{}
	case TracingType:
		extra = &TracingInfo{}
	case XDPType:
		extra = &XDPInfo{}
	case RawTracepointType, IterType,
		PerfEventType, KprobeMultiType:
		// Extra metadata not supported.
	default:
		return nil, fmt.Errorf("unknown link info type: %d", info.Type)
	}

	if extra != nil {
		buf := bytes.NewReader(info.Extra[:])
		err := binary.Read(buf, internal.NativeEndian, extra)
		if err != nil {
			return nil, fmt.Errorf("cannot read extra link info: %w", err)
		}
	}

	return &Info{
		info.Type,
		info.Id,
		ebpf.ProgramID(info.ProgId),
		extra,
	}, nil
}
36
vendor/github.com/cilium/ebpf/link/netns.go
generated
vendored
@@ -1,36 +0,0 @@
package link

import (
	"fmt"

	"github.com/cilium/ebpf"
)

// NetNsLink is a program attached to a network namespace.
type NetNsLink struct {
	RawLink
}

// AttachNetNs attaches a program to a network namespace.
func AttachNetNs(ns int, prog *ebpf.Program) (*NetNsLink, error) {
	var attach ebpf.AttachType
	switch t := prog.Type(); t {
	case ebpf.FlowDissector:
		attach = ebpf.AttachFlowDissector
	case ebpf.SkLookup:
		attach = ebpf.AttachSkLookup
	default:
		return nil, fmt.Errorf("can't attach %v to network namespace", t)
	}

	link, err := AttachRawLink(RawLinkOptions{
		Target:  ns,
		Program: prog,
		Attach:  attach,
	})
	if err != nil {
		return nil, err
	}

	return &NetNsLink{*link}, nil
}
270
vendor/github.com/cilium/ebpf/link/perf_event.go
generated
vendored
@@ -1,270 +0,0 @@
package link

import (
	"errors"
	"fmt"
	"runtime"
	"unsafe"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/tracefs"
	"github.com/cilium/ebpf/internal/unix"
)

// Getting the terminology right is usually the hardest part. For posterity and
// for staying sane during implementation:
//
// - trace event: Representation of a kernel runtime hook. Filesystem entries
//   under <tracefs>/events. Can be tracepoints (static), kprobes or uprobes.
//   Can be instantiated into perf events (see below).
// - tracepoint: A predetermined hook point in the kernel. Exposed as trace
//   events in (sub)directories under <tracefs>/events. Cannot be closed or
//   removed, they are static.
// - k(ret)probe: Ephemeral trace events based on entry or exit points of
//   exported kernel symbols. kprobe-based (tracefs) trace events can be
//   created system-wide by writing to the <tracefs>/kprobe_events file, or
//   they can be scoped to the current process by creating PMU perf events.
// - u(ret)probe: Ephemeral trace events based on user-provided ELF binaries
//   and offsets. uprobe-based (tracefs) trace events can be
//   created system-wide by writing to the <tracefs>/uprobe_events file, or
//   they can be scoped to the current process by creating PMU perf events.
// - perf event: An object instantiated based on an existing trace event or
//   kernel symbol. Referred to by fd in userspace.
//   Exactly one eBPF program can be attached to a perf event. Multiple perf
//   events can be created from a single trace event. Closing a perf event
//   stops any further invocations of the attached eBPF program.

var (
	errInvalidInput = tracefs.ErrInvalidInput
)

const (
	perfAllThreads = -1
)

// A perfEvent represents a perf event kernel object. Exactly one eBPF program
// can be attached to it. It is created based on a tracefs trace event or a
// Performance Monitoring Unit (PMU).
type perfEvent struct {
	// Trace event backing this perfEvent. May be nil.
	tracefsEvent *tracefs.Event

	// This is the perf event FD.
	fd *sys.FD
}

func newPerfEvent(fd *sys.FD, event *tracefs.Event) *perfEvent {
	pe := &perfEvent{event, fd}
	// Both event and fd have their own finalizer, but we want to
	// guarantee that they are closed in a certain order.
	runtime.SetFinalizer(pe, (*perfEvent).Close)
	return pe
}

func (pe *perfEvent) Close() error {
	runtime.SetFinalizer(pe, nil)

	if err := pe.fd.Close(); err != nil {
		return fmt.Errorf("closing perf event fd: %w", err)
	}

	if pe.tracefsEvent != nil {
		return pe.tracefsEvent.Close()
	}

	return nil
}

// perfEventLink represents a bpf perf link.
type perfEventLink struct {
	RawLink
	pe *perfEvent
}

func (pl *perfEventLink) isLink() {}

// Pinning requires the underlying perf event FD to stay open.
//
// | PerfEvent FD | BpfLink FD | Works                |
// |--------------|------------|----------------------|
// | Open         | Open       | Yes                  |
// | Closed       | Open       | No                   |
// | Open         | Closed     | No (Pin() -> EINVAL) |
// | Closed       | Closed     | No (Pin() -> EINVAL) |
//
// There is currently no pretty way to recover the perf event FD
// when loading a pinned link, so leave as not supported for now.
func (pl *perfEventLink) Pin(string) error {
	return fmt.Errorf("perf event link pin: %w", ErrNotSupported)
}

func (pl *perfEventLink) Unpin() error {
	return fmt.Errorf("perf event link unpin: %w", ErrNotSupported)
}

func (pl *perfEventLink) Close() error {
	if err := pl.fd.Close(); err != nil {
		return fmt.Errorf("perf link close: %w", err)
	}

	if err := pl.pe.Close(); err != nil {
		return fmt.Errorf("perf event close: %w", err)
	}
	return nil
}

func (pl *perfEventLink) Update(prog *ebpf.Program) error {
	return fmt.Errorf("perf event link update: %w", ErrNotSupported)
}

// perfEventIoctl implements Link and handles the perf event lifecycle
// via ioctl().
type perfEventIoctl struct {
	*perfEvent
}

func (pi *perfEventIoctl) isLink() {}

// Since 4.15 (e87c6bc3852b "bpf: permit multiple bpf attachments for a single perf event"),
// calling PERF_EVENT_IOC_SET_BPF appends the given program to a prog_array
// owned by the perf event, which means multiple programs can be attached
// simultaneously.
//
// Before 4.15, calling PERF_EVENT_IOC_SET_BPF more than once on a perf event
// returns EEXIST.
//
// Detaching a program from a perf event is currently not possible, so a
// program replacement mechanism cannot be implemented for perf events.
func (pi *perfEventIoctl) Update(prog *ebpf.Program) error {
	return fmt.Errorf("perf event ioctl update: %w", ErrNotSupported)
}

func (pi *perfEventIoctl) Pin(string) error {
	return fmt.Errorf("perf event ioctl pin: %w", ErrNotSupported)
}

func (pi *perfEventIoctl) Unpin() error {
	return fmt.Errorf("perf event ioctl unpin: %w", ErrNotSupported)
}

func (pi *perfEventIoctl) Info() (*Info, error) {
	return nil, fmt.Errorf("perf event ioctl info: %w", ErrNotSupported)
}

// attach the given eBPF prog to the perf event stored in pe.
// pe must contain a valid perf event fd.
// prog's type must match the program type stored in pe.
func attachPerfEvent(pe *perfEvent, prog *ebpf.Program, cookie uint64) (Link, error) {
	if prog == nil {
		return nil, errors.New("cannot attach a nil program")
	}
	if prog.FD() < 0 {
		return nil, fmt.Errorf("invalid program: %w", sys.ErrClosedFd)
	}

	if err := haveBPFLinkPerfEvent(); err == nil {
		return attachPerfEventLink(pe, prog, cookie)
	}

	if cookie != 0 {
		return nil, fmt.Errorf("cookies are not supported: %w", ErrNotSupported)
	}

	return attachPerfEventIoctl(pe, prog)
}

func attachPerfEventIoctl(pe *perfEvent, prog *ebpf.Program) (*perfEventIoctl, error) {
	// Assign the eBPF program to the perf event.
	err := unix.IoctlSetInt(pe.fd.Int(), unix.PERF_EVENT_IOC_SET_BPF, prog.FD())
	if err != nil {
		return nil, fmt.Errorf("setting perf event bpf program: %w", err)
	}

	// PERF_EVENT_IOC_ENABLE and _DISABLE ignore their given values.
	if err := unix.IoctlSetInt(pe.fd.Int(), unix.PERF_EVENT_IOC_ENABLE, 0); err != nil {
		return nil, fmt.Errorf("enable perf event: %s", err)
	}

	return &perfEventIoctl{pe}, nil
}

// Use the bpf api to attach the perf event (BPF_LINK_TYPE_PERF_EVENT, 5.15+).
//
// https://github.com/torvalds/linux/commit/b89fbfbb854c9afc3047e8273cc3a694650b802e
func attachPerfEventLink(pe *perfEvent, prog *ebpf.Program, cookie uint64) (*perfEventLink, error) {
	fd, err := sys.LinkCreatePerfEvent(&sys.LinkCreatePerfEventAttr{
		ProgFd:     uint32(prog.FD()),
		TargetFd:   pe.fd.Uint(),
		AttachType: sys.BPF_PERF_EVENT,
		BpfCookie:  cookie,
	})
	if err != nil {
		return nil, fmt.Errorf("cannot create bpf perf link: %v", err)
	}

	return &perfEventLink{RawLink{fd: fd}, pe}, nil
}

// unsafeStringPtr returns an unsafe.Pointer to a NUL-terminated copy of str.
func unsafeStringPtr(str string) (unsafe.Pointer, error) {
	p, err := unix.BytePtrFromString(str)
	if err != nil {
		return nil, err
	}
	return unsafe.Pointer(p), nil
}

// openTracepointPerfEvent opens a tracepoint-type perf event. System-wide
// [k,u]probes created by writing to <tracefs>/[k,u]probe_events are tracepoints
// behind the scenes, and can be attached to using these perf events.
func openTracepointPerfEvent(tid uint64, pid int) (*sys.FD, error) {
	attr := unix.PerfEventAttr{
		Type:        unix.PERF_TYPE_TRACEPOINT,
		Config:      tid,
		Sample_type: unix.PERF_SAMPLE_RAW,
		Sample:      1,
		Wakeup:      1,
	}

	fd, err := unix.PerfEventOpen(&attr, pid, 0, -1, unix.PERF_FLAG_FD_CLOEXEC)
	if err != nil {
		return nil, fmt.Errorf("opening tracepoint perf event: %w", err)
	}

	return sys.NewFD(fd)
}

// Probe BPF perf link.
//
// https://elixir.bootlin.com/linux/v5.16.8/source/kernel/bpf/syscall.c#L4307
// https://github.com/torvalds/linux/commit/b89fbfbb854c9afc3047e8273cc3a694650b802e
var haveBPFLinkPerfEvent = internal.NewFeatureTest("bpf_link_perf_event", "5.15", func() error {
	prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Name: "probe_bpf_perf_link",
		Type: ebpf.Kprobe,
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0),
			asm.Return(),
		},
		License: "MIT",
	})
	if err != nil {
		return err
	}
	defer prog.Close()

	_, err = sys.LinkCreatePerfEvent(&sys.LinkCreatePerfEventAttr{
		ProgFd:     uint32(prog.FD()),
		AttachType: sys.BPF_PERF_EVENT,
	})
	if errors.Is(err, unix.EINVAL) {
		return internal.ErrNotSupported
	}
	if errors.Is(err, unix.EBADF) {
		return nil
	}
	return err
})
|
||||
76
vendor/github.com/cilium/ebpf/link/program.go
generated
vendored
@@ -1,76 +0,0 @@
package link

import (
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/sys"
)

type RawAttachProgramOptions struct {
	// File descriptor to attach to. This differs for each attach type.
	Target int
	// Program to attach.
	Program *ebpf.Program
	// Program to replace (cgroups).
	Replace *ebpf.Program
	// Attach must match the attach type of Program (and Replace).
	Attach ebpf.AttachType
	// Flags control the attach behaviour. This differs for each attach type.
	Flags uint32
}

// RawAttachProgram is a low level wrapper around BPF_PROG_ATTACH.
//
// You should use one of the higher level abstractions available in this
// package if possible.
func RawAttachProgram(opts RawAttachProgramOptions) error {
	if err := haveProgAttach(); err != nil {
		return err
	}

	var replaceFd uint32
	if opts.Replace != nil {
		replaceFd = uint32(opts.Replace.FD())
	}

	attr := sys.ProgAttachAttr{
		TargetFd:     uint32(opts.Target),
		AttachBpfFd:  uint32(opts.Program.FD()),
		ReplaceBpfFd: replaceFd,
		AttachType:   uint32(opts.Attach),
		AttachFlags:  uint32(opts.Flags),
	}

	if err := sys.ProgAttach(&attr); err != nil {
		return fmt.Errorf("can't attach program: %w", err)
	}
	return nil
}

type RawDetachProgramOptions struct {
	Target  int
	Program *ebpf.Program
	Attach  ebpf.AttachType
}

// RawDetachProgram is a low level wrapper around BPF_PROG_DETACH.
//
// You should use one of the higher level abstractions available in this
// package if possible.
func RawDetachProgram(opts RawDetachProgramOptions) error {
	if err := haveProgAttach(); err != nil {
		return err
	}

	attr := sys.ProgDetachAttr{
		TargetFd:    uint32(opts.Target),
		AttachBpfFd: uint32(opts.Program.FD()),
		AttachType:  uint32(opts.Attach),
	}
	if err := sys.ProgDetach(&attr); err != nil {
		return fmt.Errorf("can't detach program: %w", err)
	}

	return nil
}
63
vendor/github.com/cilium/ebpf/link/query.go
generated
vendored
@@ -1,63 +0,0 @@
package link

import (
	"fmt"
	"os"
	"unsafe"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/sys"
)

// QueryOptions defines additional parameters when querying for programs.
type QueryOptions struct {
	// Path can be a path to a cgroup, netns or LIRC2 device.
	Path string
	// Attach specifies the AttachType of the programs queried for.
	Attach ebpf.AttachType
	// QueryFlags are flags for BPF_PROG_QUERY, e.g. BPF_F_QUERY_EFFECTIVE.
	QueryFlags uint32
}

// QueryPrograms retrieves ProgramIDs associated with the AttachType.
//
// Returns (nil, nil) if there are no programs attached to the queried kernel
// resource. Calling QueryPrograms on a kernel missing PROG_QUERY will result in
// ErrNotSupported.
func QueryPrograms(opts QueryOptions) ([]ebpf.ProgramID, error) {
	if haveProgQuery() != nil {
		return nil, fmt.Errorf("can't query program IDs: %w", ErrNotSupported)
	}

	f, err := os.Open(opts.Path)
	if err != nil {
		return nil, fmt.Errorf("can't open file: %w", err)
	}
	defer f.Close()

	// Query the number of programs first, to allocate a correctly sized slice.
	attr := sys.ProgQueryAttr{
		TargetFd:   uint32(f.Fd()),
		AttachType: sys.AttachType(opts.Attach),
		QueryFlags: opts.QueryFlags,
	}
	if err := sys.ProgQuery(&attr); err != nil {
		return nil, fmt.Errorf("can't query program count: %w", err)
	}

	// Return nil if no programs are attached.
	if attr.ProgCount == 0 {
		return nil, nil
	}

	// We have at least one program, so we query again with a buffer.
	progIds := make([]ebpf.ProgramID, attr.ProgCount)
	attr.ProgIds = sys.NewPointer(unsafe.Pointer(&progIds[0]))
	attr.ProgCount = uint32(len(progIds))
	if err := sys.ProgQuery(&attr); err != nil {
		return nil, fmt.Errorf("can't query program IDs: %w", err)
	}

	return progIds, nil
}
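QueryPrograms issues BPF_PROG_QUERY twice: once with no output buffer to learn the count, then again with a slice sized to that count. The same count-then-fetch shape, with the syscall swapped for a hypothetical in-memory source so it runs anywhere, looks like this:

```go
package main

import "fmt"

// queryIDs mimics BPF_PROG_QUERY's contract: with a nil buffer it only
// reports the count; otherwise it fills out and returns entries written.
func queryIDs(src []uint32, out []uint32) int {
	if out == nil {
		return len(src)
	}
	return copy(out, src)
}

// fetchAll is the two-pass pattern from QueryPrograms: query the count,
// return nil when nothing is attached, else allocate and query again.
func fetchAll(src []uint32) []uint32 {
	n := queryIDs(src, nil)
	if n == 0 {
		return nil
	}
	ids := make([]uint32, n)
	queryIDs(src, ids)
	return ids
}

func main() {
	fmt.Println(fetchAll([]uint32{7, 9})) // [7 9]
	fmt.Println(fetchAll(nil) == nil)     // true
}
```

The second pass is still racy against concurrent attachment, which is why the real syscall takes the count again and may return fewer IDs.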
87
vendor/github.com/cilium/ebpf/link/raw_tracepoint.go
generated
vendored
@@ -1,87 +0,0 @@
package link

import (
	"errors"
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/sys"
)

type RawTracepointOptions struct {
	// Tracepoint name.
	Name string
	// Program must be of type RawTracepoint*
	Program *ebpf.Program
}

// AttachRawTracepoint links a BPF program to a raw_tracepoint.
//
// Requires at least Linux 4.17.
func AttachRawTracepoint(opts RawTracepointOptions) (Link, error) {
	if t := opts.Program.Type(); t != ebpf.RawTracepoint && t != ebpf.RawTracepointWritable {
		return nil, fmt.Errorf("invalid program type %s, expected RawTracepoint(Writable)", t)
	}
	if opts.Program.FD() < 0 {
		return nil, fmt.Errorf("invalid program: %w", sys.ErrClosedFd)
	}

	fd, err := sys.RawTracepointOpen(&sys.RawTracepointOpenAttr{
		Name:   sys.NewStringPointer(opts.Name),
		ProgFd: uint32(opts.Program.FD()),
	})
	if err != nil {
		return nil, err
	}

	err = haveBPFLink()
	if errors.Is(err, ErrNotSupported) {
		// Prior to commit 70ed506c3bbc ("bpf: Introduce pinnable bpf_link abstraction")
		// raw_tracepoints are just a plain fd.
		return &simpleRawTracepoint{fd}, nil
	}

	if err != nil {
		return nil, err
	}

	return &rawTracepoint{RawLink{fd: fd}}, nil
}

type simpleRawTracepoint struct {
	fd *sys.FD
}

var _ Link = (*simpleRawTracepoint)(nil)

func (frt *simpleRawTracepoint) isLink() {}

func (frt *simpleRawTracepoint) Close() error {
	return frt.fd.Close()
}

func (frt *simpleRawTracepoint) Update(_ *ebpf.Program) error {
	return fmt.Errorf("update raw_tracepoint: %w", ErrNotSupported)
}

func (frt *simpleRawTracepoint) Pin(string) error {
	return fmt.Errorf("pin raw_tracepoint: %w", ErrNotSupported)
}

func (frt *simpleRawTracepoint) Unpin() error {
	return fmt.Errorf("unpin raw_tracepoint: %w", ErrNotSupported)
}

func (frt *simpleRawTracepoint) Info() (*Info, error) {
	return nil, fmt.Errorf("can't get raw_tracepoint info: %w", ErrNotSupported)
}

type rawTracepoint struct {
	RawLink
}

var _ Link = (*rawTracepoint)(nil)

func (rt *rawTracepoint) Update(_ *ebpf.Program) error {
	return fmt.Errorf("update raw_tracepoint: %w", ErrNotSupported)
}
40
vendor/github.com/cilium/ebpf/link/socket_filter.go
generated
vendored
@@ -1,40 +0,0 @@
package link

import (
	"syscall"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/unix"
)

// AttachSocketFilter attaches a SocketFilter BPF program to a socket.
func AttachSocketFilter(conn syscall.Conn, program *ebpf.Program) error {
	rawConn, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	var ssoErr error
	err = rawConn.Control(func(fd uintptr) {
		ssoErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_ATTACH_BPF, program.FD())
	})
	if ssoErr != nil {
		return ssoErr
	}
	return err
}

// DetachSocketFilter detaches a SocketFilter BPF program from a socket.
func DetachSocketFilter(conn syscall.Conn) error {
	rawConn, err := conn.SyscallConn()
	if err != nil {
		return err
	}
	var ssoErr error
	err = rawConn.Control(func(fd uintptr) {
		ssoErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_DETACH_BPF, 0)
	})
	if ssoErr != nil {
		return ssoErr
	}
	return err
}
123
vendor/github.com/cilium/ebpf/link/syscalls.go
generated
vendored
@@ -1,123 +0,0 @@
package link

import (
	"errors"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

// Type is the kind of link.
type Type = sys.LinkType

// Valid link types.
const (
	UnspecifiedType   = sys.BPF_LINK_TYPE_UNSPEC
	RawTracepointType = sys.BPF_LINK_TYPE_RAW_TRACEPOINT
	TracingType       = sys.BPF_LINK_TYPE_TRACING
	CgroupType        = sys.BPF_LINK_TYPE_CGROUP
	IterType          = sys.BPF_LINK_TYPE_ITER
	NetNsType         = sys.BPF_LINK_TYPE_NETNS
	XDPType           = sys.BPF_LINK_TYPE_XDP
	PerfEventType     = sys.BPF_LINK_TYPE_PERF_EVENT
	KprobeMultiType   = sys.BPF_LINK_TYPE_KPROBE_MULTI
)

var haveProgAttach = internal.NewFeatureTest("BPF_PROG_ATTACH", "4.10", func() error {
	prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Type:    ebpf.CGroupSKB,
		License: "MIT",
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0),
			asm.Return(),
		},
	})
	if err != nil {
		return internal.ErrNotSupported
	}

	// BPF_PROG_ATTACH was introduced at the same time as CGroupSKB,
	// so being able to load the program is enough to infer that we
	// have the syscall.
	prog.Close()
	return nil
})

var haveProgAttachReplace = internal.NewFeatureTest("BPF_PROG_ATTACH atomic replacement of MULTI progs", "5.5", func() error {
	if err := haveProgAttach(); err != nil {
		return err
	}

	prog, err := ebpf.NewProgram(&ebpf.ProgramSpec{
		Type:       ebpf.CGroupSKB,
		AttachType: ebpf.AttachCGroupInetIngress,
		License:    "MIT",
		Instructions: asm.Instructions{
			asm.Mov.Imm(asm.R0, 0),
			asm.Return(),
		},
	})
	if err != nil {
		return internal.ErrNotSupported
	}
	defer prog.Close()

	// We know that we have BPF_PROG_ATTACH since we can load CGroupSKB programs.
	// If passing BPF_F_REPLACE gives us EINVAL we know that the feature isn't
	// present.
	attr := sys.ProgAttachAttr{
		// We rely on this being checked after attachFlags.
		TargetFd:    ^uint32(0),
		AttachBpfFd: uint32(prog.FD()),
		AttachType:  uint32(ebpf.AttachCGroupInetIngress),
		AttachFlags: uint32(flagReplace),
	}

	err = sys.ProgAttach(&attr)
	if errors.Is(err, unix.EINVAL) {
		return internal.ErrNotSupported
	}
	if errors.Is(err, unix.EBADF) {
		return nil
	}
	return err
})

var haveBPFLink = internal.NewFeatureTest("bpf_link", "5.7", func() error {
	attr := sys.LinkCreateAttr{
		// This is a hopefully invalid file descriptor, which triggers EBADF.
		TargetFd:   ^uint32(0),
		ProgFd:     ^uint32(0),
		AttachType: sys.AttachType(ebpf.AttachCGroupInetIngress),
	}
	_, err := sys.LinkCreate(&attr)
	if errors.Is(err, unix.EINVAL) {
		return internal.ErrNotSupported
	}
	if errors.Is(err, unix.EBADF) {
		return nil
	}
	return err
})

var haveProgQuery = internal.NewFeatureTest("BPF_PROG_QUERY", "4.15", func() error {
	attr := sys.ProgQueryAttr{
		// We rely on this being checked during the syscall.
		// With an otherwise correct payload we expect EBADF here
		// as an indication that the feature is present.
		TargetFd:   ^uint32(0),
		AttachType: sys.AttachType(ebpf.AttachCGroupInetIngress),
	}

	err := sys.ProgQuery(&attr)
	if errors.Is(err, unix.EINVAL) {
		return internal.ErrNotSupported
	}
	if errors.Is(err, unix.EBADF) {
		return nil
	}
	return err
})
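All of the haveProgAttach/haveBPFLink/haveProgQuery values are produced by internal.NewFeatureTest, which runs its probe closure at most once and caches the result for every later call. A self-contained sketch of that memoization; `newFeatureTest` here is illustrative, not the internal package's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// newFeatureTest returns a func that runs probe exactly once and caches
// its error, mirroring how the feature tests above behave when called
// repeatedly from attach paths.
func newFeatureTest(probe func() error) func() error {
	var (
		once sync.Once
		err  error
	)
	return func() error {
		once.Do(func() { err = probe() })
		return err
	}
}

func main() {
	calls := 0
	have := newFeatureTest(func() error {
		calls++
		return nil // pretend the kernel supports the feature
	})

	_ = have()
	_ = have()
	fmt.Println(calls) // 1: the probe ran exactly once
}
```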
68
vendor/github.com/cilium/ebpf/link/tracepoint.go
generated
vendored
@@ -1,68 +0,0 @@
package link

import (
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal/tracefs"
)

// TracepointOptions defines additional parameters that will be used
// when loading Tracepoints.
type TracepointOptions struct {
	// Arbitrary value that can be fetched from an eBPF program
	// via `bpf_get_attach_cookie()`.
	//
	// Needs kernel 5.15+.
	Cookie uint64
}

// Tracepoint attaches the given eBPF program to the tracepoint with the given
// group and name. See /sys/kernel/tracing/events to find available
// tracepoints. The top-level directory is the group, the event's subdirectory
// is the name. Example:
//
//	tp, err := Tracepoint("syscalls", "sys_enter_fork", prog, nil)
//
// Losing the reference to the resulting Link (tp) will close the Tracepoint
// and prevent further execution of prog. The Link must be Closed during
// program shutdown to avoid leaking system resources.
//
// Note that attaching eBPF programs to syscalls (sys_enter_*/sys_exit_*) is
// only possible as of kernel 4.14 (commit cf5f5ce).
func Tracepoint(group, name string, prog *ebpf.Program, opts *TracepointOptions) (Link, error) {
	if group == "" || name == "" {
		return nil, fmt.Errorf("group and name cannot be empty: %w", errInvalidInput)
	}
	if prog == nil {
		return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput)
	}
	if prog.Type() != ebpf.TracePoint {
		return nil, fmt.Errorf("eBPF program type %s is not a Tracepoint: %w", prog.Type(), errInvalidInput)
	}

	tid, err := tracefs.EventID(group, name)
	if err != nil {
		return nil, err
	}

	fd, err := openTracepointPerfEvent(tid, perfAllThreads)
	if err != nil {
		return nil, err
	}

	var cookie uint64
	if opts != nil {
		cookie = opts.Cookie
	}

	pe := newPerfEvent(fd, nil)

	lnk, err := attachPerfEvent(pe, prog, cookie)
	if err != nil {
		pe.Close()
		return nil, err
	}

	return lnk, nil
}
199
vendor/github.com/cilium/ebpf/link/tracing.go
generated
vendored
@@ -1,199 +0,0 @@
package link

import (
	"errors"
	"fmt"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal/sys"
	"github.com/cilium/ebpf/internal/unix"
)

type tracing struct {
	RawLink
}

func (f *tracing) Update(new *ebpf.Program) error {
	return fmt.Errorf("tracing update: %w", ErrNotSupported)
}

// AttachFreplace attaches the given eBPF program to the function it replaces.
//
// The program and name can either be provided at link time, or can be provided
// at program load time. If they were provided at load time, they should be nil
// and empty respectively here, as they will be ignored by the kernel.
// Examples:
//
//	AttachFreplace(dispatcher, "function", replacement)
//	AttachFreplace(nil, "", replacement)
func AttachFreplace(targetProg *ebpf.Program, name string, prog *ebpf.Program) (Link, error) {
	if (name == "") != (targetProg == nil) {
		return nil, fmt.Errorf("must provide both or neither of name and targetProg: %w", errInvalidInput)
	}
	if prog == nil {
		return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput)
	}
	if prog.Type() != ebpf.Extension {
		return nil, fmt.Errorf("eBPF program type %s is not an Extension: %w", prog.Type(), errInvalidInput)
	}

	var (
		target int
		typeID btf.TypeID
	)
	if targetProg != nil {
		btfHandle, err := targetProg.Handle()
		if err != nil {
			return nil, err
		}
		defer btfHandle.Close()

		spec, err := btfHandle.Spec(nil)
		if err != nil {
			return nil, err
		}

		var function *btf.Func
		if err := spec.TypeByName(name, &function); err != nil {
			return nil, err
		}

		target = targetProg.FD()
		typeID, err = spec.TypeID(function)
		if err != nil {
			return nil, err
		}
	}

	link, err := AttachRawLink(RawLinkOptions{
		Target:  target,
		Program: prog,
		Attach:  ebpf.AttachNone,
		BTF:     typeID,
	})
	if errors.Is(err, sys.ENOTSUPP) {
		// This may be returned by bpf_tracing_prog_attach via bpf_arch_text_poke.
		return nil, fmt.Errorf("create raw tracepoint: %w", ErrNotSupported)
	}
	if err != nil {
		return nil, err
	}

	return &tracing{*link}, nil
}

type TracingOptions struct {
	// Program must be of type Tracing with attach type
	// AttachTraceFEntry/AttachTraceFExit/AttachModifyReturn or
	// AttachTraceRawTp.
	Program *ebpf.Program
	// Program attach type. Can be one of:
	//	- AttachTraceFEntry
	//	- AttachTraceFExit
	//	- AttachModifyReturn
	//	- AttachTraceRawTp
	// This field is optional.
	AttachType ebpf.AttachType
	// Arbitrary value that can be fetched from an eBPF program
	// via `bpf_get_attach_cookie()`.
	Cookie uint64
}

type LSMOptions struct {
	// Program must be of type LSM with attach type
	// AttachLSMMac.
	Program *ebpf.Program
	// Arbitrary value that can be fetched from an eBPF program
	// via `bpf_get_attach_cookie()`.
	Cookie uint64
}

// attachBTFID links a BPF program of type Tracing or LSM to the BTF ID it
// attaches to.
func attachBTFID(program *ebpf.Program, at ebpf.AttachType, cookie uint64) (Link, error) {
	if program.FD() < 0 {
		return nil, fmt.Errorf("invalid program: %w", sys.ErrClosedFd)
	}

	var (
		fd  *sys.FD
		err error
	)
	switch at {
	case ebpf.AttachTraceFEntry, ebpf.AttachTraceFExit, ebpf.AttachTraceRawTp,
		ebpf.AttachModifyReturn, ebpf.AttachLSMMac:
		// Attach via BPF link.
		fd, err = sys.LinkCreateTracing(&sys.LinkCreateTracingAttr{
			ProgFd:     uint32(program.FD()),
			AttachType: sys.AttachType(at),
			Cookie:     cookie,
		})
		if err == nil {
			break
		}
		if !errors.Is(err, unix.EINVAL) && !errors.Is(err, sys.ENOTSUPP) {
			return nil, fmt.Errorf("create tracing link: %w", err)
		}
		fallthrough
	case ebpf.AttachNone:
		// Attach via RawTracepointOpen.
		if cookie > 0 {
			return nil, fmt.Errorf("create raw tracepoint with cookie: %w", ErrNotSupported)
		}

		fd, err = sys.RawTracepointOpen(&sys.RawTracepointOpenAttr{
			ProgFd: uint32(program.FD()),
		})
		if errors.Is(err, sys.ENOTSUPP) {
			// This may be returned by bpf_tracing_prog_attach via bpf_arch_text_poke.
			return nil, fmt.Errorf("create raw tracepoint: %w", ErrNotSupported)
		}
		if err != nil {
			return nil, fmt.Errorf("create raw tracepoint: %w", err)
		}
	default:
		return nil, fmt.Errorf("invalid attach type: %s", at.String())
	}

	raw := RawLink{fd: fd}
	info, err := raw.Info()
	if err != nil {
		raw.Close()
		return nil, err
	}

	if info.Type == RawTracepointType {
		// Sadness upon sadness: a Tracing program with AttachRawTp returns
		// a raw_tracepoint link. Other types return a tracing link.
		return &rawTracepoint{raw}, nil
	}
	return &tracing{raw}, nil
}

// AttachTracing links a tracing (fentry/fexit/fmod_ret) BPF program or
// a BTF-powered raw tracepoint (tp_btf) BPF Program to a BPF hook defined
// in kernel modules.
func AttachTracing(opts TracingOptions) (Link, error) {
	if t := opts.Program.Type(); t != ebpf.Tracing {
		return nil, fmt.Errorf("invalid program type %s, expected Tracing", t)
	}

	switch opts.AttachType {
	case ebpf.AttachTraceFEntry, ebpf.AttachTraceFExit, ebpf.AttachModifyReturn,
		ebpf.AttachTraceRawTp, ebpf.AttachNone:
	default:
		return nil, fmt.Errorf("invalid attach type: %s", opts.AttachType.String())
	}

	return attachBTFID(opts.Program, opts.AttachType, opts.Cookie)
}

// AttachLSM links a Linux security module (LSM) BPF Program to a BPF
// hook defined in kernel modules.
func AttachLSM(opts LSMOptions) (Link, error) {
	if t := opts.Program.Type(); t != ebpf.LSM {
		return nil, fmt.Errorf("invalid program type %s, expected LSM", t)
	}

	return attachBTFID(opts.Program, ebpf.AttachLSMMac, opts.Cookie)
}
328
vendor/github.com/cilium/ebpf/link/uprobe.go
generated
vendored
@@ -1,328 +0,0 @@
package link

import (
	"debug/elf"
	"errors"
	"fmt"
	"os"
	"sync"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/tracefs"
)

var (
	uprobeRefCtrOffsetPMUPath = "/sys/bus/event_source/devices/uprobe/format/ref_ctr_offset"
	// elixir.bootlin.com/linux/v5.15-rc7/source/kernel/events/core.c#L9799
	uprobeRefCtrOffsetShift = 32
	haveRefCtrOffsetPMU     = internal.NewFeatureTest("RefCtrOffsetPMU", "4.20", func() error {
		_, err := os.Stat(uprobeRefCtrOffsetPMUPath)
		if err != nil {
			return internal.ErrNotSupported
		}
		return nil
	})

	// ErrNoSymbol indicates that the given symbol was not found
	// in the ELF symbols table.
	ErrNoSymbol = errors.New("not found")
)

// Executable defines an executable program on the filesystem.
type Executable struct {
	// Path of the executable on the filesystem.
	path string
	// Parsed ELF and dynamic symbols' addresses.
	addresses map[string]uint64
	// Keep track of symbol table lazy load.
	addressesOnce sync.Once
}

// UprobeOptions defines additional parameters that will be used
// when loading Uprobes.
type UprobeOptions struct {
	// Symbol address. Must be provided in case of external symbols (shared libs).
	// If set, overrides the address eventually parsed from the executable.
	Address uint64
	// The offset relative to given symbol. Useful when tracing an arbitrary point
	// inside the frame of given symbol.
	//
	// Note: this field changed from being an absolute offset to being relative
	// to Address.
	Offset uint64
	// Only set the uprobe on the given process ID. Useful when tracing
	// shared library calls or programs that have many running instances.
	PID int
	// Automatically manage SDT reference counts (semaphores).
	//
	// If this field is set, the Kernel will increment/decrement the
	// semaphore located in the process memory at the provided address on
	// probe attach/detach.
	//
	// See also:
	// sourceware.org/systemtap/wiki/UserSpaceProbeImplementation (Semaphore Handling)
	// github.com/torvalds/linux/commit/1cc33161a83d
	// github.com/torvalds/linux/commit/a6ca88b241d5
	RefCtrOffset uint64
	// Arbitrary value that can be fetched from an eBPF program
	// via `bpf_get_attach_cookie()`.
	//
	// Needs kernel 5.15+.
	Cookie uint64
	// Prefix used for the event name if the uprobe must be attached using tracefs.
	// The group name will be formatted as `<prefix>_<randomstr>`.
	// The default empty string is equivalent to "ebpf" as the prefix.
	TraceFSPrefix string
}

func (uo *UprobeOptions) cookie() uint64 {
	if uo == nil {
		return 0
	}
	return uo.Cookie
}

// To open a new Executable, use:
//
//	OpenExecutable("/bin/bash")
//
// The returned value can then be used to open Uprobe(s).
func OpenExecutable(path string) (*Executable, error) {
	if path == "" {
		return nil, fmt.Errorf("path cannot be empty")
	}

	f, err := internal.OpenSafeELFFile(path)
	if err != nil {
		return nil, fmt.Errorf("parse ELF file: %w", err)
	}
	defer f.Close()

	if f.Type != elf.ET_EXEC && f.Type != elf.ET_DYN {
		// ELF is not an executable or a shared object.
		return nil, errors.New("the given file is not an executable or a shared object")
	}

	return &Executable{
		path:      path,
		addresses: make(map[string]uint64),
	}, nil
}

func (ex *Executable) load(f *internal.SafeELFFile) error {
	syms, err := f.Symbols()
	if err != nil && !errors.Is(err, elf.ErrNoSymbols) {
		return err
	}

	dynsyms, err := f.DynamicSymbols()
	if err != nil && !errors.Is(err, elf.ErrNoSymbols) {
		return err
	}

	syms = append(syms, dynsyms...)

	for _, s := range syms {
		if elf.ST_TYPE(s.Info) != elf.STT_FUNC {
			// Symbol not associated with a function or other executable code.
			continue
		}

		address := s.Value

		// Loop over ELF segments.
		for _, prog := range f.Progs {
			// Skip uninteresting segments.
			if prog.Type != elf.PT_LOAD || (prog.Flags&elf.PF_X) == 0 {
				continue
			}

			if prog.Vaddr <= s.Value && s.Value < (prog.Vaddr+prog.Memsz) {
				// If the symbol value is contained in the segment, calculate
				// the symbol offset.
				//
				// fn symbol offset = fn symbol VA - .text VA + .text offset
				//
				// stackoverflow.com/a/40249502
				address = s.Value - prog.Vaddr + prog.Off
				break
			}
		}

		ex.addresses[s.Name] = address
	}

	return nil
}
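The segment scan in load converts a symbol's virtual address into a file offset with `offset = Value - Vaddr + Off`, which is the offset the kernel expects when registering a uprobe. That arithmetic in isolation, using made-up segment numbers rather than values from a real binary:

```go
package main

import "fmt"

// symbolOffset applies the formula from Executable.load:
// fn symbol offset = fn symbol VA - segment VA + segment file offset.
func symbolOffset(value, vaddr, off uint64) uint64 {
	return value - vaddr + off
}

func main() {
	// A symbol at VA 0x401130, inside an executable PT_LOAD segment
	// mapped from file offset 0x1000 at VA 0x401000, lives at file
	// offset 0x1130.
	fmt.Printf("%#x\n", symbolOffset(0x401130, 0x401000, 0x1000)) // 0x1130
}
```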
|
||||
// address calculates the address of a symbol in the executable.
|
||||
//
|
||||
// opts must not be nil.
|
||||
func (ex *Executable) address(symbol string, opts *UprobeOptions) (uint64, error) {
|
||||
if opts.Address > 0 {
|
||||
return opts.Address + opts.Offset, nil
|
||||
}
|
||||
|
||||
var err error
|
||||
ex.addressesOnce.Do(func() {
|
||||
var f *internal.SafeELFFile
|
||||
f, err = internal.OpenSafeELFFile(ex.path)
|
||||
if err != nil {
|
||||
err = fmt.Errorf("parse ELF file: %w", err)
|
||||
return
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
err = ex.load(f)
|
||||
})
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("lazy load symbols: %w", err)
|
||||
}
|
||||
|
||||
address, ok := ex.addresses[symbol]
|
||||
if !ok {
|
||||
return 0, fmt.Errorf("symbol %s: %w", symbol, ErrNoSymbol)
|
||||
}
|
||||
|
||||
// Symbols with location 0 from section undef are shared library calls and
|
||||
// are relocated before the binary is executed. Dynamic linking is not
|
||||
// implemented by the library, so mark this as unsupported for now.
|
||||
//
|
||||
// Since only offset values are stored and not elf.Symbol, if the value is 0,
|
||||
// assume it's an external symbol.
|
||||
if address == 0 {
|
||||
return 0, fmt.Errorf("cannot resolve %s library call '%s': %w "+
|
||||
"(consider providing UprobeOptions.Address)", ex.path, symbol, ErrNotSupported)
|
||||
}
|
||||
|
||||
return address + opts.Offset, nil
|
||||
}
|
||||
|
||||
// Uprobe attaches the given eBPF program to a perf event that fires when the
|
||||
// given symbol starts executing in the given Executable.
|
||||
// For example, /bin/bash::main():
|
||||
//
|
||||
// ex, _ = OpenExecutable("/bin/bash")
|
||||
// ex.Uprobe("main", prog, nil)
|
||||
//
|
||||
// When using symbols which belongs to shared libraries,
|
||||
// an offset must be provided via options:
|
||||
//
|
||||
// up, err := ex.Uprobe("main", prog, &UprobeOptions{Offset: 0x123})
|
||||
//
|
||||
// Note: Setting the Offset field in the options supersedes the symbol's offset.
|
||||
//
|
||||
// Losing the reference to the resulting Link (up) will close the Uprobe
|
||||
// and prevent further execution of prog. The Link must be Closed during
|
||||
// program shutdown to avoid leaking system resources.
|
||||
//
|
||||
// Functions provided by shared libraries can currently not be traced and
|
||||
// will result in an ErrNotSupported.
|
||||
func (ex *Executable) Uprobe(symbol string, prog *ebpf.Program, opts *UprobeOptions) (Link, error) {
|
||||
u, err := ex.uprobe(symbol, prog, opts, false)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
lnk, err := attachPerfEvent(u, prog, opts.cookie())
|
||||
if err != nil {
|
||||
u.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return lnk, nil
|
||||
}
|
||||
|
||||
// Uretprobe attaches the given eBPF program to a perf event that fires right
// before the given symbol exits. For example, /bin/bash::main():
//
//	ex, _ = OpenExecutable("/bin/bash")
//	ex.Uretprobe("main", prog, nil)
//
// When using symbols which belong to shared libraries,
// an offset must be provided via options:
//
//	up, err := ex.Uretprobe("main", prog, &UprobeOptions{Offset: 0x123})
//
// Note: Setting the Offset field in the options supersedes the symbol's offset.
//
// Losing the reference to the resulting Link (up) will close the Uprobe
// and prevent further execution of prog. The Link must be Closed during
// program shutdown to avoid leaking system resources.
//
// Functions provided by shared libraries can currently not be traced and
// will result in an ErrNotSupported.
func (ex *Executable) Uretprobe(symbol string, prog *ebpf.Program, opts *UprobeOptions) (Link, error) {
	u, err := ex.uprobe(symbol, prog, opts, true)
	if err != nil {
		return nil, err
	}

	lnk, err := attachPerfEvent(u, prog, opts.cookie())
	if err != nil {
		u.Close()
		return nil, err
	}

	return lnk, nil
}

// uprobe opens a perf event for the given binary/symbol and attaches prog to it.
// If ret is true, create a uretprobe.
func (ex *Executable) uprobe(symbol string, prog *ebpf.Program, opts *UprobeOptions, ret bool) (*perfEvent, error) {
	if prog == nil {
		return nil, fmt.Errorf("prog cannot be nil: %w", errInvalidInput)
	}
	if prog.Type() != ebpf.Kprobe {
		return nil, fmt.Errorf("eBPF program type %s is not Kprobe: %w", prog.Type(), errInvalidInput)
	}
	if opts == nil {
		opts = &UprobeOptions{}
	}

	offset, err := ex.address(symbol, opts)
	if err != nil {
		return nil, err
	}

	pid := opts.PID
	if pid == 0 {
		pid = perfAllThreads
	}

	if opts.RefCtrOffset != 0 {
		if err := haveRefCtrOffsetPMU(); err != nil {
			return nil, fmt.Errorf("uprobe ref_ctr_offset: %w", err)
		}
	}

	args := tracefs.ProbeArgs{
		Type:         tracefs.Uprobe,
		Symbol:       symbol,
		Path:         ex.path,
		Offset:       offset,
		Pid:          pid,
		RefCtrOffset: opts.RefCtrOffset,
		Ret:          ret,
		Cookie:       opts.Cookie,
		Group:        opts.TraceFSPrefix,
	}

	// Use uprobe PMU if the kernel has it available.
	tp, err := pmuProbe(args)
	if err == nil {
		return tp, nil
	}
	if err != nil && !errors.Is(err, ErrNotSupported) {
		return nil, fmt.Errorf("creating perf_uprobe PMU: %w", err)
	}

	// Use tracefs if uprobe PMU is missing.
	tp, err = tracefsProbe(args)
	if err != nil {
		return nil, fmt.Errorf("creating trace event '%s:%s' in tracefs: %w", ex.path, symbol, err)
	}

	return tp, nil
}
54
vendor/github.com/cilium/ebpf/link/xdp.go
generated
vendored
@@ -1,54 +0,0 @@
package link

import (
	"fmt"

	"github.com/cilium/ebpf"
)

// XDPAttachFlags represents how XDP program will be attached to interface.
type XDPAttachFlags uint32

const (
	// XDPGenericMode (SKB) links XDP BPF program for drivers which do
	// not yet support native XDP.
	XDPGenericMode XDPAttachFlags = 1 << (iota + 1)
	// XDPDriverMode links XDP BPF program into the driver's receive path.
	XDPDriverMode
	// XDPOffloadMode offloads the entire XDP BPF program into hardware.
	XDPOffloadMode
)

type XDPOptions struct {
	// Program must be an XDP BPF program.
	Program *ebpf.Program

	// Interface is the interface index to attach program to.
	Interface int

	// Flags is one of XDPAttachFlags (optional).
	//
	// Only one XDP mode should be set, without flag defaults
	// to driver/generic mode (best effort).
	Flags XDPAttachFlags
}

// AttachXDP links an XDP BPF program to an XDP hook.
func AttachXDP(opts XDPOptions) (Link, error) {
	if t := opts.Program.Type(); t != ebpf.XDP {
		return nil, fmt.Errorf("invalid program type %s, expected XDP", t)
	}

	if opts.Interface < 1 {
		return nil, fmt.Errorf("invalid interface index: %d", opts.Interface)
	}

	rawLink, err := AttachRawLink(RawLinkOptions{
		Program: opts.Program,
		Attach:  ebpf.AttachXDP,
		Target:  opts.Interface,
		Flags:   uint32(opts.Flags),
	})

	return rawLink, err
}
391
vendor/github.com/cilium/ebpf/linker.go
generated
vendored
@@ -1,391 +0,0 @@
package ebpf

import (
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math"

	"github.com/cilium/ebpf/asm"
	"github.com/cilium/ebpf/btf"
	"github.com/cilium/ebpf/internal"
)

// handles stores handle objects to avoid gc cleanup
type handles []*btf.Handle

func (hs *handles) add(h *btf.Handle) (int, error) {
	if h == nil {
		return 0, nil
	}

	if len(*hs) == math.MaxInt16 {
		return 0, fmt.Errorf("can't add more than %d module FDs to fdArray", math.MaxInt16)
	}

	*hs = append(*hs, h)

	// return length of slice so that indexes start at 1
	return len(*hs), nil
}

func (hs handles) fdArray() []int32 {
	// first element of fda is reserved as no module can be indexed with 0
	fda := []int32{0}
	for _, h := range hs {
		fda = append(fda, int32(h.FD()))
	}

	return fda
}

func (hs handles) close() {
	for _, h := range hs {
		h.Close()
	}
}
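The 1-based indexing contract between `handles.add` and `handles.fdArray` can be sketched without real BTF handles; in this illustrative version plain FDs stand in for `*btf.Handle`:

```go
package main

import "fmt"

// fdArray mirrors handles.fdArray: index 0 is reserved so that module
// BTF FDs are referenced with 1-based indexes, and index 0 can mean
// "no module" (i.e. vmlinux BTF).
func fdArray(fds []int32) []int32 {
	fda := []int32{0}
	return append(fda, fds...)
}

func main() {
	// Two module FDs land at indexes 1 and 2; index 0 stays reserved.
	fmt.Println(fdArray([]int32{7, 9}))
}
```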
// splitSymbols splits insns into subsections delimited by Symbol Instructions.
// insns cannot be empty and must start with a Symbol Instruction.
//
// The resulting map is indexed by Symbol name.
func splitSymbols(insns asm.Instructions) (map[string]asm.Instructions, error) {
	if len(insns) == 0 {
		return nil, errors.New("insns is empty")
	}

	if insns[0].Symbol() == "" {
		return nil, errors.New("insns must start with a Symbol")
	}

	var name string
	progs := make(map[string]asm.Instructions)
	for _, ins := range insns {
		if sym := ins.Symbol(); sym != "" {
			if progs[sym] != nil {
				return nil, fmt.Errorf("insns contains duplicate Symbol %s", sym)
			}
			name = sym
		}

		progs[name] = append(progs[name], ins)
	}

	return progs, nil
}
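The splitting logic above can be sketched with a simplified instruction type standing in for `asm.Instructions` (the `ins` struct and field names here are illustrative, not part of the library):

```go
package main

import (
	"errors"
	"fmt"
)

// ins models an instruction that may carry a symbol (function start) label.
type ins struct {
	sym string // non-empty on the first instruction of a function
	op  string
}

// splitSymbols groups a flat instruction stream into per-function slices,
// keyed by symbol name, the same way the ELF loader splits a section.
func splitSymbols(insns []ins) (map[string][]ins, error) {
	if len(insns) == 0 || insns[0].sym == "" {
		return nil, errors.New("insns must start with a symbol")
	}
	var name string
	progs := make(map[string][]ins)
	for _, i := range insns {
		if i.sym != "" {
			if progs[i.sym] != nil {
				return nil, fmt.Errorf("duplicate symbol %s", i.sym)
			}
			name = i.sym // all following instructions belong to this function
		}
		progs[name] = append(progs[name], i)
	}
	return progs, nil
}

func main() {
	progs, err := splitSymbols([]ins{
		{sym: "main", op: "mov"}, {op: "call"},
		{sym: "helper", op: "add"},
	})
	fmt.Println(len(progs["main"]), len(progs["helper"]), err)
}
```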
// The linker is responsible for resolving bpf-to-bpf calls between programs
// within an ELF. Each BPF program must be a self-contained binary blob,
// so when an instruction in one ELF program section wants to jump to
// a function in another, the linker needs to pull in the bytecode
// (and BTF info) of the target function and concatenate the instruction
// streams.
//
// Later on in the pipeline, all call sites are fixed up with relative jumps
// within this newly-created instruction stream to then finally hand off to
// the kernel with BPF_PROG_LOAD.
//
// Each function is denoted by an ELF symbol and the compiler takes care of
// register setup before each jump instruction.

// hasFunctionReferences returns true if insns contains one or more bpf2bpf
// function references.
func hasFunctionReferences(insns asm.Instructions) bool {
	for _, i := range insns {
		if i.IsFunctionReference() {
			return true
		}
	}
	return false
}

// applyRelocations collects and applies any CO-RE relocations in insns.
//
// Passing a nil target will relocate against the running kernel. insns are
// modified in place.
func applyRelocations(insns asm.Instructions, target *btf.Spec, bo binary.ByteOrder) error {
	var relos []*btf.CORERelocation
	var reloInsns []*asm.Instruction
	iter := insns.Iterate()
	for iter.Next() {
		if relo := btf.CORERelocationMetadata(iter.Ins); relo != nil {
			relos = append(relos, relo)
			reloInsns = append(reloInsns, iter.Ins)
		}
	}

	if len(relos) == 0 {
		return nil
	}

	if bo == nil {
		bo = internal.NativeEndian
	}

	fixups, err := btf.CORERelocate(relos, target, bo)
	if err != nil {
		return err
	}

	for i, fixup := range fixups {
		if err := fixup.Apply(reloInsns[i]); err != nil {
			return fmt.Errorf("fixup for %s: %w", relos[i], err)
		}
	}

	return nil
}

// flattenPrograms resolves bpf-to-bpf calls for a set of programs.
//
// Links all programs in names by modifying their ProgramSpec in progs.
func flattenPrograms(progs map[string]*ProgramSpec, names []string) {
	// Pre-calculate all function references.
	refs := make(map[*ProgramSpec][]string)
	for _, prog := range progs {
		refs[prog] = prog.Instructions.FunctionReferences()
	}

	// Create a flattened instruction stream, but don't modify progs yet to
	// avoid linking multiple times.
	flattened := make([]asm.Instructions, 0, len(names))
	for _, name := range names {
		flattened = append(flattened, flattenInstructions(name, progs, refs))
	}

	// Finally, assign the flattened instructions.
	for i, name := range names {
		progs[name].Instructions = flattened[i]
	}
}

// flattenInstructions resolves bpf-to-bpf calls for a single program.
//
// Flattens the instructions of prog by concatenating the instructions of all
// direct and indirect dependencies.
//
// progs contains all referenceable programs, while refs contain the direct
// dependencies of each program.
func flattenInstructions(name string, progs map[string]*ProgramSpec, refs map[*ProgramSpec][]string) asm.Instructions {
	prog := progs[name]

	insns := make(asm.Instructions, len(prog.Instructions))
	copy(insns, prog.Instructions)

	// Add all direct references of prog to the list of to be linked programs.
	pending := make([]string, len(refs[prog]))
	copy(pending, refs[prog])

	// All references for which we've appended instructions.
	linked := make(map[string]bool)

	// Iterate all pending references. We can't use a range since pending is
	// modified in the body below.
	for len(pending) > 0 {
		var ref string
		ref, pending = pending[0], pending[1:]

		if linked[ref] {
			// We've already linked this ref, don't append instructions again.
			continue
		}

		progRef := progs[ref]
		if progRef == nil {
			// We don't have instructions that go with this reference. This
			// happens when calling extern functions.
			continue
		}

		insns = append(insns, progRef.Instructions...)
		linked[ref] = true

		// Make sure we link indirect references.
		pending = append(pending, refs[progRef]...)
	}

	return insns
}
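The worklist traversal in `flattenInstructions` can be sketched with plain strings standing in for `ProgramSpec` and `asm.Instructions` (names and types here are illustrative):

```go
package main

import "fmt"

// flatten mirrors flattenInstructions: start with a program's own
// instructions, then append each direct or indirect dependency exactly
// once, driven by a pending worklist and a "linked" set. References to
// unknown programs (extern functions) are skipped.
func flatten(name string, progs map[string][]string, refs map[string][]string) []string {
	insns := append([]string(nil), progs[name]...)
	pending := append([]string(nil), refs[name]...)
	linked := make(map[string]bool)
	for len(pending) > 0 {
		var ref string
		ref, pending = pending[0], pending[1:]
		if linked[ref] {
			continue // already appended
		}
		body, ok := progs[ref]
		if !ok {
			continue // extern function, no body to link
		}
		insns = append(insns, body...)
		linked[ref] = true
		pending = append(pending, refs[ref]...) // pull in indirect deps
	}
	return insns
}

func main() {
	progs := map[string][]string{
		"main": {"m1"}, "a": {"a1"}, "b": {"b1"},
	}
	// "a" references "b" and itself; the linked set stops the cycle.
	refs := map[string][]string{
		"main": {"a"}, "a": {"b", "a"},
	}
	fmt.Println(flatten("main", progs, refs)) // [m1 a1 b1]
}
```

The `linked` set is what makes this terminate on cyclic or repeated references: each function body is appended at most once, regardless of how many call sites reference it.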
// fixupAndValidate is called by the ELF reader right before marshaling the
// instruction stream. It performs last-minute adjustments to the program and
// runs some sanity checks before sending it off to the kernel.
func fixupAndValidate(insns asm.Instructions) error {
	iter := insns.Iterate()
	for iter.Next() {
		ins := iter.Ins

		// Map load was tagged with a Reference, but does not contain a Map pointer.
		needsMap := ins.Reference() != "" || ins.Metadata.Get(kconfigMetaKey{}) != nil
		if ins.IsLoadFromMap() && needsMap && ins.Map() == nil {
			return fmt.Errorf("instruction %d: %w", iter.Index, asm.ErrUnsatisfiedMapReference)
		}

		fixupProbeReadKernel(ins)
	}

	return nil
}

// fixupKfuncs loops over all instructions in search of kfunc calls.
// If at least one is found, the current kernel's BTF and module BTFs are
// searched to set Instruction.Constant and Instruction.Offset to the
// correct values.
func fixupKfuncs(insns asm.Instructions) (handles, error) {
	iter := insns.Iterate()
	for iter.Next() {
		ins := iter.Ins
		if ins.IsKfuncCall() {
			goto fixups
		}
	}

	return nil, nil

fixups:
	// only load the kernel spec if we found at least one kfunc call
	kernelSpec, err := btf.LoadKernelSpec()
	if err != nil {
		return nil, err
	}

	fdArray := make(handles, 0)
	for {
		ins := iter.Ins

		if !ins.IsKfuncCall() {
			if !iter.Next() {
				// break loop if this was the last instruction in the stream.
				break
			}
			continue
		}

		// check meta, if no meta return err
		kfm, _ := ins.Metadata.Get(kfuncMeta{}).(*btf.Func)
		if kfm == nil {
			return nil, fmt.Errorf("kfunc call has no kfuncMeta")
		}

		target := btf.Type((*btf.Func)(nil))
		spec, module, err := findTargetInKernel(kernelSpec, kfm.Name, &target)
		if errors.Is(err, btf.ErrNotFound) {
			return nil, fmt.Errorf("kfunc %q: %w", kfm.Name, ErrNotSupported)
		}
		if err != nil {
			return nil, err
		}

		if err := btf.CheckTypeCompatibility(kfm.Type, target.(*btf.Func).Type); err != nil {
			return nil, &incompatibleKfuncError{kfm.Name, err}
		}

		id, err := spec.TypeID(target)
		if err != nil {
			return nil, err
		}

		idx, err := fdArray.add(module)
		if err != nil {
			return nil, err
		}

		ins.Constant = int64(id)
		ins.Offset = int16(idx)

		if !iter.Next() {
			break
		}
	}

	return fdArray, nil
}

type incompatibleKfuncError struct {
	name string
	err  error
}

func (ike *incompatibleKfuncError) Error() string {
	return fmt.Sprintf("kfunc %q: %s", ike.name, ike.err)
}

// fixupProbeReadKernel replaces calls to bpf_probe_read_{kernel,user}(_str)
// with bpf_probe_read(_str) on kernels that don't support it yet.
func fixupProbeReadKernel(ins *asm.Instruction) {
	if !ins.IsBuiltinCall() {
		return
	}

	// Kernel supports bpf_probe_read_kernel, nothing to do.
	if haveProbeReadKernel() == nil {
		return
	}

	switch asm.BuiltinFunc(ins.Constant) {
	case asm.FnProbeReadKernel, asm.FnProbeReadUser:
		ins.Constant = int64(asm.FnProbeRead)
	case asm.FnProbeReadKernelStr, asm.FnProbeReadUserStr:
		ins.Constant = int64(asm.FnProbeReadStr)
	}
}

// resolveKconfigReferences creates and populates a .kconfig map if necessary.
//
// Returns a nil Map and no error if no references exist.
func resolveKconfigReferences(insns asm.Instructions) (_ *Map, err error) {
	closeOnError := func(c io.Closer) {
		if err != nil {
			c.Close()
		}
	}

	var spec *MapSpec
	iter := insns.Iterate()
	for iter.Next() {
		meta, _ := iter.Ins.Metadata.Get(kconfigMetaKey{}).(*kconfigMeta)
		if meta != nil {
			spec = meta.Map
			break
		}
	}

	if spec == nil {
		return nil, nil
	}

	cpy := spec.Copy()
	if err := resolveKconfig(cpy); err != nil {
		return nil, err
	}

	kconfig, err := NewMap(cpy)
	if err != nil {
		return nil, err
	}
	defer closeOnError(kconfig)

	// Resolve all instructions which load from .kconfig map with actual map
	// and offset inside it.
	iter = insns.Iterate()
	for iter.Next() {
		meta, _ := iter.Ins.Metadata.Get(kconfigMetaKey{}).(*kconfigMeta)
		if meta == nil {
			continue
		}

		if meta.Map != spec {
			return nil, fmt.Errorf("instruction %d: reference to multiple .kconfig maps is not allowed", iter.Index)
		}

		if err := iter.Ins.AssociateMap(kconfig); err != nil {
			return nil, fmt.Errorf("instruction %d: %w", iter.Index, err)
		}

		// Encode a map read at the offset of the var in the datasec.
		iter.Ins.Constant = int64(uint64(meta.Offset) << 32)
		iter.Ins.Metadata.Set(kconfigMetaKey{}, nil)
	}

	return kconfig, nil
}
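The `meta.Offset << 32` encoding used by `resolveKconfigReferences` (the variable's offset inside the .kconfig datasec packed into the upper 32 bits of the 64-bit load constant) can be checked in isolation:

```go
package main

import "fmt"

// kconfigLoadConstant mirrors how resolveKconfigReferences encodes a
// map read: the offset of the variable inside the .kconfig datasec is
// placed in the upper 32 bits of the instruction's 64-bit constant.
func kconfigLoadConstant(offset uint32) int64 {
	return int64(uint64(offset) << 32)
}

func main() {
	c := kconfigLoadConstant(16)
	// Recover the offset by shifting the constant back down.
	fmt.Printf("%#x offset=%d\n", c, uint32(uint64(c)>>32))
}
```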
1478
vendor/github.com/cilium/ebpf/map.go
generated
vendored
File diff suppressed because it is too large
Load Diff
249
vendor/github.com/cilium/ebpf/marshalers.go
generated
vendored
@@ -1,249 +0,0 @@
package ebpf

import (
	"bytes"
	"encoding"
	"encoding/binary"
	"errors"
	"fmt"
	"reflect"
	"runtime"
	"sync"
	"unsafe"

	"github.com/cilium/ebpf/internal"
	"github.com/cilium/ebpf/internal/sys"
)

// marshalPtr converts an arbitrary value into a pointer suitable
// to be passed to the kernel.
//
// As an optimization, it returns the original value if it is an
// unsafe.Pointer.
func marshalPtr(data interface{}, length int) (sys.Pointer, error) {
	if ptr, ok := data.(unsafe.Pointer); ok {
		return sys.NewPointer(ptr), nil
	}

	buf, err := marshalBytes(data, length)
	if err != nil {
		return sys.Pointer{}, err
	}

	return sys.NewSlicePointer(buf), nil
}

// marshalBytes converts an arbitrary value into a byte buffer.
//
// Prefer using Map.marshalKey and Map.marshalValue if possible, since
// those have special cases that allow more types to be encoded.
//
// Returns an error if the given value isn't representable in exactly
// length bytes.
func marshalBytes(data interface{}, length int) (buf []byte, err error) {
	if data == nil {
		return nil, errors.New("can't marshal a nil value")
	}

	switch value := data.(type) {
	case encoding.BinaryMarshaler:
		buf, err = value.MarshalBinary()
	case string:
		buf = []byte(value)
	case []byte:
		buf = value
	case unsafe.Pointer:
		err = errors.New("can't marshal from unsafe.Pointer")
	case Map, *Map, Program, *Program:
		err = fmt.Errorf("can't marshal %T", value)
	default:
		wr := internal.NewBuffer(make([]byte, 0, length))
		defer internal.PutBuffer(wr)

		err = binary.Write(wr, internal.NativeEndian, value)
		if err != nil {
			err = fmt.Errorf("encoding %T: %v", value, err)
		}
		buf = wr.Bytes()
	}
	if err != nil {
		return nil, err
	}

	if len(buf) != length {
		return nil, fmt.Errorf("%T doesn't marshal to %d bytes", data, length)
	}
	return buf, nil
}

func makeBuffer(dst interface{}, length int) (sys.Pointer, []byte) {
	if ptr, ok := dst.(unsafe.Pointer); ok {
		return sys.NewPointer(ptr), nil
	}

	buf := make([]byte, length)
	return sys.NewSlicePointer(buf), buf
}

var bytesReaderPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Reader)
	},
}

// unmarshalBytes converts a byte buffer into an arbitrary value.
//
// Prefer using Map.unmarshalKey and Map.unmarshalValue if possible, since
// those have special cases that allow more types to be encoded.
//
// The common int32 and int64 types are directly handled to avoid
// unnecessary heap allocations as happening in the default case.
func unmarshalBytes(data interface{}, buf []byte) error {
	switch value := data.(type) {
	case unsafe.Pointer:
		dst := unsafe.Slice((*byte)(value), len(buf))
		copy(dst, buf)
		runtime.KeepAlive(value)
		return nil
	case Map, *Map, Program, *Program:
		return fmt.Errorf("can't unmarshal into %T", value)
	case encoding.BinaryUnmarshaler:
		return value.UnmarshalBinary(buf)
	case *string:
		*value = string(buf)
		return nil
	case *[]byte:
		*value = buf
		return nil
	case *int32:
		if len(buf) < 4 {
			return errors.New("int32 requires 4 bytes")
		}
		*value = int32(internal.NativeEndian.Uint32(buf))
		return nil
	case *uint32:
		if len(buf) < 4 {
			return errors.New("uint32 requires 4 bytes")
		}
		*value = internal.NativeEndian.Uint32(buf)
		return nil
	case *int64:
		if len(buf) < 8 {
			return errors.New("int64 requires 8 bytes")
		}
		*value = int64(internal.NativeEndian.Uint64(buf))
		return nil
	case *uint64:
		if len(buf) < 8 {
			return errors.New("uint64 requires 8 bytes")
		}
		*value = internal.NativeEndian.Uint64(buf)
		return nil
	case string:
		return errors.New("require pointer to string")
	case []byte:
		return errors.New("require pointer to []byte")
	default:
		rd := bytesReaderPool.Get().(*bytes.Reader)
		rd.Reset(buf)
		defer bytesReaderPool.Put(rd)
		if err := binary.Read(rd, internal.NativeEndian, value); err != nil {
			return fmt.Errorf("decoding %T: %v", value, err)
		}
		return nil
	}
}
// marshalPerCPUValue encodes a slice containing one value per
// possible CPU into a buffer of bytes.
//
// Values are initialized to zero if the slice has fewer elements than CPUs.
//
// slice must have a type like []elementType.
func marshalPerCPUValue(slice interface{}, elemLength int) (sys.Pointer, error) {
	sliceType := reflect.TypeOf(slice)
	if sliceType.Kind() != reflect.Slice {
		return sys.Pointer{}, errors.New("per-CPU value requires slice")
	}

	possibleCPUs, err := internal.PossibleCPUs()
	if err != nil {
		return sys.Pointer{}, err
	}

	sliceValue := reflect.ValueOf(slice)
	sliceLen := sliceValue.Len()
	if sliceLen > possibleCPUs {
		return sys.Pointer{}, fmt.Errorf("per-CPU value exceeds number of CPUs")
	}

	alignedElemLength := internal.Align(elemLength, 8)
	buf := make([]byte, alignedElemLength*possibleCPUs)

	for i := 0; i < sliceLen; i++ {
		elem := sliceValue.Index(i).Interface()
		elemBytes, err := marshalBytes(elem, elemLength)
		if err != nil {
			return sys.Pointer{}, err
		}

		offset := i * alignedElemLength
		copy(buf[offset:offset+elemLength], elemBytes)
	}

	return sys.NewSlicePointer(buf), nil
}
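The buffer sizing in `marshalPerCPUValue` pads every element to an 8-byte boundary, one slot per possible CPU. A standalone sketch of that arithmetic (assuming `internal.Align` is the usual round-up-to-multiple helper):

```go
package main

import "fmt"

// align rounds n up to the next multiple of alignment, the way
// internal.Align is used above.
func align(n, alignment int) int {
	return (n + alignment - 1) / alignment * alignment
}

// perCPUBufferSize computes the buffer passed to the kernel for a
// per-CPU value: one 8-byte-aligned slot per possible CPU.
func perCPUBufferSize(elemLength, possibleCPUs int) int {
	return align(elemLength, 8) * possibleCPUs
}

func main() {
	// A 5-byte value on a machine with 4 possible CPUs occupies
	// four 8-byte slots.
	fmt.Println(align(5, 8), perCPUBufferSize(5, 4))
}
```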
// unmarshalPerCPUValue decodes a buffer into a slice containing one value per
// possible CPU.
//
// slicePtr must have a type like *[]elementType
func unmarshalPerCPUValue(slicePtr interface{}, elemLength int, buf []byte) error {
	slicePtrType := reflect.TypeOf(slicePtr)
	if slicePtrType.Kind() != reflect.Ptr || slicePtrType.Elem().Kind() != reflect.Slice {
		return fmt.Errorf("per-cpu value requires pointer to slice")
	}

	possibleCPUs, err := internal.PossibleCPUs()
	if err != nil {
		return err
	}

	sliceType := slicePtrType.Elem()
	slice := reflect.MakeSlice(sliceType, possibleCPUs, possibleCPUs)

	sliceElemType := sliceType.Elem()
	sliceElemIsPointer := sliceElemType.Kind() == reflect.Ptr
	if sliceElemIsPointer {
		sliceElemType = sliceElemType.Elem()
	}

	step := len(buf) / possibleCPUs
	if step < elemLength {
		return fmt.Errorf("per-cpu element length is larger than available data")
	}
	for i := 0; i < possibleCPUs; i++ {
		var elem interface{}
		if sliceElemIsPointer {
			newElem := reflect.New(sliceElemType)
			slice.Index(i).Set(newElem)
			elem = newElem.Interface()
		} else {
			elem = slice.Index(i).Addr().Interface()
		}

		// Make a copy, since unmarshal can hold on to itemBytes
		elemBytes := make([]byte, elemLength)
		copy(elemBytes, buf[:elemLength])

		err := unmarshalBytes(elem, elemBytes)
		if err != nil {
			return fmt.Errorf("cpu %d: %w", i, err)
		}

		buf = buf[step:]
	}

	reflect.ValueOf(slicePtr).Elem().Set(slice)
	return nil
}
1026
vendor/github.com/cilium/ebpf/prog.go
generated
vendored
File diff suppressed because it is too large
Load Diff
152
vendor/github.com/cilium/ebpf/run-tests.sh
generated
vendored
@@ -1,152 +0,0 @@
#!/usr/bin/env bash
# Test the current package under a different kernel.
# Requires virtme and qemu to be installed.
# Examples:
#     Run all tests on a 5.4 kernel
#     $ ./run-tests.sh 5.4
#     Run a subset of tests:
#     $ ./run-tests.sh 5.4 ./link
#     Run using a local kernel image
#     $ ./run-tests.sh /path/to/bzImage

set -euo pipefail

script="$(realpath "$0")"
readonly script

# This script is a bit like a Matryoshka doll since it keeps re-executing itself
# in various different contexts:
#
#   1. invoked by the user like run-tests.sh 5.4
#   2. invoked by go test like run-tests.sh --exec-vm
#   3. invoked by init in the vm like run-tests.sh --exec-test
#
# This allows us to use all available CPU on the host machine to compile our
# code, and then only use the VM to execute the test. This is because the VM
# is usually slower at compiling than the host.
if [[ "${1:-}" = "--exec-vm" ]]; then
  shift

  input="$1"
  shift

  # Use sudo if /dev/kvm isn't accessible by the current user.
  sudo=""
  if [[ ! -r /dev/kvm || ! -w /dev/kvm ]]; then
    sudo="sudo"
  fi
  readonly sudo

  testdir="$(dirname "$1")"
  output="$(mktemp -d)"
  printf -v cmd "%q " "$@"

  if [[ "$(stat -c '%t:%T' -L /proc/$$/fd/0)" == "1:3" ]]; then
    # stdin is /dev/null, which doesn't play well with qemu. Use a fifo as a
    # blocking substitute.
    mkfifo "${output}/fake-stdin"
    # Open for reading and writing to avoid blocking.
    exec 0<> "${output}/fake-stdin"
    rm "${output}/fake-stdin"
  fi

  for ((i = 0; i < 3; i++)); do
    if ! $sudo virtme-run --kimg "${input}/bzImage" --memory 768M --pwd \
      --rwdir="${testdir}=${testdir}" \
      --rodir=/run/input="${input}" \
      --rwdir=/run/output="${output}" \
      --script-sh "PATH=\"$PATH\" CI_MAX_KERNEL_VERSION="${CI_MAX_KERNEL_VERSION:-}" \"$script\" --exec-test $cmd" \
      --kopt possible_cpus=2; then # need at least two CPUs for some tests
      exit 23
    fi

    if [[ -e "${output}/status" ]]; then
      break
    fi

    if [[ -v CI ]]; then
      echo "Retrying test run due to qemu crash"
      continue
    fi

    exit 42
  done

  rc=$(<"${output}/status")
  $sudo rm -r "$output"
  exit $rc
elif [[ "${1:-}" = "--exec-test" ]]; then
  shift

  mount -t bpf bpf /sys/fs/bpf
  mount -t tracefs tracefs /sys/kernel/debug/tracing

  if [[ -d "/run/input/bpf" ]]; then
    export KERNEL_SELFTESTS="/run/input/bpf"
  fi

  if [[ -f "/run/input/bpf/bpf_testmod/bpf_testmod.ko" ]]; then
    insmod "/run/input/bpf/bpf_testmod/bpf_testmod.ko"
  fi

  dmesg --clear
  rc=0
  "$@" || rc=$?
  dmesg
  echo $rc > "/run/output/status"
  exit $rc # this return code is "swallowed" by qemu
fi

if [[ -z "${1:-}" ]]; then
  echo "Expecting kernel version or path as first argument"
  exit 1
fi

readonly input="$(mktemp -d)"
readonly tmp_dir="${TMPDIR:-/tmp}"

fetch() {
  echo Fetching "${1}"
  pushd "${tmp_dir}" > /dev/null
  curl --no-progress-meter -L -O --fail --etag-compare "${1}.etag" --etag-save "${1}.etag" "https://github.com/cilium/ci-kernels/raw/${BRANCH:-master}/${1}"
  local ret=$?
  popd > /dev/null
  return $ret
}

if [[ -f "${1}" ]]; then
  readonly kernel="${1}"
  cp "${1}" "${input}/bzImage"
else
  # LINUX_VERSION_CODE test compares this to discovered value.
  export KERNEL_VERSION="${1}"

  readonly kernel="linux-${1}.bz"
  readonly selftests="linux-${1}-selftests-bpf.tgz"

  fetch "${kernel}"
  cp "${tmp_dir}/${kernel}" "${input}/bzImage"

  if fetch "${selftests}"; then
    echo "Decompressing selftests"
    mkdir "${input}/bpf"
    tar --strip-components=4 -xf "${tmp_dir}/${selftests}" -C "${input}/bpf"
  else
    echo "No selftests found, disabling"
  fi
fi
shift

args=(-short -coverpkg=./... -coverprofile=coverage.out -count 1 ./...)
if (( $# > 0 )); then
  args=("$@")
fi

export GOFLAGS=-mod=readonly
export CGO_ENABLED=0

echo Testing on "${kernel}"
go test -exec "$script --exec-vm $input" "${args[@]}"
echo "Test successful on ${kernel}"

rm -r "${input}"
Some files were not shown because too many files have changed in this diff