3 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Andre Courchesne | 9b49f798c0 | Update changelog, bump to v0.0.2 | 2026-02-12 10:19:48 -05:00 |
| Andre Courchesne | 2e5afdc7ec | Merge pull request #4 from Telecominfraproject/ucentral-support (Add Ucentral support) | 2026-02-12 10:15:54 -05:00 |
| NavneetBarwal-RA | a8a7535a6f | Add UCentral support and documentation | 2026-02-12 10:15:19 -05:00 |
9 changed files with 1213 additions and 25 deletions


@@ -3,6 +3,10 @@ All notable changes to this project will be documented in this file.
NOTE: the project follows [Semantic Versioning](http://semver.org/).
## v0.0.2 - February 12th, 2026
- Add Ucentral support and documentation
## v0.0.1 - December 17th 2025
- First release


@@ -19,11 +19,11 @@ The result of this will be an ISO in the project working folder.
- Boot on the ISO, once the install is completed the server will power-off
- Power back the server
- Login to the Linux host with username `olgadm` and password `olgadm`
- Edit `/opt/staging_scripts/setup-config` and adgst the network interface names and if required the VyOS VM sizing parameters
- You might need to adjust the VyOS rolling release path. Reference: https://github.com/vyos/vyos-nightly-build/releases
- Edit `/opt/staging_scripts/setup-config` and adjust the network interface names and if required the VyOS VM sizing parameters
- If you will be using the "Standalone mode", you might need to adjust the VyOS rolling release path. Reference: https://github.com/vyos/vyos-nightly-build/releases
- Run the setup script:
- `sudo /opt/staging_scripts/setup-vyos-bridge.sh` to use the network bridge method
- `sudo /opt/staging_scripts/setup-vyos-hw-passthru.sh` to use the hardware passthru for the network interfaces (WIP)
- `sudo /opt/staging_scripts/setup-vyos-hw-passthru.sh` to use the hardware passthru for the network interfaces (WIP, not tested)
- Reboot the host
- Connect to the VyOS console with `virsh console vyos`
- Login with username `vyos` and password `vyos`
@@ -32,7 +32,44 @@ The result of this will be an ISO in the project working folder.
- Once completed, type `reboot` to reboot the VM
- For some reason the VyOS VM does not reboot after this first `reboot` command. You must restart it manually with `virsh start vyos`
### Load the initial factory default configuration
## Configuration options
At this time you can either load a default configuration and use VyOS in a "standalone" mode or have it connected to an OpenWifi Cloud SDK instance.
### OpenWifi Cloud SDK mode
### Setup of Ucentral-Client Container
- SSH to the OLG Ubuntu host
- Run `sudo /opt/staging_scripts/ucentral-setup.sh setup` to set up the ucentral-client container
- You can also use the following parameters for the `ucentral-setup.sh` script:
- `shell`: To get shell access to the ucentral container.
- `cleanup`: To clean up the setup.
- Copy your certificates to the OLG host and then into the container at `/etc/ucentral` so the client can work with your cloud controller.
- `sudo docker cp cert.pem ucentral-olg:/etc/ucentral/operational.pem`
- `sudo docker cp cas.pem ucentral-olg:/etc/ucentral/operational.ca`
- `sudo docker cp key.pem ucentral-olg:/etc/ucentral/`
- Run `sudo /opt/staging_scripts/ucentral-setup.sh shell` to get shell access to the ucentral container and perform the following tasks:
- Modify `/etc/ucentral/vyos-info.json` and set the `host` value to the IP address assigned to the VyOS VM's br-wan interface.
- Start the ucentral-client:
- Debug mode: `SERIALNUM=my_olg_serial ; URL=my_cloudsdk_uri ; /usr/sbin/ucentral -S $SERIALNUM -s $URL -P 15002 -d`
- Daemonized mode: `SERIALNUM=my_olg_serial ; URL=my_cloudsdk_uri ; /usr/sbin/ucentral -S $SERIALNUM -s $URL -P 15002 -d &`
- For example, `SERIALNUM=74d4ddb965dc` (the serial for which the certificates were generated) and `URL=openwifi1.routerarchitects.com` (your OpenWifi Cloud SDK instance).
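The `host` edit in `/etc/ucentral/vyos-info.json` from the steps above can be scripted. A minimal sketch, assuming the file is a flat JSON object with a single `host` key; the helper name and example address are hypothetical, and for anything more complex a JSON-aware tool such as jq would be safer than sed:

```shell
#!/bin/sh
# Hypothetical helper: rewrite the "host" field of vyos-info.json in place.
# Assumes a flat JSON object with a single "host" key.
set_vyos_host() {
  conf="$1"
  new_host="$2"
  sed -i "s/\"host\" *: *\"[^\"]*\"/\"host\": \"$new_host\"/" "$conf"
}

# Example (inside the ucentral container):
# set_vyos_host /etc/ucentral/vyos-info.json 192.168.100.2
```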
> [!WARNING]
> The ucentral client must be started only after VyOS has started.
>
> There is a bit of a chicken-and-egg scenario if the OLG device has never been seen by the OpenWifi Cloud SDK instance: a blank configuration will be pushed to VyOS, and the connection with the ucentral client might break.
>
> At this time the order of execution should be the following if the OLG device was never seen by the OpenWifi Cloud SDK instance:
> - Stop the VyOS VM with `virsh shutdown vyos`
> - Start the ucentral-client
> - Populate a configuration in the OpenWifi Cloud SDK for the OLG device
> - Restart the VyOS VM with `virsh start vyos`
> - If the IP address of the VyOS VM changes, update the `host` value in `/etc/ucentral/vyos-info.json` inside the ucentral container and restart the ucentral client.
### Standalone mode
The factory configuration consists of:
@@ -61,7 +98,15 @@ Here is how to load this configuration:
exit
```
## Testes platforms
## Sample UCentral configurations
Here are some sample configurations:
| File path | Description |
|-----------|-------------|
| [mdu.json](sample-configurations/mdu.json) | MDU configuration with two VLAN networks; it also adheres to the OLG uCentral schema enhancement for the NAT object proposed in the [olg-ucentral-schema ra_proposal branch](https://github.com/Telecominfraproject/olg-ucentral-schema/tree/ra_proposal) |
## Tested platforms
- MinisForum MS-01
@@ -72,7 +117,7 @@ Here is how to load this configuration:
- Code
- Ask for review and get your changes merged
### Protip
## Protip
Use the Shipit CLI (https://gitlab.com/intello/shipit-cli-go)


@@ -4,7 +4,7 @@ insmod png
loadfont unicode
gfxpayload text
ISO_VERSION="v0.0.1"
ISO_VERSION="v0.0.2"
menuentry "Install Open LAN Gateway (ISO $ISO_VERSION)" {
linux /casper/vmlinuz autoinstall fsck.mode=skip ds=nocloud\;s=/cdrom/nocloud/ ipv6.disable=1 console=ttyS0,115200n8 console=tty0 network-config=disabled ---

iso-files/get-docker.sh (new file, 764 lines)

@@ -0,0 +1,764 @@
#!/bin/sh
set -e
# Docker Engine for Linux installation script.
#
# This script is intended as a convenient way to configure docker's package
# repositories and to install Docker Engine. This script is not recommended
# for production environments. Before running this script, make yourself familiar
# with potential risks and limitations, and refer to the installation manual
# at https://docs.docker.com/engine/install/ for alternative installation methods.
#
# The script:
#
# - Requires `root` or `sudo` privileges to run.
# - Attempts to detect your Linux distribution and version and configure your
# package management system for you.
# - Doesn't allow you to customize most installation parameters.
# - Installs dependencies and recommendations without asking for confirmation.
# - Installs the latest stable release (by default) of Docker CLI, Docker Engine,
# Docker Buildx, Docker Compose, containerd, and runc. When using this script
# to provision a machine, this may result in unexpected major version upgrades
# of these packages. Always test upgrades in a test environment before
# deploying to your production systems.
# - Isn't designed to upgrade an existing Docker installation. When using the
# script to update an existing installation, dependencies may not be updated
# to the expected version, resulting in outdated versions.
#
# Source code is available at https://github.com/docker/docker-install/
#
# Usage
# ==============================================================================
#
# To install the latest stable versions of Docker CLI, Docker Engine, and their
# dependencies:
#
# 1. download the script
#
# $ curl -fsSL https://get.docker.com -o install-docker.sh
#
# 2. verify the script's content
#
# $ cat install-docker.sh
#
# 3. run the script with --dry-run to verify the steps it executes
#
# $ sh install-docker.sh --dry-run
#
# 4. run the script either as root, or using sudo to perform the installation.
#
# $ sudo sh install-docker.sh
#
# Command-line options
# ==============================================================================
#
# --version <VERSION>
# Use the --version option to install a specific version, for example:
#
# $ sudo sh install-docker.sh --version 23.0
#
# --channel <stable|test>
#
# Use the --channel option to install from an alternative installation channel.
# The following example installs the latest versions from the "test" channel,
# which includes pre-releases (alpha, beta, rc):
#
# $ sudo sh install-docker.sh --channel test
#
# Alternatively, use the script at https://test.docker.com, which uses the test
# channel as default.
#
# --mirror <Aliyun|AzureChinaCloud>
#
# Use the --mirror option to install from a mirror supported by this script.
# Available mirrors are "Aliyun" (https://mirrors.aliyun.com/docker-ce), and
# "AzureChinaCloud" (https://mirror.azure.cn/docker-ce), for example:
#
# $ sudo sh install-docker.sh --mirror AzureChinaCloud
#
# --setup-repo
#
# Use the --setup-repo option to configure Docker's package repositories without
# installing Docker packages. This is useful when you want to add the repository
# but install packages separately:
#
# $ sudo sh install-docker.sh --setup-repo
#
# Automatic Service Start
#
# By default, this script automatically starts the Docker daemon and enables the docker
# service after installation if systemd is used as init.
#
# If you prefer to start the service manually, use the --no-autostart option:
#
# $ sudo sh install-docker.sh --no-autostart
#
# Note: Starting the service requires appropriate privileges to manage system services.
#
# ==============================================================================
# Git commit from https://github.com/docker/docker-install when
# the script was uploaded (Should only be modified by upload job):
SCRIPT_COMMIT_SHA="f381ee68b32e515bb4dc034b339266aff1fbc460"
# strip "v" prefix if present
VERSION="${VERSION#v}"
# The channel to install from:
# * stable
# * test
DEFAULT_CHANNEL_VALUE="stable"
if [ -z "$CHANNEL" ]; then
CHANNEL=$DEFAULT_CHANNEL_VALUE
fi
DEFAULT_DOWNLOAD_URL="https://download.docker.com"
if [ -z "$DOWNLOAD_URL" ]; then
DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URL
fi
DEFAULT_REPO_FILE="docker-ce.repo"
if [ -z "$REPO_FILE" ]; then
REPO_FILE="$DEFAULT_REPO_FILE"
# Automatically default to a staging repo for
# a staging download url (download-stage.docker.com)
case "$DOWNLOAD_URL" in
*-stage*) REPO_FILE="docker-ce-staging.repo";;
esac
fi
mirror=''
DRY_RUN=${DRY_RUN:-}
REPO_ONLY=${REPO_ONLY:-0}
NO_AUTOSTART=${NO_AUTOSTART:-0}
while [ $# -gt 0 ]; do
case "$1" in
--channel)
CHANNEL="$2"
shift
;;
--dry-run)
DRY_RUN=1
;;
--mirror)
mirror="$2"
shift
;;
--version)
VERSION="${2#v}"
shift
;;
--setup-repo)
REPO_ONLY=1
shift
;;
--no-autostart)
NO_AUTOSTART=1
;;
--*)
echo "Illegal option $1"
;;
esac
shift $(( $# > 0 ? 1 : 0 ))
done
case "$mirror" in
Aliyun)
DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce"
;;
AzureChinaCloud)
DOWNLOAD_URL="https://mirror.azure.cn/docker-ce"
;;
"")
;;
*)
>&2 echo "unknown mirror '$mirror': use either 'Aliyun', or 'AzureChinaCloud'."
exit 1
;;
esac
case "$CHANNEL" in
stable|test)
;;
*)
>&2 echo "unknown CHANNEL '$CHANNEL': use either stable or test."
exit 1
;;
esac
command_exists() {
command -v "$@" > /dev/null 2>&1
}
# version_gte checks if the version specified in $VERSION is at least the given
# SemVer (Maj.Minor[.Patch]) or CalVer (YY.MM) version. It returns 0 (success)
# if $VERSION is either unset (=latest) or newer than or equal to the specified
# version, and returns 1 (fail) otherwise.
#
# examples:
#
# VERSION=23.0
# version_gte 23.0 // 0 (success)
# version_gte 20.10 // 0 (success)
# version_gte 19.03 // 0 (success)
# version_gte 26.1 // 1 (fail)
version_gte() {
if [ -z "$VERSION" ]; then
return 0
fi
version_compare "$VERSION" "$1"
}
# version_compare compares two version strings (either SemVer (Major.Minor.Patch)
# or CalVer (YY.MM)). It returns 0 (success) if version A is newer than or equal
# to version B, and 1 (fail) otherwise. Patch releases and pre-releases
# (-alpha/-beta) are not taken into account.
#
# examples:
#
# version_compare 23.0.0 20.10 // 0 (success)
# version_compare 23.0 20.10 // 0 (success)
# version_compare 20.10 19.03 // 0 (success)
# version_compare 20.10 20.10 // 0 (success)
# version_compare 19.03 20.10 // 1 (fail)
version_compare() (
set +x
yy_a="$(echo "$1" | cut -d'.' -f1)"
yy_b="$(echo "$2" | cut -d'.' -f1)"
if [ "$yy_a" -lt "$yy_b" ]; then
return 1
fi
if [ "$yy_a" -gt "$yy_b" ]; then
return 0
fi
mm_a="$(echo "$1" | cut -d'.' -f2)"
mm_b="$(echo "$2" | cut -d'.' -f2)"
# trim leading zeros to accommodate CalVer
mm_a="${mm_a#0}"
mm_b="${mm_b#0}"
if [ "${mm_a:-0}" -lt "${mm_b:-0}" ]; then
return 1
fi
return 0
)
is_dry_run() {
if [ -z "$DRY_RUN" ]; then
return 1
else
return 0
fi
}
is_wsl() {
case "$(uname -r)" in
*microsoft* ) true ;; # WSL 2
*Microsoft* ) true ;; # WSL 1
* ) false;;
esac
}
is_darwin() {
case "$(uname -s)" in
*darwin* ) true ;;
*Darwin* ) true ;;
* ) false;;
esac
}
deprecation_notice() {
distro=$1
distro_version=$2
echo
printf "\033[91;1mDEPRECATION WARNING\033[0m\n"
printf " This Linux distribution (\033[1m%s %s\033[0m) reached end-of-life and is no longer supported by this script.\n" "$distro" "$distro_version"
echo " No updates or security fixes will be released for this distribution, and users are recommended"
echo " to upgrade to a currently maintained version of $distro."
echo
printf "Press \033[1mCtrl+C\033[0m now to abort this script, or wait for the installation to continue."
echo
sleep 10
}
get_distribution() {
lsb_dist=""
# Every system that we officially support has /etc/os-release
if [ -r /etc/os-release ]; then
lsb_dist="$(. /etc/os-release && echo "$ID")"
fi
# Returning an empty string here should be alright since the
# case statements don't act unless you provide an actual value
echo "$lsb_dist"
}
start_docker_daemon() {
# Use systemctl if available (for systemd-based systems)
if command_exists systemctl; then
is_dry_run || >&2 echo "Using systemd to manage Docker service"
if (
is_dry_run || set -x
$sh_c systemctl enable --now docker.service 2>/dev/null
); then
is_dry_run || echo "INFO: Docker daemon enabled and started" >&2
else
is_dry_run || echo "WARNING: unable to enable the docker service" >&2
fi
else
# No service management available (container environment)
if ! is_dry_run; then
>&2 echo "Note: Running in a container environment without service management"
>&2 echo "Docker daemon cannot be started automatically in this environment"
>&2 echo "The Docker packages have been installed successfully"
fi
fi
>&2 echo
}
echo_docker_as_nonroot() {
if is_dry_run; then
return
fi
if command_exists docker && [ -e /var/run/docker.sock ]; then
(
set -x
$sh_c 'docker version'
) || true
fi
# intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output
echo
echo "================================================================================"
echo
if version_gte "20.10"; then
echo "To run Docker as a non-privileged user, consider setting up the"
echo "Docker daemon in rootless mode for your user:"
echo
echo " dockerd-rootless-setuptool.sh install"
echo
echo "Visit https://docs.docker.com/go/rootless/ to learn about rootless mode."
echo
fi
echo
echo "To run the Docker daemon as a fully privileged service, but granting non-root"
echo "users access, refer to https://docs.docker.com/go/daemon-access/"
echo
echo "WARNING: Access to the remote API on a privileged Docker daemon is equivalent"
echo " to root access on the host. Refer to the 'Docker daemon attack surface'"
echo " documentation for details: https://docs.docker.com/go/attack-surface/"
echo
echo "================================================================================"
echo
}
# Check if this is a forked Linux distro
check_forked() {
# Check for lsb_release command existence, it usually exists in forked distros
if command_exists lsb_release; then
# Check if the `-u` option is supported
set +e
lsb_release -a -u > /dev/null 2>&1
lsb_release_exit_code=$?
set -e
# Check if the command has exited successfully, it means we're in a forked distro
if [ "$lsb_release_exit_code" = "0" ]; then
# Print info about current distro
cat <<-EOF
You're using '$lsb_dist' version '$dist_version'.
EOF
# Get the upstream release info
lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]')
dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]')
# Print info about upstream distro
cat <<-EOF
Upstream release is '$lsb_dist' version '$dist_version'.
EOF
else
if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then
if [ "$lsb_dist" = "osmc" ]; then
# OSMC runs Raspbian
lsb_dist=raspbian
else
# We're Debian and don't even know it!
lsb_dist=debian
fi
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
13|14|forky)
dist_version="trixie"
;;
12)
dist_version="bookworm"
;;
11)
dist_version="bullseye"
;;
10)
dist_version="buster"
;;
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
esac
fi
fi
fi
}
do_install() {
echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA"
if command_exists docker; then
cat >&2 <<-'EOF'
Warning: the "docker" command appears to already exist on this system.
If you already have Docker installed, this script can cause trouble, which is
why we're displaying this warning and provide the opportunity to cancel the
installation.
If you installed the current Docker package using this script and are using it
again to update Docker, you can ignore this message, but be aware that the
script resets any custom changes in the deb and rpm repo configuration
files to match the parameters passed to the script.
You may press Ctrl+C now to abort this script.
EOF
( set -x; sleep 20 )
fi
user="$(id -un 2>/dev/null || true)"
sh_c='sh -c'
if [ "$user" != 'root' ]; then
if command_exists sudo; then
sh_c='sudo -E sh -c'
elif command_exists su; then
sh_c='su -c'
else
cat >&2 <<-'EOF'
Error: this installer needs the ability to run commands as root.
We are unable to find either "sudo" or "su" available to make this happen.
EOF
exit 1
fi
fi
if is_dry_run; then
sh_c="echo"
fi
# perform some very rudimentary platform detection
lsb_dist=$( get_distribution )
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
if is_wsl; then
echo
echo "WSL DETECTED: We recommend using Docker Desktop for Windows."
echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop/"
echo
cat >&2 <<-'EOF'
You may press Ctrl+C now to abort this script.
EOF
( set -x; sleep 20 )
fi
case "$lsb_dist" in
ubuntu)
if command_exists lsb_release; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian|raspbian)
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
13)
dist_version="trixie"
;;
12)
dist_version="bookworm"
;;
11)
dist_version="bullseye"
;;
10)
dist_version="buster"
;;
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
esac
;;
centos|rhel)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
*)
if command_exists lsb_release; then
dist_version="$(lsb_release --release | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
# Check if this is a forked Linux distro
check_forked
# Print deprecation warnings for distro versions that recently reached EOL,
# but may still be commonly used (especially LTS versions).
case "$lsb_dist.$dist_version" in
centos.8|centos.7|rhel.7)
deprecation_notice "$lsb_dist" "$dist_version"
;;
debian.buster|debian.stretch|debian.jessie)
deprecation_notice "$lsb_dist" "$dist_version"
;;
raspbian.buster|raspbian.stretch|raspbian.jessie)
deprecation_notice "$lsb_dist" "$dist_version"
;;
ubuntu.focal|ubuntu.bionic|ubuntu.xenial|ubuntu.trusty)
deprecation_notice "$lsb_dist" "$dist_version"
;;
ubuntu.oracular|ubuntu.mantic|ubuntu.lunar|ubuntu.kinetic|ubuntu.impish|ubuntu.hirsute|ubuntu.groovy|ubuntu.eoan|ubuntu.disco|ubuntu.cosmic)
deprecation_notice "$lsb_dist" "$dist_version"
;;
fedora.*)
if [ "$dist_version" -lt 41 ]; then
deprecation_notice "$lsb_dist" "$dist_version"
fi
;;
esac
# Run setup for each distro accordingly
case "$lsb_dist" in
ubuntu|debian|raspbian)
pre_reqs="ca-certificates curl"
apt_repo="deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL"
(
if ! is_dry_run; then
set -x
fi
$sh_c 'apt-get -qq update >/dev/null'
$sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pre_reqs >/dev/null"
$sh_c 'install -m 0755 -d /etc/apt/keyrings'
$sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" -o /etc/apt/keyrings/docker.asc"
$sh_c "chmod a+r /etc/apt/keyrings/docker.asc"
$sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list"
$sh_c 'apt-get -qq update >/dev/null'
)
if [ "$REPO_ONLY" = "1" ]; then
exit 0
fi
pkg_version=""
if [ -n "$VERSION" ]; then
if is_dry_run; then
echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
else
# Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel
pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/~ce~.*/g' | sed 's/-/.*/g')"
search_command="apt-cache madison docker-ce | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
pkg_version="$($sh_c "$search_command")"
echo "INFO: Searching repository for VERSION '$VERSION'"
echo "INFO: $search_command"
if [ -z "$pkg_version" ]; then
echo
echo "ERROR: '$VERSION' not found amongst apt-cache madison results"
echo
exit 1
fi
if version_gte "18.09"; then
search_command="apt-cache madison docker-ce-cli | grep '$pkg_pattern' | head -1 | awk '{\$1=\$1};1' | cut -d' ' -f 3"
echo "INFO: $search_command"
cli_pkg_version="=$($sh_c "$search_command")"
fi
pkg_version="=$pkg_version"
fi
fi
(
pkgs="docker-ce${pkg_version%=}"
if version_gte "18.09"; then
# older versions didn't ship the cli and containerd as separate packages
pkgs="$pkgs docker-ce-cli${cli_pkg_version%=} containerd.io"
fi
if version_gte "20.10"; then
pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version"
fi
if version_gte "23.0"; then
pkgs="$pkgs docker-buildx-plugin"
fi
if version_gte "28.2"; then
pkgs="$pkgs docker-model-plugin"
fi
if ! is_dry_run; then
set -x
fi
$sh_c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq install $pkgs >/dev/null"
)
if [ "$NO_AUTOSTART" != "1" ]; then
start_docker_daemon
fi
echo_docker_as_nonroot
exit 0
;;
centos|fedora|rhel)
if [ "$(uname -m)" = "s390x" ]; then
echo "Effective v27.5, please consult RHEL distro statement for s390x support."
exit 1
fi
repo_file_url="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE"
(
if ! is_dry_run; then
set -x
fi
if command_exists dnf5; then
$sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core"
$sh_c "dnf5 config-manager addrepo --overwrite --save-filename=docker-ce.repo --from-repofile='$repo_file_url'"
if [ "$CHANNEL" != "stable" ]; then
$sh_c "dnf5 config-manager setopt \"docker-ce-*.enabled=0\""
$sh_c "dnf5 config-manager setopt \"docker-ce-$CHANNEL.enabled=1\""
fi
$sh_c "dnf makecache"
elif command_exists dnf; then
$sh_c "dnf -y -q --setopt=install_weak_deps=False install dnf-plugins-core"
$sh_c "rm -f /etc/yum.repos.d/docker-ce.repo /etc/yum.repos.d/docker-ce-staging.repo"
$sh_c "dnf config-manager --add-repo $repo_file_url"
if [ "$CHANNEL" != "stable" ]; then
$sh_c "dnf config-manager --set-disabled \"docker-ce-*\""
$sh_c "dnf config-manager --set-enabled \"docker-ce-$CHANNEL\""
fi
$sh_c "dnf makecache"
else
$sh_c "yum -y -q install yum-utils"
$sh_c "rm -f /etc/yum.repos.d/docker-ce.repo /etc/yum.repos.d/docker-ce-staging.repo"
$sh_c "yum-config-manager --add-repo $repo_file_url"
if [ "$CHANNEL" != "stable" ]; then
$sh_c "yum-config-manager --disable \"docker-ce-*\""
$sh_c "yum-config-manager --enable \"docker-ce-$CHANNEL\""
fi
$sh_c "yum makecache"
fi
)
if [ "$REPO_ONLY" = "1" ]; then
exit 0
fi
pkg_version=""
if command_exists dnf; then
pkg_manager="dnf"
pkg_manager_flags="-y -q --best"
else
pkg_manager="yum"
pkg_manager_flags="-y -q"
fi
if [ -n "$VERSION" ]; then
if is_dry_run; then
echo "# WARNING: VERSION pinning is not supported in DRY_RUN"
else
if [ "$lsb_dist" = "fedora" ]; then
pkg_suffix="fc$dist_version"
else
pkg_suffix="el"
fi
pkg_pattern="$(echo "$VERSION" | sed 's/-ce-/\\\\.ce.*/g' | sed 's/-/.*/g').*$pkg_suffix"
search_command="$pkg_manager list --showduplicates docker-ce | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
pkg_version="$($sh_c "$search_command")"
echo "INFO: Searching repository for VERSION '$VERSION'"
echo "INFO: $search_command"
if [ -z "$pkg_version" ]; then
echo
echo "ERROR: '$VERSION' not found amongst $pkg_manager list results"
echo
exit 1
fi
if version_gte "18.09"; then
# older versions don't support a cli package
search_command="$pkg_manager list --showduplicates docker-ce-cli | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'"
cli_pkg_version="$($sh_c "$search_command" | cut -d':' -f 2)"
fi
# Cut out the epoch and prefix with a '-'
pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)"
fi
fi
(
pkgs="docker-ce$pkg_version"
if version_gte "18.09"; then
# older versions didn't ship the cli and containerd as separate packages
if [ -n "$cli_pkg_version" ]; then
pkgs="$pkgs docker-ce-cli-$cli_pkg_version containerd.io"
else
pkgs="$pkgs docker-ce-cli containerd.io"
fi
fi
if version_gte "20.10"; then
pkgs="$pkgs docker-compose-plugin docker-ce-rootless-extras$pkg_version"
fi
if version_gte "23.0"; then
pkgs="$pkgs docker-buildx-plugin docker-model-plugin"
fi
if ! is_dry_run; then
set -x
fi
$sh_c "$pkg_manager $pkg_manager_flags install $pkgs"
)
if [ "$NO_AUTOSTART" != "1" ]; then
start_docker_daemon
fi
echo_docker_as_nonroot
exit 0
;;
sles)
echo "Effective v27.5, please consult SLES distro statement for s390x support."
exit 1
;;
*)
if [ -z "$lsb_dist" ]; then
if is_darwin; then
echo
echo "ERROR: Unsupported operating system 'macOS'"
echo "Please get Docker Desktop from https://www.docker.com/products/docker-desktop"
echo
exit 1
fi
fi
echo
echo "ERROR: Unsupported distribution '$lsb_dist'"
echo
exit 1
;;
esac
exit 1
}
# wrapped up in a function so that we have some protection against only getting
# half the file during "curl | sh"
do_install
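The CalVer/SemVer ordering implemented by `version_compare` above can be exercised standalone. Here is a copy of the helper with a couple of checks, for illustration only:

```shell
#!/bin/sh
# Standalone copy of the script's version_compare helper, for illustration.
# Compares only the first two dotted components; patch levels and
# pre-release suffixes are ignored, matching the original.
version_compare() (
  set +x
  yy_a="$(echo "$1" | cut -d'.' -f1)"
  yy_b="$(echo "$2" | cut -d'.' -f1)"
  if [ "$yy_a" -lt "$yy_b" ]; then return 1; fi
  if [ "$yy_a" -gt "$yy_b" ]; then return 0; fi
  mm_a="$(echo "$1" | cut -d'.' -f2)"
  mm_b="$(echo "$2" | cut -d'.' -f2)"
  # trim leading zeros to accommodate CalVer
  mm_a="${mm_a#0}"
  mm_b="${mm_b#0}"
  if [ "${mm_a:-0}" -lt "${mm_b:-0}" ]; then return 1; fi
  return 0
)

version_compare 23.0 20.10 && echo "23.0 >= 20.10"
version_compare 19.03 20.10 || echo "19.03 < 20.10"
```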


@@ -5,4 +5,13 @@ WAN_IF="enp2s0f0np0"
LAN_IF="enp2s0f1np1"
ADMIN_IF="enx000ec6f01419"
INTEL_AMT_PORT="enp88s0"
ISO_URL="https://github.com/vyos/vyos-nightly-build/releases/download/2025.12.13-0020-rolling/vyos-2025.12.13-0020-rolling-generic-amd64.iso"
OLG_ISO_PLATFORM="VM" # BAREMETAL or VM
# RouterArchitects ISO (for OpenWifi Cloud SDK mode)
ISO_URL="'https://drive.usercontent.google.com/download?id=14W0hnFhM64b8_jn1CwDWiPybntIrBzlh&confirm=t&uuid=e9d7cc8d-0a8d-483f-a650-af5ca22ace15&at=APcXIO18cq3_AFU_XdJgtU1JXTyd%3A1770305061284'"
ISO_SHA256="3d6ad7bc5b5f51566cf8353cf231c395390a509588c4c855a7bd37df275d104e"
# VyOS official ISO (for stand-alone mode)
#ISO_URL="'https://github.com/vyos/vyos-nightly-build/releases/download/2026.02.03-0027-rolling/vyos-2026.02.03-0027-rolling-generic-amd64.iso'"
#ISO_SHA256="26ed11122794d4e3a457abfbdc54bd77996966e0d7e714bf4c8f66a16f0a6673"
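Note the nested quoting in `ISO_URL`: the setup scripts interpolate the variable unquoted into a generated `download_iso.sh`, and the Google Drive URL contains `&`, which would otherwise split the command and background `curl`. A minimal illustration with a hypothetical URL and output path:

```shell
#!/bin/sh
# The embedded single quotes travel with the value, so the generated script
# ends up with the URL safely quoted. Hypothetical URL and paths.
ISO_URL="'https://example.invalid/download?id=abc&confirm=t'"
printf '#!/bin/bash\ncurl -fL %s -o /tmp/image.iso\n' "$ISO_URL" > /tmp/download_iso_demo.sh
cat /tmp/download_iso_demo.sh
# The second line of the generated script reads:
#   curl -fL 'https://example.invalid/download?id=abc&confirm=t' -o /tmp/image.iso
```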


@@ -13,6 +13,13 @@ DISK_PATH="$IMAGES_DIR/${VM_NAME}.qcow2"
BR_WAN="br-wan"
BR_LAN="br-lan"
NETPLAN_FILE="/etc/netplan/99-vyos-bridges.yaml"
# Default: no DHCP on WAN bridge
BR_WAN_DHCP4="false"
# OLG_ISO_PLATFORM describes where this ISO is running.
# Valid values: BAREMETAL | VM. Extensible (later: CLOUD, etc.)
if [[ "${OLG_ISO_PLATFORM}" == "VM" ]]; then
BR_WAN_DHCP4="true"
fi
# Make sure we are running as root
if [[ $EUID -ne 0 ]]; then
@@ -29,7 +36,6 @@ for IFACE in "$WAN_IF" "$LAN_IF"; do
fi
done
echo ">>> Set the host hostname"
/opt/staging_scripts/set-hostname
@@ -37,6 +43,63 @@ echo ">>> Installing virtualization packages..."
apt-get update -y
apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients virtinst bridge-utils cloud-image-utils libguestfs-tools xorriso genisoimage syslinux-utils
echo ">>> Installing Docker..."
sh /opt/staging_scripts/get-docker.sh
# Get VyOS ISO
# All Downloads/Updates should be done on stable network, before network configurations are changed
if [[ ! -f "$ISO_PATH" ]]; then
echo ">>> Downloading VyOS ISO to $ISO_PATH"
echo -e "#!/bin/bash\ncurl -fL $ISO_URL -o $ISO_PATH" >download_iso.sh
chmod +x download_iso.sh
./download_iso.sh
rm -f download_iso.sh
# Verify SHA256 checksum
echo ">>> Verifying SHA256 checksum..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "ERROR: SHA256 checksum mismatch!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing corrupted ISO file..."
rm -f "$ISO_PATH"
exit 1
fi
echo ">>> SHA256 checksum verified successfully"
else
echo ">>> VyOS ISO already present at $ISO_PATH"
# Verify SHA256 checksum of existing file
echo ">>> Verifying SHA256 checksum of existing ISO..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "WARNING: SHA256 checksum mismatch for existing ISO!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing existing ISO and re-downloading..."
rm -f "$ISO_PATH"
echo -e "#!/bin/bash\ncurl -fL $ISO_URL -o $ISO_PATH" >download_iso.sh
chmod +x download_iso.sh
./download_iso.sh
rm -f download_iso.sh
# Verify SHA256 checksum of new download
echo ">>> Verifying SHA256 checksum..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "ERROR: SHA256 checksum mismatch!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing corrupted ISO file..."
rm -f "$ISO_PATH"
exit 1
fi
echo ">>> SHA256 checksum verified successfully"
else
echo ">>> SHA256 checksum verified successfully"
fi
fi
echo ">>> Ensuring libvirtd is running..."
systemctl enable --now libvirtd
mkdir -p "$IMAGES_DIR"
@@ -57,7 +120,7 @@ network:
bridges:
${BR_WAN}:
interfaces: [${WAN_IF}]
dhcp4: false
dhcp4: ${BR_WAN_DHCP4}
dhcp6: false
parameters:
stp: false
@@ -76,23 +139,14 @@ echo ">>> Applying netplan (this may momentarily disrupt links on $WAN_IF/$LAN_I
netplan apply
# System settings
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/99-bridge-nf-off.conf >/dev/null <<'EOF'
echo br_netfilter | tee /etc/modules-load.d/br_netfilter.conf
modprobe br_netfilter
tee /etc/sysctl.d/99-bridge-nf-off.conf >/dev/null <<'EOF'
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-arptables=0
EOF
sudo sysctl --system
# Get VyOS ISO
if [[ ! -f "$ISO_PATH" ]]; then
echo ">>> Downloading VyOS ISO to $ISO_PATH"
curl -fL $ISO_URL -o $ISO_PATH
else
echo ">>> VyOS ISO already present at $ISO_PATH"
fi
sysctl --system
# Create an ISO with our example config files
mkisofs -joliet -rock -volid "cidata" -output /var/lib/libvirt/boot/vyos-configs.iso /opt/staging_scripts/vyos-configs/vyos-factory-config
@@ -124,7 +178,7 @@ virt-install -n "$VM_NAME" \
--graphics vnc \
--hvm \
--virt-type kvm \
--disk path=/var/lib/libvirt/images/vyos.qcow2,bus=virtio,size=8 \
--disk path="$DISK_PATH",bus=virtio,size="$DISK_GB" \
--disk /var/lib/libvirt/boot/vyos-configs.iso,device=cdrom \
--noautoconsole
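The download-and-verify sequence above is duplicated across the fresh-download and re-download paths (and again in the passthru script). It could be factored into a helper along these lines; a sketch, with `verify_iso` as a hypothetical name:

```shell
#!/bin/sh
# Hypothetical helper factoring out the repeated checksum check.
# Returns 0 when the file matches the expected SHA256, 1 otherwise
# (removing the corrupt file, mirroring the scripts' behaviour).
verify_iso() {
  iso_path="$1"
  expected_sha256="$2"
  actual_sha256=$(sha256sum "$iso_path" | awk '{print $1}')
  if [ "$actual_sha256" != "$expected_sha256" ]; then
    echo "ERROR: SHA256 checksum mismatch!" >&2
    echo "Expected: $expected_sha256" >&2
    echo "Actual:   $actual_sha256" >&2
    rm -f "$iso_path"
    return 1
  fi
  echo ">>> SHA256 checksum verified successfully"
}
```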


@@ -33,6 +33,9 @@ apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients virtinst bridg
echo ">>> Ensuring libvirtd is running..."
systemctl enable --now libvirtd
echo ">>> Installing Docker..."
sh /opt/staging_scripts/get-docker.sh
mkdir -p "$IMAGES_DIR"
# Get PCI addresses for interfaces
@@ -129,9 +132,54 @@ done
# Get VyOS ISO
if [[ ! -f "$ISO_PATH" ]]; then
echo ">>> Downloading VyOS ISO to $ISO_PATH"
curl -fL $ISO_URL -o $ISO_PATH
curl -fL "$ISO_URL" -o "$ISO_PATH"
# Verify SHA256 checksum
echo ">>> Verifying SHA256 checksum..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "ERROR: SHA256 checksum mismatch!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing corrupted ISO file..."
rm -f "$ISO_PATH"
exit 1
fi
echo ">>> SHA256 checksum verified successfully"
else
echo ">>> VyOS ISO already present at $ISO_PATH"
# Verify SHA256 checksum of existing file
echo ">>> Verifying SHA256 checksum of existing ISO..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "WARNING: SHA256 checksum mismatch for existing ISO!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing existing ISO and re-downloading..."
rm -f "$ISO_PATH"
curl -fL "$ISO_URL" -o "$ISO_PATH"
# Verify SHA256 checksum of new download
echo ">>> Verifying SHA256 checksum..."
ACTUAL_SHA256=$(sha256sum "$ISO_PATH" | awk '{print $1}')
if [[ "$ACTUAL_SHA256" != "$ISO_SHA256" ]]; then
echo "ERROR: SHA256 checksum mismatch!"
echo "Expected: $ISO_SHA256"
echo "Actual: $ACTUAL_SHA256"
echo "Removing corrupted ISO file..."
rm -f "$ISO_PATH"
exit 1
fi
echo ">>> SHA256 checksum verified successfully"
else
echo ">>> SHA256 checksum verified successfully"
fi
fi
# Create an ISO with our example config files

iso-files/ucentral-setup.sh Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env bash
set -e
ACTION="$1"
# ================= CONFIG =================
CONTAINER="ucentral-olg"
IMAGE="routerarchitect123/ucentral-client:olgV5"
BRIDGE="br-wan"
HOST_VETH="veth-${CONTAINER:0:5}-h"
CONT_VETH="veth-${CONTAINER:0:5}-c"
CONT_IF="eth0"
DOCKER_RUN_OPTS="--privileged --network none"
# ==========================================
usage() {
echo "Usage: $0 setup | cleanup | shell"
exit 1
}
[ -z "$ACTION" ] && usage
container_pid() {
docker inspect -f '{{.State.Pid}}' "$CONTAINER" 2>/dev/null
}
container_exists() {
docker inspect "$CONTAINER" &>/dev/null
}
container_running() {
docker inspect -f '{{.State.Running}}' "$CONTAINER" 2>/dev/null | grep -q true
}
veth_exists() {
ip link show "$HOST_VETH" &>/dev/null
}
attached_to_bridge() {
bridge link show | grep -q "$HOST_VETH"
}
setup() {
echo "[+] Setup container on $BRIDGE"
if ! container_exists; then
echo "[+] Creating container $CONTAINER"
docker run -dit --name "$CONTAINER" $DOCKER_RUN_OPTS "$IMAGE"
fi
if ! container_running; then
echo "[+] Starting container"
docker start "$CONTAINER"
fi
PID=$(container_pid)
[ -z "$PID" ] && { echo "Failed to get container PID"; exit 1; }
if veth_exists && attached_to_bridge; then
echo "[!] Setup already done: container already attached to $BRIDGE"
exit 0
fi
echo "[+] Creating veth pair"
ip link add "$HOST_VETH" type veth peer name "$CONT_VETH"
echo "[+] Attaching host veth to bridge $BRIDGE"
ip link set "$HOST_VETH" master "$BRIDGE"
ip link set "$HOST_VETH" up
echo "[+] Moving container veth into netns"
ip link set "$CONT_VETH" netns "$PID"
echo "[+] Configuring container interface"
nsenter -t "$PID" -n -m -p sh <<EOF
ip link set lo up
ip link set "$CONT_VETH" name "$CONT_IF"
ip link set "$CONT_IF" up
udhcpc -i "$CONT_IF" -b -p /var/run/udhcpc.eth0.pid -s /usr/share/udhcpc/default.script
ubusd &
EOF
echo "[✓] Setup complete"
}
cleanup() {
local did_something=false
if veth_exists; then
echo "[+] Removing veth"
ip link del "$HOST_VETH"
did_something=true
fi
if container_exists; then
echo "[+] Stopping container"
docker stop "$CONTAINER" || true
echo "[+] Removing container"
docker rm "$CONTAINER" || true
did_something=true
fi
if ! $did_something; then
echo "[!] Nothing to cleanup"
else
echo "[✓] Cleanup complete"
fi
}
shell() {
if ! container_exists; then
echo "Container $CONTAINER does not exist"
exit 1
fi
if ! container_running; then
echo "Container $CONTAINER is not running"
exit 1
fi
echo "[+] Opening shell in $CONTAINER"
exec docker exec -it "$CONTAINER" /bin/ash
}
case "$ACTION" in
setup) setup ;;
cleanup) cleanup ;;
shell) shell ;;
*) usage ;;
esac
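The veth pair names the script derives from the container name can be sketched as follows; this is an illustrative Python mirror of the `${CONTAINER:0:5}` truncation above, which keeps interface names comfortably under Linux's 15-character IFNAMSIZ limit (the function name is ours, not part of the script):

```python
def veth_names(container: str) -> tuple[str, str]:
    # Mirrors HOST_VETH="veth-${CONTAINER:0:5}-h" and
    # CONT_VETH="veth-${CONTAINER:0:5}-c" from ucentral-setup.sh.
    stem = container[:5]  # truncate so the result stays within IFNAMSIZ
    return f"veth-{stem}-h", f"veth-{stem}-c"

host, cont = veth_names("ucentral-olg")
print(host, cont)  # veth-ucent-h veth-ucent-c
```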


@@ -0,0 +1,130 @@
{
"interfaces": [
{
"ethernet": [
{
"select-ports": [
"WAN*"
]
}
],
"ipv4": {
"addressing": "dynamic"
},
"name": "WAN",
"role": "upstream",
"services": [
"ssh"
]
},
{
"ethernet": [
{
"select-ports": [
"LAN*"
]
}
],
"ipv4": {
"addressing": "static",
"dhcp": {
"lease-count": 128,
"lease-first": 10,
"lease-time": "6h"
},
"gateway": "192.168.60.1",
"send-hostname": true,
"subnet": "192.168.60.1/24"
},
"name": "LAN",
"role": "downstream",
"services": [
"ssh"
]
},
{
"ethernet": [
{
"isolate": false,
"learning": true,
"multicast": true,
"reverse-path": false,
"select-ports": [
"LAN2"
],
"vlan-tag": "auto"
}
],
"ipv4": {
"addressing": "static",
"dhcp": {
"lease-count": 128,
"lease-first": 10,
"lease-time": "6h"
},
"gateway": "192.168.10.1",
"send-hostname": true,
"subnet": "192.168.10.1/24"
},
"name": "LAN.10",
"role": "downstream",
"vlan": {
"id": 10
}
},
{
"ethernet": [
{
"isolate": false,
"learning": true,
"multicast": true,
"reverse-path": false,
"select-ports": [
"LAN1"
],
"vlan-tag": "auto"
}
],
"ipv4": {
"addressing": "static",
"dhcp": {
"lease-count": 128,
"lease-first": 10,
"lease-time": "6h"
},
"gateway": "192.168.20.1",
"send-hostname": true,
"subnet": "192.168.20.1/24"
},
"name": "LAN.20",
"role": "downstream",
"vlan": {
"id": 20
}
}
],
"nat": {
"source": {
"rule": {
"100": {
"description": "LAN SNAT",
"outbound-interface": {
"name": "br0"
},
"source": {
"address": "192.168.60.0/24"
},
"translation": {
"address": "masquerade"
}
}
}
}
},
"services": {
"ssh": {
"port": 22
}
},
"uuid": 1770703498
}
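The config above pairs each interface `name` with a `role` (`upstream` for the DHCP-addressed WAN, `downstream` for the statically addressed LANs). A minimal sketch of reading that mapping, using a trimmed copy of the document (field names assumed to match the full config):

```python
import json

# Trimmed excerpt of the uCentral config above.
doc = json.loads("""
{
  "interfaces": [
    {"name": "WAN",    "role": "upstream",   "ipv4": {"addressing": "dynamic"}},
    {"name": "LAN",    "role": "downstream", "ipv4": {"addressing": "static",
                                                      "subnet": "192.168.60.1/24"}},
    {"name": "LAN.10", "role": "downstream", "vlan": {"id": 10}}
  ],
  "uuid": 1770703498
}
""")

# Map each interface name to its role.
roles = {i["name"]: i["role"] for i in doc["interfaces"]}
print(roles)  # {'WAN': 'upstream', 'LAN': 'downstream', 'LAN.10': 'downstream'}
```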