Prerequisites

LXC KVM

Deploy LXD on the LXC/KVM host (192.168.2.3) sitting behind the OpenWrt soft router. On Arch Linux, install it with the pacman package manager; for other Linux distributions, consult the official LXD documentation. Ideally the host's filesystem is btrfs, which makes it easy to grow the LXC storage pool later.
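Before creating a btrfs-backed storage pool, it is worth confirming what the root filesystem actually is; a minimal sketch (assumes GNU coreutils `stat` is available):

```shell
# Check the host root filesystem type; btrfs is recommended here so the
# storage pool can be grown later.
fstype=$(stat -f -c %T /)
echo "root filesystem: ${fstype}"
if [ "${fstype}" != "btrfs" ]; then
  echo "note: not btrfs; pick the dir or lvm backend during 'lxd init' instead"
fi
```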

#This host: 192.168.2.3
sudo pacman -S lxd qemu-full   #install the lxd/lxc and qemu packages
sudo systemctl enable --now lxd  #enable and start the service
sudo systemctl status lxd.service  #verify it is running

#Enable support for running unprivileged containers
su root
# Allocate 65536 uids/gids in the range 231072~296607 so container child processes
# stay isolated from other host processes and avoid host permission clashes
vim /etc/lxc/default.conf
#-----------append the following-----------
lxc.idmap = u 0 231072 65536
lxc.idmap = g 0 231072 65536
#-----------------------------
echo "lxd:231072:65536" > /etc/subgid
echo "lxd:231072:65536" > /etc/subuid
# Create a bridge named lxdbr0 on subnet 10.10.10.1/24; verify with the ip addr command
lxc network create lxdbr0 --type=bridge ipv4.address=10.10.10.1/24 ipv4.nat=true
exit #leave the root shell
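As a sanity check on the numbers above, 65536 subordinate ids starting at 231072 do end exactly at 296607; a minimal sketch:

```shell
# Sanity-check the subordinate id range used in /etc/subuid and /etc/subgid:
# 65536 ids starting at 231072.
start=231072
count=65536
end=$((start + count - 1))
echo "uid/gid range: ${start}-${end}"   # 231072-296607, matching the comment above
```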

#Add the current user to the lxd group
sudo usermod -a -G lxd $USER
sudo chmod 666 /var/lib/lxd/unix.socket
sudo lxd init #initialize LXD
#-------------output----------------------
Identified face as teaper
Would you like to use LXD clustering? (yes/no) [default=no]: no #clustering is only needed across multiple LXD hosts
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes #configure a new storage pool
Name of the new storage pool [default=default]: lxd #name of the new pool
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: btrfs #storage backend (the host should be on btrfs; pick another backend for other filesystems)
Create a new BTRFS pool? (yes/no) [default=yes]: yes #create a new BTRFS pool
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: no #use an existing empty block device (e.g. a disk or partition)
Size in GB of the new loop device (1GB minimum) [default=30GB]: 30 #size of the new loop device
Would you like to connect to a MAAS server? (yes/no) [default=no]: no #connect to a MAAS server
Would you like to create a new local network bridge? (yes/no) [default=yes]: no #create a new local bridge (we already created one above)
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes #use an existing bridge or host interface
Name of the existing bridge or host interface: lxdbr0 #the bridge created earlier
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes #make the LXD server available over the network
Address to bind LXD to (not including port) [default=all]: all #address to bind LXD to (not including port)
Port to bind LXD to [default=8443]: 8443 #bind LXD to port 8443
Trust password for new clients: 123 #trust password for new clients
Again: 123
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes #automatically update stale cached images
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no #print a YAML "lxd init" preseed
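The interactive answers above can also be captured as a preseed file and replayed non-interactively with `cat preseed.yaml | lxd init --preseed`. A hedged sketch reconstructing this walkthrough's answers (verify the keys against your LXD version's preseed schema; note it configures the bridge devices on the default profile rather than assuming a pre-created bridge):

```yaml
config:
  core.https_address: '[::]:8443'
  core.trust_password: "123"
storage_pools:
- name: lxd
  driver: btrfs
  config:
    size: 30GB
profiles:
- name: default
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd
      type: disk
```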
❯ lxc profile ls #list lxc profiles
+----------+-----------------------+---------+
|   NAME   |      DESCRIPTION      | USED BY |
+----------+-----------------------+---------+
| default  | Default LXD profile   | 0       |
+----------+-----------------------+---------+
❯ lxc profile copy default 4U4G40GB #copy the default profile to 4U4G40GB
❯ lxc profile ls #list again
+----------+-----------------------+---------+
|   NAME   |      DESCRIPTION      | USED BY |
+----------+-----------------------+---------+
| 4U4G40GB | Default LXD profile   | 0       |
+----------+-----------------------+---------+
| default  | Default LXD profile   | 0       |
+----------+-----------------------+---------+
❯ echo "export EDITOR=vim" >> ~/.bashrc  #make lxc edit use vim (substitute nvim / nano if preferred)
❯ source ~/.bashrc
❯ lxc profile edit 4U4G40GB #edit the 4U4G40GB profile with the editor configured above

#---------------------edit the profile------------------
config:
  limits.cpu: "4"                        
  limits.memory: 4GB                     
  security.nesting: "true"                
description: 4 cores, 4GB memory, 40GB storage
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd
    size: 40GB                            
    type: disk
name: 4U4G40GB                             
used_by: []
#--------------------save and quit with :wq-------------------
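The same profile values can also be set non-interactively with `lxc profile set` / `lxc profile device set`; a sketch (DRY_RUN=1 only prints the commands, so it is safe anywhere; set it to 0 on the LXD host to actually apply them to the 4U4G40GB profile):

```shell
# Non-interactive alternative to `lxc profile edit` (a sketch; profile name
# 4U4G40GB and pool name lxd come from this guide).
DRY_RUN=1   # set to 0 to actually run the lxc commands on the LXD host
run() { if [ "${DRY_RUN}" = "1" ]; then echo "$*"; else "$@"; fi; }
run lxc profile set 4U4G40GB limits.cpu=4
run lxc profile set 4U4G40GB limits.memory=4GB
run lxc profile set 4U4G40GB security.nesting=true
run lxc profile device set 4U4G40GB root size=40GB
```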
lxc profile ls #list again
#list containers
lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

#Add the Tsinghua (TUNA) lxd image mirror
lxc remote add tuna-images https://mirrors.tuna.tsinghua.edu.cn/lxc-images/ --protocol=simplestreams --public

lxc image list tuna-images:archlinux #list the archlinux images available in the TUNA mirror
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
|              ALIAS               | FINGERPRINT  | PUBLIC |               DESCRIPTION                | ARCHITECTURE |      TYPE       |   SIZE    |          UPLOAD DATE          |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux (5 more)               | dda22c5beeed | yes    | Archlinux current amd64 (20220628_04:27) | x86_64       | VIRTUAL-MACHINE | 524.00MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux (5 more)               | f227f07b396d | yes    | Archlinux current amd64 (20220628_04:27) | x86_64       | CONTAINER       | 165.51MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux/arm64 (2 more)         | ef595a1ad998 | yes    | Archlinux current arm64 (20220628_04:27) | aarch64      | CONTAINER       | 160.84MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux/armhf (2 more)         | 691639e72776 | yes    | Archlinux current armhf (20220628_04:27) | armv7l       | CONTAINER       | 152.44MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux/cloud (3 more)         | 8afd8c4b621b | yes    | Archlinux current amd64 (20220628_04:27) | x86_64       | CONTAINER       | 185.76MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux/cloud (3 more)         | 22d9d990eba8 | yes    | Archlinux current amd64 (20220628_04:27) | x86_64       | VIRTUAL-MACHINE | 538.75MB  | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
| archlinux/desktop-gnome (3 more) | 2d8f5031225c | yes    | Archlinux current amd64 (20220628_04:27) | x86_64       | VIRTUAL-MACHINE | 1376.15MB | Jun 28, 2022 at 12:00am (UTC) |
+----------------------------------+--------------+--------+------------------------------------------+--------------+-----------------+-----------+-------------------------------+
#Launch the virtual machines (VIRTUAL-MACHINE type)
lxc launch tuna-images:dda22c5beeed vnt-node1 --vm -c security.secureboot=false -p 4U4G40GB #security.secureboot=false disables Secure Boot
lxc launch tuna-images:dda22c5beeed vnt-node2 --vm -c security.secureboot=false -p 4U4G40GB
lxc launch tuna-images:dda22c5beeed vnt-node3 --vm -c security.secureboot=false -p 4U4G40GB
lxc exec vnt-node1 passwd #set the guest root password (123)
lxc exec vnt-node2 passwd #set the guest root password (123)
lxc exec vnt-node3 passwd #set the guest root password (123)
lxc console vnt-node1  #attach to the VM console
lxc console vnt-node2  #attach to the VM console
lxc console vnt-node3  #attach to the VM console

#After entering each VM, install openssh and enable the ssh service first: ssh access is more
#convenient than lxc console and avoids terminal-compatibility problems. Set PermitRootLogin yes
#in sshd_config; all three VMs vnt-node1~3 need openssh installed and enabled manually.
#To shut the VMs down later, use: lxc stop vnt-node1 vnt-node2 vnt-node3
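Per-node commands like the stop above can be looped instead of repeated; a small sketch using the node names from this guide (the echo is a placeholder, replace it with the real lxc call on the host):

```shell
# Run the same lxc subcommand against every node (names from this guide).
nodes="vnt-node1 vnt-node2 vnt-node3"
for node in ${nodes}; do
  echo "would run: lxc stop ${node}"   # swap echo for the real lxc invocation
done
```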

Deploy VNTS

VNTS is the VNT server, used to register VNT clients and to relay traffic between them; it needs a machine reachable from the public internet. Here I use a Tencent Cloud instance (Debian, x86_64, 4 cores, 4GB RAM, 6 Mbps) at 43.136.94.42.

#This host's IP: 43.136.94.42
curl -LO https://github.com/lbl8603/vnts/releases/download/1.2.9.7/vnts-x86_64-unknown-linux-gnu-1.2.9.7.tar.gz
tar -zxvf vnts-x86_64-unknown-linux-gnu-1.2.9.7.tar.gz

vim /etc/systemd/system/vnts.service
#----------------add the following------------------
[Unit]
Description=VNT Service
After=network.target

[Service]
ExecStart=/root/vnts --white-token teaper_vnt_nets --gateway 10.22.0.1 --netmask 255.255.255.0 --username admin --password admin
ExecStop=/usr/bin/pkill vnts
User=root
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
#-----------------------------------------------
systemctl daemon-reload
systemctl enable --now vnts.service
systemctl status vnts.service
● vnts.service - VNT Service
     Loaded: loaded (/etc/systemd/system/vnts.service; enabled; preset: enabled)
     Active: active (running) since Mon 2024-05-13 23:13:15 CST; 15h ago
   Main PID: 2707335 (vnts)
      Tasks: 11 (limit: 4490)
     Memory: 5.9M
        CPU: 24.281s
     CGroup: /system.slice/vnts.service
             └─2707335 /root/vnts --white-token teaper_vnt_nets --gateway 10.22.0.1 --netmask 255.255.255.0 --username admin --password admin

May 13 23:13:15 VM-12-4-debian vnts[2707335]: Serial: 2405071448-891
May 13 23:13:15 VM-12-4-debian vnts[2707335]: port: 29872
May 13 23:13:15 VM-12-4-debian vnts[2707335]: web port: 29870
May 13 23:13:15 VM-12-4-debian vnts[2707335]: token whitelist: Some({"teaper_vnt_nets"})
May 13 23:13:15 VM-12-4-debian vnts[2707335]: gateway: 10.22.0.1
May 13 23:13:15 VM-12-4-debian vnts[2707335]: netmask: 255.255.255.0
May 13 23:13:15 VM-12-4-debian vnts[2707335]: key fingerprint: 0TP5zIdSrGkz57C1lGOB5bpRNjq6QjmdmJOJrtFuwOs=
May 13 23:13:15 VM-12-4-debian vnts[2707335]: listening on udp port: 29872
May 13 23:13:15 VM-12-4-debian vnts[2707335]: listening on tcp port: 29872
May 13 23:13:15 VM-12-4-debian vnts[2707335]: listening on http port: 29870

The log shows three ports. In Tencent Cloud → Lighthouse → Firewall, allow 0.0.0.0/0 to TCP/UDP 29872 and TCP 29870. The VNT web console is then reachable at http://43.136.94.42:29870 (username/password: admin/admin).

Also add a TXT record in Cloudflare → DNS: name vnts, content 43.136.94.42:29872, proxy status "DNS only", automatic TTL.

Deploy VNT on the KVM guests

SSH into the vnt-node1, vnt-node2, and vnt-node3 VMs created earlier and create a vnt.service file on each. The ExecStart commands must not conflict between nodes: use -n to name each vnt client, -d to give each a distinct device number, --ip to pin the assigned virtual IP, and -s txt:vnts.teaper.dev to point at the VNTS server.

vim /etc/systemd/system/vnt.service
#----------------vnt-node1: add the following----------------------
[Unit]
Description=VNT Service
After=network.target

[Service]
ExecStart=/root/vnt-cli -k teaper_vnt_nets -s txt:vnts.teaper.dev -n vnt-node1 -d 2 -o 0.0.0.0/0 -w password123 --ip 10.22.0.2 --model aes_gcm
# ExecStartPost=iptables -t nat -A POSTROUTING -o vnt-tun -j MASQUERADE
ExecStop=/usr/bin/pkill -9 vnt-cli
# ExecStopPost=iptables -t nat -D POSTROUTING -o vnt-tun -j MASQUERADE
User=root
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
#------------------------------------------------------

#----------------vnt-node2: add the following----------------------
[Unit]
Description=VNT Service
After=network.target

[Service]
ExecStart=/root/vnt-cli -k teaper_vnt_nets -s txt:vnts.teaper.dev -n vnt-node2 -d 3 -o 0.0.0.0/0 -w password123 --ip 10.22.0.3 --model aes_gcm
ExecStop=/usr/bin/pkill -9 vnt-cli
User=root
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
#------------------------------------------------------

#----------------vnt-node3: add the following----------------------
[Unit]
Description=VNT Service
After=network.target

[Service]
ExecStart=/root/vnt-cli -k teaper_vnt_nets -s txt:vnts.teaper.dev -n vnt-node3 -d 4 -o 0.0.0.0/0 -w password123 --ip 10.22.0.4 --model aes_gcm
ExecStop=/usr/bin/pkill -9 vnt-cli
User=root
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
#-------------------------------------------------------
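The three units differ only in the client name, device number, and virtual IP, so they can be stamped out with a small shell function. A sketch (flags copied from the units above, with ExecStop spelled with an absolute path; run it on each node and redirect the output to /etc/systemd/system/vnt.service):

```shell
# Generate a vnt.service unit for a given node.
make_vnt_unit() {  # $1 = client name, $2 = device number, $3 = virtual IP
  cat <<EOF
[Unit]
Description=VNT Service
After=network.target

[Service]
ExecStart=/root/vnt-cli -k teaper_vnt_nets -s txt:vnts.teaper.dev -n $1 -d $2 -o 0.0.0.0/0 -w password123 --ip $3 --model aes_gcm
ExecStop=/usr/bin/pkill -9 vnt-cli
User=root
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF
}
make_vnt_unit vnt-node1 2 10.22.0.2   # > /etc/systemd/system/vnt.service on vnt-node1
```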
systemctl daemon-reload
systemctl enable --now vnt.service
systemctl status vnt.service

#Enable ipv4 forwarding on each kvm guest
sysctl net.ipv4.ip_forward  #0 = forwarding off, 1 = on
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
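To confirm the change took effect, the same flag can be read straight from the kernel:

```shell
# Read the forwarding flag directly from /proc (1 = forwarding enabled).
v=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward = ${v}"
```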