Proxmox VE is an open-source virtualization platform based on KVM and LXC containers. It can be managed through a web interface, and a command line and a REST API are also available. More information can be found on this page, and various video tutorials can be viewed on this page. The software is released under the AGPL. Version 6.4 has been released with the following changes:
Proxmox VE 6.4
- Based on Debian Buster (10.9)
- Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
- Kernel 5.4 default
- Kernel 5.11 opt-in
- LXC 4.0
- QEMU 5.2
- ZFS 2.0.4 - new major version
Virtual Machines (KVM/QEMU):
- Support pinning a VM to a specific QEMU machine version (see the example after this list).
- Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation. This improves stability and guarantees that the hardware layout can stay the same even with newer QEMU versions.
- Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in nondeterministic edge cases. Note that some QMP timeout log messages are still being investigated; they are harmless and only informative in nature.
- cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
- Improve output in task log for mirroring drives and VM live-migration.
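A rough sketch of how the machine-version pinning and the re-added SLAAC option can be used from the command line (the VM ID and machine version below are made-up examples):
# Pin an existing VM to the QEMU 5.2 i440fx machine version
qm set 100 --machine pc-i440fx-5.2
# Let cloud-init configure IPv6 on the first NIC via SLAAC
qm set 100 --ipconfig0 ip=dhcp,ip6=auto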
Container
- Improved cgroup v2 (control group) handling.
- Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
Backup and Restore
- Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server. The file-restore is available in the GUI and in a new command-line tool, proxmox-file-restore (see the example after this list).
- Live-Restore of VM backup archives located on a Proxmox Backup Server. No more watching the task log, waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
- Consistent handling of excludes for container backups across the different backup modes and storage types.
- Container restores now default to the privilege setting from the backup archive.
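A minimal sketch of the new command-line tool; the repository string, snapshot ID and in-archive path below are illustrative assumptions, not values from the release notes:
# List the top level of a container backup snapshot on a Proxmox Backup Server
proxmox-file-restore list 'ct/101/2021-04-28T06:00:00Z' / --repository 'root@pam@pbs.example.com:store1'
# Extract a path found in the listing to a local target directory
proxmox-file-restore extract 'ct/101/2021-04-28T06:00:00Z' /root.pxar.didx/etc/hostname ./restore --repository 'root@pam@pbs.example.com:store1'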
Ceph Server
- Improve integration for placement group (PG) auto-scaler status and configuration. Allow configuration of the CRUSH-rule, Target Size and Target Ratio settings, and automatically calculate the optimal number of PGs based on this (see the example after this list).
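These settings map onto standard Ceph pool properties; a sketch with the plain ceph CLI, where the pool name and values are examples only:
# Point the pool at a specific CRUSH rule
ceph osd pool set vm-pool crush_rule replicated_rule
# Give the PG auto-scaler the pool's expected share of total capacity
ceph osd pool set vm-pool target_size_ratio 0.3
# Let the auto-scaler adjust pg_num on its own
ceph osd pool set vm-pool pg_autoscale_mode on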
Storage
- Support editing of backup notes on any CephFS, CIFS or NFS storage.
- Support configuring a namespace for accessing a Ceph pool.
- Improve ZFS pool handling by doing separate checks for imported and mounted. This separation helps in situations where a pool was imported but not mounted and executing another import command failed (see the example after this list).
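The distinction matters because a pool can be imported while its datasets remain unmounted; a small sketch with the standard ZFS tools (the pool name is an example):
zpool import -N tank   # import the pool without mounting any of its datasets
zfs get mounted tank   # reports "no": imported, but not mounted
zfs mount -a           # mount all mountable datasets of imported pools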
Disk Management
- Return partitions and display them in tree format.
- Improve detection of disk and partition usage.
Enhancements in the web interface (GUI)
- Show current usage of host memory and CPU resources by each guest in a node's search-view.
- Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
- Make columns in the firewall rule view more responsive and flexible by default.
- Improve Ceph pool view, show auto-scaler related columns.
- Support editing existing Ceph pools, adapting the CRUSH-rule, Target Size and Target Ratio, among other things.
External metric servers:
- Support the InfluxDB 1.8 and 2.0 HTTP(s) API.
- Allow use of InfluxDB instances placed behind a reverse-proxy.
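External metric servers are configured in /etc/pve/status.cfg. The sketch below assumes the InfluxDB 2.0 HTTP(S) API with made-up server, organization, bucket and token values; the exact option names should be verified against the Proxmox VE documentation:
influxdb: metrics-example
        server influx.example.com
        port 8086
        influxdbproto https
        organization example-org
        bucket proxmox
        token EXAMPLE-TOKEN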
Proxmox VE API Proxy Daemon (pveproxy)
- Make listening IP configurable (in /etc/default/pveproxy). This can help to limit exposure to the outside, e.g. by only binding to an internal IP (see the example after this list).
- pveproxy now listens for both IPv4 and IPv6 by default.
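For example, to bind pveproxy only to an internal address (the IP is a placeholder), set in /etc/default/pveproxy:
LISTEN_IP="192.0.2.10"
and restart the daemon with systemctl restart pveproxy.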
Installation ISO:
- Installation on ZFS:
  - If booted with legacy BIOS (non-UEFI), now also copy the kernel images to the second VFAT partition (ESP), allowing the system to boot from there with GRUB and making it possible to enable all ZFS features on such systems.
  - Set up the boot partition and boot loader on all selected disks, instead of only on the first mirror vdev, improving the experience with hardware where the boot device is not easily selectable.
- The installer environment attempts an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues if the RTC had a large time drift.
pve-zsync
- Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
- Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
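A sketch of how two intervals can coexist for the same source and destination pair by giving the jobs different names (the IDs, host and dataset are made-up examples; the actual schedule comes from the cron entries pve-zsync creates):
pve-zsync create --source 100 --dest 192.0.2.5:tank/backup --name hourly --maxsnap 24
pve-zsync create --source 100 --dest 192.0.2.5:tank/backup --name daily --maxsnap 7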
Firewall
- Fixes in the API schema to prevent storing rules with a big IP-address list, which would get rejected by iptables-restore due to its size limitations. We recommend creating and using IP sets for that use case (see the example after this list).
- Improvements to the command-line parameter handling.
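IP sets live in the firewall configuration files (e.g. /etc/pve/firewall/cluster.fw) and can be referenced from rules; a minimal sketch with example addresses:
[IPSET allowed-hosts] # example set
192.0.2.0/24
198.51.100.7

[RULES]
IN ACCEPT -source +allowed-hosts -p tcp -dport 8006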
Known Issues
- New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server:
- With the LISTEN_IP now being configurable, the daemon binds to both wildcard addresses (IPv4 0.0.0.0:8006 and IPv6 [::]:8006) by default.
- Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in /etc/default/pveproxy:
LISTEN_IP="0.0.0.0"
- Additionally, the logged IP address format changed for IPv4 in pveproxy's access log (/var/log/pveproxy/access.log). IPv4 addresses are now logged as IPv4-mapped IPv6 addresses. Instead of:
192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
the line now looks like:
::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
- If you want to restore the old logging format, also set
LISTEN_IP="0.0.0.0"
- Resolving the Ceph `insecure global_id reclaim` Health Warning
- With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20 we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
- Updating from an earlier version will result in the above health warning.
- See the forum post here for more details and instructions to address this warning.
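Once every client and daemon in the cluster has been updated, the warning can be cleared by disallowing the insecure reclaim behavior; a sketch, to be run only after everything is patched:
# Check the health details first
ceph health detail
# Disallow insecure global_id reclaim once no unpatched clients remain
ceph config set mon auth_allow_insecure_global_id_reclaim false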