I literally ran the script (as per the readme.md) using:
wget https://raw.githubusercontent.com/dazeb/proxmox-dietpi-installer/main/dietpi-install.sh
chmod +x dietpi-install.sh
./dietpi-install.sh
By the look of it there could be some breaking changes introduced in Proxmox 8.
I will do some testing very soon to get the script working but may take a few days.
Many, many thanks.
Alan
I have tested the script with Proxmox 8 and cannot find any differences. A new branch has been opened for version 8 of the script if further changes are needed. At the moment the script is working as expected on Proxmox 8.
@g1gop I think your problem may be unrelated to the script, but if you can, please post some more information and I will do my best to help.
@MichaIng I still have not found any info regarding the RAID differences. Will keep looking.
@g1gop
Indeed this kernel error should be unrelated to the script, but probably due to some changes in Proxmox 8. Are you able to catch some more logs from above the call trace, so that we can see which service/target/script was running at the time? It is 191 seconds after boot, so not immediately during the kernel boot phase but later, probably during some first-run setup or install step.
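If the VM still boots far enough to give you a shell, something along these lines should capture the relevant part (just one possible way to do it, adjust as needed):
# Inside the DietPi VM:
journalctl -b --no-pager > /tmp/boot.log    # full log of the current boot
dmesg | tail -n 100                         # last kernel messages, including the call trace
# From the Proxmox host, a serial console can catch output the display misses
# (requires a serial port configured on the VM):
# qm terminal <VMID>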
Hi, we have been away on holiday and literally just got back.
I will try to run the script and catch some logs for you tomorrow.
Many thanks
Alan
OK, ran the script (script on Proxmox in the /root folder) and got this output:
root@proxmox1:~# ./dietpi-install.sh
--2023-09-15 12:43:06-- #removed due to restriction of links
Resolving #link(#link)... 172.67.170.219, 104.21.28.141, 2606:4700:3032::ac43:aadb, ...
Connecting to dietpi.com (dietpi.com)|172.67.170.219|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 205115674 (196M) [application/x-7z-compressed]
Saving to: ‘DietPi_Proxmox-x86_64-Bullseye.7z.1’
DietPi_Proxmox-x86_64-Bul 100%[====================================>] 195.61M 11.0MB/s in 18s
2023-09-15 12:43:25 (10.8 MB/s) - ‘DietPi_Proxmox-x86_64-Bullseye.7z.1’ saved [205115674/205115674]
7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,4 CPUs Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz (506E3),ASM,AES-NI)
Scanning the drive for archives:
1 file, 205115674 bytes (196 MiB)
Extracting archive: DietPi_Proxmox-x86_64-Bullseye.7z
--
Path = DietPi_Proxmox-x86_64-Bullseye.7z
Type = 7z
Physical Size = 205115674
Headers Size = 251
Method = LZMA2:26
Solid = +
Blocks = 1
Would you like to replace the existing file:
Path: ./DietPi_Proxmox-x86_64-Bullseye.qcow2
Size: 222462464 bytes (213 MiB)
Modified: 2023-08-27 02:25:57
with the file from archive:
Path: DietPi_Proxmox-x86_64-Bullseye.qcow2
Size: 222462464 bytes (213 MiB)
Modified: 2023-08-27 02:25:57
? (Y)es / (N)o / (A)lways / (S)kip all / A(u)to rename all / (Q)uit? yes
Everything is Ok
Size: 222462464
Compressed: 205115674
importing disk 'DietPi_Proxmox-x86_64-Bullseye.qcow2' to VM 114 ...
Formatting '/mnt/pve/storage1/images/114/vm-114-disk-0.raw', fmt=raw size=8589934592 preallocation=off
transferred 0.0 B of 8.0 GiB (0.00%)
transferred 81.9 MiB of 8.0 GiB (1.00%)
transferred 164.7 MiB of 8.0 GiB (2.01%)
transferred 246.6 MiB of 8.0 GiB (3.01%)
transferred 328.5 MiB of 8.0 GiB (4.01%)
transferred 411.2 MiB of 8.0 GiB (5.02%)
transferred 493.2 MiB of 8.0 GiB (6.02%)
transferred 575.1 MiB of 8.0 GiB (7.02%)
transferred 657.8 MiB of 8.0 GiB (8.03%)
transferred 739.7 MiB of 8.0 GiB (9.03%)
transferred 821.7 MiB of 8.0 GiB (10.03%)
transferred 904.4 MiB of 8.0 GiB (11.04%)
transferred 986.3 MiB of 8.0 GiB (12.04%)
transferred 1.0 GiB of 8.0 GiB (13.04%)
transferred 1.1 GiB of 8.0 GiB (14.05%)
transferred 1.2 GiB of 8.0 GiB (15.05%)
transferred 1.3 GiB of 8.0 GiB (16.05%)
transferred 1.4 GiB of 8.0 GiB (17.06%)
transferred 1.4 GiB of 8.0 GiB (18.06%)
transferred 1.5 GiB of 8.0 GiB (19.06%)
transferred 1.6 GiB of 8.0 GiB (20.07%)
transferred 1.7 GiB of 8.0 GiB (21.07%)
transferred 1.8 GiB of 8.0 GiB (22.07%)
transferred 1.8 GiB of 8.0 GiB (23.07%)
transferred 1.9 GiB of 8.0 GiB (24.08%)
transferred 2.0 GiB of 8.0 GiB (25.08%)
transferred 2.1 GiB of 8.0 GiB (26.08%)
transferred 2.2 GiB of 8.0 GiB (27.09%)
transferred 2.2 GiB of 8.0 GiB (28.09%)
transferred 2.3 GiB of 8.0 GiB (29.09%)
transferred 2.4 GiB of 8.0 GiB (30.10%)
transferred 2.5 GiB of 8.0 GiB (31.10%)
transferred 2.6 GiB of 8.0 GiB (32.10%)
transferred 2.6 GiB of 8.0 GiB (33.11%)
transferred 2.7 GiB of 8.0 GiB (34.11%)
transferred 2.8 GiB of 8.0 GiB (35.11%)
transferred 2.9 GiB of 8.0 GiB (36.12%)
transferred 3.0 GiB of 8.0 GiB (37.12%)
transferred 3.0 GiB of 8.0 GiB (38.12%)
transferred 3.1 GiB of 8.0 GiB (39.13%)
transferred 3.2 GiB of 8.0 GiB (40.13%)
transferred 3.3 GiB of 8.0 GiB (41.13%)
transferred 3.4 GiB of 8.0 GiB (42.14%)
transferred 3.5 GiB of 8.0 GiB (43.14%)
transferred 3.5 GiB of 8.0 GiB (44.14%)
transferred 3.6 GiB of 8.0 GiB (45.15%)
transferred 3.7 GiB of 8.0 GiB (46.15%)
transferred 3.8 GiB of 8.0 GiB (47.15%)
transferred 3.9 GiB of 8.0 GiB (48.16%)
transferred 3.9 GiB of 8.0 GiB (49.16%)
transferred 4.0 GiB of 8.0 GiB (50.16%)
transferred 4.1 GiB of 8.0 GiB (51.17%)
transferred 4.2 GiB of 8.0 GiB (52.17%)
transferred 4.3 GiB of 8.0 GiB (53.17%)
transferred 4.3 GiB of 8.0 GiB (54.18%)
transferred 4.4 GiB of 8.0 GiB (55.18%)
transferred 4.5 GiB of 8.0 GiB (56.18%)
transferred 4.6 GiB of 8.0 GiB (57.19%)
transferred 4.7 GiB of 8.0 GiB (58.19%)
transferred 4.7 GiB of 8.0 GiB (59.19%)
transferred 4.8 GiB of 8.0 GiB (60.20%)
transferred 4.9 GiB of 8.0 GiB (61.21%)
transferred 5.0 GiB of 8.0 GiB (62.21%)
transferred 5.1 GiB of 8.0 GiB (63.21%)
transferred 5.1 GiB of 8.0 GiB (64.22%)
transferred 5.2 GiB of 8.0 GiB (65.22%)
transferred 5.3 GiB of 8.0 GiB (66.22%)
transferred 5.4 GiB of 8.0 GiB (67.23%)
transferred 5.5 GiB of 8.0 GiB (68.23%)
transferred 5.5 GiB of 8.0 GiB (69.23%)
transferred 5.6 GiB of 8.0 GiB (70.24%)
transferred 5.7 GiB of 8.0 GiB (71.24%)
transferred 5.8 GiB of 8.0 GiB (72.24%)
transferred 5.9 GiB of 8.0 GiB (73.25%)
transferred 5.9 GiB of 8.0 GiB (74.25%)
transferred 6.0 GiB of 8.0 GiB (75.25%)
transferred 6.1 GiB of 8.0 GiB (76.26%)
transferred 6.2 GiB of 8.0 GiB (77.26%)
transferred 6.3 GiB of 8.0 GiB (78.26%)
transferred 6.3 GiB of 8.0 GiB (79.27%)
transferred 6.4 GiB of 8.0 GiB (80.27%)
transferred 6.5 GiB of 8.0 GiB (81.27%)
transferred 6.6 GiB of 8.0 GiB (82.28%)
transferred 6.7 GiB of 8.0 GiB (83.28%)
transferred 6.7 GiB of 8.0 GiB (84.28%)
transferred 6.8 GiB of 8.0 GiB (85.29%)
transferred 6.9 GiB of 8.0 GiB (86.29%)
transferred 7.0 GiB of 8.0 GiB (87.29%)
transferred 7.1 GiB of 8.0 GiB (88.30%)
transferred 7.1 GiB of 8.0 GiB (89.30%)
transferred 7.2 GiB of 8.0 GiB (90.30%)
transferred 7.3 GiB of 8.0 GiB (91.31%)
transferred 7.4 GiB of 8.0 GiB (92.31%)
transferred 7.5 GiB of 8.0 GiB (93.31%)
transferred 7.5 GiB of 8.0 GiB (94.31%)
transferred 7.6 GiB of 8.0 GiB (95.32%)
transferred 7.7 GiB of 8.0 GiB (96.32%)
transferred 7.8 GiB of 8.0 GiB (97.32%)
transferred 7.9 GiB of 8.0 GiB (98.33%)
transferred 7.9 GiB of 8.0 GiB (99.33%)
transferred 8.0 GiB of 8.0 GiB (100.00%)
transferred 8.0 GiB of 8.0 GiB (100.00%)
Successfully imported disk as 'unused0:storage1:114/vm-114-disk-0.raw'
update VM 114: -cores 2
update VM 114: -memory 2048
update VM 114: -net0 virtio,bridge=vmbr0
unable to parse directory volume name 'vm-114-disk-0'
update VM 114: -boot order=scsi0
invalid bootorder: device 'scsi0' does not exist'
update VM 114: -scsihw virtio-scsi-pci
VM 114 Created.
Noticed the disk was not attached to the VM. Added it and altered the boot order (needed to press Esc and use the legacy ROM in the console). Then got this:
![Screenshot from 2023-09-15 12-56-12|690x384](upload://hHxAPvb5TPegojwqbmrMmhu1r0Q.png)
@g1gop Thanks for the logs. The disk is not being attached to scsi0, not sure why; give me some time and I'll try to reproduce the error and see what's going on.
Also, try using the script from the testing branch, select BTRFS and see what happens.
No need for the above; the testing branch has now been merged into the main branch.
EDIT:
I've just tested and can see the difference: your storage is different from my thin-provisioned one. On a normal install here, the script output looks like this:
╭─root@pve ~
╰─# ./dietpi-install.sh
Is it a BTRFS storage? (y/N)
--2023-09-15 14:21:24-- https://dietpi.com/downloads/images/DietPi_Proxmox-x86_64-Bullseye.7z
Resolving dietpi.com (dietpi.com)... 104.21.28.141, 172.67.170.219, 2606:4700:3032::ac43:aadb, ...
Connecting to dietpi.com (dietpi.com)|104.21.28.141|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 205115674 (196M) [application/x-7z-compressed]
Saving to: ‘DietPi_Proxmox-x86_64-Bullseye.7z.1’
DietPi_Proxmox-x86_64 100%[======================>] 195.61M 43.4MB/s in 5.0s
2023-09-15 14:21:30 (39.2 MB/s) - ‘DietPi_Proxmox-x86_64-Bullseye.7z.1’ saved [205115674/205115674]
7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,8 CPUs Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz (506E3),ASM,AES-NI)
Scanning the drive for archives:
1 file, 205115674 bytes (196 MiB)
Extracting archive: DietPi_Proxmox-x86_64-Bullseye.7z
--
Path = DietPi_Proxmox-x86_64-Bullseye.7z
Type = 7z
Physical Size = 205115674
Headers Size = 251
Method = LZMA2:26
Solid = +
Blocks = 1
Everything is Ok
Size: 222462464
Compressed: 205115674
importing disk 'DietPi_Proxmox-x86_64-Bullseye.qcow2' to VM 102 ...
Logical volume "vm-102-disk-0" created.
transferred 0.0 B of 8.0 GiB (0.00%)
transferred 81.9 MiB of 8.0 GiB (1.00%)
transferred 164.7 MiB of 8.0 GiB (2.01%)
transferred 246.6 MiB of 8.0 GiB (3.01%)
transferred 328.5 MiB of 8.0 GiB (4.01%)
transferred 411.2 MiB of 8.0 GiB (5.02%)
transferred 493.2 MiB of 8.0 GiB (6.02%)
transferred 575.1 MiB of 8.0 GiB (7.02%)
transferred 657.8 MiB of 8.0 GiB (8.03%)
transferred 739.7 MiB of 8.0 GiB (9.03%)
transferred 821.7 MiB of 8.0 GiB (10.03%)
transferred 904.4 MiB of 8.0 GiB (11.04%)
transferred 986.3 MiB of 8.0 GiB (12.04%)
transferred 1.0 GiB of 8.0 GiB (13.04%)
transferred 1.1 GiB of 8.0 GiB (14.05%)
transferred 1.2 GiB of 8.0 GiB (15.05%)
transferred 1.3 GiB of 8.0 GiB (16.05%)
transferred 1.4 GiB of 8.0 GiB (17.06%)
transferred 1.4 GiB of 8.0 GiB (18.06%)
transferred 1.5 GiB of 8.0 GiB (19.06%)
transferred 1.6 GiB of 8.0 GiB (20.07%)
transferred 1.7 GiB of 8.0 GiB (21.07%)
transferred 1.8 GiB of 8.0 GiB (22.07%)
transferred 1.8 GiB of 8.0 GiB (23.07%)
transferred 1.9 GiB of 8.0 GiB (24.08%)
transferred 2.0 GiB of 8.0 GiB (25.08%)
transferred 2.1 GiB of 8.0 GiB (26.08%)
transferred 2.2 GiB of 8.0 GiB (27.09%)
transferred 2.2 GiB of 8.0 GiB (28.09%)
transferred 2.3 GiB of 8.0 GiB (29.09%)
transferred 2.4 GiB of 8.0 GiB (30.10%)
transferred 2.5 GiB of 8.0 GiB (31.10%)
transferred 2.6 GiB of 8.0 GiB (32.10%)
transferred 2.6 GiB of 8.0 GiB (33.11%)
transferred 2.7 GiB of 8.0 GiB (34.11%)
transferred 2.8 GiB of 8.0 GiB (35.11%)
transferred 2.9 GiB of 8.0 GiB (36.12%)
transferred 3.0 GiB of 8.0 GiB (37.12%)
transferred 3.0 GiB of 8.0 GiB (38.12%)
transferred 3.1 GiB of 8.0 GiB (39.13%)
transferred 3.2 GiB of 8.0 GiB (40.13%)
transferred 3.3 GiB of 8.0 GiB (41.13%)
transferred 3.4 GiB of 8.0 GiB (42.14%)
transferred 3.5 GiB of 8.0 GiB (43.14%)
transferred 3.5 GiB of 8.0 GiB (44.14%)
transferred 3.6 GiB of 8.0 GiB (45.15%)
transferred 3.7 GiB of 8.0 GiB (46.15%)
transferred 3.8 GiB of 8.0 GiB (47.15%)
transferred 3.9 GiB of 8.0 GiB (48.16%)
transferred 3.9 GiB of 8.0 GiB (49.16%)
transferred 4.0 GiB of 8.0 GiB (50.16%)
transferred 4.1 GiB of 8.0 GiB (51.17%)
transferred 4.2 GiB of 8.0 GiB (52.17%)
transferred 4.3 GiB of 8.0 GiB (53.17%)
transferred 4.3 GiB of 8.0 GiB (54.18%)
transferred 4.4 GiB of 8.0 GiB (55.18%)
transferred 4.5 GiB of 8.0 GiB (56.18%)
transferred 4.6 GiB of 8.0 GiB (57.19%)
transferred 4.7 GiB of 8.0 GiB (58.19%)
transferred 4.7 GiB of 8.0 GiB (59.19%)
transferred 4.8 GiB of 8.0 GiB (60.20%)
transferred 4.9 GiB of 8.0 GiB (61.21%)
transferred 5.0 GiB of 8.0 GiB (62.21%)
transferred 5.1 GiB of 8.0 GiB (63.21%)
transferred 5.1 GiB of 8.0 GiB (64.22%)
transferred 5.2 GiB of 8.0 GiB (65.22%)
transferred 5.3 GiB of 8.0 GiB (66.22%)
transferred 5.4 GiB of 8.0 GiB (67.23%)
transferred 5.5 GiB of 8.0 GiB (68.23%)
transferred 5.5 GiB of 8.0 GiB (69.23%)
transferred 5.6 GiB of 8.0 GiB (70.24%)
transferred 5.7 GiB of 8.0 GiB (71.24%)
transferred 5.8 GiB of 8.0 GiB (72.24%)
transferred 5.9 GiB of 8.0 GiB (73.25%)
transferred 5.9 GiB of 8.0 GiB (74.25%)
transferred 6.0 GiB of 8.0 GiB (75.25%)
transferred 6.1 GiB of 8.0 GiB (76.26%)
transferred 6.2 GiB of 8.0 GiB (77.26%)
transferred 6.3 GiB of 8.0 GiB (78.26%)
transferred 6.3 GiB of 8.0 GiB (79.27%)
transferred 6.4 GiB of 8.0 GiB (80.27%)
transferred 6.5 GiB of 8.0 GiB (81.27%)
transferred 6.6 GiB of 8.0 GiB (82.28%)
transferred 6.7 GiB of 8.0 GiB (83.28%)
transferred 6.7 GiB of 8.0 GiB (84.28%)
transferred 6.8 GiB of 8.0 GiB (85.29%)
transferred 6.9 GiB of 8.0 GiB (86.29%)
transferred 7.0 GiB of 8.0 GiB (87.29%)
transferred 7.1 GiB of 8.0 GiB (88.30%)
transferred 7.1 GiB of 8.0 GiB (89.30%)
transferred 7.2 GiB of 8.0 GiB (90.30%)
transferred 7.3 GiB of 8.0 GiB (91.31%)
transferred 7.4 GiB of 8.0 GiB (92.31%)
transferred 7.5 GiB of 8.0 GiB (93.31%)
transferred 7.5 GiB of 8.0 GiB (94.31%)
transferred 7.6 GiB of 8.0 GiB (95.32%)
transferred 7.7 GiB of 8.0 GiB (96.32%)
transferred 7.8 GiB of 8.0 GiB (97.32%)
transferred 7.9 GiB of 8.0 GiB (98.33%)
transferred 7.9 GiB of 8.0 GiB (99.33%)
transferred 8.0 GiB of 8.0 GiB (100.00%)
transferred 8.0 GiB of 8.0 GiB (100.00%)
Successfully imported disk as 'unused0:nvme:vm-102-disk-0'
update VM 102: -cores 2
update VM 102: -memory 2048
update VM 102: -net0 virtio,bridge=vmbr0
update VM 102: -scsi0 nvme:vm-102-disk-0
update VM 102: -boot order=scsi0
update VM 102: -scsihw virtio-scsi-pci
VM 102 Created.
You can see it hasn't imported the disk with the right volume reference; your disk is stored as a file in a folder on your storage1,
e.g. 114/vm-114-disk-0.raw (see the sketch at the end of this post):
Mine: Successfully imported disk as 'unused0:nvme:vm-102-disk-0'
Yours: Successfully imported disk as 'unused0:storage1:114/vm-114-disk-0.raw'
EDIT 2: @MichaIng just approved the merge request so no need to use the testing branch anymore.
@g1gop please select yes when it asks if you are using BTRFS and see how you go; it should work. Let us know if not.
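Under the hood, the difference comes down to how the imported volume has to be referenced when attaching it. Roughly something like this (a simplified sketch, not the exact code from the script; the variable names are illustrative):
# File-based storage (directory, BTRFS) references the disk as a path under the VM ID,
# block-based storage (LVM-thin, ZFS) references it as a plain volume name.
if [[ $STORAGE_TYPE == 'dir' || $STORAGE_TYPE == 'btrfs' ]]; then
    DISK_REF="$STORAGE:$VM_ID/vm-$VM_ID-disk-0.raw"
else
    DISK_REF="$STORAGE:vm-$VM_ID-disk-0"
fi
qm set "$VM_ID" --scsi0 "$DISK_REF"
qm set "$VM_ID" --boot order=scsi0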
Brilliant, thanks for getting back to me. Proxmox is storing your VM disks on a different filesystem type, likely a RAID or similar; that's why the error was happening, I'm pretty sure.
Can you let us know what filesystem you used to set up your VM disk storage?
Pic below of my physical disk that I use to store my VM disks.
Thanks!
Perfect, thank you. This will help me make sure the script is compatible with all storage types.
Much appreciated
The script has now been updated and should work with the most commonly used storage types.
After you have selected your storage by entering its name, a new dialog box will ask what type of storage you are using (see the sketch below the list).
The storage types supported are:
BTRFS
ZFS
Directory Storage
Thin Provisioning
The main branch on github.com has been updated. Thanks everyone for the feedback!
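For the curious, conceptually the new prompt is just a choice between those four types; an illustrative sketch (not the script's exact code):
# Illustrative storage-type prompt, roughly along these lines:
STORAGE_TYPE=$(whiptail --title 'DietPi Installation' --radiolist \
    'What type of storage are you using?' 15 60 4 \
    'btrfs'   'BTRFS' off \
    'zfs'     'ZFS' off \
    'dir'     'Directory storage' off \
    'lvmthin' 'Thin provisioning' on \
    3>&1 1>&2 2>&3) || exit 1
echo "Selected storage type: $STORAGE_TYPE"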
@dazeb
Thank you for your great work.
But why not improve the script to sidestep user input errors entirely?
It's easy to request the needed info from Proxmox:
pvesm status
Here is mine:
root@pve:/etc/pve/lxc# pvesm status
Name         Type    Status        Total       Used   Available       %
local        dir     disabled          0          0           0     N/A
local-btrfs  btrfs   active    487861252  366537424   118699936  75.13%
local-zPool  dir     active   3131481344  332993536  2798487808  10.63%
pvePool      zfspool active   2827707936   29220018  2798487918   1.03%
This will list all storage names, types and states.
Show the relevant info as a radio list and let the user select one of them.
No more input errors.
Here is a simple script to make this happen
#!/bin/bash
# get all active storage names into an array
storage_Names=($(pvesm status | grep active | tr -s ' ' | cut -d ' ' -f1))
# get all active storage types into another array
storage_Types=($(pvesm status | grep active | tr -s ' ' | cut -d ' ' -f2))
# let's find how many names are in our array
storage_Count=${#storage_Names[@]}
# create a new array for use with whiptail (tags are 1-based)
storage_Array=()
I=1
for STORAGE in "${storage_Names[@]}"; do
    storage_Array+=("$I" ":: $STORAGE " "off")
    I=$(( I + 1 ))
done
# let the user select a storage name
choice=""
while [ "$choice" == "" ]
do
    choice=$(whiptail --title "DietPi Installation" --radiolist "Select Storage Pool" 20 50 $storage_Count "${storage_Array[@]}" 3>&1 1>&2 2>&3)
done
# get name of chosen storage (whiptail tags are 1-based, bash arrays are 0-based)
Name=${storage_Names[$(( choice - 1 ))]}
echo "Name: $Name"
# get type of chosen storage
Type=${storage_Types[$(( choice - 1 ))]}
echo "Type: $Type"
exit
Ah nice, I was wondering whether we could implement some auto-detection, but this is nearly as good. A whiptail --menu would work as well, which requires one key press less.
Wait, with that we could even omit the storage name input, couldn't we?
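Something like this (an untested sketch, reusing the pvesm status approach above) would let the storage name itself serve as the menu tag and hand back the type as well, so the separate name prompt could be dropped:
#!/bin/bash
# Build a whiptail --menu from `pvesm status`; the storage name is the menu tag,
# so no index-to-name mapping is needed.
mapfile -t storages < <(pvesm status | awk '$3 == "active" {print $1, $2}')
menu_items=()
for entry in "${storages[@]}"; do
    name=${entry%% *}
    type=${entry##* }
    menu_items+=("$name" "type: $type")
done
choice=$(whiptail --title 'DietPi Installation' --menu 'Select storage pool' \
    20 60 "${#storages[@]}" "${menu_items[@]}" 3>&1 1>&2 2>&3) || exit 1
# Look up the type of the chosen storage directly from pvesm status
storage_type=$(pvesm status | awk -v n="$choice" '$1 == n {print $2}')
echo "Name: $choice"
echo "Type: $storage_type"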
Looks like a good solution but above my skill level lol
Happy to approve a PR on this if you want to make one.
Thanks for helping to make the script better, much appreciated
Yes, of course.
Just try my script on a Proxmox host to see how it works.
Hello,
I figured out some more problems with the installer.
If the script is started from a different location, e.g. by specifying the complete path like “/usr/local/bin/dietpi-install.sh”, all temporary files are stored in that directory.
So my solution is to change to a defined directory inside the script first.
The download of the DietPi image is done every time the script runs. This is unnecessary and only eats up disk space. My solution: only download if the source has changed.
Creating a VM on a zfspool did not work. My solution: create qm_disk_parm depending on the storage type.
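In outline, the first two changes look roughly like this (a simplified sketch; the paths are illustrative, see my fork for the actual code):
# Work in a fixed directory so temporary files do not pile up next to the script
WORKDIR='/root/dietpi-installer'    # illustrative path
mkdir -p "$WORKDIR" && cd "$WORKDIR" || exit 1
IMAGE='DietPi_Proxmox-x86_64-Bullseye.7z'
URL="https://dietpi.com/downloads/images/$IMAGE"
# wget -N (timestamping) only re-downloads if the remote file is newer than the local copy
wget -N "$URL"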
I have forked the original script and modified it to my needs:
Proxmox DietPi Installer
I've added my script to convert from a VM to an LXC container too:
Proxmox VM-2-LXC Converter
Best would be to always download everything to the current working dir, regardless of where the script is located. I thought this was the case already?
We cannot know whether the source has changed without downloading it. But with the next release we will provide SHA256 hashes which can be used to check for changes (see the sketch below). I guess you mean to not download the image if there is already one with the same name in the current dir?
We did just that, or do I misunderstand, or did we do it wrong in the case of a ZFS pool?
EDIT: Ah, yours is based on the version where it only asks about BTRFS.
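For example, once the hashes are available, a check along these lines could skip an unchanged download (the .sha256 file name is hypothetical, just to illustrate the idea):
IMAGE='DietPi_Proxmox-x86_64-Bullseye.7z'
URL="https://dietpi.com/downloads/images/$IMAGE"
# Hypothetical: assumes a "$IMAGE.sha256" checksum file is published next to the image
remote_sum=$(wget -qO - "$URL.sha256" | awk '{print $1}')
local_sum=$(sha256sum "$IMAGE" 2>/dev/null | awk '{print $1}')
if [[ -n $remote_sum && $remote_sum == "$local_sum" ]]; then
    echo 'Local image is up to date, skipping download.'
else
    wget -O "$IMAGE" "$URL"
fi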