Changing to a defined path before writing anything to disk will do the job fine.
Regardless of whether it is a newly created directory or an existing one.
And there is no hint in README.md that the user has to remove everything after finishing an import.
Why not? The “-N” parameter of wget checks the timestamps of the source and the existing file.
Ok. If someone changes the source and resets the timestamp afterwards, wget is unable to detect the change.
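For illustration, a minimal sketch of that behaviour (the URL is a placeholder, not taken from the actual script):
# -N (--timestamping) only re-downloads if the remote file is newer than the local copy
IMAGE_URL='https://example.com/DietPi_Proxmox.qcow2.xz'   # placeholder URL
wget -N "$IMAGE_URL"
# Caveat: if the remote content changes but its timestamp is set back, -N keeps the stale local file.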
I did not find another version which did this correctly…
Do you mean to let the user decide where to store the file, or that we hardcode one? The latter is naturally problematic, the first would be okay, but also a little unnecessary IMO, as we have so many questions already. Isn’t CWD just common for creating/downloading any files, usually assured to be where the particular user has write permissions, by default their home directory, and just the intuitive choice?
EDIT: I see you use /tmp/proxmox-dietpi-installer. Usually this is a tmpfs and I am not sure whether we can just assume that it has sufficient space? The default is 512 MiB, which wouldn’t be enough to extract the image.
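A rough sketch of how the script could guard against that before downloading; the directory name comes from the post above, the required size is just an assumed value:
DL_DIR='/tmp/proxmox-dietpi-installer'
REQUIRED_KIB=$((2 * 1024 * 1024))   # assumption: ~2 GiB for download + extracted image
mkdir -p "$DL_DIR"
AVAIL_KIB=$(df --output=avail -k "$DL_DIR" | tail -1)
if (( AVAIL_KIB < REQUIRED_KIB )); then
    echo "Not enough free space in $DL_DIR" >&2
    exit 1
fi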
Ah, I hadn’t thought of that. That seems clever and simple.
See the latest upstream commits: Added dialog box for filesystem selection · dazeb/proxmox-dietpi-installer@9bfd7ae · GitHub
Of course your auto-detection is best, but at least with the current dialog ZFS is mentioned as well, which would have led to the correct storage naming scheme.
EDIT: Ah, checking your fork: Does a ZFS pool result in the same naming scheme as LVM-Thin? Or is there another type of ZFS-based storage device?
In this case we are “root” on a “Proxmox” server. It’s not like a “normal” distribution.
So if you do a “cd ~”, you end up in “/root”, directly on the root filesystem “/”.
Again: we are on a Proxmox server. I have installed my PVE with standard settings, and “/tmp” is just a directory below “/” on the root filesystem, just the same as “/root”. So there is no space problem as long as there is sufficient space on “/”.
root@pve:~# mount |grep tmp
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16058220k,nr_inodes=4014555,mode=755,inode64)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3218572k,mode=755,inode64)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3218568k,nr_inodes=804642,mode=700,inode64)
tmpfs on /run/user/1201 type tmpfs (rw,nosuid,nodev,relatime,size=3218568k,nr_inodes=804642,mode=700,uid=1201,gid=1201,inode64)
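For anyone who wants to double-check where “/tmp” lives on their own install, something like this is enough (output differs per system, of course):
# shows the filesystem backing /tmp and /root; on a default PVE install both sit on the root filesystem
df -h / /tmp /root
findmnt -T /tmp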
Maybe it’s different if you are running Proxmox on a RasPi. But that does not make much sense, even if you are running a homelab with little usage. I’m running Proxmox 24/7 with a Ryzen 5600G CPU, 32 GB RAM and 5 SSDs. Power consumption is below 30 W for 90% of the day, including the APC UPS.
Everything that runs permanently in my house has moved to Proxmox, including home automation, file services, CUPS server, multimedia server, Nextcloud, … CPU-intensive jobs are delegated to the Proxmox server too. No more RasPi-based servers, no fat clients anymore, only one separate system for gaming. This results in dramatically lower overall power consumption.
So in fact there is no need to run SBCs for this kind of work anymore, if you run Proxmox.
Yes. Storage type “zfspool” needs a different syntax than “btrfs”, “dir” or even “lvm-thin”.
I did not test other types, since I did not use them. And there are many more …
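Since the required syntax depends on the storage type, the script has to look that type up for the selected pool. A small sketch, assuming $STORAGE holds the chosen pool name:
# read the "Type" column of the selected pool from pvesm status
STORAGE_TYPE=$(pvesm status | awk -v s="$STORAGE" '$1 == s {print $2}')
echo "Pool $STORAGE is of type $STORAGE_TYPE"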
Maybe this could solve the issue by showing the user the storage pools, something like this. It is messy and needs cleaning up. Credit to tteckk for his scripts; I have just heavily modified it.
I think we can also add storage validation, but I haven’t got that far. I’m sure there are probably errors as well and it’s overdoing it, but it is a start.
# Variables for whiptail dialog
declare -a STORAGE_MENU=()
# Read disk stats line by line
DISK_STATS=$(pvesm status -content images)
# Create a temporary file to store the disk stats
DISK_STATS_FILE=$(mktemp)
echo "$DISK_STATS" > "$DISK_STATS_FILE"
MSG_MAX_LENGTH=0
# Read storage info and format it
while read -r line; do
TAG=$(echo "$line" | awk '{print $1}')
TYPE=$(echo "$line" | awk '{print $2}')
# convert the "Available" column (field 6) to a human-readable size
FREE=$(echo "$line" | numfmt --field 4-6 --from-unit=K --to=iec --format "%.2f" | awk '{print $6}')
FREE="Free: $(printf "%9sB" "$FREE")"
ITEM=" Type: $TYPE $FREE"
STORAGE_MENU+=("$TAG" "$ITEM" "OFF")
done < <(pvesm status -content images | grep -P '^\w')
# Display menu with formatted storage information and increased width
STORAGE=$(whiptail --backtitle "DietPi Proxmox Installer" --title "Storage Pools" --radiolist \
"Which storage pool would you like to use for the new virtual machine?\nTo make a selection, use the Spacebar.\n\n$DISK_STATS" \
20 100 6 \
"${STORAGE_MENU[@]}" 3>&1 1>&2 2>&3) || exit
# Get the type of the selected storage
STORAGE_TYPE=$(pvesm status | grep "^$STORAGE " | awk '{print $2}')
# Determine the disk parameter based on the storage type
if [[ "$STORAGE_TYPE" == "dir" || "$STORAGE_TYPE" == "lvmthin" ]]; then
qm_disk_param="$STORAGE:$ID/vm-$ID-disk-0.raw"
else
qm_disk_param="$STORAGE:vm-$ID-disk-0"
fi
This gives a radiolist where the Spacebar is used to select the storage.
To be honest I’m not even sure all of it is needed. This is the best I could do.
Also, I just found out the script doesn’t exit when the whiptail dialog is cancelled, so I need to have a look at that.
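One common pattern for that is checking whiptail’s exit status, which is non-zero when Cancel or Esc is pressed; a sketch reusing the variables from the snippet above:
if ! STORAGE=$(whiptail --backtitle "DietPi Proxmox Installer" --title "Storage Pools" --radiolist \
    "Which storage pool would you like to use for the new virtual machine?" \
    20 100 6 "${STORAGE_MENU[@]}" 3>&1 1>&2 2>&3); then
    echo 'Installation cancelled by user.' >&2
    exit 1
fi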
Edit: may as well put this here too. A one-liner to run the script directly.
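Such one-liners usually follow this pattern; the URL below is only a placeholder, not the actual link:
# fetch the installer script and run it in one go (placeholder URL)
bash <(curl -sSfL 'https://raw.githubusercontent.com/<user>/proxmox-dietpi-installer/main/install.sh')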
Ok. This is “just” another way to display the storage info as a radiolist.
If your intention is to display more info than just the pool name, it is more handy of course.
But more fine-tuning has to be done, because the output of pvesm status needs more attention to hide some lines …
root@pve:/tmp# pvesm status -content images | grep -P '^\w'
storage does not support content type 'vztmpl'
storage does not support content type 'iso'
storage does not support content type 'backup'
Name               Type     Status           Total            Used       Available        %
local-btrfs       btrfs     active       487861252       367610908       117632052   75.35%
local-zPool         dir     active      3131636096       333205632      2798430464   10.64%
pvePool         zfspool     active      2827650573        29220103      2798430470    1.03%
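One way to hide both the warnings and the header line, assuming the “does not support content type” messages may be printed to either output stream:
pvesm status -content images 2>/dev/null \
    | grep -v 'does not support content type' \
    | tail -n +2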
The “determine disk parameter” part did not work either. There are at least 3 different variants of qm_disk_param needed. See my script…
I’ve just added your script and it works as you said. I like it, it’s simpler and cleaner.
Will do more work on it soon. Experimental branch here
Edit: I saw you have the 3 params. If btrfs and dir are essentially the same thing and zfs is the same as lvmthin as far as the storage location goes, why are both or all 3 needed, if we are basically just asking whether the pool type is dir or not?
# prepare disk-parm depending on storage type
if [ "$FSType" = "btrfs" ]; then
qm_disk_param="$STORAGE:$ID/vm-$ID-disk-0.raw"
elif [ "$FSType" = "dir" ]; then
qm_disk_param="$STORAGE:$ID/vm-$ID-disk-0.raw"
elif [ "$FSType" = "zfspool" ]; then
qm_disk_param="$STORAGE:vm-$ID-disk-0"
else
qm_disk_param="$STORAGE/vm-$ID-disk-0"
fi
You are right, btrfs and dir use the same syntax. I had chosen this kind of “if elif else fi” because there are many more types of filesystem, and I don’t know the right syntax for each type. But if I have to add a new one, I just have to copy 2 lines (elif, qm_disk_param) and change the type & parameter if needed.
So this seems an easy way to me for adding new types later.
And the “else” path is just default like in your first implementation.
Maybe there is an easier or more readable solution. I’m not a scripting guru …
But “zfspool” and “lvm-thin” do in fact require different syntax. See the “:” and “/” after the “$STORAGE” variable.
This looks quite good to me already, some minor adjustments:
# get all active storage names into an array
-storage_Names=($(pvesm status | grep active | tr -s ' ' | cut -d ' ' -f1))
+storage_Names=($(pvesm status | awk '/active/{print $1}'))
# get all active storage types into another array
-storage_Types=($(pvesm status | grep active | tr -s ' ' | cut -d ' ' -f2))
+storage_Types=($(pvesm status | awk '/active/{print $2}'))
-# lets find how many names are in our array
-storage_Count=${#storage_Names[@]}
-
# create a new array for use with whiptail
storage_Array=()
I=1
for STORAGE in "${storage_Names[@]}"; do
- storage_Array+=("$I" ":: $STORAGE " "off")
+ storage_Array+=("$I" ":: $STORAGE")
- I=$(( I + 1 ))
+ ((I++))
done
# lets select a storage name
-choice=""
-while [ "$choice" == "" ]
-do
- choice=$(whiptail --title "DietPi Installation" --radiolist "Select Storage Pool" 20 50 $storage_Count "${storage_Array[@]}" 3>&1 1>&2 2>&3 )
+choice=$(whiptail --title "DietPi Installation" --menu "Select Storage Pool" 0 0 "$I" "${storage_Array[@]}" 3>&1 1>&2 2>&3) || exit 1
-done
# get name of chosen storage
Name=${storage_Names[$choice]}
-echo 'Name: ' $Name
+echo "Name: $Name"
# get type of chosen storage
Type=${storage_Types[$choice]}
-echo 'Typ: ' $Type
+echo "Typ: $Type"
exit
Merging status output scraping to use awk only
Skip storage_Count, as we have I already
Skip while loop, instead exit script when “Cancel” is selected.
Use whiptail menu instead of radiolist, which allows selection with one click less
echo with a single argument
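For readability, here is the snippet with all of the above applied in one piece; the tag-to-index offset (whiptail menu tags start at 1, bash arrays at 0) is an extra adjustment added here, not part of the diff:
# get all active storage names and types into arrays
storage_Names=($(pvesm status | awk '/active/{print $1}'))
storage_Types=($(pvesm status | awk '/active/{print $2}'))
# create a new array for use with whiptail
storage_Array=()
I=1
for STORAGE in "${storage_Names[@]}"; do
    storage_Array+=("$I" ":: $STORAGE")
    ((I++))
done
# let the user select a storage name, exit when "Cancel" is selected
choice=$(whiptail --title "DietPi Installation" --menu "Select Storage Pool" 0 0 "$I" "${storage_Array[@]}" 3>&1 1>&2 2>&3) || exit 1
# map the 1-based menu tag back to the 0-based array index
Name=${storage_Names[$((choice - 1))]}
Type=${storage_Types[$((choice - 1))]}
echo "Name: $Name"
echo "Typ: $Type"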
I also missed this. Brutal that there are such tiny differences in the naming scheme. I wonder why they do not just use the same for all storage types, since the contained information (name, ID and number) is the same anyway. An alternative syntax would be:
case $Type in
btrfs|dir) qm_disk_param="$Name:$ID/vm-$ID-disk-0.raw";;
zfspool) qm_disk_param="$Name:vm-$ID-disk-0";;
*) qm_disk_param="$Name/vm-$ID-disk-0";;
esac
But this actually contradicts the information above, where lvmthin uses $Name:vm-$ID-disk-0 as well, just like zfspool here. I do not see $Name/vm-$ID-disk-0 used anywhere. Also, does dir really use the same scheme as btrfs? I thought it was the same as lvmthin?
Most adjustments, such as determining the number of array elements, are only cosmetic. I don’t feel strongly about them, as long as the code is reasonably easy to understand.
No idea. During the Cold War I would have said: Just to confuse the Russians.
About the syntax.
I haven’t used LVM for quite some time, so I can’t say anything about it.
At least I did a quick test with the filesystems available to me.
Here are the storage pools that are set up, including comments on how the pools are physically structured. Note the 2 different ZFS types (dataset/zvol)!
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content vztmpl,iso,backup

# BTRFS Raid-0 Array
# (2 * 2TB SSD)
btrfs: local-btrfs
        path /var/lib/pve/local-btrfs
        content images,rootdir

# ZFS Raid-Z1 dataPool with dataset mounted to /zPool/images/Proxmox
# (3 * 4TB HDD, similar to Raid-5)
dir: local-zPool
        path /zPool/images/Proxmox
        content images,rootdir,vztmpl,backup,iso

# ZFS Raid-Z1 dataPool with zvol
# Same dataPool / HDD's as before
zfspool: pvePool
        pool ssd_Pool/local/pve
        content images,rootdir
        sparse 1
This results in the following view of the pools.
root@pve:~# pvesm status -enabled
Name               Type     Status           Total            Used       Available        %
local-btrfs       btrfs     active       487861252       386071408        99222144   79.14%
local-zPool         dir     active      3124376704       332533760      2791842944   10.64%
pvePool         zfspool     active      2821063077        29220028      2791843048    1.04%
With my script I created a VM for each of these storage types. Any other syntax did not work.
# prepare disk-parm depending on storage type
if [ "$FSType" = "btrfs" ]; then
qm_disk_param="$STORAGE:$ID/vm-$ID-disk-0.raw"
elif [ "$FSType" = "dir" ]; then
qm_disk_param="$STORAGE:$ID/vm-$ID-disk-0.raw"
elif [ "$FSType" = "zfspool" ]; then
qm_disk_param="$STORAGE:vm-$ID-disk-0"
else
qm_disk_param="$STORAGE/vm-$ID-disk-0"
fi
Successfully imported disk as 'unused0:local-btrfs:101/vm-101-disk-0.raw'
update VM 101: -description ### [DietPi Website](https://dietpi.com/)
update VM 101: -cores 2
update VM 101: -memory 2048
update VM 101: -net0 virtio,bridge=vmbr0
update VM 101: -scsihw virtio-scsi-pci
update VM 101: -scsi0 local-btrfs:101/vm-101-disk-0.raw
update VM 101: -boot order=scsi0
VM 101 Created.
Successfully imported disk as 'unused0:local-zPool:102/vm-102-disk-0.raw'
update VM 102: -description ### [DietPi Website](https://dietpi.com/)
update VM 102: -cores 2
update VM 102: -memory 2048
update VM 102: -net0 virtio,bridge=vmbr0
update VM 102: -scsihw virtio-scsi-pci
update VM 102: -scsi0 local-zPool:102/vm-102-disk-0.raw
update VM 102: -boot order=scsi0
VM 102 Created.
Successfully imported disk as 'unused0:pvePool:vm-103-disk-0'
update VM 103: -description ### [DietPi Website](https://dietpi.com/)
update VM 103: -cores 2
update VM 103: -memory 2048
update VM 103: -net0 virtio,bridge=vmbr0
update VM 103: -scsihw virtio-scsi-pci
update VM 103: -scsi0 pvePool:vm-103-disk-0
update VM 103: -boot order=scsi0
VM 103 Created.
Okay, so you do not have an example where $STORAGE/vm-$ID-disk-0 is used either. So that “else” should be merged with the zfspool storage type, which is correct for lvmthin as well:
case $Type in
btrfs|dir) qm_disk_param="$Name:$ID/vm-$ID-disk-0.raw";;
*) qm_disk_param="$Name:vm-$ID-disk-0";; # lvmthin|zfspool
esac
This also matches the info we currently use in the script:
If using BTRFS, ZFS or Directory storage? Select YES
“ZFS” was tested with a “local-zPool”-like storage, which is actually type dir, hence the confusion.
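For completeness, a sketch of how the resulting qm_disk_param would then be used when creating the VM, roughly matching the import logs above (the image filename is a placeholder):
# import the extracted image into the selected storage, then attach it and make it bootable
qm importdisk "$ID" DietPi_Proxmox.qcow2 "$Name"
qm set "$ID" --scsihw virtio-scsi-pci --scsi0 "$qm_disk_param" --boot order=scsi0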