I’m looking into a cloud solution that lets me access my files remotely, like a network share on my own NAS at home, while I’m working remotely.
I don’t really need all the extra features Nextcloud offers; I just want internet access to a network share on my NAS, so I can open it and move files/folders to it just like a local drive on a Windows machine.
I’ve tried to get FileCloud working via Docker, as it has the functionality I want, but my Docker containers exit the moment they start.
I’ve been using the guide below on setting up FileCloud.
I’ve timestamped the link below; I’m doing the on-premises setup with Docker at 23:33.
I’ve followed it up to the 25:28 mark and run the command to see which Docker containers are running.
But mine don’t stay up, which is strange; they exit immediately.
CONTAINER ID   IMAGE                                   COMMAND                  CREATED          STATUS                        PORTS   NAMES
1603ae2f820c   filecloud/filecloudsolr21.3:latest      "sh /opt/solr/docker…"   32 seconds ago   Exited (1) 22 seconds ago             filecloud.solr
5018415cf8bb   filecloud/filecloudserver23.1:latest    "bash /usr/local/bin…"   3 weeks ago      Exited (1) 20 seconds ago             filecloud.server
0d514c7181e4   filecloud/filecloudpreview22.1:latest   "/opt/libreoffice/in…"   3 weeks ago      Exited (1) 22 seconds ago             filecloud.preview
275008bbadca   mongo:6.0.8                             "docker-entrypoint.s…"   3 weeks ago      Exited (132) 21 seconds ago           filecloud.mongodb
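That exit code 132 on the mongo container is a useful clue: Docker reports the raw exit status, and values above 128 mean the process was killed by signal (status − 128). Signal 4 is SIGILL, an illegal instruction, which usually means the binary uses CPU instructions the host doesn’t have. MongoDB 5.0 and later requires AVX on x86_64 and an ARMv8.2-A core on 64-bit ARM, which a Raspberry Pi 4 (Cortex-A72, ARMv8.0) lacks. You can decode such exit codes yourself:

```shell
# Docker reports "Exited (N)"; for N > 128 the container died from signal N - 128.
# Signal 4 is SIGILL (illegal instruction): the CPU hit an opcode it doesn't support.
kill -l $((132 - 128))   # prints: ILL
```

So the mongo container dies at startup, and the other FileCloud containers that depend on it then exit as well.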
Given I only want the very basics (a network share over the internet to my NAS), are there other solutions, ideally something that can be installed with the dietpi-software tool that comes with DietPi? Everything from there generally works.
Although, I would expect that if someone publishes a Docker image for this architecture with this MongoDB version, it works. Otherwise, what is the purpose of a container which cannot work?
Or was it just an architecture-independent Dockerfile?
However, is there a way to limit its access to one specific folder on my NAS?
Done the filebrowser install; it works perfectly on the first try, awesome.
But it allows access to a lot of my system through filebrowser. Is there a config file where I can limit it to show only one specific folder of my NAS?
It doesn’t seem to create a config file, and using the web interface I’m not sure how I can limit its access to just the one folder I want it to see and share.
The File Browser documentation shows a command to export a config file, but running it returns an error saying the filebrowser command is not found.
Thanks
EDIT: managed to figure it out, through the web interface.
-- Journal begins at Tue 2024-02-13 14:17:01 AEDT, ends at Sat 2024-02-24 17:39:30 AEDT. --
Feb 24 16:47:23 fcnas.ath.cx systemd[1]: Started File Browser (DietPi).
**Feb 24 16:47:23 fcnas.ath.cx filebrowser[134448]: 2024/02/24 16:47:23 No config file used**
Feb 24 16:47:23 fcnas.ath.cx filebrowser[134448]: 2024/02/24 16:47:23 Listening on [::]:8084
root@fcnas:~# filebrowser config export
-bash: filebrowser: command not found
Oh, it was from the link in the tutorial video; that may be why it doesn’t work. It might not support the RPi architecture, like you’re saying, if it includes this incompatible software.
To edit it from the CLI you need to call /opt/filebrowser/filebrowser, since it’s not in any sbin/bin directory.
For future users who want to change the config from the CLI:
(In this example we change the root directory that File Browser will use, setting it to /mnt/NAS.
To show all options, run /opt/filebrowser/filebrowser --help or have a look at https://filebrowser.org/cli.)
-r specifies the root path the app will use; -d is the path to the database where the config is stored; the path shown is the default path on DietPi for File Browser’s config.
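Putting that together, a minimal sketch of the steps, following the -r/-d flags described above. The database path and service unit name here are assumptions for illustration; check where your DietPi install actually keeps File Browser’s database and what the unit is called:

```shell
# Stop the service first so the running instance doesn't hold the database
systemctl stop filebrowser

# Limit File Browser's root to one folder on the NAS.
# /opt/filebrowser/filebrowser.db is an assumed database location - adjust to yours.
/opt/filebrowser/filebrowser config set -r /mnt/NAS -d /opt/filebrowser/filebrowser.db

systemctl start filebrowser
```

Stopping the service before `config set` matters because File Browser keeps its settings in that single database file, and two processes writing it at once can corrupt it.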
Chop it into several files that are below the limit, then reconstruct it on the server.
split -b 140M big_file big_file_part_
This will split it into several files named big_file_part_aa, big_file_part_ab, etc.
Send these files through the tunnel.
Reconstruct them on the server back into the original big_file:
cat big_file_part_* > big_file
Edit:
No need to use tar; you can split any file with this method.
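The round trip above can be sanity-checked locally with a small file before trusting it with a 10 GB transfer (the file names and sizes here are just for the demo):

```shell
# Make a 1 MB stand-in for the real large file
head -c 1M /dev/urandom > big_file

# Split into 300 KB chunks; use -b 140M for the real transfer limit
split -b 300K big_file big_file_part_

# Parts are named big_file_part_aa, _ab, ... so the shell glob
# expands them in alphabetical order, which is the correct order
cat big_file_part_* > big_file_restored

# Verify the reassembled file is byte-identical to the original
cmp -s big_file big_file_restored && echo identical   # prints: identical
```

The alphabetical suffixes are what make the plain `cat` glob safe; no manifest or ordering file is needed.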
I’m engaging a contractor to do some video editing work for me remotely; they’ll need to access and edit large video files (10 GB+) over the internet.
I might need to set up my own tunnel. I’ve seen something called Tailscale being used; if I can use that to let them map my NAS as a network drive via a Samba share, and they can move files in and out of it with Explorer on their Windows machine, that will do fine.
I need to limit the Samba share to one particular folder, as the NAS stores personal information too that I don’t want them having access to, but I assume I can limit that in the config by creating a new user and only giving it access to that one folder.
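For reference, that per-folder restriction is done per share in smb.conf. A sketch, where the share name, path, and username are all assumptions for illustration:

```ini
; /etc/samba/smb.conf (excerpt) - hypothetical share for the contractor
[videowork]
    path = /mnt/NAS/videowork
    valid users = contractor
    read only = no
    browseable = yes
```

Create the matching account with `adduser contractor` and `smbpasswd -a contractor`, then restart Samba (`systemctl restart smbd`). Because `valid users` lists only that account and the share’s `path` is the one folder, the contractor sees nothing outside it.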
Good evening guys, we’ve had some success using Tailscale to create a VPN and access the NAS over the internet, hooray.
We’re trying to run a compare to sync files across the internet; however, the file compare seems extremely slow.
I’ve attached some images below. It doesn’t use much processor; it looks like a lot of files, but it shouldn’t take 320 hours, as they’re mostly huge files.
It’s only using about 20% of the processor, but it’s reading from the NAS very slowly for some reason. The NAS is connected to the Pi 4B’s USB 3 port; it’s a QNAP TR-004 in RAID 10 mode running hardware RAID.
Transferring files to it locally runs at 113 MB/s and maxes out a gigabit LAN connection.
I’ve got a 100/20 internet connection, and the contractor has a 100/40 connection, so it’s not like we’re using overly slow connections for these operations either.
You could try another VPN solution like WireGuard. In the end, however, the limiting factor is your upload speed; it doesn’t matter how much download bandwidth you or the other side has.
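To put numbers on that: a 10 GB file is about 10 × 8,000 = 80,000 Mbit, and through a 20 Mbit/s uplink that takes 80,000 ÷ 20 = 4,000 seconds, roughly 67 minutes per file at best, regardless of which VPN carries it. A quick check:

```shell
# Seconds to upload 10 GB (= 10 * 8 * 1000 Mbit) at 20 Mbit/s
echo $(( 10 * 8 * 1000 / 20 ))   # prints: 4000
```

That is why the local 113 MB/s figure doesn’t help here: 113 MB/s is roughly 900 Mbit/s, about 45× the 20 Mbit/s uplink, so the internet link, not the NAS, sets the pace for any byte-level compare or transfer.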