In the previous part, we set up a headless Raspberry Pi with Alpine Linux. In this post, I'll guide you through the configuration of the hard drive, Docker, and the NAS-related applications.
It's not a definitive guide in any way; I just want to share what works for me, for educational purposes. I recommend you tinker with everything, adapt it to your needs, find improvements, etc. I would also gladly hear your feedback.
Hard drive configuration
Time to connect and set up the hard drive.
We'll create a partition, configure the mounting point, and set up the directory structure.
In my case, and probably in most cases, a single partition with an ext4 filesystem is all we need, although I have little knowledge of more advanced filesystems such as zfs or btrfs.
fdisk -l # find the /dev/sdX of your external hard drive
apk add gptfdisk
gdisk /dev/sdX
Create a new partition table with o (this wipes the disk), create a new partition with n, accept all the defaults, then save and quit with w.
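For reference, the whole session boils down to three keystrokes at the gdisk prompt:
gdisk /dev/sdX
# Command (? for help): o    <- new empty GPT partition table (wipes the disk)
# Command (? for help): n    <- new partition, accept every default
# Command (? for help): w    <- write the changes to disk and quit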
Create the ext4 filesystem:
apk add e2fsprogs
mkfs.ext4 /dev/sdX1
Let's add an entry in /etc/fstab to permanently configure the mounting point. First, run blkid and copy your partition's UUID. Then, add the following line to your /etc/fstab file:
UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /media/hdd0 ext4 rw 0 0
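Note that the mount point itself must exist before anything can be mounted there; create it once:
mkdir -p /media/hdd0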
If everything is configured well, you should be able to manually mount the HDD simply with:
mount /media/hdd0
You now have persistent storage on /media/hdd0 that will be mounted automatically on startup! 😎
After a few iterations I have settled on this directory structure:
├── storage
│ ├── user1
│ ├── user2
│ ├── user3
│ ├── group1
│ └── public
└── system
├── config <- configuration files kept in sync with your
│ local config folder provided by the repo below
└── docker <- docker daemon folder
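To bootstrap this layout on the mounted drive, something like the following does the trick (user1, user2, etc. are just the placeholders from the tree above):
# busybox ash has no brace expansion, so each path is spelled out
mkdir -p /media/hdd0/storage/user1 /media/hdd0/storage/user2 /media/hdd0/storage/user3
mkdir -p /media/hdd0/storage/group1 /media/hdd0/storage/public
mkdir -p /media/hdd0/system/config /media/hdd0/system/docker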
You can now uninstall the gptfdisk and e2fsprogs packages using apk del.
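For example:
apk del gptfdisk e2fsprogs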
Installing Docker
Now let's install our main tool. Edit /etc/apk/repositories and uncomment the community repo.
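After uncommenting, the file should contain something like this (the exact URL depends on your Alpine release; v3.16 here is just an example):
http://dl-cdn.alpinelinux.org/alpine/v3.16/main
http://dl-cdn.alpinelinux.org/alpine/v3.16/community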
Then, update your repo index and install Docker:
apk update
apk add docker
One important thing now is to make Docker use our external hard drive to store all of its config, images, containers, volumes, etc. Edit /etc/docker/daemon.json and add:
{
"data-root": "/media/hdd0/system/docker",
"ipv6": false
}
The ipv6 setting is there to work around a bug in Docker's iptables integration that occurs on Alpine at the time of writing.
Lastly, create the directory specified above, set Docker to launch on startup, start it manually now, and add your user to the docker group.
mkdir -p /media/hdd0/system/docker
rc-update add docker default
rc-service docker start
adduser konrad docker
Docker should now be running; try docker run --rm hello-world. 😎
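You can also verify that the daemon really stores its data on the external drive:
docker info --format '{{ .DockerRootDir }}'
# should print: /media/hdd0/system/docker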
Configuring a local Docker client
This step is optional, but I highly recommend it. It will make Docker so much more convenient: you'll be able to use the docker command locally to control the remote Docker daemon.
Downloading the Docker client
Skip this step if you already have docker installed on your workstation.
Let's create a local bin directory, add it to the PATH, and download and extract the Docker client.
If you use zsh, replace .bashrc with .zshrc.
mkdir -p ~/.local/bin
echo "export PATH=$HOME/.local/bin:$PATH" >> ~/.bashrc
curl https://download.docker.com/linux/static/stable/x86_64/docker-20.10.16.tgz \
| tar -xzvf - -C ~/.local/bin --strip-components=1 docker/docker
Now restart your shell, and the docker command should be available. (The URL above is for an x86_64 workstation; if yours is ARM-based, replace x86_64 with aarch64.)
Configuring Docker context
Next, let's set up the Docker client so that it connects through SSH to the Docker daemon running on your server. Create the Docker context, pointing at your server's IP address:
docker context create raspberry --docker "host=ssh://konrad@192.168.X.X"
docker context use raspberry
Now you should be able to run Docker commands from the command line of your local machine 🤯
If you wish to go back to using the local Docker daemon:
docker context use default
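You can also target a context for a single command without switching, using the global --context flag:
docker --context raspberry ps   # one-off command against the Raspberry Pi
docker --context default ps     # one-off command against your local daemon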
Deploying NAS applications
NAS simply means Network Attached Storage, and there are multiple ways to achieve it. I have decided to rely on fairly primitive but well-supported and powerful protocols:
- an SMB server, for access from Windows on the local network,
- an SFTP server, for remote access and/or access from other devices,
- a DLNA multimedia server.
I can't really argue this is the best way to make a NAS. I am still experimenting with different setups, but so far I have not been convinced by any all-integrated, ready-to-use solution. I've tried Nextcloud, and although it seemed pretty user-friendly and easy to set up, I found it a bit bloated and slow, too tightly integrated and closed (hard to integrate with other apps), and too web-UI-centered for configuration, which does not suit my preference. You can still try it very easily with docker-compose to form your own opinion.
Let's configure these services. They are all independent, but we'll try to integrate them well.
I keep all the build code, deployment code, and configuration for every service in one well-organized folder, following the structure below:
├── config <- configuration files, synchronized with the server
│  ├── minidlna.conf
│  └── smb.conf
├── docker <- shell scripts for deploying docker services
│  ├── minidlna.sh
│  ├── samba.sh
│  ├── samba-wsdd.sh
│  ├── sshd.sh
│  ├── tidal-dl.sh
│  └── transmission.sh
├── minidlna <- build directory for minidlna
│  ├── cmd.sh
│  └── Dockerfile
├── samba <- build directory for the SMB server
│  ├── cmd.sh
│  ├── Dockerfile
│  └── setup-users.sh
├── samba-wsdd <- build directory for Samba WSDD
│  └── Dockerfile
├── sshd <- build directory for the SSH+SFTP server
│  ├── Dockerfile
│  ├── repositories
│  └── setup-users.sh
└── users.gpg <- encrypted list of users with their password
Passwords should preferably not be stored in a decryptable form; I still need to find an alternative approach.
The following parts will use these files, which are available in this repository.
Regarding the config folder, I will soon write an article about a convenient way to keep it synchronized with your server. Stay tuned.
The users file
I tried to centralize the user credentials so that I can keep them in sync across all services more easily. For now, my approach consists of keeping the users' configuration in a CSV-like file with each user's name, password, and home directory path, like this:
konrad pass1234 /mnt/storage/konrad
user2 password123 /mnt/storage/user2
user3 bonjour /mnt/storage/user3
The home directory needs to be the location inside the container: you'll need to mount your users' home directories at the same path across all services.
Then, I symmetrically encrypt it with gpg. You'll be asked for a passphrase that must be stored in a secure place.
gpg -c users
You should now remove the unencrypted users file.
Now, for services requiring user setup, the encrypted file can be decrypted on the fly and piped to a setup-users script running inside the container. Everything is automated in the deployment scripts, so you only need to provide the passphrase, as sketched below.
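To give an idea, the core of such a script might look like this (the container name and the in-container script path are assumptions based on the layout above; check the repo for the real thing):
# decrypt the user list and pipe it to the setup script inside the container
gpg --decrypt users.gpg | docker exec -i samba /setup-users.sh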
Deploying the Samba server
Samba is the SMB implementation for Linux. SMB is the protocol used by Microsoft Windows to exchange files over the network. This SMB server will allow us to create network shared folders accessible by devices on the LAN. You can also get it to work on Linux with the Samba client, on macOS, and on Android with a file explorer app. The relevant files from the repo are:
- samba/Dockerfile
- samba/cmd.sh
- samba/setup-users.sh
- config/smb.conf
- docker/samba.sh
Additionally, you need to configure a Web Service Discovery Daemon (WSDD) so that your server can be found in the Windows "Network" view.
- docker/samba-wsdd.sh
- samba-wsdd/Dockerfile
To deploy, simply run:
docker/samba.sh
docker/samba-wsdd.sh
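If you're curious, docker/samba.sh is essentially a build followed by a run; here is a hedged sketch (the image name, mounts, and options are assumptions on my part, the real script is in the repo):
docker build -t samba ./samba
docker run -d --name samba --network host \
	-v /media/hdd0/storage:/mnt/storage \
	-v /media/hdd0/system/config/smb.conf:/etc/samba/smb.conf:ro \
	samba
# host networking keeps SMB browsing and discovery simple on the LAN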
Deploying the SSH server
SMB is cool for the local network. But what should you use to access your files remotely? After experimenting without success with options such as WebDAV, I've realized that SSH is the most versatile type of connection you can provide for your users:
- It's completely encrypted, and can be used with convenient key-based authentication,
- It gives access to files through SFTP,
- It gives access to the command line, which, although pretty unapproachable for some users, is more powerful than anything else. Users will be able to create and extract archives, download files, manage permissions, use rsync for fast file synchronization, etc.
- It's compatible with many types of devices, natively or through simple third-party apps.
Now, the elephant in the room: there's already an SSH daemon running on the root system, occupying the default SSH port 22. How do we reconcile the two? There are several possible solutions; I ultimately chose to deploy the second, in-Docker SSH server listening on a different port.
To deploy, simply run:
docker/sshd.sh
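As a rough idea of what docker/sshd.sh does (the port and mounts are illustrative assumptions):
docker build -t sshd ./sshd
docker run -d --name sshd -p 8376:22 \
	-v /media/hdd0/storage:/mnt/storage \
	sshd
# -p 8376:22 publishes the container's sshd on a non-standard host port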
To avoid having to enter this custom port every time, you can set up a second hostname resolving to your server (e.g. ssh.konradpoweska.com), and add the following rule to your clients' ~/.ssh/config file:
Host ssh.konradpoweska.com
  Port 8376
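With that in place, your clients pick up the custom port automatically:
sftp konrad@ssh.konradpoweska.com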
To make it accessible remotely, you'll need to set up port forwarding on your home router. I won't cover that for now.
Deploying the MiniDLNA server
DLNA is the protocol used by most smart multimedia devices (TVs, speakers, etc.) to play media from other devices on the network. A MiniDLNA server will allow these devices to play your music, movies, and pictures from your NAS.
- minidlna/Dockerfile
- minidlna/cmd.sh
- config/minidlna.conf: this custom configuration is based on the default file shipped with the minidlna apk package.
- docker/minidlna.sh: run this file to build and deploy.
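As with the other services, the deployment script is essentially a build plus a run; here is a sketch assuming your media lives in the public share (host networking is needed because DLNA discovery relies on multicast):
docker build -t minidlna ./minidlna
docker run -d --name minidlna --network host \
	-v /media/hdd0/system/config/minidlna.conf:/etc/minidlna.conf:ro \
	-v /media/hdd0/storage/public:/media:ro \
	minidlna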
Conclusion
At the end of this tutorial, you should have a working NAS with SMB, SSH, and DLNA. Let me know how your NAS turned out; I would love to hear about it.