lfcode.ca notes compiled for future reference

Setting up client certs for secure remote access to home lab services

Because I have some masochistic tendencies at times, I decided that it was a totally good idea™ to set up client certificate authentication to secure remote access to my lab services such as Grafana or Guacamole.

Unsurprisingly, since it's an uncommonly used and rather finicky authentication method, there were problems. Quite a few of them.

I'm writing this post mostly for myself, in case I ever do this again, because it felt like it took too long to accomplish.

First, the list of requirements:

  • Should allow access without certs on the local network

  • Should use nginx

The latter was pretty easy, since I'm most familiar with nginx; the former was rather more interesting. I realized that, to implement this, I needed to set certificate verification as optional, then enforce it manually. That enforcement would mean either modifying the back ends (meaning maintaining patches, nope!) or doing it within nginx.
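Making verification optional at the server level looks something like this (a minimal sketch; the CA path is a placeholder, not my actual config):

ssl_client_certificate /etc/nginx/certs/ca-chain.cert.pem;
ssl_verify_client optional;

With optional, nginx requests a client cert but doesn't reject connections that lack a valid one; the outcome lands in $ssl_client_verify for manual enforcement.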

One issue is that nginx's if statements are rather strange, presumably due to the simplistic grammar used when parsing the configuration. There is no way to express a logical AND without hacks. The hack I chose was variable concatenation (which cannot be done inline in the if condition; each set has to live in its own separate if block). Here's how I enforce certs for non-LAN hosts:

if ( $ssl_client_verify != "SUCCESS" ) {
    set $clientfail "F";
}
if ( $client_loc = "OUT" ) {
    set $clientfail "${clientfail}F";
}
if ( $clientfail = "FF" ) {
    return 401;
}

$client_loc is defined in a geo block:

geo $client_loc {
    default OUT;
    10.10.0.0/16 IN;
    10.11.0.0/16 IN;
}

But simply defining ssl_client_certificate and setting up the clients would be too easy. In setting this up, I learned that nginx has an error message that reads, in full: "The SSL certificate error". Yes. That's an error message. It's so bad it could have been written by Microsoft. Fortunately, it's very simple to add an error_log logs/debug.log debug; directive and get some slightly less cryptic details.

The big thing that tripped me up with the server setup was ssl_verify_depth: it defaults to 1, so with a Root→Intermediate→Client hierarchy, clients fail verification. Set it to something like 3 and it will work.
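In config form:

# default is 1, which is too shallow for Root -> Intermediate -> Client
ssl_verify_depth 3;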

Next, for the certificate setup:

The ssl_client_certificate server directive needs to point to a chain certificate file, or else it will fail with an error that suggests problems with the server certificate instead (thanks for that).
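If the CA certs are in separate files, the chain file is just the concatenated PEMs (file names here are placeholders):

cat intermediate.cert.pem root.cert.pem > ca-chain.cert.pem

ssl_client_certificate then points at ca-chain.cert.pem.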

The clients (for now, Chrome on Linux) need a PKCS#12 file with some chain-like stuff in it. Generate one with something like:

openssl pkcs12 -in client-chain.cert.pem -out client.pfx -inkey client.key.pem -export

where client-chain.cert.pem is a full chain from the client cert to the root CA and client.key.pem is the client's private key.
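To sanity-check what ended up in the bundle before importing it, something like this works (it will prompt for the export password):

openssl pkcs12 -info -in client.pfx -noout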

The other issue with the clients was that, even though my CA was imported as part of the pfx file, it wasn't trusted to authenticate servers. This was quickly solved with a trip to the Authorities tab in Chrome's certificate settings.

The client certs used here were issued from my own CA and have the Client Authentication extended key usage enabled.

Tags: homelab, nginx, tls

NUT not finding my UPS + fix

I use a CyberPower CP1500AVRLCD as a UPS in my lab. I'm just now getting enough stuff running on it that I want automatic shutdown (because it won't run for long with the higher power draw of more equipment). So, I plugged it into the Pi that was running as a cups-cloud-print server and sitting on a shelf with my network equipment. The problem was that the NUT driver for it didn't want to load. As is frighteningly common, it was a permissions problem.

Here's the log showing the issue:

Jul 09 16:49:58 print_demon upsdrvctl[8816]: USB communication driver 0.33
Jul 09 16:49:58 print_demon upsdrvctl[8816]: No matching HID UPS found
Jul 09 16:49:58 print_demon upsdrvctl[8816]: Driver failed to start (exit status=1)

Here's the udev rule that fixes it (dropped into /etc/udev/rules.d/; note that the action match must be lowercase add):

ACTION=="ADD",SUBSYSTEM=="usb",ATTR{idProduct}=="0501",ATTR{idVendor}=="0764",MODE="0660",GROUP="nut"

What this does is: when udev gets an event for the device with USB vendor id 0764 and product id 0501 being added to the system, it changes the permissions on the device files (think /dev/bus/usb/001/004 and /devices/platform/soc/20980000.usb/usb1/1-1/1-1.3) to allow group nut to read and write to them, enabling comms between the NUT driver and the device.
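To apply the rule without rebooting, reload udev and retrigger (or just replug the UPS); the rule file name here is my own choice:

# assuming the rule was saved as /etc/udev/rules.d/62-nut-ups.rules
udevadm control --reload-rules
udevadm trigger --subsystem-match=usb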

Tags: homelab, linux, raspberry-pi, udev

Human error is the root of all problems, especially with network bridges

When in doubt, the problem is directly caused by one's own stupidity.

I was trying to run an LXD host in a Hyper-V VM and went to set up bridged networking (in this case, notworking). Twice. The good old rule that it's caused by my own stupidity rang very true. The problem was that the VM's network adapter wasn't allowed to change the source MAC address of its packets. The toggle ("Enable MAC address spoofing") is in the VM properties, under the Advanced Features child node of the NIC.
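The same toggle can be flipped from PowerShell on the Hyper-V host (a sketch; the VM name is a placeholder):

# Let the guest send frames with MACs other than its own, which a bridge needs.
# "lxd-host" is a placeholder VM name.
Set-VMNetworkAdapter -VMName "lxd-host" -MacAddressSpoofing On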

This is why you should have a routed network.

Tags: homelab, hyper-v, lxd, containers, networking

How to have a functional dhcrelay

I'm dumb. Or ignorant. Or inexperienced. I haven't decided which.

dhcrelay only gets proper responses if it's told to listen on both the interface where the client requests arrive and the interface where the server's responses come back.

My command line for forwarding DHCP requests to the Windows DHCP server in my virtual lab is:

/usr/bin/dhcrelay -4 -d -i eth1 -i eth2 10.x.x.x

eth1 is the interface with the Windows DHCP server on its subnet

eth2 is the interface with the clients on it

10.x.x.x is the address of the Windows DHCP server

This runs on my Arch gateway VM (yes, I know; Debian took longer than Windows to install, and the only stuff on it is base, vim, and dhcp). I could also stand up a Windows box and have it do NAT, but that wouldn't run nearly as happily in 512MB of RAM.
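Since it's on Arch, a minimal systemd unit keeps the relay running (a sketch; the unit layout is my own choice, and 10.x.x.x stays the placeholder from above):

[Unit]
Description=DHCPv4 relay to the Windows DHCP server
After=network.target

[Service]
# -d keeps dhcrelay in the foreground, which suits systemd's default Type=simple
ExecStart=/usr/bin/dhcrelay -4 -d -i eth1 -i eth2 10.x.x.x
Restart=on-failure

[Install]
WantedBy=multi-user.target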

Tags: Windows Server, dhcp, linux, homelab