Part 3 - Base Config
- In Part Four, we get CoreDNS built
- In Part Five, we deploy and configure CoreDNS
- In Part Six, we add a few odds & ends to make the host more durable
- And in Part Seven, we add the bits for the eInk display
So. You’ve got everything ready to go… let’s power yer Pi up and get to configuring stuff.
You’re likely to want to login to the host locally first. It just removes one more potential hiccup point.
My usual practice is to create a couple of users on a host:
- one user for myself. This user is usually wolf, or wnoble… shrug.
- one user for automation. This user usually has password authentication disabled.
- the user(s) actually running the services
I keep the same UID/GID across all my machines so’s that NFS-mounted homedirs, if I ever choose to use them, aren’t an utter mess.
useradd --uid 2002 --create-home --home-dir /home/loiosh --shell /usr/bin/bash --comment "Wolf Noble" loiosh
passwd loiosh
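The automation user gets created the same way, just without a usable password. As a sketch (the username automaton and UID 2003 are placeholders; swap in your own):
useradd --uid 2003 --create-home --home-dir /home/automaton --shell /usr/bin/bash --comment "Automation" automaton
passwd -l automaton   # lock the password; key-based SSH logins still work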
Change the password of each user that is capable of logging in. Each password should be unique, long, and stored someplace safe, like a password manager (1Password, for example). That means:
- ubuntu
- pi
- root
- any other password-enabled user you’ve created.
As each user:
if [[ -f ${HOME}/.ssh/id_rsa.pub ]]; then
  echo "ssh key exists for $(whoami)"
else
  echo "$(whoami) has no ssh key. Generating."
  ssh-keygen
fi
Next, add your newly minted SSH key to authorized_keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys
for user in USERLISTHERE; do  # replace USERLISTHERE with your login-capable users
  for host in coredns-01 coredns-02 coredns-03; do
    su - ${user} -c "ssh-copy-id ${user}@${host}"
  done
done
Give the login users passwordless sudo via drop-in files under /etc/sudoers.d/:
/etc/sudoers.d/010_pi-nopasswd
/etc/sudoers.d/011_wnoble-nopasswd
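These follow the usual NOPASSWD drop-in pattern; a minimal sketch, assuming the pi and wnoble usernames implied by the filenames above:
pi ALL=(ALL) NOPASSWD: ALL
wnoble ALL=(ALL) NOPASSWD: ALL
Edit them with visudo -f /etc/sudoers.d/010_pi-nopasswd (and likewise for the other file) so a typo can’t break sudo out from under you.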
Validate that you can:
- log in to each host as the expected user.
- perform sudo actions as the users that should be able to.
- reach each host from the others as the relevant users.
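A quick, hedged way to spot-check the first two items from your workstation (hostnames are the ones used above; adjust to your own users and hosts):
for host in coredns-01 coredns-02 coredns-03; do
  ssh ${host} 'echo "login ok on $(hostname)"; sudo -n true && echo "passwordless sudo ok"'
done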
… Let’s continue, shall we?
As a general rule, Your core infrastructure should be as self-reliant as possible.
This means removing as many functional dependencies as possible.
When viewed through that lens, it makes a lot of sense to statically configure your DNS server’s network stack.
That being said, it’s also worthwhile to make things as antifragile as possible.
On the off chance that a host reverts its networking config to use DHCP,
it’d be nice if your DHCP server issued it the address everything expects it to be at, right?
……RIGHT?
Running the command ip a show eth0 should give you everything you need here:
root@coredns-03:/home/pi# ip a show eth0
This should output something similar to:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:ff:aa:dd brd ff:ff:ff:ff:ff:ff
inet 192.168.1.53/24 metric 100 brd 192.168.1.255 scope global dynamic eth0
valid_lft 24271sec preferred_lft 24271sec
inet6 fe80::dea6:32ff:fe55:9063/64 scope link
valid_lft forever preferred_lft forever
This is telling you that your eth0 interface has the MAC address dc:a6:32:ff:aa:dd. Use this to configure your DHCP server.
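If you just want the MAC address by itself (handy for pasting into a DHCP reservation), sysfs has it:
cat /sys/class/net/eth0/address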
Configuring your DHCP server to assign a specific address to this host is optional, and outside the scope of this guide.
It’s worth doing, imho.
Ubuntu 22 uses a combination of cloud-init and netplan to configure networking.
The way to statically configure the network is by editing the netplan config yourself, and then telling cloud-init to keep its hands off (more on that in a moment).
As you can see below, the original example config for netplan isn’t terribly informative.
Fortunately, the netplan manpage (man netplan) has some good examples in it.
/etc/netplan/50-cloud-init.yaml
(original)
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        eth0:
            dhcp4: true
            optional: true
    version: 2
/etc/netplan/50-cloud-init.yaml
(static configuration)
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            match:
                macaddress: 'dc:a6:32:ff:aa:dd'
            wakeonlan: true
            set-name: 'eth0'
            dhcp4: false
            addresses:
                - '192.168.53.53/16'
            gateway4: '192.168.1.1'
            nameservers:
                search: [dmz.wolfspyre.io, wolfspyre.io]
                addresses: ['127.0.0.1']
Test the new config (netplan try) and press ENTER before the timer resets. If you can’t, wait a minute or so, ssh back in, and try again. This might be obvious, but if you’re changing the address your host will have, a successful test will kill your ssh connection.
In this case, getting a ping going from your workstation to the new target address will make it easy for you to assess when you can ssh into the host at its new address.
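For reference, the test-then-commit dance looks like this; netplan try reverts the change automatically if you don’t confirm in time:
sudo netplan try     # applies the config and waits for you to press ENTER; reverts on timeout
sudo netplan apply   # makes the change permanent once you’re happy with it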
The nice thing about the example netplan config is that it tells you exactly how to disable cloud-init’s network configuration:
To disable cloud-init’s network configuration capabilities, write a file /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
network: {config: disabled}
So… that’s pretty straightforward:
echo 'network: {config: disabled}' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
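If you’re not sitting in a root shell, the redirect above won’t fly; a sudo-friendly equivalent:
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg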
Some basic packages that we are certain to want:
apt-get install dstat grub-common grub2-common ifstat jq logrotate net-tools netstat-nat nicstat ntpdate raspi-config os-prober screen silversearcher-ag sysstat tmux tree unzip
It’s fairly simple to set the local timezone:
rm -rf /etc/localtime; ln -s /usr/share/zoneinfo/America/Chicago /etc/localtime
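If you’d rather let systemd handle it, timedatectl does the same job and lets you confirm the result:
sudo timedatectl set-timezone America/Chicago
timedatectl status   # verify the timezone took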
A lot of this stuff could, and arguably should, be done with automation…
AND I don’t want to get into a chicken/egg situation with my core infrastructure.
I will likely write some ansible playbooks to configure a lot of this in the future, which will retain steady-state config over time.
BUT I wanted this to focus on CoreDNS, not on the automation-tool-du-jour1
So for critical infra hosts, I populate /etc/hosts
with the IP addresses of the hosts that system will need to talk to in order to function.
This helps reduce fragility in times of odd fluctuation.
Uncomment the entries for the relevant network segment for the host being built.
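As a sketch, the entries might look something like this; the hostnames and the .53 address come from earlier in this series, but the addresses for the first two hosts and the domain choice are just illustrative, so swap in your own:
# CoreDNS hosts
192.168.53.51   coredns-01 coredns-01.wolfspyre.io
192.168.53.52   coredns-02 coredns-02.wolfspyre.io
192.168.53.53   coredns-03 coredns-03.wolfspyre.io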
The rc.local script runs after the system is up.
Specifically WHEN it runs depends on a few things, but it’s sufficient to say it runs late in the boot process.
We will use it later in this guide to optionally refresh things.
It’s easy to forget that the script must be executable in order to be run. Ergo:
chmod +x /etc/rc.local
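If your image doesn’t already ship one, a minimal /etc/rc.local sketch looks like this (systemd will run it at boot as long as it exists, is executable, and exits cleanly; the body is up to you):
#!/bin/bash
# /etc/rc.local -- runs late in the boot process; keep it quick and idempotent
# (later in this guide, this is where the optional refresh bits will live)
exit 0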
I try to populate /etc/services with relevant information for the ports services will listen on. This is of questionable value; IANA’s port assignment registry should be considered the canonical source.
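For example, if CoreDNS’s Prometheus metrics endpoint ends up on its default port, an /etc/services line might look like this (the service name here is just a label I picked, and 9153 assumes the prometheus plugin’s default):
coredns-metrics 9153/tcp   # CoreDNS prometheus plugin metrics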
Now let’s update the host and wipe the slate clean… thus making sure everything comes back as expected.
sudo apt-get update && sudo apt-get -y upgrade && sudo reboot
Go get a cup of something; this should take a few minutes, and your system will reboot. We’ll pick back up when the host has booted.
“Puppet, Chef, Ansible, Salt, CFEngine, Bash-in-a-for-loop” ↩︎