Linux: How to detect the current shell inside a script

This seems obvious, right? You just do:

echo $SHELL

and that’s it, correct?

Nope, that gives you the user’s default shell. If the user’s shell is bash, and a script has a shebang (#!) in the first line to specify a different shell (e.g. #!/bin/ksh), the above will say it’s “/bin/bash”. And that’s not what we want here.

Some quick googling was in order, and most suggestions were either:

echo $0

or:

ps -p $$

These work fine in an interactive shell… but inside a script, they’ll reply with the script‘s path, not the shell.

Finally, I found one which worked. It requires /proc, which may not exist in some Unixes, but, AFAIK, it’s mostly universal in Linux:

readlink /proc/$$/exe

So, if, for instance, a user’s default shell is bash, the script:

#!/bin/ksh
readlink /proc/$$/exe

will reply (correctly) with something like (on a Red Hat 8.x system; other distros may have different versions of ksh and/or paths, by default):

/usr/bin/ksh93
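
If /proc isn’t available (e.g. on some non-Linux Unixes), a possible fallback — just a sketch, I haven’t tested it on every platform — is asking ps for only the command name of the shell process running the script:

ps -p $$ -o comm=

On Linux this prints something like “bash” or “ksh93”; note that very old ps implementations may not support the “-o” option.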

Linux: “Oops, forgot to copy directory ACLs in an rsync…”

Let’s say you want to move a (relatively large) filesystem, with an application’s data files (including a lot of subdirectories, many of them with specific ACLs), to a faster disk (for instance). The old FS is mounted as /PROD, while the new disk is mounted as /NEW; the idea is to stop the application, rsync the data, and switch mount points. Easy, you think:

rsync -av /PROD/ /NEW/

So, the application guys stop the app, you enter the command above, then switch mount points so the old /NEW is now /PROD, and the old /PROD is now /OLD. Everything looks fine, the application guys start their thing again…

… and then complain about access problems. Oops, the rsync didn’t copy the ACLs — you needed the -A argument to include them (e.g. rsync -aAv).

But they took a while to notice it, so by now there’s a lot of new data in the new filesystem, and simply doing another full rsync from /OLD to /PROD is unacceptable. You’d simply like to copy the directory ACLs (for simplification’s sake, let’s assume there are no file ACLs here — but see below). But there are a couple thousand directories, and of course you don’t want to do it manually…

The solution I came up with was:

cd /OLD
find . -type d | cut -d '.' -f 2- > /tmp/directories.txt
IFS=$'\n' ; for i in `cat /tmp/directories.txt`; do getfacl "/OLD$i" | setfacl --set-file=- "/PROD$i" ; done

Basically, it stores a list of directories in /OLD, then, for each of them, it copies the ACL from /OLD/<directory> to /PROD/<directory>. setfacl’s “--set-file=-” means use the standard input as a “file”, which comes from getfacl’s output. The “IFS=$'\n'” bit means that the for loop cycles through entire lines, not “words” — otherwise, it would try to split paths with spaces in them.

A limitation of this is that any newly created directories in the new /PROD filesystem won’t be affected, but hopefully they won’t be too many.

What if you wanted to include files as well? Just remove the “-type d” in the find command. Note that, in this case, it’s much more likely that there are new files in the new /PROD FS that won’t have their ACLs corrected.
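
As an aside, if you want to copy every ACL (directories and files) in one go, getfacl’s recursive mode combined with setfacl --restore should also do the job; here’s a sketch, assuming the GNU acl tools and that both trees have the same structure:

cd /OLD
getfacl -R . > /tmp/acls.txt # paths in the dump are relative to /OLD
cd /PROD
setfacl --restore=/tmp/acls.txt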

Moving to TLSv1.2 or newer: Nginx, Apache, Postfix, Dovecot

Moving on from “should we do it?” (with the answer to most real-world scenarios being “yes, and as a bonus it can help block a lot of spambots”), here’s how to restrict several Internet services — Nginx, Apache, Postfix, and Dovecot — to TLSv1.2 or newer.

As usual, these are not complete guides for any of those servers; I’m assuming you already have them working fine (including TLS encryption), and just want to disable any TLS protocols lower than v1.2. (If you need to add TLS to a non-TLS server, see instructions for Nginx, Apache, Postfix, and Dovecot.)

Nginx:

In each virtual host’s server section — or, even better, if you’re using Let’s Encrypt, in /etc/letsencrypt/options-ssl-nginx.conf or its equivalent — add the following (or replace any existing ssl_protocols entry):

ssl_protocols TLSv1.2 TLSv1.3;

Restart Nginx, and test it on SSL Labs. You should get something like this:

TLS v1.2 and v1.3 only

(Note: if you’re using a very old version of Nginx, it may not accept the “TLSv1.3” parameter and refuse to start; in such a case, remove it — or, better yet, upgrade your system. 🙂 )

Apache:

Similarly to Nginx, you can add one of the following to each Virtual Host, or to the global HTTPS configuration (typically in /etc/httpd/conf.d/ssl.conf), or, if using Let’s Encrypt, to /etc/letsencrypt/options-ssl-apache.conf :

SSLProtocol            +TLSv1.2 +TLSv1.3

or:

SSLProtocol             all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1

Right now, they’ll do the same thing: allow TLSv1.2 and v1.3 only. Personally, I like the second version (which disables older protocols) better, for two reasons: 1) it’ll work even with some ancient Apache version that doesn’t recognize “TLSv1.3”, and 2) when future TLS versions are added, they’ll be enabled, making it more future-proof.

Again, you can test the new configuration on SSL Labs.

Postfix:

Add the following to /etc/postfix/main.cf (replacing any equivalent entries, if they exist):

smtp_tls_mandatory_protocols=!SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols=!SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols=!SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_protocols=!SSLv2, !SSLv3, !TLSv1, !TLSv1.1

After restarting Postfix, you can test its available protocols with Immuniweb’s SSL Security Test (specify something like yourhostname:25, or yourhostname:465 if not using STARTTLS).

Dovecot:

Add the following (or replace it if it exists) to your SSL configuration (typically /etc/dovecot/conf.d/10-ssl.conf):

ssl_min_protocol = TLSv1.2

Restart Dovecot, and test it on Immuniweb’s test (use something like yourhostname:143 , or yourhostname:993 if not using STARTTLS).
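
In all four cases, you can also do a quick test from any machine with the openssl command-line tool: a handshake forced to an old protocol version should now fail, while TLSv1.2 should still work. Something like this (hostnames and ports are examples; add -starttls smtp or -starttls imap for ports 25 and 143):

openssl s_client -connect yourhostname:443 -tls1_1
openssl s_client -connect yourhostname:993 -tls1_2

The first command should end in a handshake error; the second should show the usual certificate details.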

Moving to TLSv1.2 or newer: is now the time?

TLSv1.0, used not only for HTTPS but also for secure SMTP, IMAP, etc., has been around for a while — since 1999, in fact. While it’s not “hopelessly broken” like SSL 2 and 3 are, there have been many successful attacks/exploits against it in the past two decades, and, while most implementations (both servers and clients) have been patched to work around those, there may always be another one just around the corner. Also, TLSv1.0 came from a time when it was desirable to support as many different ciphers as possible, which means that, by default, some ciphers later found to be insecure may still be enabled.

Therefore, the end of TLSv1.0 has been talked about for a while, and, indeed, the main browser vendors — Google, Mozilla, Apple, and Microsoft — have announced that they’ll deprecate TLSv1.0 (and 1.1, which is little more than a revision of 1.0) by early 2020.

Now, until a few days ago I still had TLSv1.0 enabled on my servers (including Nginx, Postfix and Dovecot); my belief then being that some encryption was always better than no encryption, and that, besides, we were mostly talking about public websites that didn’t ask for user credentials or anything. Furthermore, modern browsers/email clients default to secure protocols (TLSv1.2 or better), so there would never be, “in real life”, a situation where important/sensitive data (say, me logging on to my own mail server, or to one of my WordPresses) was being transmitted in an insecure way.

On the other hand… doesn’t the above mean that, “in real life”, nobody will really be using TLSv1.0 “legitimately”? Doesn’t it mean that setting up TLSv1.2 as a minimum will have no adverse consequences?

So I investigated it a little:

  • in terms of browsers/clients, requiring TLSv1.2 or above means: 1) no Internet Explorer before version 11; 2) no Safari before 6.x; 3) no default Android browser before Android 4.4.x (but those users can still use Chrome or other non-default browsers). (You can check this out on SSL Labs). I don’t have a way to check email clients that old, but you can assume that anything released after the browsers just mentioned will be OK;
  • in terms of my own server logs (mostly Nginx, but also Postfix, in terms of opportunistic encryption), apparently only spambots and such are using TLSv1.0; “legitimate” email servers use TLSv1.2 or 1.3. The same goes for my websites — any TLSv1.0 entries in my logs are always from some bot (and not one belonging to a “well-known” search engine), not from actual visitors. (A quick way to check your own logs is sketched below.)
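
For instance, if your Nginx is set up to log the negotiated protocol (as in the cipher-logging example further down this page), a quick sketch like this will show which clients, if any, still use the old versions (the log file name is an example):

grep -E 'TLSv1(\.1)?/' /var/log/nginx/servername-ciphers.log | awk '{print $1}' | sort | uniq -c | sort -rn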

In short: you can probably move to requiring TLSv1.2 or above now (and, indeed, I did so a few days ago), unless you’re in some very peculiar situation — maybe your site needs to be accessed by users of a company whose IT policy forbids software newer than 20 years old? Or you have a CEO who’s always refused to upgrade his beloved Symbian phone, but who’d flip out if he couldn’t access your company’s webmail with it? 🙂 Otherwise, my advice is: go TLSv1.2 and don’t look back.

However, since this post is already quite long, I’ll leave the specific instructions for disabling TLS before 1.2 in Nginx, Apache, Postfix, and Dovecot for the next one.

Nginx: How to prioritize ChaCha20 for devices without hardware AES support

Some time ago, I noticed something like this in the SSL Labs rating of a site (not one of mine):

SSL Labs - "P"

According to SSL Labs’ test, that little P means:

(P) This server prefers ChaCha20 suites with clients that don’t have AES-NI (e.g., Android devices)

Naturally, my curiosity was piqued, and a bit of investigating followed…

First (and briefly), the theory:

  1. both AES and ChaCha20 ciphers are thought to be equally secure.
  2. many modern CPUs provide hardware acceleration for AES, in which case it’s faster…
  3. … however, if the CPU doesn’t accelerate AES, then ChaCha20 is faster.

Therefore, setting a web server (or any other kind of server, such as IMAP or SMTP, but those are outside this article’s scope) to use either option all the time means that some clients won’t have the optimal cipher(s) (in terms of performance) for their device. It probably won’t be noticeable in 99% of cases, but, hey, I’m a geek, and geeks optimize stuff for fun, not just for real-world performance. 🙂

Fortunately, as the SSL Labs test shows, it’s possible to configure a server to use/prefer one cipher for non-hardware accelerated devices, and another cipher (or, more precisely, a list of preferred ciphers) for the rest.

(For the curious, the selection process works like this: if the client’s preferred cipher is ChaCha20, then the server assumes it’s a device without hardware AES and uses that. If the client’s top cipher is anything else, then the server uses its own cipher list.)

This guide shows how to do it in Nginx, using Let’s Encrypt certificates. I’m using an Ubuntu 19.04 server, but any relatively recent Nginx should work, as long as you’re using OpenSSL 1.1.0 or newer, or LibreSSL. If your distro still provides only OpenSSL 1.0.x or older by default, you can always compile a newer version to a separate directory, and then compile Nginx to use it: here are instructions for OpenSSL 1.1.x or LibreSSL.

This is actually relatively simple, though some specific distros/servers may require some other changes:

1- Give Nginx a list of ciphers that doesn’t have ChaCha20 in the first position (some AES ciphers need to come first). The list I’m using (currently specified in /etc/letsencrypt/options-ssl-nginx.conf, since I’m using Let’s Encrypt) is:

ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:!DSS:!3DES';

2- Edit your OpenSSL/LibreSSL configuration file (in Ubuntu/Debian it’s /etc/ssl/openssl.cnf) and, just before the first section in square brackets (in Ubuntu it’s “[ new_oids ]“), add the following:

openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = ServerPreference,PrioritizeChaCha

3- Restart Nginx, and test it on SSL Labs. If you see the superscript “P” at the right of each ChaCha20 line, everything went well. (Problems? Ask here, and maybe I, or some other reader, can help.)

You can also set up your Nginx to log ciphers (possibly to a separate log file), using something like:

log_format logciphers '$remote_addr ' '$ssl_protocol/$ssl_cipher ' '"$request" $status $body_bytes_sent ' '"$http_user_agent"';

access_log /var/log/nginx/servername-ciphers.log logciphers;

And then access the site with some of your devices, noting what ciphers they use. For instance, both my (2015) i7 laptop and my (2016) ZenFone 3 Android phone use AES (yes, apparently the Snapdragon 625 chip includes AES acceleration), but a 2013 Nexus 7 tablet and a 2015 iPad Pro (Safari, in this case — all other examples used Chrome) went for ChaCha. This shows that everything’s working correctly — the devices that specifically want to use ChaCha are indeed using it, while the rest aren’t. And, yes, this means that the very same browser (in this case, Chrome on Android) has different cipher preference defaults depending on the hardware it’s running on.
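
By the way, with that same log, here’s a quick way to summarize which protocol/cipher combinations your visitors actually negotiate (the protocol/cipher pair is the second field in the log_format above):

awk '{print $2}' /var/log/nginx/servername-ciphers.log | sort | uniq -c | sort -rn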

Today I learned: root’s cron (including stuff like logrotate) doesn’t run if root’s password is expired

… which can be hard to spot, since in most places you never use root’s password anywhere (you “sudo su” to root using your user‘s password).

Today’s story:

  1. server has a logfile of several GB;
  2. head logfile shows it hasn’t been rotated in more than a year;
  3. running logrotate /etc/logrotate.conf manually works;
  4. /var/log/cron includes entries like:

Mar 22 14:10:01 xxxxxxxx crond[26561]: (root) PAM ERROR (Authentication token is no longer valid; new one required)
Mar 22 14:10:01 xxxxxxxx crond[26561]: (root) FAILED to authorize user with PAM (Authentication token is no longer valid; new one required)

  5. chage -l root shows that its password has expired…

Now, why did root have password expiration enabled? It’s a mystery 🙂 — probably someone ran a script configuring password expiration for all users and forgot to add some exceptions to it, root among them. Anyway,

chage -E -1 -M -1 root ; passwd root

solved the problem (“-E -1” removes the account expiration date, and “-M -1” disables the maximum password age). Hope this is useful. 🙂

Your own OpenVPN, or: How to make all WiFi hotspots trustworthy

If you normally browse the web (and you don’t block ads), you’ve probably already seen hundreds, if not thousands, of advertisements for “VPNs” or “VPN software/subscriptions” — how you “need” one to have any kind of privacy, how “they” can track you everywhere if you don’t use one, and so on. And they’re usually not cheap. Interestingly, and probably because of some recent “revelations”, all those advertisements seem to focus on privacy or anonymity only.

But using a VPN can offer you something else: mobile security; namely, the ability to make your mobile devices “be”, on demand, part of your home network (or your server’s network, if you have, say, a VPS somewhere), no matter where you are, regardless of whether you’re using 3G/4G mobile data, or some random WiFi hotspot. Yes, even one with no encryption at all; it won’t matter. You can, absolutely, trust random open hotspots again; even their owner won’t be able to read or alter your traffic in any way. And you can do it for free, too (assuming you already have an always-connected Linux server); you just need to configure your own OpenVPN server, which you’ll be able to do by following this (hopefully accessible) guide.

Continue reading “Your own OpenVPN, or: How to make all WiFi hotspots trustworthy”

Red Hat/CentOS: How to add HTTPS to an existing Nginx website (both with and without Let’s Encrypt)

(Yes, you read the title correctly. For extra fun, and to prevent this blog from being too focused on Ubuntu/Debian, this time I’ll be using Red Hat Enterprise Linux / CentOS (and, I assume, Fedora as well). Later on, I may post a Debian-based version.)

Configuring a basic HTTP site on Nginx

(Note: if you already have a working HTTP site, you can skip to the next section (“Adding encryption…”))

Yes, the post title mentions an “existing” website, which I believe will be the case in most “real world” situations, but installing a new one is actually very easy on CentOS. First, do:

yum -y install epel-release; yum -y install nginx

Then create a very basic configuration file for the (non-HTTPS) site, as /etc/nginx/conf.d/mysite.conf :

server {
        listen 80;
        server_name mysite.mydomain.com;
        root /var/www/mysite;

        location / {
        }
}

Then, of course, create the /var/www/mysite directory (CentOS doesn’t use /var/www by default, but I’m far too used to it to change. 🙂 ) If you’d like, create an index.html text file in that directory, restart nginx (“service nginx restart” or “systemctl restart nginx“, depending on your system’s major version), and try browsing to http://mysite.mydomain.com . If it works, congratulations, you have a running web server and a basic site.

Adding encryption to the site (not using Let’s Encrypt):

First, you need a key/certificate pair. I have tutorials for creating an RSA certificate or an ECDSA certificate.

Second, edit the site’s configuration file (in the “starting from scratch” example above, it’s “/etc/nginx/conf.d/mysite.conf“), and copy the entire server section so that it appears twice on that text file (one after the other). Pick either the original or the copy (not both!), and, inside it, change the line:

listen 80;

to:

listen 443 ssl http2;

(Note: the “http2” option is only available in Nginx 1.9.5 or newer. If your version complains about it, just remove it, or upgrade.)

Inside that section, add the following options:

ssl_certificate /path/to/certificate.crt;
ssl_certificate_key /path/to/privatekey.crt;

(replacing the paths and file names, of course.)

This should be enough — restart Nginx and you should have an HTTPS site as well as the HTTP one.
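
For reference, the HTTPS server section from this example should end up looking something like this (paths and names are placeholders, of course):

server {
        listen 443 ssl http2;
        server_name mysite.mydomain.com;
        root /var/www/mysite;

        ssl_certificate /path/to/certificate.crt;
        ssl_certificate_key /path/to/privatekey.crt;

        location / {
        }
}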

And what if you want to disable HTTP for that site and use HTTPS only? Just edit the same configuration file, look for the server section you didn’t change (the one that still includes “listen 80;“), and replace the inside of that section with:

listen 80;
server_name mysite.domain.com;
return 301 https://mysite.domain.com$request_uri;

Afterward, while you’re at it, why not go for an A+ rating on SSL Labs (skip the certificate creation part from that tutorial, you’ve done it already) and an A rating on securityheaders.io?

Adding encryption to the site (using Let’s Encrypt):

First, install certbot:

yum install python2-certbot-nginx

Then check if Nginx is running and your normal HTTP site is online (it’ll be needed in a short while, to activate the certificate). If so, then enter:

certbot --nginx --rsa-key-size 4096 -d mysite.mydomain.com

(replacing “mysite.mydomain.com” with yours, of course.)

Answer the questions it asks you: a contact email, whether you agree with the terms (you need to say yes to this one), if you want to share your email with the EFF, and finally if you want “No redirect” (i.e. keep the HTTP site) or “Redirect” (make your site HTTPS only).

And that’s it (almost — see the next paragraph) — when you get the shell prompt back, certbot will already have reconfigured Nginx in the way you chose in the paragraph above, and restarted it so that it’s running the new configuration. You may want to add “http2” to the “listen 443 ssl;” line in the configuration file (it’ll probably be the default someday, but as of this post’s date it isn’t), and don’t forget your options for improved security and security headers.

Only one thing is missing: automatically renewing certificates. Strangely, the certbot package configures that automatically on Ubuntu, but not on CentOS, from what I’ve seen (please correct me if I’m wrong). The official Let’s Encrypt docs recommend adding this (which includes some randomization so that entire timezones don’t attempt to renew their certificates at precisely the same time) to root’s crontab:

0 0,12 * * * python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew 
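
You can also check that renewals will actually work, without touching your real certificates, with certbot’s dry-run mode:

certbot renew --dry-run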

(Note: It’s possible to use Let’s Encrypt to create ECDSA certificates, but as of this writing you have to do most of the work manually (creating a CSR, etc.), and you lose the automatic renewal, so for the moment I suggest using RSA certificates. I hope this changes in the future.)

Linux: ‘top’ shows almost all of my memory being used! Should I panic?

The short answer is, of course, “no.” 🙂

A common mistake that Linux users make — indeed, one that even some less experienced Linux sysadmins sometimes make — is treating Linux as if memory technology hadn’t evolved since MS-DOS, when you needed as much memory as possible — especially conventional RAM (anyone remember that?) — to be free, so that applications could be loaded into it and run. If there wasn’t enough free memory, then applications wouldn’t run, or would run worse, so you’d have to “optimize” your DOS system to have as much free conventional memory as possible (using tricks such as loading device drivers into upper memory, and, of course, exiting a program completely before starting another.)

Somehow, that line of thinking still exists — even, apparently, in people who never used MS-DOS or similar systems before(!). “Free memory is good,” people think. “If most of the RAM is used up, then there’s a problem,” they insist — one to be solved either by killing applications or by adding more memory to the system.

Linux — like all modern systems, and I’ll even include Windows here 🙂 — uses virtual memory. Put simply (and, yes, I’m simplifying it a lot — there’s more to it, of course), it means that the amount of available memory is actually (mostly) independent of the physical system RAM, since the system uses the disk as memory, too. Since real RAM is (virtually) always faster — but (usually) smaller — than a disk drive, the system typically prefers to use it for more critical stuff (such as the kernel, device drivers, and currently actively running code), with less critical stuff (say, a program waiting for something to happen) being relegated to the disk — even though it’s still “memory”. Note that even most of the aforementioned “critical stuff” can be temporarily moved to the disk if needed.

You may now be asking: if RAM is typically preferred for “critical stuff” only, and that stuff doesn’t take up the entire available memory, then what’s the rest of the RAM used for? The answer is, of course, disk caching — both for reading (loading up recently and/or frequently accessed data from the disk and keeping it available in RAM for a while, in case it’s needed again in the future) and for writing (which works the other way around — the system tells the program the data is already written to disk, so that it can go on instead of waiting for it, but it’s still only in memory, to be written a short while later.) So, any free RAM is typically used for disk caching, with the system managing it automatically.

But people keep looking at the output of “top”, seeing 90% of the physical RAM used, and panicking. 🙂

In fact, this confusion was so common that modern Linux systems even changed some labels on the “top” command. For instance, this is from a Red Hat Enterprise Linux 6.x:

"top" on RHEL 6.x
“top” on RHEL 6.x. Nice machine, eh? 🙂

As seen above, the server has (rounding down) 567 GB of physical RAM, of which 506 GB — i.e. 89% — are used. Looks scary (if you skipped the explanation above), right? I mean, the server is at almost 90% capacity! But then you look at the value for “cached”, in the line below. That’s 479 GB — or 84% — used for disk cache! In other words, the entire operating system and applications are running in just 5% of the server’s RAM, with the rest of it being used just to make disk access faster.

And that “cached” value — and this is the important bit — counts as free memory. It’s available to the system whenever it needs it. In “computer terms”, it’s not technically free, it’s being used for the cache, but in “human terms”, it’s just as free as the one that actually says “free” in front of it.

As I said above, this caused so much confusion to users (for decades) that, eventually, it was changed. Here’s how it looks in a RHEL 7.x system:

“top” in RHEL 7.x. Puny machine, I know.

In the first line: 79044 KB free + 240444 KB used + 697064 KB for buffers and cache (which were separated in the old version, by the way) = 1016552 KB, which you’ll note is exactly the first value in the same line, for total physical memory.

In the second line, the first three values refer to swap, so they’re not relevant here, but the “avail Mem” one is new. What is it? According to the “top” manual page,

The avail number on line 2 is an estimation of physical memory available for starting new applications, without swapping. Unlike the free field, it attempts to account for readily reclaimable page cache and memory slabs.

In other words (and this is again an oversimplification), it’s what the system has readily available: the “free” part, plus (most of) what is being used for caching. That doesn’t mean it can only use that much memory; it just means that that’s what it can use right now, without swapping anything out. Which, in my opinion, ends up not mattering that much, in terms of a precise value. However, at least it now gives a better (though rough) idea of what part of the physical memory is available, not simply “free” (i.e. not doing anything).
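
As an aside, if you prefer the command line to reading top’s screen, the same estimate is exposed directly by the kernel (3.14 and newer, if memory serves) and summarized by the free command:

grep MemAvailable /proc/meminfo
free -m # recent versions show an "available" column, which is the same estimate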

So, when should you actually worry? Well, if there is very little memory free, and very little swap space free, and very little memory (compared to the total amount of physical RAM) being used for buffers/cache, then and only then may it be the case that the server actually needs more memory and/or swap space, or that some process has some kind of memory leak or is otherwise using much more memory than usual (if restarting the service fixes it, then this is probably the case.)

Note to Self #1: resolv.conf uses “nameserver”, not “server”

(Welcome to “Note to Self”, a new series of hopefully short posts where I attempt to remind my future self of the correct way of doing things I sometimes get wrong, even after decades.)

The correct syntax for specifying nameservers in /etc/resolv.conf is:

nameserver 1.2.3.4

not:

server 1.2.3.4

Worse, the system won’t complain; it’ll just ignore those entries, so you may wonder for a while why name resolution isn’t working.

Of course, when adding an entry to an existing file, you probably won’t make this basic mistake, as any current entries will use the correct “nameserver” syntax. But when creating a new file from scratch… not that it happened to me yesterday, oh no. 🙂

Linux: How to increase the size of an ext2/3/4 filesystem (using LVM)

(Insert obligatory disclaimer about this being basic stuff, but this blog being — among other things — about documenting stuff I’m asked about/help others with at work, because it may be useful to other users, etc. etc.)

If you work, or have worked, as a sysadmin, you’ve probably had to do this in the past, but, even on your own system — if you were wise enough to select “use LVM” during installation — you may find yourself facing a common problem: needing to grow a filesystem, such as / (the root filesystem), /opt, etc..

First, in many cases, you may not actually need to grow an existing filesystem: if you will need to store a lot of data in a specific directory (let’s say /opt/data, but /opt is currently almost full), then simply creating a new partition and mounting it (possibly mounting it first in a temporary location so that you can move the existing directory into it) as (for the above example) /opt/data, may well solve your problem. In this case, by the way, you don’t even need to be using LVM. You’ve just freed space on /opt and (depending on the new partition’s size) you now have a lot more room to grow on /opt/data as well.

But, since we want to challenge ourselves at least a little bit, let’s say we really need to grow that filesystem. As a prerequisite, the current disk must be a Logical Volume (LV) of an existing Volume Group (VG), made up of one or more Physical Volumes (PVs). Let’s also assume the filesystem you want to grow is /opt, that it’s an ext4 filesystem, and that it’s an LV named “lv_opt”, which is part of a VG called “vg_group1”, which currently has no free/unused space (if it did have enough unused space for your needs, you could skip to step 4 below.)

The process has several steps:

1- Add the new disk (or disks, but let’s say it’s just one for simplicity’s sake) to the system. This may mean adding a physical disk to a machine, or adding a new drive to a VM, or the company’s storage team presenting a new LUN, or… Anyway, the details of the many possibilities go beyond the scope of this tutorial, so let’s just assume that you have a new disk on your machine, and that Linux sees it (possibly after a reboot, in the case of a non-hotpluggable physical disk), as something like /dev/sdc (which we’ll use for the rest of the examples.)

2- Create the new PV: if you’re going to add the entire sdc disk to the VG, just enter:

pvcreate /dev/sdc

Otherwise, use fdisk to create a new partition inside it (sdc1), with the desired size, and change it to type “LVM” (it’s “8e” on fdisk). After exiting fdisk (saving the current configuration), enter (this is apparently not mandatory on modern Linux versions, but there’s no harm in it):

pvcreate /dev/sdc1

3- Add the new PV to the existing VG:

vgextend vg_group1 /dev/sdc # or /dev/sdc1, if the PV is just a partition instead of the entire disk

If you enter a “vgs” command now, you should see the added free space on the VG.

4- Extend the LV: if you want to add the entirety of the VG’s currently unused space, enter:

lvextend -l +100%FREE /dev/vg_group1/lv_opt

If, instead, you want to specify how much space to add (say, 10 GB):

lvextend -L +10G /dev/vg_group1/lv_opt

Note that one version of the command uses “-l” (lower case L) and the other uses “-L“. That’s because one refers to extents, while the other refers directly to size.

Now the LV has been extended, but wait! You still need to…

5- Extend the filesystem itself: just enter:

resize2fs /dev/vg_group1/lv_opt

Note that it’s a “2” in the command name, regardless of whether the filesystem is ext2, 3 or 4. This operation may take a while if the filesystem is large enough, but after it’s completed you should be able to see the new filesystem’s larger size (and added free space) with a “df” command.
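
By the way, on reasonably recent systems you can combine steps 4 and 5: lvextend’s “-r” (or “--resizefs”) option calls the appropriate filesystem resizer for you after extending the LV. Using this post’s names, that would be something like:

lvextend -r -l +100%FREE /dev/vg_group1/lv_opt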

Any questions or suggestions, feel free to add a comment.

Linux: What’s filling up my almost full filesystem?

Let’s say you realize (maybe because you got an alarm for it) that a particular filesystem — let’s say /qwerty — is full, or almost full, and you want to find out what’s taking up the most space. Simply enter something like:

du -k -x /qwerty | sort -n | tail -20

to get a list of the directories taking up the most space in that filesystem, sorted by size, with the largest ones at the bottom.

Note that the “/qwerty“, in this case, should be the filesystem’s top directory, not some subdirectory of it. In other words, it should be something that shows up on a “df” command.

Explanation:

“du -k -x” shows the subdirectories (and their total sizes) of the specified directory (or of the current one, if you don’t specify one). “-k” means that sizes are reported in kilobytes (KB) — this is not mandatory, but different versions of “du” may use other units, and this one is easy to read. “-x” means “don’t go out of the specified filesystem”, and that’s quite important here, as you could have a “/qwerty/asdfgh” filesystem, mounted in a subdirectory of “/qwerty”, but since it’s a different filesystem (appearing separately in the results of a “df” command) you won’t want to include it in this list.

“sort -n” is a simple numeric sort, and “tail -20” just means “show only the last 20 lines”. 20 is a reasonable number that fits in most terminals at their default sizes (typically 80×25), so that you don’t have to scroll up.
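
If your system has GNU coreutils (just about any Linux distro does), a human-readable variant, limited to the first level of subdirectories, would be something like:

du -xh --max-depth=1 /qwerty | sort -h

(“sort -h” knows how to sort sizes such as “1.5G”; it may not exist on older or non-GNU systems, hence the more portable version above.)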

(Yes, this is very basic, but, hey, one of the goals of this blog is to document stuff I help co-workers with, since, at least in theory, if someone has a question about something they need to do, then many other people may have the same doubt/need as well, and we have a couple of new team members whose backgrounds are not Linux/Unix-related, so…)

Linux: Creating and using an encrypted data partition

Encrypted disks and/or filesystems are nothing new on Linux; most distributions, these days, allow you to encrypt some or all of your disk partitions, so that they can only be accessed with a password. In this tutorial, however, we’re going to add a new encrypted partition to an existing system, using only the command line. I’ve found that tutorials on the web seem to make this issue more complex than it actually is, so here’s mine — hopefully it’ll be easier than most.

In this guide, we’ll be making a few assumptions. First, as said above, it’ll be an existing system, to which we’ve just added a new disk, /dev/sdb. Adapt to your situation, of course. If you’re not using an entire disk, just create a new partition with fdisk (e.g. /dev/sdb1) and use it instead. Second, we’ll have the machine boot with the disk unmounted, then use a command to mount it (asking for a passphrase, of course), and another to unmount it when you don’t need it. The obvious usage of such a partition is for storing sensitive/private data, but theoretically, you could run software on it — as long as you don’t automatically attempt to start it on boot, and don’t mind keeping it mounted most or all of the time, which perhaps defeats its purpose: if it’s mounted, it’s accessible.
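
For reference, the core of that workflow, using LUKS/cryptsetup (the usual tool for this on Linux), boils down to something like the following sketch (device and mapping names are just examples):

cryptsetup luksFormat /dev/sdb # one-time: sets the passphrase (and DESTROYS any existing data!)
cryptsetup open /dev/sdb secretdata # asks for the passphrase, creates /dev/mapper/secretdata
mkfs.ext4 /dev/mapper/secretdata # one-time: create a filesystem on it
mount /dev/mapper/secretdata /mnt/secret
umount /mnt/secret # when you're done...
cryptsetup close secretdata # ... and the data is locked again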

So, without further ado…

Continue reading “Linux: Creating and using an encrypted data partition”

Networking definitions: Allowing traffic, Port forwarding, NAT, and Routing

I’m more of a systems administrator than a network admin (though I’ve also worked as the latter, in the past), but, of course, one can’t be a sysadmin without knowing at least something about networking. And yet (as with my previous post about compiling stuff), I’ve found that it’s possible to do a perfectly good job as a sysadmin and still mix up a few networking concepts from time to time.

Therefore, I wanted to write about four different but related concepts that I’ve noticed people sometimes confuse.

1. Allowing traffic

This just means that a network interface allows traffic to a specific destination and/or port, possibly restricted to a specific source (an IP address, or a network). Any traffic that is not allowed is simply refused.

Note that this by itself doesn’t mean that the interface (of a server, a router, etc.) will do anything (or anything desirable, at least) with the received traffic. For instance, it may not have something listening on that port, or it may not be configured to route that traffic to somewhere useful. This just means that it doesn’t instantly block the connection.

This is typically controlled by a local software firewall such as iptables.

2. Port forwarding

Port forwarding just means: if you receive a packet on interface A, port B, then redirect it to IP address C, port D. “D” may be the same as “B”, and “C” may be on the same host or at the other end of the planet.

Again, note that the mere fact that there’s a matching redirect rule for the initial destination interface and port doesn’t mean that the host actually accepts redirecting the traffic (see 1.) or that it knows how to route it to its final destination (see 4.)

3. NAT

NAT, or Network Address Translation, can be seen as a special case of two-way port forwarding (see 2.). Basically, a router (which may well be a simple server with two or more network interfaces, physical or virtual; it doesn’t have to be a “router” bought in a store) accepts traffic from a (typically private) network, then translates it so that it goes to the destination address in the (typically public) network, with (and this is the important part) the source “masked” as the router’s public IP address, and a different source port that the router “remembers”, so that it knows how to handle the returning traffic and “untranslate” it so that it reaches the original source.

In short, NAT allows many private hosts in a network to access the Internet using a single public IP, and a single connection. (It can have other similar uses, even some not related to the public Internet, of course; this is just the most common one.)

4. Routing

Routing is simply a host knowing that a packet intended for IP address X should be directed to IP address Y.

Again, the obligatory caveats: this doesn’t mean that the traffic is even accepted before attempting to route it (see 1.), or that the host actually knows how to reach IP address Y itself (it may not have a local route to it). It may also be that the routing is correct, but there is no address translation (see 3.), so the destination host receives a packet in a public interface that claims to be from a private address, and refuses it. Finally, the destination host itself needs a route back to the original source, and it may not have one configured (or have a misconfigured one, causing asymmetric routing and possibly making the source refuse the returning traffic).
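
To make the distinction between the four concepts more concrete, here’s (roughly) how each one maps to Linux commands; addresses and interface names are examples, and a real setup will need more than this:

# 1. Allowing traffic: accept TCP traffic to port 443 from one network
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 443 -j ACCEPT

# 2. Port forwarding: whatever arrives on port 8080 goes to 10.0.0.5, port 80
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.0.0.5:80

# 3. NAT: "masquerade" a private network behind the public interface eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# 4. Routing: packets for 10.1.0.0/16 should be sent via 192.168.1.254
ip route add 10.1.0.0/16 via 192.168.1.254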

Thoughts? Corrections? Clarifications? Yes, I know this is relatively basic stuff. 🙂

Stuff you can do quickly with MRTG (that has nothing to do with router traffic)

Come on, everyone knows MRTG, right? That venerable tool for graphing router traffic, among other things. If you’ve worked as a sysadmin and/or network admin in the past, you’ve probably been familiar with it for decades, although these days you probably use something more modern, such as Cacti, Zabbix, etc..

The thing is, I still use MRTG a lot, old though it is, and even though there are many newer alternatives, since MRTG has, to me, one huge advantage. Well, two, in fact, though they really amount to one thing: ease of use. MRTG is simple: it doesn’t need to run as a daemon, it’s just a binary you can run through cron. And MRTG is versatile: unlike other tools that really, really want to work with network traffic, or system memory, or processor usage, or something else (preferably to be read through SNMP, or a local agent), MRTG just wants one or two values, and it’ll call the first one “input” and the second “output”. And those values can come from anywhere.

Just to show how easy it is: an MRTG configuration file usually looks something like this:

WorkDir:/var/www/htdocs/ptmail/

Refresh: 36000

Target[ptmail]: `/usr/local/bin/getptmail.sh`;
Options[ptmail]: growright, nopercent, integer, noinfo, gauge
MaxBytes[ptmail]: 12500000000000
kilo[ptmail]: 1000
YLegend[ptmail]: MB used / Emails
ShortLegend[ptmail]:  
LegendI[ptmail]:  MB used (uncompressed):
LegendO[ptmail]:  Emails:
Title[ptmail]: MB used / Emails
PageTop[ptmail]: <h1>MB used / Emails</h1>

XSize[ptmail]: 500
YSize[ptmail]: 250
XScale[ptmail]: 1.4
YScale[ptmail]: 1.4

Timezone[ptmail]: Europe/Lisbon

The key part is that “Target” line which, you’ll notice, is between backticks, meaning that it’ll actually execute a script. And what does it expect from that script? Simple: just four lines:

  • A number (which it’ll think of as “input”, and which will typically be graphed in green)
  • Another number (for “output”, to be graphed in blue)
  • The system’s uptime (on Linux, uptime | cut -d " " -f 4-7 | cut -d "," -f 1-2 typically provides it in the format MRTG wants, although it’s really just text)
  • The system’s name (just use uname -n)

That’s it! If you only want to graph a single value, just add “noi” or “noo” to the Options line (and you can then replace the first or second line, respectively, with “echo 0” in your script). You can also make it not show the uptime and name with the “noinfo” option. And, finally, the “gauge” option makes MRTG graph current values, while, without it, it treats the value as an ever-growing counter (like, for instance, a “bytes transferred” total) and graphs its rate of change. Both have their uses.

Now, you’re probably thinking: “yeah, yeah, I’ve known how to use MRTG for decades (and these days I use XYZ instead; who even uses MRTG in 2017?!?); why are you writing about something so basic and, well, old?” The answer is, again, 1) that it’s ridiculously easy to use (just create a script to write 4 lines), and 2) that it’s not for routers (or network interfaces) only; it can be used for so much else, and it can typically be done in minutes. So I’ll just show a few examples of stuff I already do with it on my servers. Each example will be just the shell script; the MRTG configuration file for these is virtually always the same thing, except for axis legends, labels, etc..
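
For the record, “running it through cron” typically means nothing more than a crontab line like this (the path and schedule are examples; the LANG=C bit is there because MRTG refuses to run in UTF-8 locales):

*/5 * * * * env LANG=C mrtg /etc/mrtg/ptmail.cfg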

Note: all of the following need the “gauge” option.

“Steal” time on a VPS

#!/bin/bash

# Graphs "steal" time on a VPS. A high value means you need to complain to your
# VPS provider that the host your server is on is overbooked, or there's
# another customer abusing it (mining Bitcoins? :) )

NUM=$((3 + ($RANDOM % 3)))

rm -f /tmp/steals.txt

top -b -n $NUM | grep Cpu | cut -d ',' -f 8 | tr -d ' ' | cut -d 's' -f 1 > /tmp/steals.txt

TOTAL=0 ; for i in `cat /tmp/steals.txt`; do TOTAL=`echo "$TOTAL + $i" | bc -l`; done

AVERAGE=`echo "$TOTAL / $NUM" | bc -l`

echo 0
echo $AVERAGE
uptime | cut -d " " -f 4-7 | cut -d "," -f 1-2
uname -n

Total size and number of emails in a Maildir

#!/bin/bash

# Total size and number of mails in a Maildir

# total UNCOMPRESSED size of your mailbox, in MB, 2 decimal cases
# actual compressed size will probably be much lower; if you'd rather show it,
# just replace the following with a "du -ms /home/ptmail/Maildir | cut -f 1"
find /home/ptmail/Maildir -type f | grep drax | cut -d '=' -f 2 | cut -d ',' -f 1 > /tmp/sizes-ptmail ; perl -e "printf \"%.2f\n\", `paste -s -d+ /tmp/sizes-ptmail | bc` / 1024 / 1024"

# number of emails
find /home/ptmail/Maildir -type f | grep drax | wc -l

uptime | cut -d " " -f 4-7 | cut -d "," -f 1-2
uname -n

System load (* 100) and number of processes

#!/usr/bin/php
<?php

# Graph system load (multiplied by 100) and number of processes.

# Yes, this is trivial and uses mostly shell commands. :) It's just to show
# that "scripts" invoked by MRTG can be in any language (in this case, PHP),
# not just shell scripts. You could even use compiled C code, for instance.

$x = exec ("uptime | cut -d ':' -s -f 5 | cut -d ',' -f 1 | cut -d ' ' -f 2");

$x*=100;

print $x . "\n";

system ("ps ax | wc -l");

system ("uptime | cut -d ' ' -f 4-7 | cut -d ',' -f 1-2");
system ("uname -n");

?>

Average time per request of a URL

#!/bin/bash

# Average time per request of the URL below. Run it locally to measure how fast
# your site and/or server is, or remotely to measure its network connection as well
# (though that would be less reliable; best use something like Pingdom)

# use "noi" for this one (hence the first "echo 0")

echo 0
ab -c 4 -n 100 -k https://zurgl.com/ | grep "Time per request" | head -n 1 | cut -d ':' -f 2 | tr -d ' ' | cut -d '[' -f 1

uptime | cut -d " " -f 4-7 | cut -d "," -f 1-2
uname -n

I could go on, but I’m sure you get the idea. 🙂