Nope, that gives you the user’s default shell. If the user’s shell is bash, and a script has a shebang (#!) in the first line to specify a different shell (e.g. #!/bin/ksh), the above will say it’s “/bin/bash“. And that’s not what we want here.
Some quick googling was in order, and most suggestions were either:
echo $0
or:
ps -p $$
These work fine in an interactive shell… but inside a script, they’ll reply with the script‘s path, not the shell.
Finally, I found one which worked. It requires /proc, which may not exist in some Unixes, but, AFAIK, it’s mostly universal in Linux:
readlink /proc/$$/exe
So, if, for instance, a user’s default shell is bash, the script:
#!/bin/ksh
readlink /proc/$$/exe
will reply (correctly) with something like (on a Red Hat 8.x system; other distros may have different versions of ksh and/or paths, by default):
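(The output itself is missing from this copy; it would be something along the lines of the following, with the exact path depending on how the distro packages ksh:)
/usr/bin/ksh93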
Let’s say you want to move a (relatively large) filesystem, with an application’s data files (including a lot of subdirectories, many of them with specific ACLs), to a faster disk (for instance). The old FS is mounted as /PROD, while the new disk is mounted as /NEW; the idea is to stop the application, rsync the data, and switch mount points. Easy, you think:
rsync -av /PROD/ /NEW/
So, the application guys stop the app, you enter the command above, then switch mount points so the old /NEW is now /PROD, and the old /PROD is now /OLD . Everything looks fine, the application guys start their thing again…
… and then complain about access problems. Oops, the rsync didn’t copy the ACLs — you needed the -A argument to include them (e.g. rsync -aAv).
But they took a while to notice it, so that there’s a lot of new data in the new filesystem, so simply doing a new rsync from /OLD to /PROD is unacceptable. You’d simply like to copy directory ACLs (for simplification’s sake, let’s assume there are no file ACLs here — but see below). But there are a couple thousand directories, and of course you don’t want to do it manually…
The solution I came up with was:
cd /OLD
find . -type d | cut -d '.' -f 2- > /tmp/directories.txt
IFS=$'\n' ; for i in `cat /tmp/directories.txt`; do getfacl "/OLD$i" | setfacl --set-file=- "/PROD$i" ; done
Basically, it stores a list of directories in /OLD, then, for each of them, it copies the ACL from /OLD/<directory> to /PROD/<directory>. setfacl's "--set-file=-" means "use the standard input as a file", which here comes from getfacl's output. The "IFS=$'\n'" bit makes the for loop cycle through entire lines, not "words"; otherwise, it would try to split paths with spaces in them.
A limitation of this is that any newly created directories in the new /PROD filesystem won’t be affected, but hopefully they won’t be too many.
What if you wanted to include files as well? Just remove the “-type d” in the find command. Note that, in this case, it’s much more likely that there are new files in the new /PROD FS that won’t have their ACLs corrected.
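As an aside (not part of the original solution, just a sketch of an alternative): getfacl and setfacl can handle the recursion themselves, which avoids the temporary file and the IFS trick, at the cost of copying file ACLs as well:
cd /OLD
getfacl -R . > /tmp/acls.txt
cd /PROD
setfacl --restore=/tmp/acls.txt
setfacl --restore applies each entry to the path recorded in the corresponding "# file:" header, relative to the current directory, which is why the second cd matters.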
Moving on from “should we do it?” (with the answer to most real-world scenarios being “yes, and as a bonus it can help block a lot of spambots“), here’s how to restrict several Internet services — Nginx, Apache, Postfix, and Dovecot — to TLSv1.2 or newer.
As usual, these are not complete guides for any of those servers; I’m assuming you already have them working fine (including TLS encryption), and just want to disable any TLS protocols lower than v1.2. (If you need to add TLS to a non-TLS server, see instructions for Nginx, Apache, Postfix, and Dovecot.)
Nginx:
In each virtual host's server section (or, even better, if you're using Let's Encrypt, in /etc/letsencrypt/options-ssl-nginx.conf or its equivalent), add the following, or replace any existing ssl_protocols entry:
ssl_protocols TLSv1.2 TLSv1.3;
Restart Nginx, and test it on SSL Labs. You should get something like this:
(Note: if you’re using a very old version of Nginx, it may not accept the “TLSv1.3” parameter and refuse to start; in such a case, remove it — or, better yet, upgrade your system. 🙂 )
Apache:
Similarly to Nginx, you can add one of the following to each Virtual Host, or to the global HTTPS configuration (typically in /etc/httpd/conf.d/ssl.conf), or, if using Let’s Encrypt, to /etc/letsencrypt/options-ssl-apache.conf :
SSLProtocol +TLSv1.2 +TLSv1.3
or:
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
Right now, they’ll do the same thing: allow TLSv1.2 and v1.3 only. Personally, I like the second version (which disables older protocols) better, for two reasons: 1) it’ll work even with some ancient Apache version that doesn’t recognize “TLSv1.3”, and 2) when future TLS versions are added, they’ll be enabled, making it more future-proof.
Again, you can test the new configuration on SSL Labs.
Postfix:
Add the following to /etc/postfix/main.cf (replacing any equivalent entries, if they exist).
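(The actual lines are missing from this copy; the usual Postfix parameters for this, which I assume is what the original showed, are the following, covering incoming and outgoing, opportunistic and mandatory TLS:)
smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1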
After restarting Postfix, you can test its available protocols with Immuniweb’s SSL Security Test (specify something like yourhostname:25, or yourhostname:465 if not using STARTTLS).
Dovecot:
Add the following (or replace it if it exists) to your SSL configuration (typically /etc/dovecot/conf.d/10-ssl.conf):
ssl_min_protocol = TLSv1.2
Restart Dovecot, and test it on Immuniweb’s test (use something like yourhostname:143 , or yourhostname:993 if not using STARTTLS).
TLSv1.0, used not only for HTTPS but also for secure SMTP, IMAP, etc., has been around for a while (since 1999, in fact), and, while not "hopelessly broken" like SSL 2 and 3 are, there have been many successful attacks/exploits against it over the past two decades. Most implementations (both servers and clients) have been patched to work around those, but there may always be another one just around the corner. Also, TLSv1.0 came from a time when it was desirable to support as many different ciphers as possible, which means that, by default, some ciphers later found to be insecure may still be enabled.
Therefore, the end of TLSv1.0 has been talked about for a while, and, indeed, the main browser vendors — Google, Mozilla, Apple, and Microsoft — have announced that they’ll deprecate TLSv1.0 (and 1.1, which is little more than a revision of 1.0) by early 2020.
Now, until a few days ago I still had TLSv1.0 enabled on my servers (including Nginx, Postfix and Dovecot); my belief then being that some encryption was always better than no encryption, and that, besides, we were mostly talking about public websites that didn't ask for user credentials or anything. Furthermore, any modern browser or email client defaults to secure protocols (TLSv1.2 or better), so there would never be, "in real life", a situation where important/sensitive data (say, me logging on to my own mail server, or to one of my WordPresses) was being transmitted in an insecure way.
On the other hand… doesn’t the above mean that, “in real life”, nobody will really be using TLSv1.0 “legitimately”? Doesn’t it mean that setting up TLSv1.2 as a minimum will have no adverse consequences?
So I investigated it a little:
in terms of browsers/clients, requiring TLSv1.2 or above means: 1) no Internet Explorer before version 11; 2) no Safari before 6.x; 3) no default Android browser before Android 4.4.x (but those users can still use Chrome or other non-default browsers). (You can check this out on SSL Labs). I don’t have a way to check email clients that old, but you can assume that anything released after the browsers just mentioned will be OK;
in terms of my own server logs (mostly Nginx, but also Postfix, in terms of opportunistic encryption), apparently only spambots and such are using TLSv1.0; any "legitimate" email servers use TLSv1.2 or 1.3. The same goes for my websites: any TLSv1.0 entries in my logs are always from some bot (and not one for a "well-known" search engine), not actual visitors.
In short: you can probably move to requiring TLSv1.2 or above now (and, indeed, I’ve done so since a few days ago), unless you’re in some very peculiar situation — maybe your site needs to be accessed by users of a company whose IT policy includes not using software newer than 20 years old? Or you have a CEO who’s always refused to upgrade his beloved Symbian phone, but who’d flip out if he couldn’t access your company’s webmail with it? 🙂 Otherwise, my advice is: go TLSv1.2 and don’t look back.
However, and since this post is already quite long, I’ll leave the specific instructions for disabling TLS before 1.2 in Nginx, Apache, Postfix, and Dovecot for the next one…
Some time ago, I noticed something like this in the SSL Labs rating of a site (not mine):
According to SSL Labs’ test, that little P means:
(P) This server prefers ChaCha20 suites with clients that don’t have AES-NI (e.g., Android devices)
Naturally, my curiosity was piqued, and a bit of investigating followed…
First (and briefly), the theory:
both AES and ChaCha20 ciphers are thought to be equally secure.
many modern CPUs provide hardware acceleration for AES, in which case it’s faster…
… however, if the CPU doesn’t accelerate AES, then ChaCha20 is faster.
Therefore, setting a web server (or any other kind of server, such as IMAP or SMTP, but those are outside this article's scope) to use either option all the time means that some clients won't get the optimal cipher(s) (in terms of performance) for their device. It probably won't be noticeable in 99% of cases, but, hey, I'm a geek, and geeks optimize stuff for fun, not just for real-world performance. 🙂
Fortunately, as the SSL Labs test shows, it’s possible to configure a server to use/prefer one cipher for non-hardware accelerated devices, and another cipher (or, more precisely, a list of preferred ciphers) for the rest.
(For the curious, the selection process works like this: if the client’s preferred cipher is ChaCha20, then the server assumes it’s a device without hardware AES and uses that. If the client’s top cipher is anything else, then the server uses its own cipher list.)
This guide shows how to do it in Nginx, using Let’s Encrypt certificates. I’m using an Ubuntu 19.04 server, but any relatively recent Nginx should work, as long as you’re using OpenSSL 1.1.0 or newer, or LibreSSL. If your distro still provides only OpenSSL 1.0.x or older by default, you can always compile a newer version to a separate directory, and then compile Nginx to use it: here are instructions for OpenSSL 1.1.x or LibreSSL.
This is actually relatively simple, though some specific distros/servers may require some other changes:
1- Give Nginx a list of ciphers that doesn’t have ChaCha20 in first place (you need to specify some AES ciphers first). The list I’m using (currently specified in /etc/letsencrypt/options-ssl-nginx.conf , since I’m using Let’s Encrypt) is:
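(The list itself is missing from this copy; a plausible reconstruction, with the AES-GCM suites ahead of the ChaCha20 ones and server preference enabled, would be:)
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers on;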
2- Edit your OpenSSL/LibreSSL configuration file (in Ubuntu/Debian it’s /etc/ssl/openssl.cnf) and, just before the first section in square brackets (in Ubuntu it’s “[ new_oids ]“), add the following:
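(The snippet is missing from this copy. With OpenSSL 1.1.1, the mechanism I'd expect here is the PrioritizeChaCha option, enabled through the library-wide default configuration; added as a whole just before [ new_oids ], something like this should do it:)
openssl_conf = default_conf

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = PrioritizeChaCha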
3- Restart Nginx, and test it on SSL Labs. If you see the superscript “P” at the right of each ChaCha20 line, everything went well. (Problems? Ask here, and maybe I, or some other reader, can help.)
You can also set up your Nginx to log ciphers (possibly to a separate log file), using something like:
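(The snippet is missing from this copy; a minimal sketch, using Nginx's $ssl_protocol and $ssl_cipher variables, with the log file name being my own choice, would be the following. The log_format line goes in the http block, the access_log line in the server block:)
log_format ssldata '$remote_addr [$time_local] $ssl_protocol $ssl_cipher "$http_user_agent"';
access_log /var/log/nginx/ssl-ciphers.log ssldata;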
And then access the site with some of your devices, noting what ciphers they use. For instance, both my (2015) i7 laptop and my (2016) ZenFone 3 Android phone use AES (yes, apparently the Snapdragon 625 chip includes AES acceleration), but a 2013 Nexus 7 tablet and a 2015 iPad Pro (Safari, in this case — all other examples used Chrome) went for ChaCha. This shows that everything’s working correctly — the devices that specifically want to use ChaCha are indeed using it, while the rest aren’t. And, yes, this means that the very same browser (in this case, Chrome on Android) has different cipher preference defaults depending on the hardware it’s running on.
Mar 22 14:10:01 xxxxxxxx crond[26561]: (root) PAM ERROR (Authentication token is no longer valid; new one required)
Mar 22 14:10:01 xxxxxxxx crond[26561]: (root) FAILED to authorize user with PAM (Authentication token is no longer valid; new one required)
chage -l root shows that its password has expired…
Now, why did root have password expiration enabled? It's a mystery 🙂 — probably someone ran a script configuring password expiration for all users and forgot to add some exceptions to it, root among them. Anyway, on to the fix.
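(The fix itself is missing from this copy; a minimal sketch, assuming the goal is simply to mark root's password as freshly changed and to disable its maximum age, would be:)
chage -d "$(date +%F)" -M -1 root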
A commenter on my old post about configuring security headers on Nginx (and getting an A or A+ rating on securityheaders.io, now moved to securityheaders.com) asked for a version for Apache, so… here it is. It’s basically the same, just with the appropriate syntax changes:
Apache setting
Explanation
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
Enforces HTTPS on the entire site. Don't use if you still need to provide HTTP, of course.
Header always set X-Content-Type-Options nosniff
Prevents browsers from trying to "guess" MIME types and such, forcing them to use what the server tells them.
Header always set X-Frame-Options SAMEORIGIN
Stops your site from being included in iframes on other sites.
Header always set X-XSS-Protection "1; mode=block"
Activates cross-site scripting (XSS) protection in browsers.
Header always set Referrer-Policy "unsafe-url"
Makes the site always send referrer information to other sites. NOTE: this is not the most secure setting, but it's the one I prefer; see below.
Header always set Content-Security-Policy "default-src https: data: 'unsafe-inline' 'unsafe-eval'"
Forces TLS (don't use if you still need to provide HTTP); prevents mixed content warnings. NOTE: again, this is not the most secure setting; see below.
About the last two settings, I’ll copy from my old post:
I choose to send referring information to other sites for a simple reason: I like to see where my own visitors come from (I don’t sell, share or monetize that in any way, it’s just for curiosity’s sake), and I’m a firm believer in treating others as I want to be treated. If you don’t care about that, you may want to change this to “no-referrer-when-downgrade“, or “strict-origin“.
As for the Content Security Policy, anything more “secure” than the above prevents (or at least makes it a lot more headache-inducing) the use of inline scripts and/or external scripts, which would mean no external tools/scripts/content (or at least a lot of work in adding every external domain to a list of exceptions), and a lot of hacking on software such as WordPress (seriously: add the setting, but then remove the “unsafe-*” options and see what stops working…). You might use the most paranoid settings for an internally developed, mostly static site, but I fail to see the point. So, this and the paragraph above are the reasons for an A rating, instead of an A+.
For extra fun, instead of Debian/Ubuntu, we'll be using Red Hat/CentOS (more precisely, CentOS 7.6 with the latest updates as of January 10th, 2019, but this shouldn't change for any other 7.x version of either CentOS or RHEL). And we'll keep firewalld and SELinux enabled, even though, as far as I know, in most companies (well, at least the ones I've worked at) it's typically official policy to disable both. 🙂
So, let’s assume you already have a CentOS/RHEL machine, with internet access and the default repositories enabled. First, we’ll enable the EPEL repository (to provide Let’s Encrypt’s certbot):
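(The command is missing from this copy; on CentOS it should simply be the following, while on RHEL proper you'd install the epel-release RPM from the Fedora project instead:)
yum -y install epel-release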
Now we’ll install certbot (which will also install Apache’s httpd and mod_ssl as dependencies) and wget (which we may need later):
yum install python2-certbot-apache wget
To run certbot and allow it to verify your site, you should first create a simple vhost. Add this to your Apache configuration (e.g. a new file ending in .conf in /etc/httpd/conf.d/):
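(The example vhost is missing from this copy; a minimal sketch, with mysite.mydomain.com and the document root as placeholder values, would be:)
<VirtualHost *:80>
    ServerName mysite.mydomain.com
    DocumentRoot /var/www/html
</VirtualHost>
Restart Apache (systemctl restart httpd) and run certbot. Its exact invocation isn't preserved either, but given the 4096-bit key implied by the Key Exchange discussion below, it was presumably something like:
certbot --apache --rsa-key-size 4096 -d mysite.mydomain.com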
You should now have the virtual host configured in /etc/httpd/conf.d/ssl.conf . Let’s add a few lines to it (inside the <VirtualHost></VirtualHost> section):
Header always set Strict-Transport-Security "max-age=15768000"
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384
SSLCipherSuite TLSv1.3 TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384
SSLHonorCipherOrder on
SSLCompression off
(Don’t mind the “TLSv1.3” line for now, it won’t be used — if at all — until later in this post, and won’t have any effect if it isn’t.)
This should get you an A+ rating, with all bars at 100% except for Key Exchange. A good start, right? 🙂 However, that final bar can be a bit tricky. The problem here is that SSL Labs limits the Key Exchange rating to 90% if the default ECDH curve is either prime256v1 or X25519, the first of them being the OpenSSL default… and Apache 2.4.6 (the version included in all 7.x RHELs/CentOSes) doesn’t yet include the option to specify a different list of curves (of which secp384r1 will give a 100% rating, and won’t add any compatibility issues).
Assuming you don’t want to install a non-official Apache, then the only way (as far as I know) to specify a new curve is to add the results of running:
openssl ecparam -name secp384r1
to your certificate file. But Let’s Encrypt certificates are renewed often, so you may need some trickery here — such as adding something like this to root’s crontab:
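(The original entry is missing from this copy; a rough sketch, with the certificate path being an assumption based on certbot's defaults, and guarded so the parameters are only appended once per renewed certificate:)
@daily grep -q 'BEGIN EC PARAMETERS' /etc/letsencrypt/live/mysite.mydomain.com/cert.pem || openssl ecparam -name secp384r1 >> /etc/letsencrypt/live/mysite.mydomain.com/cert.pem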
Alternatively, you can upgrade Apache to a newer version. CodeIT (for instance) provides replacement packages for RHEL and CentOS, which you can use by doing:
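(The exact commands are missing from this copy; roughly, it's a matter of dropping CodeIT's .repo file into /etc/yum.repos.d/ (that's what we installed wget for) and updating Apache, along these lines, with the repo URL to be double-checked on their site:)
cd /etc/yum.repos.d/
wget https://repo.codeit.guru/codeit.el7.repo
yum clean all
yum -y update httpd mod_ssl
After the upgrade, the newer mod_ssl understands curve lists, so the curve preference referred to below was presumably set with something like:
SSLOpenSSLConfCmd Curves secp384r1:X25519:prime256v1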
However! This will still not give 100%, since CodeIT's Apache appears to come with a newer OpenSSL (1.1.1) statically linked to it, which means it'll enable TLS 1.3 by default. And there's nothing wrong with that… except that, from all the tests I've performed, Apache ignores the specified order of the ECDH curves when using TLS 1.3. With the line above, it'll use secp384r1 for TLS 1.2, but X25519 for TLS 1.3 — and that's enough to lower the Key Exchange bar to 90% again. 🙁
Given this, there are now several options:
1- disable TLS 1.3 (it’s still relatively new, and most people are still using 1.2): change the line:
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
to:
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1 -TLSv1.3
2- force secp384r1: change the “SSLOpenSSLConfCmd Curves” line to simply:
SSLOpenSSLConfCmd Curves secp384r1
3- ignore this situation altogether: X25519 is considered to be as secure as secp384r1, if not more; even SSL Labs mentions not penalizing it in the future in their (early 2018) grading guide — they just have been a bit lazy in updating their test. It’s quite possible that a future update of their tool will give a full 100% score to X25519.
Anyway, most of this was done for fun, for the challenge of it. 🙂
This post should be titled “How to get a 100% score on SSL Labs (Nginx, Let’s Encrypt) as of April 2018“, since SSL Labs‘s test evolves all the time (and a good thing it does, too.) But that would be too long a title. 🙂
The challenge:
As we’ve seen before, it’s relatively easy to get an A+ rating on SSL Labs. However, even that configuration will only give a score like this:
Certainly more than good enough (and, incidentally, almost certainly much better than whatever home banking site you use…), but your weirdo of a host can’t look at those less-than-100% bars and not see a challenge. 🙂
First, note that all of the following will be just for fun, as the settings to get 100% will demand recent browsers (and, in some cases, recent operating systems/devices), so you probably don’t want this in a world-accessible site. It would be more applicable, say, for a webmail used by you alone, or by half a dozen people. But the point here isn’t real-world use, it’s fun (OK, OK, and learning something new). So, let’s begin.
Getting that perfect SSL Labs score:
For this example, I’ll be using a just-installed Debian 9 server, in which I did something like:
apt -y install nginx
echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list.d/stretch-backports.list # if you don't have this repository active already
apt -y install python-certbot-nginx -t stretch-backports
(The reason for the above extra complexity is the fact that the default certbot in Debian Stretch is too old and tries to authenticate new certificates in an obsolete way, but there’s a newer one in stretch-backports.)
I also created a simple index.html file on /var/www/html/ that just says “Hello world!”. Right now, the site doesn’t even have HTTPS.
So, let’s create a new certificate and configure the HTTPS server in Nginx:
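(The command is missing from this copy; given the 4096-bit key mentioned further down, it was presumably along these lines, with mysite.mydomain.com as a placeholder:)
certbot --nginx --rsa-key-size 4096 -d mysite.mydomain.com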
Testing it on SSL Labs, it gives… exactly the same bars as in the image above, but with an A rating instead of A+. Right, we need HSTS to get A+ (and currently certbot doesn’t support configuring it automatically in Nginx), so we add this to the virtual host’s server section:
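(The exact line is missing from this copy; a typical version, with the max-age value being my assumption, is:)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;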
(Note: remove “includeSubDomains;” if you have HTTP sites on any subdomain. Also, HSTS makes your site HTTPS-only, so you can’t use it if you want to keep an HTTP version — in which case you can’t get better than an A rating.)
This upgrades the rating to A+, as expected, but the bars still didn’t change. Let’s make them grow, shall we?
Certificate:
It’s already at 100%, so yay. 🙂
Protocol Support:
Just disable any TLS lower than 1.2. In this case, edit /etc/letsencrypt/options-ssl-nginx.conf and replace the ssl_protocols line (or comment it and add a new one) with:
ssl_protocols TLSv1.2;
WARNING: any change you make to the file above will affect all virtual hosts on this Nginx server. If you need some of them to support older protocols, it’s better to just comment out that option in that file, and then include it in each virtual host’s server section.
EDIT: actually, it turns out that ssl_protocols affects the entire server, with the first option Nginx finds (first in /etc/nginx/nginx.conf, then in the default website, which possibly includes /etc/letsencrypt/options-ssl-nginx.conf) taking precedence. So just choose whether you want TLS 1.0 to 1.2 (maximum compatibility) or 1.2 only (maximum security/rating); if you need both, use two separate servers.
Key Exchange:
We've already created the new certificate with a 4096-bit key (with "--rsa-key-size 4096", see above), and according to SSL Labs' docs this should be enough… but it isn't. After some googling, I found out that you also need to add the following option inside the server section in Nginx:
ssl_ecdh_curve secp384r1;
This is needed because the default Nginx+OpenSSL/LibreSSL setting, either "X25519" or "secp256r1" (actually "prime256v1"), also lowers the score.
EDIT: again, ssl_ecdh_curve affects the entire server, so you can’t use different default curves for each virtual host. So I’d suggest (in /etc/letsencrypt/options-ssl-nginx.conf):
ssl_ecdh_curve secp384r1:X25519:prime256v1;
if you want the 100% rating, or:
ssl_ecdh_curve X25519:prime256v1:secp384r1;
otherwise. This one doesn’t affect compatibility, by the way; it’s just a question of the preferred order.
The certificate’s key size (4096 or 2048) is, like the certificate itself, specific to each virtual host.
Cipher Strength:
For 100% here, you need to disable not only any old protocols, but also any 128-bit ciphers. I started from Mozilla’s SSL tool (with the “Modern” option selected), then removed anything with “128” in its name, then moved ChaCha20 to the front just because, and ended up with this line, which I added to /etc/letsencrypt/options-ssl-nginx.conf (replacing the current setting with the same name):
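(The resulting line is missing from this copy; following the recipe just described, 256-bit suites only with ChaCha20 first, it would look something like:)
ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;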
(This time, this setting can indeed be specified for each virtual host, though you’d need to comment it out in the file above (which affects all virtual hosts with Let’s Encrypt certificates) if you want to do so, otherwise they’ll conflict.)
The result:
As said near the beginning, this is not something you likely want to do (yet) for a production, world-accessible site, as it will require relatively modern browsers, and you typically can't control what visitors use. According to SSL Labs' list, it's not actually that bad: all versions of Chrome, Firefox, or Safari from the past couple of years are fine, but no Microsoft browser older than IE11 will work, nor will Android's default browser before 7.0 (Chrome on Android, which is not the same thing, will do fine.) I'd suggest it for a site where you can "control" the users, such as a small or medium-sized company's webmail or employee portal (where you, as a sysadmin, can and should demand up-to-date security from your users).
But the main point of this exercise was, of course, to see if I could do it. Challenging yourself is always good. 🙂 And if you can share what you learned with others, so much the better.
If you normally browse the web (and you don't block ads), you've probably already seen hundreds, if not thousands, of advertisements for "VPNs" or "VPN software/subscriptions" — how you "need" one to have any kind of privacy, how "they" can track you everywhere if you don't use one, and so on. And they're usually not cheap. Interestingly, and probably because of some recent "revelations", all those advertisements seem to focus on privacy or anonymity only.
But using a VPN can offer you something else: mobile security; namely, the ability to make your mobile devices “be”, on demand, part of your home network (or your server’s network, if you have, say, a VPS somewhere), no matter where you are, regardless of whether you’re using 3G/4G mobile data, or some random WiFi hotspot. Yes, even one with no encryption at all; it won’t matter. You can, absolutely, trust random open hotspots again; even their owner won’t be able to read or alter your traffic in any way. And you can do it for free, too (assuming you already have an always-connected Linux server); you just need to configure your own OpenVPN server, which you’ll be able to do by following this (hopefully accessible) guide.
(Yes, you read the title correctly. For extra fun, and to prevent this blog from being too focused on Ubuntu/Debian, this time I’ll be using Red Hat Enterprise Linux / CentOS (and, I assume, Fedora as well.) Later on, I may post a Debian-based version.)
Configuring a basic HTTP site on Nginx
(Note: if you already have a working HTTP site, you can skip to the next section (“Adding encryption…”))
Yes, the post title mentions an "existing" website, which I believe will be the case in most "real world" situations, but installing a new one is actually very easy on CentOS. First, do:
yum -y install epel-release; yum -y install nginx
Then create a very basic configuration file for the (non-HTTPS) site, as /etc/nginx/conf.d/mysite.conf :
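(The file's contents are missing from this copy; a minimal sketch, with mysite.mydomain.com as a placeholder, would be:)
server {
    listen 80;
    server_name mysite.mydomain.com;
    root /var/www/mysite;
    index index.html;
}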
Then, of course, create the /var/www/mysite directory (CentOS doesn’t use /var/www by default, but I’m far too used to it to change. 🙂 ) If you’d like, create an index.html text file in that directory, restart nginx (“service nginx restart” or “systemctl restart nginx“, depending on your system’s major version), and try browsing to http://mysite.mydomain.com . If it works, congratulations, you have a running web server and a basic site.
Adding encryption to the site (not using Let’s Encrypt):
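(The first step is missing from this copy; it presumably covered obtaining a certificate, from a commercial CA or self-signed, and pointing Nginx at it. A rough sketch of the self-signed variant, with the file paths and names being my own choices:)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=mysite.mydomain.com" -keyout /etc/pki/tls/private/mysite.key -out /etc/pki/tls/certs/mysite.crt
The corresponding ssl_certificate and ssl_certificate_key lines then go inside the HTTPS server section you'll create below:
ssl_certificate /etc/pki/tls/certs/mysite.crt;
ssl_certificate_key /etc/pki/tls/private/mysite.key;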
Second, edit the site’s configuration file (in the “starting from scratch” example above, it’s “/etc/nginx/conf.d/mysite.conf“), and copy the entire server section so that it appears twice on that text file (one after the other). Pick either the original or the copy (not both!), and, inside it, change the line:
listen 80;
to:
listen 443 ssl http2;
(Note: the “http2” option is only available in Nginx 1.9.5 or newer. If your version complains about it, just remove it, or upgrade.)
This should be enough — restart Nginx and you should have an HTTPS site as well as the HTTP one.
And what if you want to disable HTTP for that site and use HTTPS only? Just edit the same configuration file, look for the server section you didn’t change (the one that still includes “listen 80;“), and replace the inside of that section with:
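(The replacement content is missing from this copy; a typical HTTP-to-HTTPS redirect, matching the note below about the host name, would be:)
listen 80;
server_name mysite.mydomain.com;
return 301 https://mysite.mydomain.com$request_uri;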
(replacing “mysite.mydomain.com” with yours, of course.)
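If you'd rather have Let's Encrypt handle the certificate instead, the corresponding commands are missing from this copy, but on CentOS 7 they'd presumably be along these lines (package name as found in EPEL):
yum -y install python2-certbot-nginx
certbot --nginx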
Answer the questions it asks you: a contact email, whether you agree with the terms (you need to say yes to this one), if you want to share your email with the EFF, and finally if you want “No redirect” (i.e. keep the HTTP site) or “Redirect” (make your site HTTPS only).
And that’s it (almost — see the next paragraph) — when you get the shell prompt back, certbot will already have reconfigured Nginx in the way you chose in the paragraph above, and restarted it so that it’s running the new configuration. You may want to add “http2” to the “listen 443 ssl;” line in the configuration file (it’ll probably be the default someday, but as of this post’s date it isn’t), and don’t forget your options for improved security and security headers.
Only one thing is missing: automatically renewing certificates. Strangely, the certbot package configures that automatically on Ubuntu, but not on CentOS, from what I’ve seen (please correct me if I’m wrong). The official Let’s Encrypt docs recommend adding this (which includes some randomization so that entire timezones don’t attempt to renew their certificates at precisely the same time) to root’s crontab:
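(The entry itself is missing from this copy; the one the certbot docs have recommended, give or take the schedule, is:)
0 0,12 * * * python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew -q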
(Note: It’s possible to use Let’s Encrypt to create ECDSA certificates, but as of this writing you have to do most of the work manually (creating a CSR, etc.), and you lose the automatic renewal, so for the moment I suggest using RSA certificates. I hope this changes in the future.)
A common mistake that Linux users make — indeed, one that even some less experienced Linux sysadmins sometimes make — is treating Linux as if memory technology hadn't evolved since MS-DOS, when you needed as much memory as possible — especially conventional RAM (anyone remember that?) — to be free, so that applications could be loaded into it and run. If there wasn't enough free memory, then applications wouldn't run, or would run worse, so you'd have to "optimize" your DOS system to have as much free conventional memory as possible (using tricks such as loading device drivers into upper memory, and, of course, exiting a program completely before starting another.)
Somehow, that line of thinking still exists — even, apparently, in people who never used MS-DOS or similar systems before(!). “Free memory is good,” people think. “If most of the RAM is used up, then there’s a problem,” they insist — one to be solved either by killing applications or by adding more memory to the system.
Linux — like all modern systems, and I'll even include Windows here 🙂 — uses virtual memory. Put simply (and, yes, I'm simplifying it a lot — there's more to it, of course), it means that the amount of available memory is actually (mostly) independent of the physical system RAM, since the system uses the disk as memory, too. Since real RAM is (virtually) always faster — but (usually) smaller — than a disk drive, the system typically prefers to use it for more critical stuff (such as the kernel, device drivers, and currently running code), with less critical stuff (say, a program waiting for something to happen) being relegated to the disk — even though it's still "memory". Note that even most of the aforementioned "critical stuff" can be temporarily moved to the disk if needed.
You may now be asking: if RAM is typically preferred for “critical stuff” only, and that stuff doesn’t take up the entire available memory, then what’s the rest of the RAM used for? The answer is, of course, disk caching — both for reading (loading up recently and/or frequently accessed data from the disk and keeping it available in RAM for a while, in case it’s needed again in the future) and for writing (which works the other way around — the system tells the program the data is already written to disk, so that it can go on instead of waiting for it, but it’s still only in memory, to be written a short while later.) So, any free RAM is typically used for disk caching, with the system managing it automatically.
But people keep looking at the output of “top”, seeing 90% of the physical RAM used, and panicking. 🙂
In fact, this confusion was so common that modern Linux systems even changed some labels on the “top” command. For instance, this is from a Red Hat Enterprise Linux 6.x:
As seen above, the server has (rounding down) 567 GB of physical RAM, of which 506 GB — i.e. 89% — are used. Looks scary (if you haven’t been reading so far), right? I mean, the server is at almost 90% capacity! But then you look at the value for “cached”, in the line below. That’s 479 GB — or 84% — used for disk cache! In other words, the entire operating system and applications are running in just 5% of the server’s RAM, with the rest of it being used just to make disk access faster.
And that "cached" value — and this is the important bit — counts as free memory. It's available to the system whenever it needs it. In "computer terms", it's not technically free, it's being used for the cache; but in "human terms", it's just as free as the memory that actually says "free" in front of it.
As I said above, this caused so much confusion to users (for decades) that, eventually, it was changed. Here's what it looks like in a RHEL 7.x system:
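(The screenshot isn't reproduced in this copy; reconstructed from the figures quoted below, and therefore only approximate, the memory line looked something like this, with the swap/avail line omitted:)
KiB Mem :  1016512 total,    79044 free,   240444 used,   697064 buff/cache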
In the first line: 79044 KB free + 240444 KB used + 697064 KB for buffers and cache (which were separated in the old version, by the way) = … 1016512 KB, which you’ll note is exactly the first value in the same line, for total physical memory.
In the second line, the first three values refer to swap, so they’re not relevant here, but the “avail Mem” one is new. What is it? According to the “top” manual page,
The avail number on line 2 is an estimation of physical memory available for starting new applications, without swapping. Unlike the free field, it attempts to account for readily reclaimable page cache and memory slabs.
Put another way (and this is again an oversimplification), it's what the system has readily available: the "free" part, plus (most of) what is currently being used for caching. That doesn't mean it can only use that much memory; it just means that's what it can use right now, without swapping anything out. Which, in my opinion, ends up not mattering that much as a precise value. However, at least it now gives a better (though rough) idea of how much of the physical memory is available, rather than simply "free" (i.e. not doing anything).
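(A quick way to see the same figures without top, assuming a kernel recent enough to expose MemAvailable, which RHEL 7 does, is:)
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached)' /proc/meminfo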
So, what about those specific cases mentioned (in a note) at the beginning? Well, if there is very little memory free and very little swap space free and very little memory (compared to the total amount of physical memory) is being used for buffers/cache, then and only then may it be the case that the server actually needs more memory and/or swap space, or that some process has some kind of memory leak or is otherwise using much more memory than usual (if restarting the service fixes it, then this is probably the case.)
(Welcome to “Note to Self”, a new series of hopefully short posts where I attempt to remind my future self of the correct way of doing things I often sometimes get wrong, even after decades.)
The correct syntax for specifying nameservers in /etc/resolv.conf is:
nameserver 1.2.3.4
not:
server 1.2.3.4
Worse, the system won’t complain; it’ll just ignore those entries, so you may wonder for a while about why name resolution is not working.
Of course, when adding an entry to an existing file, you probably won’t make this basic mistake, as any current entries will use the correct “nameserver” syntax. But when creating a new file from scratch… not that it happened to me yesterday, oh no. 🙂
SPF, or Sender Policy Framework, is a way for domain owners to say: “these are the email servers my domain sends mail from; anyone else is attempting to impersonate me.” These days, a lot of email services implement it in some way (both for receiving and sending email), and, if you administer your own server, it’s quite easy to implement, as we’ll see.
There are actually two parts to SPF: configuring it for your domain (so that your outgoing emails have a better chance of being accepted and emails impersonating your domain have a better chance of being refused), and configuring your email server to check whether incoming emails come from an SPF authorized server or not. Without further ado:
1- Configuring SPF for your domain’s outgoing mail:
This is actually done in DNS, not in the email server. Simply add a TXT record for your domain, with something like this:
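(The example record is missing from this copy; given the description below, with its two "a:" entries and the "-all" at the end, it would have been something like this, with the host names as placeholders:)
v=spf1 a:mail1.mydomain.com a:mail2.mydomain.com -all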
There are other configuration parameters, such as getting the servers from the domain’s MX record (add “mx“), and copying another domain’s authorized servers (“include:otherdomain.com“). And, of course, if you only have one server sending mail, you don’t need two “a:” entries.
Pay special attention to the “-all” part at the end: that instructs other servers to refuse any email apparently coming from your domain that isn’t authorized for it. That should ideally be the final configuration, but if you’re doing this for the first time you may want to change it to “~all“, which means “softfail”: it suggests to the other side that the mail is probably not legit, but that it shouldn’t refuse it outright because of it. After testing it for a while, if you see no problems then you can change it to “-all“.
2- Configuring SPF checking for incoming mail in Postfix:
I’ll assume you already have a working Postfix server (hopefully already using TLS encryption, hint hint), and that you want to respect any “-all” configurations (see above) in incoming mail — in other words, if a domain says “all my mail comes from server X, refuse anything else” and you receive an email purportedly from that domain, but from a server Y, it’s refused (for “~all“, it’d generate a warning).
First, install pypolicyd-spf. On a Debian-based system, the package (as of this writing) still has its old name, so enter the following:
apt install postfix-policyd-spf-python
Edit /etc/postfix/master.cf, and add:
# spf
policyd-spf unix - n n - 0 spawn
user=policyd-spf argv=/usr/bin/policyd-spf
Then edit /etc/postfix/main.cf, and look for the “smtpd_recipient_restrictions” section. That section will probably begin by accepting mail from your networks and from authenticated users, then rejecting non-authorized relaying, possibly followed by some white- or blacklists, and maybe a couple of filters, finally ending with “permit“. Before that “permit“, add:
check_policy_service unix:private/policyd-spf,
(note the comma at the end; without it, Postfix wouldn’t read the rest of that section.)
IMPORTANT: while you’re still in testing, it may be a good idea to add “warn_if_reject” to the beginning of the previous configuration line. With it, Postfix will still log SPF failures, but won’t actually reject an email because of it. When you’re certain everything is fine, you can then remove that prefix (and then reload Postfix, of course).
Add also the following to that configuration file, outside of any section:
policyd-spf_time_limit = 3600
And you’re done. Reload or restart Postfix, and check your logs to see if everything is working as intended.
Note that other tools, such as SpamAssassin, may also use SPF to weigh whether an incoming mail is more or less likely to be spam. That, however, is unrelated to the Postfix configuration above, which does a binary “block/don’t block”.
(Insert obligatory disclaimer about this being basic stuff, but this blog being — among other things — about documenting stuff I’m asked about/help others with at work, because it may be useful to other users, etc. etc.)
If you work, or have worked, as a sysadmin, you’ve probably had to do this in the past, but, even on your own system — if you were wise enough to select “use LVM” during installation — you may find yourself facing a common problem: needing to grow a filesystem, such as / (the root filesystem), /opt, etc..
First, in many cases, you may not actually need to grow an existing filesystem: if you will need to store a lot of data in a specific directory (let's say /opt/data, but /opt is currently almost full), then simply creating a new partition and mounting it (possibly mounting it first in a temporary location so that you can move the existing directory into it) as (for the above example) /opt/data may well solve your problem. In this case, by the way, you don't even need to be using LVM. You've just freed space on /opt and (depending on the new partition's size) you now have a lot more room to grow on /opt/data as well.
But, since we want to challenge ourselves at least a little bit, let's say we really need to grow that filesystem. As a requisite, the current disk must be a Logical Volume (LV) of an existing Volume Group (VG), made up of one or more Physical Volumes (PVs). Let's also assume the filesystem you want to grow is /opt, that it's an ext4 filesystem, and that it's an LV named "lv_opt", which is part of a VG called "vg_group1", which currently has no free/unused space (if it did have enough unused space for your needs, you could skip to step 4 below.)
The process has several steps:
1- Add the new disk (or disks, but let’s say it’s just one for simplicity’s sake) to the system. This may mean adding a physical disk to a machine, or adding a new drive to a VM, or the company’s storage team presenting a new LUN, or… Anyway, the details of the many possibilities go beyond the scope of this tutorial, so let’s just assume that you have a new disk on your machine, and that Linux sees it (possibly after a reboot, in the case of a non-hotpluggable physical disk), as something like /dev/sdc (which we’ll use for the rest of the examples.)
2- Create the new PV: if you’re going to add the entire sdc disk to the VG, just enter:
pvcreate /dev/sdc
Otherwise, use fdisk to create a new partition inside it (sdc1), with the desired size, and change it to type “LVM” (it’s “8e” on fdisk). After exiting fdisk (saving the current configuration), enter (this is apparently not mandatory on modern Linux versions, but there’s no harm in it):
pvcreate /dev/sdc1
3- Add the new PV to the existing VG:
vgextend vg_group1 /dev/sdc # or /dev/sdc1, if the PV is just a partition instead of the entire disk
If you enter a “vgs” command now, you should see the added free space on the VG.
4- Extend the LV: if you want to add the entirety of the VG’s currently unused space, enter:
lvextend -l +100%FREE /dev/vg_group1/lv_opt
If, instead, you want to specify how much space to add (say, 10 GB):
lvextend -L +10G /dev/vg_group1/lv_opt
Note that one version of the command uses “-l” (lower case L) and the other uses “-L“. That’s because one refers to extents, while the other refers directly to size.
Now the LV has been extended, but wait! You still need to…
5- Extend the filesystem itself: just enter:
resize2fs /dev/vg_group1/lv_opt
Note that it’s a “2” in the command name, regardless of whether the filesystem is ext2, 3 or 4. This operation may take a while if the filesystem is large enough, but after it’s completed you should be able to see the new filesystem’s larger size (and added free space) with a “df” command.
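(As an aside, not part of the original steps: lvextend can also perform step 5 for you, via its -r/--resizefs option, e.g.:)
lvextend -r -L +10G /dev/vg_group1/lv_opt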
Any questions or suggestions, feel free to add a comment.