How to configure TLS encryption in Dovecot

As with most other internet services, Dovecot can be configured to use TLS encryption — and, unlike some others (such as web servers or SMTP servers), there's little reason not to enforce it.

Notes:

  • Like other recent TLS tutorials on this blog, this is, of course, not a full Dovecot guide — that would be far too complex for a blog post. Instead, it’s just focused on enabling/configuring TLS encryption. I’ll be assuming you already have Dovecot configured, and are able to access mailboxes on your server using an email client.
  • I’ll be focusing on IMAP only, not POP3 or anything else.
  • The server will be configured to require encryption, for both privacy and security reasons. I don't know of any modern email client that doesn't allow encrypted connections (and even if you find one, you might work around it by configuring an encrypted tunnel, but that falls outside this post's scope).

1. Getting a certificate

While I believe most modern email clients will support ECDSA certificates, such clients are not a known quantity like, say, web browsers are, so I suggest that you create an RSA certificate, unless you already have one whose Common Name (CN) matches your server’s public name (e.g. mail.domain.com). Dovecot versions 2.2.31 and newer support configuring alternative certificates so that you could support both kinds at the same time, but there’s probably little gain in doing that, since most clients will likely default to RSA anyway (and, unless you’re using Let’s Encrypt, certificates are not free).

So, put your certificate and private key in /etc/dovecot/. Let's assume the certificate is called myserver-full.crt, and the private key is myserver.key. You should protect the private key from regular users on your server, so do, for instance,

chown root:dovecot /etc/dovecot/myserver.key
chmod 640 /etc/dovecot/myserver.key

2. Configuring Dovecot for TLS

I’m assuming your Dovecot installation has a basic configuration file in /etc/dovecot/dovecot.conf, but the real “meat” of the configuration is in included files in /etc/dovecot/conf.d/. Your system may be slightly different, but I’m sure you can adapt. 🙂 Also, the numbers at the beginning of the file names may differ in your system.

This one isn’t really related to TLS, but it’s a good idea: edit /etc/dovecot/conf.d/20-imap.conf, and look for a line like this:

mail_plugins = $mail_plugins

and add to it (separated by a space):

imap_zlib
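
so that the line ends up looking something like this:

mail_plugins = $mail_plugins imap_zlib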

After all, there’s no reason not to use compression here, and bandwidth (especially on mobile) is still precious.

Now, edit /etc/dovecot/conf.d/10-ssl.conf, and add:

ssl = required
ssl_cert = </etc/dovecot/myserver-full.crt
ssl_key = </etc/dovecot/myserver.key
ssl_cipher_list = ECDHE-RSA-CHACHA20-POLY1305:ALL:!LOW:!SSLv2:!EXP:!aNULL
ssl_protocols = !SSLv2 !SSLv3
ssl_prefer_server_ciphers = yes

(The ssl_cipher_list line, besides setting secure defaults, puts the ChaCha20 cipher first in the order to be tried, since it's considered one of the fastest and most secure. Note that Dovecot needs to be linked against LibreSSL or OpenSSL 1.1.x to actually use that cipher, though it won't complain if it doesn't have access to it; it'll just fall back to the normal, secure defaults.)
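
Once that's in place, you can confirm the settings Dovecot will actually use with doveconf (it ships with Dovecot 2.x and reads the configuration files directly):

doveconf -n | grep ssl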

If you don’t change anything else from the default configuration, the server will be listening on port 143 (IMAP) and port 993 (IMAPS). “But,” you ask, “isn’t standard IMAP unencrypted? I thought you were enforcing encryption…” Yes, but the standard IMAP port can be “upgraded” to TLS by entering the STARTTLS command, and email clients not only support that, but typically default to it (if not, just make sure you enable it). The server will refuse to authenticate any users on port 143 before they’ve “STARTTLSed”.
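
If you want a quick sanity check of the STARTTLS upgrade on port 143 (assuming your server is reachable as mail.domain.com; adjust the name), openssl's built-in client can do it:

openssl s_client -connect mail.domain.com:143 -starttls imap

After the handshake you can type IMAP commands (a CAPABILITY, for instance) over the now-encrypted connection.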

If you wanted to use only IMAP+STARTTLS, or only IMAPS, just edit /etc/dovecot/conf.d/10-master.conf, look for the "service imap-login" configuration, and disable the listener you don't want. But I see no problem with keeping both enabled.
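
For instance, to keep only IMAPS, a minimal sketch of the relevant part of 10-master.conf would be something like this (setting a listener's port to 0 disables it):

service imap-login {
  inet_listener imap {
    port = 0
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}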

Thoughts? Questions?

(To do the same to SMTP / Postfix, please see How to configure TLS encryption in Postfix.)

How to create an RSA certificate

Whether it's a web server, an email server (two, in fact, assuming you're using one for SMTP and another for IMAP, as is common), or another type of application, encrypted connections require a TLS certificate. It can be self-signed (say, if you'll be the only one needing to access that server), but browsers and email clients will complain (loudly!), so if you want your server to be universally accessible, a "real" certificate is needed. Let's see how to get one.

This recipe creates an RSA certificate (strongly suggested for, say, Postfix: since you can't control what other email servers support, it's best to go for the most common option); for web servers (accessed by standard browsers), an ECDSA certificate might be a good alternative.

A few notes:

  • There are alternatives (e.g. key sizes, etc.) for basically every parameter I’m using, but I’m not going into those. This is supposed to be as quick and easy as possible, after all.
  • Similarly, I’m not going into Let’s Encrypt; instead, I’m assuming a “normal” certificate authority such as Comodo. If you do use Let’s Encrypt, just use it to create the certificate instead of the “send the CSR to the certification authority” part.
  • The certificate (and server) will be compatible with most browsers and applications (email clients, etc.), but that “most” won’t include any Microsoft browsers on Windows XP (Firefox or Chrome on that abomination of an OS will still work).

Generating the key and the Certificate Signing Request (CSR):

openssl req -nodes -newkey rsa:4096 -keyout myserver.key -out myserver.csr

The command will ask you for details about your server/company (location, etc.). You should fill in every field, although the only mandatory one is "Common Name" (CN), which must match your server's public name (not necessarily the machine's name, but the hostname people will type in the browser/email client, such as "zurgl.com" or "mail.something.com"). Note that a certificate for "domain.com" also includes "www.domain.com" (so don't include the "www." in the CN), but if the server is reached at "subdomain.domain.com", then that's what the CN needs to be.
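
If you want to double-check the CSR before sending it off (to confirm the CN, for instance), you can inspect it with:

openssl req -in myserver.csr -noout -subject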

Ordering and receiving the new certificate:

Now go to a certification authority (CA), order a new certificate, and when asked for a CSR, send them (usually you can just copy and paste it into a text entry window) that myserver.csr file.

If everything went well, the CA should email you the new certificate in a short while. Typically they send you the certificate itself, plus one or more "intermediate" certificates. Only the first is really needed, but I've had the best results by concatenating your certificate (first) and the intermediate certs (last) into a single file, which you might call myserver-full.crt.
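
Assuming the CA sent you, say, myserver.crt plus an intermediate bundle called intermediate.crt (the actual file names will vary), that's just:

cat myserver.crt intermediate.crt > myserver-full.crt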

That file is, to all intents and purposes, your new certificate, and it’s ready to be used in application servers. You can use the same certificate for several services (e.g. an HTTPS website, an SMTP server, an IMAP server), as long as the hostname matches the certificate’s CN (so a certificate for “mail.myserver.com” won’t work for “www.myserver.com“, and vice-versa, but that’s because the names don’t match — not because they’re different services).

Adding TLS encryption (with your shiny new certificate) to internet services:

Here’s a (growing) list of tutorials:

(Note: most of this post’s content originally appeared in a previous one.  The difference is that that one refers to ECDSA certificates, while this one is about the older, more common RSA certificates.)

How to configure TLS encryption in Postfix

Although Postfix (and the SMTP protocol in general) can function without any kind of encryption, enabling TLS can be a good idea in terms of both security and privacy, so let's look at how it can be easily done.

We’ll actually be configuring two separate types of encryption:

  • Opportunistic encryption for regular SMTP (port 25), both incoming and outgoing. "Opportunistic", here, means that the server will ask for encryption and use it if the other side also supports it, but if it doesn't then it'll work without encryption. Although forcing it might sound tempting for security reasons, in reality many "smaller" servers around the world still don't support it, so you would be unable to send or receive mail to/from those. Opportunistic TLS at least means that most "big" email services (e.g. Gmail) will communicate with you (and you with them) with encryption.
  • Mandatory encryption for the submission service (port 587). This is used by your own users (even if it's just you) to send mail through your server, and typically has different restrictions: it's authenticated, but on the other hand it can be used to send mail to the outside world (unlike incoming port 25, which can't, as that would make the server an open relay). Both the users' authentication, which would otherwise travel in clear text, and the fact that only a limited number of users (your own users, at that) will be accessing that port through email clients (never other email servers that you don't control) make forcing encryption here a no-brainer.

NOTE: naturally, this is not an exhaustive Postfix tutorial. I’ll be assuming you already have a working server, and just want to add TLS encryption to it. Also, some parts may not apply to your case.

1. Have/get a TLS certificate for your email host’s public name

If you don't already have one, see how to create a new TLS certificate. We'll be using an RSA certificate here (instead of going with ECDSA) since, unlike with, say, a web server accessed by standard browsers, you can't control what other email servers support, therefore it's best to choose the most common option.

Note that the certificate’s CN must match your email host’s public name (e.g. “mail.domain.com“). That will also probably correspond to (one of) the domain’s MX records.

2. Postfix configuration

Again, I’ll be assuming your non-TLS Postfix is already working fine.

Put your certificate and key in /etc/postfix (for instance). For this example, I’ll be calling them myserver-full.crt and myserver.key.

In /etc/postfix/main.cf, add the following lines:

# TLS configuration starts here

tls_random_source = dev:/dev/urandom

# openssl_path=/usr/local/libressl/bin/openssl
# uncomment and edit the above if you're using a different "openssl" than the system's
# (in this case, LibreSSL)

# SMTP from your server to others
smtp_tls_key_file = /etc/postfix/myserver.key
smtp_tls_cert_file = /etc/postfix/myserver-full.crt
smtp_tls_CAfile = /etc/postfix/myserver-full.crt
smtp_tls_security_level = may
smtp_tls_note_starttls_offer = yes
smtp_tls_mandatory_protocols=!SSLv2,!SSLv3
smtp_tls_protocols=!SSLv2,!SSLv3
smtp_tls_loglevel = 1
smtp_tls_session_cache_database =
    btree:/var/lib/postfix/smtp_tls_session_cache

# SMTP from other servers to yours
smtpd_tls_key_file = /etc/postfix/myserver.key
smtpd_tls_cert_file = /etc/postfix/myserver-full.crt
smtpd_tls_CAfile = /etc/postfix/myserver-full.crt
smtpd_tls_security_level = may
smtpd_tls_auth_only = yes
smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
smtpd_tls_protocols=!SSLv2,!SSLv3
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database =
    btree:/var/lib/postfix/smtpd_tls_session_cache

# TLS configuration ends here
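
Before moving on, you can double-check what Postfix will actually pick up with postconf, which lists only the explicitly-set parameters:

postconf -n | grep -i tls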

And in /etc/postfix/master.cf, assuming you already have a “submission” section a bit like this:

submission  inet  n     -       n       -       -       smtpd
    -o smtpd_etrn_restrictions=reject
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
    -o smtpd_client_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
    -o smtpd_helo_restrictions=permit_mynetworks,permit

Add to that section:

  -o smtpd_tls_security_level=encrypt

The above forces encryption for the submission service (remember that it’s not required for normal SMTP, it’s just desired).

In short:

  • your server tries to connect to others using TLS, but falls back to an unencrypted connection if the other side doesn’t support encryption;
  • other servers can connect to yours using TLS, but if they don’t attempt it then the connection will be unencrypted;
  • your users *must* use encryption (and authentication) to send mail through your server.

Restart your server, and check the logs: you should be getting mentions of TLS now. For instance, an email server starting an encrypted connection to yours looks like:

Sep 6 14:25:58 drax postfix/smtpd[22727]: Anonymous TLS connection established from lists.openbsd.org[192.43.244.163]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
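
You can also test the submission service yourself from the outside (assuming your host is reachable as mail.domain.com), again with openssl's built-in client:

openssl s_client -connect mail.domain.com:587 -starttls smtp

After the handshake, an EHLO should show the AUTH capability, which (thanks to smtpd_tls_auth_only) is only offered once the connection is encrypted.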

Questions? Suggestions?

(And what about securing mailbox access? Enjoy: How to configure TLS encryption in Dovecot)

How to create an ECDSA certificate

Whether it's a web server, an email server (two, in fact, assuming you're using one for SMTP and another for IMAP, as is common), or another type of application, encrypted connections require a TLS certificate. It can be self-signed (say, if you'll be the only one needing to access that server), but browsers and email clients will complain (loudly!), so if you want your server to be universally accessible, a "real" certificate is needed. Let's see how to get one.

In this tutorial, I'll be creating an ECDSA certificate with an EC key, instead of the more usual RSA type; ECDSA is more modern and is theoretically more secure even with smaller keys. If you need an RSA certificate (which I'd recommend for Postfix, for instance, since you can't control what other email servers around the world support, so you should go for the most common option; for web servers, ECDSA is fine), see How to create an RSA certificate.

A few notes:

  • There are alternatives (e.g. key sizes, etc.) for basically every parameter I’m using, but I’m not going into those. This is supposed to be as quick and easy as possible, after all.
  • Similarly, I’m not going into Let’s Encrypt; instead, I’m assuming a “normal” certificate authority such as Comodo (which supports ECDSA certificates; if you choose another, you should make sure of that in advance). If you do use Let’s Encrypt, just use it to create the certificate instead of the “send the CSR to the certification authority” part.
  • The certificate (and server) will be compatible with most browsers and applications (email clients, etc.), but that “most” won’t include any Microsoft browsers on Windows XP (Firefox or Chrome on that abomination of an OS will still work).

Generating the key and the Certificate Signing Request (CSR):

openssl ecparam -genkey -name secp384r1 | openssl ec -out myserver.key

openssl req -new -key myserver.key -out myserver.csr

The second command will ask you for details about your server/company (location, etc.). You should fill in every field, although the only mandatory one is "Common Name" (CN), which must match your server's public name (not necessarily the machine's name, but the hostname people will type in the browser/email client, such as "zurgl.com" or "mail.something.com"). Note that a certificate for "domain.com" also includes "www.domain.com" (so don't include the "www." in the CN), but if the server is reached at "subdomain.domain.com", then that's what the CN needs to be.
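
If you want to confirm the key really uses the curve you asked for, you can inspect it (the "ASN1 OID" line should say secp384r1):

openssl ec -in myserver.key -noout -text | grep "ASN1 OID"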

Ordering and receiving the new certificate:

Now go to a certification authority (CA), order a new certificate, and when asked for a CSR, send them (usually you can just copy and paste it to a text entry window) that myserver.csr file.

If everything went well, the CA should email you the new certificate in a short while. Typically they send you the certificate itself, plus one or more "intermediate" certificates. Only the first is really needed, but I've had the best results by concatenating your certificate (first) and the intermediate certs (last) into a single file, which you might call myserver-full.crt.

That file is, to all intents and purposes, your new certificate, and it’s ready to be used in application servers. You can use the same certificate for several services (e.g. an HTTPS website, an SMTP server, an IMAP server), as long as the hostname matches the certificate’s CN (so a certificate for “mail.myserver.com” won’t work for “www.myserver.com“, and vice-versa, but that’s because the names don’t match — not because they’re different services).

Adding TLS encryption (with your shiny new certificate) to internet services:

Here’s a (growing) list of tutorials:

(Note: most of this post’s content originally appeared in a previous one.  The reason for its creation is that I’m planning several other TLS-related posts where a certificate will be required and, to avoid repeating the “how to create a certificate” instructions in every single one of them, I’d rather have this post to link to when needed.)

How to set up an Nginx HTTPS website with an ECDSA certificate (and get an A+ rating on SSL Labs)

Most people who maintain web servers (or email servers, or…) have probably had to deal with SSL/TLS certificates in the past, but the process is, in my opinion, typically more complex than it should be, and not that well documented, often forcing the user to consult several tutorials on the web and adapt parts of each for their needs. Since I had to renew a couple of certificates last week, I thought about writing a quick and easy tutorial for a particular case: HTTPS on Nginx (although the certificate itself could, of course, be used for other services; right now I have one I'm using for IMAP (Dovecot) and SMTP (Postfix) as well). For extra fun, I'll be creating/using an ECDSA certificate with an EC key, instead of the more usual RSA type; ECDSA is more modern and is theoretically more secure even with smaller keys.

A few notes:

  • There are alternatives (e.g. key sizes, etc.) for basically every parameter I’m using, but I’m not going into those. This is supposed to be as quick and easy as possible, after all.
  • Similarly, I’m not going into Let’s Encrypt; instead, I’m assuming a “normal” certificate authority such as Comodo (which supports ECDSA certificates; if you choose another, you should make sure of that in advance). If you do use Let’s Encrypt, just use it to create the certificate instead of the “send the CSR to the certification authority” part.
  • The certificate (and server) will be compatible with most browsers, but that “most” won’t include any Microsoft browsers on Windows XP (Firefox or Chrome on that abomination of an OS will still work).
  • If you already have a certificate (whether ECDSA or not, whether from Let’s Encrypt or not) and you’re here just for the A+ rating on SSL Labs, skip to “Setting up Nginx” below.

Generating the key and the Certificate Signing Request (CSR):

openssl ecparam -genkey -name secp384r1 | openssl ec -out myserver.key

openssl req -new -key myserver.key -out myserver.csr

The second command will ask you for details about your server/company (location, etc.). You should fill in every field, although the only mandatory one is "Common Name" (CN), which must match your server's public name (not necessarily the machine's name, but the host name people will type in the browser, such as "zurgl.com"). Note that a certificate for "domain.com" also includes "www.domain.com" (so don't include the "www." in the CN), but if the server is reached at "subdomain.domain.com", then that's what the CN needs to be.

Ordering and receiving the new certificate:

Now go to a certification authority (CA), order a new certificate, and when asked for a CSR, send them (usually you can just copy and paste it to a text entry window) that myserver.csr file.

If everything went well, the CA should email you the new certificate in a short while. Typically they send you the certificate itself, plus one or more "intermediate" certificates. Only the first is really needed, but I've had the best results by concatenating your certificate (first) and the intermediate certs (last) into a single file, which you might call myserver-full.crt. Put it somewhere Nginx can access (e.g. /etc/nginx), and also the key you generated earlier (in this example, myserver.key). You don't need the CSR there, by the way; it was just needed for ordering the new certificate.

Setting up Nginx:

This is, of course, not a complete Nginx tutorial (that would take a lot more space than a single post), just a simple recipe for configuring an HTTPS site with your new certificate, and have it be secure (and get a great score at SSL Labs, too). I’m assuming you can take care of all the non-HTTPS bits.

So, inside a virtual host, you need the server section:

server {
        listen 443 ssl http2; # if your nginx complains about 'http2', remove it, or (better yet) upgrade to a recent version
        server_name mysite.com; # replace with your site, obviously

        # all the non-HTTPS bits (directories, log paths, etc.) go here

        # the rest of the recipe -- see below -- can go here
}

See the last comment? OK, let’s begin adding stuff there (I won’t indent any configuration lines from now on, but they look better if aligned with the rest of the configuration inside the { } ).

ssl_certificate /etc/nginx/myserver-full.crt; # the certificate and the intermediate certs, as seen above...
ssl_certificate_key /etc/nginx/myserver.key; # ... and the key

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS:!3DES';
# the above comes from Mozilla's Server Side TLS guide

add_header X-Content-Type-Options nosniff;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;	# basic defaults.

Great, you now have a basic HTTPS server! I suggest you try it now: save the configuration, restart Nginx, and test it. Access the URL with a browser (and check that the browser doesn't complain about the certificate: if it does, something went wrong), and run it through SSL Labs' Server Test. Check if it complains about something; if so, something needs fixing (ask in the comments, maybe I or someone else can help).
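
By the way, a quick way to catch configuration typos before restarting is Nginx's built-in test, which you can chain with a reload (use service nginx reload instead if you're not on systemd):

nginx -t && systemctl reload nginx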

If all went well, you probably got a decent score, maybe even an A. But how to get an A+, like I promised at the beginning?

An A+ score on SSL Labs’ Server Test:

As of now (September 2017), SSL Labs only asks for one more thing: "HTTP Strict Transport Security (HSTS) with long duration". HSTS is a mechanism to tell browsers: "this site should be accessed through HTTPS only; if you attempt to connect through normal HTTP, then deny access". This information is cached by the browser; it's not the server that refuses HTTP connections (though it's, of course, possible to configure Nginx to do just that — or simply do 301 redirects to the equivalent HTTPS URL. But I digress…). In short, it protects against downgrade attacks. As for the "with long duration" part, that's simply the time that browsers should cache that information.
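
(If you do decide to go the redirect route, a minimal sketch of such a block, assuming the same mysite.com name as above, would be something like:

server {
        listen 80;
        server_name mysite.com;
        return 301 https://$host$request_uri;
}

But, as I said, HSTS itself only needs the header shown below.)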

IMPORTANT: don’t proceed until you have the basic HTTPS site running, and SSL Labs reporting no problems (see the previous section)!

ALSO IMPORTANT: if you *do* want your site to be accessible through both HTTP and HTTPS, stop here, as the rest of the configuration will make it HTTPS-only! That means, however, giving up on the A+ score.

So, add the following:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Restart Nginx, try SSL Labs’ test again. Did you get an A+?
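
You can also confirm the header is being sent with curl:

curl -sI https://mysite.com/ | grep -i strict-transport-security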

If not, or you have any questions, feel free to ask.

For extra fun: have your Nginx use LibreSSL or OpenSSL 1.1.x, and enjoy a few more modern ciphers (look for “ChaCha20” on SSL Labs) without any configuration changes from the above.

Ubuntu/Debian: Installing Nginx/Postfix/Dovecot using OpenSSL 1.1.x

So, let's say you have an Ubuntu or Debian server, using one or more of Nginx, Postfix, and Dovecot, and you'd like to have them link to OpenSSL 1.1.x instead of the default OpenSSL (as of Ubuntu 17.04, it's version 1.0.2g). (Reasons may include wanting to use modern ciphers such as ChaCha20, or trying out support for the most recent TLS 1.3 draft. Also, if you want to try out LibreSSL instead of OpenSSL 1.1, please check out the previous post.)

So, here's a relatively simple way that doesn't change the system's default OpenSSL (believe me, that wouldn't be a good idea, unless you recompiled everything):

Install dependencies:

apt-get install build-essential
apt-get build-dep openssl nginx dovecot postfix

Install OpenSSL 1.1.x:

  • download the latest 1.1.x source from www.openssl.org
  • compile and install it with:
./config --prefix=/usr/local/openssl11 --openssldir=/usr/local/openssl11 && make && make install
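
A quick sanity check that the new OpenSSL ended up under the prefix you passed to ./config:

/usr/local/openssl11/bin/openssl version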

Install Nginx:

rm -rf /usr/local/src/nginx-openssl11
mkdir /usr/local/src/nginx-openssl11
cd /usr/local/src/nginx-openssl11
apt-get source nginx # ignore the permissions error at the end
  • edit nginx-<version>/debian/rules: add

--with-openssl=/usr/local/src/openssl-

to the beginning of the common_configure_flags option (note that that's the source directory you used to compile OpenSSL 1.1.x, not where you installed it to). After that, enter the nginx-<version> directory and compile the packages with the command:

debuild -uc -us -b
  • install the required packages in the parent directory with "dpkg -i <package names>" (do dpkg -l | grep nginx to see which you have installed; typically you'll want to install the newly created versions of those);
apt-mark hold nginx* # to prevent nginx from being updated from Ubuntu / Debian updates

Done! You can now play around with the Mozilla TLS Guide to add support for modern ciphers to your Nginx's configuration, and use SSL Labs' SSL Server Test tool to check if they are correctly enabled.

Install Postfix:

It’s just like Nginx (replacing “nginx” with “postfix” in every command / directory name, of course), except that the changes to debian/rules are these:

  • find -DHAS_SSL, add -I/usr/local/openssl11/include in front of it;
  • find AUXLIBS += , add -L/usr/local/openssl11/lib in front of it;
  • find the line with dh_shlibdeps -a, add --dpkg-shlibdeps-params=--ignore-missing-info to it;
  • don’t forget the apt-mark hold postfix* at the end.

Install Dovecot:

Again, use the Nginx instructions, using “dovecot” instead of “nginx” everywhere, except that the changes to debian/rules should be:

  • after the line:
export DEB_BUILD_MAINT_OPTIONS=hardening=+all

add:

export SSL_CFLAGS=-I/usr/local/openssl11/include
export SSL_LIBS=-L/usr/local/openssl11/lib -lssl -lcrypto
  • after the section:
override_dh_makeshlibs:
# Do not add an ldconfig trigger; none of the dovecot shared libraries
# are public.
        dh_makeshlibs -n

add:

override_dh_shlibdeps:
        dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info

NOTE: the indentation in the second line needs to be a tab, don’t use spaces.

Again, remember to apt-mark hold dovecot* after installation.

How to check if your new installations of Postfix and/or Dovecot are using OpenSSL 1.1.x instead of the default OpenSSL 1.0.x? You could use ldd to check what SSL / TLS libraries your binaries and/or libraries link to, but the best way is probably to use a tool such as sslscan, which you can use to check what ciphers your SMTP, IMAP, etc. support (including with STARTTLS). If you see ChaCha20 in there, everything is fine. 🙂
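
For example, assuming a reasonably recent sslscan and a mail host reachable as mail.domain.com:

sslscan --starttls-smtp mail.domain.com:25
sslscan --starttls-imap mail.domain.com:143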

If you ever want to go back to “normal” versions of these servers, just do apt-mark unhold nginx* (for instance).

Ubuntu/Debian: Installing Nginx/Postfix/Dovecot using LibreSSL

So, let's say you have an Ubuntu or Debian server, using one or more of Nginx, Postfix, and Dovecot, and you'd like to have them link to LibreSSL instead of the default OpenSSL. (I won't go much into the possible reasons for it; maybe you're bothered because modern distros are still sticking with OpenSSL 1.0.x, which is ancient and doesn't support modern ciphers such as ChaCha20, or you trust the OpenBSD developers more than you trust the OpenSSL ones, or — and there's nothing wrong with that — you want to do it just for fun. You could also use OpenSSL 1.1.x — check out this (very similar) post.)

So, here's a relatively simple way that doesn't change the system's default OpenSSL (believe me, that wouldn't be a good idea, unless you recompiled everything):

Install dependencies:

apt-get install build-essential
apt-get build-dep openssl nginx dovecot postfix

Install LibreSSL:

  • download the latest portable source from www.libressl.org
  • compile and install it with:
./configure --prefix=/usr/local/libressl --with-openssldir=/usr/local/libressl && make && make install
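
To confirm the install, the openssl binary under that prefix should identify itself as LibreSSL:

/usr/local/libressl/bin/openssl version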

Install Nginx:

rm -rf /usr/local/src/nginx-libressl
mkdir /usr/local/src/nginx-libressl
cd /usr/local/src/nginx-libressl
apt-get source nginx # ignore the permissions error at the end
  • edit the file nginx-<version>/debian/rules: add
-I/usr/local/libressl/include

to the line beginning with “debian_cflags:=“, and:

-L/usr/local/libressl/lib

to the line that begins with “debian_ldflags:“. After that, enter the nginx-<version> directory and compile the packages with the command:

debuild -uc -us -b
  • install the required packages in the parent directory with “dpkg -i <package names>” (do “dpkg -l | grep nginx” to see which you have installed; typically you’ll want to install the newly created versions of those);
apt-mark hold nginx* # to prevent nginx from being updated from Ubuntu / Debian updates

Done! You can now play around with the Mozilla TLS Guide to add support for modern ciphers to your Nginx’s configuration, and use SSL Labs’s SSL Server Test tool to check if they are correctly enabled.

Install Postfix:

It’s just like Nginx (replacing “nginx” with “postfix” in every command / directory name, of course), except that the changes to debian/rules are these:

  • find -DHAS_SSL, add -I/usr/local/libressl/include in front of it;
  • find AUXLIBS += , add -L/usr/local/libressl/lib in front of it;
  • find the line with dh_shlibdeps -a, add --dpkg-shlibdeps-params=--ignore-missing-info to it;
  • don’t forget the apt-mark hold postfix* at the end.

Install Dovecot:

Again, use the Nginx instructions, using “dovecot” instead of “nginx” everywhere, except that the changes to debian/rules should be:

  • after the line:
export DEB_BUILD_MAINT_OPTIONS=hardening=+all

add:

export SSL_CFLAGS=-I/usr/local/libressl/include
export SSL_LIBS=-L/usr/local/libressl/lib -lssl -lcrypto
  • after the section:
override_dh_makeshlibs:
# Do not add an ldconfig trigger; none of the dovecot shared libraries
# are public.
        dh_makeshlibs -n

add:

override_dh_shlibdeps:
        dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info

NOTE: the indentation in the second line needs to be a tab, don’t use spaces.

Again, remember to apt-mark hold dovecot* after installation.

How to check if your new installations of Postfix and/or Dovecot are using LibreSSL instead of the default OpenSSL? You could use ldd to check what SSL / TLS libraries your binaries and/or libraries link to, but the best way is probably to use a tool such as sslscan, which you can use to check what ciphers your SMTP, IMAP, etc. support (including with STARTTLS). If you see ChaCha20 in there, everything is fine. 🙂

If you ever want to go back to “normal” versions of these servers, just do apt-mark unhold nginx* (for instance).

I’ve also added /usr/local/libressl/bin to the beginning of my PATH environment variable, so that the LibreSSL binaries are used by default (e.g. to generate keys, CSRs, etc.), although this isn’t necessary for Nginx, etc. to work.

Linux: create a Volume Group with all newly added disks

Let’s say you’ve just added one or more disk drives to a (physical or virtual) Linux system, and you know you want to create a volume group named “vgdata” with all of them — or add them to that VG if it already exists.

For extra fun, let’s also say you want to do it to a lot of systems at the same time, and they’re a heterogeneous bunch — some of them may have the “vgdata” VG already, while some don’t; some of them may have had just one new disk added to it, while others got several. How to script it?

#!/bin/bash

# create full-size LVM partitions on all drives with no partitions yet; also create PVs for them
for i in b c d e f g h i j k l m n o p q r s t u v w x y z; do
  sfdisk -s /dev/sd$i >/dev/null 2>&1 &&
    ( sfdisk -s /dev/sd${i}1 >/dev/null 2>&1 ||
      ( parted /dev/sd$i mklabel msdos &&
        parted -a optimal /dev/sd$i mkpart primary ext4 "0%" "100%" &&
        parted -s /dev/sd$i set 1 lvm on &&
        pvcreate /dev/sd${i}1 ) )
done

# if the "vgdata" VG exists, extend it with all unused PVs...
vgs | grep -q vgdata && pvs --no-headings -o pv_name -S vg_name="" | sed 's/^ *//g' | xargs vgextend vgdata

# ... otherwise, create it with those same PVs
vgs | grep -q vgdata || pvs --no-headings -o pv_name -S vg_name="" | sed 's/^ *//g' | xargs vgcreate vgdata

As always, you can use your company’s automation system to run it on a bunch of servers, or use pssh, or a bash “for” cycle, or…
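
For example, with pssh, assuming the script was already copied to /usr/local/sbin/mkvgdata.sh on the target machines and that hosts.txt lists them one per line:

pssh -h hosts.txt -l root -i 'bash /usr/local/sbin/mkvgdata.sh'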

Linux: find users with full sudo access on many machines

Disclaimer: there are surely many, far better ways to do this — feel free to add them in the comments. This was just a quick and dirty script I came up with yesterday, after a co-worker wondered if there was an easy way to do this on all the servers we administer.

The situation: you administer 1000 or more servers, you and your team are the only users who are supposed to be able to sudo to root (unlike simply running certain specific commands, which is typically OK), but sometimes you have to grant temporary full sudo access to a particular user or group of users who, for instance, are doing the initial application installations, but who are supposed to lose that access when the server enters production.

The problem: it’s easy to forget about those, and so the temporary access becomes permanent (yes, there are other ways around that, such as using a specific syntax for those accesses that includes a comment that you then use a script, called by the “at” daemon, to remove later, but bear with me for now). Wouldn’t it be useful to be able to look at a group of some, or even all, of the servers you administer, and find those unwanted, forgotten sudo accesses?


Red Hat / CentOS: How to list RPMs installed on year X

Paranoid-minded auditor: “I need a list of all RPMs that were installed this year on these servers, stat!”

You: “OK, let’s see here…”

On a single machine:

for i in `rpm -qa`; do rpm -qi $i | grep Install | grep -q 2017 && echo $i; done
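
An alternative, if your rpm supports the --last option (which sorts packages by install time and prints the dates), is simply:

rpm -qa --last | grep ' 2017'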

If you need to check several servers, you can use a bash “for” cycle, or pssh, or…

How to update Red Hat or CentOS without changing minor versions

If you’re a Linux system administrator or even a “mere” user, you’ve probably noticed that, when using a Red Hat-like system, if you do “yum update” it may well raise the minor version level (e.g. 6.7 to 6.9). In fact, it should move your system to the latest minor version (the number after the dot) of your current major version (the number before it).

You may, then, have wondered if it is possible to update your system and yet remain on your current version. You may even have been asked to do so by a very, very timid boss, or some development/application team (“this is supported only on Red Hat 7.1, we can’t move to 7.2!”).

Before I go on, I have to say that there is absolutely no technical reason to do this (EDIT: not necessarily true any longer, at least for 7.x, see CertDepot’s comment and link. Still true in most cases; the reason for this demand is almost always ignorance, fear, and laziness, not knowledge of any actual change causing incompatibilities). I really hope you’ve arrived here just because a boss, project manager or developer is demanding it (and, sadly, you don’t work at a place where you can say “no, that’s stupid, I won’t do it”… yet 😉 ), or simply because of scientific curiosity, not because you actually think that doing this is a good idea.

Red Hat (or CentOS) minor versions aren't really "versions" in the usual sense, where new versions of software packages, libraries, etc. are included. Instead (with a few desktop-related exceptions, such as web browsers), they take pains to only fix security problems and other bugs. If you look at a particular package, whether you're on Red Hat Enterprise Linux 7.0 or 7.3, the upstream version always stays the same; only the "Red Hat" release number (e.g. the -21.el7 in file-5.11-21.el7) increases. Therefore, there is never (EDIT: see above edit) any question of "compatibility"; it may, however, be a question of "officially supported", which is code for "we tested our product with this version, and can't be bothered to test it with any others."

Sorry about the rant. 🙂 So, since you’re obviously a competent sysadmin, I’ll assume you’re being forced to do it. Here’s how:

With Satellite:

To see which releases you have available:

subscription-manager release --list

Example:

# subscription-manager release --list
+-------------------------------------------+
 Available Releases
+-------------------------------------------+
5.11
5Server
6.2
6.7
6.8
6.9
6Server
7.0
7.1
7.2
7.3
7Server

To lock on a release (e.g. 7.1):

subscription-manager release --set=7.1

And to unlock it:

subscription-manager release --unset

(or maybe --set=7Server)

Without Satellite:

For a single update, add --releasever=x.y to your yum command; for instance:

yum --releasever=7.1 update

To set it permanently, put the desired version in yum's releasever variable, by creating the file /etc/yum/vars/releasever containing it:

echo "7.1" > /etc/yum/vars/releasever

Notes: at least on CentOS, since 7.x, versions aren't just "x.y": they also include a third number, apparently the year and month of release. Browsing http://vault.centos.org/centos/, for instance, you see you have these versions available:

[DIR] 6.7/ 21-Jan-2016 13:22 - 
[DIR] 6.8/ 24-May-2016 17:36 - 
[DIR] 6.9/ 10-Apr-2017 12:48 - 
[DIR] 6/ 10-Apr-2017 12:48 - 
[DIR] 7.0.1406/ 07-Apr-2015 14:36 - 
[DIR] 7.1.1503/ 13-Nov-2015 13:01 - 
[DIR] 7.2.1511/ 18-May-2016 16:48 - 
[DIR] 7.3.1611/ 20-Feb-2017 22:23 - 
[DIR] 7/ 20-Feb-2017 22:23 -

and, yes, you have to specify the third number in your command/config file.

You may also have to enable the relevant entries in your /etc/yum.repos.d/CentOS-Vault.repo file (change enabled=0 to 1).
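
For instance (this flips every enabled=0 in that file, so trim it down if you only want some of the sections):

sed -i 's/^enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Vault.repo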
