Building an OpenSSL Client: an Overview

This post is a precursor to my multi-part series on writing an HTTPS client using OpenSSL. I’ve wanted each part of the series to be very practical and hands-on with the OpenSSL functions, and as such I needed a place to give more of an overview of the goals one would want to accomplish in writing a fully-functioning, secure HTTPS client. I also wanted to explain what led me to write this series. Thus, we have part 0 of the OpenSSL Client Tutorial.

Why I Wanted To Make This Tutorial

When you first begin to look into OpenSSL or come across code written with the library, there are several identifiers in functions and structs that aren’t immediately clear. Words such as SSL, SSL_CTX, BIO, and X509 are sprinkled all across the library, but unless you have an understanding of what the structs are used for, the functions themselves will seem incoherent. To compound this confusion, OpenSSL’s website provides little in the way of explaining the methodology behind these structures.

While the official OpenSSL manual pages can be helpful as a reference for individual functions, they give little or no hint as to how these different pieces fit together, leading to confusion and frustration for those who are attempting to learn the API. The OpenSSL Wiki, which ought to bridge this gap, is sparsely populated and hasn’t been updated to reflect best practices for the most recent releases of the library. Of course, nobody is to blame for this–the library is an open-source project, after all, so nobody gets paid to update old documentation. It’s just surprising that a library as old and as widely used as OpenSSL would lack clear documentation after all these years.

From a more personal viewpoint, I began learning the OpenSSL library last December as part of work I was doing with a research lab. The first month or two would have been incredibly grueling had my employer not been understanding of the complexity of the library; it took me a few weeks to even get to the point where I could build a program that could connect to the internet with OpenSSL, let alone securely. Nowadays, learning new parts of the library has become easier–though mostly because I now know where to look in the source code. Since the library is still extensively used for existing and new applications, I figured I’d document what I’ve learned along the way so that the next guy doesn’t have quite as much of an uphill battle. It’d be a shame for all those long hours to go to waste.

But enough of my ramblings!

How HTTPS Works, Briefly

There are a few fundamental security goals behind HTTPS (HTTP-Secure) connections:

  1. Ensuring that the peer you’re communicating with is who they claim to be (Authentication)
  2. Making sure that nobody else can read the data you send to and from that peer (Confidentiality)
  3. Making sure that nobody can tamper with any of the data sent (Integrity)

Most attacks on HTTPS attempt to circumvent one or more of these features. Man-in-the-Middle (MITM) attacks are particularly geared towards HTTPS connections: they attempt to place a malicious peer in the middle of your connection in an effort to siphon off valuable data. Usually these attacks are aimed at stealing sensitive information, such as bank or credit card details that a client is giving a server, though they can also be used to trick a client into downloading malware onto their computer. Such attacks are trivially easy if one is using HTTP on an unsecured network; HTTPS, on the other hand, is built in such a way that it can fully prevent these attacks from succeeding if it is implemented fully and correctly.

HTTPS brings about the goals stated above through two means:

  • Certificates/Certificate Chains. These are used to verify that the peer you’re communicating with actually owns the domain, rather than a malicious peer pretending to be that domain. This helps to ensure that the connection is authenticated.
  • Encryption of traffic sent to/from peers. If a person were to intercept some of the traffic being sent after a connection has been made, it would appear to be garbage values to them–they would need the encryption information that only you and the peer have in order to decode the message.

When it comes to certificates and chains, an infrastructure has been established for the internet specifying what certificates should look like, how they should link up, and when a certificate is considered valid (or invalid). This framework is called the Public Key Infrastructure (PKI), and without it in place, certificates would do little more than add a hoop to jump through for those who want to maliciously spoof websites.

An Overview of PKI

There are a lot of ins and outs to PKI, and I don’t intend for this article to span 30+ minutes of reading time, so I’ll just give a brief overview (if you feel you need a more in-depth understanding of it, [check out this article]). For the sake of brevity, I’ll also assume that you already have a solid understanding of [how public and private keys work].

With the Public Key Infrastructure, there are two types of certificates: regular, everyday certificates that websites use, and Certificate Authority (CA) certificates. CA certificates are considered to be universally trusted, and they verify that a regular website certificate is actually being used by, well, that website (instead of some malicious actor intercepting your web traffic). The way this works is that every computer has a folder containing a long list of CA certificates. These certificates contain the public key of their respective CA, and they can be used to verify the correctness of anything the CA signs. The CA uses their private key, known only to them, to “sign” certificates that websites request (after they have sufficiently verified that it is the website’s owner trying to obtain a certificate). Those website certificates also have a public key within their data, with a corresponding private key known only to the owner of the website.

When a client connects to a given website, the website gives them their certificate. The client can then verify that the website is who it says it is by checking the signature on the certificate. If the computer’s local copy of the CA’s public key successfully verifies that the signature on the certificate is correct, then the client can trust that the server actually is who it says it is. Only the server owns the private key that corresponds with the public key in its certificate, so the client can then encrypt confidential data and send it to the server with the guarantee that only the server can decrypt it.

But What About Compromised Private Keys?

But, as it happens, servers lose private keys. A lot. When a private key is lost, the certificate associated with it can no longer be trusted (since an unauthorized third party could spoof being the website, with no encryption blocking their way). However, the server’s certificate might still be valid for anywhere from another month to another two years, and it’s not like it can be un-signed. This is where revocation steps in.

Revocation is the process by which a compromised certificate becomes labelled as untrusted. To start, every Certificate Authority has servers in place that can provide responses as to whether a certificate is invalid or not. These responses are signed by the CA’s private key, so they can be verified easily by a client. The two main protocols by which CAs return these responses are Certificate Revocation Lists (CRLs) and the Online Certificate Status Protocol (OCSP). You don’t need to know in detail how these work for now, just that they exist and are nearly universally used.

When a server becomes aware that its private key is compromised, it contacts its Certificate Authority, and that CA then lists the server’s certificate as revoked on the servers it has in place. On the client’s side, an additional check is made against the CA of the server it connects to, ensuring that the server’s certificate has not been revoked before its expiration date.

Let’s review quickly what PKI accomplishes with these features:

  • CAs are able to easily yet securely issue certificates that can be verified by almost any computer worldwide.
  • The issued certificates are guaranteed to be valid for a given range of time, and they correspond to a private key that only the server has.
  • Clients can connect securely to servers that have such certificates, be confident that the server is who they say they are, and send private information such as banking info or passwords without fear of eavesdropping.
  • If a server loses its private key, the certificate can be revoked and clients will no longer trust it (assuming they check revocation).

Extending PKI to Transport Layer Security

In many ways, Transport Layer Security (TLS) utilizes the existing Public Key Infrastructure to reach the end goal of secure, private connections. TLS mainly comprises the mechanisms that allow for encryption/decryption of traffic to and from peers, and the encryption used for the vast majority of a connection is incredibly simple–the client and server have two secret keys (symmetric ones, not public/private) that each uses to encrypt any data before they send it and decrypt data as it is received. Most of the complexity in TLS has to do with the “handshake”, or the way that the client and server both generate/exchange these symmetric keys without anyone else being able to intercept them.

For the sake of brevity, the TLS handshake won’t be looked into in detail here, but the main takeaways are:

  • The client gives the server a list of ciphers it supports
  • The server chooses a cipher, and gives the client its certificate so that the client can send the server information only it can decrypt
  • The client checks the certificate and verifies that the peer it’s communicating with is actually the server it wants to connect to (this would include all of the PKI verification above, including revocation checks)
  • From there, the client gives the server secret information that can be used to generate symmetric keys

OpenSSL handles all of the underlying encryption/decryption, certificate checks, and sending/receiving of data internally; for all intents and purposes, this is about as much as one would need to know about TLS in order to use the OpenSSL library’s API. There’s a definite advantage to understanding what’s going on inside (many exploits are found in the gaps between developers’ assumptions about a given library and the library’s actual guarantees and limitations), but for the sake of this tutorial, that’s all you’ll need to start writing a client using OpenSSL.

Tying it all in to the OpenSSL API

Now that we have an idea of the sort of features a connection will need to be secure, let’s take a look at the kind of steps we’ll need to take in order to complete a TLS connection with a peer and communicate securely. These steps may be implemented in a different order for other libraries; I’ve chosen to align them based on the order that they would appear if using OpenSSL.

Steps to a TLS Client Connection

In my next several posts, I’ll be going over how to implement each of these steps using the OpenSSL library. It won’t necessarily be one post for each step (my first blog post on the topic actually just covers the use of certain structs in OpenSSL), but they will flow sequentially from one to another. This series will go into some depth on each aspect (such as why different versions of TLS behave differently in terms of security, or which ciphers are insecure and why), and as a result it probably won’t be the best match for those who want to connect to an HTTPS server and then get on with other aspects of their C code. For that use case, I’ll have a much simpler cheat sheet laying out all of these steps in OpenSSL code (with some annotation).