Bitcoin fuzzers

I got some requests to fuzz Bitcoin, so I did. They can be found here:

I expect them to be merged into the main project soon.

So far only one issue has been found: . This code is currently unused and does not pose a security risk (forks of Bitcoin may want to check whether they are using it).

Judging by the number of issues found (1) after extensive fuzzing, the Bitcoin code appears to be exceptionally well-written. That is also exceptionally good news, because this code is used not only by Bitcoin but also by many, many altcoins, and thus guards billions and billions of dollars.

I’m actively working on expanding the fuzzers and their code coverage (as much as time permits).

Tip jar: 1BnLyXN2QwdMZLZTNqKqY48bU4hN2A3MwZ

In other news, I have a new OpenVPN vulnerability coming up that’s the worst yet in terms of severity but only affects a small number of users. To be announced.


11 remote vulnerabilities (inc. 2x RCE) in FreeRADIUS packet parsers

"FreeRADIUS is the most widely deployed RADIUS server in the world. It is the basis for multiple commercial offerings. It supplies the AAA needs of many Fortune-500 companies and Tier 1 ISPs."

FreeRADIUS asked me to fuzz their DHCP and RADIUS packet parsers in version 3.0.x (stable branch) and version 2.2.x (EOL, but receives security updates). 11 distinct issues that can be triggered remotely were found.

The following is excerpted from the full advisory, which I advise you to consult for more detailed descriptions of the issues at hand.

There are about as many issues disclosed on this page as in the previous ten years combined.

v2, v3: CVE-2017-10978. No remote code execution is possible. A denial of service is possible.
v2: CVE-2017-10979. Remote code execution is possible. A denial of service is possible.
v2: CVE-2017-10980. No remote code execution is possible. A denial of service is possible.
v2: CVE-2017-10981. No remote code execution is possible. A denial of service is possible.
v2: CVE-2017-10982. No remote code execution is possible. A denial of service is possible.
v2, v3: CVE-2017-10983. No remote code execution is possible. A denial of service is possible.
v3: CVE-2017-10984. Remote code execution is possible. A denial of service is possible.
v3: CVE-2017-10985. No remote code execution is possible. A denial of service is possible.
v3: CVE-2017-10986. No remote code execution is possible. A denial of service is possible.
v3: CVE-2017-10987. No remote code execution is possible. A denial of service is possible.
v3: CVE-2017-10988. No remote code execution is possible. No denial of service is possible. Exploitation does not cross a privilege boundary in a correct and realistic product deployment.

Contact me if

  • you are a vendor of an (open source) C/C++ application and want to eliminate security issues in your product
  • you or your company relies on an (open source) C/C++ application and you want to ensure that it is secure to use
  • you’d like to organize a crowdfunding campaign to eliminate security issues in an open source C/C++ application for the benefit of all who rely on it
  • you have any other reason to get in touch

I almost always find security issues.

guidovranken at gmail com

libFuzzer-gv: new techniques for dramatically faster fuzzing

It’s not how long you let it run, it’s how you wiggle your fuzzer

Sun Tzu

I spent some time hacking libFuzzer and pondering its techniques. I’ve come up with some additions that I expect will dramatically speed up finding certain edge cases.

First of all a huge vote of appreciation for Michał Zalewski and the people behind libFuzzer and the various sanitizers for their work. The remarkable ease by which fuzzers can be attached to arbitrary software to find world-class bugs that affect millions is at least as commendable as the technical underpinnings. The shoulders of giants.

You can find my fuzzer here:

Remember that these features are very experimental. Developers of libFuzzer and other fuzzers are encouraged to merge these features into their work if they like them.

Code coverage is just one way to guide the fuzzer

Code coverage is the chief metric that a fuzzer like libFuzzer uses to increase the likelihood that a code path resulting in an error is found. But the exact course of code execution is determined by many more factors. These factors are not accounted for by code coverage metrics alone. So I’ve implemented a number of additional program state signalers that help reach faulty code quickly. Without these, certain bugs will be uncovered only after a very long time of fuzzing.

Stack-depth-guided fuzzing

void recur(size_t depth, size_t maxdepth)
{
    if (depth >= maxdepth) {
        return;
    }
    recur(depth + 1, maxdepth);
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    size_t i, maxdepth = 0;

    for (i = 0; i < size; i++) {
        if (i % 3 == 0 && data[i] == 0xAA) {
            maxdepth += 1;
        }
    }

    maxdepth *= 400;
    recur(0, maxdepth);
    return 0;
}
Given enough 0xAA’s in the input, the program will crash due to a stack overflow (recursing too deep). With -stack_depth_guided=1 -use_value_profile=1 it usually takes about 0.5 – 5 seconds to crash on my system.

With just -use_value_profile=1 (and ASAN_OPTIONS=coverage=1:coverage_counters=1), it takes about 5-10 minutes. I think this is pure chance though; I’ve done runs where it was still busy after an hour.

static void getStackDepth(void) {
  size_t p;
  asm("movq %%rsp,%0" : "=r"(p));
  p = 0x8000000000000000 - p;
  if (p > fuzzer::stackDepthRecord) {
      fuzzer::stackDepthRecord = p;
      if (fuzzer::stackDepthBase == 0) {
          fuzzer::stackDepthBase = p;
      }
  }
}
(yes, this specific implementation works only on x86-64. If this doesn’t work for you, comment it out or change it to suit your architecture.)

If you need a fuzzer input that exceeds a certain stack depth as a file, you can lower the stack size with ulimit -s before running the fuzzer. It will crash and libFuzzer writes the fuzzer input to disk.

Crashes due to excessive recursion are, I think, an under-appreciated class of vulnerabilities. For server applications, it matters a lot whether an untrusted client can trigger a stack overflow on the server. These vulnerabilities are relatively rare, but I did manage to find a remote, unauthenticated crasher in high-profile software (Apache httpd CVE-2015-0228).

A lot of applications that parse context-free grammars, such as

  • Programming languages (an expression can contain an expression can contain an expression..)
  • Serialization formats (JSON: an array can contain an array can contain an array ..)

are in theory susceptible to this.

PS: you can use my tool to find call graph loops in binaries.

Intensity-guided fuzzing

This feature quantifies the number of instrumented locations that are hit in a single run: the aggregate count of all (non-unique) location hits.

So if a certain for loop of 1 iteration causes the coverage callback to be called 5 times, the same loop of 5 iterations results in an aggregate value of 5*5=25.

Great to find slow inputs.

Allocation-guided fuzzing

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    size_t i, alloc = 0;
    void* p;

    for (i = 0; i < size; i++) {
        if (i % 3 == 0 && data[i] == 0xAA) {
            alloc += 1;
        }
    }

    if (alloc >= 1350) {
        alloc = -1;
    }
    p = malloc(alloc);
    return 0;
}

Given enough 0xAA’s in the input, the program will perform an allocation of -1 bytes. AddressSanitizer does not tolerate this and it will crash.

With -alloc_guided=1 -use_value_profile=1, it usually takes 10-25 seconds on my system until it crashes (which is what we want).

With just -use_value_profile=1 (and ASAN_OPTIONS=coverage=1:coverage_counters=1), it was still running after more than an hour. It has very little to go on, and cannot figure out the logic.

I expect this feature will help to find certain threshold-constrained issues. For instance, an application runs fine if less than 8192 elements of something are involved. Beyond that threshold, it resorts to different, erroneous logic (maybe a wrong use of realloc()). This feature guides the fuzzer towards that pivot.

Aside from finding crashes, this feature is great at providing insight into the top memory usage of an application, and it automatically finds the worst case input in terms of heap usage (because fuzzing is guided by the malloc()s). If you can discover an input that makes a server application reserve 50MB of memory whereas the average memory usage for normal requests is 100KB, it’s not a vulnerability in the traditional sense (although it may be a very cheap DoS opportunity), but it might make you consider refactoring some code.

Custom-guided fuzzing

libFuzzer expects LLVMFuzzerTestOneInput to return 0 and will halt if it returns anything else; the return value isn’t used for anything beyond that at this moment. So I thought I’d put it to good use. Use -custom_guided=1.

You can now connect libFuzzer to literally anything. I’m experimenting with connecting to a remote server in LLVMFuzzerTestOneInput, hashing what the server returns, and returning the number of unique hashes produced so far. So I am in fact fuzzing a remote, uninstrumented application.

Disable coverage-guided fuzzing

Use -no_coverage_guided=1 to disable coverage-guided fuzzing. This is useful if you want to rely purely on, say, allocation guidance.

Techniques tried and discarded

Favoring efficient mutators

I’ve tried keeping a histogram for mutator efficacy. So each time a certain mutator (like EraseBytes, InsertBytes, …) was responsible for an increase in code coverage, I incremented its histogram value. Then, when the mutator for the next iteration had to be selected, I favored the most efficient mutator (but less efficient mutators could be chosen as well, just with a smaller likelihood).

Upon class construction I created a lin-log look-up table. For 5 mutators, it looks like this:

LUT = [0, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4]

Every iteration, I sorted the histogram and saved the order of the indices. So if the histogram looks like this:

Mutator 0: 100 hits
Mutator 1: 1000 hits
Mutator 2: 500 hits
Mutator 3: 1200 hits
Mutator 4: 10 hits

The sequence of indices, sorted ascending by hit count, is then:

LUT2 = [4, 0, 2, 1, 3]

To choose a new mutator:

curMutator = LUT2[ LUT[ rand() % 15 ] ]   /* 15 = number of entries in LUT, not numMutators */

So mutator 3 is now strongly favored (chance of 1 in 3), but there is still a 1 in 15 chance that mutator 4 gets chosen.

Unfortunately, this effort was in vain. It appeared to only slow down fuzzing. Apparently the fuzzer needs mutator diversity in order to reach new coverage. Or I have been overlooking something, in which case you are free to comment ;).

Unique call graph traversal

I figured that an approach that embeds both stack-depth-guidance and code intensity-guidance is to keep an array of code locations hit by the application in one run, hash the array, and use the number of unique hashes as guidance. Unfortunately this number increments for nearly every input, and soon memory is exhausted. Maybe a less granular coverage instrumentation could work.

Fuzzing tips du jour

  • Sanitizers and fuzzers are distinct technologies. You can fuzz without sanitizers (and sanitize without fuzzing): speed up corpus generation by an order of magnitude -> then test the corpus with sanitizers.
  • Developers: you can use fuzzing to verify application logic. Put an abort() where you normally print a debug message when an assert() failed that you believe should never fail. Now fuzz it.
  • Sometimes optimizations and compiler versions matter. gcc + ASAN detects an issue in the following program with -O0, but not with -O1 and higher: int main(){char* b;int l=strlen(b);} . clang doesn’t find it with any optimization flag. The reverse (crashes with -O3, not with -O1) can also happen (see my OpenSSH CVE-2016-10012). Security that relies on specific compiler versions and flags is probably a great way to contribute backdoored code to open-source software, if you are so inclined. Had I been a bad hombre, this is what I would do. Maintainers testing your code with their clang -O2 build system + regression tests + fuzzing rig will probably not detect your malicious code hiding in plain sight, but it is nonetheless going to creep into some percentage of binaries.


There’s been a lot of commercial interest in my activities after OpenVPN. Yes, I am available for contracting work.

I’ve recently completed work for a well-respected open-source application. I had a wonderful run: about 10 remote vulnerabilities in one week (release 17 Jul 2017).

I love to go full-out on software and exploit every technique known to me to squeeze out every vulnerability. I’ve got a lot of lesser-known tricks up my sleeve that I like to use.

Feel free to contact me: guidovranken @ gmail com and inquire about the possibilities.

fuzzing is literally magic

Which software should I audit next?

I’m always looking for new challenges. Of course I’m particularly interested in finding vulnerabilities that impact many users. I always practice responsible disclosure. So which open source application or library should I put on my dissection table? C/C++ software is my specialty so only software written in those languages please.

Please fill out this single-question survey. Thanks !

OpenVPN fuzzers released + notes

First of all a very heartfelt thanks to all those who donated in the wake of my OpenVPN findings.

Private Internet Access donated $1000! Another donor whose identity I know is Shawn C [a.k.a. “citypw”] of HardenedLinux. An unknown person or company donated $1000 as well. And about 15 others donated too. Thank you so much! Very inspiring, and your generous tokens of appreciation mean a lot to me. UPDATE: IPredator donated $7500! Wow.

I’ve uploaded my fuzzers here.

Maybe you think that OpenVPN has now been completely X-rayed by fuzzers; this is not the case. There are still significant portions of the code left to explore. It wouldn’t surprise me at all if more vulnerabilities emerge. This requires that more fuzzers are written for specific parts (for example, all the code in ssl.c!). I have disabled some ASSERTs and other code because they crash the fuzzer, for example here, and it requires more research to determine whether these asserts are reachable remotely. The fuzzing framework + IO abstractions I have published will make the creation of more fuzzers a relatively easy effort, as long as you know what you are doing (set variables passed to a function to sane values, comment out code where applicable, etc.).

If you find more vulnerabilities, report them to the OpenVPN security list before you disclose them, and enjoy your bragging rights and maybe a bounty from OSTIF.

There appears to be an impression that I’m in favor of abolishing manual code reviews. This is not true. In fact, I listed several types of vulnerabilities that cannot be found with fuzzers. All I’m saying is to be aware of the respective strengths and weaknesses of each strategy, and to use a sensible approach to find vulnerabilities. If you have a function that takes a null-terminated string followed by 500 lines of spaghetti code that does string operations like it’s 1983, I’d rather write a fuzzer in 3 minutes that finds 5 overflows in 5 minutes than fry my brain for an hour to discern the logic and manually concoct input that leads to corner cases. Of course, you can still do a manual examination once the fuzzer has run its course. But a fuzzer isn’t going to detect that you’re accidentally sending your private key to the peer due to program logic gone awry; that requires human intelligence to determine. So the upshot is: use both, and be sensible in striking a balance.

Interestingly, the wrong method to free a GENERAL_NAMES structure is not limited to OpenVPN. By using the fantastic Debian Codesearch I discovered that the issue was also present in FreeRADIUS.

There’s some cool stuff I’m working on (not related to OpenVPN); check back here in a few days.

Update: by mistake I reported that StrongSwan also might be vulnerable to the GENERAL_NAMES issue. This is not the case.

The OpenVPN post-audit bug bonanza

UPDATE: OpenVPN fuzzers now released.


I’ve discovered 4 important security vulnerabilities in OpenVPN. Interestingly, these were not found by the two recently completed audits of OpenVPN code. Below you’ll find mostly technical information about the vulnerabilities and about how I found them, but also some commentary on why commissioning code audits isn’t always the best way to find vulnerabilities.

Here you can find the latest version of OpenVPN:

This was a labor of love. Nobody paid me to do this. If you appreciate this effort, please donate BTC to 1BnLyXN2QwdMZLZTNqKqY48bU4hN2A3MwZ.


After a hardening of the OpenVPN code (as commissioned by the Dutch intelligence service AIVD) and two recent audits 1 2, I thought it was now time for some real action ;).

Most of these issues were found through fuzzing. I hate admitting it, but my chops in the arcane art of reviewing code manually, acquired through grueling practice, are dwarfed by the fuzzer in one fell swoop; the mortal mind can only retain and comprehend so much information at a time, and for programs that perform long cycles of complex, deeply nested operations it is simply not feasible to expect a human to perform an encompassing and reliable verification.

End users and companies who want to invest in validating the security of an application written in an “unsafe” language like C, such as those who crowd-funded the OpenVPN audit, should not request a manual source code audit, but rather task the experts with the goal of ensuring intended operation and finding vulnerabilities, using whatever strategy provides the optimal yield for a given funding window.

Upon first thought you’d assume both endeavors boil down to the same thing, but my fuzzing-based strategy is evidently more effective. What’s more, once a set of fuzzers has been written, these can be integrated into a continuous integration environment for permanent protection henceforth, whereas a code review only provides a “snapshot” security assessment of a particular software version.

Manual reviews may still be part of the effort, but only there where automation (fuzzing) is not adequate. Some examples:

  • verify cryptographic operations
  • other application-level logic, like path traversal (though a fuzzer may help if you’re clever)
  • determine the extent to which timing discrepancies divulge sensitive information
  • determine the extent to which size of (encrypted) transmitted data divulges sensitive information (see also). Beyond the sphere of cryptanalysis, I think this is an underappreciated way of looking at security.
  • applications that contain a lot of pointer comparisons (not a very good practice to begin with — OpenVPN is very clean in this regard, by the way) may require manual inspection to see if behavior relies on pointer values (example)
  • can memory leaks (which may be considered a vulnerability themselves) lead to more severe vulnerabilities? (eg. will memory corruption take place if the system is drained of memory?)
  • can very large inputs (say megabytes, gigabytes, which would be very slow to fuzz) cause problems?
  • does the software rely on the behavior of certain library versions/flavors? (eg. a libc function that behaves a certain way with glibc may behave differently with the BSD libc — I’ve tried making a case around the use of ctime() in OpenVPN)

So doing a code audit to find memory vulnerabilities in a C program is a little like asking car wash employees to clean your car with a makeup brush. A very noble pursuit indeed, and if you manage to complete it, the overall results may be even better than automated water blasting, but unless you have infinite funds and time, resources are better spent on cleaning the exterior with a machine, vacuuming the interior followed by an evaluation of the overall cleanliness, and acting where necessary.


Remote server crashes/double-free/memory leaks in certificate processing

Reported to the OpenVPN security list on May 13.


There are several issues in the extract_x509_extension() function in ssl_verify_openssl.c. This function is called if the user has used the ‘x509-username-field’ directive in their configuration.

GENERAL_NAMES *extensions;
int nid = OBJ_txt2nid(fieldname);

extensions = (GENERAL_NAMES *)X509_get_ext_d2i(cert, nid, NULL, NULL);

The first issue. The ‘fieldname’ variable is the value specified in the configuration file after the ‘x509-username-field’ directive. Different NIDs require different storage structures. That is to say, using a GENERAL_NAMES structure for every NID will result in spectacular crashes for some NIDs.

ASN1_STRING_to_UTF8((unsigned char **)&buf, name->d.ia5);
if (strlen(buf) != name->d.ia5->length) {
    msg(D_TLS_ERRORS, "ASN1 ERROR: string contained terminating zero");
} else {
    strncpynt(out, buf, size);
    retval = true;
}

The second issue. The return value of ASN1_STRING_to_UTF8 is not checked. It may return failure, in which case buf retains its value. This code is executed in a loop (for every GENERAL_NAME encoded in the certificate). So let’s consider this scenario:

First loop: ASN1_STRING_to_UTF8 succeeds, and buf is processed and freed in any of the following branches.
Second loop: ASN1_STRING_to_UTF8 fails, and buf is processed (use-after-free) and freed (double-free) in any of the following branches.

In spite of extensive fuzzing I could not trigger a single ASN1_STRING_to_UTF8 failure using OpenSSL 1.0.2l. It may or may not be possible with other versions of OpenSSL, LibreSSL, BoringSSL. This would NOT indicate a bug in those libraries — as an API, they are allowed to fail for any reason. The actual error is OpenVPN not checking the return value.

But what makes this interesting is that at the end of this function, the following attempt is made to free the ‘extensions’ variable:

sk_GENERAL_NAME_free(extensions);
This is wrong. The correct way to do this is to call GENERAL_NAMES_free. This is because sk_GENERAL_NAME_free frees only the containing structure, whereas GENERAL_NAMES_free frees the structure AND its items.

Hence, there is a remote memory leak here.

If you look in the OpenSSL source code, one way through which ASN1_STRING_to_UTF8 can fail is if it cannot allocate sufficient memory. So the fact that an attacker can trigger a double-free IF the server has insufficient memory, combined with the fact that the attacker can arbitrarily drain the server of memory, makes it plausible that a remote double-free can be achieved. But if a double-free is inadequate to achieve remote code execution, there are probably other functions, whose behavior is wildly different under memory duress, that you can exploit.

Furthermore, there are three more instances of ASN1_STRING_to_UTF8 in this file:

(in the function extract_x509_field_ssl)

tmp = ASN1_STRING_to_UTF8(&buf, asn1);
if (tmp <= 0) {
    return FAILURE;
}

(in the function x509_setenv_track)

 if (ASN1_STRING_to_UTF8(&buf, val) > 0)
    do_setenv_x509(es, xt->name, (char *)buf, depth);

(in the function x509_setenv)

if (ASN1_STRING_to_UTF8(&buf, val) <= 0)

Here, the code assumes that a return value that is negative or zero indicates failure, in which case ‘buf’ is not initialized and need not be freed. But in fact, this is ONLY the case if ASN1_STRING_to_UTF8 returns a negative value. A return value of 0 simply means a string of length 0, but memory is nonetheless allocated, so there are memory leaks here as well.

Remote (including MITM) client crash, data leak

Reported to the OpenVPN security list on May 19.


This only affects clients who use OpenVPN to connect through a proxy that uses NTLM version 2 authentication.

ntlm_phase_3() in ntlm.c:

if (( *((long *)&buf2[0x14]) & 0x00800000) == 0x00800000)  /* Check for Target Information block */
{
    tib_len = buf2[0x28];  /* Get Target Information block size */
    if (tib_len > 96)
    {
        tib_len = 96;
    }
    {
        char *tib_ptr = buf2 + buf2[0x2c];  /* Get Target Information block pointer */
        memcpy(&ntlmv2_blob[0x1c], tib_ptr, tib_len);  /* Copy Target Information block into the blob */
    }
}

‘buf2’ is an array of type char (signed), which contains data sent by the peer (the proxy).
‘tib_len’ is of type int.

First issue: remote crash. If buf2[0x28] contains a value of 0x80 or higher, ‘tib_len’ will be negative; both variables are signed, after all. This will cause memcpy to crash.

Second issue: data leak. buf2[0x2c] is used as an index into the buf2 array. Because buf2[0x2c] is a signed value, if it is >= 0x80 it will cause tib_ptr to point BEFORE buf2.
Memory at this location is then copied to ntlmv2_blob, which is then sent to the peer.
This constitutes a data leak.

Because the user’s password is also stored on the stack (the variable ‘pwbuf’ in this function), this may send the password or other sensitive information to the peer in cleartext.

These issues can be triggered by an actor in an active man-in-the-middle role.

Remote (including MITM) client stack buffer corruption

Reported to the OpenVPN security list on June 6.

This is exceedingly unlikely to occur.

The my_strupr function in ntlm.c is constructed as follows:

unsigned char *
my_strupr(unsigned char *str)
{
    /* converts string to uppercase in place */
    unsigned char *tmp = str;

    do {
        *str = toupper(*str);
    } while (*(++str));
    return tmp;
}

From this code it is obvious that if a string of length 0 is passed, OOB read(s) and possibly write(s) will occur.

In the case of a string of length 0, the null terminator is toupper()’ed, the pointer is incremented, the byte AFTER the null terminator is evaluated and, if not null, toupper()’ed, and so on until a second null byte is seen.

The function is invoked once:

my_strupr((unsigned char *)strcpy(userdomain, username));

Exploitation can only be achieved if:

  • NTLM version 2 is used.
  • The user specified a username ending with a backslash.
  • The (uninitialized) ‘username’ array consists entirely of non-null values.
  • The stack layout is such that the ‘username’ array is followed by a pointer, or something else that, if toupper()’ed, could cause arbitrary code execution.

This issue can be triggered by an actor in an active man-in-the-middle role.

Remote server crash (forced assertion failure)

Reported to the OpenVPN security list on May 20.

The OpenVPN server can be crashed by sending crafted data.

mss_fixup_ipv6() in mss.c:

if (buf_advance(&newbuf, 40))
{
    struct openvpn_tcphdr *tc = (struct openvpn_tcphdr *) BPTR(&newbuf);
    if (tc->flags & OPENVPN_TCPH_SYN_MASK)
    {
        mss_fixup_dowork(&newbuf, (uint16_t) maxmss - 20);
    }
}

in mss_fixup_dowork():

ASSERT(BLEN(buf) >= (int) sizeof(struct openvpn_tcphdr));

It is possible to construct a packet to the server such that this assertion will fail, and the server will stop.

Crash mbed TLS/PolarSSL-based server

Reported to the OpenVPN security list on May 22.


This requires that the --x509-track configuration option has been set. It affects OpenVPN 2.4 (not 2.3) compiled with mbed TLS/PolarSSL as the cryptography backend. The crafted certificate must have been signed by the CA.

When parsing the client certificate, asn1_buf_to_c_string() may be called (via x509_setenv_track -> do_setenv_name).
It iterates over an ASN1 string as follows:

for (i = 0; i < orig->len; ++i)
{
    if (orig->p[i] == '\0')
    {
        return "ERROR: embedded null value";
    }
}

If a null byte is found within this string (ASN1 allows this), the static string “ERROR: embedded null value” is returned. If no null byte is found, a heap-allocated string is returned.

The static string becomes problematic if string_mod() is called a while later, which attempts to modify the string. This will typically cause a crash, because the static string is stored in a read-only memory region.

Stack buffer overflow if long --tls-cipher is given

Reported to the OpenVPN security list on May 12

An excessively long --tls-cipher option can cause stack buffer corruption. This can only affect the user if they load untrusted options. Not considered an actual vulnerability because untrusted options may execute arbitrary code via other option directives by design (see commit message).

As a general rule, don’t load untrusted configuration files.

(v)s(n)printf hardening

Reported to the OpenVPN security list on May 23

This is not a vulnerability. It is a proposed hardening technique. My motivation can be read here:
The gist is that vsnprintf and related functions (upon which OpenVPN heavily relies) can, in theory, fail. The reasons for this are entirely inherent to the libc’s internal logic, and behavior may differ from one libc to the other. It must be noted that it is exceedingly unlikely that these functions fail in practice. However, should this happen, this could create dangerous data leaks of sensitive data. My proposed patch remedies this and ensures no data is ever leaked.

Other bugs

Some other minor bugs, that don’t impact security, have been found:

How I fuzzed OpenVPN

Fuzzing OpenVPN has been an extensive effort. You can’t just chain the fuzzer to arbitrary internal functions, for various reasons:

  • OpenVPN executes external programs like ipconfig and route to modify the system’s networking state. This is not acceptable within a fuzzing environment.
  • Direct resource access (files, networking) occurs throughout the code. You certainly don’t want the fuzzer to end up writing random files and sending data to random IP’s.
  • There are many ASSERT() statements throughout the code. These will cause a direct abort if the enclosed condition is false. This makes fuzzing impossible; you want the fuzzer to run for hours, not abort after 2 seconds.

To work around the first problem, I modified the source code such that in fuzzing mode, everything leading up to the actual execve() is executed (processing of arguments to the external program), but the actual execve() call is commented out. It will return success or failure based on a bit in the fuzzer input data.

To prevent access to resources, I implemented abstractions for libc functions. For example, recv() is now platform_recv(), and within platform_recv() I either call recv() directly (in non-fuzzing mode), or grab data from the fuzzer input data (in fuzzing mode). Similarly, through abstractions such as platform_read(), the application can open, read and write to files at will. The data that it expects is transparently pulled from the fuzzing input.

To deal with the assertions, there is no other way than to comment them out (#ifndef FUZZING .. ASSERT(condition) .. #else .. if (!(condition)) return; .. #endif), but only in certain cases.

I leave them in place in situations where the assertion condition depends directly on untrusted data. As an example, say the application recv()’s data from the peer, and then does ASSERT(recvd_data[3] == 0x20). It is important to leave this ASSERT in; it implies that the client can force an abort() on the server (or vice-versa); this can be considered a security issue.

But there are also ASSERTs that rely on variables within an internal data structure. I typically fill these data structures with fuzzer input. Rather than manually ensuring that these variables are valid and coherent with regard to the application’s logic, I simply change the ASSERTs that rely on this validity into ‘return’ where possible (and free objects, where applicable).

I’ve used libFuzzer combined with AddressSanitizer (ASAN), UndefinedBehaviorSanitizer (UBSAN) and MemorySanitizer (MSAN). ASAN cannot be combined with MSAN, and moreover MSAN does not work with libFuzzer (due to the apparent use of uninitialized memory within libFuzzer itself). So the way to go is to generate a corpus with the fuzzer, and then execute each of the resulting inputs with a MSAN-enabled standalone version.

There are various discrete components in OpenVPN that together constitute the application. There is an extensive suite of functions to deal with data buffers (buffer.c, buffer.h), an extensive option parser (options.c — parses the configuration file, command line arguments as well as commands pushed by server to client), a base64 encoder/decoder, etc.
Thanks to this relative modularity in OpenVPN it has been possible to use and abuse these components as if they were an API with relatively little effort.
My approach for testing all of these API-like components is as follows:

(assume 3 functions to be tested)

  • Get a number from the fuzzer input data in the range 0 – 2.
  • Call one of the three functions based on the number
  • Provide each function with parameters derived from the input data where dynamic parameters are required
  • Repeat the above process a number of times (for example, for (i = 0; i < 5; i++) { … })

This will cause an ever-permuting sequence of invocations. Essentially the coverage surface becomes (near-)absolute; that is to say, (almost) every conceivable way to use the API is a contender to be tested via this algorithm.

This approach is especially useful to test the functions that operate on the same structure. If there exists any sequenced set of functions that would cause memory violations, this setup is bound to find it. Of course, the actual use of any group of functions within the application is only a small subset of all permutations and parameters that the fuzzer sets out to test, and any mishaps the fuzzer finds for very particular circumstances may not actually occur within the code. But it is nonetheless good to know, because:

  • If you know that a certain sequence of calls and their parameters will lead to memory corruption, you can now perform a manual code analysis to see if this situation occurs.
  • Corner-case API bugs that are not invoked now, may become manifest in the future once code (with calls to the API) is added that does trigger these bugs.

In MSAN-enabled builds I serialize the output structure (if there is one) to /dev/null. For example, the options parser stores all its data in a struct options variable. MSAN does not immediately report the use of uninitialized data; it only does so if it is used in conditions that lead to branching (if (x) …) or when the data is used for I/O.

Hence, by serializing this data to /dev/null (normally a no-op), I force MSAN to detect uninitialized data. In C, there is no automatic way to serialize nested data structures (struct A contains a pointer to struct B etc), so for some structures I had to manually make a serialization stack of functions.

Limited fuzzing on a 32 bit platform has also been performed. This did not find any issues that do not occur on 64 bit.