C Unit Test Framework Documentation


Author
Gerhard Gappmeier

The C Unit Test Framework is a portable unit test framework written in ANSI C which is intended to test other portable C code. The design is heavily based on the Qt TestLib Framework and JUnit.

When developing portable ANSI C code it makes no sense to use a unit test framework that requires Java, C++, or any other third-party library like Qt. The unit tests must be as portable as the code under test, so that it is possible to cross-compile the tests together with your code and run them on the target device.

Features

Requirements

Notes on Windows: Windows is also supported, but because many C99 features are missing there, this project ships with some additional headers and sources to make it work on Windows. This violates the one-header-one-source rule, but it works.

Notes on UTEST_MAIN(): The test_main() implementation uses getopt(), which conforms to POSIX.2 and POSIX.1-2001. getopt() is not available on all systems (e.g. Windows), which is why this project ships with an alternative implementation that was tested on Windows. It is plain C and should also work on other systems. You can also use the test framework without UTEST_MAIN() by providing your own main function and calling testlib_run_tests().
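If you write your own main, a minimal version could look like the sketch below. Note that this is only a sketch: the exact signature and return value of testlib_run_tests() are assumptions here, so check testlib.h for the real interface.

#include <testlib.h>

extern void register_tests(void); /* registers all test functions */

/* sketch of a custom main replacing UTEST_MAIN(); no command line
 * parsing here, and the testlib_run_tests() signature is assumed */
int main(int argc, char *argv[])
{
    (void)argc;
    (void)argv;
    register_tests();
    return testlib_run_tests();
}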

Introduction

A unit test may contain multiple test suites, where each test suite contains a set of test cases. Typically you create a separate test suite for each module or data structure that should be tested, e.g. a StringTest, a LinkedListTest, etc.

When executing a test suite, all tests belonging to it are executed. This way you can run the tests for a single software component easily without specifying all the individual test cases.

For debugging purposes it is of course possible to run only a single test case.

How to write unit tests

The example below shows a very basic working unit test example.

#include <testlib.h>

void test_helloworld()
{
    UVERIFY2(true, "Hello World");
}

void register_tests()
{
    UREGISTER_TEST(test_helloworld);
}

The test function is a simple void function without arguments. For each test you create such a function. Within a test function you can perform multiple checks like UVERIFY2(). The table below gives an overview of the available check macros.

The next step is to provide a register_tests() function which is called by the main function. You can use the UREGISTER_TEST() and UREGISTER_DATADRIVEN_TEST() macros to make your functions known to the framework.

The macro UTEST_MAIN() provides a default main implementation which processes command line arguments and executes the tests.
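With the default main, a complete single-file test consists of the example above plus one extra line, typically at the end of the file (whether a trailing semicolon is needed depends on the macro definition):

#include <testlib.h>

/* ... test functions and register_tests() as shown above ... */

UTEST_MAIN()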

Macro            Description
UVERIFY          Checks if a condition is true or not.
UVERIFY2         Like UVERIFY but with additional info.
UCOMPARE         Compares an actual value with an expected value.
UCOMPAREF        Same as UCOMPARE but for floats.
UFUZZY_COMPAREF  Performs a fuzzy compare to avoid rounding errors.
UCOMPARESTR      Same as UCOMPARE but for strings.
UFATAL           Only for fatal errors. This stops test execution.

When comparing data values you should always use UCOMPARE() and not UVERIFY(x == 5): on failure UCOMPARE() reports that the compared values are not the same, whereas UVERIFY() only knows that the expression returned FALSE.
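For example, assuming a hypothetical function my_parse_int() under test (both failure messages below can be seen in the sample output later in this page):

int x = my_parse_int("5"); /* my_parse_int() is a made-up example */
UVERIFY(x == 5); /* on failure: 'x == 5' returned FALSE            */
UCOMPARE(x, 5);  /* on failure: Compared values are not the same.  */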

Writing data-driven unit tests

When you need to test a function with many different values to cover all corner cases, it is cumbersome to repeat the same test code again and again. Instead you should write a data-driven test.

With data-driven tests you write one single test function which is called multiple times by the test framework with different datasets. For a data-driven test you must provide an additional function which sets up the test data. By convention you should give this function the same name as the test function with a _data suffix appended.

The example below demonstrates how to implement a simple data-driven test.

void test_toupper_data()
{
    /* define the dataset layout: column name and printf-like format */
    testlib_add_column("string", "%s");
    testlib_add_column("result", "%s");
    /* add one row per dataset: the row name followed by the column values */
    testlib_add_row("all lower", "hello", "HELLO");
    testlib_add_row("mixed", "Hello", "HELLO");
    testlib_add_row("all upper", "HELLO", "HELLO");
    testlib_add_row("umlauts", "öäü", "ÖÄÜ");
}

void test_toupper()
{
    /* fetch the values of the current dataset */
    char *string = testlib_fetch("string");
    char *result = testlib_fetch("result");
    char tmp[50];

    string_toupper(tmp, sizeof(tmp), string);
    UEXPECT_FAIL("umlauts", "We can't handle umlauts yet. Will be fixed in the next release", Continue);
    UCOMPARESTR(tmp, result);
}

Note: the UEXPECT_FAIL() macro marks a single check as expected to fail, so that known bugs can be excluded from the result and are not reported as failures by the build bot. This approach is preferred over disabling or skipping test cases:

  1. As you can see in this example, it is possible to disable only one dataset; the other datasets are still tested.
  2. The problem still shows up in the test output and cannot be forgotten.

To register this data-driven test you use the UREGISTER_DATADRIVEN_TEST() macro, which takes two arguments: the test function and the data function.

UREGISTER_DATADRIVEN_TEST(test_toupper, test_toupper_data);
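A register_tests() function can mix plain and data-driven registrations, for example:

void register_tests()
{
    UREGISTER_TEST(test_helloworld);
    UREGISTER_DATADRIVEN_TEST(test_toupper, test_toupper_data);
}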

Running the unit tests

When running the test executable with the option -h, all available options are listed.

$> ./test -h

Output:

C Unit Test Framework 0.1
Copyright (C) 2014 Gerhard Gappmeier
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Usage: ../bld/test/test_Test [options] [testfunction] [testset]
  By default all testfunctions will be run.

Options:
-l      : print a list of all testfunctions
-v      : increase verbosity of output (can be specified multiple times)
-s      : decrease verbosity of output
-c WHEN : colorize the output. WHEN defaults to 'auto' or can be 'never' or 'always'.
          If WHEN=auto it tries to detect if stdout goes to a terminal. If so color output
          is enabled, otherwise it's disabled.
-h      : Shows this help

So to execute all tests simply run the executable without any arguments.

$> ./test

Output:

********* Start testing of test_Test *********
test_init
PASS   : test_toupper
PASS   : test_toupper
PASS   : test_toupper
XFAIL! : test_toupper(umlauts) Compared values are not the same.
   We can't handle umlauts yet. Will be fixed in the next release
   Loc: [/home/gergap/work/devel/unittest/src/test/main.c(42)]
PASS   : test_fopen
PASS   : test_memcpy
Prepare test_foo
XFAIL! : test_foo() 'f != NULL' returned FALSE.
   Will fix in the next release
   Loc: [/home/gergap/work/devel/unittest/src/test/main.c(84)]
Cleanup test_foo
PASS   : test_multiplication
PASS   : test_multiplication
PASS   : test_multiplication
PASS   : test_multiplication
PASS   : test_multiplication
PASS   : test_multiplication
Prepare test_multiplicationf
PASS   : test_multiplicationf
PASS   : test_multiplicationf
PASS   : test_multiplicationf
PASS   : test_multiplicationf
PASS   : test_multiplicationf
PASS   : test_multiplicationf
Cleanup test_multiplicationf
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
PASS   : test_encoder
test_cleanup
Test finished: 7 tests of 7 were run.
  7 PASSED.
  0 FAILED.
  0 SKIPPED.
********* Finished testing of test_Test *********
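
To run only a single test case, e.g. when debugging, pass the name of the test function as argument (see the [testfunction] parameter in the usage output above):

$> ./test test_toupper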

Integration in CMake

On the CMake dashboard (CDash) it is not useful to see every single test case in the test result table, nor is it useful to see only one result for the whole unit test. The recommended way is to show each test suite as a separate row in the test result table. By clicking on the test name you can then inspect the output of that test and see the result of each individual test case.

To achieve this you need to add an ADD_TEST call for each test suite. This can be done using a single large unit test executable with an argument which specifies the suite to run, or by generating an independent test executable for each test suite. This is shown below.
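For the single-executable variant the suite can be selected via the testset argument shown in the usage output above. A sketch with placeholder suite and target names:

# one dashboard row per test suite, all served by one executable
add_test(StringTest     test_Test StringTest)
add_test(LinkedListTest test_Test LinkedListTest)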

To be able to compile individual test executables you can use the UTEST_MAIN() macro. This provides a default main implementation.

Step 1 - Enable testing support

To enable the CMake test target (make test) you need to call enable_testing() in your top-level CMake file.

project(demo C)
cmake_minimum_required(VERSION 2.8)

# enable test target
enable_testing()

# build my cool libraries
add_subdirectory(foo)
add_subdirectory(bar)

Step 2 - Enable CDash support (optional)

If you want to use CDash to upload test results to the dashboard via 'make Nightly', 'make Continuous', etc., you need to include the CTest module.

project(demo C)
cmake_minimum_required(VERSION 2.8)

# enable test target
enable_testing()

# enable CDash
include(CTest)

# build my cool libraries
add_subdirectory(foo)
add_subdirectory(bar)

Step 3 - Add unit tests

Using the provided unittest CMake module it is easy to add new tests.

project(demo C)
cmake_minimum_required(VERSION 2.8)

# configure search path to find unittest.cmake
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_SOURCE_DIR}/../cmake)
# include unittest module
include(unittest)

# enable test target
enable_testing()

# enable CDash
include(CTest)

# configure search path to unittest headers
include_directories(testlib)

# build my cool libraries
add_subdirectory(foo)
add_subdirectory(bar)

# add unit tests
ADD_UNIT_TEST(Foo footest.c)
ADD_UNIT_TEST(Bar bartest.c)
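
The ADD_UNIT_TEST() macro itself comes from the unittest.cmake module included above. As an illustration only, and not the actual module code, it can be thought of as a wrapper along these lines:

# sketch: a minimal ADD_UNIT_TEST(name source...) could expand to this
macro(ADD_UNIT_TEST name)
    add_executable(${name}_Test ${ARGN}) # ARGN holds the source files
    add_test(${name} ${name}_Test)       # one CTest entry per suite
endmacro()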