C Unit Test Framework  0.1
testlib.h File Reference
#include <test_config.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdint.h>


Macros

#define UTEST_MAIN()
 Implements a main() function which executes the tests.
 
#define UFATAL(condition)
 Checks the given condition and terminates the application if it is false.
 
#define UVERIFY(condition)
 The UVERIFY() macro checks whether the condition is true.
 
#define UVERIFY2(condition, info)
 The UVERIFY2() macro behaves exactly like UVERIFY(), except that it outputs a verbose message when the condition is false.
 
#define UCOMPARE(actual, expected)
 The UCOMPARE() macro compares an actual value to an expected value.
 
#define UCOMPARE64(actual, expected)
 UCOMPARE64 behaves exactly like UCOMPARE but works with 64-bit integers instead of int.
 
#define UCOMPAREF(actual, expected)
 UCOMPAREF behaves exactly like UCOMPARE but works with floats instead of ints.
 
#define UFUZZY_COMPAREF(actual, expected)
 UFUZZY_COMPAREF compares the floating point values actual and expected and succeeds if they are considered equal.
 
#define UCOMPARESTR(actual, expected)
 UCOMPARESTR behaves exactly like UCOMPARE but works with C strings instead of ints.
 
#define UCOMPAREMEM(actual, actuallen, expected, expectedlen)
 UCOMPAREMEM behaves exactly like UCOMPARESTR but takes explicit lengths instead of relying on zero-terminated strings.
 
#define UREGISTER_NAME(name)
 Gives the test a human-readable name.
 
#define UREGISTER_INIT(func)
 Registers a global test init function.
 
#define UREGISTER_CLEANUP(func)
 Registers a global test cleanup function.
 
#define UREGISTER_TEST(test)
 Registers a test function.
 
#define UREGISTER_DATADRIVEN_TEST(test, testdata)
 Registers a data-driven test function.
 
#define UREGISTER_TEST2(test, init, cleanup)
 Registers a test function with init and cleanup functions.
 
#define UREGISTER_DATADRIVEN_TEST2(test, testdata, init, cleanup)
 Registers a data-driven test function with init and cleanup functions.
 
#define UEXPECT_FAIL(dataIndex, comment, mode)
 The UEXPECT_FAIL() macro marks the next UCOMPARE() or UVERIFY() as an expected failure.
 
#define UBENCHMARK   for (test_benchmark_start(); test_benchmark_done(); test_benchmark_next())
 Performs a benchmark test.
 
 

Enumerations

enum  testlib_fail_mode { Abort = 0, Continue }
 This enum describes the modes for handling an expected failure of the UVERIFY() or UCOMPARE() macros.
 

Functions

void testlib_add_column (const char *name, const char *fmt)
 Adds a new data column for data-driven tests.
 
void testlib_add_row (const char *name,...)
 Adds a new data row for data-driven tests.
 
void * testlib_fetch (const char *name)
 Returns the data for the given column name of the current test dataset.
 
int testlib_fetch_int (const char *name)
 Behaves exactly like testlib_fetch() but returns an int value previously stored in a %i or %d column.
 
unsigned int testlib_fetch_uint (const char *name)
 Behaves exactly like testlib_fetch() but returns an unsigned int value previously stored in a %u column.
 
double testlib_fetch_double (const char *name)
 Behaves exactly like testlib_fetch() but returns a double value previously stored in a %f column.
 
void testlib_run_tests (const char *testname, const char *testset)
 Executes registered test functions.
 
void testlib_list_tests ()
 Lists all registered test functions.
 
int testlib_verbose ()
 Increases the verbosity level to get more output.
 
int testlib_silent ()
 Decreases the verbosity level to get less output.
 
struct testlib_stat * testlib_result ()
 Returns the test result statistics.
 
int testlib_main (int argc, char *argv[])
 Test main implementation used by UTEST_MAIN.
 
 

Macro Definition Documentation

#define UBENCHMARK   for (test_benchmark_start(); test_benchmark_done(); test_benchmark_next())

Performs a benchmark test.

The UBENCHMARK macro calls the test code n times to get reasonable performance results. The macro determines the correct n automatically by starting with n = 1, then doubling n (2, 4, 8, 16, ...) until the measured time exceeds a configurable threshold (50 ms by default).

It then outputs the result like this.

  RESULT : 0.000254 msecs per iteration (total: 66.518000, iterations: 262144)
  

Example:

void test_foo()
{
    struct list l;
    struct list_el *e;

    UVERIFY(list_load_from_file(&l, "/tmp/foo.txt"));

    // perform the normal unit test
    e = list_find(&l, "key");
    UVERIFY(e != NULL);
    UCOMPARESTR(list_name(e), "last element");

    // now let's measure the find performance
    UBENCHMARK {
        list_find(&l, "key");
    }

    list_clear(&l);
}

#define UCOMPARE (   actual,
  expected 
)
Value:
do { \
if (testlib_compare(actual, expected, #actual, #expected, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

The UCOMPARE() macro compares an actual value to an expected value.

If actual and expected are identical, execution continues. If not, a failure is recorded in the test log and the test won't be executed further.

UCOMPARE tries to output the contents of the values if the comparison fails, so it is visible from the test log why the comparison failed.

It is important that the first argument is the actual value and the second argument is the expected value. Otherwise the output might be confusing.

Example:

tmp = a * b;
UCOMPARE(tmp, result);
#define UCOMPAREF (   actual,
  expected 
)
Value:
do { \
if (testlib_comparef(actual, expected, #actual, #expected, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

UCOMPAREF behaves exactly like UCOMPARE but works with floats instead of ints.

UCOMPAREF compares the values using the == operator. This often makes no sense for computed values due to rounding errors in floating point numbers. Therefore UFUZZY_COMPAREF is provided.

  1. Use UCOMPAREF for exact bitwise comparison, e.g. when testing encoding and decoding routines of network protocols where you expect that the data is not modified.
  2. Use UFUZZY_COMPAREF for computed values in algorithms.

Example:

void test_encoder()
{
double val = testlib_fetch_double("val");
double tmp;
char buf[20];
int ret;
ret = encode_double(buf, sizeof(buf), val);
UVERIFY2(ret == 0, "encode_double failed");
ret = decode_double(buf, sizeof(buf), &tmp);
UVERIFY2(ret == 0, "decode_double failed");
UCOMPAREF(tmp, val);
}
#define UCOMPAREMEM (   actual,
  actuallen,
  expected,
  expectedlen 
)
Value:
do { \
if (testlib_comparemem(actual, actuallen, expected, expectedlen, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

UCOMPAREMEM behaves exactly like UCOMPARESTR but takes explicit lengths instead of relying on zero-terminated strings.

Example:

memcpy(dst, src, sizeof(dst));
UCOMPAREMEM(dst, sizeof(dst), src, sizeof(src));
#define UCOMPARESTR (   actual,
  expected 
)
Value:
do { \
if (testlib_comparestr(actual, expected, #actual, #expected, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

UCOMPARESTR behaves exactly like UCOMPARE but works with C strings instead of ints.

Example:

string_toupper(tmp, sizeof(tmp), string);
UEXPECT_FAIL("umlauts", "We can't handle umlauts yet. Will be fixed in the next release", Continue);
UCOMPARESTR(tmp, result);
#define UEXPECT_FAIL (   dataIndex,
  comment,
  mode 
)
Value:
do { \
testlib_expect_fail(dataIndex, comment, mode); \
} ONCE

The UEXPECT_FAIL() macro marks the next UCOMPARE() or UVERIFY() as an expected failure.

Instead of adding a failure to the test log, an expected failure will be reported.

The parameter dataIndex describes for which entry in the test data the failure is expected. Pass an empty string ("") if the failure is expected for all entries or if no test data exists.

comment will be appended to the test log for the expected failure.

mode is a testlib_fail_mode and sets whether the test should continue to execute or not.

Rationale: If a test is failing, it is better to mark it as XFAIL than to disable or skip it. This way the test passes, but the error is still reported so that it cannot be forgotten. This is typically used for non-critical problems that cannot be easily fixed and so have been deferred to be fixed in the next version.

Example:

UEXPECT_FAIL("", "Will fix in the next release", Abort);
UVERIFY(f != NULL);
#define UFATAL (   condition)
Value:
do { \
if (testlib_fatal(condition, #condition, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

Checks the given condition and terminates the application if false.

Only use this for fatal errors where continuing test is not possible.
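For instance, a sketch of typical use when a test fixture must be available (the file name is illustrative):

```c
FILE *f = fopen("testdata.bin", "rb");
UFATAL(f != NULL); /* without the test data, no test can run at all */
```
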

#define UFUZZY_COMPAREF (   actual,
  expected 
)
Value:
do { \
if (testlib_fuzzy_comparef(actual, expected, #actual, #expected, \
__FILE__, __LINE__)) { \
return; \
} \
} ONCE

UFUZZY_COMPAREF compares the floating point values actual and expected and succeeds if they are considered equal; otherwise a failure is recorded and the test won't be executed further.

Note that comparing values where either actual or expected is 0.0 will not work. The solution to this is to compare against values greater than or equal to 1.0.

// Instead of comparing with 0.0
UFUZZY_COMPAREF(tmp, result);         // this will fail
// Adding 1 to both values fixes the problem
UFUZZY_COMPAREF(1 + tmp, 1 + result); // this will pass


The two numbers are compared in a relative way, where the exactness is stronger the smaller the numbers are.

Example:

void test_multiplicationf()
{
double a = testlib_fetch_double("a");
double b = testlib_fetch_double("b");
double result = testlib_fetch_double("result");
double tmp;
tmp = a * b;
UFUZZY_COMPAREF(1 + tmp, 1 + result);
}
#define UREGISTER_CLEANUP (   func)
Value:
do { \
testlib_register_cleanup(func); \
} ONCE

Registers a global test cleanup function.

This is called after the last test has finished.

You can free the resources here that were allocated in your test_init function.

Example:

UREGISTER_CLEANUP(test_cleanup);
See Also
UREGISTER_INIT()
#define UREGISTER_DATADRIVEN_TEST (   test,
  testdata 
)
Value:
do { \
testlib_register_datadriven_test(test, #test, testdata, #testdata, 0, 0); \
} ONCE

Registers a data-driven test function.

All registered functions will be executed by testlib_run_tests().

Example:

UREGISTER_DATADRIVEN_TEST(test_toupper, test_toupper_data);
#define UREGISTER_DATADRIVEN_TEST2 (   test,
  testdata,
  init,
  cleanup 
)
Value:
do { \
testlib_register_datadriven_test(test, #test, testdata, #testdata, init, cleanup); \
} ONCE

Registers a data-driven test function.

This macro behaves exactly like UREGISTER_DATADRIVEN_TEST(), but additionally allows you to specify init and cleanup functions which are called before and after the test function, respectively. Note that init is called once before the test function is called for the first time, and cleanup is called once after the test function has been called for the last time. Thus init and cleanup are not called for every dataset. Note also that init is called after testdata is called.

Example:

UREGISTER_DATADRIVEN_TEST2(test_toupper, test_toupper_data, test_init, test_cleanup);
#define UREGISTER_INIT (   func)
Value:
do { \
testlib_register_init(func); \
} ONCE

Registers a global test init function.

This is called before the first test starts.

You can use this to allocate resources that are required for all tests.

Example:

UREGISTER_INIT(test_init);
See Also
UREGISTER_CLEANUP()
#define UREGISTER_NAME (   name)
Value:
do { \
testlib_register_name(name); \
} ONCE

Gives the test a human-readable name.
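As a sketch, the macro would typically be used in the registration function alongside the other UREGISTER_* macros (the name string is illustrative):

```c
void register_tests(void)
{
    UREGISTER_NAME("string utilities");
    /* register the individual tests here ... */
}
```
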

#define UREGISTER_TEST (   test)
Value:
do { \
testlib_register_test(test, #test, 0, 0); \
} ONCE

Registers a test function.

All registered functions will be executed by testlib_run_tests().

Example:

UREGISTER_TEST(test_fopen);
UREGISTER_TEST(test_memcpy);
#define UREGISTER_TEST2 (   test,
  init,
  cleanup 
)
Value:
do { \
testlib_register_test(test, #test, init, cleanup); \
} ONCE

Registers a test function.

This macro behaves exactly like UREGISTER_TEST(), but additionally allows you to specify init and cleanup functions which are called before and after the test function, respectively.

Example:

UREGISTER_TEST2(test_fopen, test_init, test_cleanup);
#define UTEST_MAIN ( )
Value:
int main(int argc, char *argv[]) { \
return testlib_main(argc, argv); \
}

Implements a main() function which executes the tests.

You still have to provide a function register_tests(). It is called from the generated main() function and is the place to register all test functions.
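Putting it together, a minimal test file might look like this (the test name is illustrative):

```c
#include "testlib.h"

void test_addition(void)
{
    UCOMPARE(1 + 1, 2);
}

/* called from the main() generated by UTEST_MAIN() */
void register_tests(void)
{
    UREGISTER_TEST(test_addition);
}

UTEST_MAIN()
```
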

#define UVERIFY (   condition)
Value:
do { \
if (testlib_verify(condition, #condition, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

The UVERIFY() macro checks whether the condition is true.

If it is true, execution continues. If not, a failure is recorded in the test log and the test won't be executed further. The test framework will continue with the next test.

Example:

UVERIFY(result != -1);
#define UVERIFY2 (   condition,
  info 
)
Value:
do { \
if (testlib_verify2(condition, #condition, info, __FILE__, __LINE__)) { \
return; \
} \
} ONCE

The UVERIFY2() macro behaves exactly like UVERIFY(), except that it outputs a verbose message when condition is false.

The message is a plain C string.

Example:

int result = sem_init(...);
UVERIFY2(result != -1, "sem_init failed");

Enumeration Type Documentation

This enum describes the modes for handling an expected failure of the UVERIFY() or UCOMPARE() macros.

Enumerator
Abort 

Aborts the execution of the test.

Use this mode when it doesn't make sense to execute the test any further after the expected failure.

Continue 

Continues execution of the test after the expected failure.

Function Documentation

void testlib_add_column ( const char *  name,
const char *  fmt 
)

Adds a new data column for data-driven tests.

Call this function only in a test-data preparation function.

Parameters
name  Name of the column; also used in testlib_fetch().
fmt   Printf-like format specifier.

The following format specifiers are currently supported:

Fmt      Description
%s       C string
%p       pointer type
%i / %d  int
%u       unsigned int
%f       double

Note: In C float values passed via ... to a variadic function are promoted to double. char and short are promoted to int. That's why there are only testlib_fetch_int and testlib_fetch_double functions and no functions for float, short and char.

void testlib_add_row ( const char *  name,
  ... 
)

Adds a new data row for data-driven tests.

Call this function only in a test-data preparation function.

Parameters
name  Name of the test set. This is used in the output to show with which dataset the test fails.
...   The subsequent arguments; their number and types must match the registered columns.
See Also
testlib_add_column
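As a sketch, a data preparation function registering two string columns and a few rows could look like this (the column and row names are illustrative):

```c
void test_toupper_data(void)
{
    testlib_add_column("input", "%s");
    testlib_add_column("expected", "%s");

    testlib_add_row("all lower", "hello", "HELLO");
    testlib_add_row("mixed case", "HeLLo", "HELLO");
    testlib_add_row("empty", "", "");
}
```
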
void* testlib_fetch ( const char *  name)

Returns the data for the given column name of the current test dataset.

Only call this in data-driven test functions.

The test function is called once for each dataset.

Parameters
name  The column name registered with testlib_add_column().
Returns
Returns the stored pointer or NULL if not found. This function can be used for %s and %p format specifiers.
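For example, a data-driven test function fetching two %s columns might look like this (string_toupper is a hypothetical function under test):

```c
void test_toupper(void)
{
    const char *input = testlib_fetch("input");
    const char *expected = testlib_fetch("expected");
    char buf[64];

    /* hypothetical function under test */
    string_toupper(buf, sizeof(buf), input);
    UCOMPARESTR(buf, expected);
}
```
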
double testlib_fetch_double ( const char *  name)

This function behaves exactly like testlib_fetch() but returns a double value previously stored in a %f column.

Parameters
name  The column name registered with testlib_add_column().
Returns
the stored value or 0.0 if not found.
int testlib_fetch_int ( const char *  name)

This function behaves exactly like testlib_fetch() but returns an int value previously stored in a %i or %d column.

Parameters
name  The column name registered with testlib_add_column().
Returns
the stored value or 0 if not found.
unsigned int testlib_fetch_uint ( const char *  name)

This function behaves exactly like testlib_fetch() but returns an unsigned int value previously stored in a %u column.

Parameters
name  The column name registered with testlib_add_column().
Returns
the stored value or 0 if not found.
void testlib_list_tests ( )

Lists all registered test functions.

int testlib_main ( int  argc,
char *  argv[] 
)

Test main implementation used by UTEST_MAIN.

This is a simplified main implementation for systems that have no getopt() implementation, or for when you simply want to avoid the additional code of getopt(). There is no command-line option parsing, but it still allows you to specify one test name.

struct testlib_stat* testlib_result ( )

Returns the test result statistic.

void testlib_run_tests ( const char *  testname,
const char *  testset 
)

Executes registered test functions.

If testname is NULL all functions are executed, otherwise only the one matching testname. If testset is NULL all datasets are used for data-driven tests, otherwise only the one matching testset.
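A sketch of both filtering modes (the test and dataset names are illustrative):

```c
/* run every registered test with every dataset */
testlib_run_tests(NULL, NULL);

/* run only test_toupper, and only with the "umlauts" dataset */
testlib_run_tests("test_toupper", "umlauts");
```
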

int testlib_silent ( )

Decrease verbosity level to get less output.

int testlib_verbose ( )

Increase verbosity level to get more output.