C API Reference

Result Macros

These macros can be used in test functions to indicate a particular test result.
- NP_PASS
  Causes the running test to terminate immediately with a PASS result.
  You will probably never need to call this, as merely reaching the end of a test function without FAILing is considered a PASS result.

- NP_FAIL
  Causes the running test to terminate immediately with a FAIL result.

- NP_NOTAPPLICABLE
  Causes the running test to terminate immediately with a NOTAPPLICABLE result.
  A NOTAPPLICABLE result is counted towards neither failures nor successes; it is useful for tests whose preconditions are not satisfied and which have therefore not actually run.
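As a sketch of how these macros are typically used (the test function, the fixture file path, and the precondition check are all hypothetical; only the `np.h` header and the macros themselves come from the API above):

```c
#include <np.h>        /* NovaProva test API */
#include <sys/stat.h>

/* Hypothetical example: skip the test when its precondition
 * (a fixture file on disk) is not met. */
static void test_reads_config(void)
{
    struct stat sb;
    if (stat("fixtures/config.ini", &sb) < 0)
        NP_NOTAPPLICABLE;   /* precondition missing: neither pass nor fail */

    /* ... exercise the Code Under Test here ... */

    /* Reaching the end without FAILing counts as a PASS;
     * an explicit NP_PASS is almost never needed. */
}
```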
Assert Macros

These macros can be used in test functions to check a particular condition. If the check fails, they print a helpful message and FAIL the test. Treat them as you would the standard assert macro.
- NP_ASSERT(cc)
  Test that a given boolean condition is true, otherwise FAIL the test.

- NP_ASSERT_TRUE(a)
  Test that a given boolean condition is true, otherwise FAIL the test.
  This is the same as NP_ASSERT except that the message printed on failure is slightly more helpful.

- NP_ASSERT_FALSE(a)
  Test that a given boolean condition is false, otherwise FAIL the test.

- NP_ASSERT_EQUAL(a, b)
  Test that two signed integers are equal, otherwise FAIL the test.

- NP_ASSERT_NOT_EQUAL(a, b)
  Test that two signed integers are not equal, otherwise FAIL the test.

- NP_ASSERT_PTR_EQUAL(a, b)
  Test that two pointers are equal, otherwise FAIL the test.

- NP_ASSERT_PTR_NOT_EQUAL(a, b)
  Test that two pointers are not equal, otherwise FAIL the test.

- NP_ASSERT_NULL(a)
  Test that a pointer is NULL, otherwise FAIL the test.

- NP_ASSERT_NOT_NULL(a)
  Test that a pointer is not NULL, otherwise FAIL the test.

- NP_ASSERT_STR_EQUAL(a, b)
  Test that two strings are equal, otherwise FAIL the test.
  Either string can be NULL; NULL compares like the empty string.

- NP_ASSERT_STR_NOT_EQUAL(a, b)
  Test that two strings are not equal, otherwise FAIL the test.
  Either string can be NULL; NULL compares like the empty string.
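A minimal sketch showing several of these macros together; `make_greeting` is a hypothetical function under test, not part of the API:

```c
#include <np.h>
#include <string.h>

/* Hypothetical Code Under Test */
extern char *make_greeting(const char *name);

static void test_greeting(void)
{
    char *s = make_greeting("Ada");

    NP_ASSERT_NOT_NULL(s);                  /* pointer check */
    NP_ASSERT_STR_EQUAL(s, "hello, Ada");   /* NULL-safe string compare */
    NP_ASSERT_EQUAL(strlen(s), 10);         /* signed integer compare */
    NP_ASSERT_FALSE(s[0] == 'H');           /* boolean condition */
}
```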
Syslog Matching

These functions can be used in a test function to control how the test behaves if the Code Under Test attempts to emit messages to syslog. See Messages Emitted To syslog() for more information.
- void np_syslog_fail(const char *re)
  Set up to FAIL the test on syslog messages matching a regexp.
  From this point until the end of the test, if any code emits a message to syslog whose text matches the given regular expression, the test will FAIL immediately as if NP_FAIL had been called from inside syslog.
  Parameters:
    re - POSIX extended regular expression to match
- void np_syslog_ignore(const char *re)
  Set up to ignore syslog messages matching a regexp.
  From this point until the end of the test function, if any code emits a message to syslog whose text matches the given regular expression, nothing will happen. Note that this is the default behaviour, so this call is only useful in complex cases where multiple overlapping regexps are being used for syslog matching.
  Parameters:
    re - POSIX extended regular expression to match
- void np_syslog_match(const char *re, int tag)
  Set up to count syslog messages matching a regexp.
  From this point until the end of the test function, if any code emits a message to syslog whose text matches the given regular expression, a counter will be incremented and no other action will be taken. The counts can be retrieved by calling np_syslog_count. Note that tag does not need to be unique; in fact, always passing 0 is reasonable.
  Parameters:
    re - POSIX extended regular expression to match
    tag - tag for later retrieval of counts
- unsigned int np_syslog_count(int tag)
  Return the number of syslog matches for the given tag.
  Calculate and return the number of messages emitted to syslog which matched a regexp set up earlier using np_syslog_match. If tag is less than zero, all match counts will be returned; otherwise only the match counts for regexps registered with the same tag will be returned.
  Returns: count of matched messages
  Parameters:
    tag - tag to choose which matches to count, or -1 for all
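A sketch of counting an expected syslog message; `parse_record`, its return convention, and the message text are hypothetical, while the np_syslog_* calls are the API described above:

```c
#include <np.h>

/* Hypothetical Code Under Test that logs "invalid record" on bad input */
extern int parse_record(const char *line);

static void test_bad_record_is_logged(void)
{
    /* Count, but otherwise ignore, messages mentioning "invalid record". */
    np_syslog_match("invalid record", 0);

    NP_ASSERT_EQUAL(parse_record("not-a-record"), -1);

    /* Exactly one matching message should have been emitted. */
    NP_ASSERT_EQUAL(np_syslog_count(0), 1);
}
```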
Parameters

These functions can be used to set up parameterized tests. See Parameters for more information.
- NP_PARAMETER(nm, vals)
  Statically define a test parameter and its values.
  Defines a static char * variable called nm, and declares it as a test parameter on the testnode corresponding to the source file in which it appears, with a set of values defined by splitting the string literal vals on whitespace and commas. For example:

    NP_PARAMETER(db_backend, "mysql,postgres");

  This declares a variable called db_backend in the current file, and at runtime every test function in this file will be run twice, once with the variable db_backend set to "mysql" and once with it set to "postgres".
  Parameters:
    nm - C identifier of the variable to be declared
    vals - string literal with the set of values to apply
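A sketch of a parameterized test using the db_backend example above (the test body is hypothetical; the macro usage and the per-value re-running behaviour come from the description):

```c
#include <np.h>
#include <string.h>

/* Every test in this file runs once per listed value. */
NP_PARAMETER(db_backend, "mysql,postgres");

static void test_connect(void)
{
    /* db_backend is an ordinary static char * whose value varies per run. */
    if (strcmp(db_backend, "mysql") == 0) {
        /* ... exercise the MySQL path ... */
    } else {
        NP_ASSERT_STR_EQUAL(db_backend, "postgres");
        /* ... exercise the PostgreSQL path ... */
    }
}
```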
Dynamic Mocking

These functions can be used in a test function to dynamically add and remove mocks. See Mocking for more information.
- void np_unmock_by_name(const char *fname)
  Uninstall a dynamic mock by function name.
  Uninstalls any dynamic mocks installed earlier by np_mock_by_name for function fname. Note that dynamic mocks are automatically uninstalled at the end of the test, so calling np_unmock_by_name() might not even be necessary in your tests.
  Parameters:
    fname - the name of the function to unmock
- np_mock(fn, to)
  Install a dynamic mock by function pointer.
  Installs a temporary dynamic function mock. The mock can be removed with np_unmock(), or it can be left in place to be automatically uninstalled when the test finishes.
  Note that np_mock() may be called in a fixture setup routine to install the mock for every test in a test source file.
  Parameters:
    fn - the function to mock
    to - the function to call instead
- np_unmock(fn)
  Uninstall a dynamic mock by function pointer.
  Uninstalls any dynamic mocks installed earlier by np_mock for function fn. Note that dynamic mocks are automatically uninstalled at the end of the test, so calling np_unmock() might not even be necessary in your tests.
  Parameters:
    fn - the address of the function to unmock
- np_mock_by_name(fname, to)
  Install a dynamic mock by function name.
  Installs a temporary dynamic function mock. The mock can be removed with np_unmock_by_name(), or it can be left in place to be automatically uninstalled when the test finishes.
  Note that np_mock_by_name() may be called in a fixture setup routine to install the mock for every test in a test source file.
  Parameters:
    fname - the name of the function to mock
    to - the function to call instead
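A sketch of mocking a libc function so the Code Under Test sees a fixed clock; the choice of time() as the mocked function, and the fixed value, are illustrative assumptions:

```c
#include <np.h>
#include <time.h>

/* Replacement that always reports the same instant. */
static time_t mock_time(time_t *tp)
{
    time_t fixed = 1234567890;
    if (tp)
        *tp = fixed;
    return fixed;
}

static void test_timestamping(void)
{
    np_mock(time, mock_time);   /* redirect calls to time() */

    NP_ASSERT_EQUAL(time(NULL), 1234567890);

    np_unmock(time);            /* optional: auto-removed at end of test */
}
```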
Main Routine

These functions are for writing your own main() routine. You probably won't need to use these; see Main Routine.
- np_plan_t *np_plan_new(void)
  Create a new plan object.
  A plan object can be used to configure a np_runner_t object to run (or list to stdout) a subset of all the discovered tests. Note that if you want to run all tests, you do not need to create a plan at all; passing NULL to np_run_tests has that effect.
  Returns: a new plan object
- void np_plan_delete(np_plan_t *plan)
  Destroy a plan object.
  Parameters:
    plan - the plan object to destroy
- bool np_plan_add_specs(np_plan_t *plan, int nspec, const char **spec)
  Add a sequence of test specifications to the plan object.
  Each test specification is a string which matches a testnode in the discovered testnode hierarchy, and will cause that node (plus all of its descendant nodes) to be added to the plan. The interface is designed to take command-line arguments from your test runner program after options have been parsed with getopt. Alternatively, you can call np_plan_add_specs multiple times.
  Returns: false if any of the test specifications could not be found, true on success
  Parameters:
    plan - the plan object
    nspec - number of specification strings
    spec - array of specification strings
- void np_set_concurrency(np_runner_t *runner, int n)
  Set the limit on test job parallelism.
  Sets the maximum number of test jobs which will be run at the same time to n. The default value is 1, meaning tests will be run serially. A value of 0 is shorthand for one job per online CPU in the system, which is likely to be the most efficient use of the system.
  Parameters:
    runner - the runner object
    n - concurrency value to set
- void np_list_tests(np_runner_t *runner, np_plan_t *plan)
  Print the names of the tests in the plan to stdout.
  If plan is NULL, all the discovered tests will be listed in testnode tree order.
  Parameters:
    runner - the runner object
    plan - optional plan object
- bool np_set_output_format(np_runner_t *runner, const char *fmt)
  Set the format in which test results will be emitted.
  Available formats are:
  - "junit": a directory called reports/ will be created, containing XML files in jUnit format, suitable for use with upstream processors which accept jUnit files, such as the Jenkins CI server.
  - "text": a stream of tests and events is emitted to stdout, co-mingled with anything emitted to stdout by the test code. This is the default if np_set_output_format is not called.
  Note that the function name is a misnomer; it actually adds an output format, so if you call it twice you will get two sets of output.
  Returns: true if fmt is a valid format, or false on error
  Parameters:
    runner - the runner object
    fmt - string naming the output format
- int np_run_tests(np_runner_t *runner, np_plan_t *plan)
  Run all the tests described in the plan object.
  If plan is NULL, all the discovered tests will be run in testnode tree order.
  Returns: 0 on success, or non-zero if any tests failed
  Parameters:
    runner - the runner object
    plan - optional plan object
- np_runner_t *np_init(void)
  Initialise the NovaProva library.
  You should call np_init to initialise NovaProva before running any tests. It discovers tests in the current executable and returns a pointer to a np_runner_t object which you can pass to np_run_tests to actually run the tests.
  The first thing the function does is ensure that the calling executable is running under Valgrind, which involves re-running the process. Be aware that any code between the start of main and the call to np_init will be run twice in two different processes, the second time under Valgrind.
  The function also sets a C++ terminate handler using std::set_terminate() which handles any uncaught C++ exceptions, generates a useful error message, and fails the running test.
  Returns: a new runner object
- void np_done(np_runner_t *runner)
  Shut down the NovaProva library.
  Destroys the given runner object and shuts down the library.
  Parameters:
    runner - the runner object to destroy
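A minimal sketch of a custom main() routine tying these calls together, assuming the conventional np.h header; the command-line handling (treating each argument as a test specification) is an illustrative choice, not part of the API:

```c
#include <np.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int ec;
    np_runner_t *runner = np_init();   /* discovers tests, re-execs under Valgrind */
    np_plan_t *plan = NULL;

    if (argc > 1) {
        /* Run only the tests named on the command line. */
        plan = np_plan_new();
        if (!np_plan_add_specs(plan, argc - 1, (const char **)(argv + 1)))
            return EXIT_FAILURE;
    }

    np_set_concurrency(runner, 0);            /* one job per online CPU */
    np_set_output_format(runner, "junit");    /* adds jUnit XML under reports/ */

    ec = np_run_tests(runner, plan);          /* NULL plan runs everything */

    if (plan)
        np_plan_delete(plan);
    np_done(runner);
    return ec;
}
```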
Miscellany

- int np_get_timeout(void)
  Get the timeout for the currently running test.
  If called outside of a running test, returns 0. Note that the timeout for a test can vary depending on how it's run. For example, if the test executable is run under a debugger the timeout is disabled, and if it's run under Valgrind (which is the default) the timeout is tripled.
  Returns: timeout in seconds of the currently running test