calcurse Test Suite
===================

This directory contains a couple of test cases and some helpers that simplify
the task of creating new tests. Test cases are intended to verify a specific
set of behaviors and to prevent the reintroduction of bugs that have already
been fixed. The idea is that a new test should be added for each bug report
that is received and for each bug that is fixed in the development branch.

Running tests
-------------

The easiest way to run the tests is to run `make check`. This prepares and
compiles all needed components and runs all test cases. A summary is displayed
when all tests have finished.

You can also run tests manually. Test cases are usually shell scripts or
binaries. To run an individual test, just invoke the corresponding executable.
The `CALCURSE` and `DATA_DIR` environment variables can be used to specify an
alternative calcurse binary and data directory.

Note that passing a different data directory may cause failures, since many
tests are tailored to the data shipped in the `test/` directory of the test
suite:

    $ CALCURSE=../src/calcurse DATA_DIR="$HOME/.local/share/calcurse/" ./next-001.sh
    Running ./next-001.sh... FAIL

Writing tests
-------------

Writing test cases is as simple as creating a new shell script and adding some
test code. Success and failure are reported via the exit status: an exit status
of `0` indicates success, while a non-zero value indicates failure (the usual
exit code semantics on POSIX systems).
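
As an illustration, a minimal (purely hypothetical) test script might look like
this; the check itself is just a stand-in for whatever behavior your test
verifies:

    #!/bin/sh
    # Hypothetical example: the test passes if `echo` prints its argument.
    if [ "$(echo foo)" = 'foo' ]; then
      exit 0  # success
    else
      exit 1  # failure
    fi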

To enable a test, just add it to the `TESTS` variable in `test/Makefile.am`. If
your test case is written in a compiled language, you may need to add some
compilation directives as well. Note that only POSIX-compatible shell scripts
and C are accepted in mainline, so avoid other languages if you plan to get
your test case integrated upstream.
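
As an illustration, a hypothetical `test/Makefile.am` fragment might look as
follows; the new file names are made up, and the actual layout of the file may
differ:

    # Script tests are simply listed in TESTS (mytest-001.sh is hypothetical).
    TESTS = \
        todo-001.sh \
        mytest-001.sh

    # A compiled test needs build rules in addition to its TESTS entry.
    check_PROGRAMS = mytest-002
    mytest_002_SOURCES = mytest-002.c
    TESTS += mytest-002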

If your test case invokes the calcurse binary, please also read the following
sections.

The `run-test` helper
---------------------

The `run-test` helper is a simple C program that makes writing script-based
test cases much easier. Tests for the calcurse command line interface usually
invoke the calcurse binary with some special command line options and compare
the output with a hardcoded set of expected results. Unfortunately, comparing
the output of two commands is not exactly easy in POSIX shell: this is where
the `run-test` helper comes in handy.

The `run-test` helper accepts one or more executable files as parameters. It
invokes each of the specified programs twice: once passing `actual` as a
command line parameter and once passing `expected`. It then compares the output
of the two invocations. If the `actual` and `expected` outputs differ for any
of the programs passed to `run-test`, it displays `FAIL` and exits with a
non-zero exit status; otherwise, it returns success.

Here is a simple example of how to use `run-test`:

    #!/bin/sh

    if [ "$1" = 'actual' ]; then
      echo 'obrocodobro' | sed 's/o/a/g'
    elif [ "$1" = 'expected' ]; then
      echo 'abracadabra'
    else
      ./run-test "$0"
    fi

If the script is run without any parameters, it simply invokes `run-test`,
passing itself as a command line parameter (see the `else` branch). `run-test`
then reruns the script with `actual` as the first parameter, which runs the
actual test (see the `if` branch), and reruns it a second time with `expected`
as the first parameter, which makes the script print the expected result for
this test (see the `elif` branch). Finally, `run-test` compares both outputs,
prints a message indicating whether they are equal and sets the exit status
accordingly. Since `./run-test "$0"` is the last command executed when the
script is invoked without any parameters, this exit status becomes the exit
status of the original test script.
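
A test that drives the calcurse binary follows the same pattern: the `actual`
branch runs calcurse through the `CALCURSE` and `DATA_DIR` environment
variables described above, and the `expected` branch prints the expected
output. The following sketch is purely illustrative; the options used and the
fixture file name are not taken from an existing test:

    #!/bin/sh
    # Hypothetical sketch of a calcurse-based test.
    if [ "$1" = 'actual' ]; then
      # --read-only keeps the data directory untouched (see "Additional notes").
      "$CALCURSE" --read-only -D "$DATA_DIR" -t
    elif [ "$1" = 'expected' ]; then
      cat expected-todo.txt  # hypothetical fixture file
    else
      ./run-test "$0"
    fi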

You should stick to this strategy whenever you want to check the output of a
non-interactive calcurse instance in a test. Check the following tests for some
more examples:

* `todo-001.sh`
* `todo-002.sh`
* `todo-003.sh`

Using libfaketime
-----------------

Some tests may require faking the current date and time. We currently use
libfaketime to achieve this. Check the following files for examples:

* `appointment-001.sh`
* `next-001.sh`
* `range-001.sh`
* `range-002.sh`
* `range-003.sh`

NOTE: Do not forget to check for the presence of libfaketime at the beginning
of your test. Otherwise, your test is likely to fail on systems where
libfaketime is not installed or not supported.
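
One way to do this (a sketch; the skip convention used by the existing tests
may differ) is to probe for the `faketime` wrapper and bail out early when it
is missing:

    #!/bin/sh
    # Skip the test when the faketime wrapper from libfaketime is unavailable.
    if ! command -v faketime >/dev/null 2>&1; then
      echo "libfaketime not found - skipping test"
      exit 0
    fi

    # Run calcurse at a fixed, fake date so the output is reproducible.
    faketime '2020-01-01 00:00:00' "$CALCURSE" --read-only -D "$DATA_DIR" -a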

Additional notes
----------------

Most tests that invoke the calcurse binary pass the `--read-only` option to
make sure calcurse does not modify the data directory, which prevents
unexpected side effects. Please follow this guideline if you plan to submit
your patch upstream.