How to Run the Script Test System

This tutorial walks through how to run the script-based test system to test GMAT.

Getting the Files

The script-based test system is considered internal to the GMAT project, and is checked into our internal repository:

Check out this entire directory to your local machine. You will need to log in using your NDC credentials.
Some tests use additional data files that are located elsewhere in the same repository:

Check out the data files to your local machine as well. The location doesn't matter; it can be in a different place than the test system files checked out earlier.

Setting up the Environment

You will need to have the copy of GMAT that you are testing located somewhere on your machine. It is preferable if this is a fresh copy not being used for anything else; the test system moves some output files and deletes others.
Add the following lines to your gmat_startup_file.txt, where <testsys> is the location of the test system you checked out earlier:

GMAT_FUNCTION_PATH = <testsys>/input/Functions/GMAT/
MATLAB_FUNCTION_PATH = <testsys>/input/Functions/MATLAB/

Next, add the data files checked out above to the data folder of your copy of GMAT. This is most easily done by dragging the top-level folders into the GMAT data folder and letting the operating system merge them.
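The merge step can also be done programmatically. Here is a hedged sketch using temporary placeholder directories; in practice the source would be your checked-out data folders and the destination would be GMAT's data folder, and the folder and file names below are made up for illustration.

```python
import pathlib
import shutil
import tempfile

# Placeholder directories standing in for the checked-out test data (src)
# and GMAT's data folder (dst); the names here are hypothetical.
src = pathlib.Path(tempfile.mkdtemp())
dst = pathlib.Path(tempfile.mkdtemp())
(src / "planetary_coeff").mkdir()
(dst / "planetary_coeff").mkdir()
(src / "planetary_coeff" / "extra.dat").write_text("new file from the test data\n")
(dst / "planetary_coeff" / "existing.dat").write_text("file already shipped with GMAT\n")

# dirs_exist_ok=True (Python 3.8+) merges into existing folders instead of
# failing, which is the same effect as letting the OS merge the folders.
shutil.copytree(src, dst, dirs_exist_ok=True)

print(sorted(p.name for p in (dst / "planetary_coeff").iterdir()))
# ['existing.dat', 'extra.dat']
```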

Configuring the Run

The test system is configured by editing a run definition file (or an equivalent Matlab structure).
Create your own run definition file by copying the supplied bin/rundef.example.m to a new name.
The syntax of this file is explained by comments in the file itself, and by the Run Definition Syntax wiki page. You will need to change at least the RunDef.GmatExe field, and probably others as well.
Edit your run definition file to reflect the run you want to perform. You will most likely change the following fields:

Build                    The name of this run (e.g. 20110101 or MyTestRun)
GmatExe                  The location of your GMAT executable
Comparisons              Choose truth comparisons only, or regressions as well
RegressionBuild          Folder name for regression comparisons
Reporters                Choose screen only, or text file/email as well
Cases, Categories,
Folders, Requirements    Select a subset of tests to run (comment them out to run everything)

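Putting these together, a minimal run definition might look like the sketch below. This is a hand-written illustration based on the fields above; the paths and values are hypothetical, and bin/rundef.example.m in the repository remains the authoritative template.

```matlab
% Minimal run definition sketch -- hypothetical values
RunDef.Build = '20110101';                  % Name of this run
RunDef.GmatExe = 'C:\GMAT\bin\GMAT.exe';    % GMAT executable under test
RunDef.Comparisons = 'truth';               % Truth comparisons only
RunDef.Reporters = {'ScreenReporter'};      % Report results to the screen
RunDef.Cases = {'MyNewTestCase'};           % Run just this case...
RunDef.Categories = {};                     % ...by leaving the other
RunDef.Folders = {};                        % selectors empty (not
RunDef.Requirements = {};                   % commented out)
```
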
Running the Tests

Running the test system itself is simple, once everything is configured.
Open Matlab and change the working directory to <testsys>/bin.
Run the command gmattest('<run definition file>'), where <run definition file> is the path to the run definition file you configured above. Paths can be absolute or relative to the working directory.
At this point, the system will run and report progress to the screen. A full run with all tests included may take several hours.
If you want to repeat a portion of the run without rerunning all the tests, there are other commands available for running only a portion of the system:

gmattest run <run definition file>              Run everything (alternative to the command above)
gmattest runtests <run definition file>         Run the tests only, then stop
gmattest runcomparators <run definition file>   Run the truth/regression comparisons only
gmattest runreporters <run definition file>     Run the screen/file/email reporting only


tc File Syntax

File Format

Each test case run by GMAT's base (script) test system has an associated .tc file that describes the test and its properties and tells the system how to run it.
Every test in the system must have a .tc file associated with it; currently, the association is made by giving it the same name as the GMAT script (with a different extension, of course). For example, the .tc file associated with TestCase1.script is named TestCase1.tc.
The .tc file is written in a text format called YAML. See the sample below for a general template. This example shows the necessary fields and one way to format them. The YAML format is very flexible and can be written in multiple ways. The Wikipedia article has a good description of the syntax rules.

Sample File
# This is a comment (anything following a # to the end of the line).
#
# Parameters are set in key/value pairs:
#   Key: Value
#
# Lists of single values are surrounded by square brackets
# and comma-separated:
#   Key: [Value1, Value2]
#
# Blocks of key/value pairs are surrounded by curly brackets
# and comma-separated:
#   {Key: Value, Key: Value}
#
# Blocks can appear in lists by nesting the brackets:
#   [ {Key: Value, Key: Value}, {Key: Value, Key: Value} ]

# A list of open bugs related to this test.
# Examples:
#   Bugs: []
#   Bugs: [500]
#   Bugs: [12, 69, 3001]
Bugs: [2003, 1482]

# A list of category names (tags) for this test.
# Examples:
#   Categories: []
#   Categories: [Numerical]
#   Categories: [Numerical, System, Smoke]
Categories: [System, Numerical]

# A list of requirements exercised by this test.
# Examples:
#   Requirements: []
#   Requirements: [FRR-1]
#   Requirements: [FRR-5, FRC-12.1.1, FRC-8.2]
Requirements: [FRR-1.1.3, FRC-5]

# A comparator and truth file for the GmatLog.txt file (optional).
# Examples:
#   LogFile:
#     Comparator: ValidationComparator
#
#   LogFile:
#     Comparator: ValidationComparator
#     Truth: TruthFileName.truth
LogFile:
    Comparator: ValidationComparator
    Truth: TestCaseName_Log.truth

# A list of output file blocks, one for each file written by this test.
# Examples:
#   OutputFiles: []
#   OutputFiles: [{}]
#   OutputFiles: [{}, {}, {}]
OutputFiles: [
    {
        # The file name of the current output file.
        File: ...,
        # The file name of the truth file associated with the above output file.
        Truth: TestName_TruthFile1.truth,
        # The name of the comparator to use to compare the report and truth files.
        Comparator: ElementComparator,
        # The tolerance for the comparison (if applicable).
        # Examples:
        #   Tolerances: []
        #   Tolerances: [1e-3]
        #   Tolerances: [1e-6, 1e-3]
        Tolerances: [1e-10],
        # The name of the comparator to use for regression comparisons (optional).
        RegressionComparator: DiffComparator
    },
    {
        File: ...,
        Truth: TestName_TruthFile2.truth,
        Comparator: PVComparator,
        Tolerances: [1e-6, 1e-3]
    },
    {
        File: ...,
        Truth: [],
        Comparator: TrueFalseComparator,
        Tolerances: []
    }
]

# [optional] The crash detection timeout for this test (in minutes, default 10).
# Examples:
#   Timeout: 10
#   Timeout: 60


Script Test System Overview


GMAT has a new test system, written in MATLAB, that automatically runs all available test cases, compares the results to truth data, and reports whether each test passed or failed. The system has been designed to allow for quick and easy development of new test cases, without needing to worry about the internals of the system.
The following diagram explains the system at a high level:

The blocks in yellow represent input files that we need to write for each individual test case. These files are described in the following sections.
The test system itself is kept on GMAT's internal repository, located at the following URL:

You can use any Subversion client to download the files, such as TortoiseSVN, SmartSVN, or even the Subversion command-line client. You'll need your NDC login details to access the server.
When you check out the system, you'll see several directories worth of files. Here's a short explanation of what you'll see:

    bin/
        ...misc MATLAB folders...
        gmattest.m                      # The main test system script
        rundef.m                        # The run definition file (see below)
        setup.m                         # Run this before gmattest.m to set up path info
                                        # (or add to your startup.m)
        ...other .m files...            # Old or developmental scripts; ignore

    ...developer documentation...
        reqs/                           # This is where your test matrices will go

    extern/
        Commands/
            FRC-1_Optimize/             # These folders will contain the scripts used to
                                        # generate your truth data (such as STK/Connect
                                        # scripts)
            ...other FRC folders...
        Resources/
            FRR-1_SpacecraftOrbitState/ # same as above
            ...other FRR folders...

    input/
        Commands/
            FRC-1_Optimize/
                scripts/                # .tc files and .script files
                truth/                  # .truth files (truth data)
            ...other FRC folders...
        Resources/
            FRR-1_SpacecraftOrbitState/ # same as above
            ...other FRR folders...

    output/                             # This is where the output files, log files, and results
                                        # are stored. They are divided by the build specifier
                                        # (see the Run Definition section) and then by the
                                        # Commands/Resources structure like the extern and input
                                        # folders.

The .script File

To write a test case, you'll first need to write the GMAT script that contains the test you'd like to run. Describing how to write a GMAT script is outside the scope of this tutorial, but there are a few rules that we need to follow to make things go smoothly.

  • Rule 1: Include a comment section at the top of the script that describes what the test is designed to do and who wrote it. This will help us understand it if we ever need to change it in the future.
  • Rule 2: Trim unnecessary lines from the script (especially if you generated it through the GMAT GUI). This includes graphical elements like OpenGLPlots and unused parameter settings (like Spacecraft Attitude lines if your script isn't using attitude features).
  • Rule 3: Write output files to the default directory. When writing an output file from GMAT, set the Filename field to a simple file name (usually with a .report extension). Don't include any folders or other path information.
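To make Rules 1 and 3 concrete, the top of a test script and its output setup might look like the sketch below. The test and file names here are hypothetical; only the Filename field itself comes from Rule 3 above.

```
% EpochTest.script  (hypothetical test case name)
% Author: <your name>
% Purpose: describe here what this test is designed to verify (Rule 1)

Create ReportFile rpt;
GMAT rpt.Filename = 'EpochTest.report';   % a simple file name with no path (Rule 3)
```
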
The .truth File

Your script file will nearly always contain one or more output files that are written by GMAT. Our goal is to compare the contents of these files with a set of known-good truth files. This is what will tell us whether or not the test passed.
Once we've established which output file contains the data we're interested in, we need to recreate that file in roughly the same format, and with the same parameters, in another piece of software that we know has already been tested. For complex cases, this is usually STK or another established mission design tool. Sometimes for individual algorithms, this can be a custom MATLAB script that you've written and validated against a published source. For elementary operations, it could even be hand calculations.
Once you choose the method that makes sense for your case, you need to make sure that it outputs data in a format that's similar enough to the original output file the test system can compare the two. This is where the concept of a comparator comes in.
The test system relies on distinct bits of logic called comparators to do the actual comparison between a single output file and a single truth file and output the result. There are several comparators built in (and it's easy to add more) that interpret different types of files, whether they contain position/velocity data, dates, simple columns of data, or even just ones and zeros.
It helps to know which comparator you'll use before generating the output and truth files, so you can make sure the formats are compatible.
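To make the idea concrete, here is a minimal sketch of what an element-by-element comparator might do. This is an illustration of the concept only, not GMAT's actual ElementComparator, and it assumes both files hold whitespace-separated numeric columns.

```python
# A minimal element-by-element comparator sketch. Illustration only --
# NOT GMAT's actual ElementComparator.
# Assumption: both files contain whitespace-separated numeric columns.

def compare_files(output_lines, truth_lines, tolerance):
    """Return True if every numeric element agrees within tolerance."""
    if len(output_lines) != len(truth_lines):
        return False
    for out_line, truth_line in zip(output_lines, truth_lines):
        out_vals = [float(x) for x in out_line.split()]
        truth_vals = [float(x) for x in truth_line.split()]
        if len(out_vals) != len(truth_vals):
            return False
        if any(abs(o - t) > tolerance for o, t in zip(out_vals, truth_vals)):
            return False
    return True

# Two tiny "report files" that agree to within the tolerance
output = ["7000.0 0.0 1.0", "6999.9999999999 0.0 1.0"]
truth = ["7000.0 0.0 1.0", "7000.0 0.0 1.0"]
print(compare_files(output, truth, 1e-6))  # True
```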

You can view descriptions of all the built-in comparators at the Comparators reference page.
The .tc File

Each test case is represented by a .tc file (or test case file) that describes the test case to the system. This file is a simple text file with a specific format that describes things like the test case category, any associated bugs, and pointers to the output files and truth files generated by the case.
For our FRR-2 example, there is a test case named EpochInput_2004. Its test case file is named EpochInput_2004.tc and looks like this:

Categories: [Numerical]
Bugs: []
Requirements: [FRR-2]
OutputFiles: [
    {
        File: ...,
        Truth: EpochInput_2004.truth,
        Comparator: DateComparator,
        Tolerances: [1e-3]
    }
]

As you can see, the file contains a series of key names and values, along with some grouping characters. Square brackets ([]) group comma-separated lists of things, and curly brackets ({}) group sets of key/value pairs within a list. Though the above example is a simple case, you can see that the Categories key takes a list of category names, and the OutputFiles key can accept multiple inner blocks (each surrounded by curly brackets).

  • Categories is a list of category names that can be used to group test cases together. For example, all test cases that involve numeric comparisons can be put into the Numeric category, which allows us to run them all as a group, if desired. The Categories reference page has a list of category names we've been using.
  • Bugs is a list of bug numbers from GMAT's Bugzilla tracking system. This key is used to track which test cases have active bugs associated with them.
  • Requirements is a list of requirement numbers (such as FRR-2 or FRC-13.1.6) that this particular case tests. In the above example, we are testing all parts of the FRR-2 requirement in the same script.
  • OutputFiles is a list of blocks that represent each output file generated by the test case input script. So if your GMAT test script generates multiple output files that need to be processed, this section would have multiple blocks, each surrounded by curly brackets.
    • File is the file name of the current output file.
    • Truth is the file name of the truth file that the output file is being compared to.
    • Comparator is the name of the file comparator inside the test system that is doing the comparison between the output file and the truth file. The Comparators reference page has a list of existing comparators.
    • Tolerances is a list of numerical tolerances used by the comparator, if applicable. You'll fill this in based on the requirements of the comparator you've chosen. The Comparators reference page describes the requirements of each comparator in the system.
The full .tc file format is described in the tc File Syntax reference page.
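Because the grouping rules are so simple (square brackets for lists, curly brackets for key/value blocks), a small script can generate a .tc skeleton. The sketch below is not part of the test system; it only demonstrates the format, and the file names at the bottom are hypothetical.

```python
# Sketch: build the text of a .tc file from its field values.
# NOT part of the test system; demonstrates the flow-style format only.

def fmt_list(values):
    """Render a list as a flow sequence, e.g. [FRR-2, FRC-1.2]."""
    return "[" + ", ".join(str(v) for v in values) + "]"

def make_tc(bugs, categories, requirements, output_files):
    """Return .tc file text for the given field values."""
    lines = [
        "Bugs: " + fmt_list(bugs),
        "Categories: " + fmt_list(categories),
        "Requirements: " + fmt_list(requirements),
    ]
    # Each output file becomes a curly-bracketed block of key/value pairs
    blocks = ["{" + ", ".join(f"{k}: {v}" for k, v in of.items()) + "}"
              for of in output_files]
    lines.append("OutputFiles: [" + ", ".join(blocks) + "]")
    return "\n".join(lines) + "\n"

tc_text = make_tc(
    bugs=[],
    categories=["Numerical"],
    requirements=["FRR-2"],
    output_files=[{"File": "EpochInput_2004.report",   # hypothetical name
                   "Truth": "EpochInput_2004.truth",
                   "Comparator": "DateComparator",
                   "Tolerances": "[1e-3]"}],
)
print(tc_text)
```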
The Run Definition File

Once you've written a test and all its associated files (and you've gone through the rest of this guide), you'll need to actually run the test system to make sure everything works. The system uses a file called a Run Definition File (the example included in the system is called rundef.m) to tell it how to behave.
The run definition file is a MATLAB file that contains a structure named RunDef:

% GMAT test system run definition

RunDef.Build = 'MyTestRun';
RunDef.GmatExe = 'C:\Program Files\GMAT\2010-07-08\GMAT_2010-07-08_wx2810.exe';
RunDef.Type = 'truth';
RunDef.Reporters = {'ScreenReporter'};
RunDef.Cases = {'Target_DC_Command_MatlabFunction', 'ISS_Earth_0_0_0'};
RunDef.Categories = {};
RunDef.Folders = {};
RunDef.Requirements = {};

The fields are as follows:

  • Build: The name of this particular run (when we use this in production, this will be the GMAT build number, hence the name). This corresponds to the name of the output folder in the test system file tree (see above).
  • GmatExe: The full path to the GMAT executable you want to test.
  • Type: The type of comparisons to perform (only 'truth' is valid for now).
  • Reporters: A list of reporters to use for results output (keep this as 'ScreenReporter' to see the results in your MATLAB window).
  • Cases, Categories, Folders, Requirements: These selectors allow you to choose which tests to run, instead of running the entire system. A missing specifier (commented out) means that everything is included. An empty specifier ({}) means that nothing is included. Any given test will be run if it matches any of the specifiers, which means to run a single test with Cases, the rest must be empty and not commented out. The example above will only run 2 cases.
    • Cases: A list of case names to run (with no file extension)
    • Categories: A list of categories to run (from the .tc file's Categories field)
    • Folders: A list of folders to run tests from (e.g. 'Commands\FRC-1_Optimize')
    • Requirements: A list of requirements to run (e.g. 'FRR-2' or 'FRC-1.2'). These must match exactly for now (specifying 'FRR-2' won't run tests tagged only with 'FRR-2.1').
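The selection rules above can be sketched as a predicate. This is an illustration in Python, not the actual MATLAB implementation; the fields of the test record are assumptions made for the example.

```python
# Sketch of the test-selection logic: None models a commented-out
# specifier (matches every test), [] matches nothing, and a test runs
# if it matches ANY specifier. Illustration only.

def selected(test, cases, categories, folders, requirements):
    """Return True if the test matches any of the four specifiers."""
    specs = [
        (cases,        lambda s: test["name"] in s),
        (categories,   lambda s: bool(set(test["categories"]) & set(s))),
        (folders,      lambda s: test["folder"] in s),
        (requirements, lambda s: bool(set(test["requirements"]) & set(s))),
    ]
    for spec, matches in specs:
        if spec is None:      # commented out: everything is included
            return True
        if matches(spec):
            return True
    return False

test = {"name": "ISS_Earth_0_0_0", "categories": ["Numerical"],
        "folder": "Commands\\FRC-1_Optimize", "requirements": ["FRR-2"]}

# To run exactly one case, name it in Cases and leave the other
# specifiers empty (not commented out):
print(selected(test, ["ISS_Earth_0_0_0"], [], [], []))  # True
print(selected(test, ["SomeOtherCase"], [], [], []))    # False
```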
Running the Test System

Once all these files are written and in their proper places, it's time to actually run the system. This part is easy compared to everything else.
If you're testing a single case you've just written, you'll probably want to put its name in the RunDef.Cases field in the run definition file, and make sure the Categories, Folders, and Requirements fields are empty and uncommented.
Then, to run the system, change to the test/bin directory in MATLAB and run the gmattest.m script:

>> setup()
>> gmattest('.\rundef.m')

The results of the test will be printed to the MATLAB window.

If you have any problems with running the system, see Joel (S127).


Test Case Utility Script

The test system includes a utility script for editing script test system test cases (.tc files).

Requirements

The script is written in Python 3. If you have Python 3.x installed, you're set. There are no other dependencies.

Usage

The script is located in <Jazz>/trunk/test/script/bin/util. You need to either add that path to your system's PATH variable, or switch to that directory before using the script.
On Windows, execute the script by running it through the Python interpreter; on any other platform, you can simply run it directly.
Running the script with -h will display the following help summary:
> python -h
usage: [-h] [--add-bugs ADD_BUGS] [--add-categories ADD_CATEGORIES]
                 [-d] [--dry-run] [--verbose]
                 [testcase [testcase ...]]

Manipulate test case metadata.

positional arguments:
  testcase              white-space-separated list of input names

optional arguments:
  -h, --help            show this help message and exit
  --add-bugs ADD_BUGS   add specified bug IDs to test case (comma-separated
                        list)
  --add-categories ADD_CATEGORIES
                        add specified category names to test case (comma-
                        separated list)
  -d, --debug           turn on debug messages
  --dry-run             do everything except actually writing changes to the
  --verbose             turn on debug messages (same as --debug)
This command will search for the Events_Eclipse_Heo1 test case and tag it with the libEventLocator category:
> python --add-categories=libEventLocator Events_Eclipse_Heo1
This command will tag the specific file C:\TestSys\input\Resources\FRR-12_FiniteBurn\scripts\ with the Validation category:
> python --add-categories=Validation C:\TestSys\input\Resources\FRR-12_FiniteBurn\scripts\
You can also provide a list of test cases as standard input instead of as command arguments. This command (when run from a Unix shell or MSYS on Windows) will search for all cases that have "vf13ad" in the name and tag them with the libVF13Optimizer category:
$ find ../../input -iname '*vf13ad*.tc' | python --add-categories=libVF13Optimizer