Script Testing

Script Testing Concepts

GMAT's script test system is a custom test environment written in MATLAB. It automatically runs all test cases, compares the results to both external benchmark data and past GMAT output, and sends nightly test reports to the entire development team. Nightly regression test reports contain high-level statistics on the number of tests run and the number of failing tests. The reports also document test case changes, including tests that have switched from passing to failing, tests that have switched from failing to passing, and tests whose output changed but that still fail or pass.
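The pass/fail bookkeeping behind those nightly reports can be sketched as follows (a Python illustration of the reporting logic only; the real system is MATLAB, and the function name here is hypothetical):

```python
def regression_delta(previous, current):
    """Classify status changes between two nightly runs.

    previous and current map test names to True (pass) or False (fail).
    Returns the tests that switched pass -> fail and fail -> pass.
    Hypothetical helper; it only illustrates the report logic.
    """
    both = set(previous) & set(current)
    newly_failing = sorted(t for t in both if previous[t] and not current[t])
    newly_passing = sorted(t for t in both if not previous[t] and current[t])
    return newly_failing, newly_passing

# Two made-up nightly runs
prev = {'CaseA': True, 'CaseB': False, 'CaseC': True}
curr = {'CaseA': False, 'CaseB': True, 'CaseC': True}
print(regression_delta(prev, curr))  # (['CaseA'], ['CaseB'])
```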

Custom script verification test types ensure that all features implemented in GMAT function correctly and within tolerance, and allow for rapid development of new test cases without needing to understand the internals of the test system. The system supports built-in "Comparators" such as the ability to compare position/velocity files to external benchmark data, compare generic data files to user-defined tolerances, perform file difference compares, and search system log files for warnings or error messages. Custom comparators are relatively easy to implement when a test requires one.

Testers can quickly run test cases by software requirement ID, test name, or feature group, among other criteria. Additionally, script tests are classified by type (Numeric, Validation, System, Stress, etc.), and subsets can be run easily by specifying the test categories in the test system configuration. Finally, there are several higher-level classifications of script tests used during the development process before committing new code, or used nightly to determine whether code additions or changes have caused unexpected adverse effects. These higher-level categories, such as "Smoke" and "System", are groupings of lower-level test cases and give developers and testers insight into the system without running the entire test suite.


Script Test System Overview

GMAT's script test system is written in MATLAB and automatically runs all available test cases, compares the results to truth data, and reports whether each test passed or failed. The system has been designed to allow for quick and easy development of new test cases, without needing to worry about the internals of the system. The following diagram explains the system at a high level:

The blocks in yellow represent input files required for each individual test case. These files are described in the following sections. The test system is version controlled in an internal SVN repository.

You can use any Subversion client to download the files. You'll need your NDC credentials to access the server. When you check out the system, you'll see several directories worth of files. Here's a brief overview:

test/
    bin/
        ...misc MATLAB folders...
        gmattest.m                      # The main test system script
        rundef.m                        # The run definition file (see below)
        setup.m                         # Run this before gmattest.m to set up path info
                                        # (or add to your startup.m)
        ...other .m files...            # Old or developmental scripts; ignore

    doc/
        ...developer documentation...
        reqs/                           # This is where your test matrices will go

    extern/
        Commands/
            FRC-1_Optimize/             # These folders will contain the scripts used to
                                        # generate your truth data (such as STK/Connect
                                        # scripts)
            ...other FRC folders...
        Resources/
            FRR-1_SpacecraftOrbitState/ # same as above
            ...other FRR folders...

    input/
        Commands/
            FRC-1_Optimize/
                scripts/                # .tc files and .script files
                truth/                  # .truth files (truth data)
            ...other FRC folders...
        Resources/
            FRR-1_SpacecraftOrbitState/ # same as above
            ...other FRR folders...

    output/                             # This is where the output files, log files, and results
                                        # are stored. They are divided by the build specifier
                                        # (see the Run Definition section) and then by the 
                                        # Commands/Resources structure like the extern and input
                                        # folders.

Running the test system

Set up your system

  1. Check out the test system using Subversion. Warning: the test system contains both GUI and script tests. It is unlikely you need the GUI tests, and their data totals many gigabytes, so we recommend checking out only the trunk/test/script folder. This is an internal system, so you will need to log in using your NDC credentials.
  2. Install MATLAB.

  3. Locate the copy of GMAT you want to test. It's usually best if this is a secondary copy of GMAT, since the test system will copy a lot of data files and create a lot of output during the testing process.

Run smoke tests

Smoke tests are a small set of tests that exercise critical features of the entire system. Running these tests takes just a few minutes, and is a great way to make sure things are generally running smoothly.

  1. Start MATLAB.
  2. In MATLAB:
    1. Change the working directory to the test system bin directory (the directory that contains RunSmokeTests.m).
    2. Run:
      preparegmat('/path/to/gmat')

      where the path is to the top-level GMAT folder. This configures your copy of GMAT for testing.
    3. Open RunSmokeTests.m.
    4. Change the value of RunDef.GmatExe to the path to the copy of GMAT you would like to test. This should be the full path to GMAT.exe.
    5. Run RunSmokeTests.

You will see some output as the test system proceeds, followed in a few minutes by a lengthy report. Most tests should pass, though a small number may fail if certain plugins or features are disabled (like the MATLAB interface or FminconOptimizer).

Run a custom test

Once you have the smoke tests running, it's straightforward to run a custom test.

  1. Start MATLAB.
  2. In MATLAB:
    1. Copy rundef.example.m to a new name (e.g. myrundef.m).
    2. Open the new file. This file contains a series of parameters that control the test run (i.e. the run definition). Each parameter is heavily commented with examples.
    3. Uncomment RunDef.GmatExe and set it to the path to the copy of GMAT you would like to test. This should be the full path to GMAT.exe.
    4. Set the filters near the bottom (RunDef.Cases, RunDef.Categories, etc.) to select the tests you want to run. By default, these filters operate in an "OR" mode, meaning a test will be run if it matches any of the filters. To change the mode, edit RunDef.FilterMode.
  3. Run:
    preparegmat('/path/to/gmat')

    where the path is to the top-level GMAT folder. This configures your copy of GMAT for testing.
  4. Run:
    gmattest myrundef.m
    (or the name of your run definition file). This runs the test system.

Advanced configuration

The test system is configured by editing a run definition file (or an equivalent MATLAB structure). Create your own run definition file by copying the supplied bin/rundef.example.m to a new name.


The syntax of this file is explained by comments in the file itself, and by the Run Definition Syntax wiki page. You will need to change at least the RunDef.GmatExe field, and probably others as well.


These are the most commonly-used fields:

Build

The name of this run (e.g. 20110101 or MyTestRun)

GmatExe

The location of your GMAT executable

Comparisons

Choose truth comparisons only, or regressions as well

RegressionBuild

Folder name for regression comparisons

Reporters

Choose screen only, or text file/email as well

Cases, Categories, Folders, Requirements

Select a subset of tests to run (comment them out to run everything)

Advanced run options

Running the test system itself is simple, once everything is configured:

  1. Open MATLAB and change the working directory to <testsys>/bin.
  2. Run the command:
    gmattest <run definition file>
    where <run definition file> is the path to the run definition file you configured above. Paths can be absolute or relative to the working directory.

At this point, the system will run and report progress to the screen. A full run with all tests included may take several hours.
If you want to repeat a portion of the run without rerunning all the tests, there are other commands available for running only a portion of the system:

gmattest run <run definition file>

Run everything (alternative to the command above)

gmattest runtests <run definition file>

Run the tests only, then stop

gmattest runcomparators <run definition file>

Run the truth/regression comparisons only

gmattest runreporters <run definition file>

Run the screen/file/email reporting only

Advanced Performance Configuration for Small Test Sets

When running a small set of script tests, the test system can be quite slow because, by default, it searches ALL .tc files in the test system for cases that match your RunDef configuration. If you only need to run a few tests, you can configure the RunDef file carefully for a dramatic performance improvement. When applying filters in 'and' mode, the test system applies the Folders filter first. Folders that do not match the criteria are not loaded, so their .tc files and other data are also not loaded, saving considerable time. For efficient execution of a small set of tests:

  1. Include the tests you want to run in the Cases list

  2. Include the folders in which those cases are found in the Folders list
  3. Leave Requirements or Categories filters empty in the RunDef file

Below is an example RunDef.m file that efficiently runs the test named STM_GMAT_PD45 which is located in the FRR-13_DynamicsModels folder.

RunDef.Build       = 'NightlyBuild';
RunDef.GmatExe     = 'c:\PATHTOYOURBINDIRECTORY\GMAT.exe';
RunDef.Modes       = {'script'};
RunDef.Comparisons = {'truth'};
RunDef.Reporters    = {'ScreenReporter'};
RunDef.Cases        = {'STM_GMAT_PD45'};
RunDef.Folders      = {'FRR-13_DynamicsModels'};
RunDef.FilterMode   = 'and';
RunDef.Requirements = {};
RunDef.Categories   = {};

Advanced Configuration for Running Alpha/Internal/Under-Development Tests

The test system supports options that allow exclusion of plugin tests, old tests, and tests under development, among others. This enables the test system to be easily configured to test different release configurations, including a full release, a production release, or a public release.

% Tell the test system to run/not run tests that have alpha features
runDefMan.SetRunAlphaTests(true);
% Tell the test system to run/not run tests that have internal features
runDefMan.SetRunInternalTests(true);
% Tell the test system to exclude tests in folders that
% are either old or under development
runDefMan.SetRunExcludedFolders(false);
% Pass in RunDef so manager can configure it according to settings above.   
RunDef = runDefMan.PrepareRunDef(RunDef);

Note: the test system identifies tests that are internal or alpha by collecting script snippet names from two functions located in the folder ScriptTest\bin\testconfig, as shown below. These functions must be updated to include new alpha and internal components so that the test system will skip those tests when requested.

% Returns the names of Resources, Commands, and field names that are alpha.
function [alphaResources, alphaCommands, alphaSnippets] = GetAlphaScriptConfig()

% Returns the names of Resources, Commands, and field names that are internal.
function [internalResources, internalCommands, internalSnippets] = GetInternalScriptConfig()

Configuration for Nightly Regression Testing

Nightly regression tests require that GMAT's data configuration be consistent with the data used to create the truth data contained in the test system. For example, some tests require additional ephemeris files, 3-D models, or other configuration data. Configuration is performed using the function preparegmat.m, located in the directory \test\script\bin. When you execute the function, all data from the \test\script\gmatdata folder is copied into your local GMAT configuration. You must provide the location of your local GMAT configuration, as shown below.

>> preparegmat('C:\Path\To\GMAT')

Writing tests

Files Used By the Test System

The "tc" File

Each test case is represented by a .tc file (or test case file) that describes the test case to the system. This file is a simple text file with a specific format that describes things like the test case category, any associated bugs, and pointers to the output files and truth files generated by the case.
For our FRR-2 example, there is a test case named EpochInput_2004. Its test case file is named EpochInput_2004.tc and looks like this:

Categories: [Numerical]
Bugs: []
Requirements: [FRR-2]
OutputFiles: [
   {
       File: EpochInput_2004.report,
       Truth: EpochInput_2004.truth,
       Comparator: DateComparator,
       Tolerances: [1e-3]
   }
]

As you can see, the file contains a series of key names and values, along with some grouping characters. Square brackets ([]) group comma-separated lists of things, and curly brackets ({}) group sets of key/value pairs within a list. Though the above example is a simple case, you can see that the Categories key takes a list of category names, and the OutputFiles key can accept multiple inner blocks (each surrounded by curly brackets).

  • Categories is a list of category names that can be used to group test cases together. For example, all test cases that involve numeric comparisons can be put into the Numeric category, which allows us to run them all as a group, if desired. The Categories reference page has a list of category names we've been using.

  • Bugs is a list of bug numbers from GMAT's Bugzilla tracking system. This key is used to track which test cases have active bugs associated with them.

  • Requirements is a list of requirement numbers (such as FRR-2 or FRC-13.1.6) that this particular case tests. In the above example, we are testing all parts of the FRR-2 requirement in the same script.

  • OutputFiles is a list of blocks that represent each output file generated by the test case input script. So if your GMAT test script generates multiple output files that need to be processed, this section would have multiple blocks, each surrounded by curly brackets.

    • File is the file name of the current output file.

    • Truth is the file name of the truth file that the output file is being compared to.

    • Comparator is the name of the file comparator inside the test system that is doing the comparison between the output file and the truth file. Comparators are located in the <testsys>/bin/+comparator directory, with names like "PercentDiffComparator". To get more information about these, run "help comparator.<ComparatorName>" from a MATLAB command window, from the <testsys>/bin directory:

       >> help comparator.PVComparator
    • Tolerances is a list of numerical tolerances used by the comparator, if applicable. You'll fill this in based on the requirements of the comparator you've chosen. The help text for each comparator describes its requirements.

The full .tc file format is described in the tc File Syntax reference page.

Each test case run by GMAT's base (script) test system has an associated .tc file that describes the test and its properties and tells the system how to run it.
Every test in the system must have a .tc file associated with it; currently, the association is made by giving it the same name as the GMAT script (with a different extension, of course). For example, the .tc file associated with TestCase1.script is named TestCase1.tc.
The .tc file is written in a text format called YAML. See the sample below for a general template. This example shows the necessary fields and one way to format them. The YAML format is very flexible, and can be written in multiple ways. The Wikipedia article has a good description of the syntax rules.

# This is a comment (anything followed by a # to the end of the line).
#
# Parameters are set in key/value pairs:
# Key: Value
#
# Lists of single values are surrounded by square brackets
# and comma-separated:
# Key: [Value1, Value2]
#
# Blocks of key/value pairs are surrounded by curly brackets
# and comma-separated:
# {Key: Value, Key: Value}
#
# Blocks can appear in lists by nesting the brackets:
# [ {Key: Value, Key: Value}, {Key: Value, Key: Value} ]

#
# A list of open bugs related to this test.
# Examples:
# Bugs: []
# Bugs: [500]
# Bugs: [12, 69, 3001]
#
Bugs:  [2003, 1482]

#
# A list of category names (tags) for this test.
# Examples:
# Categories: []
# Categories: [Numerical]
# Categories: [Numerical, System, Smoke]
#
Categories:  [System, Numerical]

#
# A list of requirements exercised by this test.
# Examples:
# Requirements: []
# Requirements: [FRR-1]
# Requirements: [FRR-5, FRC-12.1.1, FRC-8.2]
#
Requirements:  [FRR-1.1.3, FRC-5]

#
# A comparator and truth file for the GmatLog.txt file (optional).
# Examples:
# LogFile:
#     Comparator: ValidationComparator
#
# LogFile:
#     Comparator: ValidationComparator
#     Truth: TruthFileName.truth
#
LogFile:
    Comparator: ValidationComparator
    Truth: TestCaseName_Log.truth

#
# A list of output file blocks for each file written by this test.
# Examples:
# OutputFiles: []
# OutputFiles: [{}]
# OutputFiles: [{}, {}, {}]
#
OutputFiles:  [
    {
        # The file name of the current output file.
        File:  TestName_OutputFile1.report,

        # The file name of the truth file associated with the above output file.
        Truth: TestName_TruthFile1.truth,

        # The name of the comparator to use to compare the report and truth files.
        Comparator: ElementComparator,

        # The tolerance for the comparison (if applicable).
        # Examples:
        # Tolerances: []
        # Tolerances: [1e-3]
        # Tolerances: [1e-6, 1e-3]
        Tolerances: [1e-10],

        # The name of the comparator to use for regression comparisons (optional).
        RegressionComparator: DiffComparator
    },
    {
        File:  TestName_OutputFile2.report,
        Truth: TestName_TruthFile2.truth,
        Comparator: PVComparator,
        Tolerances: [1e-6, 1e-3]
    },
    {
        File:  TestName_OutputFile3.report,
        Truth: [],
        Comparator: TrueFalseComparator,
        Tolerances: []
    }
]

#
# [optional] The crash detection timeout for this test (in minutes, default 10).
# Examples:
# Timeout: 10
# Timeout: 60
#
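The script-to-.tc name association described above can be sketched in Python (illustrative only; the test system itself is MATLAB):

```python
from pathlib import Path

def tc_file_for(script_path):
    """Return the .tc file that pairs with a GMAT test script.

    The association is purely by file name; only the extension differs.
    (Illustrative sketch of the naming convention.)
    """
    return Path(script_path).with_suffix('.tc')

tc = tc_file_for('scripts/TestCase1.script')
print(tc.name)  # TestCase1.tc
```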

The Script File

To write a test case, you'll first need to write the GMAT script that contains the test you'd like to run. Describing how to write a GMAT script is outside the scope of this tutorial, but there are a few rules that we need to follow to make things go smoothly.

  • Rule 1: Include a comment section at the top of the script that describes what the test is designed to do and who wrote it. This will help us understand it if we ever need to change it in the future.

  • Rule 2: Trim unnecessary lines from the script (especially if you generated it through the GMAT GUI). This includes graphical elements like OpenGLPlots and unused parameter settings (like Spacecraft Attitude lines if your script isn't using attitude features).

  • Rule 3: Write output files to the default directory. When writing an output file from GMAT, set the Filename field to a simple file name (usually with a .report extension). Don't include any folders or other path information.

The Truth File

Your script file will nearly always contain one or more output files that are written by GMAT. Our goal is to compare the contents of these files with a set of known-good truth files. This is what will tell us whether or not the test passed.
Once we've established which output file contains the data we're interested in, we need to recreate that file in roughly the same format, and with the same parameters, in another piece of software that we know has already been tested. For complex cases, this is usually STK or another established mission design tool. Sometimes for individual algorithms, this can be a custom MATLAB script that you've written and validated against a published source. For elementary operations, it could even be hand calculations.
Once you choose the method that makes sense for your case, you need to make sure that it outputs data in a format that's similar enough to the original output file the test system can compare the two. This is where the concept of a comparator comes in.
The test system relies on distinct bits of logic called comparators to do the actual comparison between a single output file and a single truth file and report the result. There are several comparators built in (and it's easy to add more) that interpret different types of files, whether they contain position/velocity data, dates, simple columns of data, or even just ones and zeros.
It helps to know which comparator you'll use before generating the output and truth files, so you can make sure the formats are compatible.
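The core idea of a numeric comparator can be sketched like this (a hypothetical Python illustration; the real comparators are MATLAB classes in <testsys>/bin/+comparator, and their exact rules may differ):

```python
def percent_diff_compare(output_rows, truth_rows, tolerance_percent):
    """Pass when every value agrees with truth within a percent tolerance.

    output_rows and truth_rows are equal-shaped tables of numbers, as
    parsed from a .report and a .truth file. Hypothetical sketch of the
    comparator idea only.
    """
    if len(output_rows) != len(truth_rows):
        return False, 'row count mismatch'
    for i, (out_row, truth_row) in enumerate(zip(output_rows, truth_rows)):
        for j, (out, truth) in enumerate(zip(out_row, truth_row)):
            scale = abs(truth) if truth != 0 else 1.0
            if 100.0 * abs(out - truth) / scale > tolerance_percent:
                return False, f'row {i}, column {j}: {out} vs {truth}'
    return True, ''

# Position values agreeing to well under 1e-4 percent pass
print(percent_diff_compare([[7000.0, 0.001]], [[7000.0001, 0.001]], 1e-4))
```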

You can view descriptions of all the built-in comparators at the Comparators reference page.

The Run Definition File

Once you've written a test and all its associated files (and you've gone through the rest of this guide), you'll need to actually run the test system to make sure everything works. The system uses a file called a Run Definition File (the example included in the system is called rundef.m) to tell it how to behave.
The run definition file is a MATLAB file that contains a structure named RunDef:

% GMAT test system run definition

RunDef.Build = 'MyTestRun';
RunDef.GmatExe = 'C:\Program Files\GMAT\2010-07-08\GMAT_2010-07-08_wx2810.exe';
RunDef.Type = 'truth';
RunDef.Reporters = {'ScreenReporter'};
RunDef.Cases = {'Target_DC_Command_MatlabFunction', 'ISS_Earth_0_0_0'};
RunDef.Categories = {};
RunDef.Folders = {};
RunDef.Requirements = {};

The fields are as follows:

  • Build: The name of this particular run (when we use this in production, this will be the GMAT build number, hence the name). This corresponds to the name of the output folder in the test system file tree (see above).

  • GmatExe: The full path to the GMAT executable you want to test.

  • Type: The type of comparisons to perform (only 'truth' is valid for now)

  • Reporters: A list of reporters to use for results output (keep this as 'ScreenReporter' to see the results in your MATLAB window).

  • Cases, Categories, Folders, Requirements: These selectors allow you to choose which tests to run, instead of running the entire system. A missing specifier (commented out) means that everything is included. An empty specifier ({}) means that nothing is included. Any given test will be run if it matches any of the specifiers, which means to run a single test with Cases, the rest must be empty and not commented out. The example above will only run 2 cases.

    • Cases: A list of case names to run (with no file extension)

    • Categories: A list of categories to run (from the .tc file's Categories field)

    • Folders: A list of folders to run tests from (e.g. 'Commands\FRC-1_Optimize')

    • Requirements: A list of requirements to run (e.g. 'FRR-2' or 'FRC-1.2'). These must match exactly for now (specifying 'FRR-2' won't run tests tagged only with 'FRR-2.1')

Comparators

Comparators are classes used to compare GMAT output files against truth data. The files compared are specified in the "File" and "Truth" fields of a .tc file. For example, the .tc file below compares GMATEphem.report against TruthEphem.truth using the PVComparator (position-velocity) with a tolerance of 1e-7.

OutputFiles: [
   {
       File: GMATEphem.report,
       Truth: TruthEphem.truth,
       Comparator: PVComparator,
       Tolerances: [1e-7]
   }
]

The test system contains a growing list of built-in comparators (currently around 40), located in the bin\+comparator folder of the GMAT test system. The test system allows testers to write new custom comparators by deriving from the base class, comparator.Comparator (in MATLAB syntax, this means the Comparator base class (uppercase C) is in the comparator package (lowercase c)). The only required method on a Comparator is the Compare() method, with prototype

result = Compare(outputFile, truthFile, logFile, tolerance)

The return must be a Result object, which contains data on the pass or failure of the test and error messages in the event of failure. For more information on the Result object, the class is found in the bin\@Result folder in the test system repository.
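The shape of a custom comparator can be illustrated in Python (a hypothetical sketch only; a real comparator is a MATLAB class deriving from comparator.Comparator, and this comparator, its Result stand-in, and the demo files are invented for illustration):

```python
import os
import tempfile

class Result:
    """Minimal stand-in for the test system's Result object (sketch)."""
    def __init__(self, passed, message=''):
        self.passed = passed
        self.message = message

class LineCountComparator:
    """Hypothetical comparator: pass when the output and truth files
    contain the same number of lines. Only the Compare() signature
    mirrors the interface described above."""
    def Compare(self, outputFile, truthFile, logFile, tolerance):
        with open(outputFile) as f:
            n_out = sum(1 for _ in f)
        with open(truthFile) as f:
            n_truth = sum(1 for _ in f)
        if n_out == n_truth:
            return Result(True)
        return Result(False, f'line counts differ: {n_out} vs {n_truth}')

# Demo on two throwaway files
out = tempfile.NamedTemporaryFile('w', suffix='.report', delete=False)
out.write('1.0 2.0\n3.0 4.0\n'); out.close()
truth = tempfile.NamedTemporaryFile('w', suffix='.truth', delete=False)
truth.write('1.0 2.0\n3.0 4.0\n'); truth.close()
result = LineCountComparator().Compare(out.name, truth.name, None, [])
print(result.passed)  # True
os.unlink(out.name); os.unlink(truth.name)
```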

Running the Test System

Once all these files are written and in their proper places, it's time to actually run the system. This part is easy compared to everything else.
If you're testing a single case you've just written, you'll probably want to put its name in the RunDef.Cases field in the run definition file, and make sure the Categories, Folders, and Requirements fields are empty and uncommented.
Then, to run the system, change to the test/bin directory in MATLAB and run the gmattest.m script:

>> setup()
>> gmattest('.\rundef.m')

The results of the test will be printed to the MATLAB window.

If you have any problems with running the system, see Joel (S127).

Test Utilities

tcfile.py

tcfile.py is a utility script to edit script test system test cases (.tc files).

Requirements

tcfile.py is written in Python 3. If you have Python 3.x installed, you're set. There are no other dependencies.

Usage

tcfile.py is located in <Jazz>/trunk/test/script/bin/util. You need to either add that path to your system's PATH variable, or switch to that directory before using the script.
On Windows, execute the script by running "python tcfile.py". On any other platform, you can simply run "tcfile.py".
Running "python tcfile.py -h" will display the following help summary:

> python tcfile.py -h
usage: tcfile.py [-h] [--add-bugs ADD_BUGS] [--add-categories ADD_CATEGORIES]
                 [-d] [--dry-run] [--verbose]
                 [testcase [testcase ...]]

Manipulate test case metadata.

positional arguments:
  testcase              white-space-separated list of input names

optional arguments:
  -h, --help            show this help message and exit
  --add-bugs ADD_BUGS   add specified bug IDs to test case (comma-separated
                        list)
  --add-categories ADD_CATEGORIES
                        add specified category names to test case (comma-
                        separated list)
  -d, --debug           turn on debug messages
  --dry-run             do everything except actually writing changes to the
                        files
  --verbose             turn on debug messages (same as --debug)

Examples

This command will search for the Events_Eclipse_Heo1 test case and tag it with the libEventLocator category.

> python tcfile.py --add-categories=libEventLocator Events_Eclipse_Heo1

This command will tag the specific file C:\TestSys\input\Resources\FRR-12_FiniteBurn\scripts\FiniteBurn_Validation_Thruster.tc with the Validation category.

> python tcfile.py --add-categories=Validation C:\TestSys\input\Resources\FRR-12_FiniteBurn\scripts\FiniteBurn_Validation_Thruster.tc

You can also provide a list of test cases as standard input instead of as command arguments. This command (when run from a Unix shell or MSYS on Windows) will search for all cases that have "vf13ad" in the name and tag them with the libVF13Optimizer category.

$ find ../../input -iname '*vf13ad*.tc' | python tcfile.py --add-categories=libVF13Optimizer

Test Coverage Tool

The script test system contains utilities to automatically generate a Requirements to Test Traceability Matrix (RTTM).

Requirements

This test coverage tool's main interface is the gmattest() function located in the script test system's bin directory. You will need the script test system installed, as well as the GMAT requirements Excel spreadsheet.

Usage

To generate an RTTM, configure a MATLAB script as shown in the Examples section below and run it from the bin directory in the test system. The input arguments supported by the test coverage tool are:

Required Arguments
     outputFileName        The path and name of the .xlsx file for writing the RTTM data. Must have an .xlsx file extension.
     requirementsList      The path and name of the .xlsx file containing requirements and manually tracked test cases.

Examples

The following MATLAB script will read the GMAT Requirements.xlsx file located in C:\myFiles\ and write a file containing requirements-to-test traceability named RTTM.xlsx in C:\myFiles\.

outputFileName   = 'C:\myFiles\RTTM.xlsx';
requirementsList = 'C:\myFiles\GMAT Requirements.xlsx';
gmattest('computemetrics',requirementsList,outputFileName);

Test-Writing Checklist

If you are writing tests to be run in the script-based test system, see the following checklist:

  • Copy the contents of the test system's gmatdata folder into your GMAT data folder. If you don't do this, your truth data may not be correct (especially for scripts that use EOP data).

Developing the test system

All test system development should target the "current-2" version of MATLAB.

Configuring GMAT Ancillary Data Used During Testing

GMAT uses many ancillary data files during test execution, such as leap second files, EOP files, SPICE kernels, and space weather files, to name just a few. Data files must be put in the correct place in the test repository so that they are appropriately used in the nightly build. Before the nightly test system executes, it copies the version-controlled ancillary files into the GMAT folder structure.

The test system has a copy of the directory structure the GMAT application employs, located here: \trunk\test\script\gmatdata. If your ancillary files naturally fit in the GMAT directory structure, they should be put in the appropriate folder in that directory, and the nightly test system will copy them into the build being tested before running the nightly tests. For example, if you need a vehicle SPICE kernel to run a nightly test, it should be put here: \trunk\test\script\gmatdata\vehicle\ephem\spk.

When running tests locally, you can ensure your local version of GMAT will use the version-controlled configuration data by running the preparegmat() function, passing in a string containing the full path to your GMAT installation (i.e., the directory containing bin, data, docs, etc.).
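Conceptually, this preparation step is a merge-copy of the version-controlled data into the install, which can be sketched in Python (illustrative only; the real preparegmat is a MATLAB function, and the assumption here is that gmatdata mirrors the layout of GMAT's data folder):

```python
import shutil
import tempfile
from pathlib import Path

def prepare_gmat(gmat_root, gmatdata):
    """Merge-copy version-controlled test data into a GMAT install.

    Conceptual sketch of preparegmat() only; gmat_root is the folder
    containing bin, data, docs, and gmatdata is assumed to mirror the
    layout of GMAT's data folder.
    """
    shutil.copytree(gmatdata, Path(gmat_root) / 'data', dirs_exist_ok=True)

# Demo with throwaway folders standing in for the repo and the install
gmatdata = Path(tempfile.mkdtemp())
(gmatdata / 'vehicle' / 'ephem' / 'spk').mkdir(parents=True)
(gmatdata / 'vehicle' / 'ephem' / 'spk' / 'Sat.bsp').write_text('fake kernel')
gmat_root = Path(tempfile.mkdtemp())  # would contain bin, data, docs
prepare_gmat(gmat_root, gmatdata)
copied = (gmat_root / 'data' / 'vehicle' / 'ephem' / 'spk' / 'Sat.bsp').exists()
print(copied)  # True
```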

There are a few exceptions. MATLAB and Python functions required to run nightly tests should be placed in the MATLAB and Python folders under trunk\test\script\input\Functions. For example, if you have a test that calls a Python file called regexp.py, that file should be placed here: trunk\test\script\input\Functions\Python.

Automated Creation of Requirements to Test Traceability Matrix

The script test system can automatically create a Requirements to Test Traceability Matrix (RTTM). Manual test and/or GUI test traceability is entered into the GMAT requirements matrix by hand. The script test system augments that information with automated test traceability to provide a complete RTTM file. To generate the full RTTM file, download the latest version of the requirements to your disk and follow the instructions below. (The GMAT requirements matrix is located here.)

Example

This example assumes you have the GMAT requirements downloaded in a local folder called "C:\MyFiles" and that you would like the RTTM placed in that folder.  Simply change the paths to be consistent with your local configuration.

% Define the desired target file name for the RTTM
outputFileName   = 'C:\MyFiles\RTTM.xlsx';
% Define the name and location of your GMAT requirements Excel spreadsheet
requirementsList = 'C:\MyFiles\GMAT Requirements.xlsx';
% Execute the command to read all tests cases and map tests to requirements
gmattest('computemetrics',requirementsList,outputFileName);