Procedures and Best Practices for Writing GUI Tests 

This section describes procedures and best practices for writing GUI-based tests with the goals of complete GUI coverage and adequate test depth. After a brief overview, the first part of this section will discuss the best practices and procedures for working with TestComplete to write GUI tests. The following subsections will discuss the specific procedures and processes for testing the GMAT GUI. 

Overview 

Writing, and even more importantly, performing GUI tests that provide complete and repeatable coverage of the GMAT GUI is difficult. GUI testing for GMAT has the following goals: 

  • Reasonable Confidence that GMAT is user-ready
  • 100% coverage of the GUI - every widget in the GUI must be exercised at least once
  • 100% Requirements Driven Testing - every GUI-related requirement must be tested to ensure that it is fulfilled by the GUI
  • Repeatable
  • Maintainable 

At its most basic, GUI testing requires clicking on every button and widget in the GMAT GUI, entering valid, invalid, and boundary-condition input into every text widget, and assessing that the GUI responds to all inputs and displays all results correctly. Manually performing these actions would be error-prone, time-intensive, and impossible to repeat every time GMAT changes. So how do we ensure "reasonable confidence" and that every requirement and widget is tested? More importantly, how do we make the tests repeatable and maintainable?
This document, the GMAT Test Plan, addresses the first two questions. The battery of tests defined above (verification, validation, stress testing, etc.) and the Requirements-To-Test Matrix are designed to ensure we meet every GUI requirement and hit every GMAT widget. The combination of all these tests is designed to provide reasonable confidence in the GMAT GUI.
Making the tests repeatable and maintainable requires automating the GUI actions. The GMAT GUI Test Team has selected the TestComplete tool to provide this test automation.

Using TestComplete for GUI testing 

Best Practices 
  • Create and USE templates.
  • Use variables (more maintainable and more general).
  • Make top-level project tests stand-alone.
    • When trying to fix (maintain) tests, dependencies between top-level tests make it difficult to fix-run-verify.
  • Do NOT rename NameMapping items.
    • The names that TestComplete gives items are… awful… but resist changing them.
    • TestComplete is very consistent in how it names objects, which is important when separate projects create tests that may be reused.
  • Make panel projects work with more than one object when it makes sense to do so: create the tests using variables, then swap the variable values before re-executing the tests.

For example, the Spacecraft panel should be tested with DefaultSC as well as with a user-created Spacecraft; likewise, a dialog that is called from multiple places throughout the GUI should be tested from each place.
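A minimal sketch of this pattern in a TestComplete Python script unit, assuming a project variable named TestObjectName and a keyword test named SpacecraftPanelTests (both names are hypothetical):

    # Sketch only: Project and KeywordTests are TestComplete runtime
    # objects (no imports required); the variable and test names below
    # are assumptions, not the project's actual names.
    def test_panel_with_both_objects():
        for name in ["DefaultSC", "UserSC"]:
            Project.Variables.TestObjectName = name   # swap the variable
            KeywordTests.SpacecraftPanelTests.Run()   # re-execute the tests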

  • Do NOT use Object Checkpoints! These types of checkpoints are exceedingly brittle and hard to update.
  • Data-driven tests:
    • Give input, then view after Apply (e.g., is "+0" converted to "0"?) and check how the value appears in the Show Script columns.
    • Include a Log column that is printed out before each data-driven loop.
  • Use pattern matches with the Mission Show Script to test for existence.
  • Make checks case-sensitive, since GMAT is case-sensitive.
  • Avoid absolute file paths; they WILL break. Base file paths on a global variable, GMAT_PF_DIR, which defines the location of the GMAT program files directory (the parent of the data directory). You can also use the Files Store.
  • Put Edit.KeyPress/SetText calls and PropertyChecks that are executed more than once in their own routines. Precision changes, compiler changes, etc. mean that these values have to be updated, which is extremely tedious if they are scattered all over the place.
  • Checks for warnings and errors should be written to accept either dialog, since the GMAT team likes to change which dialog is used (see the sketch below).
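The sketch below illustrates the last two points as a TestComplete Python script unit. Log, Aliases, and Project are TestComplete runtime objects (no imports are required); the alias names (GMAT, dlgWarning, dlgError, btnOK) are assumptions, not the project's actual mapping:

    # Sketch only: alias and variable names are assumptions.
    def expect_warning_or_error(context):
        # GMAT sometimes raises a warning dialog and sometimes an error
        # dialog for the same input, so accept either one.
        dlg = Aliases.GMAT.WaitAliasChild("dlgWarning", 2000)
        if not dlg.Exists:
            dlg = Aliases.GMAT.WaitAliasChild("dlgError", 2000)
        if dlg.Exists:
            Log.Checkpoint("Validation dialog shown for " + context)
            dlg.btnOK.ClickButton()
        else:
            Log.Error("No warning/error dialog appeared for " + context)

    def data_file(relative_path):
        # Base all file paths on the GMAT_PF_DIR project variable rather
        # than an absolute path, so the tests survive machine moves.
        return Project.Variables.GMAT_PF_DIR + "\\data\\" + relative_path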
Recording Tips 
  • Turn off Visualizer for playback and recording
  • Organize Projects using directories
  • Record your tests naturally, then immediately fix them up.
  • Change items to variables where it makes sense (use project-level variables, not local variables; they are easier to reuse, find, and maintain).
  • Break recordings up into smaller, reusable tests.
  • Always record using named objects, not screen locations (e.g., right-click "Show Script", not click screen location 23, 45).
  • Change the default options so that recording produces KeyPress, not SetText (more user-like).
  • If there are multiple paths to do something, there must be one test instance for each path. Corollary: use the most maintainable path when the purpose of the test is to test something else. For example, to open a panel you can either double-click or right-click and choose "Open". Double-click uses a screen location (easier to break), so test it once; all operations that test the panel itself should use right-click "Open".
  • As much as possible, put logic in your tests to handle errors that would otherwise "crash" the test. When TestComplete goes off the beaten path, it tries to close unexpected dialogs using a default Close operation. However, that may not get GMAT back into a state that TestComplete can work with, leading to a cascading list of test errors that is not useful. For example (see the sketch below):
    • If the dlgSaveConfirmClose object exists, log an error and click "No".
  • Avoid Region Checks as much as possible 
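For example, the save-confirmation check above might look like this in a TestComplete Python unit (dlgSaveConfirmClose and btnNo are assumed NameMapping aliases):

    # Sketch only: alias names are assumptions.
    def dismiss_unexpected_save_dialog():
        dlg = Aliases.GMAT.WaitAliasChild("dlgSaveConfirmClose", 1000)
        if dlg.Exists:
            # Log the failure, then answer "No" so the run can continue
            # instead of cascading into unrelated errors.
            Log.Error("Unexpected save-confirmation dialog; clicking No")
            dlg.btnNo.ClickButton()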
Things to Avoid 
  • Watch out for the ComboBox ClickItem bug! You need to insert a statement to tab into the ComboBox before the ClickItem statement; otherwise TestComplete can get stuck on playback (see the sketch below).
  • Tests that use Windows OS dialogs, like the Open and Save dialogs, can break when moving between Windows versions. Use the recommended OpenFile method instead.
  • NameMapping is your friend. Fix most tests by fixing the name map; e.g., tests get broken when a panel is retitled from "New Object" to "New Asteroid". Don't map a new object; fix the mapping for the old one.
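A minimal sketch of the ComboBox workaround (TestComplete Python unit; pnlSpacecraft and cmbCentralBody are assumed aliases):

    # Sketch only: tab into the combo box before ClickItem to avoid the
    # playback hang described above.
    def select_central_body(value):
        panel = Aliases.GMAT.pnlSpacecraft
        panel.Keys("[Tab]")              # move focus into the combo box
        panel.cmbCentralBody.ClickItem(value)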
Project Templates 
  • Set up templates to be easy to use. ("By using Project Variables and panel-specific Keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels and with fewer errors.")
  • Use keyword tests DIRECTLY (via "add existing item…"); cloning the template should clone only the tests that need to be changed. (Not done yet.)

Procedures for Writing GUI Tests 

This section details the procedures and practices for writing GUI Tests using TestComplete. 

Procedures for Writing GUI Verification Tests 

For the purposes of verification testing, the GMAT GUI has been divided into four logical groupings: 

  1. Resource Tree/Resource Panels (e.g., Spacecraft panel, GroundStation panel, etc.)
  2. Mission Tree/Command Panels (e.g., Propagate panel, Maneuver panel, etc.)
  3. Script Editor
  4. Infrastructure (e.g., menus, toolbar, welcome page, about box, etc.) 


The Script Editor and the Infrastructure will each be tested using one TestComplete project. These projects will be responsible for testing all widgets and requirements for their respective functions, including side effects, such as the fact that modifying a script unsynchronizes the GUI, and vice versa.
The Resource Tree/Resource Panels and the Mission Tree/Command Panels make extensive use of TestComplete project templates to ensure full testing of a specific panel/object. The template project automatically provides the logic for almost 40 different tests plus over 30 utility helper tests (up to approximately 80% of the code), with only a small, nicely partitioned amount of input from you (the "TBD" tests). By using Project Variables and panel-specific Keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels and with fewer errors. 

Procedures to Write GUI Tests for a Resource Panel 
  1. Collect requirements for the resource
  2. Collect developer notes about resource idiosyncrasies
  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
  4. Create Test Resource Project per instructions in GMAT Panel Test Template Instructions
    1. Clone the GMAT Panel Test Template Project and add it to your test suite
    2. Define the Project Variables for your copy of the project
    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput); a sketch of the data-driven loop follows this procedure
    5. Record any panel-specific tests (e.g., if the panel has dependencies on other objects created within GMAT)
    6. Run the tests and validate the results
  5. Mark RTTM line items as complete so that they will be run in the automated tests 
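A minimal sketch of the data-driven loop referenced in step 4.4, as a TestComplete Python unit. DDT, Log, Project, and aqString are TestComplete runtime objects; the sheet name (ValidInput), the column names (Log, Field, Input, Expected), the path under GMAT_PF_DIR, and the fill_field/read_field helpers are all assumptions:

    # Sketch only: sheet/column names and helpers are assumptions.
    def run_valid_input_cases(fill_field, read_field):
        path = Project.Variables.GMAT_PF_DIR + "\\data\\InputTests.xlsx"
        driver = DDT.ExcelDriver(path, "ValidInput")
        while not driver.EOF():
            Log.Message(driver.Value["Log"])  # print the Log column first
            fill_field(driver.Value["Field"], driver.Value["Input"])
            # Case-sensitive comparison, because GMAT is case sensitive.
            if aqString.Compare(read_field(driver.Value["Field"]),
                                driver.Value["Expected"], True) != 0:
                Log.Error("Mismatch for field " + driver.Value["Field"])
            driver.Next()
        DDT.CloseDriver(driver.Name)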
Procedures to Write GUI Tests for a Command Panel 
  1. Collect requirements for the command
  2. Collect developer notes about command panel idiosyncrasies
  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
  4. Create Test Command Project per instructions in GMAT Command Test Template Instructions
    1. Clone the GMAT Command Test Template Project and add it to your test suite
    2. Define the Project Variables for your copy of the project
    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput)
    5. Record any panel-specific tests (e.g., if the panel has dependencies on other objects created within GMAT)
    6. Run the tests and validate the results
  5. Mark RTTM line items as complete so that they will be run in the automated tests 
Procedures for Writing GUI System Validation Tests 

System Validation Tests attempt to utilize every feature the way a user would when creating a mission. Each test focuses on one particular feature but incorporates it into a larger project to ensure the feature interacts correctly with everything in GMAT. The "writing" for these tests is done in TestComplete with the assistance of a pre-defined mission given to the testers by engineers. 

  1. Collect requirements for the resource or command
  2. Collect developer notes about resource or command idiosyncrasies
  3. Collect a mission sequence from engineers
  4. Collect truth data from engineers
  5. Use TestComplete to record the creation of the mission's Resource and Mission Trees. 

Procedures and Best Practices for Writing Script Tests 

This section describes procedures and best practices for writing script-based tests with the goals of complete system coverage and adequate test depth. The steps are outlined in the overview section below, and subsequent sections discuss each step in the testing process in detail. Requirements in GMAT are organized into logical groups, each given a unique high-level identifier (FRR-1 Spacecraft Orbit State, for example). The sections below describe the process used to test a logical requirements group. 

Overview

The following steps will be used to develop new script-based tests. The steps are applicable to new features as well as existing features with test gaps. Each step is discussed in more detail in the sections below. 

  1. Perform final review of requirements and update as necessary
  2. Map existing tests to requirements
  3. Plan new tests to cover test gaps
  4. Review plan for new tests
  5. Write files for new tests
  6. Run tests/Debug tests
  7. Check bugs into Bugzilla
  8. Check all new test files into the Jazz repository
  9. Inspect the nightly coverage report to verify tests have been assimilated into DBT
  10. Update user documentation 
Step 1: Inspecting Requirements 

The purpose of the requirements inspection phase is to understand the requirements and perform a final bidirectional comparison of feature implementation and requirements. This is a visual inspection process to ensure the feature is ready for testing. In this step testers shall: 

  1. Verify that all features implemented in GMAT are represented by a requirement in the SRS
  2. Verify all requirements in the SRS have appropriate features implemented 
Step 2: Mapping existing tests to requirements 

After inspecting requirements and addressing issues, testers shall map existing tests to the requirements by updating the TC files for the requirements group. TC files are text files that accompany a script test file and contain metadata such as the test category and the requirements covered by the test. In this step of the testing process, testers shall: 

  1. Add/review requirement IDs in TC files
    1. Include existing tests directly related to the requirement
    2. Include existing tests in other feature areas that may be applicable to the requirement
  2. Mark test categories as appropriate (Categories: Smoke, System, Functional, Numeric, End-To-End, InputValidation, Modes: Command, Function ...) 
Step 3: Writing summaries for new tests 

The first activity in this step is to analyze the coverage provided by the existing tests identified in Step 2. Once the test gaps are identified, the testers shall write a brief summary of each new test to be written to complete the coverage of the requirements group. Test summaries shall be added to the test spreadsheet (see the location below) and categorized by the specified requirement. Once the tests are written, the new test summaries shall be included in the header of each script test file, and the requirements shall be added to the TC files.
Location of the test spreadsheet: https://spreadsheets.google.com/spreadsheet/ccc?key=tTetBAZYg-snaPUSM3r6dug&authkey=CJ7or-EJ#gid=1 

Step 4: Reviewing summaries for new tests 

The purpose of Step 4 is to ensure that tests planned to cover a given requirements group are complete and will adequately verify that the system correctly meets the requirements. During this phase of testing, a GMAT team member not supporting the tests for this particular requirements group will review the test summaries written in Step 3 and identify additional tests that are required. 

Step 5: Writing files for new tests 

In Step 5, testers shall write script files, TC files, and truth files for all tests identified in Step 4. These tests shall provide complete requirements coverage for all test categories indicated as necessary in the requirements spreadsheet. Note that not all requirements require all test categories. For example, features that are inherently graphical do not require certain types of tests, and features that are inherently commands do not require special command-mode testing. The required mapping between requirements and test categories is contained in the SRS.
The testers shall: 

  1. Develop a preliminary naming convention for new test scripts
  2. Write the script files
  3. Write the truth files
  4. Write the .tc files


All script files shall contain a comment block header with the following information: 

  • Author name
  • Brief summary of the test
  • Source of truth data 
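A minimal illustration of such a header, using GMAT's script comment syntax (all field values below are hypothetical; the exact wording is up to the tester):

    % Author:  J. Tester
    % Summary: Verifies Spacecraft orbit state conversion from Keplerian
    %          to Cartesian for a generic elliptic orbit.
    % Truth:   Truth data source goes here (hypothetical; record the
    %          actual source, e.g., an independent tool)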


All TC files shall contain the following information: 

  • Test categories covered by the test
  • Test requirements covered by the test
  • Bugs found (this is added in Step 7)
  • GMAT output file names
  • Truth file names
  • Comparator for each file
  • Tolerances for each comparator 
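As a purely hypothetical illustration (the actual TC file syntax is defined by the GMAT test system; every field name and value below is an assumption), a TC file carrying this metadata might look like:

    Categories: Functional, InputValidation
    Requirements: FRR-1.1, FRR-1.3
    Bugs: 1234
    OutputFiles: SC_OrbitState.report
    TruthFiles: SC_OrbitState.truth
    Comparators: diff
    Tolerances: 1e-10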


The guidelines below are a set of best practices for writing script-based tests: 

  • Do not include any unnecessary objects in the test. For example, if the test does not require a 3D graphics plot, then do not include one in the test script. 
Step 6: Running tests/Debugging Tests 

In Step 6, testers shall place all tests written in Step 5 in their local test system repository, execute those tests, and address any issues found in the script, TC, and truth files. 

Step 7: Checking in bugs 

In Step 7, testers shall follow the procedures and best practices described in this document to commit bugs to the project's Bugzilla database. 

Step 8: Checking test files into the Jazz repository 

In Step 8, final validation of all tests is performed. In the event that bugs were identified in Step 7, testers shall update the relevant TC files with the bug numbers. After the files are updated, they are checked into the test repository. 

Step 9: Inspect nightly coverage reports 

The final step in the testing process is to verify that the new tests are executing correctly in the DBT system. The testers shall verify that bugs identified during the test process are marked as failing in the automated DBT report. 

Step 10: Update User Documentation 

Testers gain unique insight into a feature during the test process. After testing is complete, any information that testers deem useful to users shall be added to the appropriate reference section in the GMAT User Guide. 

Procedures and Best Practices for Checking in Bugs 

All issues discovered during a test system run are reported in Bugzilla, GMAT's issue-tracking system, located at http://pows003.gsfc.nasa.gov/bugzilla/. 

Procedure 

The following procedure should be followed to submit an issue: 

  1. Navigate to Bugzilla at the address above.
  2. Log in, if necessary.
  3. Click the "Enter a new bug report" link.
  4. Choose GMAT as the product.
  5. Select the most appropriate item from the Component list. If unsure, select "1 Uncategorized".
  6. Select your system configuration using the Platform and OS fields.
  7. Select appropriate values for Severity and Priority. See Appendix A for guidelines.
  8. Fill out the Summary field with a one-line summary of the issue.
  9. Fill out the Description field with a full description of the issue (see Best Practices below).
  10. Add an attachment, if possible (e.g., a script file illustrating the issue).
  11. Click "Commit". 
Best Practices

The following best practices should be followed when submitting an issue to the Bugzilla system: 

  1. Submit an issue as soon as possible after you discover it, even if no other details are known. It is better to have an incomplete issue report in the system than no report at all.
  2. Try to duplicate the bug manually (outside of the test system) by either using the GUI or loading a script.
  3. For a script-related bug, write a script that contains the minimum number of lines necessary to duplicate the bug.
  4. In the Description field, include the following items:
    1. The name of the test
    2. The steps you followed to trigger the bug
    3. The text of any error messages that are generated
    4. The build date of the GMAT version that contains the bug
  5. In the Attachments section, attach the following items (if appropriate):
    1. The script that duplicates the bug
    2. GmatLog.txt if the bug relates to an error, warning, or crash
    3. The output file that contains erroneous data
    4. A sample file that illustrates correct data
    5. A screenshot that illustrates a graphical bug
  6. If you want feedback from another team member, include their email address in the CC: field. 
Test Peer Reviews
Test Reporting
Test Evaluation/Analysis

Developers review and analyze nightly regression reports and fix regressions. Testers address flawed tests. Flawed tests are reported in Bugzilla under the category "5 Test System".

Test System Maintenance  

Appendix A: Bug Priority and Severity Values 

The priority of a bug is based on whether or not the bug must be fixed by the next release. Higher-priority bugs, such as P1, are fixed before lower-priority bugs, such as P3. After a release, all bugs are reviewed, and what was considered a P2 or P3 bug in the last release may become a P1 bug for the next release; the fact that a bug is originally entered as a P3 does not mean it will remain there indefinitely. Bug severity is a measure of the impact of the defect on system behavior: crashes and loss of data are of critical severity, while a slight performance issue is of minor severity. Roughly speaking, the priority of a bug is determined by its severity and the likelihood of occurrence or number of users affected. Defects that have high severity but a low likelihood of occurring are not necessarily high priority. Classification has some degree of subjectivity, and what may be acceptable to one person may not be acceptable to another.

Priority definitions:

P1

Must Fix by Next Major Release

Examples:

  • Crashes with high frequency of occurrence, or a cumbersome or non-existent work-around.
  • Unmet requirement.
  • Critical numeric error.
  • Unacceptably low quality in highly visible component.
  • Critical loss of data.
  • Failed input validation with critical consequence.

P2

Fix if Time Permits

Examples:

  • Hard to duplicate crashes.
  • Numeric issues regarding precision (agreement with truth is acceptable but would like it to be better).
  • Numeric issues when near a singularity.
  • Unclear/non-standard error messages.

P3

Won't Fix in Next Release

Examples:

  • Moderate Performance Issues.
  • Numeric issues regarding precision (agreement with truth is acceptable but would like it to be better).
  • Unclear/non-standard error messages.
  • Enhancement requests.
  • Interfaces to old versions of third-party software.

P4

May Never Fix

Example:

  • Subjective issues that are hard to define as a bug or feature/design.

 

Severity definitions:

Critical

Provides Incorrect Answer or Renders System Unusable

Examples:

  • Crashes that affect all users, have no workaround, or have only a cumbersome workaround.
  • Unmet requirement.
  • Numeric error.
  • Loss of data.

Major

Major Impact on System Behavior

Examples:

  • Crashes that affect a few users (for example, only people using Matlab R14b).
  • Unclear error message for common user error.


There is always some subjectivity in the above classifications. Numerical errors related to precision issues, such as a difference at the 10th significant figure, are handled on a case-by-case basis. Examples of loss of data include failure to report requested information or to save input data to a script file. Examples of unacceptable quality issues include prominent misspellings and plots that cannot be interpreted.

Appendix B: GUI System Test Procedures

(GMAT GUI Test Procedures Notes: C:\Users\Public\Documents\GMAT Basic Procedures.rtf)


Overview

The GMAT GUI System Tests consist of 185 tests: 45 System Tests, 29 SaveMission Tests, 26 Plot Stress Regression Tests, 22 Toolbar System Tests, 50 Menubar System Tests, 5 Mouse Tests, and 8 Keyboard Tests.

The 45 System Tests verify that when GMAT is initialized from the GUI, the results match those produced by running GMAT from scripts. The 45 System Tests utilize TestComplete to duplicate 45 GMAT mission scripts: TestComplete initializes the input parameters from the GUI, instead of GMAT reading in script files to initialize them. Running GMAT from the GUI as set up by TestComplete should yield the same results as running GMAT from the script files; if not, a bug exists in the GUI. The results of running GMAT from a script file are referred to as the truth data, since the script-reading method is well tested against other sources. The Stores directory contains the truth data and will have to be updated whenever a change is made to the GMAT GUI.

The 29 SaveMission Tests verify that the SaveMission command is operating correctly. The SaveMission command is defined and executed to create a SaveMission script file when the 45 System Tests are run. Once the SaveMission scripts are created, TestComplete is used to execute GMAT with the SaveMission scripts as input and then compare the results to the truth data. The results do not match exactly in the cases where certain Spacecraft state types are converted to other state types.

The 26 Plot Stress Regression Tests verify that OrbitView, GroundTrackPlot, and XYPlot can plot large amounts of data. The tests are run by TestComplete, which starts GMAT, loads the scripts that contain the plot tests, executes the scripts, and finally compares the plots to the plots stored in the TestComplete Stores directory. 

The 22 Toolbar System Tests verify that the features on the second line of the GMAT window operate correctly. Currently the tests verify the New Script, Open, Save, New Mission, Copy/Paste, Cut/Paste, Run, Pause, Stop, Screen Shot, Close All, Close, About, Link to Website, NASA Agreement, On Line Help Link, Run/Stop Animation, Speed Up Animation, Slow Down Animation, and Script Sync Status features. Tests T4 (New Mission), T11 (Screen Shot), T18 (Run/Stop Animation), T20 (Speed Up Animation), and T21 (Slow Down Animation) must be run manually from TestComplete, as they require human observation of the results.

The 50 Menubar System Tests verify that the features on the top line of the GMAT window operate correctly. The top line consists of File, Edit, Window, and Help. File consists of New, Open, Open Recent, Save, Save As, and Exit, which are tested by Menubar tests M1 to M9. Edit consists of Undo, Redo, Cut, Copy, Paste, Comment, Uncomment, Select All, Find and Replace, Find Next, Show Line Numbers, Goto, Indent More, and Indent Less, which are tested by Menubar tests M10 to M23. Window consists of Close All, Close, Cascade, Tile Horizontally, Tile Vertically, Arrange Icons, Next, Previous, and Window Highlight, which are tested by Menubar tests M24 to M28. Help consists of the Welcome Page, Contents, OnLine Help, Tutorials, Forum, Report an Issue, Provide Feedback, and About GMAT, which are tested by Menubar tests M29 to M50. (Note: M46 and M47, Report an Issue and Provide Feedback, require Outlook, so Joel tests these manually upon request.)

The 5 Mouse Tests (M51 to M55) verify Undo/Redo, Cut/Paste, Copy/Paste, Select Block and Delete and Select All Text.

The 8 Keyboard Tests (M56 to M61, M64 and M65) verify Undo/Redo, Cut/Paste, Copy/Paste, Comment/Uncomment, Select All, Find/Find Next, Goto, Indent More and Indent Less.



Obtain GMAT executable

1) Bring up two copies of Windows Explorer

2) In the first copy, set the path to \\mesa-file\595\GMAT\Builds\windows\VS2010_build, select the build name, go to bin, and copy the files (GMAT.exe, libCInterface.dll, libGmatBase.dll)

3) In the other copy of Windows Explorer, set the path to C:\Users\Public\Documents\GMAT\bin and paste the 3 files

4) On the mesa-file share, go back, copy the \plugins files, and paste them as well

 

Run Tests Automatically from the Command Prompt

Initially, copy runall.bat, runToolBar.bat, runMenuBar.bat, and runall-04232013.bat from C:\Users\Public\Documents\GMAT\Auto to a local directory, then start from step 1.
 

1) Start a Command Prompt by left-clicking the Start icon (bottom left of the monitor)

2) cd Documents\Auto to change directory

3) dir to list the contents of the directory

4) runall.bat > runallResults_0521.txt; make sure WordPad and Windows Explorer are set to full screen and closed for the Command Summary test, and that GMAT and TestComplete are also closed. The tests will run for 2 hours 43 minutes when GMAT and TestComplete work properly

5) The master .bat files are called runall.bat, runToolBar.bat, and runMenuBar.bat. The remaining Menubar tests must be run manually from TestComplete

6) The results are located in runallResults_0521.txt in this example

7) The tests will run unattended and the monitor can be turned off. Upon inactivity the machine will log off


Run Tests Semi-Automatically from TestComplete

1) Bring up TestComplete and select GUI System Tests, GUI2 System Tests, My_Control_Flow, ToolBar System Test, or MenuBar System Test. All projects are stored in Jazz at C:\Users\Public\Documents\JAZZ\trunk\test\gui

2) Click on GUI System Tests, GUI2 System Test, or My_Control_Flow to use the semi-automatic capability

3) Double-click on End to End Tests and observe that Auto Run 1 to 7 show up in the center panel; expand each and note that the boxes are checked

4) Right-click on Auto Run X and select Run Focused Item to run the tests in each Auto Run section, where X is currently 1 to 7



Run Tests Manually from TestComplete

1) Bring up TestComplete and select GUI System Tests, GUI2 System Tests, My_Control_Flow, Toolbar System Test, or Menubar System Test. All projects are stored in Jazz at C:\Users\Public\Documents\JAZZ\trunk\test\gui

2) Click on GUI System Tests, GUI2 System Test, My_Control_Flow, Toolbar System Test, or Menubar System Test. These directories contain all of the test code, referred to as Keyword Tests by TestComplete

3) For example, click on Menubar System Test to open the Menubar test directory. A list of approximately 65 tests will appear on the left side of the screen under Keyword Tests and Menubar_Tests. Double-click on M3_File_New_Script, for example, and observe that the test code appears on the right side of the TestComplete panel. Click Run Test to execute the test; Run Test is located in the center, directly above the test code

4) Observe that the test log appears in place of the test code. In the log, blue circles mean the line of test code was executed, green squares mean a checkpoint executed and passed, and anything in red represents an issue. The directory of Keyword Test log entries is maintained on the left side of the panel under Project Suite Logs, Menubar, and System Test Logs. The logs are not maintained in Jazz and should be deleted when not needed by highlighting the log entries, then clicking and selecting Remove and then Delete

Notes
a) JIRA is located at http://li64-187.members.linode.com:8080/secure/Dashboard.jspa
b) Development resources are at http://gmat.ed-pages.com/wiki/Development+Resources
c) The project dashboard is at http://gmat.ed-pages.com/wiki/Project+Dashboard
d) Meeting minutes are at http://li394-117.members.linode.com:8090/display/GW/Meeting+Minutes
e) gmatcentral.org is at http://li394-117.members.linode.com:8090/display/GW/GMAT+Wiki+Home

 
