Objectives
The objectives are: 1) to verify the correctness of a new Java Runtime, 2) to measure
its size and performance, and 3) to verify the completeness of the design documentation.
Objectives 1 and 2 are cross-platform goals. To achieve these objectives, QA will work
closely with Development to ensure that all requirements are met.
Test Outline
The following outline tracks the project schedule
and describes the QA work that will take place concurrently in each phase.
- Phase I: Proof of Concept
- The functionality and data structures of the Runtime are defined and
reviewed at this time. The Runtime will be divided into modules. Each module's
functions and data structures should be designed and documented, and each module
should have functions made available to QA for future testing.
- In preparation for future phases, a Test Harness,
a Bytecode Verifier, and a Data
Structure Inspector are created.
- Acceptance Test Cases will be designed and used
on PowerPC.
- Phase II: Alpha Qualification
- As the JIT for PowerPC becomes available, every opcode is tested.
- Working modules should also be tested by using the functions that were
made available in Phase I.
- Standalone Runtimes for PowerPC and Win32 are built and tested at
this time.
- Along with previous test cases, Functional Test
Cases will be designed and used on PowerPC and Win32.
- In preparation for Phase III, a Benchmark
tool is created.
- Phase III: Beta Qualification
- Standalone Runtimes for Win16, SPARC, and MIPS are built and tested
at this time.
- The benchmarking process begins. Performance of the Runtime must be
equal to or better than that of other runtimes.
- Along with previous test cases, Load Test
Cases will be designed and used on the PowerPC, Win32, Win16, SPARC, and MIPS
platforms.
- Phase IV: Release Qualification
- The Runtime should be fully functional and integrated into the Client
and the Server. For integration testing, existing tests, along with other
tests to be designed, will be converted into applets and run on the Client
and Server.
- A complete regression test pass is done to make sure that nothing has been
broken.
- Verify that the Runtime meets performance and platform requirements.
Test Cases
The bulk of the compatibility tests are taken from the Java Compatibility
Kit (JCK), which is provided by JavaSoft. Other sources will also be used
and are noted below. All test cases should be able to run on the Test
Harness.
- Acceptance
- Acceptance tests will be created for Phase I. These tests determine
whether the JIT or Runtime is stable and ready for further testing. Every
new build of the JIT or Runtime will first be run against these tests.
- These should test basic functionality of the JIT or Runtime; a minimal
sketch appears after this list.
- These test cases contain a list of Java bytecodes
taken from the JCK Language Test.
- Sample test 1.
- Functional
- These should test the Runtime according to the design specification
as well as include all VM Specification tests from the JCK.
- All Java bytecodes are tested.
- Sample test 2.
- Load
- These should include boundary tests as well as illegal bytecode instructions.
- Error recovery should also be included in these tests.
- Sample test 3.
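As an illustration of the acceptance category, a minimal smoke test might look like
the following sketch. The class name, the operations exercised, and the PASS/FAIL
convention are illustrative assumptions; the real acceptance tests are drawn from
the JCK Language Test.

    // Hypothetical acceptance smoke test: exercises basic arithmetic,
    // branching, looping, and method invocation, then reports PASS or FAIL.
    public class AcceptanceSmokeTest {
        static int add(int a, int b) { return a + b; }

        public static void main(String[] args) {
            boolean pass = true;
            if (add(2, 3) != 5) pass = false;       // method invocation, iadd
            int sum = 0;
            for (int i = 0; i < 10; i++) sum += i;  // loop and branch opcodes
            if (sum != 45) pass = false;
            System.out.println(pass ? "PASS" : "FAIL");
        }
    }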
Tools
The following is a list of the QA tools to be used for the project. All tools
should be platform independent so they can be used for testing on all platforms.
Test Harness
The Test Harness will be the general tool for running the test cases. The
user will specify which test cases to run; the Harness will then load each
test, pass it to the JIT or Runtime, and display the returned results.
How the Test Harness will communicate with the JIT or the Runtime still
needs to be determined: it can pass either the Java class or the bytecode
instructions.
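As a sketch of the class-passing approach, the Harness could load each named test
class with the standard class loader and invoke its main method on the Runtime under
test. The completed/failed reporting below is an assumption for illustration, not
the final harness design; here each test is expected to print its own PASS/FAIL.

    import java.lang.reflect.Method;

    // Test Harness sketch: loads each named test class, invokes its main
    // method on the Runtime under test, and displays the result.
    public class TestHarness {
        public static void main(String[] args) {
            for (int i = 0; i < args.length; i++) {
                try {
                    Class testClass = Class.forName(args[i]);
                    Method main = testClass.getMethod("main",
                            new Class[] { String[].class });
                    main.invoke(null, new Object[] { new String[0] });
                    System.out.println(args[i] + ": completed");
                } catch (Throwable t) {
                    System.out.println(args[i] + ": failed (" + t + ")");
                }
            }
        }
    }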
Bytecode Verifier
The Bytecode Verifier should be able to compare and verify the bytecodes
used by our Runtime against the bytecodes generated by javap or even by other
runtimes. For example, the verifier should pass a class to the Runtime and
then verify that the bytecodes used match.
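One possible realization, sketched below, disassembles a class with javap -c and
collects the instruction lines so they can be diffed against whatever bytecode dump
our Runtime produces. The runtime-side dump is left as a hypothetical next step,
since its interface is not yet defined; only the javap half is shown concretely.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Vector;

    // Bytecode Verifier sketch: collects javap -c instruction lines
    // as the reference listing for a later diff.
    public class BytecodeVerifier {
        static Vector disassemble(String className) throws Exception {
            Process p = Runtime.getRuntime().exec(
                    new String[] { "javap", "-c", className });
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            Vector lines = new Vector();
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                // keep only instruction lines, e.g. "0: iconst_0"
                if (line.length() > 0 && Character.isDigit(line.charAt(0))) {
                    lines.addElement(line);
                }
            }
            p.waitFor();
            return lines;
        }

        public static void main(String[] args) throws Exception {
            Vector reference = disassemble(args[0]);
            System.out.println(reference.size() + " instructions from javap");
            // Next step (hypothetical): fetch our Runtime's bytecode dump
            // for the same class and diff it against 'reference'.
        }
    }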
Data Structure Inspector
This tool should be able to print out all the structures used in
the Translator, i.e., the control flow and data flow graphs.
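A sketch of the intended output, assuming (hypothetically) that the Translator
exposes its control flow graph as basic blocks with successor lists; BasicBlock
and its fields are stand-ins for whatever structures the Translator actually uses.

    import java.util.Vector;

    // Data Structure Inspector sketch: prints a control flow graph
    // as an adjacency list, one line per basic block.
    public class CfgInspector {
        static class BasicBlock {
            int id;
            Vector successors = new Vector();   // of BasicBlock
            BasicBlock(int id) { this.id = id; }
        }

        static void print(Vector blocks) {
            for (int i = 0; i < blocks.size(); i++) {
                BasicBlock b = (BasicBlock) blocks.elementAt(i);
                StringBuffer line = new StringBuffer("block " + b.id + " ->");
                for (int j = 0; j < b.successors.size(); j++) {
                    line.append(" " +
                            ((BasicBlock) b.successors.elementAt(j)).id);
                }
                System.out.println(line);
            }
        }

        public static void main(String[] args) {
            // Tiny example graph: 0 -> 1, 0 -> 2, 1 -> 2.
            BasicBlock b0 = new BasicBlock(0);
            BasicBlock b1 = new BasicBlock(1);
            BasicBlock b2 = new BasicBlock(2);
            b0.successors.addElement(b1);
            b0.successors.addElement(b2);
            b1.successors.addElement(b2);
            Vector blocks = new Vector();
            blocks.addElement(b0);
            blocks.addElement(b1);
            blocks.addElement(b2);
            print(blocks);
        }
    }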
Benchmark
These tools will be used to compare our Runtime with existing Runtimes, e.g.,
Microsoft's or Symantec's. They will measure whether the Runtime meets our
performance requirements.
- CaffeineMark
is an example of the kind of benchmark tool that will be used. It will
serve as a general measuring stick for comparing our Runtime's performance
with others.
- For more in-depth comparisons, Synthetic Benchmarks will be used to
test the performance of individual functions of the Runtime, e.g., method
invocation or looping (see the sketch below).
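A minimal Synthetic Benchmark along these lines might time looping and method
invocation separately with System.currentTimeMillis, as in the following sketch;
the iteration count is an arbitrary illustrative value.

    // Synthetic Benchmark sketch: times looping and method invocation
    // separately so runtimes can be compared on individual operations.
    public class SyntheticBench {
        static int callee(int x) { return x + 1; }

        public static void main(String[] args) {
            final int N = 10000000;

            long t0 = System.currentTimeMillis();
            int sum = 0;
            for (int i = 0; i < N; i++) sum += i;       // looping
            long loopMs = System.currentTimeMillis() - t0;

            t0 = System.currentTimeMillis();
            int v = 0;
            for (int i = 0; i < N; i++) v = callee(v);  // method invocation
            long callMs = System.currentTimeMillis() - t0;

            // Print the results so the work cannot be optimized away.
            System.out.println("loop: " + loopMs + " ms, invoke: " + callMs
                    + " ms (sum=" + sum + ", v=" + v + ")");
        }
    }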