XMidi Programmer's Guide

This document is intended for programmers who wish to understand, and perhaps modify, the programs in the XMidi package. There are a number of "levels" of documentation to this package, and I would suggest that this one be the last resort. The XMidi "Home Page" (XMidi.html) is the first level; it describes some of the whys and wherefores. The next level is, of course, the javadoc, which can be accessed in the doc directory. The next level is the User's Guide, which can be found in docs/UsersGuide.html and contains information on how to use the program and on the XMidi file format. The next level is the source itself, which has numerous comments besides the javadoc comments. Finally, there is this document, which is meant to supplement the information contained in all the other levels.

I have broken this document up into several sections. The first, which you are now reading, is the introduction. Next is a section called "Design Thoughts", which covers the specific design decisions I made in this package. The final section concerns testing; it documents the test package I used to convince myself that the programs work properly.

Design Thoughts

In general, I tried to design an XML format which followed the MIDI file format as closely as possible.

Test Package

Testing Philosophy

In the early 1980s I had the honor of working for a fellow by the name of Jim Hicks. I learned a great deal about programming from him. One suggestion he made was "never hard-code any value more than once, if possible." I have always tried to follow this advice. He also had an interesting philosophy about testing. A test package (in Jim's view) should show how every possible branch in the code can go either way, and should produce every possible message that the program can produce.

I have always felt that this approach to testing had some drawbacks:

For these reasons, I have not taken his advice on testing. I should point out, however, that his testing philosophy usually turns up most, if not all, bugs. In this case it seems like overkill, and I also think there is a better way here.

I do try to test my code, but not always by the Jim Hicks method. One thing I try to do is make the testing fit the program. I don't have any set rules for testing; rather, I let the nature of the program suggest testing approaches that make sense for it.

In looking at the programs of the XMidi package, I noticed that the two primary programs (MX and XM) are complementary. That is, MX converts a MIDI file to an XMidi file, and XM converts an XMidi file to a MIDI file. What if I took a MIDI file, ran it through MX, then took the result and ran it through XM? Then I would have two MIDI files which should be identical if the programs are correct. This idea underlies my test package for XMidi.
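The shape of this round-trip test can be sketched in Java. The two toy functions below merely stand in for MX and XM (the real converters produce and consume XMidi XML and are far more involved); hex encoding is used only so the sketch is self-contained. The point is the property being checked: convert, convert back, and compare byte-for-byte.

```java
import java.util.Arrays;

public class RoundTripSketch {

    // Toy stand-in for MX: "convert" MIDI bytes to a text form.
    static String toText(byte[] midi) {
        StringBuilder sb = new StringBuilder();
        for (byte b : midi) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    // Toy stand-in for XM: convert the text form back to bytes.
    static byte[] toBytes(String text) {
        byte[] out = new byte[text.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(text.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    // The round-trip property: converting there and back should
    // reproduce the original file exactly.
    static boolean roundTripsCleanly(byte[] original) {
        return Arrays.equals(original, toBytes(toText(original)));
    }
}
```

If the two converters are true inverses, this property holds for any input file, which is exactly what the test package checks across a whole directory of MIDI files.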

The XMidi Test Package

At the heart of the package is testOne.cmd, which looks like this:

Call runMX -t %1 %2
Call runXM %2 %3
Call CFB %1 %3 -o%5
Call runMX -t %3 %4
Call CFT %2 %4 -o%6
Call TL %7 Mis-match -o%8

Each of these calls invokes another .cmd file in the base directory. I use the -t (test mode) option for both calls to MX; this suppresses the creation of the comment line in the output. The comment line is a good thing in general, but it is the only thing that prevents a perfect comparison of the two XML files (%2 and %4).

CFB invokes the CompareBin program, which compares two binary files, and CFT invokes the CompareText program, which compares two text files. Both programs take a -o option naming an output file, which lets me save the outputs for each run. TL invokes the ScanList program, which scans a list of files (the list is named in %7) for a string (the second argument, in this case "Mis-match"). It accepts the same -o option as CFB and CFT. Here the list contains the names of the outputs from CFB and CFT, so the final output file (%8) will show whether there were any differences between the two MIDI files or between the two XMidi files. Note that CompareBin, CompareText, and ScanList are not part of this package; their source is absent, but the class files are where they need to be for the tests to work.
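Since the source for CompareBin is not included, here is a hypothetical sketch of what its core comparison might look like; the method name and exact behavior are my guesses, not the actual program. It reports the offset of the first differing byte, or -1 if the two inputs match.

```java
public class CompareBinSketch {

    // Return the offset of the first byte at which the two arrays
    // differ, or -1 if they are identical. A length mismatch counts
    // as a difference at the first offset past the shorter input.
    static long firstDifference(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            if (a[i] != b[i]) {
                return i;
            }
        }
        return (a.length == b.length) ? -1 : n;
    }
}
```

The real CompareBin presumably reads its two files and writes its verdict to the file named by -o; that I/O plumbing is omitted here.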

I have a bunch (54) of MIDI files in test/midi. The test/dir.dir file is the result of doing a dir within KEDIT (the text editor I use) and saving the results. The test/makeDirList.rex program (in REXX) reads test/dir.dir and creates test/dirList.txt, which is nothing more than a simple list of the files in the test/midi directory. It is this last file (test/dirList.txt) which drives the makeTests.rex program.

The makeTests.rex program reads test/dirList.txt for a list of files which it expects to find in test/midi. It uses a fairly simple naming convention to build the testAll.cmd file. To explain the naming convention, suppose there is a file test/midi/blah.mid whose corresponding line in test/dirList.txt is blah.mid. Here are the assignments for testOne.cmd:

After processing all files in this manner, makeTests.rex adds a final line to testAll.cmd: "Call TL testList.txt Found -otestResults.txt". The testList.txt file (created by makeTests.rex) is a list of all the final output files for each test.
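ScanList's source is likewise absent, so the following is only a guess at its core logic: scan lines for a target string and report each hit. The "Found in line:" report prefix is an assumption based on the string the test instructions say to look for in testResults.txt.

```java
import java.util.ArrayList;
import java.util.List;

public class ScanListSketch {

    // Scan the given lines for a target string, returning one report
    // line per hit, tagged with its 1-based line number.
    static List<String> scan(List<String> lines, String target) {
        List<String> hits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).contains(target)) {
                hits.add("Found in line: " + (i + 1) + ": " + lines.get(i));
            }
        }
        return hits;
    }
}
```

In the final TL pass, the "lines" would come from each file named in testList.txt, so an empty report means no mis-matches anywhere in the run.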

Here are the steps for running the test package.

  1. Empty the test/temp directory
  2. Copy the XMidi.dtd to test/temp
  3. Run makeTests.rex
  4. Run testAll.cmd

    This can be a long-running step, depending on the speed of the machine and the number of files.

  5. Examine testResults.txt; if there are any lines containing the string "Found in line:", then there is some problem which needs to be addressed. Address it and re-run the test.

I tested version 1.4 of XMidi with 54 MIDI files from diverse sources. I found lots of bugs, but I also found things like HTML-style comments at the end of MIDI files, and files which ended in ".mid" but which started with (in the file itself) "RIFF" and then did not follow the RIFF format as I understand it. I corrected what I could, eliminated the faux-RIFF files from the test package, and continued iterating until I got a completely clean run. At that point I decided that most of the bugs were most likely out of the programs.