dodecahedron wrote:i disagree. it would not eliminate an extra variable. the burner is a reader like any other reader, and there's no more reason to test the burned CD on it than on any other reader.
Well, it doesn't eliminate the reader in the sense of doing away with it altogether. I wasn't trying to say that. Obviously some reader is required to test the end-to-end quality. But if the disc is tested on the same drive that wrote it, there are no questions about whether the writer or the reader is to blame when the results aren't good, because they are one and the same.
dodecahedron wrote:i remember quite a few posts about CDs that test fine on the burner that burned them but badly on others. so if you test your CD on the burner, what does that prove? only that it's readable on it.
That's the unfortunate flip side, and it is certainly a valid point. But doing the read test with the same drive that wrote the disc does at least provide a single, well-characterized result, whether or not that result can be extrapolated to other readers. Using a reader other than the writer, and not reporting which reader was used, leaves an unknown variable in the results that dilutes their usefulness somewhat.
The way I look at it, the truly important aspect of these tests is the end-to-end quality of the data transfer. That transfer requires three main components: a writer, a reader, and the medium on which the data is transferred. To me those are probably the three most important variables in the equation. Next are the write and read speeds. Then come presumably tertiary variables such as the ambient temperature and humidity at the time of writing and reading, etc.
Neglecting the tertiary variables, in my opinion the ideal database of media compatibility would include a multi-dimensional matrix that had entries for every combination of writer, write speed, media, reader and read speed. Obviously this is impractical.
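Just to make the combinatorics concrete, here is a minimal Python sketch of such a matrix. All the drive, media and speed entries below are invented placeholders, not real survey data; the point is only how quickly the cell count multiplies:

[code]
from itertools import product

# Invented placeholder entries, purely for illustration.
writers = ["Writer A", "Writer B"]
write_speeds = [16, 24, 32, 40, 48]   # in "x" rating
media = ["Media brand 1", "Media brand 2"]
readers = ["Reader A", "Reader B"]
read_speeds = [8, 32, 52]

# One cell per combination; each cell would hold a measured error count.
matrix = {combo: None for combo in product(writers, write_speeds,
                                           media, readers, read_speeds)}

print(len(matrix))  # 2 * 5 * 2 * 2 * 3 = 120 cells
[/code]

Even this toy example, with only two writers, two media types and two readers, already needs 120 cells, and every cell is a separate burn-and-read test.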
The next most ideal setup, in my view, would be to use a single "gold standard" reader and read speed for all combinations of writer, write speed and media. This also is impractical since we are relying on end users to contribute what they can, and we don't all own the same reader.
In this thread we are making it manageable by omitting the reader and read speed altogether, concentrating instead on the writer and media, and including only a single data point for write speed: the maximum speed at which the media could be written without errors (although, again, we can't actually know this without reading).
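Put another way, each report in this thread reduces to a record something like the following sketch. The field names are mine, purely for illustration:

[code]
from dataclasses import dataclass

@dataclass
class BurnReport:
    writer: str                  # model of the burner
    media: str                   # brand/type of the disc
    max_clean_write_speed: int   # highest "x" speed written without errors

# Hypothetical example entry:
report = BurnReport(writer="Writer A", media="Media brand 1",
                    max_clean_write_speed=40)
[/code]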
Given that we are omitting the reader, my thinking was that the cleanest way to omit it is to use the same device for both writing and reading wherever possible. As you noted, this, like all practical solutions, isn't completely ideal. But I think it does the best job of reducing unknown variables. A bad result is known to be the fault of the reported media/drive combination, because there isn't a separate, unknown reader (as I was forced to use due to the limitation in my writer) thrown into the mix. When you test with a different reader, you run the risk of reporting a bad writer/media combination when in fact it was a bad reader/media combination. That's what happened to me.
Maybe we should also post the reader we used while testing, and the top speed at which it read the media without errors. Or, if we have access to more than one reader, post the best results we can obtain.
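If we adopted that, the record sketched above would simply grow two optional fields, along these lines (again, the names are mine and purely illustrative):

[code]
from dataclasses import dataclass
from typing import Optional

@dataclass
class BurnReport:
    writer: str
    media: str
    max_clean_write_speed: int
    reader: Optional[str] = None                # drive used for the read test
    max_clean_read_speed: Optional[int] = None  # highest error-free read speed

# A report where the burner itself served as the reader:
report = BurnReport("Writer A", "Media brand 1", 40,
                    reader="Writer A", max_clean_read_speed=40)
[/code]

Anyone have more thoughts?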
cfitz