spath wrote:According to these figures it seems that KProbe is roughly missing from 2 to 12% of all the sectors on a disc for its C1/C2 calculations

True, but sampling 88 to 98% of the sectors seems to be sufficient to give a reasonably accurate representation of a disc's quality. I say this because running multiple tests, where a different 2 to 12% of the sectors are missed each time, doesn't materially change the measured values.
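For anyone who wants to see that numerically, here is a minimal sketch (Python, using a purely made-up C1 distribution rather than real disc data) that repeatedly drops a random 2 to 12% of the samples and compares the resulting average to the full average:

- Code: Select all
import random

random.seed(1)
# Synthetic per-sample C1 counts -- an arbitrary made-up distribution,
# NOT real disc data, just to show the effect of skipping a few percent.
counts = [random.choice([0, 0, 0, 0, 1, 1, 2]) for _ in range(300000)]
full_avg = sum(counts) / len(counts)

for trial in range(5):
    skip = random.uniform(0.02, 0.12)      # fraction of samples missed
    kept = [c for c in counts if random.random() > skip]
    print("skipped %4.1f%%  avg C1 = %.4f  (full average %.4f)"
          % (skip * 100, sum(kept) / len(kept), full_avg))

The averages barely move from run to run, which matches what I see when re-testing real discs.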
In my view the one possible cause for concern would be the measurement of C2 errors. Missing a particular C1 error is not a big deal, since there are generally plenty of others and enough samples are taken to ensure that the average is representative of the true rates. But missing a C2 error is a different story. We don’t want any C2 errors on our discs, and expect them to be rare so that when one does show up it is a noteworthy event. Thus, the cost associated with missing a C2 error is higher than the cost of missing a C1 error.
Fortunately, this doesn't seem to be a significant issue in actual practice. I haven't run across any disc that, when repeatedly tested, sometimes shows a C2 error and sometimes does not. I have one particular disc that shows just a few C2 errors, so it would be a candidate for exhibiting this type of problem, but it shows the C2 errors consistently. Granted, it is possible that a disc may have just one or two C2 errors and that they could be missed. Therefore it would be best if KProbe guaranteed to catch all C2 errors all of the time. Lacking that guarantee, it is probably best for people with particular concerns in this area to test each disc at least twice, and also to test with a second, independent program such as CD Speed's Scan Disc or CD Quality test. Despite having said this, it still seems to me, based on my experience, that actual instances of completely missing all C2 errors on a disc would be rare.
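To put a rough number on how likely a complete miss is, suppose the skipped sectors fall more or less at random relative to where the C2 errors sit (a simplification, since the real skip pattern depends on the drive's polling). Then the chance of missing every C2 error is just the skipped fraction raised to the power of the number of C2-error locations, and a second pass squares that chance:

- Code: Select all
def p_miss_all(skip_fraction, c2_errors, passes=1):
    """Chance that every C2 error lands in a skipped region on every pass."""
    return (skip_fraction ** c2_errors) ** passes

for skip in (0.02, 0.12):
    for k in (1, 2, 5):
        print("skip %2.0f%%, %d C2 error(s): one pass %.4f, two passes %.6f"
              % (skip * 100, k, p_miss_all(skip, k), p_miss_all(skip, k, 2)))

Even at a worst-case 12% skip rate, a disc with a single C2 error has only about a 1.4% chance of slipping through two passes, which is part of why testing each disc at least twice seems like a reasonable precaution.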
By the way, WSES, CD Doctor and KProbe all exhibit this same behavior. Here is a histogram that adds to the data MediumRare posted. In this case, rather than showing distributions for different testing speeds, I am showing distributions for different testing programs at the same (48x CAV) testing speed:

- Code: Select all
          C1 Max   C1 Avg   C1 Total   Total Samples
----------------------------------------------------
WSES         8      0.232       958         4122
CD Doc       8      0.221       942         4255
KProbe       7      0.236       981         4163
CD Doctor collected the most samples overall. WSES would have done almost as well, but for some reason it skipped the first 100 seconds of the disc even though I had it set to start at 2 seconds. I've seen it do this type of thing in the past.
All three have pretty tightly grouped sample spacing, with WSES and CD Doctor being a little better than KProbe. However, the difference isn't enough to have a significant effect on the overall measurement outcome, as can be seen by comparing the maximum and average values. Keep in mind, while viewing the histogram chart, that I have zoomed in on the area around the 75-block sample spacing to show greater detail. Thus, the differences are artificially magnified.
Note that none of the sample spacings (with the exception of CD Doctor, which I will explain in a moment) are less than 75 blocks. In fact, the smallest sample spacing is 78 blocks. This offers additional evidence to support the contention that KCK's concern about under-sampling, while perhaps valid in theory, is not warranted in practice.
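For anyone who wants to build this sort of spacing histogram from their own results, here is roughly how it can be computed from a saved csv file. The column layout assumed here (LBA first, then C1) is only an illustration; check what your program actually writes before relying on it:

- Code: Select all
import csv
from collections import Counter

def analyze(path):
    # Assumed column order: LBA, C1 -- adjust to the real export format.
    lbas, c1s = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            try:
                lba, c1 = int(row[0]), int(row[1])
            except (ValueError, IndexError):
                continue                 # skip header or malformed lines
            lbas.append(lba)
            c1s.append(c1)
    spacings = [b - a for a, b in zip(lbas, lbas[1:])]
    print("samples:", len(lbas), " C1 total:", sum(c1s),
          " C1 max:", max(c1s), " C1 avg: %.3f" % (sum(c1s) / len(c1s)))
    print("smallest spacing:", min(spacings))
    print("most common spacings:", Counter(spacings).most_common(5))

Running something like this on each program's export yields the same kind of max/average/total figures shown in the table above, plus the spacing distribution.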
The exception of which I spoke regarding the CD Doctor results is that 28 samples were reported with a spacing of 0 blocks, and 28 with an average spacing of 162 blocks. It was suspicious that the same number of samples were spaced by 0 as by 162 blocks, and that 162 blocks is twice the average spacing. Further investigation revealed that these 56 incidents were the result of CD Doctor reporting two samples in a row with the same LBA 28 times. I think this is a bug or anomaly in how CD Doctor reports the LBA rather than in how it measures and reports the error count data, since the C1 counts varied even though the LBA remained the same. Here are a few examples:
- Code: Select all
  LBA    C1
------------
 3185     4
 3185     1
12281     0
12281     2
21990     2
21990     2
Based on this, I don't believe that the sample spacing was actually 0 blocks. I think there were two full-size 75-block samples, separated by an average of 81 blocks, that were incorrectly reported as having the same LBA. It does make me wonder, though, whether CD Doctor is using a slightly different polling technique. To me it smacks of polling on a Read C1C2 command that returns complete/incomplete status, and reading the LBA only after the Read C1C2 command returns with status "complete". This is, however, pure speculation.
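If anyone wants to check their own CD Doctor logs for this artifact, something along these lines will flag consecutive samples that report the same LBA (again assuming rows of LBA followed by C1; adjust for the real file layout):

- Code: Select all
def find_duplicate_lbas(samples):
    """samples: list of (lba, c1) tuples in measurement order."""
    dupes = []
    for (lba_a, c1_a), (lba_b, c1_b) in zip(samples, samples[1:]):
        if lba_a == lba_b:
            dupes.append((lba_a, c1_a, c1_b))   # same LBA, two C1 counts
    return dupes

# Quick check against the examples above:
print(find_duplicate_lbas([(3185, 4), (3185, 1), (12281, 0), (12281, 2)]))
# -> [(3185, 4, 1), (12281, 0, 2)]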
In the end, all three programs show very similar behavior and results. I believe that the limiting factor is likely the characteristics of the underlying hardware in the drives. So, if they all show essentially the same behavior and results, which should be used? That obviously is a question that everyone has to answer for himself or herself, but I will state my personal views.
WSES was the only game in town when it first came out, and it was great to have such a tool, but it is very inconvenient to use. CD Doctor was a big advance in convenience, since it runs under Windows and allows results to be easily saved as an annotated chart for display or as a csv file for further analysis. KProbe improves on CD Doctor by offering more control over formatting and scaling and by providing additional tests and information. Also, KProbe is still under active development, and Karr has shown a very generous spirit in working with us and incorporating our suggestions. Based on all of these considerations, I personally intend to use KProbe for most of my future testing.
MikeTR wrote:We can't expect performance on par with a professional dedicated machine from a (free) tool like this. I think mr. Wang has done a great job so far and I hope he keeps going on like this.
I agree 110%!
MikeTR wrote:He might,....as long as cfitz and KCK don't drive him from this forum with those gruelling questions
Hopefully we aren't driving him away. I've tried to convey to him that I support his efforts and appreciate that he has given us such a nice test program. I wouldn't want inquisitiveness and constructive criticism to be confused with mean-spirited bashing. I'm definitely not against KProbe.
And by the way, why don't you include MediumRare in the list of grueling inquisitors? I think he deserves credit (or blame) as well.
cfitz