by Halc on Tue Apr 19, 2005 3:36 am
What pains me is this:
They did a lot of work.
I mean, of course there are some no-lifer enthusiasts with no full-time work/family/obligations (who, me?) who could do even more work.
Regardless, they did a great deal of work.
But had they spent 20% of that time asking around, reading here, at cdrinfo and at cdfreaks, they could have used the remaining time far more effectively.
I'm not saying that we have all the information, or that the testing I do myself, for example, is faultless. Of course it is not. What I've learned over the past 3-4 years is that this is constant learning, and one must remain humble.
But at least many (if not most) of the commonly agreed mistakes can be weeded out by asking around, participating in discussions and reading others' test methodologies (and the criticism of them).
If one does not do this, there can be several reasons: a language barrier, hubris and/or not knowing of the possible sources (even after trying to find them). A language barrier, or not finding the sources despite trying, is understandable. Hubris is not a good reason, in my book anyway :)
Real-life constraints, in my opinion, are not the most obvious reason either, given that one ends up doing a huge amount of testing anyway (and thus spending a lot of time on the project as a whole).
It boils down to balancing preliminary research and planning with the actual unavoidable grunt work (burn, test, take pictures, compile, analyze, draw conclusions, write article).
So, based on all this, I recommend we not lambast the people who fall short of our often very unrealistic test criteria, but give them constructive criticism instead. After all, they did try, they did do a lot of work, and they can still learn more.
There aren't enough knowledgeable testers in the world; if there were, we wouldn't need discussions like these, we could just read the results of others :) So, let's try to educate the testers we have.
regards,
halcyon
PS I have a similar problem myself in terms of testing. I'm going to do a test of 5 drives (limited to 5 on purpose). Now, I could of course burn as many discs as possible with each drive and scan each of them once. Those of you who know my current stance on this know that I don't think much of such tests in terms of statistical reliability.
My other choice is to burn a very small number of discs (the most commonly used), but to scan all the burns on various readers (excluding LiteOn completely). Now, this is far from optimal too, because I can then use at most 6 different brands and the amount of testing will still be huge. And the readers will cry "why didn't you test X, why didn't you test Y", etc.
Of course, there is the third option of burning a huge number of discs and scanning them with a large number of drives. While this is theoretically the best option, I've noticed it's just not doable (at least I can't do it); the rough sketch below shows why. It takes so long that even working on it for a couple of hours each day, the results become outdated before one can publish them :)
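To make that concrete, here is a tiny back-of-the-envelope Python sketch of the three options. Every number in it (burn and scan times, disc and reader counts) is a made-up assumption for illustration, not a measurement from my setup:

    # Rough time budget for the three test designs above.
    # All constants are hypothetical assumptions; adjust to your own hardware.
    BURN_MIN = 10   # assumed minutes per burn
    SCAN_MIN = 45   # assumed minutes per full-disc quality scan

    def total_hours(burners, media, burns_each, readers):
        """Wall-clock hours: burn every disc, then scan every burn on every reader."""
        burns = burners * media * burns_each
        scans = burns * readers
        return (burns * BURN_MIN + scans * SCAN_MIN) / 60

    # Option 1: many burns per drive, each disc scanned once
    print(total_hours(burners=5, media=10, burns_each=5, readers=1))   # ~229 h
    # Option 2: few burns, each scanned on 6 readers
    print(total_hours(burners=5, media=3, burns_each=1, readers=6))    # 70 h
    # Option 3: many burns AND many readers
    print(total_hours(burners=5, media=10, burns_each=5, readers=6))   # ~1167 h

With those assumed numbers, option 3 needs roughly 1170 hours of machine time; at a couple of hours per day, that is over a year and a half, which is exactly why the results go stale before publication.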
So, whatever one does, one will get huge amounts of criticism (some deserved, some completely ignorant).
From this point of view, it is (IMHO) better to do the test Right (TM) and produce reliable - even if limited in scope - results, because you will get scorned anyway.
It's better to have a little bit of reliable information than a huge amount of unreliable information. Imho.