Unvalidated science can, of course, lead to wrongful convictions, because it is not really science at all.
Ensuring the validity of scientific procedures and techniques is one of the most important steps in presenting probative, unbiased, and clear science to a fact-finder.
Because fact-finders tend to defer to expert witnesses, attorneys can provide balance by properly challenging and defending the evidence presented. The adversarial system is effective only when both sides properly evaluate the merit of the science. This calls for a working understanding of the concept of validity, an appreciation for the limitations of forensic techniques, and an awareness of the hurdles to obtaining valid results.
How does a lab or accreditation body know they are producing valid results? How would you know if a lab is producing valid results?
For a technique or procedure to be valid, it must consistently yield accurate results and must be competently performed. To clarify, validity (accuracy) is different from reliability (consistency).
For validation, labs are required to run repeated tests and inter-laboratory comparisons to identify possible errors and inconsistencies. To be valid, a measurement or measuring system must be (among other characteristics) both accurate and precise. Those terms may seem interchangeable, when in fact they are quite distinct: accuracy is how close a result comes to the true value, while precision is how closely repeated results agree with one another.
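The accuracy/precision distinction can be made concrete with a small numeric sketch. The instrument names, the true value of 50.0, and all measurement figures below are hypothetical, chosen only to show that a tight spread does not imply a correct answer, and vice versa.

```python
from statistics import mean, stdev

TRUE_VALUE = 50.0  # hypothetical known concentration of a reference sample

# Two hypothetical instruments, each measuring the same sample five times.
precise_but_biased = [54.1, 54.0, 54.2, 53.9, 54.1]  # tight spread, wrong center
accurate_but_noisy = [47.5, 52.8, 49.1, 51.6, 48.9]  # centered, wide spread

def summarize(name, results):
    bias = mean(results) - TRUE_VALUE  # accuracy: closeness to the true value
    spread = stdev(results)            # precision: agreement among repeats
    print(f"{name}: bias={bias:+.2f}, spread={spread:.2f}")

summarize("precise-but-biased", precise_but_biased)
summarize("accurate-but-noisy", accurate_but_noisy)
```

Neither instrument alone is valid: the first is precise but inaccurate, the second accurate on average but imprecise. A valid measuring system must score well on both.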
Valid laboratory tests are designed to generate distinct types of results; there are two main types of tests that can be performed.
Quantitative tests give a numerical result, typically a concentration of some compound in a solution. There will, of course, be some small, measurable amount of variability from one repetition of the test to the next, but all the results should fall within a narrow range.
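The "narrow range" idea above is how labs flag a run that drifts out of tolerance. The sketch below is a simplified illustration, not any lab's actual protocol: the target concentration, the tolerance, and the replicate values are all invented.

```python
from statistics import mean

# Hypothetical quantitative test: target 0.080 g/100 mL, tolerance ±0.010.
EXPECTED = 0.080
TOLERANCE = 0.010

replicates = [0.081, 0.079, 0.083, 0.078, 0.095]  # last value drifts out of range

for value in replicates:
    in_range = abs(value - EXPECTED) <= TOLERANCE
    status = "OK" if in_range else "OUT OF RANGE"
    print(f"{value:.3f} g/100 mL -> {status}")

print(f"mean of run: {mean(replicates):.4f} g/100 mL")
```

Small run-to-run variability is expected; what matters is that every replicate stays inside the validated tolerance, and an out-of-range value triggers review rather than being reported as-is.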
In qualitative tests, by contrast, the outcome is either positive or negative, and the result is based on observation. Determining what indicates a positive or negative result, however, can be more complicated than it sounds, requiring some subjective interpretation based on analyst experience.
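One common way this complication shows up is with signals near a decision cutoff. The sketch below uses an invented cutoff and an invented "gray zone" to show why a borderline observation may need analyst judgment rather than an automatic positive/negative call.

```python
# Hypothetical qualitative call with a gray zone around the cutoff.
# Both thresholds are assumptions for illustration, not a real assay's values.
CUTOFF = 1.0   # hypothetical signal level separating negative from positive
GRAY = 0.1     # hypothetical half-width of the borderline region

def call(signal):
    if signal >= CUTOFF + GRAY:
        return "positive"
    if signal <= CUTOFF - GRAY:
        return "negative"
    return "inconclusive"  # near the cutoff: analyst interpretation needed

for s in (0.4, 0.95, 1.05, 1.6):
    print(f"signal {s:.2f} -> {call(s)}")
```

A clear signal far from the cutoff yields an unambiguous call, but a signal inside the gray zone is exactly where experience-based interpretation, and therefore potential disagreement between analysts, enters.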
Neither qualitative nor quantitative evidence is inherently stronger; judges should always weigh evidence based on the full range of its scientific underpinnings. The National Academy of Sciences focuses on quantifiable error rates, emphasizing that when error rates are available for either quantitative or qualitative tests, they should be made known so that proper weight can be given to any scientific conclusions.