
Forensic Testimony:
Do It Right or Do Something Else

Dave Carlson - September 17, 2006 (Updated February 18, 2013)


Forensic testimony generally is regarded as reliable, but occasionally faulty forensic practices give criminalistics and the forensic profession a black eye. Court records reveal occasions when innocent people have been convicted wrongfully based on faulty forensic evidence, human error, or incompetent forensic testimony. This article discusses cases related to fingerprint identification, toolmark examination, DNA analysis, and lab procedures. It analyzes select cases to determine what went wrong and how the errors could have been avoided. It presents the results of an interesting and unique study concerning human response to personal bias and contextual distractions, and discusses concerns some in the legal profession have about DNA testing. The article concludes with an exhortation for forensic investigators to be professional, thorough, accurate, and honest.

Forensic Testimony: Do It Right or Do Something Else

This article will illustrate examples where faulty forensic testimony and practices gave criminalistics and the forensic profession a black eye. It will list cases where innocent people have been convicted wrongfully based on faulty forensic evidence, human error, or incompetent forensic testimony. It will further analyze some cases to see what went wrong and how the errors could have been avoided. It will conclude with an exhortation for forensic investigators to be professional, thorough, and accurate.

Inconsistencies of forensic evidence and forensic expert testimony are illuminated by the Daubert spotlight. “I’m an expert” is no longer acceptable as a reason to admit evidence or testimony in some courts. A U.S. Supreme Court ruling promulgated “that the Daubert standard may be applied to any sort of expert testimony, not just strictly ‘scientific’ testimony (Kumho Tire Co. v Carmichael)” (Inman & Rudin, 2001, p. 292).

Daubert v Merrell Dow Pharmaceuticals established that “expert opinion based on a scientific technique is inadmissible unless the technique is ‘generally accepted’ as reliable in the relevant scientific community” (Daubert, 1993, ¶ 1). The case placed the burden for determining the admissibility of expert testimony upon the shoulders of the trial judge to “make a preliminary assessment of whether the testimony's underlying reasoning or methodology is scientifically valid and properly can be applied to the facts at issue” (Daubert, 1993, ¶ 5). Giannelli (2003) identifies Daubert as “the most important evidence case ever decided” (p. 1072).

Fingerprint Identification
Simultaneous Impressions Procedure

Frequently, subtle rules of evidence change from court to court. What might have been acceptable in one court may not be admissible in another. Armstrong (2006) discusses the implications of using an on-the-edge fingerprint identification technique called identification by simultaneous impressions (“fingerprints believed to be made by multiple fingers from the same hand made at the same time” (¶ 13)) to tie a suspect to the scene of a crime. There was not enough detail in any individual latent print to make a positive identification using the standard procedure, so the examiner concluded that the combined minor similarities across the fingerprints provided enough evidence to establish identification (¶ 13).

Upon appeal, the February 1995 conviction was set aside in 2000 (Armstrong, 2006, ¶ 3). Because there were not enough similarities in any individual print, the fingerprint examiner concluded the synergy created by the collection of similarities from all four prints justified a greater-than-the-sum-of-its-parts identification (Armstrong, 2006, ¶ 4). The court did not agree with the examiner. The court agreed with fingerprint analysis as a generally accepted scientific process, but there are requisite levels of identifying detail that “must be present for the evidence to meet the necessary standard prior to being admitted into evidence” (Armstrong, 2006, ¶ 16). The court felt that the “process of individualization by simultaneous impressions has not reached the necessary standard of trustworthiness” (Armstrong, 2006, ¶ 16).

Psychological and Cognitive Factors

Dror and Charlton (2006) investigated whether errors occur even “when expert practitioners perform well and technology is effective” (p. 602). If errors did occur during ideal conditions (with trained personnel and effective technology), the investigators also wanted to know why and how errors happen (p. 602). To answer these questions, Dror and Charlton (2006) conducted an experiment to study how expert latent fingerprint examiners would perform upon confronting “biasing contextual information” (p. 600) when reexamining the same prints they had judged previously. The experiment resulted in two-thirds of the experts making inconsistent decisions (p. 600).

What could account for so many competent experts, using appropriate technology, reversing their previous conclusions? Because humans are involved, the experimenters would expect to observe some anomalies, but a two-thirds reversal could not be brushed away with a nobody’s-perfect excuse. There must have been “psychological and cognitive vulnerabilities” (Dror & Charlton, 2006, p. 600) involved.

One specific experiment was presenting five experts with fingerprints identified as having come from a “highly publicized erroneous identification,” leading them to believe that the fingerprints they were to examine were exclusions (Dror & Charlton, 2006, p. 605). The reality of the situation was that the experts were presented with the same fingerprints they had previously individualized. The results: “most of the experts (four of the five) were affected by the context and made inconsistent decisions” (Dror & Charlton, 2006, p. 606).

It is beyond the scope of this article to discuss the specifics of the experiment -- the reader is invited to read Dror and Charlton (2006) for details. The point of this discussion is that even trained professionals are not immune to personal bias and contextual distractions.

Toolmark Examination
Ballistics Identification

In September 2004, U.S. District Judge Lee Rosenthal ordered a third re-trial based on faulty ballistics evidence presented by the “Houston Police Department's embattled crime lab” (McVicker, 2005, ¶ 1). “’I'm astonished that the ballistics lab hasn't already drawn more attention,’ said attorney Morris Moon, of the Texas Defender Service” (McVicker, 2005, ¶ 5).

In one case, which was returned for re-trial, the ballistics examiner testified the victim was killed by “either a .38 or .357 caliber” (McVicker, 2005, ¶ 28) bullet to the head. The defendant was discovered in possession of a .38 caliber revolver (McVicker, 2005, ¶ 29). However, fourteen years later, a ballistic reexamination revealed the bullet that killed the victim most likely was a .25 caliber bullet, not a .38 or .357 caliber bullet as asserted by the original examiner (McVicker, 2005, ¶ 30).

Knife Mark Identification

Based primarily on the following testimony by Miami crime technician Robert Hart, a toolmark examiner, Joseph J. Ramirez was found guilty and sentenced to death for a 1983 murder (Moenssens, 2005, ¶ 4-5).

"The result of my examination made from the microscopic similarity, which I observed from both the cut cartilage and the standard mark, was the stab wound in the victim was caused by this particular knife to the exclusion of all others." The technician explained that he had compared a piece of cut cartilage from the body of the victim to knife impressions, using the knife in question, but had made no comparisons with other knives. (Emphasis added). (Moenssens, 2005, ¶ 6)

On December 20, 2001 the Supreme Court of Florida overturned the conviction, stating, “We reverse the convictions and vacate the sentences for the same reason as before—i.e., the trial court erroneously admitted evidence based on the knife mark identification procedure of Robert Hart” (Moenssens, 2005, ¶ 3). This ruling followed an appeal of the third time Ramirez was tried for the 1983 murder (Moenssens, 2005, ¶ 4).

Even if Hart had adequate experience identifying marks left by a knife on human cartilage and his examination followed acceptable scientific practice, his testimony included at least two fatal flaws. (1) In essence, he testified that no other knife ever in existence could have caused the stab wound. (2) He failed to compare the marks on the cartilage with any other knife.

An evaluation of Hart’s testimony reveals that he might have saved both his reputation and his explanation of the evidence had he considered his words and actions more carefully. His words, “to the exclusion of all others” (Moenssens, 2005, ¶ 6), and his failure to at least consider testing a null hypothesis sealed his fate.

In this case, common sense agrees with accepted scientific practice. How can one exclude all others when the object of interest has never been compared with any other object? “Like fingerprint examiners, the opinions of toolmark and firearm examiners have been accepted almost without challenge regarding the individualization of an impression to a tool or a bullet to a gun” (Inman & Rudin, 2001, p. 53). To avoid Hart’s testimonial error, a forensic examiner must not become too complacent about her expert status and remember that she must substantiate her findings, instead of just claiming it is so.

The body of collected evidence and examinations of knife marks is not significant enough to claim a pattern of exclusion. It is not possible to prove that no two fingerprints and no two snowflakes are exactly alike. However, the majority of the general scientific community has accepted that the significant base of exhibits establishes that the chances of finding an exact match are so minuscule that the theory of “no matches” also is accepted. The collection of knife mark samples does not even come close to the research documented for fingerprints or snowflakes. There simply is not a preponderance of evidence to establish that no other knife ever to exist could have caused a particular mark.

Testing a null hypothesis (trying to prove that the knife in question could not have caused the wound (Inman & Rudin, 2001, p. 6)) would have left Hart with a small shred of credibility. Or, at the very least, testing different knives to demonstrate that they could not have caused the wound would have shown an attempt at objectivity. Hart’s testimony indicates that he lost his objectivity and convinced himself there was no reason to consider other alternatives. This lesson is important for all forensic examiners to remember, so they don’t repeat this kind of credibility-shattering blunder on the stand.

Phrenology (“a theory which claims to be able to determine character, personality traits, and criminality on the basis of the shape of the head (reading bumps)” (Wikipedia, 2006, ¶ 1)) was dismissed as quackery in the early 20th century. Courts discovered that expert opinion, even though loosely based on accepted scientific principles of the day (Daley, 2004, ¶ 6), needed more support than just “because I said so it must be true.” The trend toward requiring more than just a single expert opinion is paving the path toward increased validity of forensic examinations.

DNA Analysis
Testing Error

DNA analysis has been accepted by science and the courts as the most reliable method of identification. “Using DNA analysis, trace biological samples can now be used to determine individual identity, with a vanishingly small probability that the sample derived from any other individual” (Haglund & Sorg, 1997, p. 109). The danger of such a universally accepted method of identification is that sometimes people become complacent about its use as evidence. Even though DNA matching is accepted as positive identification of an individual, the second part of the entire evidence picture is to establish testing was accomplished using acceptable procedures.

In early February 2000, several British press reports revealed that a local police department admitted that a suspect was arrested for burglary and tried based on a DNA testing error (Moenssens, 2000, ¶ 1). At that time British rules had established the fidelity of DNA matching based on testing of six loci, since this level of examination produced only a one in 37 million chance of error. Based on the error revealed during the trial of that case, British standards have been raised to require testing of ten loci, establishing less than a one in a billion chance of duplicate matching (Moenssens, 2000, ¶ 2).
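The arithmetic behind those figures is worth a moment: under the product rule, per-locus match frequencies are multiplied together (assuming the loci are statistically independent), so each added locus shrinks the random-match probability multiplicatively. The sketch below illustrates the idea; the per-locus frequency is a hypothetical round number chosen so that six loci land near the one-in-37-million figure cited above, not the actual frequencies used by the British database.

```python
# Illustrative sketch of the product rule for DNA random-match probability.
# The per-locus frequency below is a hypothetical value for demonstration,
# NOT an actual population-genetics figure.

PER_LOCUS_FREQUENCY = 0.055  # assumed average genotype frequency at one locus

def random_match_probability(num_loci, freq=PER_LOCUS_FREQUENCY):
    """Chance that an unrelated person matches at every tested locus,
    assuming the loci are statistically independent (the product rule)."""
    return freq ** num_loci

for loci in (6, 10):
    p = random_match_probability(loci)
    print(f"{loci} loci: roughly 1 in {1 / p:,.0f}")
```

With these assumed numbers, six loci give odds of roughly one in 36 million, while ten loci push the odds well past one in a billion, which is why moving from six to ten loci so dramatically reduced the chance of a coincidental match.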

It is interesting to note that the DNA evidence was the only thing that fingered the suspect. Even though it was proven that he was more than 200 miles away at the time of the burglary, police concluded that “it had to be him” (Moenssens, 2000, ¶ 1). They would not accept the possibility that the DNA evidence could have been wrong. Even though the famous Sherlock Holmes axiom, "Eliminate all other factors, and the one which remains must be the truth" (Doyle, 2001, p. 66), remains valid, law enforcement officials must be cautious about which factors are eliminated or accepted. By not considering that there could have been an error, the police, in effect, did not eliminate all other factors before arriving at what they perceived to be the truth.

Human Error

McDonald (2006) reported that about 60 cases in Australia could be reopened based on unreliable DNA evidence (¶ 1). Ron Grice, a former Queensland Health Scientific Services scientist, admitted that up to 5% of the 1200 cases he handled used DNA samples too small to be retested, so there would be no way to verify the results (McDonald, 2006, ¶ 1).

Grice further admitted that “it was not uncommon for he [sic] and his colleagues to mix up DNA samples belonging to different cases” (McDonald, 2006, ¶ 3). Additionally, sometimes when the examiners in the John Tong centre did have enough samples to test, they would use their own blood as controls, knowing that there was not enough of the original sample for a retest if their results were questioned in court (McDonald, 2006, ¶ 4).

Even after the discovery of the inconsistencies and unprofessional procedures, a spokeswoman for Queensland Health Scientific Services confirmed that the lab continued to test samples that were too small to retest. However, she asserted that “processes were in place to prevent miscarriages of justice” (McDonald, 2006, ¶ 10). This example underscores the requirement for forensic examiners to ensure they comply with acceptable scientific standards and procedures and thoroughly document their work.

Lab Procedures
People Problems

Four murder cases and three rape cases were opened for re-examination because prosecutors discovered errors in Ranae Houtz’s work (Levy, 2003, ¶ 1). Houtz is a former Pennsylvania state police forensic scientist. Major John R. Capriotti, the bureau’s director, also confirmed that “more requests are anticipated” (Levy, 2003, ¶ 1). Capriotti added that “Houtz analyzed evidence in 615 cases, ranging from murders to simple assaults, over three years” (Levy, 2003, ¶ 2).

Houtz was one of about thirty scientists who worked in the state police crime laboratories. Among the scientists at the seven labs reviewed, Houtz was the only one found to have such a significant history of errors “analyzing evidence to find and identify body fluids” (Levy, 2003, ¶ 11). Even though the other scientists performed exemplary work, this one case of incompetence left a scar of distrust that will be difficult to erase.

Fred Zain was a serologist of questionable integrity at the West Virginia State Police Crime Lab. Following his termination by the West Virginia Lab, Zain secured a position with the County Medical Examiner’s Lab in San Antonio (Kanon, 2002, p. 450). During his tenure with both laboratories, Zain falsified lab results to show results favorable to the prosecution. He then used this false data to testify against defendants, many of whom were convicted based on his testimony. Even though “DNA profiling cannot positively identify one person” (Kanon, 2002, p. 450), Zain testified in a rape case that a defendant’s DNA “could only have originated from [the defendant]” (Kanon, 2002, p. 450).

Based on a history of faulty testimony, the West Virginia Supreme Court of Appeals opened an investigation of Zain and the West Virginia State Police Crime Laboratory. The investigation rocked the forensic community when “the State Supreme Court ruled that none of the testimony given by Zain in more than 130 cases was credible” (Connors, Lundregan, Miller, & McEwen, 1996, p. xvii). Additionally, “the court ordered that Zain be indicted for perjury” (Connors, Lundregan, Miller, & McEwen, 1996, p. xvii). In July 1994, a “grand jury indicted Zain for perjury, tampering with government records, and fabricating evidence” (Connors, Lundregan, Miller, & McEwen, 1996, p. 18). Zain was not convicted for these alleged misdeeds, because the statute of limitations had run out on the charges (Inman & Rudin, 2001, p. 319).

“Unfortunately, the fraud committed by Fred Zain is not unique” (Westervelt & Humphrey, 2005, p. 28). In New York State, between 1992 and 1995, “five state police troopers pled guilty and were sentenced for repeatedly planting fingerprints in criminal cases” (Westervelt & Humphrey, 2005, p. 28). Additionally, in 1997, a New Jersey medical examiner on trial for faking an autopsy report was convicted of witness tampering (Westervelt & Humphrey, 2005, p. 28).

An investigation of the Houston Police Department crime lab in 2005 revealed “63 percent of the cases featured errors of forensic science. What's more, in 27 percent of the cases forensic scientists gave false or misleading testimony” (Casey, 2005, ¶ 9). Casey (2005) goes on to speculate with Jonathan Koehler, a behavioral scientist specializing in legal issues at the University of Texas McCombs School of Business, that

One of the reasons forensic science is so fallible . . . is that 96 percent of the positions are held by persons with bachelor's degrees or less. By contrast, in "normal science, academically gifted students receive four or more years of doctoral training where much of the socialization into the culture takes place. This culture emphasizes methodological rigor, openness, and cautious interpretation of data." (¶ 23)

The unspoken question to ponder is: Should forensic examiners be held to the same educational standard as other scientists?

Leadership Problems

In March 2000, Joyce Gilchrist was relieved of her position as supervisor in the Oklahoma City police department crime lab. In January 2001, “a devastating memo from Byron Boshell, captain of the police department’s laboratory-services division, thudded onto [Police Chief M.T.] Berry’s desk” (Luscombe, 2001, ¶ 10). The memo “filled four three-ring binders and noted reversals and reprimands the courts had handed Gilchrist, as well as the issues the professional journals had taken with her work” (Luscombe, 2001, ¶ 10).

Charges against Gilchrist included missing evidence and “blatant withholding of unquestionably exculpatory evidence” (Luscombe, 2001, ¶ 9) during a rape and murder trial. Further investigation revealed that the original complaint was filed against her fourteen years before, but “ignored by judges and police who did nothing” (Luscombe, 2001, ¶ 11).

Procedure Problems

Mills, McRoberts, and Possley (2004) reported numerous examples of procedural problems that allowed erroneous evidence to be admitted into evidence. Here are some examples of procedural problems in crime labs:

  • In the 1992 rape trial of John Willis, Pamela Fish, from the Illinois State Police crime lab in Chicago, testified that the results of her test were inconclusive when asked if she had identified Willis as a potential source of semen. Examination of her lab notes “showed she did not find Willis’ blood type in semen recovered from the crime scene” (¶ 2).
  • Two lab examiners asserted in a 1997 murder trial in Kane County that they could definitively link the defendant to the crime through his lip prints, even though the FBI has never validated the practice (¶ 15).
  • Don Plautz spent 24 years in the Illinois crime lab system as a supervisor and director before retiring in 2002. He said he had a different philosophy from many of his colleagues. Many forensic scientists at the state police labs, Plautz said, saw their role as members of the state's attorney's team. "They thought they were prosecution witnesses," he said. "They didn't understand they were just scientists" (¶ 40).

These are isolated examples of things gone wrong in a profession otherwise acknowledged for its integrity and high ethical principles. Dr. Paul Kirk, a historical icon of the forensic community, observed that, “As a rule, even those practitioners not bound by any official code of ethics tend to be objective, fair and just in their relations to the people and the law. The exceptions are not more glaring than those in many of the established professions” (Inman & Rudin, 2001, p. 301).


This article has revealed and discussed cases where expert forensic testimony and forensic procedures have manifested themselves as legal disasters. It is imperative that everyone involved in the profession of gathering, analyzing, interpreting, and presenting forensic evidence ensure they establish and follow appropriate procedures.

In the preamble to the Code of Ethics of the California Association of Criminalists we learn that, “It is the duty of any person practicing the profession of criminalistics to serve the interests of justice to the best of his ability at all times” (Inman & Rudin, 2001, p. 347). “The unfortunate poster child for unethical conduct in forensic science is Fred Zain” (Inman & Rudin, 2001, p. 318). Inman and Rudin (2001) revealed that Zain had worked on hundreds of cases that showed an “undeniable trend toward interpreting marginal or even nonexistent results so as to implicate a suspect” (p. 318). Don’t be another Fred Zain. Do it right or do something else.


Armstrong, E. D. (2006). Did the partial fingerprints lie?: Identification by "simultaneous prints" struck down. Retrieved September 1, 2006, from http://forensic-evidence.com/site/ID/SimultaneousPrints.html

Casey, R. (2005, September 17). It's a crime when science gets it wrong. [Electronic version]. Houston Chronicle. Retrieved September 15, 2006, from http://www.chron.com/disp/story.mpl/metropolitan/casey/3357691.html

Connors, E., Lundregan, T., Miller, N., and McEwen, T. (1996). Convicted by juries, exonerated by science: Case studies in the use of DNA evidence to establish innocence after trial. Washington, DC: National Institute of Justice.

Daley, B. (2004, June 8). Foolproof forensics? [Electronic version]. The Boston Globe. Retrieved September 15, 2006, from http://www.truthinjustice.org/foolproof-forensics.htm

Daubert v. Merrell Dow Pharmaceuticals (92-102), 509 U.S. 579 (1993). Retrieved September 15, 2006, from http://supct.law.cornell.edu/supct/html/92-102.ZS.html

Doyle, A. C. (2001). The original illustrated "Strand" Sherlock Holmes. Hertfordshire, UK: Wadsworth Editions.

Dror, I. E. and Charlton, D. (2006, February). Why experts make errors. [Electronic version]. Journal of Forensic Identification, 56(4), 600-619. Retrieved August 24, 2006, from http://www.ecs.soton.ac.uk/~id/JFI%20expert%20error.pdf

Giannelli, P. C. (2003, Fall). The Supreme Court's “criminal” Daubert cases. Seton Hall Law Review, 33(4), 1071-1112.

Haglund, W. D. and Sorg, M. H. (1997). Forensic taphonomy: The postmortem fate of human remains. Boca Raton, FL: CRC Press.

Inman, K. and Rudin, N. (2001). Principles and practices of criminalistics: The profession of forensic science. Boca Raton, FL: CRC Press.

Kanon, D. L. (2002). Will the truth set them free? No, but the lab might: Statutory responses to advancements in DNA technology. Arizona Law Review, 44(2), 449-476.

Levy, M. (2003, June 20). Pa. forensic work called into question. AP Online. Retrieved September 1, 2006, from http://www.law-forensic.com/cfr_houtz_3.htm

Luscombe, B. (2001, May 21). When the evidence lies, Joyce Gilchrist helped send dozens to death row: The forensic scientist's errors are putting capital punishment under the microscope. [Electronic version]. Time, 38. Retrieved September 16, 2006, from http://www.law-forensic.com/cfr_gilchrist_7.htm

McDonald, A. (2006, July 8). DNA evidence claim clouds Australian convictions. [Electronic version]. The Australian. Retrieved September 1, 2006 from http://truthinjustice.org/AussieDNA.htm

McVicker, S. (2005, March 14). Ballistics lab results questioned in 3 cases. Houston Chronicle. Retrieved September 15, 2006, from http://www.chron.com/disp/story.mpl/special/crimelab/3083288.html

Mills, S., McRoberts, F., and Possley, M. (2004, October 20). When labs falter, defendants pay: Bias toward prosecution cited in Illinois cases. Chicago Tribune. Retrieved August 29, 2006, from http://truthinjustice.org/labs-falter.htm

Moenssens, A. A. (2000). A mistaken DNA identification?: What does it mean? Retrieved September 15, 2006, from http://forensic-evidence.com/site/EVID/EL_DNAerror.html

Moenssens, A. A. (2005). Toolmark identification received a (Frye-Daubert) body blow in Florida. Retrieved September 1, 2006, from http://forensic-evidence.com/site/ID/toolmark_id.html

Westervelt, S. and Humphrey, J. A. (Eds.). (2005). Wrongly convicted: Perspectives on failed justice. New Brunswick, NJ: Rutgers University Press.

Wikipedia. (2006). Phrenology. Retrieved September 16, 2006, from http://en.wikipedia.org/wiki/Phrenology



Copyright © 2016, DynoTech Software, All Rights Reserved.