2 Unit 2: Admissibility of forensic science and expert testimony

2.1 Class 3: Admissibility under the Frye standard (Part A)

Frye v. United States

This case (a) set the prior standard for admission of expert testimony in the federal courts until it was replaced by the Daubert standard and (b) sets the current standard for admission in some states, including New York and New Jersey.

FRYE v. UNITED STATES.

(Court of Appeals of District of Columbia.

Submitted November 7, 1923.

Decided December 3, 1923.)

No. 3968.

1. Criminal law 472 — Expert testimony, explaining systolic blood pressure deception test, inadmissible.

The systolic blood pressure deception test, based on the theory that truth is spontaneous and comes without conscious effort, while the utterance of a falsehood requires a conscious effort, which is reflected in the blood pressure, held not to have such a scientific recognition among psychological and physiological authorities as would justify the courts in admitting expert testimony on defendant’s behalf, deduced from experiments thus far made.

2. Criminal law 472 — Principle must be generally accepted, to render expert testimony admissible.

While the courts will go a long way in admitting expert testimony, deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.

Appeal from the Supreme Court of the District of Columbia.

James Alphonzo Frye was convicted of murder, and he appeals.

Affirmed.

Richard V. Mattingly and Foster Wood, both of Washington, D. C., for appellant.

Peyton Gordon and J. H. Bilbrey, both of Washington, D. C., for the United States.

Before SMYTH, Chief Justice, VAN ORSDEL, Associate Justice, and MARTIN, Presiding Judge of the United States Court of Customs Appeals.

VAN ORSDEL, Associate Justice.

Appellant, defendant below, was convicted of the crime of murder in the second degree, and from the judgment prosecutes this appeal.

A single assignment of error is presented for our consideration. In the course of the trial counsel for defendant offered an expert witness to testify to the result of a deception test made upon defendant. The test is described as the systolic blood pressure deception test. It is asserted that blood pressure is influenced by change in the emotions of the witness, and that the systolic blood pressure rises are brought about by nervous impulses sent to the sympathetic branch of the autonomic nervous system. Scientific experiments, it is claimed, have demonstrated that fear, rage, and pain always produce a rise of systolic blood pressure, and that conscious deception or falsehood, concealment of facts, or guilt of crime, accompanied by fear of detection when the person is under examination, raises the systolic blood pressure in a curve, which corresponds exactly to the struggle going on in the subject’s mind, between fear and attempted control of that fear, as the examination touches the vital points in respect of which he is attempting to deceive the examiner.

In other words, the theory seems to be that truth is spontaneous, and comes without conscious effort, while the utterance of a falsehood requires a conscious effort, which is reflected in the blood pressure. The rise thus produced is easily detected and distinguished from the rise produced by mere fear of the examination itself. In the former instance, the pressure rises higher than in the latter, and is more pronounced as the examination proceeds, while in the latter case, if the subject is telling the truth, the pressure registers highest at the beginning of the examination, and gradually diminishes as the examination proceeds.

Prior to the trial defendant was subjected to this deception test, and counsel offered the scientist who conducted the test as an expert to testify to the results obtained. The offer was objected to by counsel for the government, and the court sustained the objection. Counsel for defendant then offered to have the proffered witness conduct a test in the presence of the jury. This also was denied.

Counsel for defendant, in their able presentation of the novel question involved, correctly state in their brief that no cases directly in point have been found. The broad ground, however, upon which they plant their case, is succinctly stated in their brief as follows:

“The rule is that the opinions of experts or skilled witnesses are admissible in evidence in those cases in which the matter of inquiry is such that inexperienced persons are unlikely to prove capable of forming a correct judgment upon it, for the reason that the subject-matter so far partakes of a science, art, or trade as to require a previous habit or experience or study in it, in order to acquire a knowledge of it. When the question involved does not lie within the range of common experience or common knowledge, but requires special experience or special knowledge, then the opinions of witnesses skilled in that particular science, art, or trade to which the question relates are admissible in evidence.”

[1,2] Numerous cases are cited in support of this rule. Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduction is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.

We think the systolic blood pressure deception test has not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony deduced from the discovery, development, and experiments thus far made.

The judgment is affirmed.

Motion to Exclude Fingerprint Evidence Under Frye

This is a defense motion to exclude fingerprint evidence in a criminal case under Frye (which was referred to as the Frye/Dyas test in D.C.). The actual document is posted in the "Class 3" folder on Moodle - please go to Moodle to read the document. This motion:

(a) illustrates the application of the Frye test;

(b) considers the question of who counts as the relevant scientific community;

(c) describes the fingerprint identification process;

(d) describes the critique of that process; and

(e) illustrates the relationship of Rule 403 to Frye.

Government's Reply to the Motion to Exclude Fingerprint Evidence under Frye

This is an excerpt from the government's response to the motion to exclude fingerprint evidence that you just read. This excerpt illustrates the different view the government takes on the Frye test, the definition of the relevant scientific community, and the meaning of the 2009 NRC Report.

The excerpt is a pdf posted on our Moodle site under "Class 3."

Writing Reflection #3

Please go to our Moodle Page and under "Class 3" you will find the prompt and submission folder for Writing Reflection #3.

2.2 Class 4: Admissibility under the Frye standard (part B)

Innocence Project Amicus Brief in New York v. Williams

This amicus brief makes four recommendations to the New York Court of Appeals that are essentially four critiques of the way the Frye standard is handled in New York. (This excerpt includes the first three – we’ll read the fourth recommendation later.)

(You don’t need to know much about the underlying case to understand the amicus argument, but the legal question was “whether the trial court should have held a Frye hearing with respect to the admissibility of low copy number (LCN) DNA evidence and the results of a statistical analysis conducted using the proprietary forensic statistical tool (FST) developed and controlled by the New York City Office of Chief Medical Examiner (OCME).”)

APL-2018-00151
APL-2018-00157
New York County Clerk's Index No. [●]

 

THE PEOPLE OF THE STATE OF NEW YORK

Respondents,

—against —

CADMAN WILLIAMS,

Defendant-Appellant,

 

THE PEOPLE OF THE STATE OF NEW YORK

Respondents,

—against —

ELIJAH FOSTER-BEY,

Defendant-Appellant.

BRIEF OF AMICUS CURIAE THE INNOCENCE PROJECT

M. Chris Fabricant
Innocence Project, Inc.
40 Worth Street, Suite 701
New York, NY 10013

 

Konrad Cailteux
Carolyn R. Davis
Weil, Gotshal & Manges LLP
767 Fifth Avenue
New York, NY 10153
Tel.: (212) 310-8000
Fax.: (212) 310-8007

Attorneys for Amicus Curiae The Innocence Project

Date Completed: [●]

 

 

TABLE OF CONTENTS

Page

PRELIMINARY STATEMENT ................................................................. 3

ARGUMENT .............................................................................. 4

I. New York courts applying Frye should not use the “novelty test” to avoid further analysis of scientific evidence ................ 4

II. New York courts should not mistake legal precedent for an analysis of reliability in evaluating scientific evidence under Frye ................ 12

III. This Court should provide guidance on the appropriate scope of the “relevant scientific community.” ................ 18

IV. New York’s interest in effectively scrutinizing scientific evidence for reliability could be aided by using the factors discussed in Daubert to conduct a Frye analysis ................ 26

CONCLUSION ................ 30

TABLE OF AUTHORITIES

Cases

In re Accutane Litig.,
234 N.J. 340 (2018)..................................................................................... 29

Ex parte Chaney,
563 S.W.3d 239 (Tex. Crim. App. 2018)........................................................ 8

Chesson v. Montgomery Mutual Ins. Co.,
75 A.3d 932 (Md. 2013)............................................................................... 16

Coble v. State,
330 S.W.3d 253 (Tex. Crim. App. 2010)...................................................... 16

State ex rel. Collins v. Superior Court,
644 P.2d 1266 (Ariz. 1982).......................................................................... 22

Commonwealth v. Foley,
38 A.3d 882 (Pa. Super. Ct. 2012).................................................................. 7

Commonwealth v. Shanley,
919 N.E.2d 1254 (Mass. 2010)................................................................. 8, 16

Contreras v. State,
718 P.2d 129 (Alaska 1986)......................................................................... 23

Cornell v. 360 W. 51st St. Realty, LLC,
22 N.Y.3d 762 (2014)............................................................................ 13, 19

Daubert v. Merrell Dow Pharm., Inc.,
509 U.S. 579 (1993).............................................................................. passim

Frye v. United States,
293 F. 1013 (D.C. Cir. 1923)................................................................. passim

Marso v. Novak,
42 A.D.3d 377 (1st Dep’t 2007)..................................................................... 9

Melendez-Diaz v. Massachusetts,
557 U.S. 305 (2009).................................................................................... 24

Motorola Inc. v. Murray,
147 A.3d 751 (D.C. 2016)...................................................................... 27, 28

Parker v. Mobil Oil Corp.,
7 N.Y.3d 434 (2006)................................................................... 14, 18, 25, 28

People v. Boone,
30 N.Y.3d 521 (2017).................................................................................. 29

People v. Calabro,
161 A.D.2d 375 (1st Dep’t 1990).............................................................. 9, 28

People v. Collins,
49 Misc. 3d 595 (Sup. Ct. Kings Cty. 2015)............................................ 19, 25

People v. Foster-Bey,
158 A.D.3d 641 (2d Dep’t 2018)............................................................ 15, 17

People v. Garcia,
39 Misc. 3d 482 (Sup. Ct. Bronx Cty. 2013)...................................... 10, 15, 18

People v. John,
27 N.Y.3d 294 (2016).................................................................................. 24

People v. Johnson,
27 N.Y.3d 199 (2016).................................................................................. 29

People v. LeGrand,
8 N.Y.3d 449 (2007).......................................................................... 5, 12, 13

People v. Luna,
989 N.E.2d 655 (Ill. App. Ct. 2013).............................................................. 14

People v. McKown,
875 N.E.2d 1029 (Ill. 2007).................................................................... 15, 16

People v. Rodriguez,
Ind. No. 5471/2009, Decision and Order (Sup. Ct. N.Y. Cty. May 1, 2012).. 6, 7

People v. Shreck,
22 P.3d 68 (Colo. 2001)............................................................................. 5, 6

People v. Slone,
76 Cal. App. 3d 611 (Cal. Ct. App. 1978)..................................................... 20

People v. Smith,
63 N.Y.2d 41 (1984).................................................................................... 20

People v. Vining,
28 N.Y.3d 686 (2017).................................................................................. 29

People v. Wesley,
83 N.Y.2d 417 (1994)........................................................................... passim

Reed v. State,
391 A.2d 364 (Md. 1978)............................................................................. 23

Starks v. City of Waukegan,
123 F. Supp. 3d 1036 (N.D. Ill. 2015)........................................................... 21

State v. Alberico,
861 P.2d 192 (N.M. 1993)........................................................................... 14

State v. Coon,
974 P.2d 386 (Alaska 1999)........................................................................... 6

State v. Hull,
788 N.W.2d 91 (Minn. 2010)....................................................................... 17

State v. Sharpe,
SP-7326 (Alaska Jan. 4, 2019)....................................................... 8

State v. Ward,
694 S.E.2d 738 (N.C. 2010)......................................................................... 16

Sybers v. State,
841 So.2d 532 (Fla. Dist. Ct. App. 2003)...................................................... 22

Statutes & Rules

Federal Rule of Evidence 702....................................................................... 4, 27

Colorado Rule of Evidence 702........................................................................... 6

Other Authorities

David L. Faigman and Claire Lesikar, Organized Common Sense, 64 DePaul L. Rev. 421 (2014)........................................................................................... 13

David H. Kaye, Forensic Science, Statistics & the Law, “The New York City Medical Examiner’s Office ‘Under Fire’ for Low Template DNA Testing,” Sept. 11, 2017, http://for-sci-law.blogspot.com/2017/09/the-new-york-city-medical-examiners.html................................................................................. 7

David H. Kaye, et al., The New Wigmore on Evidence, “Limiting Strict Scrutiny by Methodology,” § 9.5.1 (2018).................................................................. 16

DNA Exonerations in the United States, The Innocence Project, https://www.innocenceproject.org/dna-exonerations-in-the-united-states/ (last visited June 26, 2019).................................................................................... 1

Eric S. Lander, Fixing Rule 702: The PCAST Report and Steps to Ensure the Reliability of Forensic Feature-Comparison Methods in Criminal Courts, 86 Fordham L. Rev. 1661 (2018)................................................................. 11, 12

Harry T. Edwards, Solving the Problems That Plague the Forensic Science Community, 50 Jurimetrics 5 (2009)............................................................. 23

James E. Starrs, Frye v. United States Restructured and Revitalized, 26 Jurimetrics J. 249 (1986)................................................................................ 7

Jane Campbell Moriarty, Deceptively Simple: Framing, Intuition, and Judicial Gatekeeping of Forensic Feature-Comparison Methods Evidence, 86 Fordham L. Rev. 1687 (2018)..................................................................................... 10

Lauren Kirchner, Traces of Crime: How New York’s DNA Techniques Became Tainted, NY Times, Sept. 4, 2017, https://www.nytimes.com/2017/09/04/nyregion/dna-analysis-evidence-new-york-disputed-techniques.html.......................................................... 17, 25, 26

Michael J. Saks, et al., Forensic bitemark identification: weak foundations, exaggerated claims, 3 J. L. Biosciences 538 (2016)......................................... 9

Misapplication of Forensic Science, The Innocence Project, https://innocenceproject.org/causes/misapplication-forensic-science/ (last visited June 26, 2019).................................................................................... 2

National Academy of Sciences, Strengthening Forensic Science in the United States (2009).......................................................................................... 21, 24

Peter W. Huber, Galileo's Revenge: Junk Science in the Courtroom (1991)........ 20

President’s Council of Advisors on Science and Technology, Report to the President: Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods (2016)..................................................... passim

INTEREST OF THE AMICUS CURIAE

The Innocence Project is a national litigation and public policy organization dedicated to providing pro bono legal and related investigative services to indigent prisoners whose actual innocence may be established through post-conviction DNA evidence.  To date, the work of the Innocence Project and affiliated organizations has led to the exoneration of 365 individuals who post-conviction DNA testing has shown were wrongly convicted of crimes they did not commit. . . .

The Innocence Project submits this brief because the issues presented have serious implications for ensuring defendants are not wrongfully convicted through the admission of unreliable forensic science.  The use of unreliable forensic sciences occurs in 45 percent of exonerations of innocent defendants established through post-conviction DNA testing.  . . . .

PRELIMINARY STATEMENT

As currently applied in New York, the Frye standard for the admissibility of scientific evidence is susceptible to a level of confusion and complacency that poses a serious risk to the integrity of criminal proceedings in this state.  Therefore, this Court should take steps to provide the lower courts with clear guidance in order to establish an effective, consistent, and fair application of the “general acceptance” test for the admissibility of scientific evidence.  And the Court should remind trial courts that they must take seriously the responsibility of evaluating contested scientific evidence for reliability, particularly where a defendant’s freedom is at stake.

First, the Court should caution lower courts against rigidly applying “novelty” as a threshold test to bypass further analysis of scientific evidence, and provide specific guidance that underscores how “novelty” in the scientific fields can be a fluid concept.  Second, the Court should reiterate that legal precedent, while a useful tool, cannot be a substitute for examining whether a scientific technique is “generally accepted as reliable by the relevant scientific community.”  In particular, courts should be wary of relying on prior decisions where there is new evidence that the scientific consensus has changed.  Third, the Court should provide guidance on the definition of the “relevant scientific community” that makes clear that the opinions of a few experts, particularly those with financial or professional interest in the proffered methodology, are inadequate to represent the relevant scientific community.  Finally, [omitted].

 

ARGUMENT

I. New York courts applying Frye should not use the “novelty test” to avoid further analysis of scientific evidence.

In seminal cases applying Frye, this Court has discussed the concept of “novelty” only in general terms to explain what scientific evidence courts should evaluate for general acceptability.  See, e.g., People v. Wesley, 83 N.Y.2d 417, 435 (1994) (“[W]here the scientific evidence sought to be presented is novel, the test is that articulated in Frye.”); People v. LeGrand, 8 N.Y.3d 449, 455 (2007) (“[I]n recognition that expert testimony of this nature may involve novel scientific theories and techniques, a trial court may need to determine whether the proffered expert testimony is generally accepted by the relevant scientific community”) (citations omitted).  Many courts have interpreted this language to mean that “novelty” is a threshold question that determines whether scientific evidence is subject to Frye scrutiny at all.  The “novelty test” risks oversimplifying and abusing the concept of “novelty”—which is in fact a wholly fluid concept in science—to forego any real Frye analysis.  On one hand, courts may construe novelty too narrowly and refuse to scrutinize new changes to an established methodology.  On the other hand, courts may construe novelty too broadly and refuse to scrutinize previously established methodologies that have since lost favor in the scientific community.  Compounding these problems, this Court has not provided guidance on what constitutes “novel” scientific evidence.

Even where, as here, the underlying scientific theory is generally accepted, parties should be able to challenge truly novel applications of that underlying theory.  See People v. Shreck, 22 P.3d 68, 76 (Colo. 2001) (criticizing courts for not subjecting evidence previously admitted under Frye to new scrutiny, “despite improvements or other developments in scientific technologies”).[1]  This Court has recognized that, where a proposed methodology or hypothesis builds upon established scientific theory, courts must still exercise proper diligence before categorizing the proposed methodology or hypothesis as “not novel” and ending its analysis there. See Wesley, 83 N.Y.2d at 438 (Kaye, Ch. J., concurring) (finding that the addition of new steps to the traditional process for analyzing DNA, in order to compare two DNA samples, was “truly novel”).  Within the forensic sciences, dozens of new methodologies have emerged, many of which build on other non-novel methodologies.  As forensic sciences continue to innovate, courts must continue to scrutinize those innovations under Frye.

Courts should hold a Frye hearing where the methodology as a whole has not yet been proven reliable, even if some elements of the methodology have been proven reliable.  Some courts, however, have improperly focused on the non-novel elements of a proffered methodology, rather than evaluating the reliability of the methodology as a whole.  For example, in People v. Rodriguez, Ind. No. 5471/2009, Decision and Order (Sup. Ct. N.Y. Cty. May 1, 2012), the District Attorney argued that the likelihood ratios generated by the Forensic Statistical Tool (“FST”) should be admitted because likelihood ratios are not novel.  Id.  The proper focus of the Frye inquiry, though, is whether the methodology used to calculate the proffered likelihood ratios is generally accepted—not whether, in the abstract, likelihood ratios are a generally accepted statistical concept.  See David H. Kaye, Forensic Science, Statistics & the Law, “The New York City Medical Examiner’s Office ‘Under Fire’ for Low Template DNA Testing,” Sept. 11, 2017, http://for-sci-law.blogspot.com/2017/09/the-new-york-city-medical-examiners.html (analogizing the Rodriguez court’s decision to admitting any linear regression model on the grounds that the method of least squares is not novel); see also James E. Starrs, Frye v. United States Restructured and Revitalized, 26 Jurimetrics J. 249, 254 (1986) (approval of gas chromatographic evidence in one case should not be read as giving “carte blanche approval to all gas chromatographs,” especially as new types of detectors or sampling methods emerge).
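For readers unfamiliar with the statistical concept the brief distinguishes from the FST's contested methodology, a minimal sketch of the likelihood-ratio arithmetic may help. The numbers below are purely hypothetical for illustration; they are not drawn from the FST, the case record, or any real population database:

```python
# Illustrative likelihood-ratio arithmetic (hypothetical numbers, not from the case).
# LR = P(evidence | prosecution hypothesis Hp) / P(evidence | defense hypothesis Hd).

def likelihood_ratio(p_evidence_given_hp: float, p_evidence_given_hd: float) -> float:
    """Return the likelihood ratio for two competing hypotheses."""
    return p_evidence_given_hp / p_evidence_given_hd

# Simple single-source DNA match: if the suspect is the source (Hp), a match
# is certain; if an unrelated person is the source (Hd), a match occurs with
# the genotype's population frequency (here a hypothetical 1 in 10,000).
lr = likelihood_ratio(1.0, 1.0 / 10_000)
print(lr)
```

The brief's point is that this ratio is uncontroversial as a concept; what requires Frye scrutiny is how a tool like the FST estimates the two probabilities for complex, low-template mixtures, where the denominator is no longer a simple genotype frequency.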

At the same time, non-novel forensic techniques that once enjoyed “general acceptance” have since been challenged, or outright discredited, by new scientific developments.  As other Frye jurisdictions have recognized, it cannot be the law that a technique is insulated from new scientific criticism simply because it is not “novel.”  See, e.g., Commonwealth v. Foley, 38 A.3d 882, 888 (Pa. Super. Ct. 2012) (“[N]ovelty is not restricted to new science, and even ‘bedrock’ scientific principles may be subject to a Frye analysis if those principles become disputed.”) (internal quotation marks and citation omitted); Commonwealth v. Shanley, 919 N.E.2d 1254, 1264 n.15 (Mass. 2010) (“[T]he evolving nature of scientific and clinical studies of the brain and memory and the controversy surrounding those studies [on dissociative memory loss] made it prudent for the judge to proceed with a Lanigan hearing in this case.”).  Daubert jurisdictions have also recognized that new scientific evidence may necessitate a new evidentiary hearing.  See, e.g., State v. Sharpe, SP-7326 (Alaska Jan. 4, 2019) at 24 (“If an appellate court has made a Daubert determination and then new scientific research becomes available, or if a litigant identifies research that the appellate court overlooked, the trial court would be justified in holding an evidentiary hearing to make a complete record and rule in the alternative.”).  Similarly, New York courts should ensure that “non-novel” methodologies and hypotheses continue to be accepted as reliable within the scientific community.  New York courts interpreting Frye, however, often fall short of this benchmark.

For example, bite mark evidence and hair microscopy have both been effectively discredited.  See, e.g., Ex parte Chaney, 563 S.W.3d 239, 257 (Tex. Crim. App. 2018) (“[T]he body of scientific knowledge underlying the field of bite mark comparisons [has] evolved in a way that discredits almost all the probabilistic bite mark evidence at trial.”) (vacating conviction based on bite mark evidence); PCAST Report at 87 (“PCAST finds that bite mark analysis does not meet the scientific standards for foundational validity, and is far from meeting such standards.”); PCAST Report at 13 (“PCAST’s own review of the cited papers finds that these studies [of human hair comparisons] do not establish the foundational validity and reliability of hair analysis.”).  Yet despite a change in the scientific consensus, both have effectively evaded Frye scrutiny because neither technique is considered “novel.”  See Michael J. Saks, et al., Forensic bitemark identification: weak foundations, exaggerated claims, 3 J. L. Biosciences 538, 541 (2016) (“Despite the lack of empirical evidence to support its claims, to date no court in the United States has excluded [forensic odontology] expert evidence for failing to meet the requisite legal standard for admission of expert testimony.”); People v. Calabro, 161 A.D.2d 375 (1st Dep’t 1990) (finding sufficient evidence of guilt based upon testimony of three forensic odontologists and admission of similar testimony in another case nine years prior, without discussing whether forensic odontology was admissible evidence in the first instance). 

Lower courts faced with questions of where to draw the line between “novelty” and established science have ruled inconsistently.  Compare Marso v. Novak, 42 A.D.3d 377, 378 (1st Dep’t 2007) (granting judgment notwithstanding verdict where plaintiff’s expert could not show that her conclusions were generally accepted, even though the underlying methodology was generally accepted), with People v. Garcia, 39 Misc. 3d 482, 484 (Sup. Ct. Bronx Cty. 2013) (“The application of a generally accepted technique, even though its application in a specific case was unique or modified, does not require a Frye hearing.”).  This Court should remind trial courts to consider potential changes in the scientific consensus; otherwise, the temptation to evade a true Frye inquiry by simply checking the “not novel” box is as strong as it is problematic.  See Jane Campbell Moriarty, Deceptively Simple: Framing, Intuition, and Judicial Gatekeeping of Forensic Feature-Comparison Methods Evidence, 86 Fordham L. Rev. 1687, 1697 (2018) (“Rather than addressing the complexity head on and resolving [the pointed science-based critiques of forensic comparison methods], courts tend to use a variety of analysis-avoiding methods in evaluating the reliability of [forensic comparison method] evidence, even after learning of its shortcomings in the NRC Report.”).

Here, no party is challenging the admissibility of DNA evidence in general, which has been admissible in New York for over 20 years; nor does the Innocence Project suggest that a Frye hearing must be held in every instance of contested scientific evidence.  Rather, the challenges in this case are to two new techniques for analyzing DNA that were not subjected to a Frye hearing to evaluate whether they are generally accepted by the scientific community for the use to which they were put.  In situations like this one, the concept of “novelty” should not be used as an excuse to turn a blind eye to evidence that warrants additional examination. 

Finally, a widely used technique may still be “novel” where its use is primarily in non-scientific communities.  At minimum, use alone is not a proper factor in determining whether to forego a Frye hearing, as it is incumbent upon courts to look “under the hood” of even widely-used forensic methods.  Indeed, many unreliable forensic methods initially find their way into courts through their use as investigative techniques.  PCAST Report at 32 (“[M]any of these difficulties with forensic science may stem from the historical reality that many methods were devised as rough heuristics to aid criminal investigations and were not grounded in the validation practices of scientific research.”). 

The investigative origin of forensic methods poses two problems.  First, “fundamentally, the forensic sciences do not yet have a well-developed ‘research culture.’”  Id.  Although the forensic community has made significant strides to increase its scientific rigor in the past few years, courts should be wary of forensic techniques that are grounded in casework and investigative experience, as opposed to scientific research and knowledge.  See Eric S. Lander, Fixing Rule 702: The PCAST Report and Steps to Ensure the Reliability of Forensic Feature-Comparison Methods in Criminal Courts, 86 Fordham L. Rev. 1661 (2018) (describing how previously untested forensic methods are now undergoing empirical validity testing).[2]  Second, the standard of reliability required for just outcomes may be lower in an investigation, which is by its nature exploratory, than in a prosecution, which seeks a final resolution.  See PCAST Report at 4 (“In investigations, insights and information may come from both well-established science and exploratory approaches.  In the prosecution phase, forensic science must satisfy a higher standard.”).  Although a technique’s “use” can serve as a helpful factor in determining whether a court should treat it as “novel,” trial courts should be cautious not to conflate use outside of the courtroom with reliability.

II. New York courts should not mistake legal precedent for an analysis of reliability in evaluating scientific evidence under Frye.

Directly related to the “novelty” inquiry is the role of precedent in Frye litigation.  This Court noted in LeGrand that “[o]nce a scientific procedure has been proved reliable,” scientific evidence may be considered not novel, thereby obviating the need for a Frye hearing.  8 N.Y.3d at 458 (citing Wesley, 83 N.Y.2d at 436 (Kaye, Ch. J., concurring)).  Thus, the test for determining when a Frye hearing should be held is whether prior proceedings have proved the proffered method’s reliability.  Once reliability has been proven, other courts “may take judicial notice of reliability of the general procedure.”  Id. at 458 (emphasis added).  In practice, similar to the problems inherent in the “novelty test,” courts have misconstrued this language and too often used legal precedent as a proxy for scientific acceptance, causing the most critical inquiry—reliability—to get lost in the shuffle.  Therefore, the Court should take this opportunity to provide crucial guidance to courts tasked with applying Frye, and, specifically, to instruct that (1) because science is constantly evolving, courts are not bound forever by a prior court’s Frye ruling; (2) a prior court’s ruling may be evidence of admissibility only where a Frye hearing was actually held; and (3) reliance on a prior court’s ruling may be improper where a party submits new evidence bearing on the challenged technique’s reliability or general acceptance within the scientific community.

Although Frye represents an intersection between law and science, these two fields approach “precedent” in fundamentally different ways.  See David L. Faigman and Claire Lesikar, Organized Common Sense, 64 DePaul L. Rev. 421, 422 (2014) (“[T]here is a basic disconnect between how scientists approach the empirical world and the way courts do so.”).  As this Court itself has recognized, “scientific understanding, unlike a trial record, is not by its nature static; the scientific consensus prevailing at the time of the Frye hearing in a particular case may or may not endure.”  Cornell v. 360 W. 51st St. Realty, LLC, 22 N.Y.3d 762, 786 (2014).  Similarly, the Supreme Court in Daubert acknowledged that, while reliance on precedent is axiomatic in our legal system, “[s]cientific conclusions are subject to perpetual revision.”  Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 596-97 (1993).  Other Frye jurisdictions have reached similar conclusions.  See, e.g., People v. Luna, 989 N.E.2d 655, 670 (Ill. App. Ct. 2013) (“[C]onstant scientific advances in our modern era may affect our inquiry as to the novelty of a particular methodology.”) (internal quotations omitted).  Therefore, when deciding the admissibility of scientific evidence, “courts should not be obliged to defer to past precedents: they should look afresh at the scientific issues.”  PCAST Report at 144.

  While a court may rely on a prior Frye hearing in the interest of judicial economy, a court should not rely on another court’s reliance on a third court’s Frye hearing.  Admissions without Frye hearings are only evidence of judges’ votes, as opposed to scientists’ votes.  See Parker, 7 N.Y.3d 434, 447 (2006) (“[Frye] emphasizes counting scientists’ votes, rather than on verifying the soundness of a scientific conclusion.” (internal quotation marks omitted)) (citing Wesley, 83 N.Y.2d at 439 (Kaye, C.J., concurring)); see also State v. Alberico, 861 P.2d 192, 203 (N.M. 1993) (“It is improper to look for scientific acceptance only from reported case law because that amounts to finding a consensus in the legal community based on scientific evidence that is sometimes many years old.”).

When courts admit evidence based upon other courts’ findings without a Frye hearing—as was the case in Foster-Bey regarding the combination of FST and Low Copy Number (“LCN”) DNA evidence—courts compound the illusion that a methodology is generally accepted by the scientific community.  Courts should be wary of recycling prior courts’ decisions, especially when the scientific consensus has shifted.  For example, in People v. Garcia, the trial court cited judicial opinions admitting LCN DNA without a Frye hearing.  39 Misc. 3d at 487.  While the Garcia court correctly pointed out that LCN DNA profiling had been admitted in 125 cases, this did not mean that it had been proven reliable in 125 Frye hearings.  The Illinois Supreme Court recognized a similar species of error in People v. McKown, where the lower appellate court relied on a prior appellate decision that “merely reaffirmed” a third court’s decision to admit horizontal gaze nystagmus (“HGN”) evidence, [3] which itself relied heavily on a fourth court’s Frye hearing.  875 N.E.2d 1029, 1037–38 (Ill. 2007).  The Illinois Supreme Court held that this reliance on precedent was improper, and a Frye hearing should have been held, where the evidence of general acceptance was not “unequivocal or undisputed.”  Id. at 1046.  As evidence of such a dispute, the Illinois Supreme Court cited both more recent opinions from other states’ courts refusing to admit HGN evidence, id. at 1041-46, and more recent scientific articles questioning the reliability of HGN evidence, id. at 1044-47. 

Furthermore, where a trial court does rely on the findings of a previous hearing, that court should be convinced that the hearing was fair and thorough.  “Especially in the early days of a scientific technique, imbalanced hearings are not uncommon.”  David H. Kaye, et al., The New Wigmore on Evidence, “Limiting Strict Scrutiny by Methodology,” § 9.5.1 (2018). 

A number of appellate courts in other states have similarly recognized that trial courts should not blindly accept scientific evidence based on its admission in a prior case.  See, e.g., Coble v. State, 330 S.W.3d 253, 276 n.56 (Tex. Crim. App. 2010) (“[C]ourts do not ‘grandfather in’ expert testimony in a particular field or by a particular witness simply because the court has admitted expert testimony in that field or by that witness in the past.”); Shanley, 919 N.E.2d at 1264 n.15 (“[W]e have not ‘grandfathered’ any particular theories or methods for all time, especially in areas where knowledge is evolving.”); Chesson v. Montgomery Mutual Ins. Co., 75 A.3d 932, 938 (Md. 2013) (“Even scientific techniques once considered to be generally accepted are excluded when subsequent scientific studies bring their reliability and validity into question and show a fundamental controversy within the relevant scientific community.”); State v. Ward, 694 S.E.2d 738, 746 (N.C. 2010) (“[T]he length of time a method has been employed does not necessarily heighten its reliability or alleviate our concerns.”); cf. State v. Hull, 788 N.W.2d 91, 103 n.3 (Minn. 2010) (noting that the “lengthy use of a method by law enforcement, and even lengthy unquestioning acceptance by courts, does not [by itself] exempt expert evidence from scrutiny under the first prong of Frye-Mack”) (alteration in original).

Published case law can provide a valuable tool in determining whether a scientific technique that was previously subjected to a Frye hearing is still generally accepted by the scientific community.  Courts, however, should be instructed not to lose sight of the fact that the Frye test emphasizes counting scientists’—not judges’—votes.  While published legal opinions may sit unaltered for decades, science is a field of ever-evolving developments.  Therefore, courts must be conscious of this fact and not simply admit evidence because previous courts have. 

In this case, none of the courts cited by the trial court had ever subjected the contested evidence to a Frye hearing.  The trial court itself acknowledged that FST has never been subject to a Frye hearing, even while relying on other courts’ blind acceptance of that technique to support its own determination.  People v. Foster-Bey, 158 A.D.3d 641, 641 (2d Dep’t 2018).  In addition, the Office of Chief Medical Examiner (“OCME”), which introduced the contested evidence, no longer uses LCN or FST.  See Lauren Kirchner, Traces of Crime: How New York’s DNA Techniques Became Tainted, N.Y. Times, Sept. 4, 2017, https://www.nytimes.com/2017/09/04/nyregion/dna-analysis-evidence-new-york-disputed-techniques.html. 

Thus, this case serves as a prime example of the dangers of a trial court deciding to use legal precedent as a substitute for its own determination of whether a technique is “generally accepted as reliable in the scientific community.”  Parker, 7 N.Y.3d at 449.  In light of the above, this Court should seize this opportunity to provide much-needed guidance on the appropriate use of precedent and under what circumstances reconsideration of previously-admitted scientific evidence is warranted under Frye.  As discussed in Section 4, infra, the adoption of Daubert principles, including encouraging courts to look to factors beyond “acceptability”—such as whether the technique in question can and has been tested, has been subjected to peer review, and has known or potential error rates—would further this objective.

III. This Court should provide guidance on the appropriate scope of the “relevant scientific community.”

In order to admit scientific evidence under Frye, a party must show that the evidence is generally accepted as reliable by the “relevant scientific community.”  As with “novelty,” this Court has not provided guidance to lower courts on how to interpret the “relevant scientific community” standard, leading to disparate and anomalous results.  Compare Garcia, 39 Misc. 3d at 487-88 (admitting testimony concerning LCN and FST based on studies conducted by the OCME despite the fact that it is the only government facility using these techniques), with People v. Collins, 49 Misc. 3d 595, 611, 613, 616, 618 (Sup. Ct. Kings Cty. 2015) (finding that the OCME’s research validating LCN and FST cannot establish general acceptance when no other laboratory uses these techniques as evidence in criminal cases).  Frye requires a careful balancing act when it comes to assessing the composition of the relevant community.  Because it is a general acceptance test, acceptance need not be universal; however, a court’s job must not end upon a finding of mere acceptance in the form of a single proponent of a technique. See, e.g., Cornell, 22 N.Y.3d at 783 (A showing that an expert’s opinion has “some support” is insufficient to establish general acceptance in the relevant scientific community). 

Here, the OCME, which developed the methodology in question, was the only entity in the United States to use the scientific techniques at issue to develop and analyze DNA profiles for criminal cases.  Collins, 49 Misc. 3d at 611, 618.  Moreover, the OCME was the only entity to use FST for any purpose.  Id.  Under these facts, it is likely that the trial court here did as Chief Judge Kaye cautioned against in Wesley: conflated the “judgment of the scientific community” with “the opinion of a few experts.”  Wesley, 83 N.Y.2d at 438 (Kaye, C.J., concurring). 

Despite this cautionary language, many courts have allowed small groups of experts who support a given technique to define themselves as the relevant scientific community, thus guaranteeing general acceptance.  One commentator referred to this practice as “gerrymandering” to create a majority by defining the scientific community “narrowly and uncritically.”  Peter W. Huber, Galileo's Revenge: Junk Science in the Courtroom 15 (1991).  For instance, in the case of now-discredited bite mark identification evidence, some courts have defined the relevant scientific community as the forensic dentists themselves, i.e., those whose careers are largely dependent on the validity of such evidence.  See, e.g., People v. Smith, 63 N.Y.2d 41, 63 (1984) (basing admission on the claim that the technique of comparing one photo of a bite mark to another was sufficiently reliable and had been “accepted by the scientific community,” comprised of prosecution and defense experts who together “acknowledged the reliability and acceptance of photographic comparisons”); People v. Slone, 76 Cal. App. 3d 611, 624-25 (Cal. Ct. App. 1978) (relying on testimony of three forensic odontologists which showed “bite-mark-identification technique had gained general acceptance in the scientific community of dentistry—the relevant scientific community involved”). 

By contrast, one federal court, considering the scientific validity of bite mark analysis in a civil suit against forensic dentists, brought by Benny Starks, who served over twenty years in prison after a bitemark “match” was used to convict him of a rape he did not commit, observed:

Starks argues that Dentist Defendants’ bite mark “analysis” was so far outside the norms of bite mark matching, such as they were in 1986, that it violated due process. For this assertion, Starks relies on the opinion of Senn, a forensic odontologist himself who has testified as a bite mark expert in many criminal cases. Doc. 313-20 at 48-50 (Senn’s CV). Eighty years ago, Upton Sinclair observed: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” Upton Sinclair, I, Candidate for Governor: And How I Got Licked 109 (Univ. of Calif. Press 1994) (1935). Illustrating Sinclair’s point, Senn opines not that bite mark matching is inherently unreliable, but only that Dentist Defendants made analytical errors and overstated their conclusions in Starks’s criminal case.

Starks v. City of Waukegan, 123 F. Supp. 3d 1036, 1051-52 (N.D. Ill. 2015) (internal citations omitted).  The court went on to cite the National Academy of Sciences’s (“NAS”) observation, also relevant here, that “[a]lthough the majority of forensic odontologists are satisfied that bite marks can demonstrate sufficient detail for positive identification, no scientific studies support this assessment.”  Id. (quoting National Academy of Sciences, Strengthening Forensic Science in the United States 176 (2009)) (emphasis in original).

A broader view of the relevant scientific community is particularly important where certain experts, like the dentists discussed in Starks, have a vested interest in promoting a particular technique.  “If the field is too narrowly defined,” Chief Judge Kaye observed, “the judgment of the scientific community will devolve into the opinion of a few experts.  The field must still include scientists who would be expected to be familiar with the particular use of the evidence at issue, however, whether through actual or theoretical research.”   Wesley, 83 N.Y.2d at 438-39 (Kaye, C.J., concurring) (citation omitted).  “A Frye court should be particularly cautious when . . . the supporting research is conducted by someone with a professional or commercial interest in the technique.”  Id. at 440.  Indeed, the court in the original Frye decision, in examining the admissibility of systolic blood pressure deception testing, did not limit its view of the relevant scientific community to polygraph experts, but rather held that the technique had not gained general acceptance because it had “not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony deduced from the discovery, development, and experiments thus far made.” Frye v. United States, 293 F. 1013, 1014 (D.C. Cir. 1923) (emphasis added).

Other jurisdictions have similarly recognized that courts should look to outside, disinterested experts with experience in the relevant field to confirm that such experts also accept those theories.  See, e.g., Sybers v. State, 841 So.2d 532, 543 (Fla. Dist. Ct. App. 2003) (“[S]uch assertions [that a technique is generally accepted] by experts who developed and performed the testing procedures are not, alone, sufficient.”); State ex rel. Collins v. Superior Court, 644 P.2d 1266, 1285 (Ariz. 1982) (“This requirement is not satisfied with testimony from a single expert or group of experts who personally believe the challenged procedure is accepted or is reliable.”); Reed v. State, 391 A.2d 364, 382 (Md. 1978) (“In general, members of the relevant scientific community will include those whose scientific background and training are sufficient to allow them to comprehend and understand the process and form a judgment about it.”); Contreras v. State, 718 P.2d 129, 135 (Alaska 1986) (“We define the relevant scientific community as the academic, scientific, and medical or health-care professions which have studied and/or utilized hypnosis for clinical, therapeutic, research and investigative applications.”).  As such, this Court should instruct lower courts that under Frye, the “relevant scientific community” must be sufficiently broad as to include disinterested scientists with experience in related fields.

Courts should be wary of defining the “relevant scientific community” too narrowly where experts have a professional or pecuniary interest not only in the validity of a methodology, but also in the ultimate disposition of the case.  For example, Judge Edwards has warned that a close relationship between forensic scientists and law enforcement administrators may “inhibit good science and ultimately adversely affect the credibility of the field.”  Harry T. Edwards, Solving the Problems That Plague the Forensic Science Community, 50 Jurimetrics 5, 15 (2009); see also PCAST Report at 13-14 (to avoid potential bias and other issues, scientific evaluations are “best carried out by a science-based agency that is not itself involved in the application of forensic science within the legal system”).  Indeed, the Supreme Court in Melendez-Diaz rejected the State of Massachusetts’s arguments that forensic science was inherently “neutral” or “reliable.”  Melendez-Diaz v. Massachusetts, 557 U.S. 305, 318 (2009) (“Because forensic scientists often are driven in their work by a need to answer a particular question related to the issues of a particular case, they sometimes face pressure to sacrifice appropriate methodology for the sake of expediency.”) (quoting National Academy of Sciences, Strengthening Forensic Science in the United States 23–24 (2009)); see also Edwards, supra, at 19 (characterizing Melendez-Diaz as “a not very subtle indictment of our existing forensic science system”).  This Court similarly recognized the potential biases of forensic DNA analysts in People v. John.  27 N.Y.3d 294, 311 (2016) (“We will not indulge in the science fiction that DNA evidence is merely machine-generated, a concept that reduces DNA testing to an automated exercise requiring no skill set or application of expertise or judgment.”).

Likewise, enhanced scrutiny in defining the “relevant scientific community” might be appropriate where the proffered evidence relies on scientific techniques that are so new that the scientific community has not had a chance to determine whether to accept them.  Wesley, 83 N.Y.2d at 439 (Kaye, C.J., concurring) (“[A]bsence of controversy reflected not the endorsement perceived by our colleagues, but the prematurity of admitting this evidence. Insufficient time had passed for competing points of view to emerge.”).  Because the Frye test “emphasizes counting scientists’ votes, rather than on verifying the soundness of a scientific conclusion,” courts cannot determine whether a novel technique is generally accepted before scientists have been given enough time and information to decide how to cast their votes.  Parker, 7 N.Y.3d at 447 (internal quotation marks omitted) (citing Wesley, 83 N.Y.2d at 439 (Kaye, C.J., concurring)).  That is not to say that recently introduced methodologies should be categorically rejected, but rather, that courts should recognize instances where “counting scientists’ votes” alone does not necessarily serve as an accurate proxy for reliability, such as where there has been insufficient time to establish a robust enough “community.”  

[omitted]

IV. New York’s interest in effectively scrutinizing scientific evidence for reliability could be aided by using the factors discussed in Daubert to conduct a Frye

[omitted]

CONCLUSION

For the reasons discussed herein, the time has come for this Court to speak on issues that have led to inconsistent and problematic results among courts when evaluating proffered scientific evidence under Frye.  This case provides an ideal opportunity for this Court to provide guidance that emphasizes the critical role of the courts in gatekeeping against evidence based on “novel” methodologies and hypotheses that have not yet been proven to be reliable, as well as previously accepted methodologies and hypotheses that have since been discredited by new scientific developments.  Incorporating the principles underlying the Daubert standard, which has been embraced by 40 states, would be a particularly effective tool to help courts bring Frye into the future.  Only with such guidance will courts achieve greater effectiveness, consistency, and fairness in their treatment of the admissibility of scientific evidence.     

 

Dated:         [, 2019]
                   New York, NY

 

                                                          Respectfully submitted,

 

                                                          ________________________________

                                               

 

Konrad Cailteux
Carolyn R. Davis
Weil, Gotshal & Manges LLP
767 Fifth Avenue
New York, NY 10153

M. Chris Fabricant
Innocence Project, Inc.
40 Worth Street, Suite 701
New York, NY 10013

Counsel for Proposed Amicus Curiae The Innocence Project

 

 

 

CERTIFICATION

 

          I certify pursuant to §500.13(c) of the Rules of Practice of this Court that the total word count for all printed text in the body of the brief is 6,936 words.

 

Dated:         [, 2019]
                   New York, NY

 

                                                Respectfully submitted,

 

                                                          ___________________________________

                                                By:    Konrad Cailteux

 

 

[1] For this reason, the Colorado Supreme Court in Shreck rejected the Frye standard in favor of Colorado Rule of Evidence 702, which emphasizes “reliability and relevance of the scientific evidence.”  Id. at 78.  Other state supreme courts have also criticized Frye’s novelty test in adopting Daubert.  See, e.g., State v. Coon, 974 P.2d 386, 397–98 (Alaska 1999) (finding no advantage in Frye’s limitation to “novel” scientific evidence over Daubert’s application to “all scientific knowledge”), abrogated on other grounds by State v. Sharpe, 435 P.3d 887, 899 (Alaska 2019).

[2] Courts should also be wary of experts overstating the accuracy of otherwise reliable methods.  For example, Professor Lander describes how fingerprint identifications, once described by the Department of Justice as “infallible,” have since been shown to have a real-world error rate as high as 1 in 24.  Lander, Fixing Rule 702, 86 Fordham L. Rev. at 1669–71.

[3] A type of field sobriety test that measures a subject’s eye movements as they track an object moving side to side.

Motion to Exclude Firearms Testimony Under Frye

You can also download a copy of this pleading from our Moodle course page.

IN THE CIRCUIT COURT OF COOK COUNTY

PEOPLE OF THE STATE OF ILLINOIS    )

                                   )           14 CR 16514-01

               v.                  )

                                   )           JUDGE MALDONADO

[----]                             )           PRESIDING

 

MOTION TO EXCLUDE FIREARMS EXAMINATION OPINION TESTIMONY

 

NOW COMES the Defendant, [---], by his attorney Amy P. Campanelli, Cook County Public Defender, through her assistants Richard E. Gutierrez and Roger Warner, and brings this motion to bar testimony regarding firearms examination evidence. The Defendant is charged with murder for allegedly shooting Summer Moore. Officers of the Chicago Police Department recovered multiple spent cartridge cases at the scene of the purported shooting as well as a bullet from the body of the victim. The Defense expects that the State will attempt at trial to elicit opinions from a firearms examiner (Marc G. Pomerance) in order to link a later-recovered gun to the spent cartridges and bullet associated with Moore’s shooting. But such evidence, the Defense contends, possesses neither the general acceptance necessary to warrant admission under Frye, nor the reliability required to avoid the strictures of Rule 403. In support thereof, the Defendant asserts the following:

  I. INTRODUCTION

Firearms examination, in contrast to its high profile in criminal courts, has long benefitted from an existence in the shadow lands of science, where its pairing of grandiose claims and questionable tenets could escape unnoticed and uncritiqued. But in recent years, notorious misidentifications and wrongful convictions attributable to firearms examination and similar pattern-matching fields have at last compelled the broader scientific community to shed light on the suspect approaches of its forensic kin. The results have been sobering, to say the least. For example, the National Academy of Sciences (“NAS”) unequivocally and scathingly concluded that “no forensic method [besides DNA analysis] has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source,”[1] and the President’s Council of Advisors on Science and Technology (“PCAST”) even more recently and bitingly determined that firearms examination “falls short of the scientific criteria for foundational validity.”[2] Thus, while little more than a decade ago acquiescence to the claims of the State’s expert would have met with few dissenters, at present it would fly in the face of a no-longer-silent majority of credentialed researchers and scientists.

Yet despite these admonitions by supremely distinguished panels of scientists and law enforcement professionals, firearms examiners (members of a field without the benefit of sufficient scientific research validating its assumption of the uniqueness of fired bullets or spent cartridges, or affirming the reliability of its practitioners) have buried their heads in the sand and continue to report conclusions in nigh absolute terms, with only token recognition of the potential for error stemming from the subjectivity of the practice and its lack of defined standards, the specter of cognitive bias, and the increasing uniformity of firearm components due to the evolution of manufacturing methods. This Court, however, should not follow the same misguided path by ignoring the criticisms of the field, emanating as they do from the highest scientific authorities organized by the federal government as well as from experts versed in the very disciplines (metrology and metallurgy/materials engineering) responsible for spawning the merely-applied practice of firearms examination. Instead, accounting for the broad consensus of experts positioned against the discipline and the substance of their critiques, firearms examination cannot be said to enjoy widespread scientific acceptance, nor to possess reliability sufficient to overcome the prejudicial and overblown statements of its practitioners. As such, this Court should exclude the subjective opinions of the State’s firearms examiner under Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), or alternatively, Illinois Rule of Evidence 403.

  II. FIREARMS EXAMINATION INVOLVES THE SUBJECTIVE ASSESSMENT OF MARKS DEPOSITED ON FIRED BULLETS & CARTRIDGE CASES.

Firearms examination at least begins with the scientifically sound premise that the inner workings of guns, made from hard metals, may transfer their own markings to the softer metal of bullets and cartridges.[3] In other words, and without detailing ad nauseam the firing process itself: when bullets are propelled forward through a barrel they may take on the inverse of the lands and grooves (respectively peaks and valleys) of its rifling as well as the imperfections/scratches (striations or striae) within those lands and grooves; cartridges may in turn be marked by the surfaces they impact, such as a gun's breech face and firing pin.[4] [The images below depict barrel rifling, the surfaces that may mark cartridges, and the placement of components within a gun.][5]

It is not the existence of such marks, however, with which this motion takes issue, but instead the methodology that firearms examiners use to derive meaning from whatever marks they may observe. Specifically, firearms examiners are generally asked to determine either (1) whether multiple recovered bullets or cartridges match (i.e. do the markings on the projectiles indicate that they were fired from the same gun), or (2) whether a recovered cartridge or bullet was fired from a specific recovered gun---in the latter instance examiners test fire the gun into a water tank and use the bullet or cartridge from that test fire for comparison.[6] And examiners have not updated their approach to answering those questions over the last 80 years: they use a comparison microscope to view two bullets or cartridges side by side, and make a determination based on the correspondence or lack thereof of the markings that they observe.[7] Said markings are divided into three categories: (1) class characteristics are the features predetermined by a manufacturer (and thus common to all guns of certain makes and models) such as the number of lands and grooves or the shape of a firing pin; (2) subclass characteristics are microscopic marks left behind by imperfections in gun parts and thus incidental to manufacture, but that are carried over and shared by multiple guns from the same batch; and finally (3) individual characteristics are marks produced by random irregularities of gun surfaces, which firearms examiners believe (without justification) are unique to each gun.[8]

            The Association of Firearm and Toolmark Examiners (AFTE), a trade organization whose membership is comprised exclusively of firearms examiners,[9] has established the ultimate range of conclusions for the discipline, and permits examiners to declare an identification (in other words, a match) if they observe “sufficient agreement” between the individual characteristics of the bullets or cartridges they are comparing.[10] The definition offered for that vague term, however, scarcely clears things up, as AFTE describes the standard only by noting that agreement is sufficient when “it exceeds the best agreement demonstrated between toolmarks known to have been produced by different tools and is consistent with agreement demonstrated by toolmarks known to have been produced by the same tool.”[11] To boil things down, examiners may conclude that bullets or cartridges match when they look like a match.

Not surprisingly, AFTE admits even in its Theory of Identification that “the interpretation of individualization/identification is subjective in nature…and based on the examiner’s training and experience,”[12] meaning that “there will be some difference between examiners as to what constitutes the best-known non-match situation.”[13]  Nevertheless, AFTE still manages to claim that when examiners encounter sufficient agreement, that “means that the likelihood that another tool could have made the mark is so remote as to be considered a practical impossibility.”[14] But in contrast to AFTE’s unabashed self-confidence, this motion will demonstrate that “[s]ubjective methods [like firearms examination] require particularly careful scrutiny because their heavy reliance on human judgment means they are especially vulnerable to human error, inconsistency across examiners, and cognitive bias,”[15] and can only be evaluated as scientifically acceptable if vetted by multiple, appropriately designed, empirical studies of examiner reliability,[16] studies conspicuously absent at the base of the field of firearms examination.[17]

  1. SCIENTIFIC AUTHORITIES HAVE ROUNDLY REJECTED FIREARMS EXAMINATION AS UNVALIDATED & BEREFT OF EMPIRICAL FOUNDATION.

Legitimate scientists have always understood that “valid scientific knowledge can only be gained through empirical testing of specific propositions.”[18] And although firearms examination (despite its longstanding use in courts) never developed such a foundation, it was not until the last few years that a harmonious and powerful consensus of scientific voices emerged to make unequivocally clear its doubts about the discipline. Among the most influential of such voices, the National Academy of Sciences and its operating agency, the National Research Council, have twice joined the fray to chastise the field for its exaggerated claims, de minimis research, and vague, tautological standards.[19] This Court should accept these reports as authoritative. Not only have they been cited as such by the United States Supreme Court and other judges across the country,[20] but the mission and history of the NAS, which stands as the “leading scientific advisory body established by the Legislative Branch,”[21] ought to afford it ample reverence given that it has been tasked by Congress since the days of Abraham Lincoln “with providing independent, objective advice to the nation on matters related to science and technology” and has produced landscape-shifting studies of the forensic sciences (including the use of coroners’ offices, DNA statistics, and the shortcomings of bullet-lead analysis) since the 1920s.[22]

The NAS first approached the foundation of firearms examination when it set out (staffed by engineers, metallurgists, materials scientists, and others, as well as in consultation with firearms examiners) to evaluate the feasibility of operating a federal database of bullet and cartridge case images.[23] To do so it needed to study the underlying premises of any such database---the uniqueness and evidentiary value of bullet and cartridge case markings themselves---which it did through tireless review (eventually captured in over 80 pages of analysis) of a significant portion of the literature in the field of firearms examination, visits to manufacturing plants, and presentations from practitioners.[24] And though tasked to avoid the question of admissibility with regard to firearms evidence,[25] what the NAS discovered fell so short of valid science that the panel was compelled to render several findings nevertheless.[26]

Specifically, it emphasized that “the validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated,” and accordingly called for significant research to place even the basic premises of firearms examination on “solid scientific footing.”[27] Such work, the NAS conceded, would be arduous, necessitating “a designed program of experiments covering a wide range of sources of variability” while paying “careful attention to statistical experimental design issues, as well as intensive work on the underlying physics, engineering, and metallurgy of firearms.”[28] But because acceptable science, and derivatively acceptable testimony, requires a foundation of “established error rates” among other indicia of validity, the NAS viewed said research as “essential to the long-term viability” of firearms examination.[29]

Moreover, another panel of the NAS echoed those conclusions in a 300-page, meticulously researched report published one year later. On this second go-round Congress directly authorized the NAS to investigate the status of several forensic science disciplines based on the recognition that “significant improvements are needed in forensic science.”[30] To that end, the NAS formed a team of acclaimed scientists, legal minds, and forensic specialists who for two years heard testimony from practitioners (including firearms examiners) and tirelessly “considered the peer-reviewed, scientific research purporting to support the validity and reliability of existing forensic disciplines.”[31] Ultimately, its authors reached unanimity with regard to the deficiencies of forensic identification (and especially pattern matching) approaches,[32] describing such methodologies as more akin to rough heuristics than validated science,[33] and noting in broad strokes that, as mentioned above, “no forensic method [other than DNA] has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.”[34]

The NAS report also pulled no punches when discussing firearms examination specifically. After adopting and incorporating the conclusions of the 2008 NAS panel discussed above, the report expressed concern that despite the “challenging” nature of distinguishing between marks left by the same or different firearms/tools, “the decision of the toolmark examiner remains a subjective decision based on unarticulated standards and no statistical foundation for estimation of error rates.”[35] Nor could the NAS discern any standards sufficient to guide examiners in that endeavor, noting that “a fundamental problem with toolmark and firearm analysis is the lack of a precisely defined process,” and criticizing the AFTE Theory of Identification for failing to “provide a specific protocol,” and “not even consider[ing], much less address[ing], questions regarding variability, reliability, repeatability, or the number of correlations needed to achieve a given degree of confidence.”[36] And, as to the research that could help flesh out such protocols, the NAS report could say only that (1) “sufficient studies have not been done to understand the reliability and repeatability of the methods,” and (2) “the scientific knowledge base for toolmark and firearms analysis is fairly limited.”[37] Thus its conclusion at bottom: firearms examination evidence lacks “any meaningful scientific validation, determination of error rates, or reliability testing to explain the limits of the discipline.”[38] In keeping with that rejection of the discipline, NAS allowed that examiners are capable of the fairly simple task of narrowing the pool of possible firearms matches using class characteristics, but did not evaluate the discipline and its methodology (in contrast to AFTE’s claims) as able to consistently link bullets or cartridges to a particular source.[39]

NAS, however, is but one member of an expansive coalition of the discontent, composed of academics and practitioners alike, who view firearms examination with unmitigated skepticism and consider its claims as, at best, “plausible” but more realistically as “under researched, and oversold.”[40] In fact, article after article has appeared in the world’s preeminent scientific journals bemoaning the lack of research underlying firearms examination (and other forensic identification and pattern matching fields), the discipline’s lack of rigor, its failure to abide by any of the hallmarks associated with the very practice of science, and the overblown conclusions made by its practitioners.[41] Even the editorial board of Nature found “a disturbing degree of methodological sloppiness… [and] a poor empirical basis for estimating error rates.”[42] And statisticians (a group vital to the appropriate design of research studies and thus to any analysis of whether a discipline can lay claim to demonstrated validity) have widely endorsed the NAS reports and called for greater rigor in the design of experiments, increased transparency, and well-supported analysis and reporting of error rates.[43] Finally, scholars at the intersection of law and science have laid out the same concerns, and accordingly suggested outright exclusion of firearms examination testimony.[44]

Nor have such admonishments been voiced merely by academics. Rather, forensic professionals admit that identification and pattern matching disciplines (like firearms examination) have “historically been troubled by a serious deficiency in that a heterogeneous assemblage of technical procedures … have frequently been submitted for basic theory and principles.”[45] And they emphasize that firearms examination “has always suffered from the fact that the examination of these types of evidence is highly subjective, and cannot fall back upon a body of independently-derived scientific knowledge … Despite three quarters of a century, no systematic and comprehensive attempt to codify standards for a minimum toolmark or firearms match has been published.”[46] Forensic self-critics have also included a past president of the American Academy of Forensic Sciences (who called the field of firearms examination wholly unvalidated, going so far as to suggest that threshold studies of the field’s underlying foundations might result in a finding that the entire field has always been invalid)[47] as well as a coalition of thirteen diverse authors who came together to support the recommendations of the NAS and issue a call for greater focus on the empirical underpinnings of pattern matching disciplines.[48]

Perhaps most pointedly, William Tobin (a materials scientist and retired FBI metallurgist) has railed against the discipline of firearms examination for (1) its “inherently vague and tautological” theory of identification, (2) its failure to conduct appropriate experiments to test its underlying foundations, (3) the lack of understanding by practitioners of the manufacturing processes so central to the production of toolmarks, (4) the absence of adequate proficiency testing, and finally (5) the unjustified grandeur of its conclusions.[49] And given his background, Mr. Tobin is ideally positioned to comment on the conclusions and methods of firearms examiners, because while those practitioners may possess an understanding of “the general manufacturing process for firearms,” the characteristics upon which they rely “are generated by a variety of metallurgical processes and entail complex tribological and microstructural (including atomic) interactions that can, and most often do, vary from product to product, and even from production lot to production lot.”[50] Thus firearms examiners, according to Tobin, should have (but never have) engaged with members of the field of metallurgy/materials science, “the most relevant true scientific domain … that understands the tribology of the manufacturing processes and their specific seminal effects on firearm components.”[51] The discipline’s failure to do so, coupled with its inability to “incorporate effective statistical methods,” ultimately leaves it bereft of “every critical cornerstone of the scientific method.”[52]

The same sentiments above, moreover, now echo through the various regulatory agencies for forensic science established in the wake of the NAS report. For example, the members of the Firearms and Toolmark subcommittee of the Organization of Scientific Area Committees (OSAC)[53] recently acknowledged the complete absence of appropriate studies concerning the reliability of firearms examination or the ability of examiners to characterize relevant markings, and described said absence as a “major gap” in understanding regarding the discipline.[54] Moreover, the National Commission on Forensic Science published a views document noting that “the underlying foundation of [a forensic discipline] and associated testimony must be supported by sound research that meets the standards of forensic practitioners, academic researchers, measurement scientists, and statisticians.”[55] Although the NCFS could not itself conduct such a review of firearms examination in particular, it identified “a clear and compelling need to address the technical merit of forensic science,” and proceeded to pass the torch by opining that “all forensic science methodologies should be evaluated by an independent scientific body to characterize their capabilities and limitations.”[56] That call (made explicitly by NCFS and so many of the other authorities cited throughout this section) has now been answered by PCAST, and firearms examination has been found wanting.

  2. AFTER AN IMMENSELY DETAILED REVIEW OF FIREARMS EXAMINATION, THE PCAST DETERMINED THAT THE DISCIPLINE LACKS EVEN BASIC FOUNDATIONAL VALIDITY.


Despite the authority and scope of its critique of the forensic sciences, NAS readily admitted that, in part because of that expansive scope, it had “decided early in its work that it would not be feasible to develop a detailed evaluation of each discipline in terms of its scientific underpinning.”[57] As a result, AFTE ignored its comments and actively awaited the day when an independent body would evaluate the full breadth of its literature.[58] But that day has come, because the same concern motivated PCAST, in response to then-President Obama’s request to identify and provide insight regarding lingering deficiencies in forensics, to highlight two important gaps in scientific understanding regarding pattern matching disciplines: “(1) the need for clarity about the scientific standards for the validity and reliability of forensic methods and (2) the need to evaluate specific forensic methods to determine whether they have been scientifically established to be valid and reliable.”[59] Its resulting review went further than any before it by (1) compiling and reviewing over 2000 papers (more than 400 of which were specific to firearms examination); (2) consulting a diverse group of forensic scientists and practitioners (including those working with the FBI and the National Institute of Standards and Technology) as well as judges, attorneys, statisticians, and academic researchers; and (3) soliciting statements and bibliographies from all corners of the practitioner community.[60]

And this Court, as with the NAS, should consider the PCAST’s ultimate conclusions authoritative. Not only is PCAST “the leading scientific advisory body established by the Executive Branch,”[61] but the Obama-era iteration of the PCAST consisted primarily of some of our nation’s leading and most respected scientists, including: a geneticist from MIT/Harvard who was the principal contributor in efforts to map the human genome, an engineer and Vice President of the National Academy of Engineering, a mathematician and former CEO of The Aerospace Corporation, a doctor who was the first female president of the American College of Physicians, a chemist who directs the Institute for Nanotechnology at Northwestern University, the director of The Laboratory for Geochemical Oceanography at Harvard University, a doctor of biochemistry and professor emeritus at the University of California Berkeley, and a physicist who is a Senior Vice President at a leading aerospace and technology corporation (to name but a few).[62] For several decades, the PCAST has reported to the then-sitting U.S. President on a wide range of scientific issues, including, but not limited to, nanotechnology, internet broadband development, cloning, and the uses of science and technology to combat terrorism.[63] In short, the PCAST represents one of the most important and authoritative collections of scientists in the country. And its final report on the pattern matching disciplines has, since its publication, been endorsed by the nation’s most prestigious forensic body (the American Academy of Forensic Sciences),[64] an international consortium of forensic experts,[65] and Judge Alex Kozinski of the United States Court of Appeals for the Ninth Circuit, who went so far as to say that the report “will fundamentally change the way many criminal trials are conducted” and “will likely upend many people’s beliefs” about once-trusted forensic disciplines.[66]

Turning to the substance of the report, PCAST viewed its primary mission as providing courts with an understanding of the “scientific standards for scientific validity” based on “the fundamental principles of the ‘scientific method’—applicable throughout science,” and more specifically the standards of metrology (the science of measurement), from which all pattern recognition disciplines derive.[67] But even as it began its work, PCAST had already identified numerous areas of concern, noting specifically (1) that many wrongful convictions discovered only through DNA testing and attributable to faulty forensic testimony “reflected a systemic problem—the testimony was based on methods and included claims of accuracy that were cloaked in purported scientific respectability but actually had never been subjected to meaningful scientific scrutiny,”[68] (2) “the historical reality that many methods were devised as rough heuristics to aid criminal investigations and were not grounded in the validation practices of scientific research,”[69] and (3) that “subjective methods require particularly careful scrutiny because their heavy reliance on human judgment means they are especially vulnerable to human error, inconsistency across examiners, and cognitive bias.”[70] Nevertheless, it did not simply dismiss pattern recognition disciplines outright based on that checkered record, but instead carefully explained that, because subjective methods like firearms examination rely on the skill of their practitioners, “without appropriate estimates of accuracy, an examiner’s statement that two samples are similar—or even indistinguishable—is scientifically meaningless.”[71] Thus based on general scientific standards: “Since the black box in the examiner’s head cannot be examined directly for its foundational basis in science, the foundational validity of subjective methods can be established only through empirical studies of examiner’s performance.”[72]

Accordingly, for a discipline to qualify as foundationally valid, and therefore as worthy of scientific acceptance and legal admissibility, it would need at its base multiple black box studies that (1) are double blind (“neither the examiner nor those with whom the examiner interacts have any information about the correct answer”); (2) are “overseen by individuals or organizations that have no stake in the outcome of the studies”; and (3) “involve a sufficiently large number of examiners and [are] based on sufficiently large collections of known and representative samples from relevant populations to reflect the range of features or combinations of features that will occur in the application.”[73] Any divergent claims to validity, PCAST highlighted, would run “contrary to the fundamental principle of scientific validity in metrology—namely, that the claim that two objects have been compared and found to have the same property … is meaningless without quantitative information about the reliability of the comparison process.”[74] And although these standards might appear taxing, such was simply not the case in practice: the NAS subjected fingerprint comparison, for example, to the same criticisms as firearms examination,[75] but because that discipline responded appropriately to those criticisms with empirical research, PCAST evaluated the field of fingerprint comparison as possessing foundational validity on the basis of merely two adequate studies.[76]

Firearms examination, in contrast, did not pass muster even under PCAST’s unexacting standards. As an initial matter, PCAST noted that the AFTE Theory of Identification (along with the methodology of firearms examination more generally) is “circular,” and thus the discipline benefits from no rigorous or objective criteria.[77] And although some studies promulgated by the discipline would seem to indicate that, despite that lack of guidance, “examiners can, under some circumstances, associate ammunition with the gun from which it was fired,” those industry-funded and industry-implemented projects “involved designs that are not appropriate for assessing the scientific validity or estimating the reliability of the method as practiced … because of their design, many frequently cited studies seriously underestimate the false positive rate.”[78] In fact, the director of a leading forensic research institute analogized the design of such studies to “a ‘Sudoku’ puzzle, where initial answers can be used to help fill in subsequent answers.”[79] And because those efforts utterly failed to replicate casework by providing examiners with simplistic problems, PCAST discounted them, instead concluding that “there is only a single study that was appropriately designed to test foundational validity and estimate reliability,”[80] specifically one recently conducted by the independent AMES laboratory (which, by the way, discovered that 2% of examiners registered a disturbing misidentification rate of 40%).[81] As a result, it ultimately concluded that:

“The scientific criteria for foundational validity require appropriately designed studies by more than one group to ensure reproducibility…the current evidence [for firearms examination] falls short of the scientific criteria for foundational validity. There is thus a need for additional, appropriately designed black-box studies to provide estimates of reliability.”[82]


Worse still, the AMES study accomplishes far less for the discipline than any endorsement of its general design might indicate. Even PCAST noted concern because the article has never been published and thus never subjected to peer review. But putting that issue aside, the false positive rate the study uncovered for examiners on the whole corresponds to an upper bound estimate of error in one of every 46 cases.[83] And that is without even mentioning that the samples involved simply did not reflect the difficulty of casework. Specifically, the study focused on cartridge cases fired from Ruger SR9 semiautomatics, a gun chosen precisely because the marks it produces are of average difficulty to interpret.[84] But recall that PCAST’s criteria require samples “representative of the quality of evidentiary samples seen in real cases. (For example…for distorted, partial, latent fingerprints; the random match probability for full scanned fingerprints, or even very high quality latent prints would not be relevant.)”[85] And by focusing on only cartridges of average difficulty, the AMES study could not hope to validate, or provide an idea of examiner reliability, when comparing bullets or more trying cartridge samples—like ones that have been consecutively manufactured or feature subclass characteristics (as could well have been the case in this matter).[86] The actual and applicable error rates for the discipline may therefore far outstrip that reported by the AMES study, a reality that would further undercut the field’s already wanting claim to validity and admissibility.

  3. FIREARMS EXAMINERS ARE IN NO POSITION TO DISPUTE THE CRITICISMS OF THEIR FIELD.


            The approach of the field of firearms examination to all the above criticisms, including even the PCAST report, has been to bury its collective head in the sand rather than strike out on a path of reform and confront said criticism with new and appropriately designed research studies.[87] In fact, at all levels, the discipline has even continued to express conclusions to a practical certainty,[88] an approach universally decried as scientifically indefensible and ludicrously devoid of support.[89] And those decisions should concern the Court, first because, as practitioners of a merely applied science, firearms examiners are (by even the admission of other pattern matching specialists) simply not as qualified as the authorities cited above to design and conduct appropriate validation studies, or to opine more generally on the reliability of their profession.[90] But more than that, because: (1) conscientious firearms examiners admit that their discipline lacks appropriate research and must therefore rely on tenets---the purported uniqueness of gun parts, as well as training and experience---that cannot serve scientifically as a means of validating the field, (2) the field has yet to grapple convincingly with the greatest challenges to reliability---coincidental similarity and subclass marks---facing examiners, and (3) unacceptable rates of error and misidentification have long plagued the discipline. These deficiencies, alone and especially when taken together, serve as ample proof that PCAST and other critics of firearms examination got it right; firearms examination has not yet been adequately validated, and might be incapable of ever demonstrating sufficient reliability.

  • Firearms Examiners Essentially Admit That Their Field Lacks Validation.

Firearms examination practitioners acknowledge that, although every meaningful conclusion of “match” has at its foundation a calculation of probability,[91] given the absence of appropriate or varied studies they simply lack the data to provide any adequate measure of that foundation.[92] That leaves practitioners only to guess at the likelihood that another firearm might be responsible for the marks they observe, to make essentially a “leap of faith”[93] based on nothing more than intuited (and altogether unjustified) feelings regarding uniqueness and the value of experience. And that reality is borne out by the history of research and self-reflection available for firearms examination.

It may be true that the discipline has been a fixture of criminal proceedings since the beginning of the 20th century, but reviewing the field’s allegedly scientific literature in 1997, its preeminent defender, Ronald Nichols, found little reason to support the trust and reliance long bestowed on firearms examination by the courts. He was forced to concede that most of the field’s research had been “very subjective in nature” and conducted in a manner only “analogous to” rather than in accordance with the scientific method.[94] Moreover, he admitted that “the most exhaustive, statistical empirical study ever published” in support of firearms examination dated back to 1959.[95] Troubling as it might seem that firearms examination had not conducted any scientific introspection since the United States numbered only 48 states, Eisenhower was president, and color television was a scarcely utilized medium, Nichols’s admission should further distress the Court given that the 1959 article’s author, Biasotti, held a much humbler view of the value of his own study (which, by the way, involved consideration of only one firearm model, .38 Special Smith & Wesson revolvers, and only analyzed fired bullets, not cartridges). Specifically, he began his article by noting the “almost complete lack of factual and statistical data pertaining to the problem of establishing identity in the field of firearms examination,” but would say of his contribution to remedying that error only that “much more factual (statistical) data must be collected before any general verifiable laws can be formulated or before the data reported in this study can attain any real measure of practical significance.”[96]

Unfortunately, Biasotti’s calls to study and reflection would not come to fruition, and years later he would again lament that his chosen field of firearms examination was anything but “a highly developed science with well-defined criteria for evidence evaluation.”[97] Instead, Biasotti noted “a very superficial treatment of the basic problem of evaluating results and establishing identity,” and described firearms examination as “essentially an art limited by the intuitive ability of individual practitioners.”[98] Moreover, what studies have since been performed (mostly “10-gun” studies of consecutively manufactured tools) have been roundly lampooned not just by PCAST but also by a host of other outside scientists and firearms examiners alike for (among an even more extensive list of grievances) their gross lack of objectivity, minimal consideration of manufacturing variables, inappropriate design, minuscule sample sizes, and consideration of only a small fraction of gun makes and models.[99] Thus, even as late as 2012, little had changed: firearms examiners could still point to “only a few numerically based studies” and admitted that their discipline “lacked scientific, statistical proof that would independently corroborate conclusions” of examiners performing casework.[100]

But perhaps nothing more perfectly captures the cavalier and unscientific bent of the discipline of firearms examination than the venue in which it has disseminated the overwhelming majority of its “research.”[101] This Court will surely notice myriad citations throughout this motion to the AFTE Journal. That periodical houses the vast majority of published research on firearms examination (although most such “research” consists of little more than two or three page anecdotes about casework). But it effectively stymies outside review of firearms examination because it wholly fails to meet professional standards regarding the publication of scientific literature,[102] only recently became accessible to the scientific and legal communities (formerly only firearms examiners could peruse the journal online), cannot be downloaded via most academic databases (e.g., PubMed), and is available in print at only a handful of libraries throughout the United States.[103] Moreover, while credible journals like the Journal of Forensic Science offer non-subscribers the opportunity to purchase all back issues for a reasonable price ($695),[104] AFTE allows purchase of its journal only on an article-by-article basis, necessitating the spending of thousands of dollars to obtain all its back issues. In fact, the Public Defender managed to obtain copies of the AFTE Journal (available publicly nowhere in Chicago) only by the fortunate happenstance of one assistant’s presence at a forensics conference in New York City (where the library of the John Jay College of Criminal Justice houses print copies); access nevertheless came at the cost of three full workdays’ worth of ceaseless scanning.[105] It should come as no surprise, then, that both academics and forensic practitioners have concluded that the AFTE Journal, and thus the research contained within, wholly lack scientific indicia of trustworthiness.[106]

To attempt to claim credibility in the face of such a glaring deficiency in research, AFTE and others within the discipline have doubled down on their belief that the markings on fired bullets and cartridges are unique (and thus easily discerned and sorted by trained examiners), and pointed to the fact that the field has “stood the test of time.”[107] But the concept of uniqueness is not particularly relevant to the reliability of firearms examination,[108] and at all events is scientifically indefensible; at a minimum, it could never be proven through sampling (i.e., through failing to observe identical pairings during any period of casework).[109] In some sense, however, the discipline’s logical failure is expected. All human beings (including, it would seem, forensic professionals) are tempted to blindly accept the concept of uniqueness because “duplication is inconceivable to the rational mind.”[110] Unfortunately, our assumptions mislead us: despite the supposed truism that all snowflakes are unique, visually identical examples have been discovered,[111] and even in firearms examination (paradoxically, considering the discipline’s commitment to the concept of uniqueness) examiners know that “each element of a firearm’s signature may be found in the signature of other firearms.”[112] Thus, because “the concept of uniqueness has more the qualities of a cultural meme than a scientific fact,”[113] firearms examiners have in fact conceded that “the erroneous conception of the ‘perfect match’ … is actually only a theoretical possibility and a practical impossibility.”[114]

Yet the assumption of uniqueness, “although lacking theoretical or empirical foundations,” perseveres in the field of firearms examination, perhaps because “it offers important practical benefits” to the discipline (although not to the accused), as one expert explains:

“It enables forensic scientists to draw bold, definitive conclusions that can make or break cases. It excuses the forensic sciences from developing measures of object attributes, collecting population data on the frequencies of variations in those attributes, testing attribute independence, or calculating and explaining the probability that different objects share a common set of observable attributes. Without the discernable uniqueness assumption, far more scientific work would be needed, and criminalists would need to offer more tempered opinions.”[115]

 

Having so long relied on the broken notion of uniqueness, however, examiners can now only cite to casework and experience (rather than quantification) as proof of their abilities. “But whatever the courts' intuitive confidence in [firearms examination], ‘implicit testing’ and ‘casework validation’ set poor precedents that defy science and logic.”[116] At bottom, scientists have always been suspicious of claims grounded on longstanding use alone[117] (perhaps given their experience with long-held beliefs in a flat world or the longstanding use of such medieval methods as bloodletting), and forensic use of the concept should meet with equal disapproval.[118]

Initially, belief in the validating power of casework relies on the absurd “assumption that every examiner remembers the details of every object ever examined, and even if only subconsciously, they have then ‘compared’ all of the objects they happened to examine with one another. Such a proposition is highly dubious, and relies on claims and observations that have never been recorded nor compiled in a systematic manner.”[119] And it requires as its foundation the untenable notion that every examination comprising the test-of-time has been correct: “Because the ground truth is not known in casework, a case cannot serve as a test of the accuracy of a forensic assay used in it.”[120] But, because of the insufficiency of sampling as proof already noted, casework fails as a pillar of support for firearms examination even if we assume that practitioners never err and possess superhuman powers of recollection. One readily understandable example posits that “a truly random sample of a large number of human beings may indicate that none of them have the same mother; but we know that to conclude that not one person on Earth shares the same mother defies common sense.”[121] And specific to the firearms context, one expert has formulated an even more biting attack on the discipline’s belief in the power of experience to justify its methodology:

suppose that exactly 100 pairs of firearms out of an estimated 100,000 guns in a Texas town share indistinguishable gun barrel markings. If each of 100 firearms experts examined 10 pairs of guns from the town's gun population every day for 10 years (n=3,650,000 gun pairs), there is about a 93% chance that none of the indistinguishable pairs will have come under examination. That is, despite 1,000 “collective years” of forensic science experience (100 experts multiplied by 10 years), the failure to find even a single pair of guns with indistinguishable markings would offer little basis for drawing conclusions about whether gun barrel markings, even in this single town, are unique.[122]
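The arithmetic behind the quoted 93% figure can be checked directly. The following sketch, offered purely for illustration (the variable names and the independence approximation are ours, not the quoted expert’s), reproduces the calculation from the figures given in the hypothetical:

```python
from math import comb

# Hypothetical figures taken from the quoted example:
guns = 100_000                    # guns in the town
special_pairs = 100               # pairs with indistinguishable markings
experts, pairs_per_day, years = 100, 10, 10

total_pairs = comb(guns, 2)       # all possible gun pairings (~5.0 billion)
examined = experts * pairs_per_day * 365 * years   # 3,650,000 pairs examined

# Chance that a single examined pair is NOT one of the 100 special pairs,
# compounded over every special pair (independence approximation):
p_miss_all = (1 - examined / total_pairs) ** special_pairs
print(f"{p_miss_all:.2%}")        # roughly 93%
```

Even 3.65 million examinations cover less than a tenth of a percent of the roughly five billion possible gun pairings, which is why a decade of collective casework would so likely miss every one of the hundred indistinguishable pairs.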

 

As Tobin notes, “this is a striking and counterintuitive example of the folly of individual, or even collective, training and experience as basis for validation.”[123] It should compel this Court to link admissibility to real scientific research, and should dispel any feeling of comfort in the platitudes regarding years of experience and thousands of cases worked so often peddled by examiners.[124]

  • Significant Pitfalls Confronting Firearms Examiners During Casework Undercut the Discipline’s Assertions of Reliability.

 

            And so we find firearms examiners, without standards to chart their course or research to keep them afloat, awash in a sea of roughly 310 million firearms,[125] claiming nonetheless the capability to navigate directly to the single and only gun that could have discharged a bullet or cartridge. But because their methodology calls for side-by-side, show-up-style comparisons and welcomes investigators to provide examiners with task-irrelevant information, the distorting effects of cognitive bias may well lead them astray. And to further complicate matters, even assuming that practitioners persevere through these sources of cognitive corruption, the markings on the projectiles they examine, in part due to ever-advancing standardization and calibration in the manufacture of the weapons firing them, are not so easily sorted into matches and non-matches. Instead, markings left by different firearms, whether based on the close proximity of the manufacture of such guns by the same production tool or even randomly, may exhibit striking similarity exceeding even that of markings produced by the same firearm.[126] Examiners, as the following sections will demonstrate, concede the dangers presented by these complications. And although they remain willing to contradict those admissions and stand by the reliability of their discipline, this Court should not permit such incongruity to distort its truth-seeking function.

  1. Cognitive bias can distort an examiner’s perception of evidence & produce error.

 

            At bottom, cognitive bias is merely the professional parlance for what laypeople, as evident from the long-unbroken parade of advertising focused on blind taste tests (e.g., the Pepsi Challenge), have long accepted: that outside influences, stimuli, or preconceptions can cloud subjective, human judgment.[127] Some psychologists have even more directly termed the concept “mental contamination,”[128] but regardless of the title given it, scientists have long acknowledged that cognitive bias “can lead to perceptual distortion, inaccurate judgment, or illogical interpretation,”[129] specifically because it causes decision makers to “seek information that they consider supportive of a favored hypothesis or existing beliefs and to interpret information in ways that are partial to those hypotheses or beliefs.”[130] So significant are the effects of cognitive bias that researchers have noted that “[i]f one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration.”[131]

In sync with the warnings of these experts, studies have consistently shown that the misconceptions of laypeople and scientists alike can remain steadfast even in the face of clear evidence that their ideas are wrong. Laypeople have faltered due to cognitive bias and thereby ignored market trends,[132] inappropriately focused on “blossoming students” (thus inflating their performance relative to their peers),[133] and enacted sexism in hiring.[134] And the same distorting effects have infected scientific work for decades,[135] skewing even the results of clinical trials of novel medical treatments.[136] Moreover, forensic practitioners are not exempt from those cognitive biases that all interpreters of data and information face.[137] In fact, biasing contextual information (even when mundane) has been documented to cause serious mistakes and misidentifications across a wealth of forensic disciplines including the use of dogs for scent detection,[138] forensic anthropology,[139] arson investigation,[140] handwriting comparison,[141] hair comparison,[142] bite mark analysis,[143] bloodstain pattern analysis,[144] fingerprint comparison,[145] and even DNA.[146]

Because cognitive bias occurs at a subconscious level and “cannot be willed away,”[147] these findings in no way diminish the professionalism or virtue of scientists, both forensic and otherwise:

“contextual bias is by no means limited to cases of misconduct or bad intent. Rather, exposure to task-irrelevant information can bias the work of [examiners] who perform their job with utmost honesty and professional commitment. Moreover, the nonconscious nature of contextual bias also means that people cannot detect whether they are being influenced by it. It follows that task-irrelevant information can bias the work of [examiners] even when they earnestly and honestly believe they are operating with utmost objectivity.”[148]

 

Instead, experts (given the shortcuts in reasoning and perception that their experience affords them) may actually be at a greater risk of succumbing to cognitive bias (again, unintentionally) than non-experts.[149] Thus the broader scientific community has, without any associated shame, adopted best practices to limit the impact of cognitive bias, such as double-blind trials and exposure control of extraneous information.[150] But forensic disciplines have been slow to follow. No wonder, then, that the National Academy of Sciences, after stressing that “the findings of forensic science experts are vulnerable to cognitive and contextual biases,” expressed concern over the lack of “good evidence to indicate that the forensic science community has made a sufficient effort to address the bias issue,”[151] and recommended significant research into best practices for the forensic sciences.[152] And more recently, the National Commission on Forensic Science, PCAST, and prominent forensic experts have all stressed the need to keep examiners ignorant of extraneous case information (the identity of a suspect, the fact of an arrest or confession, etc.) and other biasing contexts.[153]

            But on theme, the discipline of firearms examination has failed, again unlike its peer disciplines, to conduct rigorous research into the effects of cognitive bias on its practitioners or to develop procedures to minimize its corrupting influence. In fact, the one study conducted on the topic (1) failed to satisfy OSAC, which still identifies the issue as direly in need of scientific investigation,[154] (2) utilized a sample of examiners so small that even the authors concede they easily could have failed to detect the effects of cognitive bias, and (3) presented examiners with biasing contexts far less extreme than even those encountered in routine casework.[155] Yet still it recorded at least some differences between biasing and non-biasing conditions.[156] Regardless, and as explained above, nothing in the scientific literature or the realm of common sense would suggest that firearms examiners are uniquely immune from mental distortion. Rather, they have themselves admitted that outside influences can corrupt their decision making.[157] What they have failed to do is respond to the risk posed by cognitive bias.

Moving beyond the typical defense refrain of bias due to examiners’ work with law enforcement (although such effects have been documented to produce significant changes in expert opinion),[158] these practitioners continue to receive task-irrelevant information: the examiner in this case, for example, was unnecessarily informed that the submitted gun had already been linked by investigators to Mr. [---] (the State’s main suspect in the shooting of Summer Moore). Moreover, they conduct their analysis with the questioned and known bullets side-by-side, prompting them to reason back and forth between the two and creating the risk that they will “see in data the patterns for which they are looking, regardless of whether the patterns are really there.”[159] And finally, examiners are generally (and in this case were) provided with only one gun and asked to match it to a recovered projectile. Such examinations, as both district courts and forensic experts have noted, are “in effect, an evidentiary ‘show-up,’ not what scientists would regard as a ‘blind’ test.”[160] Illinois courts consider such practices “inherently suggestive and not favored as a means of identification” in the eyewitness context.[161] And given the equal susceptibility of subjective expert judgments to suggestion, there is ample reason to extend that logic and question the forensic evidence in this case, especially because the threat of cognitive bias actually reaches an apex in the context of firearms examination where, as the following section will demonstrate, practitioners “must rely on data that are somewhat ambiguous.”[162]

  2. Coincidental correspondence and subclass markings blur the line between matches and non-matches.

 

Initially, it bears mentioning that firearms examiners begin their work behind the eight ball, given that the bullets and cartridges they examine are most often damaged and deformed due to environmental factors or impacts against surfaces. The areas suitable for comparison may therefore be limited, and the relevant features necessary to identify or exclude a gun as the source of a projectile obscured.[163] But modern manufacturing techniques (meaning even the advent of mass production and its application to firearms in the 1920s)[164] cast doubt on examiners’ ability to reliably analyze even pristine samples. Such methods have resulted in increased calibration of the tools used to produce guns and thus heightened standardization of gun surfaces: “Modern mass-production methods for tools dictate the necessity of minimizing the manufacturing steps in order to make tool production as economical as possible. When this occurs, the manufacturing process could turn out consecutively manufactured parts that would have similar surface conditions.”[165] In fact, in some circumstances, examiners struggle to identify even the class characteristics of ammunition.[166] Because “variation due to manufacturing and individual wear patterns continues to be minimized by manufacturing processes,” critics have gone so far as to note broadly that “there is simply no basis for the assumption, fundamental to classic toolmark identification theory and technique, that those markings previously classed as individual characteristics…are in fact unique to a particular gun.”[167] And examiners themselves admit that “as the techniques of firearms manufacture have evolved, following mostly commercial rather than forensic arguments, [their foundational assumptions] need to be verified on a regular basis.”[168]

Moreover, as alluded to earlier, the limited data produced by the minimal research that firearms examiners have performed over the years bears out that, given advances in manufacturing, no clear line actually divides matches from non-matches. Instead, different tools, including guns, can produce marks with striking similarity equivalent to that of marks produced by the same tool, especially if an examiner has only a small surface area available to consider.[169]

[170]

Biasotti, for example, noted as early as his 1959 article comparing test fires from Smith & Wesson revolvers that “the average percent match for bullets from the same gun is low and the percent match for bullets from different guns is high.”[171] In fact, “the total number of matching lines [in a match] is often no higher or even less than the number which could occur as the result of chance.”[172] And William Tobin, summarizing Biasotti’s work and the studies replicating it, would eventually emphasize that:

Among those publications that hint at the nature and scope of the problem, one found up to 52% matching lines in a known non-match and another only 21-24% (steel-jacketed bullets) and 36-38% (non-jacketed bullets) concordance on bullets fired from the same gun.  It has been observed that there are typically 2 and 3 times more matching striations in known non-matches (fired in different guns) than in those fired in the same gun.[173]

 

Thus, examiners may ultimately err because their assumptions about the level of similarity sufficient to constitute a match actually overlap with the level of similarity possible in non-matches.[174] This is likely a frequent occurrence given that, as the charts below demonstrate, the similarity between known non-matches may exceed not just that found in some matches, but that found commonly in matches:

Percentage of Matching Two-Dimensional Striations in Single Land Impressions 

Percentage of Matching Three-Dimensional Striations in Single Land Impressions[175]

            Additionally, researchers have more recently taken advantage of computerized databases of bullet and cartridge images to confirm the troubling overlap of similarity as between known matches and known non-matches. One such researcher, noting that prior to the availability of databases, close non-matches could be discovered only “sporadically” during the course of casework, concluded that when he retrieved non-matches that were highly ranked as hits by databases capable of searching immense numbers of samples simultaneously, he could observe “numerous two dimensional similarities.”[176] And such misattributions were not simply the result of some computerized error: “when using a comparison microscope, these similarities are still present and it is difficult to eliminate comparisons even though we know they are from different firearms.”[177] Moreover, the National Institute of Standards and Technology, in its attempt to develop an even more nuanced database by applying advanced surface topography imaging and comparison algorithms to the evaluation of cartridge cases, similarly found that non-matches often exceed matches in terms of the similarity scores assigned them [see chart on next page].

[178]

All that, however, is before mentioning the disconcerting reality that firearms examiners will struggle even to appropriately identify which marks on bullets and cartridges are actually “individual” in nature. As noted at the outset of this brief, certain manufacturing processes, as well as imperfections on any number of manufacturing tools used to make guns, may produce what examiners term “subclass characteristics.”[179] Such marks are essentially visually indistinguishable from individual marks, but in point of fact are shared by all guns of a batch rather than any one gun: “some machining processes are capable of reproducing remarkably similar surface characteristics…on the working surfaces of many consecutively produced tools which if not recognized and properly evaluated could lead to a false identification.”[180] Thus the potential for error should be clear: if examiners confuse subclass marks with their visually indistinguishable cousins (individual marks), they will identify a single gun as the source of marks on a bullet or cartridge when in reality tens, hundreds, or even thousands of guns from a batch[181] would have produced the same patterns. And: “The danger that misidentifications will result from confusing subclass with individual characteristics is real, not theoretical. In the 1980s, this type of confusion was discovered to have produced misidentifications of striated toolmarks.”[182] One expert has even gone so far as to say that “The most seminal, but problematic, obstacle for toolmarks examiners, however, is discerning subclass from purported ‘individual’ characteristics.”[183]

                       [184]

The images above provide examples of such marks in configurations both easily mistaken for individual markings and of sufficient quality (if they were individual) to justify calling a match.

            AFTE may dismiss subclass characteristics as rare and easily discerned by trained examiners,[185] but even its own theory of identification states: “caution should be exercised in distinguishing subclass characteristics from individual characteristics.”[186] And its lack of concern flies in the face of statements by prominent firearms examiners who have studied the issue, as well as nearly every article on the topic contained in its own journal. One examiner notes that “the specter of subclass characteristics has loomed over the field of firearms identification for a number of years,”[187] and another has said explicitly that “the difficulty of addressing subclass characteristics is not in debate.”[188] And for good reason, given that firearms examiners have described the appearance of such marks as prevalent,[189] and, in fact, at least one forensic scientist specializing in manufacturing and metallurgy believes that “most metal forming operations generally impart characteristics of forced contact on the work piece (firearm components in this case) that are overwhelmingly subclass [rather than individual] in nature.”[190] At all events, subclass marks can appear on all the gun surfaces firearms examiners consider in their analysis, as well as the surfaces of even unfired cartridges; and article after article, for decades, has noted that such marks will readily cause misidentifications by examiners[191] and computers alike.[192]

            Worse still, subclass markings have generally been uncovered only at random,[193] and firearms examiners have no rules or credible guideposts to help them distinguish such marks from individual characteristics, or even to alert them as to when to expect subclass markings.[194] Rather, the few nuggets of advice the literature has offered practitioners (e.g., consider whether parallel striations on cartridges appear within a firing pin impression, or utilize caution when considering groove impressions but feel safe considering land impressions)[195] have been negated by later discoveries.[196] In fact, even when Ronald Nichols was challenged by a critic of the discipline, during a debate in the Journal of Forensic Science, to delineate whatever rules he uses to address subclass characteristics, he could impart none, returning instead to broken-record reliance on examiner training and experience,[197] a particularly concerning response given how significantly examiners struggle with subclass marks during proficiency testing.[198] Such failures to define standards or support examiner capability with regard to subclass characteristics have led one expert to question “how toolmark trainers communicate behind closed doors with trainees to recognize the difference between subclass and individual characteristics if instructors cannot articulate such differences in published articles.”[199]

  • Known Errors by Firearms Examiners in Testing & Practice Should Disturb the Court.

 

DNA testing has exposed the forensic pattern matching disciplines as significant contributors to wrongful conviction.[200] Given that reality, especially when combined with all the weaknesses of firearms examination already discussed throughout this motion, it should come as no surprise to the Court that, when tested, firearms examiners simply cannot live up to the lofty ideals of practical certainty they proclaim. Instead, they must concede significant variability as between examiners,[201] sometimes based on factors as unscientific as the geographic region where an examiner received training,[202] and extending to even the most basic tasks (such as counting and matching up striations on surfaces).[203] In fact, in perhaps the only empirical study that has ever endeavored to track and quantify analytical variations between examiners, participants observing the same tooled surfaces and asked to count lines, match lines, and calculate a total percentage of matching lines differed in their final counts by as much as 21 total lines, 23 matching lines, and 34% matching lines.[204] And it bears mentioning that when experts from other forensic pattern matching disciplines have, unlike those in the field of firearms identification, actually undertaken research into the reproducibility of findings between examiners and the repeatability of findings by the same examiner, their results have done little to inspire confidence.[205]

At bottom, if accounting for the very real costs of examiner mistakes and accepting the higher end of their error rates, accurate identifications are not just far from practically certain; they scarcely qualify as particularly common. PCAST and other scientific authorities have long noted the limited value of the proficiency testing to which firearms examiners are subjected because such tests present examiners with far simpler tasks than does actual casework, allow test-takers more time than can be allotted to casework, can be worked on collaboratively, and are declared rather than blind (meaning examiners know they are being tested and may therefore exercise greater caution so as to avoid incorrect responses).[206] Thus, any error rates generated by proficiency testing represent merely “lower-bound estimates.”[207] But despite the ease of the task before them, firearms examiners opt out of responding (by answering inconclusive) more than practitioners from any other discipline (at rates, on the most trying tests, of up to 69% and 97%).[208] When they do provide answers, they misidentify ammunition[209] at disturbing levels: (1) examiners taking a European proficiency test misidentified bullets and cartridges 8.2% of the time {1 in 12 cases},[210] (2) standard American proficiency tests have recorded misidentification rates as high as 10.5%,[211] and (3) a program designed to test the performance of crime labs recorded misidentification rates of up to 28.2% {1 in 4 cases}.[212]

Moreover, these rates rise even higher when firearms examiners confront more difficult samples. One of the studies reviewed by PCAST, for example, uncovered an upper-bound estimate of misidentification in the equivalent of one in nineteen cases.[213] And, in the one test to present examiners with the task of ferreting out subclass marks, 26.4% of examiners failed to classify the marks as subclass features and perpetrated misidentifications, a real-world equivalent of an error in over 1 of every 4 such cases considered (this despite the fact that images of the specific marks had appeared in an AFTE Journal article just two years prior to the test, warning practitioners to exercise caution).[214] And finally, the one audit[215] of actual casework undertaken regarding firearms examiners (specifically, one involving the Detroit police laboratory) reported a misidentification rate exceeding 10%, based on a finding that 29 of the 250 cases reviewed involved misidentifications.[216]

            But however disturbing those error rates appear in the abstract, the real-world cost of the arrogance of firearms examiners far overshadows them. Specifically, the 29 very real victims of the Detroit lab’s errors are by no means an anomaly. Misidentifications have ravaged defendants since the earliest days of firearms examination’s use in court,[217] up through as recently as the investigation into a rash of unexplained shootings over the last year on the highways of Arizona.[218] They have occurred across the full geographic span of the United States,[219] threatened the livelihood of police officers,[220] afflicted both high-profile[221] and unheard-of cases,[222] infected even the labs of major metropolitan areas,[223] and left defendants unjustly languishing on death row.[224] That the most candid of firearms experts concede the reality of such errors and decry their frequency[225] should not offer this Court any comfort, given their continued assertions of reliability and certainty, as well as the fact that the exposed misidentifications likely pale in number compared to those left undiscovered.[226]

  1. FIREARMS EXAMINATION ENJOYS NEITHER THE RELIABILITY NOR THE GENERAL ACCEPTANCE NECESSARY FOR ADMISSIBILITY.

 

            The preceding sections of this motion demonstrate that the field of firearms examination deserves none of the aura of infallibility that has surrounded it for decades. In fact, practitioners themselves have long recognized their discipline’s failure to quantify and calibrate its standards or to credibly gauge the ultimate reliability of its methods when applied. But while firearms examiners may remain content to elide mention of such limitations and propound indefensible conclusions, experts from the disciplines at the foundation of that forensic methodology outright reject the ultimate validity of firearms examination as a proven and acceptable science. At this point, the authoritative chorus of credentialed specialists criticizing the discipline simply overwhelms the minimal loyalty it retains only amongst law-enforcement-engaged forensic practitioners. Moreover, the discipline’s negligibly demonstrated reliability, when coupled with the substantial pitfalls confronting examiners, cannot justify the undue prejudice that would accompany the exaggerated and misleading statements of certainty peddled by practitioners. Thus, this Court should bar the testimony of the State’s firearms examiner as not generally accepted under Frye, or alternatively, as substantially more prejudicial than probative.

  • The General Acceptance of Firearms Examination Did Not Survive the Last Decade’s Proliferation of Attacks By the Broader Scientific Community.

 

            In Illinois, the State, as the proponent of scientific evidence, bears the burden of satisfying the threshold inquiry set forth in Frye, which permits admission “only if the methodology or scientific principle upon which the opinion is based is sufficiently established to have gained general acceptance in the particular field in which it belongs.”[227] And admittedly, a merely passing review of the discipline of firearms examination might seem to confirm that it could survive review under that standard: it has enjoyed years of use in the courts of this State, and has never been excluded under Frye in the courts of any other. But even those recent decisions by Illinois appellate courts[228] endorsing the admissibility of firearms examination simply have not accounted for the scientific consensus now arrayed and crescendoing against firearms examination, ultimately failing to evaluate the effect of, among a plethora of other sources, PCAST’s unambiguous and scathing conclusion that the discipline runs aground even under the unexacting criteria of foundational validity.[229] Thus, despite their fresh vintage, they in no way foreclose the more robust challenge to firearms examination presented by the Defense in this matter, and ultimately leave Illinois’s law regarding the legitimacy of the discipline unsettled. Left then on its own to weigh the significance of the addition of the PCAST report to the substantial chorus of doubts about firearms examination previously raised in scientific papers (doubts which could fairly be summarized as the field’s rejection by every authoritative, legitimately scientific source to have evaluated it), this Court should at minimum conclude that firearms examination no longer enjoys the “unequivocal and undisputed” accord necessary to admit such evidence via judicial notice, thus necessitating a pretrial hearing on its general acceptance.[230]

             As an initial matter, the State’s response to this motion will likely begin by arguing that firearms examination evidence, given its decades of use in criminal courts, does not qualify as “new or novel” and is therefore beyond the reach of Frye.[231] But for this Court to reflexively foreclose its inquiry on that basis, it would have to ignore the truism that today’s scientific hypothesis may turn out to be tomorrow’s claim of a flat earth (or, as PCAST frames the issue: “from a purely scientific standpoint, the resolution is clear. When new facts falsify old assumptions, courts should not be obliged to defer to past precedents: they should look afresh at the scientific issues.”).[232] And nothing from a legal rather than scientific standpoint would require this Court to walk the absurd path of turning a blind eye to the evolution of ideas central to the nature and purpose of science. Instead, myriad courts have described Frye as a test that can and must adapt to advances in understanding, acknowledging that “scientific developments may require that the court consider afresh whether a particular proffer meets the [Frye standard].”[233] Thus, by way of example, bullet lead analysis was eventually excluded under Frye, despite routine and decades-long admission in criminal trials, when scientific revelations, parallel to those facing firearms examination, undercut its reliability and validity.[234]

And in that regard, Illinois’s approach to the issue of novelty proves to be no outlier, incorporating the same recognition that “constant scientific advances in our modern era may affect our inquiry as to the novelty of a particular method.”[235] It was that reasoning that led the Illinois Supreme Court in McKown to conclude that HGN evidence, although used for years by police, qualified as novel “given the history of legal challenges to [its] admissibility … and the fact that a Frye hearing ha[d] never been held in Illinois on th[e] matter,”[236] as well as the First District to demand a hearing on the Gudjonsson Suggestibility Scale based on evolving scientific standards, holding that “acceptance of the GSS in the field of forensic psychology was unsettled despite its almost 30-year existence and, thus, remained a novel scientific methodology.”[237] And it is that same reasoning that should compel this Court to advance to the second stage of Frye and evaluate the general acceptance of firearms examination evidence. Not only has the field of firearms examination, like HGN testing at the time of McKown, never been vetted at a pretrial hearing in Illinois,[238] but it has also been subjected over the last decade to substantial scientific and legal challenges, challenges made materially more robust and unassailable by the PCAST report’s authority, scope of review, and direct rejection of the underlying validity of the discipline. No court (in Illinois or otherwise) has vetted the full scope of said challenges in the context of a hearing, and as such (even if paradoxically given the discipline’s age), firearms examination qualifies as new and novel for purposes of Frye.[239]

Turning then to the central issue of admissibility: the Frye standard aims to exclude methods “that undeservedly create a perception of certainty when the basis for the evidence or opinion is actually invalid.”[240] And even before delving into the test’s finer points, it should seem plain that firearms examination violates that animating spirit considering that its practitioners claim unsupported levels of certainty foreign to even the forensic gold standard of DNA, without any of that more rigorously-tested discipline’s empirical foundations. But more than that, firearms examination also runs afoul of Frye’s specific requirement of general acceptance, because although the field is not bereft of adherents (firearms practitioners themselves), experts from the relevant scientific communities of metrology and metallurgy have wholly rejected its methodology. Although the State need not demonstrate “universal acceptance,” that divide deprives the field of the “consensus” as opposed to “controversy” required for admissibility, and demonstrates that firearms examination possesses at best only “dubious validity.”[241] At all events, this Court could simply deny this motion and admit the testimony of the State’s expert only if “unequivocal and undisputed prior judicial decisions or technical writings on the subject” so dispose of the matter as to permit a determination via judicial notice.[242] But such a claim is belied on the scientific front by the PCAST report and other writings critical of firearms examination, and on the legal side by the simple truth that the judiciary has never had the opportunity to consider the full scope of those criticisms. Thus, at minimum, a pretrial hearing would be necessary to resolve the question of general acceptance.[243]

Although, as just stated, both scientific writings and prior judicial decisions may contribute to this Court’s understanding of general acceptance, Frye’s deference to the self-reflection and self-regulation of science clearly privileges the former.[244] And in evaluating that primary determinative issue, it is vital to account for all the available and pertinent literature from the scientific fields on which firearms examination is based.[245] In fact, this Court should place a particular emphasis on ensuring that the relevant scientific community it considers extends beyond practitioners of firearms examination alone, thereby allowing room for disagreement.[246] Fortunately, authorities from the broader scientific community and practitioners themselves agree that the relevant foundational fields underlying firearms examination are metrology (the science and application of measuring features) and metallurgy/materials engineering.[247]

And writings from those fields overwhelmingly skew in favor of excluding firearms examination evidence. Given the exhaustive review specifically of firearms examination by PCAST’s metrologists (recall that it involved analysis of more than 400 bibliographic sources as well as exchanges with practicing firearms examiners) and that panel’s direct mandate to evaluate foundational validity, its indictment of the discipline obviously stands as the strongest indicator that firearms examination enjoys no general acceptance by its parent fields, much less the undisputed and unequivocal endorsement in technical writings necessary to avoid a full pretrial hearing. And the daunting coalition that has published about and coalesced around the nigh-unanimous view that firearms examination lacks the adequate empirical basis necessary for scientific acceptance wholly eclipses any support the discipline still retains, and includes, among others: (1) the metallurgists, materials engineers, and metrologists of the 2008 and 2009 NAS panels; (2) leading statisticians; (3) academics from all walks of the scientific community publishing in the most prestigious of journals; (4) at least a majority of the OSAC committee for firearms and toolmarks[248] as well as the full NCFS; (5) experts at the intersection of law and science, like Adina Schwartz; (6) a former metallurgist with the FBI, William Tobin, who has spent years analyzing the studies and methods of the discipline; and (7) the forensic organizations that have endorsed PCAST. The existence of such authoritative and diverse consensus, and the reality that many firearms examiners agree that their discipline needs but lacks a legitimate empirical foundation, demonstrate that the field is awash in controversy sufficient to disrupt judicial notice of its general acceptance.

But more than that, this Court could well dispense with a hearing and simply exclude the State’s firearms expert, because to rebut the authorities just cited and carry its burden of showing general acceptance, the State could and likely will present only the patently self-serving testimony of firearms examination practitioners. And permitting firearms examiners to establish the general acceptance of their own field would undercut the “scrutiny of the marketplace of general scientific opinion” central to Frye:

To allow general scientific acceptance to be established on the testimony alone of witnesses whose livelihood is intimately connected with a new technique would eliminate the safeguard of scientific community approval implicit in the general scientific acceptance test. Scientific community approval is absent where those who have developed and whose reputation and livelihood depends on use of the new technique alone certify, in effect self-certify, the validity of the technique.[249]

 

Instead, Frye’s criterion “requires the testimony of impartial experts or scientists,”[250] a truism recognized by the McKown court’s focus on the objectivity of sources in its evaluation of HGN evidence (as well as its conclusion that general acceptance must not turn on “the testimony or writings of law enforcement officers or agencies”),[251] and the First District’s holding that a discipline’s use in crime labs cannot “justify admission of evidence in the face of a bona fide scientific dispute.”[252] As law enforcement professionals whose very livelihoods would dissipate if their discipline were to be rejected by the courts, firearms examiners cannot qualify as objective experts with regard to general acceptance. And AFTE, as a trade organization representing the interests of such examiners, finds no stronger footing.[253] As such, unless the State in its response presents the opinion of a credentialed metrologist or metallurgist who accepts the validity of firearms examination (and the defense is aware of none), a pretrial hearing would simply expend judicial resources without avoiding the inevitability of excluding the State’s firearms expert.[254]

Moreover, prior judicial decisions do not diverge as sharply from the scientific opinions cited above as the lack of any case ultimately excluding firearms examination evidence might suggest, and thus provide insufficient support to warrant admission. True, the earliest of such cases permitted firearms examination testimony without pause.[255] But importantly, those bygone precedents arose before the past decade’s revelations regarding the weaknesses of forensic methods, in periods when such disciplines were “assumed rather than established to be foundationally valid.”[256] In contrast, the weight of recent judicial authority (of which the Robinson and Rodriguez decisions considered only a small percentage)[257] has (1) acknowledged serious deficiencies with regard to the reliability of and empirical support underlying firearms examination, and (2) almost uniformly imposed stringent limits on the scope of firearms examination evidence.[258] Those restrictions have often been so severe that they have approximated outright exclusion by effectively sapping firearms examination evidence of its probative value,[259] and have been justified by caustic rhetoric normally foreign to the niceties of legal discourse, with one judge going so far as to describe the probative value of firearms examination as akin to “the vision of a psychic” and emphasizing that “it reflects nothing more than the individual's foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases, where we demand proof—real proof—beyond a reasonable doubt, precisely because the stakes are so high.”[260]

Admittedly, however, none of the cases just referenced goes further, in terms of its ultimate admissibility ruling, than those reviewed by the Robinson or Rodriguez courts (although the sheer number of such decisions and their consistency across multiple forensic disciplines that share the limitations associated with firearms examination may well have troubled the First District had the full gamut of cases cited herein been available to or discovered by that appellate body).[261] But mere “[r]eliance upon other courts' opinions can be problematic,” because “[u]nless the question of general acceptance has been thoroughly and thoughtfully litigated in the previous cases, reliance on judicial practice is a hollow ritual…[and] could become a yellow brick road for judicial acceptance of bogus or at least unvalidated scientific theories or techniques.”[262] Thus, the Illinois Supreme Court in New refused to take judicial notice of the general acceptance of a paraphilic diagnosis in part because previous cases that had considered the issue had not had the opportunity to account for the most recent version of the DSM manual,[263] and the Fourth District in Kirk declined to follow past cases that had admitted HGN evidence given the existence of scientific articles published since they were decided as well as the lack of robust hearings supporting admission.[264] In fact, even the Robinson decision stressed that it could sanction the admission of firearms examination evidence absent a Frye hearing only because other courts had already extensively vetted all of the sources the defendant in that matter wished to use to challenge the discipline.[265]

But there’s the rub: no caselaw currently controls or settles the law regarding firearms examination evidence, because the Robinson and Rodriguez cases reviewed the effect, essentially, of only the NAS reports; they did not (nor did the decisions they relied upon) confront the far more developed record, including the PCAST report among other materials, presented by the Defense in this motion.[266] In fact, in spite of often robust litigation concerning firearms examination (and the many doubts about its foundation and validity such litigation consistently produced), many past cases involving the discipline simply and ultimately cited back to even older cases, from times before the weaknesses of the field surfaced, to support admission. And given the PCAST report’s recent publication, only one court has even evaluated its effect on the calculus of general acceptance in the setting of a Frye hearing.[267] Because that report (by evaluating thousands of sources, clearly defining the scientific criteria for validity, and thoroughly applying said criteria to firearms examination) went further than any prior critique of the discipline, cases that precede its publication have, to put it plainly, been thoroughly undermined. Moreover, all that is without accounting for the wide swath of materials presented in this motion that, although available at the time of previous decisions admitting firearms examination, have never been reviewed by judicial authorities.

This Court could only guess at whether that resulted from the oversights of other defense counsel, judges, or some combination of the two. But ultimately it should not deprive the Defendant, having gathered together such writings, or the general public, with its vested interest in the marriage of criminal convictions only to legitimate science, of the opportunity to gauge the validity of firearms examination against the most robust case possible. As the old judge’s adage goes, “change the facts, change the ruling.” Well, the Defense has changed the facts by developing a scathing record against firearms examination, and this Court should not resolve that challenge, one based on newly published materials, by reference solely to cases that never weighed the significance of said sources. To do so would constitute precisely the type of hollow ritual of “grandfathering in irrationality”[268] warned against time and time again by Illinois appellate courts. Instead, this Court should rise to the occasion and gauge thoroughly (in the context of a hearing, and with the benefit of expert testimony) the present legitimacy of firearms examination evidence.

  • The Limited Probative Value of Firearms Examination Evidence Does Not Outweigh its Substantial & Unfair Prejudicial Effect.

 

Even if this Court remains convinced that its hands are tied under Frye, however, it should not dismiss the concerns regarding the validity of firearms examination raised throughout this motion as mere issues of weight as opposed to admissibility. Rather, the Illinois Supreme Court (along with other courts and commentators across the country) has emphasized that scientific evidence, even if deemed generally accepted, must still satisfy the strictures of Rule 403, which permits exclusion of evidence “if its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury.”[269] Thus, the court in Murray, after noting that “expert testimony, because of its powerful potential to mislead or confuse juries can be excluded under Rule 403 even if it would otherwise meet the standard for admissibility,” barred the testimony of an epidemiologist who, although claiming to utilize generally-accepted Weight of Evidence Analysis, in fact had distorted research conclusions while picking and choosing between studies to support her opinion.[270] And similarly, the Second District in Floyd, citing concerns about the unreliability of an expert’s underlying assumptions, refused to leave to vetting by the adversarial process the expert’s testimony regarding a (normally generally accepted) retrograde extrapolation calculation.[271] Additional examples abound of courts excluding experts from generally-accepted disciplines as unreliable,[272] and one court has even relied specifically on Rule 403 to limit the testimony of a firearms examiner.[273]

In fact, both courts and commentators have noted that expert testimony actually requires heightened, rather than diminished, vigilance in applying Rule 403. The United States Supreme Court has emphasized that "[e]xpert evidence can be both powerful and quite misleading because of the difficulty in evaluating it. Because of this risk, the judge … under Rule 403 of the present rules exercises more control over experts than over lay witnesses."[274] And other jurists and scholars have similarly noted the special deference juries grant expert testimony, as well as the real possibility that even the best cross-examination may be insufficient to dispel the reverence afforded forensic experts. Their warnings have come in the form of exhortations to “guard against complacency”[275] in admitting forensic testimony, the observation that “[b]ecause of the ‘talismanic significance’ and ‘authoritative quality’ that surrounds expert opinions, the court must be vigilant to prevent jury confusion caused by misleading testimony,”[276] and the recognition that “cross-examination is a minimal constitutional safeguard … But it is far from adequate."[277] On the home front, the Illinois Supreme Court has even highlighted the “natural inclination of the jury to equate science with truth and, therefore, accord undue significance to any evidence labeled scientific.”[278]

Moreover, the legal opinions just described find added support in scientific findings about the perceptions of jurors. The PCAST report, for example, concludes that “[c]ompared to many types of expert testimony, testimony based on forensic feature-comparison methods poses unique dangers of misleading jurors,” because “[t]he vast majority of jurors have no independent ability to interpret the probative value of results based on the detection, comparison, and frequency of scientific evidence…they would be completely dependent on expert statements garbed in the mantle of science.”[279] And a significant body of research underlies the PCAST report’s conclusion, bearing out the troubling reality that neither cross-examination nor the conflicting opinion of a defense expert would likely be effective in exposing the weaknesses of firearms examination or meaningfully impacting a jury’s perception of the strength of the State’s forensic evidence.[280]

Study after study shows that jurors struggle to assess the real value of forensic testimony, willingly defer to experts, and grossly underestimate the potential for misidentification.[281] Moreover, the same studies demonstrate that even robust and pointed cross-examination that is well-designed to expose weaknesses in forensic practitioners’ methods has little to no power to do so.[282] In fact, expert conclusions become even less susceptible to moderation by the adversarial process (and, notably, impart the lowest levels of understanding to jurors) when premised on years of experience and framed in unshakeable terms like “match” (as of course is the case with firearms examiners).[283] Finally, if this Court fails to act as a gatekeeper, juror misconceptions about the firearms examination performed in this case may well persist even in the face of testimony from an expert favorable to the defense.[284] No wonder then that these researchers have themselves concluded that their “results should give pause to anyone who believes the adversarial process will always undo the effects of weak expert testimony.”[285]

Turning then to the specifics of Rule 403’s application to firearms examination, we find a methodology that, despite the absurd and indefensible claims of its practitioners to practical certainty in their conclusions, actually offers little in the way of probative value. Illinois courts have long tied the probative value of scientific evidence to its reliability.[286] And as previous sections of this motion have highlighted, issues like cognitive bias as well as confusion based on coincidental similarity and subclass characteristics have resulted in myriad errors by firearms examiners (both when tested and in real-world casework), thus calling into serious question the discipline’s reliability and validity. But more than that, “without appropriate empirical measurement of a method’s accuracy, the fact that two samples in a particular case show similar features [i.e., match] has no probative value,”[287] or as one court has phrased the matter: “Without [a] probability assessment, the jury does not know what to make of the fact that the patterns match: the jury does not know whether the patterns are as common as pictures with two eyes, or as unique as the Mona Lisa.”[288] And because the discipline of firearms examination has not subjected itself to rigorous empirical testing (recall that the PCAST report concluded that only one suitable study has ever been performed), the State’s firearms expert simply could not, through other than rank and disallowed conjecture,[289] establish that the probabilities of an accurate match are sufficient even to qualify as relevant, much less amply probative.[290]

The scientific studies cited above demonstrate, however, that in contrast to such minimal probative value, the testimony of the State’s firearms expert would impose a significant risk of unfair prejudice and juror confusion because “juries will likely incorrectly attach meaning to the observation” of an alleged match.[291] And that troubling possibility is only intensified given that (1) said expert’s unwarranted claims of certainty will elide mention of the various pitfalls that diminish the reliability and precision of firearms examination, and (2) cross-examination will likely prove ineffective as a means of educating the jury about the weaknesses of the State’s forensic evidence. At bottom, the undeserved aura of infallibility cloaking firearms examination (especially when unfairly buttressed by the extreme conclusions of practitioners) is simply not amenable to correction by the normal workings of the adversarial process.[292] To prevent juror confusion and unfair prejudice, as well as to preserve the integrity of a trial’s truth-seeking function, this Court should therefore utilize its discretion under Rule 403, and bar testimony regarding firearms examination as substantially more prejudicial than probative.

  1. CONCLUSION

            For too many years, the field of firearms examination and its sister forensic disciplines have resisted introspection, refused to develop rigorous criteria, and grossly overstated the probative value of findings. Moreover, bolstered by judicial decisions admitting the testimony of practitioners without conducting searching inquiries or demanding foundational validity, forensic communities have dismissed research that might uncover limitations as a “net loss,”[293] resulting in the present reality that “clinical laboratories must meet higher standards to be allowed to diagnose strep throat than forensic labs must meet to put a defendant on death row.”[294] But reform is coming to forensic science and likewise to courts that would ignore scientific shortcomings, its inevitability bolstered by the consistency and authority of the critics positioned against foundationally-lacking forensics as well as by the wholesale failure of the community of forensic practitioners to offer any legitimate rebuttal to their attacks.

In fact, at every opportunity afforded them over the last decade, prominent scholars have balked at the lack of validation, questionable research practices, and overblown conclusions infecting firearms examination and similar pattern matching disciplines. The PCAST report represents not simply another voice added to the fray, but the culmination of all those years of growing scientific discontent distilled into a straightforward, unequivocal, and authoritative excommunication of firearms examination from the realm of valid and reliable methodologies. In the face of such overwhelming evidence about the limitations of firearms examination, criminal courts can acquiesce to its admission only for so long. And the public simply cannot continue to bear the cost of delay as measured in innocents wrongfully convicted and the persistent harm perpetrated by the guilty left free. This Court’s decision must therefore shoulder far more than just the already-weighty burden of Mr. [---]’s fate. At stake instead is the very respect the public will accord the courts of Illinois, for as one judge has already framed the issue: “Why trust a justice system that imprisons and even executes people based on junk science?”[295] The voices of dissent and the concerns of reliability documented throughout this motion with regard to firearms examination amply support a decision to exclude the testimony of the State’s examiners under Frye or Rule 403. Thus, the only question that remains is whether this Court will have the courage to rise to the historical moment, carve out a path to progress, and cast out voodoo science as having no place in any hall with claims to justice.[296]

 

Wherefore, Mr. [---]  requests that this Court issue the following orders:

  • Exclude the testimony of the State’s firearms examiners under Frye as not generally accepted by the relevant scientific community.

 

  • Exclude the testimony of the State’s firearms examiners under Rule 403 as substantially more prejudicial than probative.

 

  • Conduct a pretrial hearing to assess both the general acceptance and reliability of firearms examination pursuant to Frye and Rule 403.

 

  • Limit the testimony of the State’s firearms examiner by precluding conclusions phrased in terms of “practical certainty,” instead permitting only testimony that the firearms examiner could not exclude any particular gun as the source of any particular bullet or cartridge casing.

                                                                        Respectfully Submitted,

 

 

 

Attorney for Mr. [---]

 

 

[1] National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, at 7 (2009), available at https://www.ncjrs.gov/pdffiles1/nij/grants/228091.pdf.

[2] President’s Council of Advisors on Science & Technology, “Forensic Science in the Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods,” at 111 (Sept. 20, 2016), available at https://www.whitehouse.gov/administration/eop/ostp/pcast/docsreports.

[3] Robert Thompson, “Firearms Identification in the Forensic Laboratory,” at 7 (2010), available at http://www.crime-scene-investigator.net/firearm-Identification-in-the-forensic-laboratory.pdf.

[4] Id. at 7-8.

[5] Top Left: http://www.firearmsid.com/a_bulletidrifling.htm; Top Right: Thompson, “Firearms Identification in the Forensic Laboratory,” at 24; Bottom Center: David Kopel & H. Sterling Burnett, “Ballistic Imaging Not Ready for Prime Time,” National Center for Policy Analysis (April 3, 2003), available at http://www.ncpa.org/pub/bg160?pg=2.

[6] In situations where firearms examiners are presented with only crime scene samples and no suspect gun, they may also utilize a database system (IBIS/NIBIN) to attempt to associate the crime scene evidence turned over to them with cold cases. See Thompson, “Firearms Identification in the Forensic Laboratory,” at 29-30.

[7] Id. at 8.

[8] Id. at 8-9; AFTE, “Theory of Identification As it Relates to Toolmarks,” 30 AFTE J 86 (1998).

[9] AFTE, “The Response of the Association of Firearms & Tool Mark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasibility, Accuracy, & Technical Capacity of a National Ballistics Database,” 40 AFTE J 234, 237 (2008).

[10] AFTE, “Theory of Identification As it Relates to Toolmarks,” 30 AFTE J 86 (1998).

[11] Id.

[12] Id.

[13] Ronald G. Nichols, “The Scientific Foundations of Firearms & Toolmark Identification: Responding to Recent Challenges,” CAC News, 2nd Quarter, at 26 (2006), available at http://www.forensicdna.com/assets/2ndq06.pdf.

[14] AFTE, “Theory of Identification As it Relates to Toolmarks,” 30 AFTE J 86 (1998).

[15] PCAST, “Forensic Science in the Criminal Courts,” at 5.

[16] Id. at 5-6.

[17] Id. at 111.

[18] Id. at 46; see also Jennifer L. Mnookin et al., “The Need for a Research Culture in the Forensic Sciences,” 58 UCLA L. Rev. 725, 732 (2011) (emphasizing the centrality of research culture to valid scientific endeavors meaning “a focus on empirical evidence, transparency, and a consistently critical and reflective perspective on claims of knowledge”).

[19] National Research Council, “Ballistic Imaging,” The National Academies Press (2008), available at https://www.nap.edu/catalog/12162/ballistic-imaging; National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press (2009).

[20] See, e.g., Melendez-Diaz v. Massachusetts, 557 U.S. 305 (2009) (relying on The NAS Report for the finding that “serious deficiencies have been found in the forensic evidence used at criminal trials”); United States v. Mouzone, 696 F. Supp. 2d 536, 570 (D. Maryland 2009) (“Suffice it to say that the concerns expressed by the NRC ought to be heeded by courts in the future regarding the limits of toolmark identification evidence, and courts should guard against complacency in admitting it just because, to date, no federal court has failed to do so”).

[21] See PCAST, “Forensic Science in Criminal Courts,” at 144.

[22] http://www.nasonline.org/about-nas/mission/; see also David Kaye, “The good, the bad, the ugly: The NAS report on strengthening forensic science in America,” 50 Science & Justice 8, 8-9 (2010).

[23] National Research Council. “Ballistic Imaging,” at 2.

[24] Id. at 3.

[25] Id. at 81.

[26] Id. at 3.

[27] Id. at 82.

[28] Id.

[29] Id. at 82, 85.

[30] National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, at xix.

[31] The Honorable Harry T. Edwards, “The National Academy of Sciences Report on Forensic Sciences: What it Means for the Bench & Bar,” Presentation to the Superior Court of DC, at 1-2 (2010).

[32] Id. at 1 (noting also that “I can now say that the substance of the Committee’s Report was really not hard to write. The problems that plague the forensic science community have been well understood for quite some time by thoughtful and skilled forensic professionals”).

[33] National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, at 128 (2009).

[34] Id. at 7.

[35] Id. at 153-54.

[36] Id. at 155.

[37] Id. at 154.

[38] Id. at 107-108.

[39] Id. at 154 (“studies should be performed to make the process of individualization more precise and repeatable”).

[40] Michael J. Saks & Jonathan L. Koehler, “The Coming Paradigm Shift in Forensic Identification Science,” 309 Science 892 (2005) (saying also that “Scientists have begun to question the core assumptions of numerous forensic sciences” & decrying forensic sciences for their lack of transparency and scientific rigor).

[41] Donald Kennedy, “Forensic Science: Oxymoron?” 302 Science 1625 (2003) (“…the analysis of bullet markings exemplifies kinds of ‘scientific’ evidence whose reliability may be exaggerated when presented to a jury”); David L. Faigman, “Is Science Different for Lawyers” 297 Science 339 (2002) (concluding that although research into forensic identification sciences would be easily accomplished, little if any has actually been conducted); Donald Kennedy & Richard A. Merrill, “Assessing Forensic Science” 20 Issues in Sci. & Tech. 1 (2003) (emphasizing that “the scientific foundation of many common forensic science techniques may be open to question” because they “have not undergone the type of extensive testing and verification that is the hallmark of science elsewhere”).

[42] NATURE Editorial Board, “Science in Court” 464 Nature 325 (2010).

[43] See American Statistical Association, “ASA Board Policy Statement on Forensic Science Reform,” (April 17, 2010), available at http://www.amstat.org/asa/files/pdfs/POL-Forensic_Science_Endorsement.pdf; Karen Kafadar, “Statistical Issues in Assessing Forensic Evidence,” Technical Report 11-01, Dep’t of Statistics-Indiana University (April 21, 2011), available at http://www.stat.indiana.edu/files/TR/TR-11-01.pdf; Alicia Carriquiry, “Declaration in Support of Defendant Joseph Blacknell’s Motion to Exclude Firearms & Toolmark Identification Evidence Or, In the Alternative, for a Kelly Hearing,” (Nov. 21, 2011) (“In my opinion as a statistician with many years of experience, the studies that have been carried out and the (scant) data that have been collected in no way support the methods or the conclusions that are routinely drawn by firearms examiners”), available at https://afte.org/uploads/documents/swggun-cavblacknell-carriquiry.pdf.

[44] Paul C. Giannelli, “Forensic Science: Under the Microscope,” 34 Ohio N.U.L. Rev. 315 (2008) (noting an unfulfilled “need for comprehensive regulation of crime laboratories…there is a critical need for independent scientific validation of forensic techniques.”); Adina Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” 6 Colum. Sci. & Tech. L. Rev. 2 (2005) (reviewing literature on firearms examination and concluding that such evidence fails to meet standards of admissibility).

[45] John Thornton, “The General Assumptions & Rationale of Forensic Identification,” in Modern Scientific Evidence: The Law & Science of Expert Testimony, at 3 (1997).

[46] Id. at 36.

[47] Thomas L. Bohan, “President’s Editorial - Strengthening Forensic Science: A Way Station on the Journey to Justice” 55 J. Forensic Sci. 5, 6 (2010).

[48] Mnookin, “The Need for a Research Culture in the Forensic Sciences,” at 732-35, 778.

[49] See William Tobin & Peter Blau, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” 53 Jurimetrics 121 (2013); William A. Tobin & Clifford Spiegelman, “Analysis of Experiments in Forensic Firearms/Toolmark Practice Offered as Support for Low Rates of Practice Error & Claims of Inferential Certainty,” 12 L., Prob., & Risk 115 (2013); William A. Tobin, “Affidavit in Arizona v. Macumber,” (2011), available at https://afte.org/uploads/documents/swggun-azvmacumber-tobin.pdf.

[50] William Tobin, David Sheets, & Clifford Spiegelman, “Absence of Statistical and Scientific Ethos: The Common Denominator in Deficient Forensic Practices,” 4 Statistics & Public Policy 1, at 8 (2017).

[51] Id.

[52] Id. at 1, 10.

[53] A group of hundreds of forensic professionals charged by the National Institute of Standards and Technology “to create a sustainable organizational infrastructure that produces consensus documentary standards and guidelines to improve quality and consistency of work in the forensic science community.” See NIST, “Organization of Scientific Area Committees Roles and Responsibilities,” (2016), available at http://www.nist.gov/forensics/osacroles.cfm.

[54] See http://www.nist.gov/forensics/osac/upload/FATM-Research-Needs-Assessment_Class-and-individual-marks.pdf; http://www.nist.gov/forensics/osac/upload/FATM-Research-Needs-Assessment_Blackbox.pdf.

[55] NCFS, “Technical Merit Evaluation of Forensic Science Methods & Practices,” at 2 (2016), available at https://www.justice.gov/ncfs/file/881796/download.

[56] Id.

[57] NAS, “Strengthening Forensic Science in the United States” at 5.

[58] See AFTE, “The Response of the Association of Firearm and Tool Mark Examiners to the February 2009 National Academy of Science Report,” 41 AFTE J 204, 206 (2009); AFTE, “Comments on NCFS Views Document: ‘Scientific Literature in Support of Forensic Science and Practice,’” 47 AFTE J 109, 111 (2015).

[59] PCAST, “Forensic Science in Criminal Courts,” at x.

[60] Id. at 2, 23, 67.

[61] Id. at 144.

[62] https://obamawhitehouse.archives.gov/administration/eop/ostp/pcast/about/members.

[63] https://obamawhitehouse.archives.gov/administration/eop/ostp/pcast/docsreports.

[64] https://news.aafs.org/policy-statements/presidents-council-of-advisors-on-science-and-technology-pcast-report/.

[65] The Forensic Institute, “Commentary on PCAST 2016,” (last visited Jan. 19, 2017), available at http://www.theforensicinstitute.com/news-articles/views-and-opinions/commentary-of-pcast-2016.

[66] Kozinski, “Rejecting Voodoo Science in the Courtroom,” Wall Street Journal (Sept. 19, 2016); see also Motorola Inc. v. Murray, 147 A.3d 751, 759 (D.C. 2016) (Easterly, J., concurring) (“Fortunately, in assessing the admissibility of forensic expert testimony, courts will have the aid of landmark reports [including PCAST’s]… These reports provide information about best practices for scientific testing, an objective yardstick against which proffered forensic evidence can be measured, as well as critiques of particular types of forensic evidence”).

[67] PCAST, “Forensic Science in Criminal Courts,” at 4, 44-46.

[68] Id. at 26.

[69] Id. at 32.

[70] Id. at 5.

[71] Id. at 46.

[72] Id. at 49.

[73] Id. at 52-53; see also PCAST, “An Addendum to the PCAST Report on Forensic Science in Criminal Courts,” at 2 & 4 (Jan. 6, 2017) (“While scientists may debate the precise design of a study, there is no room for debate about the absolute requirement for empirical testing” and “there is no hierarchy in which empirical evidence is simply the best way to establish validity…in science, empirical testing is the only way to establish validity”).

[74] Id. at 62. It also bears mentioning that PCAST identified multiple forensic groups that would share its views, see id. at 63-65 & 105.

[75] See NAS, “Strengthening Forensic Science in the United States” at 154 (“Toolmark and firearm analysis suffers from the same limitations discussed above for impression evidence”).

[76] See PCAST, “Forensic Science in Criminal Courts,” at 101 (“Based largely on two recent appropriately designed black-box studies, PCAST finds that latent fingerprint analysis is a foundationally valid subjective methodology—albeit with a false positive rate that is substantial and is likely to be higher than expected by many jurors based on longstanding claims about the infallibility of fingerprint analysis.”)

[77] Id. at 60 (“[i]t declares that an examiner may state that two toolmarks have a ‘common origin’ when their features are in ‘sufficient agreement.’ It then defines ‘sufficient agreement’ as occurring when the examiner considers it a ‘practical impossibility’ that the toolmarks have different origins”). Importantly, PCAST provided the FBI lab with an opportunity to defend the theory and rebut PCAST’s understanding of its circularity. The lab was unable to do so, and merely restated the theory itself.

[78] Id. at 110-111.

[79] Id.

[80] Id.

[81] See David P. Baldwin, et al., “A Study of False-Positive & False-Negative Cartridge Case Comparisons” AMES Technical Report #IS-5207 (April 7, 2014) (the study notes that 5 of 218 participants [roughly 2%] committed 20 of the 22 total errors in the study, meaning those examiners erred 20 out of 50 times they considered evidence. The authors describe these results as demonstrating “a highly heterogeneous mixture of a few examiners with higher rates and most examiners with much lower rates” but identified no way to discriminate between the two.)

[82] PCAST, “Forensic Science in Criminal Courts,” at 111.

[83] Id. at 110. Before this Court dismisses that error rate as reasonable it should consider that PCAST, from a scientific standpoint, viewed the far lower error rates associated with fingerprint comparisons (as high as 1 in 306 cases for one study or 1 in 18 cases in a second) on the whole as “substantial.” See id. at 101.

[84] Baldwin, “A Study of False-Positive & False-Negative Cartridge Case Comparisons,” at 5.

[85] PCAST, “Forensic Science in Criminal Courts,” at 52.

[86] PCAST, “Forensic Science in Criminal Courts,” at 111 (acknowledging that the AMES study “did not involve consecutively manufactured guns” and that “Actual casework may involve more complex situations (for example, many different bullets from a crime scene)”).

[87] Jonathan J. Koehler, “Forensic Science Reform in the 21st Century: A Major Conference, a Blockbuster Report, and Reasons to be Pessimistic” 9 Law, Probability, & Risk 1, 4 (2010) (accusing AFTE of “recoil[ing] from the [NRC] report’s conclusion that the existing science does not support the strong claims made by firearms and tool mark examiners at trial”); Nature Editorial Board, “Science in Court” 464 Nature 325 (2010) (“many practitioners have closed themselves off from any open sharing of methods and information with the academic community.”); AFTE, “Response to the PCAST Report on Forensic Science,” (Oct. 31, 2016) (citing to no specific studies overlooked by PCAST, failing to respond to its criticisms of proficiency testing, and offering no legitimate retort to descriptions of its theory as circular), available at https://afte.org/uploads/documents/AFTE-PCAST-Response.pdf.

[88] AFTE, “The Response of the Association of Firearms & Tool Mark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasibility, Accuracy, & Technical Capacity of a National Ballistics Database,” 40 AFTE J 234, 241-42 (2008); Ronald Nichols, “Defending the Scientific Foundation of the Firearms & Tool Mark Identification Discipline: Responding to Recent Challenges,” 52 J. Forensic Sci. 586, 590-91 (2007); Department of Justice, “Letter in Manning v. Mississippi, 2013-DR-00491-SCT” (May 6, 2013).

[89] See e.g., PCAST, “Forensic Science in Criminal Courts,” at 19 (describing as “scientifically indefensible” claims of: “‘zero,’ ‘vanishingly small,’ ‘essentially zero,’ ‘negligible,’ ‘minimal,’ or ‘microscopic’ error rates; ‘100 percent certainty’ or proof ‘to a reasonable degree of scientific certainty;’ identification ‘to the exclusion of all other sources;’ or a chance of error so remote as to be a ‘practical impossibility.’”); NAS, “Ballistic Imaging,” at 82 (rejecting certainty statements because they “‘cloak an inherently subjective assessment of a match with an extreme probability statement that has no firm grounding and unrealistically implies an error rate of zero”); NAS, “Strengthening Forensic Science,” at 142, 184 (concluding that practitioners should abandon absolutist claims of identification in favor of “modest claims about the meaning and significance of a ‘match,’”  as well as that “the concept of ‘uniquely associated with’ must be replaced with probabilistic association, and other sources of the crime scene evidence cannot be completely discounted”); NCFS, “Views of the Commission Regarding Use of the Term ‘Reasonable Scientific Certainty’,” Dep’t of Justice, at 3 (2016) (emphasizing that even the lesser term reasonable scientific certainty “cloaks” conclusions with unjustified levels of rigor and respectability and would confuse or mislead jurors concerning the weight owed forensic testimony), available at https://www.justice.gov/ncfs/file/839731/download.; Working Group on Human Factors in Latent Print Analysis, “Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach,” National Institute of Justice, at 72 (2012) (rejecting extreme source attribution conclusions for fingerprint examiners as scientifically deficient); Tobin, “Hypothesis Testing the Critical Underlying Premise of Discernable Uniqueness in Firearms-Toolmarks Forensic Practice,” 53 Jurimetrics at 131 (calling on firearms examiners to “curb 
the excesses” of their conclusions and noting that “the switch to weaker forms of source attribution (such as ‘practical certainty’) is a cosmetic change that does nothing to remedy the underlying scientific shortcomings of F/TM practice”); Simon A. Cole, “Individualization is Dead, Long Live Individualization! Reforms of Reporting Practices for Fingerprint Analysis in the United States,” 13 Law, Probability, & Risk 117 (2014) (describing terms absolute and practical certainty as redundant and noting that practical certainty is “an obscure and seemingly nonsensical value for a probability” and concluding: “neither the Theory of Identification nor the toolmark literature provides a defensible justification for claims that toolmark analysis can reduce the probability that two impressions derive from different sources to ‘practical impossibility.’”); Budowle et al., “A Perspective on Errors, Bias, & Interpretation in the Forensic Sciences and Direction for Continuing Advancement,” 54 J. Forensic Sci. 798, 804 (2009) (conceding that with the use of terms like match there may be an “unintended contribution to bias (i.e., conveying more strength than intended)” and suggesting “instead the term ‘failure to exclude,’ which may seem to some more acceptable”); John M. Collins, “Stochastics-The Real Science Behind Forensic Pattern Identification,” The Crime Lab Report (2009), available at http://forensicfoundations.com/Resources/Documents_CLR/Archive_Legacy/2009_1124_Stochastics.pdf (noting the scientific irresponsibility of extreme source attribution conclusions, suggesting instead that examiners more conservatively acknowledge the subjectivity of their work and state only: “I have never seen, nor would I expect to see, this amount of similarity in … bullet striations… that came from different sources”).

[90] See SWGGUN & AFTE, “Response to 25 foundational firearm and toolmark examination questions received from the Subcommittee on Forensic Science (SoFS), Research, Development, Testing, & Evaluation Interagency Working Group (RDT&E IWG),” at 1 (2011) (describing firearms examination as an applied science derived from the discipline of metallurgy); Mnookin, “The Need for a Research Culture in Forensic Sciences,” at 760 (“Even those with a BS in forensic science or some other scientific discipline have not typically received significant training in the development of research design. Experience may provide the basis for determining what questions to ask, but most pattern identification analysts, even with entirely noble intentions, would not be qualified to design or develop sophisticated research projects to answer those questions”); Tobin, “Absence of Statistical and Scientific Ethos,” at 4, 26 (criticizing discipline for being “insular” and failing to consult more specialized scientific authorities).

[91] Thornton, “The General Assumptions & Rationale of Forensic Identification,” at 20 (1997) (“Behind every opinion rendered by a forensic scientist there is a statistical basis. We may not know what that basis is, and we may have no feasible means of developing an understanding of that basis, but it is futile to deny one exists.”); A.A. Biasotti, “The Principles of Evidence Evaluation as Applied to Firearms & Tool Mark Identification,” 9 J. Forensic Sci. 428, 430 (1964) (“each time we claim identity we are giving an opinion based on the concept of statistical probability, whether or not we like to admit it”).

[92] A.A. Biasotti, “The Principles of Evidence Evaluation as Applied to Firearms & Tool Mark Identification,” 9 J. Forensic Sci. 428, 430 (1964) (firearms examiners “lack necessary statistical data which would permit [them] to formulate precise criteria for distinguishing between identity and nonidentity with a reasonable degree of certainty.”); see also David Howitt et al., “A Calculation of the Theoretical Significance of Matched Bullets,” 53 J. Forensic Sci. 868 (2008) (“The statistical likelihood that a particular correspondence of the striae will occur by chance has, however, never been properly assessed.”)

[93] National Research Council, “Ballistic Imaging,” at 60.

[94] Ronald G. Nichols, “Firearm & Toolmark Identification Criteria: A Review of the Literature,” 42 J. Forensic Sci. 466 (1997); see also Eliot Springer, “Toolmark Examinations-A Review of its Development in the Literature,” 40 J. Forensic Sci. 964 (1995) (reaching the same conclusion after reviewing literature specific to toolmarks and noting that despite decades of acknowledging a lack of objective research and standards for the field “no one had developed any of the methods for proper laboratory use”).

[95] Id. at 467 (citing Alfred A. Biasotti, “A Statistical Study of the Individual Characteristics of Fired Bullets,” 4 J. Forensic Sci. 34 (1959)).

[96] Biasotti, “A Statistical Study of the Individual Characteristics of Fired Bullets,” at 34, 47-48.

[97] A.A. Biasotti, “The Principles of Evidence Evaluation as Applied to Firearms and Toolmark Identification,” 9 J. Forensic Sci. 428 (1964).

[98] Id. at 429.

[99] National Research Council, “Ballistic Imaging,” The National Academies Press, at 70-72 (2008) (attacking studies of consecutively manufactured guns for their small sample sizes and failure to consider whether sequential serial numbers actually indicate consecutive manufacture); National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, at 155 (criticizing so-called ten gun studies for “a heavy reliance on the subjective findings of examiners rather than on the rigorous quantification and analysis of sources of variability.”); Alfred A. Biasotti & John Murdock, “Criteria for Identification or State of the Art of Firearm & Toolmark Identification” 16 AFTE J. 16, 19 (1984) (“Such studies are subjective evaluations based on criteria of identification which cannot readily be articulated or communicated to other examiners except through photography. The information gained from such studies is therefore only of value to the examiner who conducted the study; or to the examiners trained or supervised by that examiner.”); William A. Tobin & Clifford Spiegelman, “Analysis of Experiments in Forensic Firearms/Toolmark Practice Offered as Support for Low Rates of Practice Error & Claims of Inferential Certainty,” 12 L., Prob., & Risk 115 (2013) (finding that substantial weaknesses, such as small sample sizes and failure to adequately consider manufacturing variables, infect six of the most common studies cited by firearms examiners in support of their practice); William Tobin & Peter Blau, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice” 53 Jurimetrics 121, 139 (2013) (“As it turns out, careful analysis for both internal and external validity of the various putative validation studies that currently exist reveals them to be nothing more than very limited proficiency tests of the participating examiners…in addition to the fact that they do not circumstantially mirror casework.”); Mark Page et al., “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?” 206 Forensic Sci. Int. 12, 15 (2011) (noting that, even if legitimate, studies of consecutively manufactured guns fail entirely to address the issue of random matching); D. Michael Risinger & Michael J. Saks, “A House with No Foundation” 20 Issues in Sci. & Tech. 1 (2003) (arguing that most research into forensic sciences has been highly partisan, effectively overbilling positive findings and hiding negative ones); Stephen G. Bunch, “Consecutive Matching Striation Criteria: A General Critique,” 45 J. Forensic Sci. 955, 961 (2000) (“But what about the CMS research that already has been conducted? Is it useful? An honest answer is that it is only marginally so.” and “The existing research findings are directly relevant for only particular barrel manufacturing methods, barrel lengths, barrel hardness, bullet hardness, and bullet surface materials...so far there has been a paucity of published, empirical validity research…drawing conclusions from the limited existing data is unjustified”).

[100] Nicholas D.K. Petraco et al., “Addressing the National Academy of Sciences’ Challenge: A Method for Statistical Pattern Comparison of Striated Tool Marks,” 57 J. Forensic Sci. 900 (2012).

[101] See PCAST, “Forensic Science in Criminal Courts,” at 125 (“we believe that the state of forensic science would be improved if papers on the foundational validity of forensic feature-comparison methods were published in leading scientific journals rather than in forensic-science journals, where, owing to weaknesses in the research culture of the forensic science community discussed in this report, the standards for peer review are less rigorous”).

[102] Compare National Commission on Forensic Science, “Scientific Literature in Support of Forensic Science and Practice,” (2015); with AFTE Editorial Committee, “Comments on NCFS Views Document: ‘Scientific Support of Forensic Science & Practice’,” 47 AFTE J. 109 (2015); and Dominic J. Denio, “The History of the AFTE Journal, the Peer Review Process, and Daubert Issues,” 34 AFTE J 210 (2003).

[103] Adina Schwartz, “Affidavit in N.C. vs. Vonzel Adams, No. 05CRS5889,” (2010), available at https://www.fd.org/docs/trainingmaterials/2010/MT2010/MT10_Firearm_Toolmark_ID.pdf.

[104] http://www.astm.org/DIGITAL_LIBRARY/JOURNALS/FORENSIC/jofs_subscription.html.

[105] The great lengths necessary to obtain the AFTE Journal present a particular problem of hypocrisy for the discipline, given that it has chided critics for allegedly failing to engage thoroughly enough with source material from that journal. See e.g., Ronald G. Nichols, “The Scientific Foundations of the Firearms & Toolmark Identification: Responding to Recent Challenges” CAC News, 2nd Quarter, at 9 (2006).

[106] See Mnookin, “The Need for a Research Culture in the Forensic Sciences,” at 755-56 (“This journal therefore appears to have extremely limited dissemination beyond the members of AFTE itself; completely lacks integration with any of the voluminous networks for the production and exchange of scientific research information; and engages in peer review that is neither blind nor draws upon an extensive network of researchers. None of this is compatible with an accessible, rigorous, transparent culture of research”); Simon A. Cole, “How Do We Trust the Scientific Literature,” in Forensic Science Research and Evaluation Workshop, at 88-89 (2015) (adopting the same assessment); Itiel Dror, “Recognition & Mitigation of Cognitive Bias in Forensic Science: From Crime Scene Investigation to Forensic Research & Literature,” in Forensic Science Research and Evaluation Workshop, at 57-58 (2015) (noting bias in research published in forensic journals without review by outside scientists because of motivation to “underpin and justify the existing practices”).

[107] AFTE, “The Response of the Association of Firearms & Tool Mark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasibility, Accuracy, & Technical Capacity of a National Ballistics Database,” 40 AFTE J at 238; Bruce Moran, “A Report on the AFTE Theory of Identification and Range of Conclusions for Tool Mark Identification & Resulting Approaches to Casework,” 34 AFTE J 227 (2002) (“the traditional ‘pattern match’ approach … relies on … the uniqueness of tool surfaces”); Alfred Biasotti & John Murdock, “The Scientific Basis of Firearms & Toolmark Identification,” at 140.

[108] See PCAST, “Forensic Science in Criminal Courts,” at 62 (“The issue is not whether objects or features differ; they surely do if one looks at a fine enough level. The issue is how well and under what circumstances examiners applying a given metrological method can reliably detect relevant differences in features to reliably identify whether they share a common source. Uniqueness studies, which focus on the properties of features themselves, can therefore never establish whether a particular method for measuring and comparing features is foundationally valid. Only empirical studies can do so.”)

[109] Michael J. Saks & Jonathan J. Koehler, “The Individualization Fallacy in Forensic Science Evidence” 61 Vand. L. Rev. 199 (2008) (explaining that “the claim of unique individuality cannot be proven with samples, especially samples that are a tiny proportion of the relevant population” and emphasizing that uniqueness “exists only in a metaphysical or rhetorical sense. It has no scientific validity, and it is sustained largely by the faulty logic that equates infrequency with uniqueness.”); Tobin, “Hypothesis Testing the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmarks Forensic Practice,” 53 Jurimetrics at 122-23 (“The cited scholarly essays suggest that forensic individualization based on the claim of uniqueness has a scientifically indefensible conceptual foundation and is a fallacy promulgated by the forensic community. The authors, and relevant mainstream scientists and colleagues with specialized forensic expertise with whom the authors have collaborated, agree.”); Mark Page, Jane Taylor, & Matt Blenkin, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?” 206 Forensic Sci. Int. 12, 13 (2011) (“Accumulation of positive instances simply cannot lead to a conclusion of certainty.”); John Thornton, “The General Assumptions & Rationale of Forensic Identification” at 12 (uniqueness does “not seem susceptible of rigorous proof. But the general principle cannot be substituted for a systematic and thorough investigation of a physical evidence category”).

[110] National Research Council, “Ballistic Imaging,” at 60.

[111] Page, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?” 206 Forensic Sci. Int. at 16; John Thornton, “The General Assumptions & Rationale of Forensic Identification” at 11.

[112] Alfred A. Biasotti & John Murdock, “Criteria for Identification or State of the Art of Firearm & Toolmark Identification,” 16 AFTE J. 16 (1984).

[113] Page, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?” 206 Forensic Sci. Int. at 15; see also Saks, “The Individualization Fallacy in Forensic Science Evidence” 61 Vand. L. Rev. at 208-09 (noting lack of science behind uniqueness concept: “various arguments have been offered on behalf of the individualization hypothesis. None are scientifically compelling. Some arguments rely on the metaphysical notion that because no two objects can be the same object, they will inevitably manifest observable differences. Some rely on appeals to venerated authority (dead members of our field said it was so), contemporary authority (living members of our field say it is so), wishful thinking (because object variability has been observed, there will always be discernible differences between any two objects), or the personal experience of practitioners (as if by doing casework on pairs of objects the nature of the population and relationships within that population are revealed). These approaches amount to nothing more than faith and intuition.”).

[114] Alfred A. Biasotti, “A Statistical Study of the Individual Characteristics of Fired Bullets,” 4 J. Forensic Sci. 34, 40 (1959).

[115] Michael J. Saks & Jonathan J. Koehler, “The Coming Paradigm Shift in Forensic Identification Science,” 309 Science 892 (2005); see also Page, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?” 206 Forensic Sci. Int. at 17 (same).

[116] Simon A. Cole, “Forensic Statistics, Part II ‘Implicit Testing’: Can Casework Validate Forensic Techniques?,” 46 Jurimetrics J. 117, 128 (2006).

[117] Id. at 122; see also Tobin, “Absence of Statistical and Scientific Ethos,” at 20, 21.

[118] See PCAST, “Forensic Science in Criminal Courts,” at 55 (“By contrast, ‘experience’ or ‘judgment’ cannot be used to establish the scientific validity and reliability of a metrological method, such as a forensic feature-comparison method. The frequency with which a particular pattern or set of features will be observed in different samples, which is an essential element in drawing conclusions, is not a matter of ‘judgment.’ It is an empirical matter for which only empirical evidence is relevant. Moreover, a forensic examiner’s ‘experience’ from extensive casework is not informative—because the ‘right answers’ are not typically known in casework and thus examiners cannot accurately know how often they erroneously declare matches and cannot readily hone their accuracy by learning from their mistakes in the course of casework. Importantly, good professional practices—such as the existence of professional societies, certification programs, accreditation programs, peer-reviewed articles, standardized protocols, proficiency testing, and codes of ethics—cannot substitute for actual evidence of scientific validity and reliability. Similarly, an expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies. For a method to be reliable, empirical evidence of validity, as described above, is required”); Mnookin, “The Need for a Research Culture in Forensic Sciences,” at 745-48 (similarly rejecting experience and longstanding use as surrogates for appropriately conducted research).

[119] Page, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?,” 206 Forensic Sci. Int. at 13; see also William Tobin, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” 53 Jurimetrics at 134 (questioning whether “cognitive retention and subsequent recollection of the spatial relationships (patterns) of the tens of millions of nondescript lines on the many thousands or millions of specimens over a lengthy period of time was humanly possible”); Simon A. Cole, “Forensic Statistics, Part II ‘Implicit Testing’: Can Casework Validate Forensic Techniques?,” 46 Jurimetrics J. 117, 123 (2006) (“Casework does not focus on searching the database for exact duplicates to either the mark or the prints. Moreover, even if latent-print examiners were searching for duplicate fingerprints, the sheer number of possible combinations would render it extremely unlikely that duplicates would be found if they did exist.”)

[120] Simon A. Cole, “Forensic Statistics, Part II ‘Implicit Testing’: Can Casework Validate Forensic Techniques?,” 46 Jurimetrics J. 117, 123 (2006).

[121] Page, “Uniqueness in the Forensic Identification Sciences-Fact or Fiction?,” 206 Forensic Sci. Int. at 14.

[122] Saks, “The Individualization Fallacy in Forensic Science Evidence,” 61 Vand. L. Rev. at 213.

[123] William Tobin, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” 53 Jurimetrics at 134.

[124] As a final indicator of the inadequacy of training, this Court should consider PCAST’s statement that: “The mere fact that an individual has been trained in a method does not mean that the method itself is scientifically valid nor that the individual is capable of producing reliable answers when applying the method.” PCAST, “Forensic Science in Criminal Courts,” at 61.

[125] William J. Krouse, Congressional Research Service, “Gun Control Legislation,” at 8 (2012), available at https://fas.org/sgp/crs/misc/RL32842.pdf.

[126] This brief, moreover, does not even have the scope necessary to discuss other sources of potential confusion, such as the effects of finishing processes and bullet velocity. See John Fowler & Dave Brundage, “The Effects of Velocity on Bullet Striations” 15 AFTE J. 56 (1983); Jessica A. Winn, “The Effect of Vibratory Finishing on Broaching Marks as a Function of Time,” 45 AFTE J. 350 (2013).

[127] Bieber, “Fire Investigation and Cognitive Bias,” Encyclopedia of Forensic Science (2014) (“Cognitive bias is the tendency for an examiner to believe and express data that confirm their own expectations and to disbelieve, discard, or downgrade the corresponding data that appear to conflict with those expectations. The observer’s conclusions become contaminated with a pre-existing expectation and perception, reducing the observer’s objectivity and laying the groundwork for selective attention to evidence.”)

[128] Wilson & Brekke, “Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations,” Psychological Bulletin, 116, p. 119 (1994).

[129] Working Group on Human Factors in Latent Print Analysis, “Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach,” National Institute of Justice, at 10 (2012).

[130] Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, p. 177 (1998).

[131] Id. at 175; see also Evans, “Bias in Human Reasoning: Causes and Consequences,” Psychology Press, at 41 (1989) (describing cognitive bias as “the best known and most widely accepted notion of inferential error to come out of the literature on human reasoning.”); National Commission on Forensic Science, “Ensuring that Forensic Analysis is Based on Task-Relevant Information,” at 4 (2015) (“Contextual bias is not a problem that is unique to forensic science. It is a universal phenomenon that affects decision making by people from all walks of life and in all professional settings”).

[132] Cipriano & Gruca, “Power of Priors: How Confirmation Bias Impacts Market Prices,” Journal of Predictive Markets, Vol. 8 (2014).

[133] Rosenthal & Jacobson, “Pygmalion in the Classroom,” Crown House Publishing (1992); Rosenthal, “How Often are Numbers Wrong?” American Psychologist, at 1005-1008 (1978); Cahen, “An Experimental Manipulation of the Halo Effect: A Study of Teacher Bias,” Stanford University manuscript (1965).

[134] Claudia Goldin & Cecilia Rouse, “Orchestrating Impartiality: The Impact of ‘Blind’ Auditions on Female Musicians,” 90 Am. Econ. Rev. 715 (2000) (using a screen to blind judges to the gender of a musician significantly increased the probability of female hires into orchestras).

[135] Rosenthal, “How Often Are Our Numbers Wrong?,” American Psychologist, at 1005-1008 (1978) (meta-study looked at 140,000 findings in published scientific data, and found that the data was systematically infected by cognitive bias in favor of the preferred hypotheses); Nuzzo, “How Scientists Fool Themselves-And How They Can Stop,” Nature (October 7, 2015) (detailing the failure of reproducibility in many areas of scientific research, attributable to cognitive bias); Ioannidis, “Why Most Published Research Findings Are False,” PLOS Medicine (2005) (finding significant bias in the methodology and publication of scientific research).

[136] Hrobjartsson et al., “Observer Bias in Randomized Clinical Trials With Measurement Scale Outcomes: A Systematic Review of the Trials with Both Blinded and Nonblinded Assessors,” Canadian Medical Association Journal, at p. 201 (2013) (establishing the effects of observer bias in clinical trials and concluding that “failure to blind outcome assessors in such trials results in a high risk of substantial bias”).

[137] Working Group on Human Factors in Latent Print Analysis, “Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach,” National Institute of Justice, at 40 (2012); see also Forensic Science Regulator, “Draft Guidance: Cognitive Bias Effects Relevant to Forensic Science Examinations,” at 4 (2014) (emphasizing that every forensic discipline is “potentially susceptible to unconscious personal bias (cognitive contamination), which in turn could undermine the objectivity and impartiality of the forensic process.”).

[138] Lit, L., Schweitzer, J., & Oberbauer, A., “Handler Beliefs Affect Scent Detection Dog Outcomes,” 14 Animal Cognition 387 (2011).

[139] Nakhaeizadeh et al., “Cognitive Bias in Forensic Anthropology: Visual Assessment of Skeletal Remains is Susceptible to Confirmation Bias,” 54 Science and Justice 208 (2014).

[140] Bieber, “Fire Investigation and Cognitive Bias,” Encyclopedia of Forensic Science (2014); see also NFPA, “Guide for Fire and Explosion Investigations” (2014) (formally recognizing the effects of expectation bias and confirmation bias in forensic arson investigations).

[141] Jeff Kukucka & Saul Kassin, “Do Confessions Taint Perceptions of Handwriting Evidence? An Empirical Test of the Forensic Confirmation Bias,” Am. Psych. Assoc. (2013); Reinoud D. Stoel et al., “Bias Among Forensic Document Examiners: Still a Need For Procedural Changes,” Australian J. Forensic Sci. (2013).

[142] Miller, “Procedural Bias in Forensic Science Examinations of Human Hair,” 11 L. & Hum. Behav., 157 (1987).

[143] Nikola K.P. Osborne et al., “Does Contextual Information Bias Bitemark Comparisons,” 54 Science & Justice 267 (2014); Mark Page et al., “Context Effects & Observer Bias-Implications in Forensic Odontology,” 57 J. Forensic Sci. 108 (2012).

[144] Michael C. Taylor, “The Reliability of Pattern Classification in Bloodstain Pattern Analysis, Part I: Bloodstain Patterns on Rigid Non-Absorbent Surfaces,” J. Forensic Sci. 1 (2016).

[145] Dror & Charlton, “Why Experts Make Errors,” 56 J. Forensic Identification, at 600 (2006); Dror et al., “Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications,” 156 Forensic Sci. Int’l 74 (2006); Working Group on Human Factors in Latent Print Analysis, “Latent Print Examination and Human Factors: Improving the Practice through a Systems Approach,” National Institute of Justice, at 20 (2012) (noting that “bias and error can occur in any process for making comparisons and drawing inference,” and further stating that “in the context of latent print examination, they can play a role in the final decision by an examiner”).

[146] Dror & Hampikian, “Subjectivity and Bias in Forensic DNA Mixture Interpretation,” 51 Science and Justice 204 (2011).

[147] National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, at 122 (2009).

[148] National Commission on Forensic Science, “Ensuring that Forensic Analysis is Based on Task-Relevant Information,” at 4 (2015); see also M.J. Saks et al., “Context Effects in Forensic Science: A Review & Application of the Science of Science to Crime Laboratory Practice in the United States,” 43 Science & Justice 77, 78 (2003).

[149] Itiel Dror, “The Paradox of Human Expertise: Why Experts Get it Wrong,” in The Paradoxical Brain (2011).

[150] See Nuzzo, “How Scientists Fool Themselves-And How They Can Stop,” Nature (October 7, 2015) (reporting on blind data analysis in physics); Wilson & Brekke, “Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations,” Psychological Bulletin, 116, p. 134 (1994) (discussing blinding and exposure control procedures); MacCoun & Perlmutter, “Blind Analysis: Hide Results to Seek the Truth,” Nature (October 7, 2015).

[151] National Academy of Sciences, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, 2009, at 8-9, n.8.

[152] Id. at 184-85.

[153] National Commission on Forensic Science, “Ensuring that Forensic Analysis is Based on Task-Relevant Information,” (2015); Dan Krane et al., “Sequential Unmasking: A Means of Minimizing Observer Effects in Forensic DNA Interpretation,” 53 J. Forensic Sci. 1006 (2008); Itiel Dror, “Combating Bias: The Next Step in Fighting Cognitive and Psychological Contamination,” 57 J. Forensic Sci. 276 (2011); PCAST, “Forensic Science in Criminal Courts,” at 49, 51 (concluding that firearms examination is “especially vulnerable to human error, inconsistency across examiners, and cognitive bias” & noting that “Several strategies have been proposed for mitigating cognitive bias in forensic laboratories, including managing the flow of information in a crime laboratory to minimize exposure of the forensic analyst to irrelevant contextual information (such as confessions or eyewitness identification) and ensuring that examiners work in a linear fashion, documenting their findings about evidence from crime scenes before performing comparisons with samples from a suspect”).

[154] Organization of Scientific Area Committees, “OSAC Research Needs Assessment Form - Cognitive Bias: To What Extent Does it Affect Firearm and Toolmark Comparison Outcomes,” available at http://www.nist.gov/forensics/osac/upload/FATM-Research-Needs-Assessment_Cognitive-Bias.pdf.

[155] Jose Kerstholt, et al, “Does Suggestive Information Cause a Confirmation Bias in Bullet Comparisons,” 198 Forensic Sci. Int’l 138, 139, 141 (2010).

[156] Id. at 140, Table 2.

[157] William C. Smith, “Who Me … Biased? Or ‘We Have Met the Enemy, and He is Us!’,” 25 AFTE J 260 (1993); Evan E. Hodge, “Guarding Against Error,” 20 AFTE J. 290 (1988) (acknowledging that all examiners are affected by outside pressures and noting an examination that occurred as a result); Budowle, “A Perspective on Errors, Bias, & Interpretation in the Forensic Sciences and Direction for Continuing Advancement,” at 803 (cognitive bias “might override sound judgment, may affect interpretations in certain circumstances, and need to be minimized”).

[158] Daniel C. Murrie, “Are Forensic Experts Biased by the Side That Retained Them?,” 24 Psych. Sci. 1889 (2013); Michael Risinger et al., “The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion,” 90 Calif. L. Rev. 1, 48 (2002) (noting that in a crime lab study “fewer than 10% of all reports disassociated a suspect from the crime scene or from connection to the victim”).

[159] Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” at 181. The effect of comparing known and questioned items side by side has been studied in fingerprint comparison, where researchers have concluded that the presence of a suspect’s prints will meaningfully change an examiner’s interpretation of a latent print, potentially causing the examiner to see ridge patterns that are not actually present or to interpret ambiguous patterns in an inculpatory rather than exculpatory fashion. See Dror et al., “Cognitive issues in fingerprint analysis: inter- and intra-expert consistency and the effect of a ‘target’ comparison,” 208 Forensic Sci. Int’l 10 (2011).

[160] United States v. Green, 405 F. Supp. 2d 104, 108 (D. Mass. 2005); see also United States v. Taylor, 663 F. Supp. 2d 1170, 1178-79 (D.N.M. 2009) (“The problem with this practice is the same kind of problem that has troubled courts with respect to show-up identifications of people: it creates a potentially significant ‘observer effect’ whereby the examiner knows that he is testing a suspect weapon and may be predisposed to find a match.”); Michael Risinger et al., “The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion,” 90 Calif. L. Rev. 1, 48 (2002).

[161] People v. Carrero, 345 Ill. App. 3d 1, 10 (1st Dist. 2003); see also Stovall v. Denno, 388 U.S. 293 (1967).

[162] National Commission on Forensic Science, “Ensuring that Forensic Analysis is Based on Task-Relevant Information,” at 4.

[163] See, e.g., Ronald G. Nichols, “The Scientific Foundations of the Firearms & Toolmark Identification: Responding to Recent Challenges,” CAC News, 2nd Quarter, at 26 (2006) (noting the difficulties examiners face when damage from impact on bullets makes correspondence borderline at best).

[164] See People v. Robinson, 2013 IL App (1st) 102476 (2013) (ISP examiner testified that “the increased automation in firearms manufacture since 1929 creates potential carryover in subclass characteristics”).

[165] F.H. Cassidy, “Examination of Toolmarks from Sequentially Manufactured Tongue-and-Groove Pliers,” 25 J. Forensic Sci. 796, 797 (1980); see also William A. Tobin, “Affidavit in Virginia v. Macumber,” at 11-12 (2011) (“The effect of these motivating concerns [costs of manufacture] has been increasingly larger production lots before tooling changes are required. This consequently means that the subclass characteristics (toolmarks) imparted to workpieces such as barrels, extractors, ejectors, and breech faces during production have tended to exist in larger production lots over time.” & “It can be expected that consecutively formed components could readily be confused in specific source attributions particularly when the examinations are temporally isolated”).

[166] Gil Hocherman et al., “Identification of Polygonal Barrel Sub-Family Characteristics,” 35 AFTE J 197, 200 (2003) (even after specialized training examiners erred up to 20% of the time identifying even the manufacturer of polygonally rifled barrels); National Research Council, “Strengthening Forensic Science in the United States: A Path Forward,” National Academies Press, 2008, at 46 (“generally speaking it is possible, although extremely difficult, to match bullets from polygonally rifled barrels.”); Jan DeKinder et al., “Reference Ballistics Imaging Database Performance,” 140 Forensic Sci. Int’l 207, 213 (2003).

[167] Joan Griffin & David LaMagna, “Daubert Challenges to Forensic Evidence: Ballistics Next on the Firing Line,” The Champion, September/October, at 58-59 (2002).

[168] M.S. Bonfanti & J Dekinder, “The Influence of Manufacturing Processes on the Identification of Bullets & Cartridge Cases- A Review of the Literature” 39 Sci. & Justice 3, 4 (1999).

[169] John Murdock, “Some Suggested Court Questions to Test Criteria for Identification Qualifications,” 24 AFTE J 69, 70-71 (1992).

[170] David Baldwin et al., “Statistical Tools for Forensic Analysis of Toolmarks,” AMES Laboratory Final Report IS-5160, at 24 (2004).

[171] Alfred A. Biasotti, “A Statistical Study of the Individual Characteristics of Fired Bullets,” 4 J. Forensic Sci. 34, 38 (1959).

[172] Id. at 40.

[173] Tobin & Blau, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” at 136; see also Jerry Miller & Michael Neel, “Criteria for Identification of Toolmarks Part III: Supporting the Conclusion,” 36 AFTE J 7, 9 (2004) (reporting that even more significant matching has been observed by the author in known non-matches).

[174] Adina Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” 6 Colum. Sci. & Tech. L. Rev. 2 (2005).

[175] Data for both graphs available in: Jerry Miller, “Criteria for Identification of Toolmarks Part II: Single Land Impression Comparisons,” 32 AFTE J 116, at 117, 121, & 124 (2000).

[176] Joseph J. Masson, “Confidence Level Variations In Firearms Identifications Through Computerized Technology,” 29 AFTE J. 42 (1997).

[177] Id.

[178] T.V. Vorburger et al., “Surface Topography Analysis for a Feasibility Assessment of a National Ballistics Imaging Database,” National Institute of Standards & Technology Interagency/Internal Report (NISTIR) – 7362, at 94 (May 1, 2007), available at http://www.nist.gov/manuscript-publication-search.cfm?pub_id=822733.

[179] Jerry Miller & Glen Beach, “Toolmarks: Examining the Possibility of Subclass Characteristics” 37 AFTE J 296 (2005).

[180] AFTE, “Theory of Identification as it Relates to Toolmarks,” 30 AFTE J. 86 (1998); Alfred Biasotti & John Murdock, “Criteria for Identification or State of the Art of Firearm & Toolmark Identification,” 16 AFTE J 16, 17 (1984); see also Adina Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” 6 Colum. Sci. & Tech. L. Rev. 2 (2005) (“Other manufacturing processes result in batches of such similar tools that their toolmarks have the same subclass characteristics, and may or may not also have individual characteristics”); David Q. Burd & Allan E. Gilmore, “Individual and Class Characteristics of Tools,” 13 J. Forensic Sci. 390 (1968) (“Modern mass production methods used in industry often result in repetitive structural detail being left on tool surfaces. This is particularly true when such tools are formed in a mold, die stamped, or die forged”).

[181] Alfred Biasotti & John Murdock, “Criteria for Identification or State of the Art of Firearm & Toolmark Identification,” 16 AFTE J 16, 18 (1984) (“we can have remarkable reproduction on many hundred or even thousands of individual items”); M.S. Bonfanti & J Dekinder, “The Influence of Manufacturing Processes on the Identification of Bullets & Cartridge Cases- A Review of the Literature” 39 Sci. & Justice 3, 5 (1999) (noting that one tool, thanks to manufacturing improvements, may now make batches of hundreds or thousands of barrels); Gene C. Rivera, “Subclass Characteristics in Smith & Wesson SW40VE Sigma Pistols” 39 AFTE J 247, 250 (2007) (“anywhere between a couple of hundred to one thousand slides could be machined before the broach is resharpened”).

[182] Adina Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” 6 Colum. Sci. & Tech. L. Rev. 2 (2005).

[183] William A. Tobin, “Affidavit in Virginia v. Macumber,” at 8.

[184] Top Left: Frederic A. Tulleners & James S. Hamiel, “Subclass Characteristics of Sequentially Rifled 38 Special S & W Revolver Barrels,” 31 AFTE J. 117 (1999); Top Right: http://projects.nfstc.org/firearms/module13/fir_m13_t05_04.htm; Middle Left: Evan Thompson, “Possible NFEA Research Project,” AFTE Forum (2011); Middle Right: Gene C. Rivera, “Subclass Characteristics in Smith & Wesson SW40VE Sigma Pistols,” 39 AFTE J 247 (2007); Bottom Left: Michael Lee et al., “Subclass Carryover in Smith & Wesson M&P 15-22 Rifle Firing Pins,” 48 AFTE J. 27 (2016); Bottom Right: F.H. Cassidy, “Examination of Toolmarks from Sequentially Manufactured Tongue-and-Groove Pliers,” 25 J. Forensic Sci. 796 (1980).

[185] AFTE Committee for the Advancement of the Science of Firearm & Toolmark Identification, “The Response of the Association of Firearm and Toolmark Examiners to the National Academy of Sciences 2008 Report Assessing the Feasibility, Accuracy, and Technical Capability of a National Ballistics Database,” 40 AFTE J 234, 239 (2008).

[186] AFTE, “Theory of Identification as it Relates to Toolmarks” 30 AFTE J. 86 (1998).

[187] Gene C. Rivera, “Subclass Characteristics in Smith & Wesson SW40VE Sigma Pistols” 39 AFTE J 247 (2007).

[188] Ronald G. Nichols, “Defending the Scientific Foundations of the Firearms & Toolmark Identification Discipline: Responding to Recent Challenges,” 52 J. Forensic Sci. 586, 587 (2007).

[189] Ronald Nichols, “Firearm and Tool Mark Identification: The Scientific Reliability and Validity of the AFTE Theory of Identification Discussed Within the Framework of a Study of Ten Consecutively Manufactured Extractors,” 36 AFTE J 67, 77 (2004).

[190] William A. Tobin, “Affidavit in Virginia v. Macumber,” at 9; see also Clifford Spiegelman & William Tobin, “Analysis of Experiments in Forensic Firearms/Toolmarks Practice Offered as Support for Low Rates of Practice Error and Claims of Inferential Certainty,” 12 Law, Probability, & Risk 115, 128 (2013) (“...one of the authors with relevant manufacturing experience has observed that the majority of manufacturing marks (other than grinding) imparted to work pieces are subclass in nature.”).

[191] William Matty & Torrey Johnson, “A Comparison of Manufacturing Marks on Smith & Wesson Firing Pins,” 16 AFTE J 51 (1984) (describing concentric rings left by firing pins that would be common to all pins produced by the same tool); Evan Thompson, “False Breech Face ID’s,” 28 AFTE J. 95 (1996) (“an examiner could miscall an identification based only on breechface markings” of Lorcin handguns); M.S. Bonfanti & J Dekinder, “The Influence of Manufacturing Processes on the Identification of Bullets & Cartridge Cases- A Review of the Literature,” 39 Sci. & Justice 3, 5 (1999) (because of subclass marks “a correct identification of the firearm on basis of breech face and firing pin impressions, respectively, turned out to be hardly possible”); Michael Lee et al., “Subclass Carryover in Smith & Wesson M&P 15-22 Rifle Firing Pins,” 48 AFTE J. 27, 29 (2016) (for firing pins “a false-positive identification could be made if no other marks were utilized in making the identification”); Vyacheslav Polosin, “Subclass Characteristics in Extractor Groove of Winchester Cartridges,” 48 AFTE J. 50 (2016); Alicia K. Welch, “Breech Face Subclass Characteristics of the Jiminez JA Nine Pistol,” 45 AFTE J. 336, 343 (Fall 2013) (calling breechface similarity due to subclass a “startling observation”); Frederic A. Tulleners & James S. Hamiel, “Subclass Characteristics of Sequentially Rifled 38 Special S & W Revolver Barrels,” 31 AFTE J 117 (1999) (“If these striae were not caused by subclass features of the rifling tool, the extent of this agreement would be sufficient for an identification.”); Ronald Nies, “Anvil Marks of the Ruger MKII Target Pistol-An Example of Subclass Characteristics,” 35 AFTE J 75 (2003) (“A surprisingly high degree of agreement could be found...even when the magnification was increased to 79x, enough agreement of the fine detail was present to possibly lead to the mistaken conclusion that the two cartridge cases could have been fired in the same barrel”); Patrick D. Ball, “Toolmarks Which May Lead to False Conclusions,” 32 AFTE J 292 (2000) (pre-firing marks on cartridges “could easily be identified as breechface impressions”); Susan M. Komar & Gregory E. Scala, “Examiners Beware New Bolt Cutter Blades-Class or Individual,” 25 AFTE J 298 (1993) (subclass correspondence “could easily be mistaken for true matches”); Steve Kramer, “Subclass Characteristics on Firing Pins Manufactured by ‘Metal Injection Molding’,” 44 AFTE J 364 (2012); Salvatore LaCova et al., “Subclass Characteristics on CCI Speer Cartridge Case Heads,” 42 AFTE J 281 (2010); Laura L. Lopez & Sally Grew, “Consecutively Machined Ruger Bolt Faces,” 32 AFTE J 19 (2000) (“Comparisons revealed a startlingly high correspondence of microscopic characteristics among the bolt faces examined”); Gene C. Rivera, “Subclass Characteristics in Smith & Wesson SW40VE Sigma Pistols,” 39 AFTE J 247 (2007) (“This article documents an alarming example of subclass characteristics that could easily be mistaken for individual characteristics, and might lead an examiner to make a false positive identification”); Peter Lardizabal, “Cartridge Case Study of the Heckler & Koch USP,” 27 AFTE J 49 (1995) (noting “excellent correspondence” between breech face markings from different guns); E.J.A.T. Mattijssen et al., “Subclass Characteristics in a Gamo Air Rifle Barrel,” 45 AFTE J 281 (2013); Evan Thompson, “Possible NFEA Research Project,” AFTE Forum (2011) (describing subclass markings on land impressions of Ruger and Winchester firearms) available at http://forum.afte.org/index.php?topic=7455.0; National Forensic Science Technology Center, “Firearms Examiner Training: Physical Characteristics,” available at http://projects.nfstc.org/firearms/module13/fir_m13_t05_04.htm (same); Jason Flater, “Manufacturing Marks on Winchester USA Brand 9mm Luger Primers,” 34 AFTE J 315 (2002) (describing subclass marks on unfired cartridges); Bill Matty, “Lorcin L9mm & L380 Pistol Breechface Toolmark Patterns,” 31 AFTE J 134 (1999) (noting that because of subclass issues, breechfaces of Lorcin-fired cartridges alone are insufficient for identification); Evan Thompson & Rick Wyant, “9mm Smith & Wesson Ejectors,” 34 AFTE J 406 (2002) (because of subclass markings “more than just an ejector toolmark must be used before making an identification to a particular firearm”); Michelle Hunsinger, “Metal Injection Molded Strikers & Extractors in Smith & Wesson Model M&P Pistol,” 45 AFTE J 21 (2013) (noting especial concern after finding subclass marks because “the corresponding areas are small and irregular; not what examiners are taught to be subclass”); Fabiano Riva, “Objective Evaluation of Subclass Characteristics on Breech Face Marks,” 62 J. Forensic Sci. 417 (2017) (“recognizing subclass characteristics is not an easy task, and some have rightly indicated that the ability of examiners to detect them is not well established”).

[192] Stephen R. Garten, “The Effect of Subclass Characteristics Involving Shotgun Ammunition on IBIS Entries & Correlation Results,” 42 AFTE J 364 (2010).

[193] See e.g., Michael Lee et al., “Subclass Carryover in Smith & Wesson M&P 15-22 Rifle Firing Pins,” 48 AFTE J. 27 (2016); Vyacheslav Polosin, “Subclass Characteristics in Extractor Groove of Winchester Cartridges,” 48 AFTE J. 50 (2016); Alicia K. Welch “Breech Face Subclass Characteristics of the Jiminez JA Nine Pistol,” 45 AFTE J. 336 (2013).

[194] Alfred Biasotti & John Murdock, “Criteria for Identification or State of the Art of Firearm & Toolmark Identification,” 16 AFTE J 16, 18-19 (1984) (“Because what would constitute these subclass features is a function of the relative hardness of the tool, the material, and the dynamics of the cutting process, it is not currently possible to describe them in quantitative terms.”); Adina Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” 6 Colum. Sci. & Tech. L. Rev. 2 (2005) (noting that firearms examiners have no rules or statistics for the frequency of subclass marks, how they can be identified, or how long they may last, so that “examiners can only rely on their personal familiarity with types of forming and finishing processes and their reflections in toolmarks.”); William A. Tobin, “Affidavit in Virginia v. Macumber,” at 9 (“The AFTE theory provides no guidance on this question”).

[195] See, e.g., Richard K. Maruoka, “Guilty Before the Crime? The Potential for a Possible Misidentification or Elimination,” 26 AFTE J 206 (1994).

[196] See, e.g., Richard K. Maruoka, “Guilty Before the Crime II?,” 27 AFTE J 20 (1995); Patrick D. Ball, “Toolmarks Which May Lead to False Conclusions,” 32 AFTE J 292, 293 (2000) (“Those markings are only around the outer circumference of the primer, the center area was free of toolmarks. This is different from other reported manufacturing toolmarks on primers previously reported on in past issues of the journal”); E.J.A.T. Mattijssen et al., “Subclass Characteristics in a Gamo Air Rifle Barrel,” 45 AFTE J 281 (2013); Evan Thompson, “Possible NFEA Research Project,” AFTE Forum (2011); Hunsinger, “Metal Injection Molded Strikers & Extractors in Smith & Wesson Model M&P Pistol,” 45 AFTE J 21 (2013).

[197] Ronald G. Nichols, “Response to- Adina Schwartz, Commentary on: Defending the Scientific Foundations of the Firearms & Toolmark Identification Discipline: Responding to Recent Challenges,” 52 J. Forensic Sci. 1416 (2007).

[198] Petra Pauw-Vugts, et al, “FAID2009: Proficiency Test & Workshop,” 45 AFTE J 115, 124 & 126 (2013).

[199] William A. Tobin, “Affidavit in Virginia v. Macumber,” at 15.

[200] Michael J. Saks & Jonathan L. Koehler, “The Coming Paradigm Shift in Forensic Identification Science,”  309 Science 892  (2005) (“Erroneous forensic science expert testimony is the second most common contributing factor to wrongful convictions, found in 63% of those cases. These data likely understate the relative contribution of forensic science expert testimony to erroneous convictions.”); ABA Criminal Justice Section’s Ad Hoc Committee to Ensure the Integrity of the Criminal Process, “Achieving Justice: Freeing the Innocent & Convicting the Guilty,” (2006) (reporting that 1/3 of DNA exonerations resulted from tainted or fraudulent science); Craig Cooley & Gabriel Oberfield, “Symposium: Daubert, Innocence, and the Future of Forensic Science: Increasing Forensic Evidence’s Reliability and Minimizing Wrongful Convictions: Applying Daubert Isn’t the Only Problem,” 43 Tulsa L. Rev. 285 (2007) (describing similar percentages); PCAST, “Forensic Science in Criminal Courts,” at 3.

[201] Tasha P. Smith et al., “A Validation Study of Bullet & Cartridge Case Comparisons Using Samples Representative of Actual Casework,” 69 J. Forensic Sci. 939 (2016) (“The ‘human factor’ in identification accounts for tremendous variability in analysis”); Angela Stroman, “Empirically Determined Frequency of Error in Cartridge Case Examinations Using a Declared Double-Blind Format,” 46 AFTE J. 157, 169-70 (2014) (documenting variations in the types of marks used by examiners during the study to reach cartridge identifications as well as differing levels of knowledge concerning subclass characteristics); R.G. Nichols, “The Scientific Foundations of the Firearms & Toolmark Identification: Responding to Recent Challenges,” CAC News, 2nd Quarter, at 26 (2006) (“In the absence of a specific criterion such as CMS, there will be some difference between examiners as to what constitutes the best-known non-match situation…it is not necessarily unexpected that one examiner would reach an inconclusive determination while another might conclude a more positive association.”); Kerstholt, “Does Suggestive Information Cause a Confirmation Bias in Bullet Comparisons,” at 141 (when two sets of examiners evaluated the same ballistic evidence in 128 cases, they did not agree 16% of the time).

[202] Ronald G. Nichols, “Defending the Scientific Foundations of the Firearms & Toolmark Identification Discipline: Responding to Recent Challenges,” 52 J. Forensic Sci. 586, 590 (2007) (“while the concept of correspondence exceeding that observed in a best-known nonmatch situation is a standard ideal, the actual definition of that will be different between examiners because they have different experiences. For example, an examiner in California has access to certain training materials dealing with comparing known nonmatches that establish a baseline correspondence. It is very likely that an examiner in the Northeast has different materials and will therefore develop a different experiential concept of the best-known nonmatch.”)

[203] Werner Deinet, “Comments of the Application of Theoretical Probability Models Including Bayes Theorem in Forensic Science Relating Firearm and Tool Marks,” 39 AFTE J 4, 6 (2007) (“Very often, two independent experts will get different results concerning the total number of striae and the number of matching striae”); Schwartz, “A Systemic Challenge to the Reliability & Admissibility of Firearms & Toolmark Identification,” at  ¶42 (“different, well-qualified examiners are likely to count different numbers of striae on the same toolmark. This creates the possibility that different experienced examiners will reach different conclusions about whether the same toolmarks satisfy or do not satisfy the CMS criteria.”)

[204] Jerry Miller, “Criteria for Identification of Toolmarks Part III: Supporting the Conclusion,” 36 AFTE J 7, 9 (2004). Given that the total lines observed on any one surface reached only as high as 58, see id., differences in count between examiners of up to 21 and 23 lines surely qualify as substantial.

[205] See, e.g., Ulery et al., “Repeatability and Reproducibility of Decisions By Latent Print Examiners,” Proceedings of the National Academy of Sciences (2012) (latent print examiners disagreed with each other about 50% of the time on difficult cases, and about 20% of the time on easier cases; examiners changed their own opinion about 30% of the time when taking a second look at fingerprint evidence identified as more difficult and about 10% of the time on easier cases); Neumann et al., “Improving the Understanding and the Reliability of the Concept of ‘Sufficiency’ in Friction Ridge Examination,” U.S. Department of Justice, p. 56 (2013) (146 fingerprint examiners agreed regarding the suitability of prints for comparison on none of 15 sets of latent prints analyzed); Ulery et al., “Changes in Latent Fingerprint Examiners’ Markup Between Analysis and Comparison,” 247 Forensic Sci. Int’l 58 (2015) (fingerprint examiners agreed on just 23% of features in clear areas of latent prints when reaching association conclusions).

[206] PCAST, “Forensic Science in Criminal Courts,” at 57-59 (criticizing existing tests as not being blind and noting that even the designer of CTS testing admits that they are designed to be simplistic because the forensic industry prefers them that way); William Tobin & Peter Blau, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” 53 Jurimetrics 121, 137 (2013); Adina Schwartz, “Challenging Firearms and Toolmark Identification- Part Two,” The Champion XXXII (9): 44-52, 47 (2008) (explaining that many CTS tests do not even require examiners to distinguish between guns of the same make or model and are described as incredibly easy by examiners, one of whom noted that he could complete the test “virtually with the naked eye”); Richard Grzybowski et al., “Firearm ToolMark Identification: Passing the Reliability Test Under Federal and State Evidentiary Standards,” 35 AFTE J 209, 219 (2003) (“most [error rate tests] tend to be rather straightforward and of only moderate difficulty”); Michael J. Saks & Jonathan L. Koehler, “The Coming Paradigm Shift in Forensic Identification Science,” 309 Science 892 (2005) (describing proficiency testing as “infrequent, internal, and unrealistic”); Giannelli, “Reference Guide on Forensic Identification Expertise,” at 98 (tests are not representative or blind, error rates vary based on counting of inconclusive responses, and the rigor of exams has been questioned); Peterson & Markham, “Crime Laboratory Proficiency Testing Results, 1978-1991, I: Identification & Classification of Physical Evidence,” J. Forensic Sci. 994, 997 (1995); Angela Stroman, “Empirically Determined Frequency of Error in Cartridge Case Examinations Using a Declared Double-Blind Format,” 46 AFTE J.
157, 158 (2014) (When examiners knows they are being tested: “The examiner may feel more pressure to perform accurately and get the correct answer, which may lead the examiner to treat the test in a different manner than real casework” moreover notes of CTS tests that they are often taken collaboratively “typically do not accurately reflect the full range of difficulty experienced in real-life firearm and toolmark casework”); AFTE Committee for the Advancement of the Science of Firearm & Toolmark Identification, “AFTE Response to the NACDL Task Force on the Future of Forensic Science,” 42 AFTE J 102, 103 (2010) (says of CTS tests: “could be more representative of casework compared to how most of them are currently prepared and administered”); Donald Kennedy & Richard A. Merrill “Assessing Forensic Science” Issues in Sci. & Tech.: 20, 1 (Fall 2003) (“practitioners have not been subjected to rigorous proficiency testing, reliable error rates are not known”); Petra Pauw-Vugts, “FAID2009: Proficiency Test & Workshop,” at 117 (“Most of the participating examiners already participated in the CTS collaborative testing program, but found this test insufficiently challenging to be of use in demonstrating competence in microscopy skills”); Michigan State Police Forensic Science Division, “Audit of the Detroit Police Department Forensic Services Laboratory Firearms Unit,” at 27 (2008) (proficiency tests are taken as a group with consensus answers submitted to the test provider, management cannot determine an individual examiner’s proficiency level.”), available at http://www.sado.org/content/pub/10559_MSP-DCL-Audit.pdf.

[207] Adina Schwartz, “Challenging Firearms and Toolmark Identification - Part Two,” The Champion XXXII (9): 44-52, 47 (2008) (“results on the CTS tests provide an inflated, rather than an accurate, estimate of the competence of examiners”); Michael J. Saks & Jonathan L. Koehler, “The Coming Paradigm Shift in Forensic Identification Science,” 309 Science 892 (2005) (“Indeed these existing data [on error rates] are probably best regarded as lower-bound estimates of error rates. Because the tests are relatively easy … and because participants know that mistakes will be identified and punished, test error rates (particularly the false-positive error rate) probably are lower than those in everyday casework.”).

[208] Peterson & Markham, “Crime Laboratory Proficiency Testing Results, 1978-1991, II” at 1018-19 (inconclusive rate reached 69%, and increases “seemed to be a function of the difficulty of the test”); Gianelli, “Reference Guide on Forensic Identification Expertise,” at 98 (firearms examination “constitute the evidence category where evidence comparisons have the highest rates of inconclusive responses.”); Tobin & Blau, “Hypothesis Testing of the Critical Underlying Premise of Discernible Uniqueness in Firearms-Toolmark Forensic Practice,” at 137, n.39; Smith, “Cartridge case and bullet comparison validation study with firearms submitted in casework,” 37 AFTE J 130 (2005) (examiners declared 97% of the 704 different source samples they inspected to be inconclusives rather than exclusions).

[209] To the extent possible, and where the necessary data was available, the error rates noted above were calculated using the approach advocated by PCAST of discounting inconclusive responses. See PCAST, “Forensic Science in Criminal Courts,” at 51-52 (“When reporting a false positive rate to a jury, it is scientifically important to calculate the rate based on the proportion of conclusive examinations, rather than just the proportion of all examinations. This is appropriate because evidence used against a defendant will typically be based on conclusive, rather than inconclusive, examinations. To illustrate the point, consider an extreme case in which a method had been tested 1000 times and found to yield 990 inconclusive results, 10 false positives, and no correct results. It would be misleading to report that the false positive rate was 1 percent (10/1000 examinations). Rather, one should report that 100 percent of the conclusive results were false positives (10/10 examinations)”).

[210] Petra Pauw-Vugts, et al., “FAID2009: Proficiency Test & Workshop,” 45 AFTE J 115 (2013) (also reporting misidentification rate of 3% on a 2005 version of the same proficiency test).

[211] Joseph L. Peterson & Penelope Markham, “Crime Laboratory Proficiency Testing Results, 1978-1991, II. Resolving Questions of Common Origin,” 40 J. Forensic Sci. 994, 1009, 1018-19, 1024 (1995) (also noting that for toolmarks, as opposed to firearms, the misidentification rate was as high as 13%).

[212] Paul C. Gianelli, et al., “Reference Guide on Forensic Identification Expertise,” In Reference Manual on Scientific Evidence (National Academies Press 2011), at 97 (also noting that administrators of the test viewed the error rates of firearms examiners as “particularly grave in nature”). And in fact, for the methodological twin of firearms examination (toolmark examination) one author has tabulated an error rate of 35%. See Michael J. Saks, “Merlin & Solomon: Lessons from the Law’s Formative Encounters with Forensic Identification Science,” 49 Hastings L.J. 1069, 1089 (1998).

[213] PCAST, “Forensic Science in Criminal Courts,” at 109.

[214] Petra Pauw-Vugts, et al., “FAID2009: Proficiency Test & Workshop,” 45 AFTE J 115, 124 & 126 (2013).

[215] Audits likely provide more reliable estimates of error even than testing of examiners. See “Report of the Governor’s Commission on Capital Punishment: Chapter 3, DNA & Forensic Testing,” at 53 (April 15, 2002) (advocating for routine, external audits of both police and civilian crime laboratories); Janine Arvizu, “Forensic Labs: Shattering the Myth,” Champion Magazine, at 7 (May 2000) (former lab director and lab quality assurance specialist notes that “In recent years, review by external parties has been the most effective means of identifying forensic laboratory problems” as well as that “An independent on-site quality audit is the best means of assessing the quality of the field and laboratory operations”); Ronald G. Nichols, “Defending the Scientific Foundations of the Firearms & Toolmark Identification Discipline: Responding to Recent Challenges,” 52 J. Forensic Sci. 586, 592-93 (2007) (“A better estimation of error rate in casework would be most rigorously achieved by the re-examination of several thousand cases where each case was examined by a panel of experts to achieve consensus”); Michael J. Saks, “Merlin & Solomon: Lessons from the Law’s Formative Encounters with Forensic Identification Science,” 49 Hastings L.J. 1069, 1089 (1998) (“the only way to find false matches would be to conduct special studies to look for them”).

[216] Michigan State Police Forensic Science Division, “Audit of the Detroit Police Department Forensic Services Laboratory Firearms Unit,” at 4 (2008). Similarly disturbing results have been documented in audits of other forensic disciplines. See Chris Swecker & Michael Wolf, “AN INDEPENDENT REVIEW OF THE SBI FORENSIC LABORATORY,” at 3 (2010), available at http://truthinjustice.org/sbi.audit.report.pdf; FBI Press Release, “FBI Testimony on Microscopic Hair Analysis Contained Errors in at Least 90 Percent of Cases in Ongoing Review,” (April 20, 2015).

[217] Craig Cooley & Gabriel Oberfield, “Symposium: Daubert, Innocence, and the Future of Forensic Science: Increasing Forensic Evidence’s Reliability and Minimizing Wrongful Convictions: Applying Daubert Isn’t the Only Problem,” 43 Tulsa L. Rev. 285, 337-38 (2007) (discussing wrongful conviction of Charles Stielow in 1915 due to firearms examination misidentification).

[218] Megan Cassidy, “A look inside the reports that unraveled the Phoenix freeway-shooting case,” The Arizona Republic (May 13, 2016) (reporting on the release of a suspect in a spree of highway shootings after a former AFTE president reviewed the initial firearms examination report and concluded that no identification was warranted).

[219] Williams v. Quarterman, 551 F.3d 352 (5th Cir. 2008) (granting postconviction hearing based on State’s expert’s admission regarding firearms misidentification); People v. Williams, 2010 Mich. App. LEXIS 1955, at *16-17 (2010) (“The initial police report indicated that all bullet casings from the crime scene came from a single weapon. Subsequent testing showed that the bullet casings actually came from two different AK-47 assault rifles. At trial, expert testimony suggested that the discrepancy occurred because the original Detroit Crime Lab examiners either lied, or were incompetent, or did not actually examine all 42 casings.”); Catherine Leahy Scott, “Investigation into the New York State Police Forensic Investigation Center,” (2014) (concluding that NY firearms examiners completed false reports to aid the prosecution’s case); Texas Forensic Science Commission, “FINAL REPORT FOR COMPLAINT FILED BY ATTORNEY FRANK BLAZEK REGARDING FIREARM/TOOL MARK ANALYSIS PERFORMED AT THE SOUTHWESTERN INSTITUTE OF FORENSIC SCIENCE,” (April 2016) (reporting on a misidentification in a murder case, which actually spurred the lab in question to adopt CMS rather than the subjective methodology).

[220] Craig Cooley & Gabriel Oberfield, “Symposium: Daubert, Innocence, and the Future of Forensic Science: Increasing Forensic Evidence’s Reliability and Minimizing Wrongful Convictions: Applying Daubert Isn’t the Only Problem,” 43 Tulsa L. Rev. 285, 338-39 (2007) (discussing false accusations against California police officer due to firearms misidentification).

[221] U.S. District Court for the Northern District of Illinois, Eastern Division, “Report of the 1970 Grand Jury,” at 79-90 (Jul. 28, 1970) (documenting misidentifications by the Chicago Police Department laboratory during the investigation into leading Black Panther Fred Hampton’s death), available at http://peopleslawoffice.com/wp-content/uploads/2012/02/Hampton.-1970-FGJ-Report.pdf; Hinton v. Alabama, 134 S. Ct. 1081, 1086 (2014) (three postconviction experts concluded State’s firearms examiner had misidentified ballistic evidence, and State offered no rebuttal).

[222] Commonwealth v. Ellis, 364 N.E.2d 808 (Mass. Sup. Ct. 1977) (two State experts contradicted each other’s identifications); Trotter v. Missouri, 736 S.W.2d 536 (Ct. App. 1987) (State’s firearms expert changed his identification opinion after additional evidence was revealed post-trial).

[223] See Spencer S. Hsu & Keith L. Alexander, “Forensic errors trigger reviews of D.C. crime lab ballistics unit, prosecutors say,” Washington Post (March 24, 2017) (three errors by multiple examiners at DC lab necessitated reexamination of over 150 cases), available at https://www.washingtonpost.com/local/public-safety/forensic-errors-trigger-reviews-of-dc-crime-lab-ballistics-unit-prosecutors-say/2017/03/24/2d67cdcc-0e75-11e7-ab07-07d9f521f6b5_story.html?utm_term=.43b68450f72a.

[224] Steve McVicker, “Ballistics lab results questioned in 3 death cases,” The Houston Chronicle, available at http://www.chron.com/news/houston-texas/article/Ballistics-lab-results-questioned-in-3-death-cases-1923892.ph.

[225] Bruce Moran, “A Report on the AFTE Theory of Identification & Range of Conclusions for Tool Mark Identification & Resulting Approaches to Casework,” 34 AFTE J 227 (2002) (“In the 1980s some striated toolmark mis-identifications resulting from a poor understanding of toolmark criteria for identification were experienced. An increasing need to address problems of applying subjective criteria became apparent.”); Evan E. Hodge, “Guarding Against Error,” 20 AFTE J. 290 (1988) (noting that “most of us [firearms examiners] know someone who has committed serious error” and describing misidentification by another examiner of the wrong .45 caliber firearm due to cognitive bias and pressure from prosecutors); Alfred Biasotti & John Murdock, “The Scientific Basis of Firearms & Toolmark Identification,” In Modern Scientific Evidence: The Law & Science of Expert Testimony at 143 (1997) (acknowledging misidentifications stemming “from one examiner ascribing too much significance to a small amount of matching striae and not appreciating that such agreement is achievable in known non-match comparisons.”); Lowell Bradford, “Forensic Firearms Identification: Competence or Incompetence,” 11(2) AFTE J (1979) (“An appalling number of misidentifications have been found in the firearm identification field”).

[226] Simon A. Cole, “Symposium: Forensic Statistics, Part II ‘Implicit Testing’: Can Casework Validate Forensic Techniques?” 46 Jurimetrics J. 117, 126-27 (2006) (“known misattributions are very likely to only be a small subset of actual misattributions”); Andre A. Moenssens, “Novel Scientific Evidence in Criminal Cases: Some Words of Caution,” 84 J. Crim. L. & Criminology 1, 12-13 (1993) (noting that misidentifications occur but “mistakes of this kind are not very likely to be discovered.”)

[227] People v. McKown, 226 Ill. 2d 245, 254 (2007); see People v. McKown, 236 Ill. 2d 278, 294-295 (2010).

[228] See People v. Rodriguez, 2017 IL App. (1st) 141379 (1st Dist. May 8, 2017); People v. Robinson, 2013 IL App. (1st) 102476 (1st Dist. Dec. 2, 2013).

[229] PCAST, “Forensic Science in the Criminal Courts,” at 112.

[230] McKown, 226 Ill. 2d at 254.

[231] See Donaldson v. Cent. Ill. Pub. Serv. Co., 199 Ill. 2d 63, 78-79 (2002) (“The trial judge applies the Frye test only if the scientific principle, technique or test offered by the expert to support his or her conclusion is new or novel.”)

[232] PCAST, “Forensic Science in the Criminal Courts,” at 144.

[233] Benn v. United States, 978 A.2d 1257, 1278 n.90 (D.C. 2009); see State v. Lucero, 85 P.3d 1059, 1062 (Ariz. App. 2004) (“This is not to say that, once admitted, scientific evidence is forever after unassailably admissible. After all, some theories once generally accepted ultimately have been rejected in favor of new ones”); People v. Kelly, 549 P.2d 1240, 1245 (Cal. 1976) (emphasizing that admissibility under Frye persists only “until new evidence is presented reflecting a change in the attitude of the scientific community”); Jones v. United States, 27 A.3d 1130, 1136 (D.C. 2011) (“we do not doubt that a technique that has previously been recognized in court as generally accepted may lose that wide acceptance”); Chesson v. Montgomery Mutual Insurance Company, 75 A.3d 932 (Md. Ct. App. 2013) (acknowledging that “even scientific techniques once considered to be generally accepted are excluded when subsequent scientific studies bring their reliability and validity into question and show a fundamental controversy within the relevant scientific community”); Trach v. Fellin, 817 A.2d 1102, 1110 (Pa. Super. Ct. 2003) (emphasizing that “a principle or discovery can fall by the wayside as science advances is just another way of saying it is not generally accepted”).

[234] See Clemons v. Maryland, 896 A.2d 1059, 1076 (Md. Ct. App. 2006) (finding that “the assumptions regarding that uniformity or homogeneity of the molten source and the uniqueness of each molten source that provide the foundation for CBLA have come under attack by the relevant scientific community of analytical chemists and metallurgists” and excluding such evidence as not generally accepted).

[235] People v. Luna, 2013 IL App (1st) 072253, at ¶65 (1st Dist. 2013) (internal quotations & citations omitted); see also Donaldson, 199 Ill.2d at 78. In fact, even in the specific context of firearms examination, the First District has twice indicated, albeit in unpublished orders, that its decision in Robinson would not foreclose a subsequent Frye challenge if based on new authority. See People v. Smith, 2014 IL App (1st) 121062-U (1st Dist. Apr. 14, 2014); People v. Dupree, 2014 IL App (1st) 121179-U (1st Dist. Dec. 12, 2014). And the two states that have explicitly addressed the impact of appellate Frye rulings on subsequent trials in light of new scientific evidence have held that “precedent so established may control subsequent trials, at least until new evidence is presented reflecting a change in the attitude of the scientific community.” People v. Smith, 215 Cal. App. 3d 19, 26 (Cal. App. 1st Dist. 1989) (internal citations & quotations omitted); see also State v. Copeland, 922 P.2d 1304, 1333 (Wash. 1996) (“Only if a party presents new evidence seriously questioning continued general acceptance of use of the product rule will a Frye hearing be required”).

[236] McKown, 226 Ill. 2d at 258.

[237] See People v. Shanklin, 2014 IL App (1st) 120084, ¶80 (1st Dist. 2014).

[238] See Robinson, 2013 IL App (1st) 102476 at ¶70.

[239] The State will likely make much of Rodriguez’s statement that firearms examination “is not new or novel, either pursuant to the plain meaning of those words or in accordance with the analysis employed by our supreme court in McKown.” 2017 IL App. (1st) 141379, at ¶56. But that conclusion deserves only the weight due its context (which did not include a review of the PCAST report or other sources further challenging the legitimacy of firearms examination evidence). And its reference to the McKown approach to the novelty question actually demonstrates conclusively that said context matters: on the basis of new legal or scientific challenges, even longstanding forensic methods must again be labeled new and novel.

[240] Donaldson, 199 Ill.2d at 78.

[241] Id.

[242] McKown, 226 Ill. 2d at 254 (emphasis added).

[243] Id.

[244] See Paul C. Gianelli, “The Admissibility of Novel Scientific Evidence: Frye v. United States, a Half-Century Later,” 80 Colum. L. Rev. 1197, 1218-19 (1980) (overreliance on prior cases as opposed to technical writings or expert testimony “undercuts the primary rationale supporting Frye - that those most qualified to judge the validity of a technique should have the determinative voice”); PCAST, “Forensic Science in the Criminal Courts,” at 144 (“from a scientific standpoint, subsequent events have indeed undermined the continuing validity of [judicial] conclusions that were not based on appropriate empirical evidence”).

[245] See McKown, 236 Ill. 2d at 300 (concluding that “the relevant scientific fields that embrace the testing for and observation of HGN include medicine, ophthalmology, and optometry… [t]hus, the question of general acceptance must be determined from the testimony of experts and the literature in these scientific fields.”); People v. Watson, 257 Ill. App. 3d 915, 926 (1st Dist. 1994) (“the proposed DNA profiling evidence should be evaluated by scientists in the fields of molecular biology, population genetics and forensic science”); PCAST, “Forensic Science in the Criminal Courts,” at 55 & 142 (“scientific validity of a method must be assessed within the framework of the broader scientific field of which it is a part (e.g., measurement science in the case of feature-comparison methods). The fact that bitemark examiners defend the validity of bitemark examination means little” & “the appropriate scientific field should be the larger scientific discipline to which [a forensic method] belongs”); Reed v. State, 283 Md. 374, 382 (Md. 1978) (“the relevant scientific community will include those whose scientific background and training are sufficient to allow them to comprehend and understand the process and form a judgment about it”).

[246] See Luna, 2013 IL App (1st) 072253 at ¶75 (“This court has counseled against too narrowly defining the relevant scientific community to those who share the views of the testifying expert.”); Bernardoni v. Indus. Comm'n (Huntsman Chem. Co.), 362 Ill. App. 3d 582, 595 (3d Dist. 2005) (“A court must not define the relevant field of experts so narrowly that the expert's opinion inevitably will be considered generally accepted. If the community is defined to include only those experts who subscribe to the same beliefs as the testifying expert, the opinion always will be admissible. The community of experts must include a sufficiently broad sample of experts so that the possibility of disagreement exists.”); United States v. Porter, 618 A.2d 629, 634 (D.C. 1992) (“It simply is not creditable to argue … that general acceptance may be premised simply on the opinion of forensic scientists”).

[247] See PCAST, “Forensic Science in the Criminal Courts,” at 42 & 44 (“As a scientific matter, the relevant scientific community for assessing the reliability of feature-comparison sciences includes metrologists (including statisticians) as well as other physical and life scientists from disciplines on which the specific methods are based. Importantly, the community is not limited to forensic scientists who practice the specific method” & “feature comparison is a common scientific activity, and science has clear standards for determining whether such methods are reliable. In particular, feature-comparison methods belong squarely to the discipline of metrology—the science of measurement and its application”); Brandon Giroux, “The Association of Firearm and Tool Mark Examiners (AFTE) December 23, 2015 response to seven questions related to forensic science posed on November 30, 2015 by The President’s Council of Advisors on Science and Technology (PCAST),” at 22 (2015) (“Metrology is a second discipline that has enhanced the science of firearm and toolmark identification.”); SWGGUN & AFTE, “response to 25 foundational firearm and toolmark examination questions,” at 1 (firearms examination “is based on previously established theories, principles and properties that were adapted in the material and engineering sciences”); Tobin, “Affidavit in Virginia v. Macumber,” at 17 (“Metallurgy is the most appropriate scientific discipline to address issues of metal to metal contact, such as occurs during the cycling of a firearm. This metal to metal contact produces the toolmarks on which firearms examiners base their conclusions.”); Gregory Klees (SWGGUN Chair), “Practice & Standards of the Scientific Working Group for Firearms & Toolmarks,” at slide 10 (acknowledging academic professionals, industry experts and engineers as subject matter experts), available at http://sites.nationalacademies.org/cs/groups/pgasite/documents/webpage/pga_049914.pdf.

[248] The State will surely point out that the subcommittee recently released a statement opposing the PCAST report’s ultimate conclusion regarding the validity of firearms examination. See OSAC Firearms & Toolmarks Subcommittee, “Response to PCAST’s Call for Additional References Regarding its Report,” (Dec. 14, 2016). But that opinion piece (supported by an unknown number of the subcommittee’s members) in no way revokes its previous statements indicating a need for, and complete lack of, appropriate black-box studies and other empirical measures of examiner accuracy. Instead, it more likely signals that the members of said committee who are themselves firearms examiners gave in to precisely the type of bias that a later portion of this brief notes is the underlying reason for discounting the opinions of practitioners from the Frye calculus. In other words, when dealing abstractly with the realities of available research, the subcommittee found the discipline grossly lacking. When faced with the potential loss of their professions, it appears that some members suspiciously changed course. That leaves their first opinions as the most accurate and credible statement of their scientific, as opposed to self-interested, views.

[249] People v. Young, 425 Mich. 470, 483 n.24 (Mich. 1986); see also Ramirez v. State, 810 So.2d 836, 844 n.13 (Fla. 2001) (warning against reliance on experts who have a “personal stake” in the acceptance of a methodology or show “institutional bias”).

[250] Ramirez, 810 So.2d at 851 (emphasis added).

[251] 236 Ill.2d at 295 & 300 (also stating: “Law enforcement, however, is not a scientific field. Therefore, general acceptance within law enforcement circles cannot be the basis for finding scientific evidence admissible under Frye.”)

[252] People v. Harbold, 124 Ill. App. 3d 363, 379 (1st Dist. 1984).

[253] See McKown, 236 Ill.2d at 295-96 (giving no weight whatsoever to the stance of the American Optometric Association, a trade organization).

[254] It also bears mentioning that the counterarguments likely to be posited by any State expert at a Frye hearing (that training and experience, when coupled with the alleged uniqueness of fired bullets and cartridges, diminish the need for black box studies) would warrant little weight given that the PCAST’s conclusions “are based on the fundamental principles of the ‘scientific method’—applicable throughout science—that valid scientific knowledge can only be gained through empirical testing of specific propositions.” PCAST, “Forensic Science in Criminal Courts,” at 46. And firearms examiners, as mere practitioners of an applied science, simply lack the qualifications necessary to overcome the more specific expertise in validation and the appropriate methods for vetting scientific methods possessed by the PCAST panel. See McKown, 236 Ill.2d at 300-01 (noting of a police specialist that despite his years of experience, his “professional credentials do not qualify him as an expert on the general acceptance of HGN testing for the purpose of alcohol impairment within these scientific fields”).

[255] See e.g., People v. Fisher, 340 Ill. 216, 237-241 (1930).

[256] PCAST, “Forensic Science in the Criminal Courts,” at 143 (noting a serious tension between legitimate science and legal standards if “courts admit forensic feature-comparison methods based on longstanding precedents that were set before these fundamental problems were discovered”).

[257] 2013 IL App (1st) 102476 at ¶¶81-89; 2017 IL App. (1st) 141379, at ¶54.

[258] See Mark Page et al., “Forensic Identification Science Evidence Since Daubert: Part I-A Quantitative Analysis of the Exclusion of Forensic Science Evidence,” 56 J. Forensic Sci. 1180, 1182 (2011) (identifying a total of 37 challenges to firearms examination testimony that resulted in either exclusion or limitation of the proffered evidence, with reliability as the reason for exclusion in 20 of those); United States v. Mouzone, 696 F.Supp.2d 536, 569 & 572-73 (D. Md. 2009) (concluding that neither conclusions of absolute nor practical certainty of a match were factually warranted, and noting that the most accurate reading of recent cases on firearms examination is that courts have recognized “as the NRC Forensic Science Report clearly did, that if firearms toolmark evidence is characterized exclusively as ‘science,’ it has a long way to go before it legitimately can claim this status ... the concerns expressed by the NRC ought to be heeded by courts in the future”); United States v. Willock, 696 F.Supp.2d 536, 546 (D. Md. 2010) (adopting report and recommendation of magistrate in Mouzone, and enforcing “a complete restriction on the characterization of certainty”); United States v. Taylor, 663 F.Supp.2d 1170, 1180 (D.N.M. 2009) (“because of the limitations on the reliability of firearms identification evidence discussed above, Mr. Nichols will not be permitted to testify that his methodology allows him to reach this conclusion as a matter of scientific certainty. Mr. Nichols also will not be allowed to testify that he can conclude that there is a match to the exclusion, either practical or absolute, of all other guns.”); United States v. Ashburn, 88 F.Supp.3d 239, 249 (E.D.N.Y. 2015) (quoting the finding of the NAS Committee that forensic ballistic comparison “suffers from certain ‘limitations,’ including the lack of sufficient studies to understand the reliability and repeatability of examiners’ methods . . .” and precluding “expert witness from testifying that he is ‘certain’ or ‘100%’ sure of his conclusions that certain items match ... that a match he identified is to ‘the exclusion of all other firearms in the world,’ or that there is a ‘practical impossibility’ that any other gun could have fired the recovered materials.”); United States v. Green, 405 F. Supp. 2d 104, 124 (D. Mass. 2005) (permitting testimony regarding observations but no ultimate opinion about source); United States v. Monteiro, 407 F.Supp.2d 351, 375 (D. Mass. 2006) (limiting testimony to “reasonable degree of ballistic certainty”); United States v. Diaz, 2007 U.S. Dist. LEXIS 13152, at *41-42 (N.D. Cal. 2007) (precluding matches to the exclusion of all other guns in the world); Massachusetts v. Pytou Heang, 942 N.E.2d 927, 945-46 (2010) (allowing testimony to a reasonable degree of ballistics certainty but precluding statements describing firearms examination as a science or phrasing of conclusions to an absolute or practical certainty); United States v. Love, No. 2:09-cr-20317-JPM (W.D. Tenn. Feb. 8, 2011) (excluding testimony regarding absolute or practical certainty); United States v. Alls, No. CR2-08-223(1) (S.D. Ohio Dec. 7, 2009) (forbidding any claim of a match to one firearm to the exclusion of all other guns and limiting examiner to descriptions of her methodology and observations of casings); United States v. St. Gerard, APO AE 09107, at 4 (U.S. Army Trial Judiciary, 5th Judicial Circuit June 7, 2010) (“the probative value of [the expert’s] proffered testimony that it would be practically impossible for a tool other than the seized AK-47 to have made the marks on the cartridge case would be substantially outweighed by the unfair prejudice associated with its unreliability.”), available at http://www.swgfast.org/Resources/101126_US-v-Gerard.pdf.

[259] See, e.g., United States v. Green, 405 F. Supp. 2d 104, 124 (D. Mass. 2005), citing United States v. Hines, 55 F. Supp. 2d 62 (D. Mass. 1999) (permitting testimony only regarding an examiner’s observations without any accompanying conclusions about the source of a projectile); United States v. Glynn, 578 F.Supp.2d 567 (S.D.N.Y. 2008) (noting that, given the lack of data supporting the discipline, “ballistics lacked the rigor of science,” and limiting testimony of a match to a conclusion of “more likely than not” instead of even “reasonable ballistics certainty” to ensure that “a conviction in a criminal case may not rest exclusively on ballistics testimony”); Missouri v. Goodwin-Bey, No. 1531-CR00555-01 (Dec. 16, 2016) (limiting testimony “to the point this gun could not be eliminated as the source of the bullet”).

[260] Williams v. United States, 130 A.3d 343, 355 (D.C. 2016) (Easterly, J., concurring). Notably, even that most derisive of statements was discussed approvingly by PCAST. See PCAST, “Forensic Science in the Criminal Courts,” at 55.

[261] In terms of similar decisions across other forensic disciplines: See Commonwealth v. Joyner, 4 N.E.3d 282, 289 (Mass. 2014) (holding that fingerprint examiners should avoid expressing opinions of absolute certainty); United States v. Oskowitz, 294 F. Supp. 2d 379, 384 (E.D.N.Y. 2003) (“Many other district courts have similarly permitted a handwriting expert to analyze a writing sample for the jury without permitting the expert to offer an opinion on the ultimate question of authorship.”); United States v. Rutherford, 104 F. Supp. 2d 1190, 1194 (D. Neb. 2000) (expert limited to “explaining the similarities and dissimilarities between the known exemplars and the questioned documents” and “precluded from rendering any ultimate conclusions on authorship of the questioned documents and is similarly precluded from testifying to the degree of confidence or certainty on which his opinions are based”); United States v. Hidalgo, 229 F. Supp. 2d 961, 967 (D. Ariz. 2002) (“Because the principle of uniqueness is without empirical support, we conclude that a document examiner will not be permitted to testify that the maker of a known document is the maker of the questioned document. Nor will a document examiner be able to testify as to identity in terms of probabilities.”); U.S. v. McVeigh, 1997 WL 47724, at *3 (D. Colo. 1997) (holding that a pattern recognition expert could not testify to ultimate source attribution for unknown handwriting evidence).

[262] People v. Kirk, 289 Ill. App. 3d 326, 333 (4th Dist. 1997); see Donaldson, 199 Ill. 2d at 85 (same); In re Det. of New, 2014 IL 116306, ¶48 (2014) (same); McKown, 236 Ill. 2d at 303-04 (declining to rely on cases from other jurisdictions that had been undermined by later scientific developments or where the HGN issue had not been as fully litigated).

[263] 2014 IL 116306, at ¶51.

[264] 289 Ill. App. 3d at 333-34.

[265] See 2013 IL App. (1st) 102476, at ¶91 (“[I]n recent years, federal and state courts have had occasion to revisit the admission of expert testimony based on toolmark and firearms identification methodology. Such testimony has been the subject of lengthy and detailed hearings, and measured against the standards of both Frye and Daubert. Courts have considered scholarly criticism of the methodology, and occasionally placed limitations on the opinions experts may offer based on the methodology. Yet the judicial decisions uniformly conclude toolmark and firearms identification is generally accepted and admissible at trial”).

[266] See Rodriguez, 2017 IL App. (1st) 141379, at ¶56-57 (looking only at the NAS reports); Robinson, 2013 IL App (1st) 102476, at ¶90 (looking only at NAS reports and one other article by a firearms examiner).

[267] See Goodwin-Bey, No. 1531-CR00555-01 (Dec. 16, 2016) (ultimately and “reluctantly” admitting firearms examination evidence while noting that “toolmark identification is a very valuable investigative tool. That is where it should stay, in the area of law enforcement, not in the courts.”). It bears mentioning that despite admitting the evidence, Goodwin-Bey made no finding on general acceptance, instead admitting firearms examination testimony after concluding only that it required specialized knowledge.

[268] Green, 405 F.Supp.2d at 123.

[269] McKown, 236 Ill. 2d at 305; Luna, 2013 IL App (1st) 072253, at ¶72; People v. Floyd, 2014 IL App (2d) 120507, ¶22-24 (2d Dist. 2014); Murray v. Motorola, Inc., 2014 D.C. Super. LEXIS 16, 33-35, 56-58 (D.C. Super. Ct. 2014); United States v. Frazier, 387 F.3d 1244, 1263 (11th Cir. 2004); United States v. Van Wyk, 83 F. Supp. 2d 515 (D.N.J. 2000); United States v. Santillan, 1999 WL 1201765 (N.D. Cal. 1999); United States v. Reynolds, 904 F. Supp. 1529, 1558 (E.D. Okla. 1995); People v. Shreck, 22 P.3d 68, 70 (Colo. 2001); Daubert v. Merrell Dow Pharms., 509 U.S. 579, 595 (1993); Bowers, “Forensic Testimony: Science, Law and Expert Evidence,” Academic Press (2014); Mnookin, “The Courts, NAS, & the Future of Forensic Sciences,” 75 Brooklyn L. Rev. 51-55 (2010).

[270] 2014 D.C. Super. LEXIS at ¶60.

[271] 2014 IL App (2d) 120507 at ¶23-24.

[272] See, e.g., Ramirez, 810 So. 2d at 845 (barring the testimony of an expert who claimed to be able to identify the knife used in a murder by replicating marks left on a victim’s cartilage, despite the general acceptance of the wider field of toolmark identification); People v. Ferguson, 172 Ill. App. 3d 1, 8 & 12 (2d Dist. 1988) (excluding the testimony of an expert who claimed to be able to identify a suspect based on wear patterns repeated across multiple pairs of his shoes); United States v. McCluskey, 954 F. Supp. 2d 1224, 1280-81 & 1286 (D.N.M. 2013) (analysis of low copy DNA insufficiently reliable despite the widespread use and reliability of other forms of DNA); Sexton v. Texas, 93 S.W.3d 96 (2002) (“We conclude, based on the record before us, that the underlying theory of toolmark examination could be reliable in a given case, but that the State failed to produce evidence of the reliability of the technique [considering magazine marks alone] used in this case.”); Almeciga v. Ctr. for Investigative Reporting, Inc., 2016 U.S. Dist. LEXIS 60539, *54 (S.D.N.Y. May 6, 2016) (“It would be an abdication of this Court's gatekeeping role under Rule 702 to admit Carlson's testimony [regarding document examination] in light of its deficiencies and unreliability. Accordingly, Carlson's testimony must be excluded in its entirety.”).

[273] See United States v. St. Gerard, APO AE 09107 (2010).

[274] Daubert, 509 U.S. at 595 (internal quotations & citations omitted).

[275] Mouzone, 696 F.Supp.2d at 569.

[276] Murray, 2014 D.C. Super. LEXIS 16; see also Frazier, 387 F.3d at 1263; United States v. Monteiro, 407 F.Supp.2d 351, 358 (D. Mass. 2006) (“The court’s vigilant exercise of this gatekeeper role is critical because of the latitude given to expert witnesses to express their opinions on matters about which they have no firsthand knowledge, and because an expert’s testimony may be given greater weight by the jury due to the expert’s background and approach.”)

[277] The Honorable Harry T. Edwards, “The National Academy of Sciences Report on Forensic Sciences: What it Means for the Bench & Bar,” Presentation to the Superior Court of DC (2010) (also encouraging courts to rely on the NAS report when deciding questions of admissibility); see  Glynn, 578 F.Supp.2d 567 (S.D.N.Y. 2008) (“cross-examination is inherently handicapped by the jury’s own lack of background knowledge, so that the Court must play a greater role, not only in excluding unreliable testimony, but also in alerting the jury to the limitations of what is presented.”); Murray, 2014 D.C. Super. LEXIS at ¶60 (“the court cannot be confident that effective advocacy can eliminate the risk that a jury would be misled by [the expert’s] testimony and reach a result on an improper basis.”); Jennifer A. Mnookin, “Clueless Science,” L.A. TIMES (Feb. 19, 2009) (“[J]udges would be well advised to throw out forensic science altogether -- not forever, but until adequate research establishes, for example, that the conventional wisdom about evidence of arson is empirically valid, or until fingerprint and ballistics experts provide adequate proof that their real-world error rate is reasonably low.”); American Bar Association, “Forensic Sciences: Judges as Gatekeeper,” at 29-30 (2015).

[278] New, 2014 IL 116306, at ¶26; see also People v. Newberry, 166 Ill.2d 310, 316-17  (1995) (“The State asserts that [the defendant] is not without recourse because he can still assail the State's test results by…cross-examining the State's experts about the procedures they followed. While these opportunities may exist, the relief they offer is illusory.  Whatever the actual reliability of the tests performed in the lab -- and the reliability may not be great -- the laboratory analysis of the evidence will carry great weight with the jury”). 

[279] PCAST, “Forensic Science in Criminal Courts,” at 45.

[280] See, e.g., Joseph Sanders, “Reliability Standards—Too High, Too Low, or Just Right? The Merits of the Paternalistic Justification for Restrictions on the Admissibility of Expert Evidence,” 33 Seton Hall L. Rev. 881, 936-938 (2003) (noting, in summary of the author’s analysis of a wide swath of literature, that the results “lend support to the argument that rulings excluding unreliable evidence promote jury accuracy even if we assume jurors are as good as judges in assessing reliability” and that “the empirical research does lend some support to the paternalistic justification for restrictions on the admissibility of unreliable expert testimony”).

[281] See PCAST, “Forensic Science in Criminal Courts,” at 45 (“In an online experiment, researchers asked mock jurors to estimate the frequency that a qualified, experienced forensic scientist would mistakenly conclude that two samples of specified types came from the same person when they actually came from two different people. The mock jurors believed such errors are likely to occur about 1 in 5.5 million for fingerprint analysis comparison; 1 in 1 million for bitemark comparison; 1 in 1 million for hair comparison; and 1 in 100 thousand for handwriting comparison. While precise error rates are not known for most of these techniques, all indications point to the actual error rates being orders of magnitude higher.”); William C. Thompson & Eryn J. Newman, “Lay Understanding of Forensic Statistics: Evaluation of Random Match Probabilities, Likelihood Ratios, & Verbal Equivalents,” 39 L. & Hum. Behav. 332 (2015) (juror evaluation of DNA evidence influenced by preconceived notions about the discipline & factfinders are susceptible to statistical fallacies, both prosecution and defense varieties); Jonathan J. Koehler, “If the Shoe Fits They Might Acquit: The Value of Forensic Science Testimony,” 8(s1) J. of Empirical Legal Studies 21-48 (2011) (“As detailed in the NRC report the ‘science’ part of forensic science has not kept pace with the extraordinary claims made on its behalf. As a result, jurors have little idea what the chance is that a forensic scientist’s conclusions are wrong, how often different objects share particular characteristics, or how much weight to give the forensic science as proof of identity.” Further noting that jurors “are slow to revise incorrect probabilistic hypotheses,” “fall prey to logical fallacies,” and “failed to appreciate the role that error plays in interpreting the value of a reported match”); Dawn McQuiston-Surrett & Michael J. Saks, “Communicating Opinion Evidence in the Forensic Identification Sciences: Accuracy & Impact,” 59 Hastings L.J. 1159, 1170 (2008) (“most jurors have an exaggerated view of the nature and capabilities of forensic identification”); Sanders, “Reliability Standards—Too High, Too Low, or Just Right?,” at 901, 919 (describing jurors as struggling with statistical information and unable to detect expert witness biases).

[282] Koehler, “If the Shoe Fits They Might Acquit,” (“Contrary to predictions, none of the source and guilt dependent measures in the main experiment were affected by the introduction of cross examination. There was no effect for cross examination on source confidence, source probability, guilt confidence, guilty probability, or verdict. Likewise there was no effect for cross examination across the two individualization conditions on any of the dependent measures.”); Sanders, “Reliability Standards—Too High, Too Low, or Just Right?,” at 913, 934-36 (concluding that multiple studies bear out the sobering reality that even robust cross examination of experts affects neither ultimate verdicts nor even juror confidence in said verdicts); Saks, “The Testimony of Forensic Identification Science,” (authors conducted a study and reviewed others, ultimately finding “little or no ability of cross-examination to undo the effects of an expert’s testimony on direct examination, even if the direct testimony is fraught with weaknesses and the cross is well designed to expose those weaknesses.” Interestingly, the authors conclude that cross examination can affect juror evaluation of expert evidence if it is presented honestly as a subjective guess, but that “...the unshakeableness of the traditional forms: match and similar-in-all-microscopic-characteristics produce something of a ceiling effect, which resist moderation by the presentation of other information.”); Shari Seidman Diamond, et al., “Juror Reactions to Attorneys At Trial,” 87 J. Crim. L. & Criminology 17, 41 (1996) (the author conducted an experiment, using 1925 jury-eligible residents of Cook County, which varied the strength of an attorney’s cross examination of an expert witness and found that: “Although juror perceptions of the attorney appear susceptible to influence by the attorney's efforts during cross-examination, the strong cross-examination had no effect on the verdict.”).

[283] PCAST, “Forensic Science in Criminal Courts,” at 45-46 (“The potential prejudicial impact is unusually high, because jurors are likely to overestimate the probative value of a ‘match’ between samples”; thus, because the term “match” conveys “inappropriately high probative value, a more neutral term should be used for an examiner’s belief that two samples come from the same source.”); Koehler, “If the Shoe Fits They Might Acquit,” (“people are more persuaded by statistical testimony that ignores various error risks than by testimony that is objectively stronger by virtue of taking those risks into account”); Sanders, “Reliability Standards—Too High, Too Low, or Just Right?,” at 935 (concluding that testimony couched in terms of an expert’s experience was “more impervious to cross-examination and opposing experts.”); Saks, “Communicating Opinion Evidence in the Forensic Identification Sciences,” at 1177 (“The conclusions of examiners in all areas of forensic identification other than DNA typing reach their conclusions on the basis of subjective guesstimations (clinical rather than actuarial), they present their opinions in nonquantitative, usually categorical, terms, and by all indications laypersons are generally quite persuaded by their testimony.”); Dawn McQuiston-Surrett & Michael J. Saks, “The Testimony of Forensic Identification Science: What Expert Witnesses Say & What Factfinders Hear,” 33 Law & Hum. Behav. 436 (2009) (“Participants in the conditions [hearing testimony in terms of a match or that targets were similar in all microscopic characteristics] which led to the highest estimates that the crime scene hair came from the defendant paradoxically gave the highest estimates of the incidence of the same hair traits in the reference population. This reinforces the inference that those two testimonial conditions lead to the least understanding of the basic concepts of forensic identification while leading to the highest inculpatory judgments” & “These data suggest that the two traditional forms in which forensic identification testimony is expressed [again referring to the match or similar-in-all-microscopic-characteristics language] are most damaging to the defense, while communicating a comfortingly simple and easily grasped (though not very informative and presumably misleading) understanding of the basis for the identification opinion.”); John Thornton, “The General Assumptions & Rationale of Forensic Identification,” in Modern Scientific Evidence: The Law & Science of Expert Testimony, at 16 (1997) (when an expert “bases [an] opinion on ‘years of experience’ the practical result is that the witness is immunized against effective cross examination”).

[284] Sanders, “Reliability Standards—Too High, Too Low, or Just Right?,” at 934. Moreover, even the task of locating a favorable expert has more to do with luck than the ground truth of a bullet or cartridge’s actual source. To clarify, because an expert’s subjective conception of agreement sufficient to declare a match boils down to the prior known non-matches such an expert has personally examined, the defense could not simply request the assistance of any one examiner. Such an expert may well reach the same conclusion as the State’s hired-hand, but lurking just across the border, or down the street, or even in the same lab or agency may be another practitioner who by chance has come upon more guns of a similar make and model and encountered a better non-match than have his/her peers. Such knowledge may not even be correlated to experience. See Thomas Fadul, et al., “An Empirical Study to Improve the Scientific Foundation of Forensic Firearm and Tool Mark Identification Utilizing Consecutively Manufactured Glock EBIS Barrels with the Same EBIS Pattern,” DOJ Grant Project, at 30 (December 2013). But regardless, the relationship will never be absolute. Therefore, only by querying every firearms examiner across the globe could a defendant rest assured that he has not missed the few examiners with the relevant entry in the database of their mind’s eye necessary to reach the correct determination. But that collective concept of best known non-match could never be made available to the factfinder.

[285] McQuiston-Surrett & Saks, “Communicating Opinion Evidence in the Forensic Identification Sciences,” at 1188; see also Sanders, “Reliability Standards—Too High, Too Low, or Just Right?,” at 936 (same).

[286] See e.g., People v. Baynes, 88 Ill. 2d 225, 240 (Ill. 1981) (excluding polygraph evidence under the 403 calculus in part because, despite advances in the tool’s performance, its “recordings cannot be interpreted with the degree of accuracy that would render them reliable enough for the court to accept them into evidence”); Harbold, 124 Ill. App. 3d at 379 (questioning the probative value of genetic marker evidence on the basis of reliability and noting that: “One of the most important factors in Illinois' rejection of polygraph analysis, the subjectivity of interpretation of test results, is also involved in genetic marker analysis.”)

[287] PCAST, “Forensic Science in Criminal Courts,” at 53.

[288] United States v. Yee, 134 F.R.D. 161, 181 (N.D. Ohio 1990).

[289] See Modelski v. Navistar Int'l Transp. Corp., 302 Ill. App. 3d 879, 886 (1st Dist. 1999) (emphasizing that expert “testimony grounded in guess, surmise, or conjecture, not being regarded as proof of a fact, is irrelevant as it has no tendency to make the existence of a fact more or less probable.”); People v. Sargeant, 292 Ill. App. 3d 508, 511 (1st Dist. 1997) (excluding the “inconclusive, tentative, and speculative” testimony of a handwriting expert).

[290] See Harbold, 124 Ill.App.3d at 382-84 (expert testimony not relevant because “Jurors would be hard pressed to explain how the 1-in-500 chance of an accidental match did not equate with a 1-in-500 chance that defendant was innocent. Of course, the statistic means nothing of the sort. Absent a sound basis to limit the number of possible defendants, the defendant here is but one of thousands of people who share these same characteristics. Legion possibilities incapable of quantification, such as the potential for human error or fabrication, or the possibility of a frame-up, must be excluded from the probability calculation.”); People v. Pike, 2016 IL App (1st) 122626, ¶¶72-75 (1st Dist. 2016) (excluding DNA statistic that would include 50% of the population as irrelevant as well as more prejudicial than probative).

[291] PCAST, “Forensic Science in Criminal Courts,” at 53.

[292] See People v. Zayas, 131 Ill. 2d 284, 292 (1989) (in ruling hypnotically-assisted-recall testimony inadmissible court emphasized the likelihood and danger of prior juror exposure to misleading information about hypnosis); Baynes, 88 Ill.2d at 244 (“There is significant risk the jury will regard [polygraph] evidence as conclusive…It is questionable whether any jury would follow limiting instructions because the polygraph evidence is likely to be shrouded with an aura of near infallibility, akin to the ancient oracle of Delphi.”) (internal citations & quotations omitted).

[293] D. Michael Risinger & Michael J. Saks, “A House With No Foundation,” Issues in Science & Technology, Vol. XX, Issue I (2003); compare PCAST, “Forensic Science in Criminal Courts,” at 26 (explaining that decisions excluding DNA evidence actually forced practitioners to team with molecular biologists and develop rigorously scientific standards and practices).

[294] Paul C. Giannelli, “Crime Labs Need Improvement,” Issues in Science & Technology, Vol. XX, Issue I (2003).

[295] Kozinski, “Rejecting Voodoo Science in the Courtroom,” Wall Street Journal (Sept. 19, 2016).

[296] Even if this Court determines that the proposed firearms examination testimony survives review under Frye and Rule 403, the shortcomings of such evidence expounded on throughout this motion (i.e., the lack of adequate empirical validation, questionable peer review, limited understanding of error rates, vague/tautological standards, and wholesale rejection by leading scientific authorities outside the insular community of firearms examiners) would render it inadmissible under the standards set forth in Daubert v. Merrell Dow Pharms., 509 U.S. 579 (1993) and Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999). And, “while our supreme court has recently noted that Illinois courts have not addressed the issue of whether Daubert should supplant Frye, it has continued to hint that this issue is ripe for its consideration.” See Donnellan v. First Student, Inc., 383 Ill. App. 3d 1040, 1057 (1st Dist. 2008). Mr. Jones acknowledges, however, that regardless of said intimation, this Court is bound to apply Frye and Rule 403, rather than Daubert, unless and until the Illinois Supreme Court adopts a new test for the admissibility of expert testimony. See Luna, 2013 IL App (1st) 072253 at ¶121. But the Frye standard, unlike that of Daubert, simply fails to comport with contemporary understanding of how to gauge the validity of scientific evidence (exemplified by the PCAST report’s rejection of mere training or longstanding use in favor of rigorous empirical testing). As such, it does more harm than good and “is potentially capricious because it excludes scientifically reliable evidence which is not yet generally accepted, and admits scientifically unreliable evidence which although generally accepted, cannot meet rigorous scientific scrutiny.” State v. Coon, 974 P.2d 386, 393-94 (Alaska 1999). Therefore, Mr. Jones preserves for review by the Illinois Supreme Court the issues of whether sound public policy and respect for the due process rights of criminal defendants should compel adoption of the Daubert standard in criminal cases, as well as whether the firearms evidence in this case could possibly satisfy that standard. See Manson v. Brathwaite, 432 U.S. 98, 114 (1977) (under the Due Process Clause of the Fourteenth Amendment “reliability is the linchpin in determining the admissibility” of evidence).

Writing Reflection #4

Please go to our Moodle Page and under "Class 4" you will find the prompt and submission folder for Writing Reflection #4.

2.2.1 OPTIONAL for Class 4

2.3 Class 5: Admissibility under the Daubert standard (Part A)

Chapter 61 in Learning Evidence (Merritt & Simmons)

This chapter covers Rule 702 and the Supreme Court's decision in Daubert. If you do not have a copy of the textbook, please consult our Moodle course page.

Daubert v. Merrell Dow Pharmaceuticals, Inc.

Blackmun, J., delivered the opinion for a unanimous Court with respect to Parts I and II-A, and the opinion of the Court with respect to Parts II-B, II-C, III, and IV, in which White, O'Connor, Scalia, Kennedy, Souter, and Thomas, JJ., joined. Rehnquist, C. J., filed an opinion concurring in part and dissenting in part, in which Stevens, J., joined, post, p. 598.

Michael H. Gottesman argued the cause for petitioners. With him on the briefs were Kenneth J. Chesebro, Barry J. Nace, David L. Shapiro, and Mary G. Gillick.

Charles Fried argued the cause for respondent. With him on the brief were Charles R. Nesson, Joel I. Klein, Richard G. Taranto, Hall R. Marston, George E. Berry, Edward H. Stratemeier, and W. Glenn Forrester.[*]

Justice Blackmun delivered the opinion of the Court.

In this case we are called upon to determine the standard for admitting expert scientific testimony in a federal trial.

I

Petitioners Jason Daubert and Eric Schuller are minor children born with serious birth defects. They and their parents sued respondent in California state court, alleging that the birth defects had been caused by the mothers' ingestion of Bendectin, a prescription antinausea drug marketed by respondent. Respondent removed the suits to federal court on diversity grounds.

After extensive discovery, respondent moved for summary judgment, contending that Bendectin does not cause birth defects in humans and that petitioners would be unable to come forward with any admissible evidence that it does. In support of its motion, respondent submitted an affidavit of Steven H. Lamm, physician and epidemiologist, who is a well-credentialed expert on the risks from exposure to various chemical substances.[1] Doctor Lamm stated that he had reviewed all the literature on Bendectin and human birth defects—more than 30 published studies involving over 130,000 patients. No study had found Bendectin to be a human teratogen (i. e., a substance capable of causing malformations in fetuses). On the basis of this review, Doctor Lamm concluded that maternal use of Bendectin during the first trimester of pregnancy has not been shown to be a risk factor for human birth defects.

Petitioners did not (and do not) contest this characterization of the published record regarding Bendectin. Instead, they responded to respondent's motion with the testimony of eight experts of their own, each of whom also possessed impressive credentials.[2] These experts had concluded that Bendectin can cause birth defects. Their conclusions were based upon "in vitro" (test tube) and "in vivo" (live) animal studies that found a link between Bendectin and malformations; pharmacological studies of the chemical structure of Bendectin that purported to show similarities between the structure of the drug and that of other substances known to cause birth defects; and the "reanalysis" of previously published epidemiological (human statistical) studies.

The District Court granted respondent's motion for summary judgment. The court stated that scientific evidence is admissible only if the principle upon which it is based is "`sufficiently established to have general acceptance in the field to which it belongs.' " 727 F. Supp. 570, 572 (SD Cal. 1989), quoting United States v. Kilgus, 571 F. 2d 508, 510 (CA9 1978). The court concluded that petitioners' evidence did not meet this standard. Given the vast body of epidemiological data concerning Bendectin, the court held, expert opinion which is not based on epidemiological evidence is not admissible to establish causation. 727 F. Supp., at 575. Thus, the animal-cell studies, live-animal studies, and chemical-structure analyses on which petitioners had relied could not raise by themselves a reasonably disputable jury issue regarding causation. Ibid. Petitioners' epidemiological analyses, based as they were on recalculations of data in previously published studies that had found no causal link between the drug and birth defects, were ruled to be inadmissible because they had not been published or subjected to peer review. Ibid.

The United States Court of Appeals for the Ninth Circuit affirmed. 951 F. 2d 1128 (1991). Citing Frye v. United States, 54 App. D. C. 46, 47, 293 F. 1013, 1014 (1923), the court stated that expert opinion based on a scientific technique is inadmissible unless the technique is "generally accepted" as reliable in the relevant scientific community. 951 F. 2d, at 1129-1130. The court declared that expert opinion based on a methodology that diverges "significantly from the procedures accepted by recognized authorities in the field . . . cannot be shown to be `generally accepted as a reliable technique.' " Id., at 1130, quoting United States v. Solomon, 753 F. 2d 1522, 1526 (CA9 1985).

The court emphasized that other Courts of Appeals considering the risks of Bendectin had refused to admit reanalyses of epidemiological studies that had been neither published nor subjected to peer review. 951 F. 2d, at 1130-1131. Those courts had found unpublished reanalyses "particularly problematic in light of the massive weight of the original published studies supporting [respondent's] position, all of which had undergone full scrutiny from the scientific community." Id., at 1130. Contending that reanalysis is generally accepted by the scientific community only when it is subjected to verification and scrutiny by others in the field, the Court of Appeals rejected petitioners' reanalyses as "unpublished, not subjected to the normal peer review process and generated solely for use in litigation." Id., at 1131. The court concluded that petitioners' evidence provided an insufficient foundation to allow admission of expert testimony that Bendectin caused their injuries and, accordingly, that petitioners could not satisfy their burden of proving causation at trial.

We granted certiorari, 506 U. S. 914 (1992), in light of sharp divisions among the courts regarding the proper standard for the admission of expert testimony. Compare, e. g., United States v. Shorter, 257 U. S. App. D. C. 358, 363-364, 809 F. 2d 54, 59-60 (applying the "general acceptance" standard), cert. denied, 484 U. S. 817 (1987), with DeLuca v. Merrell Dow Pharmaceuticals, Inc., 911 F. 2d 941, 955 (CA3 1990) (rejecting the "general acceptance" standard).

II

A

In the 70 years since its formulation in the Frye case, the "general acceptance" test has been the dominant standard for determining the admissibility of novel scientific evidence at trial. See E. Green & C. Nesson, Problems, Cases, and Materials on Evidence 649 (1983). Although under increasing attack of late, the rule continues to be followed by a majority of courts, including the Ninth Circuit.[3]

The Frye test has its origin in a short and citation-free 1923 decision concerning the admissibility of evidence derived from a systolic blood pressure deception test, a crude precursor to the polygraph machine. In what has become a famous (perhaps infamous) passage, the then Court of Appeals for the District of Columbia described the device and its operation and declared:

"Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages 586*586 is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized, and while courts will go a long way in admitting expert testimony deduced from a well-recognized scientific principle or discovery, the thing from which the deduc- tion is made must be sufficiently established to have gained general acceptance in the particular field in which it belongs. " 54 App. D. C., at 47, 293 F., at 1014 (emphasis added).
Because the deception test had "not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony deduced from the discovery, development, and experiments thus far made," evidence of its results was ruled inadmissible. Ibid.

The merits of the Frye test have been much debated, and scholarship on its proper scope and application is legion.[4] Petitioners' primary attack, however, is not on the content but on the continuing authority of the rule. They contend that the Frye test was superseded by the adoption of the Federal Rules of Evidence.[5] We agree.

We interpret the legislatively enacted Federal Rules of Evidence as we would any statute. Beech Aircraft Corp. v. Rainey, 488 U. S. 153, 163 (1988). Rule 402 provides the baseline:

"All relevant evidence is admissible, except as otherwise provided by the Constitution of the United States, by Act of Congress, by these rules, or by other rules prescribed by the Supreme Court pursuant to statutory authority. Evidence which is not relevant is not admissible."
"Relevant evidence" is defined as that which has "any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence." Rule 401. The Rules' basic standard of relevance thus is a liberal one.

Frye, of course, predated the Rules by half a century. In United States v. Abel, 469 U. S. 45 (1984), we considered the pertinence of background common law in interpreting the Rules of Evidence. We noted that the Rules occupy the field, id., at 49, but, quoting Professor Cleary, the Reporter, explained that the common law nevertheless could serve as an aid to their application:

"`In principle, under the Federal Rules no common law of evidence remains. "All relevant evidence is admissible, except as otherwise provided . . . ." In reality, of course, the body of common law knowledge continues to exist, though in the somewhat altered form of a source of guidance in the exercise of delegated powers.' " Id., at 51-52.
We found the common-law precept at issue in the Abel case entirely consistent with Rule 402's general requirement of admissibility, and considered it unlikely that the drafters had intended to change the rule. Id., at 50-51. In Bourjaily v. United States, 483 U. S. 171 (1987), on the other hand, the Court was unable to find a particular common-law doctrine in the Rules, and so held it superseded.

Here there is a specific Rule that speaks to the contested issue. Rule 702, governing expert testimony, provides:

"If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, a witness qualified as an expert by knowledge, skill, experience, training, or education, may testify thereto in the form of an opinion or otherwise."
Nothing in the text of this Rule establishes "general acceptance" as an absolute prerequisite to admissibility. Nor does respondent present any clear indication that Rule 702 or the Rules as a whole were intended to incorporate a "general acceptance" standard. The drafting history makes no mention of Frye, and a rigid "general acceptance" requirement would be at odds with the "liberal thrust" of the Federal Rules and their "general approach of relaxing the traditional barriers to `opinion' testimony." Beech Aircraft Corp. v. Rainey, 488 U. S., at 169 (citing Rules 701 to 705). See also Weinstein, Rule 702 of the Federal Rules of Evidence is 589*589 Sound; It Should Not Be Amended, 138 F. R. D. 631 (1991) ("The Rules were designed to depend primarily upon lawyer-adversaries and sensible triers of fact to evaluate conflicts"). Given the Rules' permissive backdrop and their inclusion of a specific rule on expert testimony that does not mention "general acceptance," the assertion that the Rules somehow assimilated Frye is unconvincing. Frye made "general acceptance" the exclusive test for admitting expert scientific testimony. That austere standard, absent from, and incompatible with, the Federal Rules of Evidence, should not be applied in federal trials.[6]

B

That the Frye test was displaced by the Rules of Evidence does not mean, however, that the Rules themselves place no limits on the admissibility of purportedly scientific evidence.[7] Nor is the trial judge disabled from screening such evidence. To the contrary, under the Rules the trial judge must ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.

The primary locus of this obligation is Rule 702, which clearly contemplates some degree of regulation of the subjects and theories about which an expert may testify. "If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue" an expert "may testify thereto." (Emphasis added.) The subject of an expert's testimony must 590*590 be "scientific . . . knowledge."[8] The adjective "scientific" implies a grounding in the methods and procedures of science. Similarly, the word "knowledge" connotes more than subjective belief or unsupported speculation. The term "applies to any body of known facts or to any body of ideas inferred from such facts or accepted as truths on good grounds." Webster's Third New International Dictionary 1252 (1986). Of course, it would be unreasonable to conclude that the subject of scientific testimony must be "known" to a certainty; arguably, there are no certainties in science. See, e. g., Brief for Nicolaas Bloembergen et al. as Amici Curiae 9 ("Indeed, scientists do not assert that they know what is immutably `true'—they are committed to searching for new, temporary, theories to explain, as best they can, phenomena"); Brief for American Association for the Advancement of Science et al. as Amici Curiae 7-8 ("Science is not an encyclopedic body of knowledge about the universe. Instead, it represents a process for proposing and refining theoretical explanations about the world that are subject to further testing and refinement" (emphasis in original)). But, in order to qualify as "scientific knowledge," an inference or assertion must be derived by the scientific method. Proposed testimony must be supported by appropriate validation—i. e., "good grounds," based on what is known. In short, the requirement that an expert's testimony pertain to "scientific knowledge" establishes a standard of evidentiary reliability.[9]

591*591 Rule 702 further requires that the evidence or testimony "assist the trier of fact to understand the evidence or to determine a fact in issue." This condition goes primarily to relevance. "Expert testimony which does not relate to any issue in the case is not relevant and, ergo, non-helpful." 3 Weinstein & Berger ¶ 702[02], p. 702-18. See also United States v. Downing, 753 F. 2d 1224, 1242 (CA3 1985) ("An additional consideration under Rule 702—and another aspect of relevancy—is whether expert testimony proffered in the case is sufficiently tied to the facts of the case that it will aid the jury in resolving a factual dispute"). The consideration has been aptly described by Judge Becker as one of "fit." Ibid. "Fit" is not always obvious, and scientific validity for one purpose is not necessarily scientific validity for other, unrelated purposes. See Starrs, Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702, 26 Jurimetrics J. 249, 258 (1986). The study of the phases of the moon, for example, may provide valid scientific "knowledge" about whether a certain night was dark, and if darkness is a fact in issue, the knowledge will assist the trier of fact. However (absent creditable grounds supporting such a link), evidence that the moon was full on a certain night will not assist the trier of fact in determining whether an individual was unusually likely to have behaved irrationally on that night. Rule 702's "helpfulness" 592*592 standard requires a valid scientific connection to the pertinent inquiry as a precondition to admissibility.

That these requirements are embodied in Rule 702 is not surprising. Unlike an ordinary witness, see Rule 701, an expert is permitted wide latitude to offer opinions, including those that are not based on firsthand knowledge or observation. See Rules 702 and 703. Presumably, this relaxation of the usual requirement of firsthand knowledge—a rule which represents "a `most pervasive manifestation' of the common law insistence upon `the most reliable sources of information,' " Advisory Committee's Notes on Fed. Rule Evid. 602, 28 U. S. C. App., p. 755 (citation omitted)—is premised on an assumption that the expert's opinion will have a reliable basis in the knowledge and experience of his discipline.

C

Faced with a proffer of expert scientific testimony, then, the trial judge must determine at the outset, pursuant to Rule 104(a),[10] whether the expert is proposing to testify to (1) scientific knowledge that (2) will assist the trier of fact to understand or determine a fact in issue.[11] This entails a preliminary assessment of whether the reasoning or methodology 593*593 underlying the testimony is scientifically valid and of whether that reasoning or methodology properly can be applied to the facts in issue. We are confident that federal judges possess the capacity to undertake this review. Many factors will bear on the inquiry, and we do not presume to set out a definitive checklist or test. But some general observations are appropriate.

Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested. "Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry." Green 645. See also C. Hempel, Philosophy of Natural Science 49 (1966) ("[T]he statements constituting a scientific explanation must be capable of empirical test"); K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge 37 (5th ed. 1989) ("[T]he criterion of the scientific status of a theory is its falsifiability, or refutability, or testability") (emphasis deleted).

Another pertinent consideration is whether the theory or technique has been subjected to peer review and publication. Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, see S. Jasanoff, The Fifth Branch: Science Advisors as Policymakers 61-76 (1990), and in some instances well-grounded but innovative theories will not have been published, see Horrobin, The Philosophical Basis of Peer Review and the Suppression of Innovation, 263 JAMA 1438 (1990). Some propositions, moreover, are too particular, too new, or of too limited interest to be published. But submission to the scrutiny of the scientific community is a component of "good science," in part because it increases the likelihood that substantive flaws in methodology will be detected. See J. Ziman, Reliable Knowledge: An Exploration 594*594 of the Grounds for Belief in Science 130-133 (1978); Relman & Angell, How Good Is Peer Review?, 321 New Eng. J. Med. 827 (1989). The fact of publication (or lack thereof) in a peer reviewed journal thus will be a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.

Additionally, in the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error, see, e. g., United States v. Smith, 869 F. 2d 348, 353-354 (CA7 1989) (surveying studies of the error rate of spectrographic voice identification technique), and the existence and maintenance of standards controlling the technique's operation, see United States v. Williams, 583 F. 2d 1194, 1198 (CA2 1978) (noting professional organization's standard governing spectrographic analysis), cert. denied, 439 U. S. 1117 (1979).

Finally, "general acceptance" can yet have a bearing on the inquiry. A "reliability assessment does not require, although it does permit, explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance within that community." United States v. Downing, 753 F. 2d, at 1238. See also 3 Weinstein & Berger ¶ 702[03], pp. 702-41 to 702-42. Widespread acceptance can be an important factor in ruling particular evidence admissible, and "a known technique which has been able to attract only minimal support within the community," Downing, 753 F. 2d, at 1238, may properly be viewed with skepticism.

The inquiry envisioned by Rule 702 is, we emphasize, a flexible one.[12] Its overarching subject is the scientific validity—and 595*595 thus the evidentiary relevance and reliability—of the principles that underlie a proposed submission. The focus, of course, must be solely on principles and methodology, not on the conclusions that they generate.

Throughout, a judge assessing a proffer of expert scientific testimony under Rule 702 should also be mindful of other applicable rules. Rule 703 provides that expert opinions based on otherwise inadmissible hearsay are to be admitted only if the facts or data are "of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject." Rule 706 allows the court at its discretion to procure the assistance of an expert of its own choosing. Finally, Rule 403 permits the exclusion of relevant evidence "if its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury . . . ." Judge Weinstein has explained: "Expert evidence can be both powerful and quite misleading because of the difficulty in evaluating it. Because of this risk, the judge in weighing possible prejudice against probative force under Rule 403 of the present rules exercises more control over experts than over lay witnesses." Weinstein, 138 F. R. D., at 632.

III

We conclude by briefly addressing what appear to be two underlying concerns of the parties and amici in this case. Respondent expresses apprehension that abandonment of "general acceptance" as the exclusive requirement for admission will result in a "free-for-all" in which befuddled juries are confounded by absurd and irrational pseudoscientific assertions. 596*596 In this regard respondent seems to us to be overly pessimistic about the capabilities of the jury and of the adversary system generally. Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence. See Rock v. Arkansas, 483 U. S. 44, 61 (1987). Additionally, in the event the trial court concludes that the scintilla of evidence presented supporting a position is insufficient to allow a reasonable juror to conclude that the position more likely than not is true, the court remains free to direct a judgment, Fed. Rule Civ. Proc. 50(a), and likewise to grant summary judgment, Fed. Rule Civ. Proc. 56. Cf., e. g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F. 2d 1349 (CA6) (holding that scientific evidence that provided foundation for expert testimony, viewed in the light most favorable to plaintiffs, was not sufficient to allow a jury to find it more probable than not that defendant caused plaintiff's injury), cert. denied, 506 U. S. 826 (1992); Brock v. Merrell Dow Pharmaceuticals, Inc., 874 F. 2d 307 (CA5 1989) (reversing judgment entered on jury verdict for plaintiffs because evidence regarding causation was insufficient), modified, 884 F. 2d 166 (CA5 1989), cert. denied, 494 U. S. 1046 (1990); Green 680-681. These conventional devices, rather than wholesale exclusion under an uncompromising "general acceptance" test, are the appropriate safeguards where the basis of scientific testimony meets the standards of Rule 702.

Petitioners and, to a greater extent, their amici exhibit a different concern. They suggest that recognition of a screening role for the judge that allows for the exclusion of "invalid" evidence will sanction a stifling and repressive scientific orthodoxy and will be inimical to the search for truth. See, e. g., Brief for Ronald Bayer et al. as Amici Curiae. It is true that open debate is an essential part of both legal and scientific analyses. Yet there are important differences between the quest for truth in the courtroom and the quest 597*597 for truth in the laboratory. Scientific conclusions are subject to perpetual revision. Law, on the other hand, must resolve disputes finally and quickly. The scientific project is advanced by broad and wide-ranging consideration of a multitude of hypotheses, for those that are incorrect will eventually be shown to be so, and that in itself is an advance. Conjectures that are probably wrong are of little use, however, in the project of reaching a quick, final, and binding legal judgment—often of great consequence—about a particular set of events in the past. We recognize that, in practice, a gatekeeping role for the judge, no matter how flexible, inevitably on occasion will prevent the jury from learning of authentic insights and innovations. That, nevertheless, is the balance that is struck by Rules of Evidence designed not for the exhaustive search for cosmic understanding but for the particularized resolution of legal disputes.[13]

IV

To summarize: "General acceptance" is not a necessary precondition to the admissibility of scientific evidence under the Federal Rules of Evidence, but the Rules of Evidence— especially Rule 702—do assign to the trial judge the task of ensuring that an expert's testimony both rests on a reliable foundation and is relevant to the task at hand. Pertinent evidence based on scientifically valid principles will satisfy those demands.

The inquiries of the District Court and the Court of Appeals focused almost exclusively on "general acceptance," as gauged by publication and the decisions of other courts. Accordingly, 598*598 the judgment of the Court of Appeals is vacated, and the case is remanded for further proceedings consistent with this opinion.

It is so ordered.

Chief Justice Rehnquist, with whom Justice Stevens joins, concurring in part and dissenting in part.

The petition for certiorari in this case presents two questions: first, whether the rule of Frye v. United States, 54 App. D. C. 46, 293 F. 1013 (1923), remains good law after the enactment of the Federal Rules of Evidence; and second, if Frye remains valid, whether it requires expert scientific testimony to have been subjected to a peer review process in order to be admissible. The Court concludes, correctly in my view, that the Frye rule did not survive the enactment of the Federal Rules of Evidence, and I therefore join Parts I and II-A of its opinion. The second question presented in the petition for certiorari necessarily is mooted by this holding, but the Court nonetheless proceeds to construe Rules 702 and 703 very much in the abstract, and then offers some "general observations." Ante, at 593.

"General observations" by this Court customarily carry great weight with lower federal courts, but the ones offered here suffer from the flaw common to most such observations—they are not applied to deciding whether particular testimony was or was not admissible, and therefore they tend to be not only general, but vague and abstract. This is particularly unfortunate in a case such as this, where the ultimate legal question depends on an appreciation of one or more bodies of knowledge not judicially noticeable, and subject to different interpretations in the briefs of the parties and their amici. Twenty-two amicus briefs have been filed in the case, and indeed the Court's opinion contains no fewer than 37 citations to amicus briefs and other secondary sources.

599*599 The various briefs filed in this case are markedly different from typical briefs, in that large parts of them do not deal with decided cases or statutory language—the sort of material we customarily interpret. Instead, they deal with definitions of scientific knowledge, scientific method, scientific validity, and peer review—in short, matters far afield from the expertise of judges. This is not to say that such materials are not useful or even necessary in deciding how Rule 702 should be applied; but it is to say that the unusual subject matter should cause us to proceed with great caution in deciding more than we have to, because our reach can so easily exceed our grasp.

But even if it were desirable to make "general observations" not necessary to decide the questions presented, I cannot subscribe to some of the observations made by the Court. In Part II-B, the Court concludes that reliability and relevancy are the touchstones of the admissibility of expert testimony. Ante, at 590-592. Federal Rule of Evidence 402 provides, as the Court points out, that "[e]vidence which is not relevant is not admissible." But there is no similar reference in the Rule to "reliability." The Court constructs its argument by parsing the language "[i]f scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue, . . . an expert . . . may testify thereto . . . ." Fed. Rule Evid. 702. It stresses that the subject of the expert's testimony must be "scientific . . . knowledge," and points out that "scientific" "implies a grounding in the methods and procedures of science" and that the word "knowledge" "connotes more than subjective belief or unsupported speculation." Ante, at 590. From this it concludes that "scientific knowledge" must be "derived by the scientific method." Ibid. Proposed testimony, we are told, must be supported by "appropriate validation." Ibid. Indeed, in footnote 9, the Court decides that "[i]n a case involving scientific evidence, evidentiary 600*600 reliability will be based upon scientific validity." Ante, at 591, n. 9 (emphasis in original).

Questions arise simply from reading this part of the Court's opinion, and countless more questions will surely arise when hundreds of district judges try to apply its teaching to particular offers of expert testimony. Does all of this dicta apply to an expert seeking to testify on the basis of "technical or other specialized knowledge"—the other types of expert knowledge to which Rule 702 applies—or are the "general observations" limited only to "scientific knowledge"? What is the difference between scientific knowledge and technical knowledge; does Rule 702 actually contemplate that the phrase "scientific, technical, or other specialized knowledge" be broken down into numerous subspecies of expertise, or did its authors simply pick general descriptive language covering the sort of expert testimony which courts have customarily received? The Court speaks of its confidence that federal judges can make a "preliminary assessment of whether the reasoning or methodology underlying the testimony is scientifically valid and of whether that reasoning or methodology properly can be applied to the facts in issue." Ante, at 592-593. The Court then states that a "key question" to be answered in deciding whether something is "scientific knowledge" "will be whether it can be (and has been) tested." Ante, at 593. Following this sentence are three quotations from treatises, which not only speak of empirical testing, but one of which states that the "`criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.' " Ibid.

I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its "falsifiability," and I suspect some of them will be, too.

I do not doubt that Rule 702 confides to the judge some gatekeeping responsibility in deciding questions of the admissibility of proffered expert testimony. But I do not think 601*601 it imposes on them either the obligation or the authority to become amateur scientists in order to perform that role. I think the Court would be far better advised in this case to decide only the questions presented, and to leave the further development of this important area of the law to future cases.

[*] Briefs of amici curiae urging reversal were filed for the State of Texas et al. by Dan Morales, Attorney General of Texas, Mark Barnett, Attorney General of South Dakota, Marc Racicot, Attorney General of Montana, Larry EchoHawk, Attorney General of Idaho, and Brian Stuart Koukoutchos; for the American Society of Law, Medicine and Ethics et al. by Joan E. Bertin, Marsha S. Berzon, and Albert H. Meyerhoff; for the Association of Trial Lawyers of America by Jeffrey Robert White and Roxanne Barton Conlin; for Ronald Bayer et al. by Brian Stuart Koukoutchos, Priscilla Budeiri, Arthur Bryant, and George W. Conk; and for Daryl E. Chubin et al. by Ron Simon and Nicole Schultheis.

Briefs of amici curiae urging affirmance were filed for the United States by Acting Solicitor General Wallace, Assistant Attorney General Gerson, Miguel A. Estrada, Michael Jay Singer, and John P. Schnitker; for the American Insurance Association by William J. Kilberg, Paul Blankenstein, Bradford R. Clark, and Craig A. Berrington; for the American Medical Association et al. by Carter G. Phillips, Mark D. Hopson, and Jack R. Bierig; for the American Tort Reform Association by John G. Kester and John W. Vardaman, Jr.; for the Chamber of Commerce of the United States by Timothy B. Dyk, Stephen A. Bokat, and Robin S. Conrad; for the Pharmaceutical Manufacturers Association by Louis R. Cohen and Daniel Marcus; for the Product Liability Advisory Council, Inc., et al. by Victor E. Schwartz, Robert P. Charrow, and Paul F. Rothstein; for the Washington Legal Foundation by Scott G. Campbell, Daniel J. Popeo, and Richard A. Samp; and for Nicolaas Bloembergen et al. by Martin S. Kaufman.

Briefs of amici curiae were filed for the American Association for the Advancement of Science et al. by Richard A. Meserve and Bert Black; for the American College of Legal Medicine by Miles J. Zaremski; for the Carnegie Commission on Science, Technology, and Government by Steven G. Gallagher, Elizabeth H. Esty, and Margaret A. Berger; for the Defense Research Institute, Inc., by Joseph A. Sherman, E. Wayne Taff, and Harvey L. Kaplan; for the New England Journal of Medicine et al. by Michael Malina and Jeffrey I. D. Lewis; for A Group of American Law Professors by Donald N. Bersoff; for Alvan R. Feinstein by Don M. Kennedy, Loretta M. Smith, and Richard A. Oetheimer; and for Kenneth Rothman et al. by Neil B. Cohen.

[1] Doctor Lamm received his master's and doctor of medicine degrees from the University of Southern California. He has served as a consultant in birth-defect epidemiology for the National Center for Health Statistics and has published numerous articles on the magnitude of risk from exposure to various chemical and biological substances. App. 34-44.

[2] For example, Shanna Helen Swan, who received a master's degree in biostatistics from Columbia University and a doctorate in statistics from the University of California at Berkeley, is chief of the section of the California Department of Health and Services that determines causes of birth defects and has served as a consultant to the World Health Organization, the Food and Drug Administration, and the National Institutes of Health. Id., at 113-114, 131-132. Stuart A. Newman, who received his bachelor's degree in chemistry from Columbia University and his master's and doctorate in chemistry from the University of Chicago, is a professor at New York Medical College and has spent over a decade studying the effect of chemicals on limb development. Id., at 54-56. The credentials of the others are similarly impressive. See id., at 61-66, 73-80, 148-153, 187-192, and Attachments 12, 20, 21, 26, 31, and 32 to Petitioners' Opposition to Summary Judgment in No. 84-——G(I) (SD Cal.).

[3] For a catalog of the many cases on either side of this controversy, see P. Giannelli & E. Imwinkelried, Scientific Evidence § 1-5, pp. 10-14 (1986 and Supp. 1991).

[4] See, e. g., Green, Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation, 86 Nw. U. L. Rev. 643 (1992) (hereinafter Green); Becker & Orenstein, The Federal Rules of Evidence After Sixteen Years—The Effect of "Plain Meaning" Jurisprudence, the Need for an Advisory Committee on the Rules of Evidence, and Suggestions for Selective Revision of the Rules, 60 Geo. Wash. L. Rev. 857, 876-885 (1992); Hanson, James Alphonzo Frye is Sixty-Five Years Old; Should He Retire?, 16 West. St. U. L. Rev. 357 (1989); Black, A Unified Theory of Scientific Evidence, 56 Ford. L. Rev. 595 (1988); Imwinkelried, The "Bases" of Expert Testimony: The Syllogistic Structure of Scientific Testimony, 67 N. C. L. Rev. 1 (1988); Proposals for a Model Rule on the Admissibility of Scientific Evidence, 26 Jurimetrics J. 235 (1986); Giannelli, The Admissibility of Novel Scientific Evidence: Frye v. United States, a Half-Century Later, 80 Colum. L. Rev. 1197 (1980); The Supreme Court, 1986 Term, 101 Harv. L. Rev. 7, 119, 125-127 (1987).

Indeed, the debates over Frye are such a well-established part of the academic landscape that a distinct term—"Frye-ologist"—has been advanced to describe those who take part. See Behringer, Introduction, Proposals for a Model Rule on the Admissibility of Scientific Evidence, 26 Jurimetrics J. 237, 239 (1986), quoting Lacey, Scientific Evidence, 24 Jurimetrics J. 254, 264 (1984).

[5] Like the question of Frye's merit, the dispute over its survival has divided courts and commentators. Compare, e. g., United States v. Williams, 583 F. 2d 1194 (CA2 1978) (Frye is superseded by the Rules of Evidence), cert. denied, 439 U. S. 1117 (1979), with Christophersen v. Allied-Signal Corp., 939 F. 2d 1106, 1111, 1115-1116 (CA5 1991) (en banc) (Frye and the Rules coexist), cert. denied, 503 U. S. 912 (1992), 3 J. Weinstein & M. Berger, Weinstein's Evidence ¶ 702[03], pp. 702-36 to 702-37 (1988) (hereinafter Weinstein & Berger) (Frye is dead), and M. Graham, Handbook of Federal Evidence § 703.2 (3d ed. 1991) (Frye lives). See generally P. Giannelli & E. Imwinkelried, Scientific Evidence § 1-5, at 28-29 (citing authorities).

[6] Because we hold that Frye has been superseded and base the discussion that follows on the content of the congressionally enacted Federal Rules of Evidence, we do not address petitioners' argument that application of the Frye rule in this diversity case, as the application of a judge-made rule affecting substantive rights, would violate the doctrine of Erie R. Co. v. Tompkins, 304 U. S. 64 (1938).

[7] The Chief Justice "do[es] not doubt that Rule 702 confides to the judge some gatekeeping responsibility," post, at 600, but would neither say how it does so nor explain what that role entails. We believe the better course is to note the nature and source of the duty.

[8] Rule 702 also applies to "technical, or other specialized knowledge." Our discussion is limited to the scientific context because that is the nature of the expertise offered here.

[9] We note that scientists typically distinguish between "validity" (does the principle support what it purports to show?) and "reliability" (does application of the principle produce consistent results?). See Black, 56 Ford. L. Rev., at 599. Although "the difference between accuracy, validity, and reliability may be such that each is distinct from the other by no more than a hen's kick," Starrs, Frye v. United States Restructured and Revitalized: A Proposal to Amend Federal Evidence Rule 702, 26 Jurimetrics J. 249, 256 (1986), our reference here is to evidentiary reliability—that is, trustworthiness. Cf., e. g., Advisory Committee's Notes on Fed. Rule Evid. 602, 28 U. S. C. App., p. 755 ("`[T]he rule requiring that a witness who testifies to a fact which can be perceived by the senses must have had an opportunity to observe, and must have actually observed the fact' is a `most pervasive manifestation' of the common law insistence upon `the most reliable sources of information' " (citation omitted)); Advisory Committee's Notes on Art. VIII of Rules of Evidence, 28 U. S. C. App., p. 770 (hearsay exceptions will be recognized only "under circumstances supposed to furnish guarantees of trustworthiness"). In a case involving scientific evidence, evidentiary reliability will be based upon scientific validity.

[10] Rule 104(a) provides:

"Preliminary questions concerning the qualification of a person to be a witness, the existence of a privilege, or the admissibility of evidence shall be determined by the court, subject to the provisions of subdivision (b) [pertaining to conditional admissions]. In making its determination it is not bound by the rules of evidence except those with respect to privileges." These matters should be established by a preponderance of proof. See Bourjaily v. United States, 483 U. S. 171, 175-176 (1987).

[11] Although the Frye decision itself focused exclusively on "novel" scientific techniques, we do not read the requirements of Rule 702 to apply specially or exclusively to unconventional evidence. Of course, well-established propositions are less likely to be challenged than those that are novel, and they are more handily defended. Indeed, theories that are so firmly established as to have attained the status of scientific law, such as the laws of thermodynamics, properly are subject to judicial notice under Federal Rule of Evidence 201.

[12] A number of authorities have presented variations on the reliability approach, each with its own slightly different set of factors. See, e. g., Downing, 753 F. 2d, at 1238-1239 (on which our discussion draws in part); 3 Weinstein & Berger ¶ 702[03], pp. 702-41 to 702-42 (on which the Downing court in turn partially relied); McCormick, Scientific Evidence: Defining a New Approach to Admissibility, 67 Iowa L. Rev. 879, 911-912 (1982); and Symposium on Science and the Rules of Evidence, 99 F. R. D. 187, 231 (1983) (statement by Margaret Berger). To the extent that they focus on the reliability of evidence as ensured by the scientific validity of its underlying principles, all these versions may well have merit, although we express no opinion regarding any of their particular details.

[13] This is not to say that judicial interpretation, as opposed to adjudicative factfinding, does not share basic characteristics of the scientific endeavor: "The work of a judge is in one sense enduring and in another ephemeral. . . . In the endless process of testing and retesting, there is a constant rejection of the dross and a constant retention of whatever is pure and sound and fine." B. Cardozo, The Nature of the Judicial Process 178-179 (1921).

Excerpt from Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311 (9th Cir. 1995)

This opinion is the Daubert case on remand: the Supreme Court decided Daubert and remanded the case back to the Ninth Circuit Court of Appeals. This decision is the Ninth Circuit's resolution of the remand. (The opinion was authored by Judge Kozinski, whose article you read for Class 1.)

In this excerpt you can hear from a judge who is struggling with how to apply the new standard.

. . .

A. Brave New World

Federal judges ruling on the admissibility of expert scientific testimony face a far more complex and daunting task in a post-Daubert world than before. The judge’s task under Frye is relatively simple: to determine whether the method employed by the experts is generally accepted in the scientific community. Solomon, 753 F.2d at 1526. Under Daubert, we must engage in a difficult, two-part analysis. First, we must determine nothing less than whether the experts’ testimony reflects “scientific knowledge,” whether their findings are “derived by the scientific method,” and whether their work product amounts to “good science.” — U.S. at -, -, 113 S.Ct. at 2795, 2797. Second, we must ensure that the proposed expert testimony is “relevant to the task at hand,” id. at -, 113 S.Ct. at 2797, i.e., that it logically advances a material aspect of the proposing party’s case. The Supreme Court referred to this second prong of the analysis as the “fit” requirement. Id. at -, 113 S.Ct. at 2796.

The first prong of Daubert puts federal judges in an uncomfortable position. The question of admissibility only arises if it is first established that the individuals whose testimony is being proffered are experts in a particular scientific field; here, for example, the Supreme Court waxed eloquent on the impressive qualifications of plaintiffs’ experts. Id. at - n. 2, 113 S.Ct. at 2791 n. 2. Yet something doesn’t become “scientific knowledge” just because it’s uttered by a *1316 scientist; nor can an expert’s self-serving assertion that his conclusions were “derived by the scientific method” be deemed conclusive, else the Supreme Court’s opinion could have ended with footnote two. As we read the Supreme Court’s teaching in Daubert, therefore, though we are largely untrained in science and certainly no match for any of the witnesses whose testimony we are reviewing, it is our responsibility to determine whether those experts’ proposed testimony amounts to “scientific knowledge,” constitutes “good science,” and was “derived by the scientific method.”

The task before us is more daunting still when the dispute concerns matters at the very cutting edge of scientific research, where fact meets theory and certainty dissolves into probability. As the record in this case illustrates, scientists often have vigorous and sincere disagreements as to what research methodology is proper, what should be accepted as sufficient proof for the existence of a “fact,” and whether information derived by a particular method can tell us anything useful about the subject under study.

Our responsibility, then, unless we badly misread the Supreme Court’s opinion, is to resolve disputes among respected, well-credentialed scientists about matters squarely within their expertise, in areas where there is no scientific consensus as to what is and what is not “good science,” and occasionally to reject such expert testimony because it was not “derived by the scientific method.” Mindful of our position in the hierarchy of the federal judiciary, we take a deep breath and proceed with this heady task.

B. Deus ex Machina

The Supreme Court’s opinion in Daubert focuses closely on the language of Fed. R. Evid. 702, which permits opinion testimony by experts as to matters amounting to “scientific ... knowledge.” The Court recognized, however, that knowledge in this context does not mean absolute certainty. — U.S. at -, 113 S.Ct. at 2795. Rather, the Court said, “in order to qualify as ‘scientific knowledge,’ an inference or assertion must be derived by the scientific method.” Id. Elsewhere in its opinion, the Court noted that Rule 702 is satisfied where the proffered testimony is “based on scientifically valid principles.” Id. at -, 113 S.Ct. at 2799. Our task, then, is to analyze not what the experts say, but what basis they have for saying it.

Which raises the question: How do we figure out whether scientists have derived their findings through the scientific method or whether their testimony is based on scientifically valid principles? Each expert proffered by the plaintiffs assures us that he has “utiliz[ed] the type of data that is generally and reasonably relied upon by scientists” in the relevant field, see, e.g., Newman Aff. at 5, and that he has “utilized the methods and methodology that would generally and reasonably be accepted” by people who deal in these matters, see, e.g., Gross Aff. at 5. The Court held, however, that federal judges perform a “gatekeeping role,” Daubert, — U.S. at -, 113 S.Ct. at 2798; to do so they must satisfy themselves that scientific evidence meets a certain standard of reliability before it is admitted. This means that the expert’s bald assurance of validity is not enough. Rather, the party presenting the expert must show that the expert’s findings are based on sound science, and this will require some objective, independent validation of the expert’s methodology.

While declining to set forth a “definitive checklist or test,” id. at -, 113 S.Ct. at 2796, the Court did list several factors federal judges can consider in determining whether to admit expert scientific testimony under Fed.R.Evid. 702: whether the theory or technique employed by the expert is generally accepted in the scientific community; whether it’s been subjected to peer review and publication; whether it can be and has been tested; and whether the known or potential rate of error is acceptable. Id. at -, 113 S.Ct. at 2796-97.3 We read these *1317 factors as illustrative rather than exhaustive; similarly, we do not deem each of them to be equally applicable (or applicable at all) in every case.4 Rather, we read the Supreme Court as instructing us to determine whether the analysis undergirding the experts’ testimony falls within the range of accepted standards governing how scientists conduct their research and reach their conclusions.

One very significant fact to be considered is whether the experts are proposing to testify about matters growing naturally and directly out of research they have conducted independent of the litigation, or whether they have developed their opinions expressly for purposes of testifying. That an expert testifies for money does not necessarily cast doubt on the reliability of his testimony, as few experts appear in court merely as an eleemosynary gesture. But in determining whether proposed expert testimony amounts to good science, we may not ignore the fact that a scientist’s normal workplace is the lab or the field, not the courtroom or the lawyer’s office.5

That an expert testifies based on research he has conducted independent of the litigation provides important, objective proof that the research comports with the dictates of good science. See Peter W. Huber, Galileo’s Revenge: Junk Science in the Courtroom 206-09 (1991) (describing how the prevalent practice of expert-shopping leads to bad science). For one thing, experts whose findings flow from existing research are less likely to have been biased toward a particular conclusion by the promise of remuneration; when an expert prepares reports and findings before being hired as a witness, that record will limit the degree to which he can tailor his testimony to serve a party’s interests. Then, too, independent research carries its own indicia of reliability, as it is conducted, so to speak, in the usual course of business and must normally satisfy a variety of standards to attract funding and institutional support. Finally, there is usually a limited number of scientists actively conducting research on the very subject that is germane to a particular case, which provides a natural constraint on parties’ ability to shop for experts who will come to the desired conclusion. That the testimony proffered by an expert is based directly on legitimate, preexisting research unrelated to the litigation provides the most persuasive basis for concluding that the opinions he expresses were “derived by the scientific method.”

We have examined carefully the affidavits proffered by plaintiffs’ experts, as well as the testimony from prior trials that plaintiffs have introduced in support of that testimony, and find that none of the experts based his testimony on preexisting or independent research. While plaintiffs’ scientists are all experts in their respective fields, none claims to have studied the effect of Bendectin on limb reduction defects before being hired to testify in this or related cases.

If the proffered expert testimony is not based on independent research, the party *1318 proffering it must come forward with other objective, verifiable evidence that the testimony is based on “scientifically valid principles.” One means of showing this is by proof that the research and analysis supporting the proffered conclusions have been subjected to normal scientific scrutiny through peer review and publication.6 Huber, Galileo’s Revenge at 209 (suggesting that “[t]he ultimate test of [a scientific expert’s] integrity is her readiness to publish and be damned”).

Peer review and publication do not, of course, guarantee that the conclusions reached are correct; much published scientific research is greeted with intense skepticism and is not borne out by further research. But the test under Daubert is not the correctness of the expert’s conclusions but the soundness of his methodology. See n. 11 infra. That the research is accepted for publication in a reputable scientific journal after being subjected to the usual rigors of peer review is a significant indication that it is taken seriously by other scientists, i.e., that it meets at least the minimal criteria of good science. Daubert, — U.S. at -, 113 S.Ct. at 2797 (“[S]ubmission to the scrutiny of the scientific community is a component of ‘good science.’”). If nothing else, peer review and publication “increase the likelihood that substantive flaws in methodology will be detected.” Daubert, — U.S. at -, 113 S.Ct. at 2797.7

Bendectin litigation has been pending in the courts for over a decade, yet the only review the plaintiffs’ experts’ work has received has been by judges and juries, and the only place their theories and studies have been published is in the pages of federal and state reporters.8 None of the plaintiffs’ experts has published his work on Bendectin in a scientific journal or solicited formal review by his colleagues. Despite the many years the controversy has been brewing, no one in the scientific community — except defendant’s experts — has deemed these studies worthy of verification, refutation or even comment. It’s as if there were a tacit understanding within the scientific community that what’s going on here is not science at all, but litigation.9

Establishing that an expert’s proffered testimony grows out of pre-litigation research or that the expert’s research has been subjected to peer review are the two principal ways the proponent of expert testimony can show that the evidence satisfies the first prong of Rule 702.10 Where such evidence is *1319 unavailable, the proponent of expert scientific testimony may attempt to satisfy its burden through the testimony of its own experts. For such a showing to be sufficient, the experts must explain precisely how they went about reaching their conclusions and point to some objective source — a learned treatise, the policy statement of a professional association, a published article in a reputable scientific journal or the like — to show that they have followed the scientific method, as it is practiced by (at least) a recognized minority of scientists in their field. See United States v. Rincon, 28 F.3d 921, 924 (9th Cir.1994) (research must be described “in sufficient detail that the district court [can] determine if the research was scientifically valid”).11

Plaintiffs have made no such showing. . . . 

Fed. Rule of Evidence 702

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied the principles and methods to the facts of the case.

New Amendment to Fed. R. of Evidence 702 and Committee Note

The Evidence Rules Committee recommended an amendment to Rule 702 in June 2022 that could go into effect in 2023 if approved by the Judicial Conference and the Supreme Court, and not vetoed by Congress. The Report (available here) introducing the amendment explains why the amendment was made (note: you don't need to read this explanation, but please do read the amended rule and the committee notes):

The Committee has been researching and discussing the possibility of an amendment to Rule 702 for five years. The project began with a Symposium on forensic experts and Daubert, held at Boston College School of Law in October, 2017. That Symposium addressed, among other things, the challenges to forensic evidence raised in a report by the President’s Council of Advisors on Science and Technology. A Subcommittee on Rule 702 was appointed to consider possible treatment of forensic experts, as well as the weight/admissibility question discussed below. The Subcommittee, after extensive discussion, recommended against certain courses of action. The Subcommittee found that: 1) It would be difficult to draft a freestanding rule on forensic expert testimony, because any such amendment would have an inevitable and problematic overlap with Rule 702; and 2) It would not be advisable to set forth detailed requirements for forensic evidence either in text or Committee Note because such a project would require extensive input from the scientific community, and there is substantial debate about what requirements are appropriate.

The full Committee agreed with these suggestions. But the Subcommittee did express interest in considering an amendment to Rule 702 that would focus on one important aspect of forensic expert testimony --- the problem of overstating results (for example, an expert claiming that her opinion has a “zero error rate”, where that conclusion is not supportable by the expert’s methodology). The Committee heard extensively from DOJ on the important efforts it is now employing to regulate the testimony of its forensic experts, and to limit possible overstatement. The Committee considered a proposal to add a new subdivision (e) to Rule 702 that would essentially prohibit any expert from drawing a conclusion overstating what could actually be concluded from a reliable application of a reliable methodology. But a majority of the members decided that the amendment would be problematic, because Rule 702(d) already requires that the expert must reliably apply a reliable methodology. If an expert overstates what can be reliably concluded (such as a forensic expert saying the rate of error is zero), then the expert’s opinion should be excluded under Rule 702(d). The Committee was also concerned about the possible unintended consequences of adding an overstatement provision that would be applied to all experts, not just forensic experts.


The Committee, however, unanimously favored a slight change to existing Rule 702(d) that would emphasize that the court must focus on the expert’s opinion, and must find that the opinion actually proceeds from a reliable application of the methodology. The Committee unanimously approved a proposal—released for public comment in August 2021—that would amend Rule 702(d) to require the court to find that “the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.” As the Committee Note elaborates: “A testifying expert’s opinion must stay within the bounds of what can be concluded by a reliable application of the expert’s basis and methodology.” The language of the amendment more clearly empowers the court to pass judgment on the conclusion that the expert has drawn from the methodology. Thus the amendment is consistent with General Electric Co. v. Joiner, 522 U.S. 136 (1997), in which the Court declared that a trial court must consider not only the expert’s methodology but also the expert’s conclusion; that is because the methodology must not only be reliable, it must be reliably applied.


Finally, the Committee resolved to respond to the fact that many courts have declared that the reliability requirements set forth in Rule 702(b) and (d) --- that the expert has relied on sufficient facts or data and has reliably applied a reliable methodology --- are questions of weight and not admissibility, and more broadly that expert testimony is presumed to be admissible. These statements misstate Rule 702, because its admissibility requirements must be established to a court by a preponderance of the evidence. The Committee concluded that in a fair number of cases, the courts have found expert testimony admissible even though the proponent has not satisfied the Rule 702(b) and (d) requirements by a preponderance of the evidence --- essentially treating these questions as ones of weight rather than admissibility, which is contrary to the Supreme Court’s holdings that under Rule 104(a), admissibility requirements are to be determined by the court under the preponderance standard.


Initially, the Committee was reluctant to propose a change to the text of Rule 702 to address these mistakes as to the proper standard of admissibility, in part because the preponderance of the evidence standard applies to almost all evidentiary determinations, and specifying that standard in one rule might raise negative inferences as to other rules. But ultimately the Committee unanimously agreed that explicitly weaving the Rule 104(a) standard into the text of Rule 702 would be a substantial improvement that would address an important conflict among the courts.


While it is true that the Rule 104(a) preponderance of the evidence standard applies to Rule 702 as well as other rules, it is with respect to the reliability requirements of expert testimony that many courts are misapplying that standard. Moreover, it takes some effort to determine the applicable standard of proof --- Rule 104(a) does not mention the applicable standard of proof, requiring a resort to case law. And while Daubert mentions the standard, Daubert does so only in a footnote in the midst of much discussion about the liberal standards of the Federal Rules of Evidence.


Consequently, the Committee unanimously approved an amendment for public comment that would explicitly add the preponderance of the evidence standard to Rule 702(b)-(d). The language of the proposal released for public comment required that “the proponent has demonstrated by a preponderance of the evidence” that the reliability requirements of Rule 702 have been met. The Committee Note to the proposal made clear that there is no intent to raise any negative inference regarding the applicability of the Rule 104(a) standard of proof to other rules --- emphasizing that incorporating the preponderance standard into the text of Rule 702 was made necessary by the decisions that have failed to apply it to the reliability requirements of Rule 702.


More than 500 comments were received on the proposed amendments to Rule 702. In addition, a number of comments were received at a public hearing held on the rule. Many of the comments were opposed to the amendment, and almost all of the fire was directed toward the term “preponderance of the evidence.” Some thought that “preponderance of the evidence” would limit the court to considering only admissible evidence at the Daubert hearing. Others thought that the term represented a shift from the jury to the judge as factfinder. By contrast, commentators who supported the amendment argued that the amendment should go further and clarify that it is the court, not the jury, that decides admissibility.


The Committee carefully considered the public comments. The Committee does not agree that the preponderance of the evidence standard would limit the court to considering only admissible evidence; the plain language of Rule 104(a) allows the court deciding admissibility to consider inadmissible evidence. Nor did the Committee believe that the use of the term preponderance of the evidence would shift the factfinding role from the jury to the judge, for the simple reason that, when it comes to making preliminary determinations about admissibility, the judge is and always has been a factfinder. But while disagreeing with these comments, the Committee recognized that it would be possible to replace the term “preponderance of the evidence” with a term that would achieve the same purpose while not raising the concerns (valid or not) mentioned by many commentators. The Committee unanimously agreed to change the proposal as issued for public comment to provide that the proponent must establish that it is “more likely than not” that the reliability requirements are met. This standard is substantively identical to “preponderance of the evidence” but it avoids any reference to “evidence” and thus addresses the concern that the term “evidence” means only admissible evidence.


The Committee was also convinced by the suggestion in the public comment that the rule should clarify that it is the court and not the jury that must decide whether it is more likely than not that the reliability requirements of the rule have been met. Therefore, the Committee unanimously agreed with a change requiring that the proponent establish “to the court” that it is more likely than not that the reliability requirements have been met. The proposed Committee Note was amended to clarify that nothing in amended Rule 702 requires a court to make any findings about reliability in the absence of a proper objection. With those changes, and a few stylistic and corresponding changes to the Committee Note, the Committee unanimously voted in favor of adopting the amendments to Rule 702 for final approval.

Rule 702. Testimony by Expert Witnesses

[In the original, proposed additions were indicated in bold and deletions in strikethrough; that formatting is not reproduced here, so in subdivision (d) the deleted and new language appear run together.]

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if the proponent has demonstrated by a preponderance of the evidence that:

(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;

(b) the testimony is based on sufficient facts or data;

(c) the testimony is the product of reliable principles and methods; and

(d) the expert has reliably applied expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.

Committee Note:

Rule 702 has been amended in two respects:
(1) First, the rule has been amended to clarify and emphasize that expert testimony may not be admitted unless the proponent demonstrates to the court that it is more likely than not that the proffered testimony meets the admissibility requirements set forth in the rule. See Rule 104(a). This is the preponderance of the evidence standard that applies to most of the admissibility requirements set forth in the evidence rules. See Bourjaily v. United States, 483 U.S. 171, 175 (1987) (“The preponderance standard ensures that before admitting evidence, the court will have found it more likely than not that the technical issues and policy concerns addressed by the Federal Rules of Evidence have been afforded due consideration.”); Huddleston v. United States, 485 U.S. 681, 687 (1988) (“preliminary factual findings under Rule 104(a) are subject to the preponderance-of-the-evidence standard”). But many courts have held that the critical questions of the sufficiency of an expert’s basis, and the application of the expert’s methodology, are questions of weight and not admissibility. These rulings are an incorrect application of Rules 702 and 104(a).

There is no intent to raise any negative inference regarding the applicability of the Rule 104(a) standard of proof for other rules. The Committee concluded that emphasizing the preponderance standard in Rule 702 specifically was made necessary by the courts that have failed to apply correctly the reliability requirements of that rule. Nor does the rule require that the court make a finding of reliability in the absence of objection.

The amendment clarifies that the preponderance standard applies to the three reliability-based requirements added in 2000—requirements that many courts have incorrectly determined to be governed by the more permissive Rule 104(b) standard. But it remains the case that other admissibility requirements in the rule (such as that the expert must be qualified and the expert’s testimony must help the trier of fact) are governed by the Rule 104(a) standard as well.

Some challenges to expert testimony will raise matters of weight rather than admissibility even under the Rule 104(a) standard. For example, if the court finds it more likely than not that an expert has a sufficient basis to support an opinion, the fact that the expert has not read every single study that exists may raise a question of weight and not admissibility. But this does not mean, as certain courts have held, that arguments about the sufficiency of an expert’s basis always go to weight and not admissibility. Rather it means that once the court has found it more likely than not that the admissibility requirement has been met, any attack by the opponent will go only to the weight of the evidence.

It will often occur that experts come to different conclusions based on contested sets of facts. Where that is so, the Rule 104(a) standard does not necessarily require exclusion of either side’s experts. Rather, by deciding the disputed facts, the jury can decide which side’s experts to credit. “[P]roponents ‘do not have to demonstrate to the judge by a preponderance of the evidence that the assessments of their experts are correct, they only have to demonstrate by a preponderance of evidence that their opinions are reliable. . . . The evidentiary requirement of reliability is lower than the merits standard of correctness.’” Committee Note to the 2000 amendment to Rule 702, quoting In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 744 (3d Cir. 1994).


Rule 702 requires that the expert’s knowledge “help” the trier of fact to understand the evidence or to determine a fact in issue. Unfortunately, some courts have required the expert’s testimony to “appreciably help” the trier of fact. Applying a higher standard than helpfulness to otherwise reliable expert testimony is unnecessarily strict.

(2) Rule 702(d) has also been amended to emphasize that each expert opinion must stay within the bounds of what can be concluded from a reliable application of the expert’s basis and methodology. Judicial gatekeeping is essential because just as jurors may be unable, due to lack of specialized knowledge, to evaluate meaningfully the reliability of scientific and other methods underlying expert opinion, jurors may also lack the specialized knowledge to determine whether the conclusions of an expert go beyond what the expert’s basis and methodology may reliably support.

The amendment is especially pertinent to the testimony of forensic experts in both criminal and civil cases. Forensic experts should avoid assertions of absolute or one hundred percent certainty—or to a reasonable degree of scientific certainty—if the methodology is subjective and thus potentially subject to error. In deciding whether to admit forensic expert testimony, the judge should (where possible) receive an estimate of the known or potential rate of error of the methodology employed, based (where appropriate) on studies that reflect how often the method produces accurate results. Expert opinion testimony regarding the weight of feature comparison evidence (i.e., evidence that a set of features corresponds between two examined items) must be limited to those inferences that can reasonably be drawn from a reliable application of the principles and methods. This amendment does not, however, bar testimony that comports with substantive law requiring opinions to a particular degree of certainty.

Nothing in the amendment imposes any new, specific procedures. Rather, the amendment is simply intended to clarify that Rule 104(a)’s requirement applies to expert opinions under Rule 702. Similarly, nothing in the amendment requires the court to nitpick an expert’s opinion in order to reach a perfect expression of what the basis and methodology can support. The Rule 104(a) standard does not require perfection. On the other hand, it does not permit the expert to make claims that are unsupported by the expert’s basis and methodology.

Writing Reflection #5

Please go to our Moodle Page and under "Class 5" you will find the prompt and submission folder for Writing Reflection #5.

2.3.1 OPTIONAL for Class 5

2.4 Class 6: Admissibility under the Daubert standard (Part B)

Almeciga v. Center for Investigative Reporting, Inc.

This opinion is useful because the author, Judge Rakoff, (a) explicitly tackles the relationship between Daubert and Kumho; (b) offers an application of the Daubert factors by a judge who actually understands them; and (c) provides an overview of another forensic “science”: handwriting analysis.

Erica ALMECIGA, Plaintiff, v. CENTER FOR INVESTIGATIVE REPORTING, INC., Univision Communications, Inc., Univision Noticias, Bruce Livesey, and Josiah Hooper, Defendants.

15-cv-4319 (JSR)

United States District Court, S.D. New York.

Signed 05/06/2016

*407Kevin Landau, The Landau Group, PC, New York, NY, for Plaintiff.

Thomas Rohlfs Burke, Davis Wright Tremaine LLP, San Francisco, CA, Alison Brooke Schary, Davis Wright Tremaine LLP, Washington, DC, Jeremy Adam Chase, Davis Wright Tremaine LLP, New York, NY, for Defendants.

OPINION AND ORDER

JED S. RAKOFF, United States District Judge

Before the Court is the motion of defendant Center for Investigative Reporting, Inc. (“CIR”), for judgment on the pleadings, as well as CIR’s Rule 11 motion for sanctions against plaintiff Erica Almeciga and her counsel. Subsumed within defendant’s Rule 11 motion is a Daubert motion to exclude the testimony of plaintiff’s handwriting expert, Wendy Carlson. On March 31, 2016, the Court issued a bottom-line Order granting CIR’s motion for judgment on the pleadings and dismissing the Amended Complaint with prejudice against all defendants (including defendants Livesey and Hooper). This Opinion and Order explains the reasons for that ruling, addresses CIR’s remaining motions, and directs the entry of final judgment. In particular, the Court grants defendant’s motion to exclude Carlson’s “expert” testimony, finding that handwriting analysis in general is unlikely to meet the admissibility requirements of *408Federal Rule of Evidence 702 and that, in any event, Ms. Carlson’s testimony does not meet those standards. Additionally, because the Court finds that plaintiff has fabricated the critical allegations in her Amended Complaint, the Court imposes sanctions, though because of her impecunious status, the sanctions are non-monetary in nature. The Court declines, however, to impose sanctions on her counsel.

I. Defendant CIR’s Rule 12(c) Motion for Judgment on the Pleadings

A Rule 12(c) motion is governed by the same standard as that of motions to dismiss under Rule 12(b)(6). See Cleveland v. Caplaw Enters., 448 F.3d 518, 521 (2d Cir.2006). Accordingly, to survive a motion for judgment on the pleadings, “a complaint must contain sufficient factual matter, accepted as true, to ‘state a claim to relief that is plausible on its face.’” Ashcroft v. Iqbal, 556 U.S. 662, 678, 129 S.Ct. 1937, 173 L.Ed.2d 868 (2009) (quoting Bell Atlantic Corp. v. Twombly, 550 U.S. 544, 570, 127 S.Ct. 1955, 167 L.Ed.2d 929 (2007)). As a result, for purposes of deciding defendant’s Rule 12(c) motion, the following allegations drawn from plaintiff’s Amended Complaint are assumed to be true.

Defendant CIR is an investigative reporting organization that produces reports in various media formats on such subjects as criminal justice, money and politics, and government oversight. See Amended Complaint (“Am.Compl.”) ¶ 2, ECF No. 50. In August 2012, CIR entered into a partnership with Univision Communications, Inc. (“Univision”) pursuant to which “CIR agreed to provide Univision with access to CIR stories and documentaries focusing on Latin America.” Id. ¶ 4. Plaintiff Erica Almeciga alleges that in March 2012 defendant Bruce Livesey, a producer for CIR, contacted plaintiff in connection with a story on which Livesey was working regarding plaintiff’s romantic partner at the time, Rosalio Reta. Id. ¶¶ 5, 9-13. Reta was and remains an inmate at Woodville Penitentiary in Texas and was a former member of the Los Zetas Drug Cartel, id. ¶¶ 8-9, a drug trafficking organization that is “among the most brutal in all of Mexico” and “among the most violent in the world,” id. ¶ 27.

Almeciga travelled to Woodville, Texas to meet with Livesey and his co-producer, defendant Josiah Hooper, for an interview on August 14, 2012. Id. ¶ 15. According to the Amended Complaint, Almeciga’s participation in the interview was “conditioned upon the explicit requirement” that defendants conceal her identity, id. ¶ 14, which defendants orally agreed to do, id. ¶ 16. Around the same time, Almeciga was interviewed by the Canadian Broadcasting Corporation (CBC)1 for a different story about Reta, which ultimately aired in June or July 2012 with Almeciga’s face concealed “per the Plaintiff’s demand.” Id. ¶ 11. In that interview, a reporter stated that the network could not show Almeciga’s face “for her own safety.” Id. ¶ 12.

Sometime in late 2013, CIR and Univision posted the CIR video report about Reta and the Los Zetas cartel (the “CIR Report”) to their respective YouTube channels. Id. ¶¶ 19-20. The CIR Report, entitled “I was a Hitman for Miguel Trevino,” id. ¶ 5, has since been viewed over 250,000 times on CIR’s YouTube channel and over 3,000,000 times on Univision’s *409YouTube Channel. Id. ¶¶ 29-30. Plaintiff was featured in the report without her identity concealed. Plaintiff claims that, as a result of this alleged breach of contract, she has “endured public humiliation, demeaning and often threatening remarks from the viewers, as well as the overwhelming fear that [the] Los Zetas cartel ... may take retribution against her.” Id. ¶ 31. She has “move[d] to different locations in an effort to avoid interaction with outsiders,” has “developed paranoia,” and “has been treated for depression and Post Traumatic Stress Disorder.” Id. ¶ 32.

In August 2014, plaintiff’s counsel sent CIR a letter demanding that CIR cease and desist from showing the CIR Report without concealing Almeciga’s identity. Id. ¶ 33. In response, defendant produced a standard release form (the “Release”) purportedly signed by plaintiff, authorizing CIR to use plaintiff’s “name, likeness, image, voice, biography, interview, performance and/or photographs or films taken of [her] ... in connection with the Project.” Id.; Def. CIR’s Answer to Am. Compl., Ex. A, ECF No. 48-1. Plaintiff denies having ever seen or signed the Release. See Am. Compl. ¶ 34.

On April 23, 2015, plaintiff filed this action in New York Supreme Court against defendants CIR, Livesey, Hooper, Univision, and Univision Noticias, asserting a breach-of-contract claim against CIR, fraud and fraudulent-concealment claims against CIR, Livesey, and Hooper, and a negligence claim against Univision. Plaintiff subsequently added unjust enrichment claims against CIR and Univision in the operative Amended Complaint filed on July 24, 2015.

On June 4, 2015, CIR, with the consent of Hooper and Livesey, removed the action to this Court, asserting that the Univision defendants, both of which are citizens of New York, were fraudulently joined, and that, without them, the Court had diversity jurisdiction. On June 26, the Univision defendants moved to dismiss the claims against them, and, on July 1, plaintiff moved to remand. The Court denied plaintiff’s motion, finding that the Univision defendants were not properly joined under 28 U.S.C. § 1441(b)(2) because plaintiff’s claims against Univision failed as a matter of law. See Memorandum Order dated Aug. 17, 2015, at 6-16, ECF No. 49. For the same reason, the Court granted the Univision defendants’ motion to dismiss with prejudice. Id. at 16.

CIR then filed the instant Rule 12(c) motion, contending that plaintiff’s breach of contract claim must be dismissed because it is barred by New York’s Statute of Frauds and that plaintiff’s remaining fraud claims and unjust enrichment claim must be dismissed because they are duplicative of her barred breach of contract claim and impermissibly attempt to circumvent the Statute of Frauds.

New York’s Statute of Frauds renders “void” any oral contract that “[b]y its terms is not to be performed within one year from the making thereof or the performance of which is not to be completed before the end of a lifetime.” N.Y. Gen. Oblig. Law § 5-701(a)(1). Put differently, if a contract is not capable of complete performance within one year, it must be in writing to be enforceable.

Here, the alleged oral agreement entered into by plaintiff and defendants was by its (alleged) terms intended to apply in perpetuity. Plaintiff does not plead that defendants’ agreement to conceal Almeciga’s identity was in any way limited in duration; indeed, reading such a limitation into the agreement would frustrate its purpose given the severe consequences of breach that plaintiff alleges. See Robins v. Zwirner, 713 F.Supp.2d 367, 375 *410(S.D.N.Y.2010) (where oral agreement was premised on third party “never learning” of a given fact, the agreement “could not be fully performed within one year” and was therefore barred by the Statute of Frauds).

Plaintiff, misapprehending the Statute of Frauds, argues that the “contract at issue was not only ‘capable’ of being performed within one year, but that the contract was actually performed by Plaintiff within one year of its making.” Pl. Erica Almeciga’s Mem. of Law in Opp. to CIR Defs. Mot. for J. on the Pleadings (“Pl.’s Opp.”) at 3, ECF No. 59. On plaintiff’s view, the fact that plaintiff upheld her end of the bargain to participate in the interview (within one year) precludes any Statute of Frauds argument. Id. at 4. Plaintiff thus appears to be laboring under the mistaken impression that the Statute of Frauds is concerned with partial performance of an oral contract. It is not. Rather, it requires that an oral agreement be capable of complete performance within a year to be enforceable.

New York law is well settled on this point. See, e.g., Cron v. Hargro Fabrics, Inc., 91 N.Y.2d 362, 368, 670 N.Y.S.2d 973, 694 N.E.2d 56 (1998) (the Statute of Frauds “relates to the performance of the contract and not just of one party thereto”); Guilbert v. Gardner, 480 F.3d 140, 151 (2d Cir.2007) (“[F]ull performance by all parties must be possible within a year to satisfy the Statute of Frauds.” (internal quotation marks omitted)). “[T]he fact that the plaintiff has fully completed her performance under the contract as that contract is described by her is of no moment” where “the defendant’s performance ... will continue in perpetuity,” as it would here under the alleged contract. Myers v. Waverly Fabrics, 101 A.D.2d 777, 475 N.Y.S.2d 860, 861 (1st Dep’t 1984), aff'd in part sub nom. Meyers v. Waverly Fabrics, Div. of F. Schumacher & Co., 65 N.Y.2d 75, 489 N.Y.S.2d 891, 479 N.E.2d 236 (1985). Nor would it matter if defendants had performed for a year or more after entering into the alleged agreement and then breached. The dispositive point is that defendants could not complete their performance within one year since their obligation was an ongoing one.2

Turning to plaintiff’s fraud claim (Count Two), under New York law plaintiff must plead: “(1) a material misrepresentation or omission of fact (2) made by defendant with knowledge of its falsity (3) and intent to defraud; (4) reasonable reliance on the part of the plaintiff; and (5) resulting damage to the plaintiff.” Crigger v. Fahnestock & Co., 443 F.3d 230, 234 (2d Cir.2006).

However, New York law bars fraud claims that “arise[ ] out of the same facts as plaintiff’s breach of contract claim, with the addition only of an allegation that *411defendant never intended to perform the precise promises spelled out in the contract between the parties.” Telecom Int’l Am., Ltd. v. AT & T Corp., 280 F.3d 175, 196 (2d Cir.2001). In such circumstances, “the fraud claim is redundant and plaintiff’s sole remedy is for breach of contract.” Id. (internal quotation mark omitted). In other words, a plaintiff may not “bootstrap a breach of contract claim into a fraud claim by simply including in his complaint an allegation that defendant never intended to uphold his end of the deal.” Sudul v. Computer Outsourcing Servs., 868 F.Supp. 59, 62 (S.D.N.Y.1994). Nor are plaintiffs permitted to “avoid the statute of frauds by calling the breach of contract claim a fraud claim.” Massey v. Byrne, 112 A.D.3d 532, 977 N.Y.S.2d 242, 243 (1st Dep’t 2013); see also Gora v. Drizin, 300 A.D.2d 139, 752 N.Y.S.2d 297, 298-99 (1st Dep’t 2002) (“Defendant cannot avoid [the Statute of Frauds] by re-characterizing the claim as one for fraud.... ”).

Trying to avoid this bar, plaintiff submits that her fraud claim is premised, not on the same underlying facts as her breach of contract claim, but rather on the allegedly forged Release. This characterization is at odds with her Amended Complaint, which, in pleading the fraud claim, alleges that defendants “provided Plaintiff and Reta with intentionally misleading information, such as promises to conceal her identity ... which the Defendants were reasonably certain promoted Ms. Almeciga’s reliance and ultimate participation” (Am. Compl. ¶ 53 (emphasis added)) and that defendants’ “promise to Plaintiff that her identity would be protected, and that her face would not appear in their Report, was made without any intention of performance” (id. ¶ 54 (emphasis added)). To be sure, plaintiff also alleges that defendants forged the Release, id. ¶ 44, and that defendants “benefited substantially by using the Release as justification to air the interview of [p]laintiff without concealing her identity,” id. ¶ 46. But that does not state any cause of action by itself, since, among much else, plaintiff plainly did not rely in any respect on the Release she maintains she never signed and was a forgery. Rather, the gravamen of her fraud claim is that defendants entered into an oral (contractual) agreement with plaintiff that they had no intention of honoring, which is precisely the sort of duplicative fraud claim that is not cognizable under New York law.

Plaintiff’s fraudulent concealment claim (Count Three) must also be dismissed as duplicative of her breach of contract claim. To plead fraudulent concealment under New York law, plaintiff must allege “(1) that the defendant failed to meet its duty to disclose ... (2) that the defendant had an intent to defraud or scienter, (3) [that] there was reliance on the part of the plaintiff, and (4) damages.” Brass v. Am. Film Techs., Inc., 987 F.2d 142, 152 (2d Cir.1993). “A duty to disclose information can arise under New York law where (1) there is a fiduciary relationship between the parties; (2) one party makes a partial or ambiguous statement that requires additional disclosure to avoid misleading the other party; or (3) one party to a transaction possesses superior knowledge of the facts not readily available to the other, and knows that the other is acting on the basis of mistaken knowledge.” Fertitta v. Knoedler Gallery, LLC, 2015 WL 374968, at *11 (S.D.N.Y. Jan. 29, 2015) (internal quotation mark omitted). “However, the intention to breach does not give rise to a duty to disclose. Instead, the duty to disclose must exist separately from the duty to perform under the contract.” TVT Records v. Island Def Jam Music Grp., 412 F.3d 82, 91 (2d Cir.2005). Here, plaintiff appears to plead that defen*412dants were under a duty to disclose that plaintiff’s identity would not be concealed in the CIR Report (i.e., their intention to breach) “based upon their relationship with Ms. Almeciga regarding her appearance in the Report.” Am. Compl. ¶ 65. Thus, plaintiff’s fraudulent concealment claim is impermissibly duplicative of her breach of contract claim.

Although plaintiff’s pleading of her fraudulent concealment claim does not even mention the Release, plaintiff once again pivots in her briefing and argues that defendants were under a duty to disclose to plaintiff their intent to use a forged release to license the CIR Report to Univision as well as their intent to use the Release to avoid litigation with plaintiff. See Pl.’s Opp. at 13. This argument fails, however, because the Release is simply the mechanism by which defendants allegedly concealed their breach of contract: it cannot support an independent fraud claim under the circumstances. Indeed, it is well settled under New York law that “alleged concealment of a breach is insufficient to transform what would normally be a breach of contract action into one for fraud.” Rosenblatt v. Christie, Manson & Woods Ltd., 2005 WL 2649027, at *10 (S.D.N.Y. Oct. 14, 2005) (internal quotation mark omitted); see also, e.g., Compagnia Importazioni Esportazioni Rapresentanze v. L-3 Commc’ns Corp., 2007 WL 2244062, at *6 (S.D.N.Y. July 31, 2007) (dismissing fraud claims on this basis); Ray Larsen Assocs., Inc. v. Nikko Am., Inc., 1996 WL 442799, at *5 (S.D.N.Y. Aug. 6, 1996) (same); Fisher v. Big Squeeze (N.Y.), Inc., 349 F.Supp.2d 483, 489 (E.D.N.Y.2004) (dismissing fraudulent concealment claim on this basis where defendants were alleged to have fraudulently calculated profits subject to distribution under contract through the creation of false or misleading financial statements).

The facts of IKEA North American Services, Inc. v. Northeast Graphics, Inc., 56 F.Supp.2d 340 (S.D.N.Y.1999) are instructive. There, plaintiff IKEA engaged the defendants (a graphic designer and mass-mailer distributor) to produce a holiday brochure to be mailed to millions of homes throughout the United States and Canada. Id. at 341-42. In response to inquiries as to the status of the project, the defendants assured IKEA and its agent that the project was proceeding apace and created thirteen fraudulent postal register statements purportedly confirming the mailing of nearly 3 million brochures. Id. at 342. Applying the principle that “attempted concealments of contractual breach” do not give rise to independent actions for fraud, the Court granted defendants’ motion to dismiss the fraud claims with prejudice. Id. at 342-43. Like the forged postal register statements in IKEA, the alleged forged Release at issue here constitutes, at worst, an attempted concealment of contractual breach. As such, plaintiff’s fraud claims are no more than dressed-up breach of contract claims and are hereby dismissed.

With respect to plaintiff’s unjust enrichment claim against defendant CIR, plaintiff posits that because there is a “bona fide dispute concerning the existence of the contract at issue ... [she] is not required to elect her remedies, and may proceed on her unjust enrichment claim as well as her breach of contract claim.” Pl.’s Opp. at 15. While that may be true in the main, there are exceptions, and this case involves one of them: “A party may not circumvent the Statute of Frauds by repleading an already barred breach of contract claim as a claim for unjust enrichment.” Four Star Capital Corp. v. Nynex Corp., 183 F.R.D. 91, 108 *413(S.D.N.Y.1997); see also Almazan v. Almazan, 2015 WL 500176, at *13 (S.D.N.Y. Feb. 4, 2015) (“[P]laintiffs may not pursue unjust enrichment claims if such claims are based on an oral agreement that is barred by the Statute of Frauds.” (internal quotation marks omitted)); KJ Roberts & Co. v. MDC Partners Inc., 2014 WL 1013828, at *12 (S.D.N.Y. Mar. 14, 2014) (“[T]he Statute of Frauds applies to the Alleged Agreement; therefore, Plaintiff cannot use a theory of quantum meruit or unjust enrichment to escape it.”), aff'd, 605 Fed.Appx. 6 (2d Cir.2015). If the law were otherwise, plaintiffs could easily achieve “an end-run around the statute of frauds.” Komolov v. Segal, 40 Misc.3d 1228(A), 2013 WL 4411232, at *3 (N.Y.Sup.Ct. Aug. 14, 2013). Plaintiff cites no case in which a court sustained an unjust enrichment claim where a breach of contract claim had been dismissed under the Statute of Frauds.3 Accordingly, plaintiff’s unjust enrichment claim is hereby dismissed.

While the Rule 12(c) motion was brought on behalf of defendant CIR, and not defendants Livesey or Hooper (neither of whom had been served at the time the motion was brought), the claims against Livesey and Hooper fail for the same reasons they fail against defendant CIR, and the Court has ample authority in such circumstances to dismiss them as to these defendants as well. See Antidote Int’l Films, Inc. v. Bloomsbury Pub., PLC, 467 F.Supp.2d 394, 399 (S.D.N.Y.2006) (“[W]hile dismissing a complaint as to a non-moving defendant is not an ordinary practice, a district court may dismiss claims sua sponte for failure to state a claim, at least so long as the plaintiff had notice and an opportunity to be heard on the issue.” (internal quotation marks omitted)). Accordingly, plaintiff’s Amended Complaint is dismissed with prejudice as against all remaining defendants.

II. Defendant CIR’s Rule 11 Motion for Sanctions

Defendant CIR seeks sanctions against plaintiff for allegedly perpetrating a fraud upon the Court, and against her counsel for willfully blinding himself to her misrepresentations. Since the outcome of defendant’s Rule 11 motion is affected by the admissibility vel non of the proffered opinion of plaintiff’s handwriting expert that the Release was forged, the Court first addresses the admissibility of that expert opinion.

A. The admissibility of the proffered expert testimony under Rule 702.

Shortly before the expert disclosure deadline in this case, plaintiff engaged a reputed handwriting expert, Wendy Carlson, to provide an opinion on the authenticity of plaintiff’s signature on the Release.4 The signature on the Release appears as follows:

*414

See Decl. of Thomas Burke dated Sept. 14, 2015 (“Sept. 14 Burke Decl.”), Ex. 2, ECF No. 58-2.5

On August 18, 2015, plaintiff’s counsel sent an email to Carlson asking her to provide a Rule 26 report by the next day that analyzed the Release against purported “known” signatures of the plaintiff. Plaintiff’s counsel provided numerous purported “known” signatures to Carlson (all of which were either dated after the initiation of the parties’ dispute or were undated), a representative example of which is as follows:

Sept. 14 Burke Decl., Ex. 16 (“Carlson Expert Report”) at Ex. K1. After comparing these “known” signatures to the signature on the Release, Carlson opined, in an expert report submitted August 20, 2015, that “[b]ased on [her] scientific examination” the signature on the Release was a forgery. Id. at 7.

On December 4, 2015, the Court held a combined evidentiary hearing on CIR’s Rule 11 motion and a “Daubert” hearing on the admissibility of Carlson’s testimony. See Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993). At the hearing, Carlson admitted that she had no basis for knowing (other than the representation of plaintiff’s counsel) that the purported “known” signatures she had received were actually plaintiff’s, such that she could not definitively state whether the “known” signatures had been forged or whether the Release had been forged. See Transcript dated Dec. 4, 2015 (“Dec. 4 Transcript”), at 55-57, ECF No. 88. For that reason, the Court asked plaintiff to write her signature on a piece of paper 10 times in open court (the “In-Court Signatures”). Id. at 90. Although Carlson observed that plaintiff was writing these signatures “very slow[ly],” id. at 104, nonetheless, at the Court’s request and on consent of all involved, Carlson prepared a supplemental report following the hearing, submitted on December 9, 2015, in which she found that the author of the In-Court Signatures (i.e., plaintiff) was the author of the purported “known” signatures that formed the basis of Carlson’s initial expert report, and that, once again, her opinion, “[b]ased on [her] scientific examination,” was that the signature on the Release was made by someone other than plaintiff, i.e., was a forgery, see *415Forensic Handwriting and Document Examiner Expert Report Suppl. (“Supplemental Expert Report”) at 8, ECF No. 87.6

In order for expert testimony to be admissible under the Federal Rules of Evidence, Rule 702 requires that an “expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue,” that “the testimony is based on sufficient facts or data” and “is the product of reliable principles and methods,” and that “the expert has reliably applied the principles and methods to the facts of the case.” Fed.R.Evid. 702. “[T]he proponent of expert testimony has the burden of establishing by a preponderance of the evidence that the admissibility requirements of Rule 702 are satisfied.” United States v. Williams, 506 F.3d 151, 160 (2d Cir.2007).

While “Rule 702 embodies a liberal standard of admissibility for expert opinions,” Nimely v. City of New York, 414 F.3d 381, 395 (2d Cir.2005), “nothing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence that is connected to existing data only by the ipse dixit of the expert.” Gen. Elec. Co. v. Joiner, 522 U.S. 136, 146, 118 S.Ct. 512, 139 L.Ed.2d 508 (1997). Rather, Daubert has “charged trial judges with the responsibility of acting as gatekeepers to exclude unreliable expert testimony” and junk science from the courtroom. Fed.R.Evid. 702 advisory committee’s note to 2000 amendment. With respect to expert opinions purporting to offer scientific conclusions in particular, Daubert states that courts should ordinarily pay particular attention to whether the expert’s methodology has or can be tested, whether it has been subject to peer review and publication, whether it has a known error rate, whether it is subject to internal controls and standards, and whether it has received general acceptance in the relevant scientific community. See Daubert, 509 U.S. at 593-94, 113 S.Ct. 2786. While expert testimony that does not fare well under these particular standards may still sometimes be admissible as non-scientific expert testimony pursuant to the doctrine of Kumho Tire Co. v. Carmichael, 526 U.S. 137, 119 S.Ct. 1167, 143 L.Ed.2d 238 (1999) (discussed infra), it is the Court’s role to ensure that a given discipline does not falsely lay claim to the mantle of science, cloaking itself with the aura of unassailability that the imprimatur of “science” confers and thereby distorting the truth-finding process.7 There have been too many pseudo-scientific disciplines that have since been exposed as profoundly flawed, unreliable, or baseless for any Court to take this role lightly.8

*416Handwriting analysis, or “forensic document examination” as its practitioners prefer to call it, involves the “asserted ability to determine the authorship vel non of a piece of handwriting by examining the way in which the letters are inscribed, shaped and joined, and comparing it to exemplars of a putative author’s concededly authentic handwriting.” D. Michael Risinger, Handwriting Identification § 33:1, in 4 Modern Scientific Evidence: The Law and Science of Expert Testimony (David L. Faigman et al. eds., 2015-2016) (footnote omitted). Before assessing the discipline under Daubert, some historical context is in order. Unlike, say, physics or chemistry, or even DNA analysis, handwriting identification is not a field that arose from scientific inquiry or that developed independent of the courtroom. It was a purely forensic development, intended to deal with cases like this one in which the question of whether someone authored a particular document might be a dispositive issue in the case, or even make the difference between a guilty verdict or an acquittal. Id. § 33:3 (“[W]hen expert handwriting identification testimony was first declared admissible in America and England, there were no experts .... When the legal system agreed to accept such testimony, however, it created a demand which was to be met by people who turned their entire attention to filling it.”); Jennifer L. Mnookin, Scripting Expertise: The History of Handwriting Identification Evidence and the Judicial Construction of Reliability, 87 Va. L.Rev. 1723, 1727 (2001) (“Handwriting identification is an unusual form of expert evidence because it was the first kind of expertise that was primarily forensic, invented specifically for use in the legal arena.”).

Initially, testimony by putative handwriting experts was met with skepticism by U.S. courts. Through the late 19th century, many jurisdictions did not admit it at all and the enterprise was viewed with suspicion. See Risinger, Handwriting Identification § 33:3; Hoag v. Wright, 174 N.Y. 36, 42, 66 N.E. 579 (1903) (“The opinions of experts upon handwriting, who testify from comparison only, are regarded by the courts as of uncertain value, because in so many cases where such evidence is received witnesses of equal honesty, intelligence, and experience reach conclusions not only diametrically opposite, but always in favor of the party who called them.”). To persuade the courts that their expertise was legitimate, the early handwriting experts therefore “claim[ed] the mantle of science”:

*417The experts argued that they had well-developed methods by which they could distinguish the penmanship of one writer from that of another. Their knowledge, they claimed, resulted not simply from experience or innate talent, but from careful application of well-honed procedures, rigorous attention to methodology, and the precision and detail of measurements. Aspiring handwriting experts thus drew upon the arsenal of scientific methods, but, equally important, they invoked the rhetoric of science to buttress their own authority. By proclaiming themselves scientific, they hoped to persuade judges and juries that their conclusions were both objective and warranted.

Mnookin, Scripting Expertise, 87 Va. L.Rev. at 1786-87.

Against this background, the tide shifted in favor of admissibility when Albert Osborn, widely recognized as a progenitor of modern forensic document examination, embarked with John Henry Wigmore (of Wigmore on Evidence fame) on a decades-long campaign to promote handwriting analysis as a scientific endeavor. Risinger, Handwriting Identification § 33:3 (“Osborn’s book, Osborn’s personality, and Osborn’s relationship with Wigmore, are the cornerstones upon which respect for asserted handwriting identification expertise in the United States was built.”). The vision was perhaps best realized when Osborn (among other handwriting experts) testified that the man accused of kidnapping and murdering Charles Lindbergh’s baby had written the ransom notes at issue. “Osborn became a celebrity” and the place of handwriting analysis in the courtroom became firmly entrenched: “In the half century after the [Lindbergh case], no reported opinion rejected handwriting expertise, nor was much skepticism displayed towards it.” Id. This was despite some highly-publicized instances where handwriting experts got it wrong. Indeed, when the notorious journalist Clifford Irving convinced a book publisher in the early 1970’s that Howard Hughes had authorized him to write Hughes’s autobiography, it was Osborn’s firm that mistakenly authenticated Irving’s forgeries of Hughes’s handwriting as genuine, concluding that it was “impossible” that anyone other than Hughes could have authored the forgeries. See Robert R. Bryan, The Execution of the Innocent: The Tragedy of the Hauptmann-Lindbergh and Bigelow Cases, 18 N.Y.U. Rev. L. & Soc. Change 831, 844 n.48 (1991). Thereafter, however, Irving confessed that he had forged Hughes’s signature and pled guilty to a federal felony arising therefrom. Id.; see also Lawrence Van Gelder, Irving Sentenced to Year Term, N.Y. Times, June 17, 1972, at 1, 34.

In recent years, however, Daubert has spurred some courts to scrutinize handwriting analysis anew, and several district courts have found testimony from purported handwriting experts inadmissible under Daubert. See, e.g., United States v. Hidalgo, 229 F.Supp.2d 961, 966 (D.Ariz.2002) (collecting cases); see also United States v. Hines, 65 F.Supp.2d 62, 70-71 (D.Mass.1999) (admitting such testimony “to the extent that [the expert] restricts her testimony to similarities or dissimilarities between the known exemplars and the robbery note” but prohibiting the expert from “render[ing] an ultimate conclusion on who penned the unknown writing”). At least as many courts, however, continue to fully admit testimony by handwriting experts, often invoking the field’s historical pedigree and affirming the validity of the field as a general matter. See, e.g., United States v. Crisp, 324 F.3d 261, 271 (4th Cir.2003) (noting that “handwriting comparison testimony has a long history of admissibility” and finding that the “*418fact that handwriting comparison analysis has achieved widespread and lasting acceptance in the expert community gives us the assurance of reliability that Daubert requires”). While the reasoning of cases such as Crisp may be questioned — since, even if handwriting expertise were always admitted in the past (which it was not), it was not until Daubert that the scientific validity of such expertise was subject to any serious scrutiny — such pedigree often provided a vehicle for affirming a district judge’s admission of handwriting analysis on the ground that it was not an abuse of discretion.9

In the Second Circuit, however, the issue of the admissibility and reliability of handwriting analysis is an open one. See United States v. Adeyi, 165 Fed.Appx. 944, 945 (2d Cir.2006) (“Our circuit has not authoritatively decided whether a handwriting expert may offer his opinion as to the authorship of a handwriting sample, based on a comparison with a known sample.”); United States v. Brown, 152 Fed.Appx. 59, 62 (2d Cir.2005) (same). As such, the Court is free to consider how well handwriting analysis fares under Daubert and whether Carlson’s testimony is admissible, either as “science” or otherwise.

Carlson, like other handwriting experts, purports to use the “ACE-V” methodology in conducting handwriting comparison, an acronym for “Analyze, Compare, Evaluate, and Verify.”10 Carlson Expert Report at 6. In her report, Carlson explains this methodology in largely conclusory terms:

The identification of any signature or handwriting is based on the agreement, without unexplainable difference, of the handwriting characteristics displayed. These characteristics include the form of the letters, the beginning, connecting, and ending strokes, the proportions of letters, both inter-letter and intra-letter, the slope, size, and curvature of the writing and/or printing, the spacing and arrangement, the skill of the writer, and line quality. The alignment, positioning and outstanding significant features are other factors used to analyze, compare and evaluate. The elimination of an author is based on a lack of some or all of the above-noted comparisons.

Id. at 5.

At the Daubert hearing, Carlson elaborated on the ACE-V method as follows:

The A is analyze. I examine and analyze the purported knowns to determine that they were authored by the same person, that all the knowns were authored by the same person.
I then take the questioned signature, also enlarge that to 200 percent and do the comparison, which is C. I compare to determine similarities or dissimilarities within the writings and make a determination as to what is really significant, what is just maybe a factor of writing that needs to be taken accounted for. And then we move to E which is *419 evaluation and I take my findings of similarities and dissimilarities and evaluate the weight of the evidence that I have and make a determination as to authorship, whether similar authorship, different authorship. In many cases what I do is verification. I don’t do that with every case. With science I know that every experiment is not verified. With this case I felt like the differences were so dramatic and striking that I did not do a verification. I didn’t feel it was necessary in this matter.

Dec. 4 Transcript, at 69-70.

Carlson further explained:

[W]hat I am looking for are, again, habits that are repeatedly seen, patterns within the writing: Does this person make a loop clockwise or counterclockwise? What do the ending stroke, the beginning stroke, the connecting strokes look like? I am looking at a portion of one letter to another like ratios. One thing that I really find to be very helpful and significant are the angles in writing. For example, if I am drawing an angle from the top of maybe the first initial in the first name to the first initial in the last name, you sign your name a specific way every time so that angle is going to be very similar most every time.

Id. at 61.

On its face, this bears none of the indicia of science and suggests, at best, a form of subjective expertise. Indeed, in her testimony at the Daubert hearing, Carlson appeared to concede as much, affirming that what she was “chiefly relying on [ ] is not what we would call science in the sense of physic[s] or chemistry or biology,” but rather “experience” such that she knows what “to look for ... in a way that the everyday layperson would not.” Dec. 4 Transcript, at 63. Yet this did not stop her from stating, in her second report submitted a few days after this testimony, that her latest opinions were “[b]ased on [her] scientific examination” and “scientific methodology.” Supplemental Expert Report at 6, 8. It therefore behooves the Court to examine more specifically whether the ACE-V method of handwriting analysis, as described by Carlson, meets the common indicia of admissible scientific expertise as set forth in Daubert.

The first Daubert factor is whether the methodology has been or can be tested. See Daubert, 509 U.S. at 590, 113 S.Ct. 2786 (“[S]cience ... represents a process for proposing and refining theoretical explanations about the world that are subject to further testing and refinement.” (quoting Brief for American Association for the Advancement of Science et al. as Amici Curiae 7-8)). To this Court’s knowledge, no studies have evaluated the reliability or relevance of the specific techniques, methods, and markers used by forensic document examiners to determine authorship (as opposed to their overall ability to “get it right” — a subject discussed under the rubric of error rate, infra). For example, there are no studies, to this Court’s knowledge, that have evaluated the extent to which the angle at which one writes or the curvature of one’s loops distinguish one person’s handwriting from the next. Precisely what degree of variation falls within or outside an expected range of natural variation in one’s handwriting — such that an examiner could distinguish in an objective way between variations that indicate different authorship and variations that do not — appears to be completely unknown and untested. Ditto the extent to which such a range is affected by the use of different writing instruments or the intentional disguise of one’s natural hand or the passage of time. Such things could be tested and studied, but they have not been; and this by itself renders the field unscientific in nature. See United *420 States v. Johnsted, 30 F.Supp.3d 814, 818 (W.D.Wis.2013) (“The lack of testing also calls into question the reliability of analysts[’] highly discretionary decisions as to whether some aspect of a questioned writing constitutes a difference or merely a variation; without any proof indicating that the distinction between the two is valid, those decisions do not appear based on a reliable methodology.”).

As such, it is hardly surprising that Carlson’s expert report reads more like a series of subjective observations than a scientific analysis (e.g., “the ‘e’, ‘c’s, upper ⅛’ loop, and ‘a’s in the questioned signature are more narrow than the known signatures which display fuller, rounder letters” (Carlson Expert Report at 6)). Indeed, as noted, Carlson herself conceded as much at the Daubert hearing.

To be sure, “no one has ever doubted that there [is] information in a handwriting trace that might be used for attribution of authorship under some circumstances.” D. Michael Risinger, Appendix: Cases Involving the Reliability of Handwriting Identification Expertise Since the Decision in Daubert, 43 Tulsa L.Rev. 477, 494 (2007). The rub “is simply that we don’t know what those circumstances are, and when humans are or are not good at such attributions, regardless of their own claims at skill.” Id. Until the forensic document examination community refines its methodology, it is virtually untestable, rendering it an unscientific endeavor.

The second Daubert factor concerns whether the methodology has been subject to peer review and publication. Of course, the key question here is what constitutes a “peer,” because, just as astrologers will attest to the reliability of astrology, defining “peer” in terms of those who make their living through handwriting analysis would render this Daubert factor a charade. While some journals exist to serve the community of those who make their living through forensic document examination, numerous courts have found that “[t]he field of handwriting comparison ... suffers from a lack of meaningful peer review” by anyone remotely disinterested. United States v. Saelee, 162 F.Supp.2d 1097, 1103 (D.Alaska 2001) (“[S]ome articles are presented at professional meetings for review [but] there is no evidence that any of these articles are subjected to peer review by disinterested parties, such as academics.”). “There is no peer review by a ‘competitive, unbiased community of practitioners and academics,’ ” as would be expected in the case of a scientific field. Hines, 55 F.Supp.2d at 68 (quoting United States v. Starzecpyzel, 880 F.Supp. 1027, 1038 (S.D.N.Y.1995)); United States v. Fujii, 152 F.Supp.2d 939, 940-41 (N.D.Ill. 2000) (“[T]here has been no peer review by an unbiased and financially disinterested community of practitioners and academics ....”).

Relatedly, as the National Academy of Sciences found in a comprehensive report issued on the forensic sciences in 2009, “there has been only limited research to quantify the reliability and replicability of the practices used by trained document examiners.” Comm. on Identifying the Needs of the Forensic Science Community, Nat’l Research Council, Strengthening Forensic Science in the United States: A Path Forward [“NAS Report”] 167 (Aug. 2009). This is hardly surprising given that forensic document examination “has no academic base.” Risinger, Handwriting Identification § 33:11 n.5. Indeed, as Carlson testified at deposition, “there are no colleges or universities that offer degrees in forensic document examination.” Decl. of Thomas R. Burke dated Nov. 24, 2015 (“Nov. 24 Burke Decl.”), Ex. A at 9, ECF No. 83-1 at 6.

*421In sum, to the extent the field has been subject to any “peer” review and publication, the review has not been sufficiently robust or objective to lend credence to the proposition that handwriting comparison is a scientific discipline.

Turning to the third Daubert factor, “[t]here is little known about the error rates of forensic document examiners.” Saelee, 162 F.Supp.2d at 1103. While a handful of studies have been conducted, the results have been mixed and “cannot be said to have ‘established’ the validity of the field to any meaningful degree.” Hines, 55 F.Supp.2d at 69. Certain studies conducted by Dr. Moshe Kam, a computer scientist commissioned by the FBI to research handwriting expertise, have suggested that forensic document examiners are moderately better at handwriting identification than laypeople. For example, in one such study, the forensic document examiners correctly identified forgeries as forgeries 96% of the time and only incorrectly identified forgeries as genuine 0.5% of the time, while laypeople correctly identified forgeries as forgeries 92% of the time and incorrectly identified forgeries as genuine 6.5% of the time. Risinger, Appendix, 43 Tulsa L.Rev. at 491. Furthermore, forensic document examiners incorrectly identified genuine signatures as forgeries 7% of the time, while laypeople did so 26% of the time. Id.

Although such studies may seem to suggest that trained forensic document examiners in the aggregate do have an advantage over laypeople in performing particular tasks, not all of these results appear to be statistically significant and the methodology of the Kam studies has been the subject of significant criticism.11 In any event, in contrast to the study cited above (which involved attempted simulations of genuine signatures), the immediate task for the proffered expert in this case, as Carlson implicitly acknowledged at the Daubert hearing, was to determine whether a signature that does not look anything like plaintiff’s purported “known” signatures was or was not authored by plaintiff.12 See Liberty Media Corp. v. Vivendi Universal, S.A., 874 F.Supp.2d 169, 172 (S.D.N.Y.2012) (“Under Rule 702 and Daubert, the district court must determine whether the proposed expert testimony ‘both rests on a reliable foundation and is relevant to the task at hand.’” (quoting Daubert, 509 U.S. at 597, 113 S.Ct. 2786)). Put differ*422ently, the task at hand, so far as expertise is concerned, is to determine whether plaintiff intentionally disguised her natural handwriting in producing the “known” signatures. And in this respect, the available error rates for handwriting experts are unacceptably high.

For example, in a 2001 study in which forensic document examiners were asked to compare (among other things) the “known” signature of an individual in his natural hand to the “questioned” signature of the same individual in a disguised hand, examiners were only able to identify the association 30% of the time. Twenty-four percent of the time they were wrong, and 46% of the time they were unable to reach a result. See Risinger, Handwriting Identification § 33:34. Similarly, and strikingly, in an unpublished study conducted by the Forensic Sciences Foundation in 1984, participating labs were supplied with three handwritten letters (the “questioned” documents) and handwriting exemplars for six suspects. Two of the three letters were written by one person, who was not among the suspects for whom the examiners had exemplars, and the third letter was written by a suspect who had written his exemplars in his normal hand, but who had tried to simulate the writing of the other two letters when producing his letter. Of the 23 labs that submitted responses, 74% perceived the difference in authorship between the letters, but exactly 0% recognized that the third letter was written by a suspect who had disguised his handwriting. These results suggest that while forensic document examiners might have some arguable expertise in distinguishing an authentic signature from a close forgery, they do not appear to have much, if any, facility for associating an author’s natural handwriting with his or her disguised handwriting. See Risinger, Appendix, 43 Tulsa L.Rev. at 549 (“[T]here is absolutely no empirical evidence to support the skill claim in regard to distinguishing between disguised exemplars and normal hand exemplars independent of comparison to some ... everyday writing pre-existing the obtaining of the demand exemplars”).

As such, the known error rates, as they apply to the task at hand, cut against admission.

As for the fourth Daubert factor, the field of handwriting comparison appears to be “entirely lacking in controlling standards,” Saelee, 162 F.Supp.2d at 1104, as is well illustrated by Carlson’s own amorphous, subjective approach to conducting her analysis here. At her deposition, for example, when asked “what amount of difference in curvature is enough to identify different authorship,” Carlson vaguely responded, “[y]ou know, that’s just a part of all of the features to take into context, so I wouldn’t rely on a specific stroke to determine authorship.” Decl. of Thomas R. Burke dated Jan. 21 (“Jan. 21 Burke Decl.”), Ex. 2 at 49, ECF No. 95-2 at 43. Similarly, when asked at the Daubert hearing how many exemplars she requires to conduct a handwriting comparison, Carlson testified:

You know, that’s really — that has been up for debate for a long time. I know that a lot of document examiners, myself included, I would prefer — I ask for a half a dozen to a dozen. That at least gives me a decent sampling. Others request 25 or more. I feel like if you get too many signatures you have got so much information it is overwhelming and you tend to get lost in it.

Dec. 4 Transcript, at 62-63; see also Starzecpyzel, 880 F.Supp. at 1046 (noting that forensic document examiners “lack objective standards in regard to the number of exemplars required for an accurate determination as to genuineness”).

*423Nor is there any “agreement as to how many similarities it takes to declare a match.” Hines, 55 F.Supp.2d at 69; see also United States v. Rutherford, 104 F.Supp.2d 1190, 1193 (D.Neb.2000) (“[The forensic document examiner] testified that unlike fingerprint identification, there is no specific number of characteristics an [examiner] is required to find before declaring that a positive match has been made. Rather, [the examiner] testified that a match is declared upon the subjective satisfaction of the [examiner] performing the handwriting analysis based on his education, training, and experience.”). And because there are no recognized standards, it is impossible to “compare the opinion reached by an examiner with a standard protocol subject to validity testing.” Hines, 55 F.Supp.2d at 69.

Furthermore, “there is no standardization of training enforced either by any licensing agency or by professional tradition,” nor a “single accepted professional certifying body” of forensic document examiners. Risinger, Handwriting Identification § 33:11 n.5. Rather, training is by apprenticeship, which in Carlson’s case took the form of a two-year, part-time internet course, involving about five to ten hours of work per week under the tutelage of a mentor she met with personally when they were “able to connect.” Nov. 24 Burke Decl., Ex. A at 13, ECF No. 83-1 at 10.

As for the final Daubert factor — general acceptance in the expert community — handwriting experts “certainly find ‘general acceptance’ within their own community, but this community is devoid of financially disinterested parties.” Starzecpyzel, 880 F.Supp. at 1038. Such acceptance cannot therefore be taken for much. A more objective measure of acceptance is the National Academy of Sciences’ 2009 Report, which struck a cautious note, finding that while “there may be some value in handwriting analysis,” “[t]he scientific basis for handwriting comparisons needs to be strengthened.” NAS Report at 166-67. The Report also noted that “there may be a scientific basis for handwriting comparison, at least in the absence of intentional obfuscation or forgery” — a highly relevant caveat for present purposes. Id. at 167 (emphasis added). This is far from general acceptance.

For decades, the forensic document examiner community has essentially said to courts, “Trust us.” And many courts have. But that does not make what the examiners do science.

Of course, just because Carlson’s testimony flunks Daubert does not mean it is inadmissible under Rule 702 altogether. If Carlson has (among other requirements) “technical” or “other specialized knowledge” that “will help the trier of fact to understand the evidence or to determine a fact in issue,” her testimony may be admissible. Fed.R.Evid. 702(a). Indeed, the Supreme Court’s decision in Kumho Tire “made clear that while [Daubert’s] basic requirements of reliability — as they are now articulated in Rule 702 — apply across the board to all expert testimony, the more particular [Daubert] standards for scientific evidence need not be met when the testimony offered” is not scientific in nature. United States v. Glynn, 578 F.Supp.2d 567, 570 (S.D.N.Y.2008). “[T]he test of reliability is ‘flexible,’ and Daubert’s list of specific factors neither necessarily nor exclusively applies to all experts or in every case.” Kumho Tire, 526 U.S. at 141, 119 S.Ct. 1167.13

*424But while courts are free under Kumho Tire to apply different factors than are called for by Daubert based on what factors best “fit” the inquiry, “the particular questions that [Daubert] mentioned will often be appropriate for use in determining the reliability of challenged expert testimony.” Id. at 152, 119 S.Ct. 1167. Here, the Court finds that the Daubert criteria suit the instant inquiry well. See Saelee, 162 F.Supp.2d at 1101 (“Factors that ‘fit’ the instant case are whether the theories and techniques of handwriting comparison have been tested, whether they have been subjected to peer review, the known or potential error rate of forensic document examiners, the existence of standards in making comparisons between known writings and questioned documents, and the general acceptance by the forensic evidence community.”). It remains the case that the methodology has not been subject to adequate testing or peer review, that error rates for the task at hand are unacceptably high, and that the field sorely lacks internal controls and standards. Accordingly, this Court is of the view that, as a general matter, a court should be cautious in admitting testimony from a forensic document examiner even under the flexible approach of Kumho Tire — particularly when an examiner offers an opinion on authorship — and should not do so without carefully evaluating whether the examiner has actual expertise in regard to the specific task at hand.

In this case, Carlson’s testimony is far too problematic to be admissible under Rule 702 as technical or otherwise “specialized” expert testimony, even on a Kumho Tire approach, for at least four reasons.

First, as a threshold matter, plaintiff’s counsel sought to bias Carlson from the start. In plaintiff’s counsel’s email to Carlson seeking to retain her, plaintiff’s counsel stated flatly that “[t]he questioned document was a Release that Defendant CIR forged” and that a Rule 26 Report (to this effect) was needed from Carlson by the next day. Nov. 24 Burke Decl., Ex. B, ECF No. 83-2. He continued:

I understand that we are asking a lot, in a short period of time, however, this is what we need, and you’re the expert that we want and feel comfortable working with. You were a rock star for us at our last case! We are asking the same performance here. Our client was really taken advantage of by this Defendant, and it put her, and her young children in danger, and we need your help to right this wrong. If you need anything else, please let us know. We can’t thank you enough.

Id.

In the same vein, one of the “known” signatures that plaintiff’s counsel provided to Carlson was an affidavit signed by plaintiff reciting her claim that the Release is a “fake” which “does not contain my signature.” Carlson Expert Report, Ex. K1. The affidavit concludes with Almeciga’s averment that she is “truly disgusted and deeply disturbed with the manner in which CIR has forged these documents. CIR’s conduct has destroyed my life!” Id.

Plaintiff’s counsel also sent Carlson the letter from plaintiff’s prior (uncalled) expert stating that the Release was forged, see Nov. 24 Burke Decl., Ex. B (though *425Carlson testified that she did not recall reviewing it, see Dec. 4 Transcript, at 74). All of this is contrary to the well-established principle that experts must, to the maximum extent possible, proceed “blindly,” that is, without knowledge of the result sought by the party seeking to retain them. Indeed, even one of the earliest treatises on handwriting analysis, authored in 1894 by William Hagan, stated that “[t]he examiner must depend wholly upon what is seen, leaving out of consideration all suggestions or hints from interested parties [as] it best subserves the conditions of fair examination that the expert should not know the interest which the party employing him to make the investigation has in the result.” William E. Hagan, Disputed Handwriting 82 (1894). Plaintiff’s counsel’s blatant biasing tactics compromised Carlson’s ability to provide a neutral examination, a danger made even greater by the highly subjective nature of Carlson’s methodology.

Second, the subjectivity and vagueness that characterize Carlson’s analysis severely diminish the reliability of her methodology. Carlson describes letters in the questioned signature as “oversized” and “formed incorrectly;” as characterized by “very smooth strokes and curves” as opposed to the “very jerky, angular strokes” of the known signatures; as “more narrow” compared to the “fuller, rounder letters” of the known signatures; as “too tall when compared to the respective letters in the known signatures;” as “very symetrical [sic]” compared to the “wider, distorted loops” of the known signatures; and so on. Carlson Expert Report at 6; Supplemental Expert Report at 7. Based on such observations, Carlson concludes that the Release was not signed by Erica Almeciga. But the critical missing link is why any of these observed differences indicate different authorship at all, let alone in a context where someone has potentially disguised his or her handwriting.

Third, and relatedly, while testimony that accounted for the possibility of disguise and addressed why the “known” signatures were not the product of intentional disguise could at least have potentially assisted the trier of fact, Carlson did not offer such testimony. To the contrary, Carlson confirmed at her deposition that she was “relying on the plaintiff’s representations that [the known signatures] are accurate representations of her signature.” Nov. 24 Burke Decl., Ex. A at 47 (emphasis added), ECF No. 83-1 at 21; see also id. at 60, ECF No. 83-1 at 29. This is a critical flaw in Carlson’s methodology because it assumes away a key issue: whether Almeciga intentionally disguised her handwriting in producing the known samples after this dispute was initiated or whether the known samples accurately represent her actual handwriting. By relying on plaintiff’s counsel’s representation that the “known” signatures were accurate representations of plaintiff’s signature, Carlson effectively preordained the result of her analysis, and her testimony cannot be considered the “product of reliable principles and methods.” Fed.R.Evid. 702. In fact, Carlson’s testimony has been excluded by at least one other court in part on such a basis. See United States v. LeBeau, 2015 WL 4068158, at *8 (D.S.D. June 10, 2015) (“[Carlson’s] analysis and opinions entirely hinge on whether she received an accurate ‘known’ signature from [the defendant].”).

The tainting effect of Carlson’s assumption in this regard may be gleaned from what she infers on the basis of her observation that the “signature on the questioned document is written with great fluidity and a faster speed, unlike the known signatures that display a slower, more methodical and unrefined style of *426 writing.” Carlson Expert Report at 6. To Carlson, who took on faith that the “known” signatures were accurate representations of plaintiff’s handwriting, this discrepancy is evidence that the Release was forged. Yet, at the Daubert hearing, Carlson confirmed that slower, methodical handwriting was “equally consistent ... or maybe even more consistent[ ] with someone trying to fake the known signatures,” Dec. 4 Transcript, at 65, and she observed that the exemplars written by plaintiff in open court were written slowly, id. at 104. While Carlson further testified that she was able to assure herself that plaintiff did not disguise her handwriting because “subconscious traits and ... characteristics” will reveal themselves in disguised writing, this testimony cannot be considered of much value in light of Carlson’s earlier, contrary deposition testimony and the complete absence of any indication in her reports that she was accounting for the possibility of disguise. Id. at 78.

Fourth, also diminishing Carlson’s credibility are a number of striking contradictions between her Report and her in-court testimony. Thus, while Carlson purported to apply the ACE-V method in her expert report, see Carlson Expert Report at 6, she admitted at the Daubert hearing that she did not have time to obtain a verification of her opinion in this case and that her report was inaccurate in this respect, see Dec. 4 Transcript, at 70-71, 76. Virtually by definition, then, Carlson failed to “reliably appl[y] the principles and methods” in question “to the facts of this case.” Fed.R.Evid. 702(d); see also United States v. McDaniels, 2014 WL 2609693, at *5 (E.D.Pa. June 11, 2014) (disqualifying handwriting expert who purported to apply ACE-V method but who failed to provide evidence that she had actually done so).14 Moreover, in her initial expert report, Carlson stated that the signature on the Release was “made to resemble” plaintiff’s. See Carlson Expert Report at 6. But at the Daubert hearing, Carlson took the opposite position. See Dec. 4 Transcript, at 63 (agreeing that the signature on the Release and the known signatures “weren’t even close,” that the signature on the Release “was not like an attempted forgery,” and that the signatures being compared were “very different”). Confirming this reversal, in her Supplemental Expert Report, Carlson describes the signature on the Release as “not made to resemble Erica Almeciga’s signature” and as “remarkably dissimilar [to the In-Court Signatures], indicating forgery/different authorship.” Supplemental Expert Report at 6 (emphasis added).

Several courts that have found themselves dubious of the reliability of forensic document examination have adopted a compromise approach of admitting a handwriting expert’s testimony as to similarities and differences between writings, while precluding any opinion as to authorship. See, e.g., Rutherford, 104 F.Supp.2d at 1192-94. That Solomonic solution might be justified in some circumstances, but it cannot be here where the Court finds the proffered expert’s methodology fundamentally unreliable and critically flawed in so many respects. Such testimony would be more likely to obfuscate the issues in this case than to “help the trier of fact to understand the evidence or to determine a fact in issue.” Fed.R.Evid. 702(a). It would be an abdication of this Court’s gatekeeping role under Rule 702 to admit Carlson’s testimony in light of its deficiencies and unreliability. According*427ly, Carlson’s testimony must be excluded in its entirety.

B. The Rule 11 Motion

Having determined that Ms. Carlson’s expert opinion is inadmissible under Rule 702, the Court turns to the merits of defendant’s Rule 11 motion.

Rule 11 of the Federal Rules of Civil Procedure requires attorneys filing papers with the court to certify “that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances ... (3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery.” Fed.R.Civ.P. 11(b)(3). If “the court determines that Rule 11(b) has been violated, the court may impose an appropriate sanction on any attorney, law firm, or party that violated the rule or is responsible for the violation.” Fed.R.Civ.P. 11(c)(1). “[T]he standard for triggering the award of fees under Rule 11 is objective unreasonableness.” Margo v. Weiss, 213 F.3d 55, 65 (2d Cir.2000). In addition, Rule 11 “sanctions may not be imposed unless a particular [factual] allegation is utterly lacking in support.” O’Brien v. Alexander, 101 F.3d 1479, 1489 (2d Cir.1996). Indeed, “Rule 11 sanctions are not warranted where the ‘evidentiary support is merely weak and the claim is unlikely to prevail.’” Mealus v. Nirvana Spring Water N.Y. Inc., 2015 WL 4546023, at *6 (N.D.N.Y. July 28, 2015) (quoting Scientific Components Corp. v. Sirenza Microdevices, Inc., 2007 WL 1026411, at *2 (E.D.N.Y. Mar. 30, 2007)). And “[e]ven if the district court concludes that the assertion of a given claim violates Rule 11 ... [t]he decision whether to impose a sanction for a Rule 11(b) violation is ... committed to the district court’s discretion.” Perez v. Posse Comitatus, 373 F.3d 321, 325 (2d Cir.2004).

Separate and apart from Rule 11, a court has the inherent power to impose sanctions on a party for perpetrating a fraud on the Court. Such sanctions “are warranted if it is ‘established by clear and convincing evidence that [a party] has sentiently set in motion some unconscionable scheme calculated to interfere with the judicial system’s ability impartially to adjudicate’ the action.” New York Credit & Fin. Mgmt. Grp. v. Parson Ctr. Pharmacy, Inc., 432 Fed.Appx. 25, 25 (2d Cir.2011) (quoting Scholastic, Inc. v. Stouffer, 221 F.Supp.2d 425, 439 (S.D.N.Y.2002)) (internal quotation marks omitted). “Because of their very potency, inherent powers must be exercised with restraint and discretion.” Chambers v. NASCO, Inc., 501 U.S. 32, 44, 111 S.Ct. 2123, 115 L.Ed.2d 27 (1991). Accordingly, “as a general matter, a court should not impose sanctions on a party or attorney pursuant to its inherent authority unless it finds, by clear and convincing evidence, that the party or attorney knowingly submitted a materially false or misleading pleading, or knowingly failed to correct false statements, as part of a deliberate and unconscionable scheme to interfere with the Court’s ability to adjudicate the case fairly.” Braun ex rel. Advanced Battery Techs., Inc. v. Zhiguo Fu, 2015 WL 4389893, at *17 (S.D.N.Y. July 10, 2015); see also McMunn v. Mem’l Sloan-Kettering Cancer Ctr., 191 F.Supp.2d 440, 445 (S.D.N.Y.2002) (“the essence of a fraud upon the Court” is “when a party lies to the court and his adversary intentionally, repeatedly, and about issues that are central to the truth-finding process”).

CIR asserts that sanctions are warranted because the totality of the evidence demonstrates that plaintiff fabricated the key factual allegations underlying her lawsuit — to wit, that defendants promised to conceal her identity in published footage of her August 2012 interview with them and that she did not sign the Release in connection with that interview.

At the evidentiary hearing, defendants Livesey and Hooper both testified that plaintiff never expressed interest in having her identity concealed at the time of the interview, that they never promised to conceal plaintiff’s identity, and that plaintiff signed the Release in their presence. See Dec. 4 Transcript, at 6-10 (Hooper), 31-33 (Livesey). The key factual evidence offered in support of plaintiff’s claim is thus her own testimony; but, for the following reasons, plaintiff is not a remotely credible witness and her allegations collapse under scrutiny.

As an initial matter, plaintiff’s version of events is undermined by her own contemporaneous conduct in connection with the interview. On July 17, 2013, a day after the CIR Report was published, Livesey and Hooper each emailed plaintiff a link to an abridged version of the CIR Report available on YouTube. See Sept. 14 Burke Decl. ¶ 5; see id., Exs. 5, 6. This version of the CIR Report included footage from plaintiff’s interview, with plaintiff’s name, face, and relationship to Rosalio Reta revealed. Plaintiff responded to these emails separately the next day, asking both Livesey and Hooper to call her (and advising Livesey that it was “important”). Id., Ex. 5. Hooper also sent plaintiff an email on July 17, 2013 in which he “apologize[d] for the misspelling of [plaintiff’s] name” in the Spanish version of the Report. Id., Ex. 7. Plaintiff responded to this email on July 21, 2013 asking Hooper to call her regarding a question she had. Id. Plaintiff was thus plainly aware in July 2013 of the fact that CIR had revealed her identity in connection with the CIR Report. She has never contended otherwise.

Yet, the evidence shows that plaintiff did not raise concerns about the revelation of her identity until almost a year later, in June 2014. See Dec. 4 Transcript, at 35-36 (defendant Livesey testifying that plaintiff first raised concerns in June 2014); id. at 134-35 (Stephen Talbot testifying as to same). Plaintiff’s substantial delay in raising any concerns about the revelation of her identity casts significant doubt on her allegation that defendants promised to conceal her identity, particularly given the severe harm that the breach of that promise has allegedly caused her. See Am. Compl. ¶¶ 31-32.

Even more damning for plaintiff’s version of events, on July 22, 2013 a link to the YouTube video of the CIR Report was posted to the Twitter account of ERYCA LEE (@eryca_reta), with the following description: “new interview of myself and my husband Rosalio Reta.” Sept. 14 Burke Decl., Ex. 8. Plaintiff has disclaimed any connection to the Twitter account, and the name of the Twitter account in question was subsequently changed from “ERYCA LEE” to “hacked.” Sept. 14 Burke Decl. ¶¶ 6-9. But plaintiff has not offered any credible explanation as to why anyone would impersonate her on Twitter — let alone post a link to the CIR Report — and the Court finds plaintiff’s claim that she did not post the tweet to be dubious. Plaintiff’s apparent promotion of the CIR Report on social media is virtually irreconcilable with her claim that her participation in the interview was conditioned on her identity being concealed.

Moreover, the signature on the Release very closely matches that of various court filings that CIR located in Georgia and Massachusetts state courts that were purportedly signed by plaintiff, but which she denies signing. The signature on the Release, as noted, appears as follows:

Sept. 14 Burke Decl., Ex. 2.

The signatures on certain Georgia state court documents, which involved a petition for a protective order filed by an Erica Almeciga against a Rosendo Gutierrez, appear (to take an illustrative few) as follows:

See Sept. 14 Burke Decl., Ex. 15 at 1.

See id. at 3.

And the signatures on the critical documents filed in Massachusetts state court in 2007 and 2008, in a litigation in which Ms. Almeciga was a defendant, appear as follows:


See Def.’s Ex. 1 at 19, ECF No. 95-1 at 20.

Def.’s Ex. 1 at 20, ECF No. 95-1 at 21.

Def.’s Ex. 1 at 21, ECF No. 95-1 at 22.

The strong similarities to the naked eye between the signature on the allegedly forged Release, on the one hand, and the signatures on the state court documents, on the other, are significant because plaintiff has represented that she does not sign her name in a manner consistent with the distinctive signature on the Release. Yet plaintiff admitted at the evidentiary hearing that these filings contain at least partially accurate information regarding her address, children, and familial circumstances, and there is no contention that they were filed by some other person actually named Erica Almeciga. See Transcript dated Dec. 22, 2015 (“Dec. 22 Transcript”), at 15, 29-30, 37, ECF No. 90. While still disclaiming these are her signatures, plaintiff offers no coherent explanation for why or how someone would impersonate her in domestic matters in state courts. Nor does plaintiff explain (1) how the alleged impersonator could have known that she was intending to move back to Massachusetts imminently (as is stated in one of the Georgia documents) when, according to her own testimony, she told no one she was leaving Georgia, see Dec. 22 Transcript, at 37, or (2) how the hypothetical impersonator would have known to list a neighbor’s cell phone number that plaintiff occasionally gave out as her own phone number, see Dec. 22 Transcript, at 30-34. Plaintiff’s contentions that she did not author the relevant signatures on the Georgia and Massachusetts state court documents are thus not credible, which casts significant doubt on her contention that the Release was forged.

Plaintiff was also caught in several apparent lies at the evidentiary hearing, which further reinforces this Court’s finding that plaintiff is a generally incredible and unreliable witness. For example, one of the documents filed in Georgia state court (which, to reiterate, plaintiff denied filing) is a form titled “Petitioner’s Identifying Information,” which lists the name, date of birth, sex, and race of each of the protected parties (in this case, plaintiff and her children), in relevant part as follows:


Sept. 14 Burke Decl., Ex. 15, ECF No. 58-15 at 11.

On cross-examination, the following colloquy took place between plaintiff and defense counsel:

Q. Can you identify what the figure is in the box that has the heading ‘Sex’?
A. It looks like a 7 to me.
Q. So that’s a 7?
A. That’s what I would assume. I—
Q. Okay.

Dec. 22 Transcript, at 35.

Plaintiff was then confronted with the list of capital and lower-case letters she had submitted to her handwriting expert for purposes of handwriting analysis, which was included as an exhibit to the handwriting expert’s report. There, plaintiff wrote out her capital “F” as follows:

Carlson Expert Report, Ex. K2.

The colloquy continued:

Q. Do you recognize this?
A. Yes, I do.
Q. What is this?
A. That’s — I wrote my letters out.
Q. And as you go down to F?
A. Yes.
Q. Does that F look like a 7 to you?
A. No, it looks like an F.
Q. So, and I am looking on the left-hand side E, between E and G you have — is that how you write an F?
A. Yes. My F swoops down a lot more than what was written on this paper, this looks like a no. 7. My F swoops down and goes up.

Dec. 22 Transcript, at 36.

Despite plaintiff’s effort to distinguish the two “F”s when confronted with the inconvenient fact that she had already represented to her expert and the Court that she writes capital “F”s in the unusual manner that appears in the Georgia state court documents, any layperson could tell that the “F” on the Georgia document and the “F” provided by plaintiff as a sample of her handwriting are highly similar and highly distinctive. The fact that plaintiff testified under questioning that the “F” on the Georgia document appeared to be a “7” — even when she had the context that the figure appeared under the heading for “Sex” and even when she, of course, knew that that is how she herself writes a capital “F” — confirms plaintiff’s willingness to testify untruthfully both in general and, critically, with respect to her handwriting.

Plaintiff also testified that she has never been pregnant with the child of any man other than the fathers of her three children, which was relevant because both the Georgia and Massachusetts state court documents refer to other pregnancies. See Dec. 22 Transcript, at 46. That testimony was contradicted by her boyfriend (who is not the father of any of plaintiff’s children, see id. at 34-35), who, having been called to the stand by plaintiff herself (for other reasons), testified that plaintiff had been pregnant with his child in 2014 but had miscarried. Id. at 68-69.

Moreover, in contrast to defendants, who had no discernible motive to breach a promise to plaintiff to conceal her identity, there is evidence in the record indicating that plaintiff had substantial motive to fabricate her allegations. Defendant Livesey, whom the Court finds credible, testified that when plaintiff contacted him in June 2014 to raise concerns about the revelation of her identity in the CIR Report, plaintiff explained that her association with Reta was being used as “ammunition” in a custody battle over one of her children. Dec. 4 Transcript, at 36. At the evidentiary hearing, plaintiff corroborated that her association with Reta has had an “adverse effect” on her custody proceedings with respect both to her own children and her current boyfriend’s children. Id. at 116. She also testified at deposition that she has “been labeled as dangerous when it comes to being around [her] children,” as well as her boyfriend’s son, because of her association with Reta and CIR’s publication of it. Jan. 21 Burke Decl., Ex. 3 at 200, ECF No. 95-3 at 8. It thus appears likely that plaintiff filed this lawsuit — which seeks, inter alia, to impose a constructive trust “over all film footage and material shot and obtained by CIR in their Report,” Am. Compl. ¶ 117(f) — in an effort to unwind a decision she regrets and to distance herself from Reta.

Given the Court’s finding that plaintiff is not remotely credible and the Court’s determination that her handwriting expert’s testimony does not pass muster under Daubert and Kumho Tire, plaintiff is left with virtually no admissible evidence in support of her version of events in the face of a mountain of contrary evidence. Plaintiff’s boyfriend, Isaac Duarte-Morillo, submitted an affidavit in which he averred that, “[a]fter looking at the signatures that people are claiming belong to Erica, I am 100% confident in saying that they are not her’s [sic]. I have seen her sign her name thousand’s [sic] of time’s [sic].” Aff. of Kevin A. Landau dated Sept. 25, 2015 (“Landau Aff. dated Sept. 25, 2015”), Ex. 11, ¶ 18, ECF No. 72. Because of revelations at the evidentiary hearing, however, Duarte-Morillo’s affidavit and accompanying testimony are of little to no value. At the evidentiary hearing, after Duarte-Morillo testified that the affidavit reflected his “wording” and that he gave this wording to “the attorney,” plaintiff’s counsel, to his credit, stated that this was “not accurate.” Dec. 22 Transcript, at 64. Upon further questioning, Duarte-Morillo testified that he gave the information to plaintiff (and not her attorney) and that plaintiff (and not her attorney) typed the affidavit. See id. at 64-65. Duarte-Morillo then clarified that, in fact, plaintiff showed him the affidavit already typed and asked if the information contained therein was accurate. See id. at 66.

Furthermore, whatever the provenance of the affidavit, Duarte-Morillo failed to correctly identify six different signatures appearing on various of the Massachusetts state court documents as plaintiff’s, despite plaintiff having confirmed that these were, in fact, her signatures. See id. at 57-58, 70-75. Thus, Duarte-Morillo’s opinion that the Release does not contain plaintiff’s signature — a matter of which he has no firsthand knowledge — cannot be credited.

Plaintiff also attempted to rely on an unsworn affidavit submitted by Rosalio Reta in which Reta purportedly averred that he “agreed to let Mr. Livesey interview my fiancée Ms. Almeciga on one condition (her identity not disclosed [sic]) for fear of putting a target on her head.” Landau Aff. dated Sept. 25, 2015, Ex. 10, ECF No. 71. Reta further purportedly averred that “Mr. Livesey accepted an [sic] drew out a contract stating that the interview was to be conducted in a secure area and have my fiancée [sic] face blurred out for fear of reprisal.” Id. As the Court indicated at the conclusion of the evidentiary hearing, these statements are plainly inadmissible hearsay; and, even if they were somehow admissible, they are largely irrelevant given that Reta had no power to determine the conditions under which plaintiff would or would not submit to an interview and given that Reta did not attach the alleged written contract he entered into with Livesey (which has not been otherwise produced or corroborated through testimony).

Plaintiff also asks the Court to make various inferences in favor of plaintiff’s version of events, none of which withstands scrutiny. First, plaintiff makes much of the fact that the CBC aired an interview (shortly before CIR interviewed plaintiff) in which plaintiff’s face was concealed for her own safety. Plaintiff asks the Court to infer that she would have requested the same of CIR. But in an email dated June 12, 2014, plaintiff asked a CBC employee involved in the CBC story, inter alia, “What made you decide to interview me in shadow? Which I greatly appreciate.” Sept. 14 Burke Decl., Ex. 23. The CBC employee responded, in relevant part, that “we decided that your association with Rosalito [sic] made you vulnerable and that we had an obligation to look out for you as best we could.” Id. In other words, contrary to plaintiff’s contention in her Amended Complaint that the CBC concealed her identity “per [her] demand,” Am. Compl. ¶ 11, this email exchange demonstrates that the CBC made this decision independently and not at plaintiff’s request.

Second, plaintiff points to the fact that defendants concealed the identity of an individual and the face of a second individual in the CIR Report, the first of whom did not sign a release and the second of whom did. Plaintiff insists that this somehow supports her allegation that she reached a similar agreement with defendants that was breached. To the contrary, if anything, these facts indicate that defendants were perfectly willing to conceal an interviewee’s identity when the request was made.

Third, plaintiff contends that the fact that CIR concealed plaintiff’s identity in a different video report that it posted in November 2014 “plainly establishes that there was an understanding between CIR and Plaintiff that her identity would be concealed.” Pl. Erica Almeciga’s Mem. of Law in Opp. to CIR Defs. Mot. for Sanctions at 2, ECF No. 63. But it is hardly surprising that, as a “courtesy to her,” CIR chose to conceal plaintiff’s identity in media content released after plaintiff raised her claims and concerns. Dec. 4 Transcript, at 134-37.

The Court has considered the various other arguments raised and alleged inconsistencies identified by plaintiff and finds them to be without merit.

In sum, in view of all of the evidence adduced through the two-day evidentiary hearing and the copious submissions before the Court, the Court finds by clear and convincing evidence that plaintiff perpetrated a fraud on the Court by pressing critical and serious allegations that she knew to be false. Where a fraud upon the court is shown by clear and convincing evidence, courts consider five factors in fashioning an appropriate sanction: “(i) whether the misconduct was the product of intentional bad faith; (ii) whether and to what extent the misconduct prejudiced the injured party; (iii) whether there is a pattern of misbehavior rather than an isolated instance; (iv) whether and when the misconduct was corrected; and (v) whether further misconduct is likely to occur in the future.” Passlogix, Inc. v. 2FA Tech., LLC, 708 F.Supp.2d 378, 394 (S.D.N.Y.2010). Here, all five factors weigh in favor of imposing sanctions on plaintiff: plaintiff’s misconduct was the product of intentional bad faith; her misconduct prejudiced defendants; the misconduct was part of an extended and troubling pattern of fabrications and denials; the misconduct has not been corrected; and further misconduct would be likely to occur if the case were to proceed.

There would be little point, however, in imposing a monetary sanction on plaintiff given that she testified that she is homeless and given that she has mental health issues and no apparent source of income. See Dec. 22 Transcript, at 23; Marvello v. Bankers Trust Co., 1999 WL 38252, at *2 n. 1 (S.D.N.Y. Jan. 27, 1999) (“Plaintiff is unemployed and appears to be dependent upon public assistance. The imposition of monetary sanctions would therefore be pointless.”). CIR is aware that plaintiff is likely judgment-proof but nevertheless seeks its costs and fees incurred in defending this action, which it represents are in the hundreds of thousands. See Summation Mem. in Support of CIR’s Mot. for Sanctions at 3, ECF No. 94 (“CIR recognizes that it is unlikely to recover its significant fees from Plaintiff given her financial situation.”). In such circumstances, imposing a monetary sanction on plaintiff that she is unable to pay and that could only be enforced by contempt proceedings would be tantamount to the creation of a debtor’s prison — a shameful practice that the Court is not willing to facilitate.

The Court does, however, find that an appropriate sanction for plaintiff’s bad-faith allegations is the dismissal of this action with prejudice, independent of this Court’s granting of CIR’s motion for judgment on the pleadings. To be sure, “dismissal is a harsh sanction to be used only in extreme situations.” McMunn v. Mem’l Sloan-Kettering Cancer Ctr., 191 F.Supp.2d 440, 461 (S.D.N.Y.2002). But “[w]hen faced with a fraud upon the court, such a[] powerful sanction is entirely appropriate.” Id. Indeed, where the misconduct at issue is the knowing fabrication of the critical allegations underlying the complaint that plaintiff must prove in order to recover, it would be pointless to allow the case to proceed. Dismissal is virtually required under such circumstances.

CIR also seeks sanctions against plaintiff’s counsel, whom they describe as a “willing participant” in plaintiff’s fraud on the Court for having “willfully blind[ed] himself to his client’s misrepresentations” and unreasonably continuing to press this case in the face of her collapsing allegations. Mem. in Support of CIR’s Mot. for Sanctions at 4, ECF No. 55. The motion raises the thorny issue of where vigorous advocacy ends and punishable disregard of the facts begins. As the Advisory Committee has cautioned, Rule 11 “is not intended to chill an attorney’s enthusiasm or creativity in pursuing factual or legal theories” and “[t]he court is expected to avoid using the wisdom of hindsight and should test the signer’s conduct by inquiring what was reasonable to believe at the time the pleading, motion, or other paper was submitted.” Fed.R.Civ.P. 11 advisory committee’s note to 1983 amendment. Moreover, courts have held that attorneys are “entitled to rely on the representations of their client[s], without having to assess [their clients’] credibility.” Jeffreys v. Rossi, 275 F.Supp.2d 463, 481 (S.D.N.Y.2003); see also Braun ex rel. Advanced Battery Techs., Inc. v. Zhiguo Fu, 2015 WL 4389893, at *16 (S.D.N.Y. July 10, 2015) (“[A]n attorney who relies on a client’s verification made under the penalty of perjury is not acting in bad faith; indeed, it is unlikely that such reliance would even rise to the level of objective unreasonableness.”); Mar Oil, S.A. v. Morrissey, 982 F.2d 830, 844 (2d Cir. 1993) (“An unfavorable credibility assessment is rarely a sufficient basis for such an award.”). While this Court would not frame that principle in such categorical terms and would not exclude the possibility that a lawyer might be subject to sanctions where he knows to a reasonable certainty that his client is lying and yet persists in pursuing a cause of action premised on such lies, this is not such a case.

Specifically, in this case, where plaintiff’s version of events was corroborated, at least to some degree, by others, and where plaintiff’s counsel had obtained a favorable expert opinion, counsel (barely) satisfied his obligation under Rule 11 to ensure through an “inquiry reasonable under the circumstances” that his client’s “factual contentions have evidentiary support,” Fed.R.Civ.P. 11(b); Servicemaster Co. v. FTR Transport, Inc., 868 F.Supp. 90, 97 (E.D.Pa.1994) (Rule 11 motion denied where two experts supported plaintiff’s view of the facts); Wagner v. Allied Chem. Corp., 623 F.Supp. 1407, 1411-12 (D.Md. 1985) (despite “serious factual weaknesses with several of the claims,” counsel’s pre-filing inquiry, which included consultation with expert, was “within the range of reasonableness” under Rule 11). To be sure, plaintiff’s allegations and various denials were highly dubious in light of the contrary evidence that CIR presented to plaintiff’s counsel, Duarte-Morillo’s affidavit was biased and weak, Reta’s unsworn affidavit was largely irrelevant, and the handwriting expert whose favorable opinion counsel sought to procure was no expert at all. But counsel could not have known what view the Court would take of this evidence (and of the admissibility of the expert report in particular), and it cannot be said that plaintiff’s allegations were “utterly lacking in support” under such circumstances. O’Brien, 101 F.3d at 1489. Counsel’s pursuit of this lawsuit in the face of the mounting evidence indicating his client was lying is certainly questionable and borders on unreasonable, but the Court does not find that it quite meets the high standard that must be satisfied to impose sanctions.

Finally, the Court notes that plaintiff’s counsel has, somewhat improbably, argued in plaintiff’s papers that CIR and defense counsel should be sanctioned for bringing a frivolous Rule 11 motion. Because defense counsel’s motion was in fact meritorious, that baseless entreaty is moot.

CONCLUSION

In sum, for the foregoing reasons, the Court in its Order dated March 31, 2016, granted defendant CIR’s Rule 12(c) motion for judgment on the pleadings and dismissed the Amended Complaint with prejudice as against all defendants. Regarding CIR’s Rule 11 motion and accompanying Daubert motion, the Court excludes the reports and testimony of plaintiffs handwriting expert in their entirety for failing to meet the standards of Rule 702 under both Daubert and Kumho Tire. CIR’s Rule 11 motion is granted to the extent it seeks dismissal of the action for plaintiff’s perpetration of a fraud upon the Court, but denied to the extent it seeks monetary sanctions against either plaintiff or her counsel.

The Clerk of the Court is hereby directed to enter final judgment dismissing plaintiff’s Amended Complaint with prejudice and to close this case.

SO ORDERED.

Excerpt from Advisory Committee Notes on Rule 702

This excerpt addresses the relationship between Daubert and Kumho.

As stated earlier, the amendment does not distinguish between scientific and other forms of expert testimony. The trial court's gatekeeping function applies to testimony by any expert. See Kumho Tire Co. v. Carmichael, 119 S.Ct. 1167, 1171 (1999) (“We conclude that Daubert's general holding—setting forth the trial judge's general ‘gatekeeping’ obligation—applies not only to testimony based on ‘scientific’ knowledge, but also to testimony based on ‘technical’ and ‘other specialized’ knowledge.”). While the relevant factors for determining reliability will vary from expertise to expertise, the amendment rejects the premise that an expert's testimony should be treated more permissively simply because it is outside the realm of science. An opinion from an expert who is not a scientist should receive the same degree of scrutiny for reliability as an opinion from an expert who purports to be a scientist. See Watkins v. Telsmith, Inc., 121 F.3d 984, 991 (5th Cir. 1997) (“[I]t seems exactly backwards that experts who purport to rely on general engineering principles and practical experience might escape screening by the district court simply by stating that their conclusions were not reached by any particular method or technique.”). Some types of expert testimony will be more objectively verifiable, and subject to the expectations of falsifiability, peer review, and publication, than others. Some types of expert testimony will not rely on anything like a scientific method, and so will have to be evaluated by reference to other standard principles attendant to the particular area of expertise. The trial judge in all cases of proffered expert testimony must find that it is properly grounded, well-reasoned, and not speculative before it can be admitted. The expert's testimony must be grounded in an accepted body of learning or experience in the expert's field, and the expert must explain how the conclusion is so grounded. 
See, e.g., American College of Trial Lawyers, Standards and Procedures for Determining the Admissibility of Expert Testimony after Daubert, 157 F.R.D. 571, 579 (1994) (“[W]hether the testimony concerns economic principles, accounting standards, property valuation or other non-scientific subjects, it should be evaluated by reference to the ‘knowledge and experience’ of that particular field.”).

The amendment requires that the testimony must be the product of reliable principles and methods that are reliably applied to the facts of the case. While the terms “principles” and “methods” may convey a certain impression when applied to scientific knowledge, they remain relevant when applied to testimony based on technical or other specialized knowledge. For example, when a law enforcement agent testifies regarding the use of code words in a drug transaction, the principle used by the agent is that participants in such transactions regularly use code words to conceal the nature of their activities. The method used by the agent is the application of extensive experience to analyze the meaning of the conversations. So long as the principles and methods are reliable and applied reliably to the facts of the case, this type of testimony should be admitted.

Nothing in this amendment is intended to suggest that experience alone—or experience in conjunction with other knowledge, skill, training or education—may not provide a sufficient foundation for expert testimony. To the contrary, the text of Rule 702 expressly contemplates that an expert may be qualified on the basis of experience. In certain fields, experience is the predominant, if not sole, basis for a great deal of reliable expert testimony. See, e.g., United States v. Jones, 107 F.3d 1147 (6th Cir. 1997) (no abuse of discretion in admitting the testimony of a handwriting examiner who had years of practical experience and extensive training, and who explained his methodology in detail); Tassin v. Sears Roebuck, 946 F.Supp. 1241, 1248 (M.D.La. 1996) (design engineer's testimony can be admissible when the expert's opinions “are based on facts, a reasonable investigation, and traditional technical/mechanical expertise, and he provides a reasonable link between the information and procedures he uses and the conclusions he reaches”). See also Kumho Tire Co. v. Carmichael, 119 S.Ct. 1167, 1178 (1999) (stating that “no one denies that an expert might draw a conclusion from a set of observations based on extensive and specialized experience.”).

If the witness is relying solely or primarily on experience, then the witness must explain how that experience leads to the conclusion reached, why that experience is a sufficient basis for the opinion, and how that experience is reliably applied to the facts. The trial court's gatekeeping function requires more than simply “taking the expert's word for it.” See Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311, 1319 (9th Cir. 1995) (“We've been presented with only the experts’ qualifications, their conclusions and their assurances of reliability. Under Daubert, that's not enough.”). The more subjective and controversial the expert's inquiry, the more likely the testimony should be excluded as unreliable. See O'Conner v. Commonwealth Edison Co., 13 F.3d 1090 (7th Cir. 1994) (expert testimony based on a completely subjective methodology held properly excluded). See also Kumho Tire Co. v. Carmichael, 119 S.Ct. 1167, 1176 (1999) (“[I]t will at times be useful to ask even of a witness whose expertise is based purely on experience, say, a perfume tester able to distinguish among 140 odors at a sniff, whether his preparation is of a kind that others in the field would recognize as acceptable.”).

Asking the Gatekeepers: A National Survey of Judges on Judging Expert Evidence in a Post-Daubert World

Sophia I. Gatowski et al., 25 Law & Hum. Behav. 43 (2001)

This document is posted on Moodle under "Class 5."

This study of 400 judges demonstrates how poorly many judges understand at least four of the five Daubert factors. I highlighted the assigned text (which you will find on pages 438, 444-45, 447-48, 452-53, & 455).

Writing Reflection #6

Please go to our Moodle Page and under "Class 6" you will find the prompt and submission folder for Writing Reflection #6.

2.5 Class 7: Frye and Daubert & the culture of science

Missouri v. Goodwin-Bey (2016)

This case (available at this link) demonstrates what can happen when a judge considers the scientific method. It is an unusual case for many reasons, not least of which is the inclusion of a visual aid.


Characteristics of science

Read any one of these three short lists of the characteristics of science:

Ten Characteristics of Scientific Research or Knowledge 

Five Characteristics of the Scientific Method 

Nine Characteristics of Scientific Research 

Writing Reflection #7

Please go to our Moodle Page and under "Class 7" you will find the prompt and submission folder for Writing Reflection #7.

2.6 Class 8: Qualifying experts & expert testimony

Chapter 62 in Learning Evidence (Merritt and Simmons)

This chapter covers the process for qualifying an expert witness.

Excerpt from EVIDENTIARY FOUNDATIONS, by Edward J. Imwinkelried (2012)

This section, § 9.03, provides an overview of the rules applicable to expert testimony and a sample examination. It is a pdf, so it is posted on Moodle.

Excerpt from transcript in criminal case before The Honorable T.S. Ellis, III.

In this excerpt you can see a real-life example of the process of qualifying the prosecution’s firearm expert through direct examination and oral motion. (This is a pdf, so it is posted on Moodle.)

 

You can access more of the transcript here if you are interested.

Excerpt from transcript in trial of Jessie Misskelley, one of the West Memphis 3

Transcript with Dr. Richard Ofshe

This excerpt from the transcript shows the defense’s effort to qualify an expert in false confessions. You can see more of the transcript at this link, and you can learn more about the West Memphis 3 case here.

The Jesse Misskelley Trial (January 26 - February 4, 1994): Richard Ofshe

Witness for the Defense

February 2, 1994

DOCTOR RICHARD OFSHE having been first duly sworn to speak the truth, the whole truth, and nothing but the truth, then testified as follows:

DIRECT EXAMINATION BY MR. STIDHAM:

Q: Please state your name for the Court.

A: Richard Ofshe.

Q: And what do you do for a living, Mr. Ofshe?

A: I'm a professor of sociology at the University of California at Berkeley.

Q: Okay. Can you tell the Court and the jury a little about your education and background?

A: I received a Bachelor's Degree in psychology from Queens College of the City University of New York, and then a Master's Degree in sociology from the same institution, and then a Ph.D. in the sociology department of Stanford University with a specialty in a sub-field called social psychology.

Q: Would you explain to the Court and the Jury what social psychology is?

A: Social psychology is a specialty area that is found both within psychology and within sociology. It has to do principally -- and particularly the part that I specialize in -- it has to do with influence, decision making, belief, and attitude change, techniques of pressure and coercion, and I specialize particularly in extraordinary techniques of control and influence.

Q: Do you have any experience or training in the area of influence and more specifically in the area of influence with regard to police interrogation?

A: All my work for the last thirty years or more has been on the subject of influence starting out doing work in traditional problems -- excuse me -- traditional problems in social psychology having to do with decision making, group influence, interpersonal influence. Then starting about the early part of the nineteen seventies I became interested in complex real world systems of influence. That is to say not laboratory research, but rather studying on-going very complicated influence environments and particularly those kinds of environments that have massive effects on individuals. So initially I did a lot of work for about ten or twelve years studying what are called cult groups. That is to say groups that are very strongly organized, that exert enormous pressure on individuals and that can lead individuals to change the way in which they see the world and be willing to take part in activities that they otherwise would ordinarily not take part in. During -- and I specialized in studying cult groups that generate violence. During that period of time I did a great deal of work often involving the analysis of groups that led their followers to commit murders. I did a lot of work for prosecutorial agencies, analyzing and prosecuting such crimes. Then my interest in influence continued and I began to become interested in the study of police interrogation. Ah, police interrogation is the root of -- out of which various studied round the world procedures of influence groups -- particularly techniques that have to do with coercing confessions from individuals and generally manipulating them in extraordinary ways. And that work began in the late nineteen eighties and since then I've done a great deal of work and written about police interrogation tactics, in particular police interrogations that can and do lead to coerced and/or false confessions.

Q: Has any of your work been published, Doctor Ofshe?

A: Yes. I've published four or five books, and thirty or more articles in scientific journals, and presented papers at dozens of conferences over the years. The work on all of these subjects have been published.

Q: Are you familiar with a Doctor Gudjonsson?

A: Yes, I am.

Q: And how are you familiar with his work?

A: He's one of the other people who is a specialist in techniques of interrogation and influencing police interrogations.

MR. DAVIS: Your Honor, at this time if I may interrupt, as I understood it he is qualifying him as -- or in the process of qualifying him as an expert. They're moving on to another area and I'd ask that I'd have an opportunity to voir dire the witness regarding his special qualifications.

THE COURT: Well---

MR. STIDHAM: Your Honor, I asked him about what has been published.

THE COURT: You're asking about somebody else's work.

MR. STIDHAM: Your Honor, I was---

THE COURT: Right now if you're qualifying him, then -- then go through his qualification, his vitae, and then pass him, and then if they've got any questions, then I'm going to allow them to voir dire.

MR. STIDHAM: I think my next question will clear this up, your Honor.

THE COURT: All right.

BY MR. STIDHAM:

Q: Are you mentioned in Doctor Gudjonsson's book, "The Psychology of Interrogations, Confessions, and Testimony"?

A: My work is discussed in that book, yes.

Q: Did you contribute to the book in any form or fashion?

A: Well, he asked me to review certain chapters of the book and I reviewed them, and made comments, and then he thanked me in the introduction for doing that, and then he also discusses my work in the substance of the book.

Q: I also understand, Doctor Ofshe, that you've won a Pulitzer Prize?

A: I shared in the nineteen seventy-nine Pulitzer Prize for public service, yes.

Q: And what was that for-- I mean, what was the subject of your---

A: That was for work I did with the publisher of a small weekly newspaper in West Marin County, California. We did an exposé of a group called Synanon, which started out as a drug rehabilitation organization and turned into a violent cult group that was assaulting and attempting to murder people in the immediate area. It became quite a major subject and that year we were lucky enough to be awarded a Pulitzer Prize.

Q: Are you a member of any professional associations?

A: Yes. I'm a member of the American Psychological Association, the American Sociological Association, the American Psychological Society, the Sociological Practice Association, and the Pacific Sociological Association.

Q: Have you ever served as a consultant to any law enforcement agencies?

A: Oh, yes, I have. Starting in nineteen seventy-nine I served as consultant to the Marin County Sheriff's Department and then subsequent to that the office of the Attorney General of the State of California, the office of the Attorney General of the State of Arizona, the United States Department of Justice -- both the tax division and the criminal division -- the Prosecuting Attorney of Jefferson County, West Virginia, the Los Angeles District Attorney's office, the Internal -- that's not a law enforcement agency, I guess. The United States Attorney's office in West Virginia, the Thurston County, Washington, prosecutor's office, currently the State's Attorney's office in Fort Lauderdale, Florida, and again for the United States Attorney's office in West Virginia.

Q: Have you ever testified on behalf of the prosecution in a criminal case?

A: I don't believe -- I'll have to look at the list of cases in which I've testified.

Q: Well, I'll go on to the next question. Do you lecture to groups regarding the influence of police tactics in false confessions?

A: Yes, I do. I'm -- in fact I've been asked to -- in May of this year to -- at the request of the Supreme Court of the State of Florida -- been asked to address for a half day a judicial conference in Florida on the subject of false confessions.

Q: Have you been involved in both civil and criminal cases dealing with false confessions and confessions in general?

A: Yes, I have.

Q: How many -- excuse me -- how many cases dealing with confessions have you been involved in?

A: Confessions specifically thirteen -- I've testified thirteen separate times. I've been involved in many more cases. Much of the work that I do is consulting work that doesn't necessarily culminate in testimony. That's why I wasn't certain whether I had actually testified in this criminal matter. I'm scheduled to the week after next, but I can't at this moment think of another example where I already have.

Q: Okay. Have you testified in court with regard to any confessions taken on the defense side? ....

A: Yes. Most -- most of the confession cases in which I've testified have been cases involving coerced or coerced false confessions and, therefore, my testimony has been principally for the defense in those cases.

Q: How many times have you been qualified as an expert in the area of influence and police interrogation?

A: Twenty-five times.

Q: Twenty-five times? In both state and federal courts?

A: Yes, sir.

MR. DAVIS: Your Honor, if I might -- the question was: Qualified as an expert in the area of influence and police interrogations -- can we break that down? I didn't hear anything in the background as far as police interrogation.

THE COURT: Can you break it down?

BY MR. STIDHAM:

Q: Have you been qualified as an expert by any court in the area of police interrogation tactics and influence on individuals during police interrogations?

A: Yes. On influence in police interrogation in particular I've qualified and testified thirteen times. On influence in general I've been qualified and testified an additional twelve times making a total of twenty-five.

Q: Okay. Have these been in both state and federal courts?

A: Yes, they have.

Q: Have you ever testified in the State of Arkansas?

A: Yes, I have.

Q: And where was that at?

A: In Fort Smith in federal court in a case brought by a young man and his family against a person named Tony Alamo who ran a cult group located in Fort Smith, and the case had to do with the beating of this child.

MR. STIDHAM: Your Honor, at this time we would ask that the witness be qualified as an expert in the area of police interrogation tactics and influence of people involved in police interrogations.

MR. DAVIS: Whether or not he's qualified as an expert is what we would like to address in voir dire.

THE COURT: All right.

VOIR DIRE

BY MR. DAVIS:

Q: Doctor Ofshe, you are a social science professor at the University of California at Berkeley. Is that correct?

A: I'm a professor in the sociology department.

Q: Okay. And what-- so you teach sociology. Is that right?

A: I teach specifically courses in social psychology and courses on extreme techniques of influence including police interrogation.

Q: You are not a licensed psychologist, correct?

A: Ah, that's correct.

Q: Okay. You can't practice psychology in California or any other state, can you?

A: Ah -- no, I don't practice clinical psychology which is -- what is generally licensed.

Q: Okay. And would it be a fair statement to say that psychology is different from social -- sociology in that sociology deals with group activities?

A: No, that's a very general and unhelpful definition. Social psychology which is an area that I work in is an area that's represented in both disciplines and I'm a member of the professional association of both disciplines. Both disciplines maintain sub-sections called social psychology and social psychology deals with a very special set of topics that has to do with influence on individuals, decision making, attitude change, interpersonal and group pressure.

Q: Are you a licensed social psychologist?

A: It's not necessary to be licensed to be a social psychologist because I don't treat anyone.

Q: Is there such a thing as a licensed social psychologist?

A: No.

Q: Okay. In other words---

A: Because it does not engage in the treatment of people it's generally not licensed ....

Q: Okay. So there are sociologists and there are people that hold themselves out to be social psychologists, correct?

A: People who are members of the requisite professional associations and members of the sub-sections that are specialties in social psychology and I'm a member of both and in each case as a social psychologist.

Q: How many states and how many courts have refused to accept you as an expert in this work?

A: No state has ever refused to accept me as an expert.

Q: How many courts?

A: There's one case in which a line of testimony to which my testimony would have been foundational was rejected. It has to do with whether or not a certain theory---

Q: Where was that?

A: That was in California.

Q: Okay.

A: That had to do with whether or not a certain line of testimony was appropriate for the insanity defense and in that case the judge barred that line of testimony.

Q: As far as -- what is it that you studied in relationship to this case?

A: In this case in particular I have studied the following materials: The police reports and notes of Detectives Gitchell, Ridge, and Durham, the transcript of the first tape recorded interrogation of Jessie Misskelley, the transcript of the second tape recorded interrogation of Jessie Misskelley. I've listened to the tape recordings of both interrogations. I studied the transcript and the video recording of an interview of Buddy Lucas. I've studied the treatment records of Jessie Misskelley at East Arkansas Mental Health Center. The transcript of a hearing in which Detective Ridge sought his search warrants from Judge Rainey. I attended a hearing in this case on January the thirteenth, nineteen ninety-four at which I heard and saw the testimony of Detectives Allen, Durham, Ridge, and Gitchell with respect to what occurred during the interrogation. And I subsequently reviewed the transcripts of that hearing and then I interviewed Jessie Misskelley on December the fifteenth, nineteen ninety-three, and have subsequently carefully reviewed, and studied, and analyzed the transcript of that interview.

Q: How long was that interview?

A: Three hours, more or less. It may have been a bit more. It may have been a bit less. I don't have the -- I don't have that -- it might be helpful. It worked out to an eighty-seven page transcript.

Q: You talked with Jessie Misskelley for three hours. Is that right?

A: No. I talked with Jessie Misskelley for the length of time it took to produce this transcript here.

Q: And you reviewed testimony of the police officers?

A: I reviewed their reports. I reviewed the actual transcript of the one part of the interrogation that -- or the two parts of the interrogation that were tape recorded, I studied and analyzed their notes, studied and analyzed their testimony.

Q: And what scientific basis is it that you intend to give an opinion on?

A: Well, the first thing that's necessary is to try to get a clear picture of the history of the interrogation of exactly what happened step-by-step. Subsequently, that---

Q: If you could---

A: Yes.

Q: What scientific basis and what scientific tests are you basing your opinion on that you -- that the defense is proposing that you testify to?

A: It is based on the literature on the subject of influence, and particularly what is known about techniques of influence, the conditions that lead up to coerced confessions. The analysis that I will do on this involves specifying the pattern, what happened during the interrogation---

Q: --What scientific basis is it based on? Not what your procedure is, but what scientific basis is your opinion grounded in?

A: The opinion is grounded in the research on what is known about the conditions that lead up to coerced confessions. There are patterns of conduct that are known to lead to coerced confessions. There are consequences that follow from those patterns that are generally used to identify a coerced confession. There are criteria that are used to judge whether or not a confession is coerced or is not coerced, and whether it is a confession that appears to be the product of influence or appears to be the product of memory.

Q: Again -- is that -- is that based on empirical studies?

A: Oh, yes.

Q: And those empirical studies would have to determine which confessions were coerced and which were not coerced in order for those studies to have any validity, correct?

A: Well, there are studies of confessions---

Q: Would you answer my question, please, sir? You would -- you would have to determine -- someone would have to determine was a confession coerced or was it voluntary before those studies would have any validity?

A: The studies of confessions are often broken down into---

MR. DAVIS: Your Honor, could you ask him---

THE COURT: Answer yes or no and then -- then I'm going to allow you to explain your answer.

THE WITNESS: Okay.

THE COURT: If you can, answer yes or no. If you can't, just say, "It's not capable of being answered yes or no."

THE WITNESS: It's not capable of being answered yes or no. I could probably answer your question if you'll allow me to explain why it's not capable.

THE COURT: Well, I don't want to allow a long narrative discourse. If you can answer the question concisely then proceed.

BY THE WITNESS:

A: The validity -- the truth or falsity of a confession is certainly important and sometimes it's possible to know whether a confession was in fact true or false. There have been studies -- a lot of studies are done on what are called disputed confessions as opposed to undisputed confessions, and the undisputed confessions are more important because it is known whether or not the confession was true or false.

Q: Well, if your studies are based -- is there empirical data that you're basing your opinion on?

A: Yes.

Q: Okay. Those studies would have to say -- you would have to presume that a confession was coerced for those studies to have any validity, correct?

A: No. Sometimes one knows that a confession is false and therefore coerced because of -- of independent factors, such as knowing -- eventually identifying who the real killer might be.

Q: But in those studies for those to have any value at all scientifically, somebody has to make a determination that the confessions were coerced or not, correct?

A: Not necessarily because we know the conditions that lead up to confessions that are undisputed where individuals give true confessions and do not recant them, and we know under other circumstances when people give false confessions which are subsequently proven to be false because the perpetrator is in fact caught.

Q: Well, let me ask you this: How would you characterize the situation where you said it was a false confession and a court determined that it was not a false confession, where would you categorize that?

A: I don't know that I've ever said that something was a false confession. I know I've testified as to whether something was coerced or not.

Q: So you -- as far as this talk previously about false confessions you don't deal in that area?

A: No. I -- the question suggested to me, you're asking me about a time when I testified in a court that a confession was false, and it was judged the other way and I don't believe that that's ever occurred.

Q: Have you not testified as to inaccurate contents of confessions in a court and the jury disregarded that and ruled another way?

A: I testified I believe in one case in which I testified that in my opinion a particular confession was coerced and the confession was not suppressed and I've testified in other cases where it is my opinion that a confession was in fact coerced and the court found that way.

Q: Okay. Well---

MR. CROW: May we approach the bench, please?

THE COURT: All right.

(THE FOLLOWING DISCUSSION WAS HELD AT THE BENCH OUT OF THE HEARING OF THE JURY.)

MR. CROW: Your Honor, is he qualifying this witness or is he cross examining him?

MR. DAVIS: Your Honor---

THE COURT: Well, I'm going to be honest, gentlemen, I'm real interested in knowing what a sociologist is going to testify to that would aid and benefit the jury and what is the scientific basis of that testimony. It seems to me that you've called this witness to give an opinion that the confession was coerced---

MR. STIDHAM: That is---

THE COURT: ---and that it was involuntary.

MR. STIDHAM: That's exactly right, your Honor.

THE COURT: And I think that -- that's a question for the jury to decide and I'm not sure I'm going to allow him to testify in that narrow framework. I can see him having value testifying that these are common techniques employed by the police that override one's free will, I found such and such of these conditions prevailing here, and things of that nature, or maybe group dynamics of a cult.

MR. CROW: Your Honor--

THE COURT: But I'm not sure I'm prepared to allow him to testify that in his opinion it's coerced and therefore invalid.

MR. CROW: Your Honor---

THE COURT: I mean, what the hell do we need a jury for?

MR. STIDHAM: He's not going to testify whether or not the confession is false or true or whether the defendant is guilty or innocent. He's going to testify to the voluntary nature of the confession -- statement to the police -- whether or not it was coerced. That's an issue that the jury has to decide and that's what an expert witness is for, to help the jury decide these issues.

MR. DAVIS: No. No, Judge, that's where -- that's the real crux of the matter whether-- the confession was coerced or not, doesn't make -- whether it was the truth. It's whether it was the truth and they're trying to get through the back door what they can't get through the front door.

MR. CROW: Disagree, your Honor. I---

MR. STIDHAM: Your Honor, that's not the correct statement of the law.

MR. CROW: The law recognizes--

THE COURT: No. The -- the -- the -- I mean, of course, I've ruled that it was voluntary. The jury, I guess, could go back and decide that it wasn't, if that's the issue you're talking about---

MR. CROW: That is what Arkansas law--

THE COURT: ---but the question of whether or not psychological ploys or tools were used to get a guilty person to give a true statement, now that's another issue.

MR. STIDHAM: Your Honor, that's not what he's going to testify to.

THE COURT: I don't know what you've got him here for. What is he going to testify to? I want to know.

MR. STIDHAM: Your Honor, he has an opinion as to whether or not the statements made by Mr. Misskelley to the West Memphis Police Department were voluntary.

THE COURT: Is that the way you're going to couch the question to him, and is that the way he's going to give his opinion -- "In my opinion they were involuntary"?

MR. STIDHAM: Yes, your Honor.

THE COURT: That the police used subtle techniques to cause an innocent man to confess -- to confess.

MR. CROW: He's not going to say whether he's innocent or not, your Honor.

MR. STIDHAM: Your Honor, that's for the Jury to decide.

MR. DAVIS: Judge, what we've got -- they're trying to get through the back door what they can't get through the front. It's the same way.

MR. STIDHAM: Your Honor---

THE COURT: Well, unfortunately they might be able to do that under the status of our law.

MR. DAVIS: Your Honor, the concern that I have here is that for there to be any empirical data and for him to actually claim to have any scientific basis, somebody somewhere has to categorize these cases as false confession cases or coercion cases. And what I'm saying is that this man along with his cohorts in the field have -- they label things to -- to back up or substantiate their particular theories, and -- and---

THE COURT: Well, I think all of those go to the weight of his -- weight of his testimony.

MR. STIDHAM: That's what -- that's what experts do. If they want to bring an expert to counter them, they can!

THE COURT: I think you can call your man to say in his opinion that there was nothing that they did out of the ordinary and that the statement was freely and voluntarily made.

MR. STIDHAM: That's the correct statement of the law, your Honor.

THE COURT: Well, we might as well get on with it. I'm going to let him testify but I'm not about to let him testify that in his opinion Misskelley is innocent--

MR. CROW: No, your Honor.

THE COURT: ---that his confession was a lie and false. I'm not going to allow him to do that.

MR. STIDHAM: He has an opinion as to what--

THE COURT: Don't even try to ask him whether or not he has an opinion whether the confession was true or false, because I'm ruling that he cannot do that.

MR. DAVIS: I want him cautioned before we proceed any further so that he doesn't blurt that out.

MR. CROW: Your Honor, can you give us two minutes?

THE COURT: Okay. Well, do you understand what I'm saying? I'm saying that there are areas where he has expertise that might be of some benefit and that is in the areas of group dynamics, in the area of -- of possibly coercive or -- or techniques that can be employed to make someone testify -- or -- or give a statement. Now, whether or not that statement is true or false is another matter.

MR. CROW: That's not what he's testifying about, your Honor.

THE COURT: And I'm not going to allow him to testify that, "In my opinion these officers illegally exacted or coerced a confession from him," either. I'm not going to allow him to testify to that.

MR. STIDHAM: That's the Court's job, your Honor. That's the jury's job.

THE COURT: Well, that's exactly right. So what is he going to testify to?

MR. STIDHAM: He's going to testify as to -- he has an opinion that this -- the statements made by the defendant were involuntary and a result of psychological coercive tactics employed by the West Memphis Police Department.

THE COURT: Were involuntary in what sense?

MR. STIDHAM: That's what he'll testify to.

THE COURT: Well, I want to know. What -- in what sense?

MR. DAVIS: Could we move in chambers?

( RECESS. )

THE COURT: All right, ladies and gentlemen, you can have about a fifteen minute recess with the usual admonition not to discuss the case.

( RECESS. )

THE COURT: All right, court will be in session. All right, ladies and gentlemen, you have heard a number of persons testify that have been presented and characterized as expert witnesses and perhaps will hear some more, and in that regard I'm going to give you an instruction of law that you should consider and it will again be read to you at the time all of the instructions are given. An expert witness is a person who has special knowledge, skill, experience, training, or education on the subject to which his testimony relates. An expert witness may give his opinion on questions and controversies. You may consider his opinion in the light of his qualifications and credibility, the reasons given for his opinion, and the facts and other matters upon which his opinion is based. You are not bound to accept an expert opinion as conclusive, but you should give it whatever weight you think it should have. You may disregard any opinion testimony if you find it to be unreasonable. All right, gentlemen, let's proceed.

MR. STIDHAM: Your Honor, may I approach the bench?

THE COURT: Sure.

(THE FOLLOWING DISCUSSION WAS HELD AT THE BENCH OUT OF THE HEARING OF THE JURY.)

MR. STIDHAM: I assume that the witness has now been qualified and I can go on with my questioning?

THE COURT: Again, I never make that statement. I just tell you to proceed.

MR. STIDHAM: Thank you.

(RETURN TO OPEN COURT. )

THE COURT: Do you have any additional voir dire?

MR. DAVIS: No, sir, your Honor, not at this time. We'll reserve it for cross examination.

THE COURT: All right. All right, you may proceed.

MR. STIDHAM: Thank you, your Honor.

CONTINUED DIRECT EXAMINATION BY MR. STIDHAM:

Writing Reflection #8

Please go to our Moodle Page and under "Class 8" you will find the prompt and submission folder for Writing Reflection #8.

2.7 (Omitted for Spring 23: additional materials on qualifying experts & expert testimony)

Chapter 63 in Learning Evidence (Merritt & Simmons)

Chapter 63 covers rules 703 and 705 and the proper bases for an expert’s opinion. You can skim or skip the last section on the Confrontation Clause.

Federal Rule of Evidence 703

An expert may base an opinion on facts or data in the case that the expert has been made aware of or personally observed. If experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject, they need not be admissible for the opinion to be admitted. But if the facts or data would otherwise be inadmissible, the proponent of the opinion may disclose them to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.

Optional reading: Advisory Committee Notes: 

Rule 703 has been amended to emphasize that when an expert reasonably relies on inadmissible information to form an opinion or inference, the underlying information is not admissible simply because the opinion or inference is admitted. Courts have reached different results on how to treat inadmissible information when it is reasonably relied upon by an expert in forming an opinion or drawing an inference. Compare United States v. Rollins, 862 F.2d 1282 (7th Cir. 1988) (admitting, as part of the basis of an FBI agent's expert opinion on the meaning of code language, the hearsay statements of an informant), with United States v. 0.59 Acres of Land, 109 F.3d 1493 (9th Cir. 1997) (error to admit hearsay offered as the basis of an expert opinion, without a limiting instruction). Commentators have also taken differing views. See, e.g., Ronald Carlson, Policing the Bases of Modern Expert Testimony, 39 Vand.L.Rev. 577 (1986) (advocating limits on the jury's consideration of otherwise inadmissible evidence used as the basis for an expert opinion); Paul Rice, Inadmissible Evidence as a Basis for Expert Testimony: A Response to Professor Carlson, 40 Vand.L.Rev. 583 (1987) (advocating unrestricted use of information reasonably relied upon by an expert).

When information is reasonably relied upon by an expert and yet is admissible only for the purpose of assisting the jury in evaluating an expert's opinion, a trial court applying this Rule must consider the information's probative value in assisting the jury to weigh the expert's opinion on the one hand, and the risk of prejudice resulting from the jury's potential misuse of the information for substantive purposes on the other. The information may be disclosed to the jury, upon objection, only if the trial court finds that the probative value of the information in assisting the jury to evaluate the expert's opinion substantially outweighs its prejudicial effect. If the otherwise inadmissible information is admitted under this balancing test, the trial judge must give a limiting instruction upon request, informing the jury that the underlying information must not be used for substantive purposes. See Rule 105. In determining the appropriate course, the trial court should consider the probable effectiveness or lack of effectiveness of a limiting instruction under the particular circumstances.

The amendment governs only the disclosure to the jury of information that is reasonably relied on by an expert, when that information is not admissible for substantive purposes. It is not intended to affect the admissibility of an expert's testimony. Nor does the amendment prevent an expert from relying on information that is inadmissible for substantive purposes.

Nothing in this Rule restricts the presentation of underlying expert facts or data when offered by an adverse party. See Rule 705. Of course, an adversary's attack on an expert's basis will often open the door to a proponent's rebuttal with information that was reasonably relied upon by the expert, even if that information would not have been discloseable initially under the balancing test provided by this amendment. Moreover, in some circumstances the proponent might wish to disclose information that is relied upon by the expert in order to “remove the sting” from the opponent's anticipated attack, and thereby prevent the jury from drawing an unfair negative inference. The trial court should take this consideration into account in applying the balancing test provided by this amendment.

Federal Rule of Evidence 705

Unless the court orders otherwise, an expert may state an opinion — and give the reasons for it — without first testifying to the underlying facts or data. But the expert may be required to disclose those facts or data on cross-examination.

Optional: Advisory Committee Notes:

The hypothetical question has been the target of a great deal of criticism as encouraging partisan bias, affording an opportunity for summing up in the middle of the case, and as complex and time consuming. Ladd, Expert Testimony, 5 Vand.L.Rev. 414, 426–427 (1952). While the rule allows counsel to make disclosure of the underlying facts or data as a preliminary to the giving of an expert opinion, if he chooses, the instances in which he is required to do so are reduced. This is true whether the expert bases his opinion on data furnished him at secondhand or observed by him at firsthand.

The elimination of the requirement of preliminary disclosure at the trial of underlying facts or data has a long background of support. In 1937 the Commissioners on Uniform State Laws incorporated a provision to this effect in the Model Expert Testimony Act, which furnished the basis for Uniform Rules 57 and 58. Rule 4515, N.Y. CPLR (McKinney 1963), provides:

“Unless the court orders otherwise, questions calling for the opinion of an expert witness need not be hypothetical in form, and the witness may state his opinion and reasons without first specifying the data upon which it is based. Upon cross-examination, he may be required to specify the data * * *.”

See also California Evidence Code §802; Kansas Code of Civil Procedure §§60–456, 60–457; New Jersey Evidence Rules 57, 58.

If the objection is made that leaving it to the cross-examiner to bring out the supporting data is essentially unfair, the answer is that he is under no compulsion to bring out any facts or data except those unfavorable to the opinion. The answer assumes that the cross-examiner has the advance knowledge which is essential for effective cross-examination. This advance knowledge has been afforded, though imperfectly, by the traditional foundation requirement. Rule 26(b)(4) of the Rules of Civil Procedure, as revised, provides for substantial discovery in this area, obviating in large measure the obstacles which have been raised in some instances to discovery of findings, underlying data, and even the identity of the experts. Friedenthal, Discovery and Use of an Adverse Party's Expert Information, 14 Stan.L.Rev. 455 (1962).

These safeguards are reinforced by the discretionary power of the judge to require preliminary disclosure in any event.

Notes of Advisory Committee on Rules—1993 Amendment

This rule, which relates to the manner of presenting testimony at trial, is revised to avoid an arguable conflict with revised Rules 26(a)(2)(B) and 26(e)(1) of the Federal Rules of Civil Procedure or with revised Rule 16 of the Federal Rules of Criminal Procedure, which require disclosure in advance of trial of the basis and reasons for an expert's opinions.

If a serious question is raised under Rule 702 or 703 as to the admissibility of expert testimony, disclosure of the underlying facts or data on which opinions are based may, of course, be needed by the court before deciding whether, and to what extent, the person should be allowed to testify. This rule does not preclude such an inquiry.

Townsend v. Morequity, Inc. (In re Townsend)

This is a civil case that addresses the admissibility of testimony from a proffered expert in handwriting analysis. It is useful for the distinction it makes between the reliability of the method and the qualifications of the individual examiner.

In re Mary Jo TOWNSEND, Debtor. Mary Jo Townsend, Plaintiff, v. Morequity, Inc., Defendant.

Bankruptcy No. 01-26777.

Adversary No. 01-2362.

United States Bankruptcy Court, W.D. Pennsylvania.

April 29, 2004.

Mary Bower Sheats, Pittsburgh, PA, for Debtor.

Charles F. Perego, Pittsburgh, PA, for Morequity, Inc.

MEMORANDUM OPINION

JUDITH K. FITZGERALD, Chief Judge.

Debtor has objected to the claim and secured status of Morequity, Inc., contending that her signature on the mortgage was forged. The issue before the court is whether the handwriting analysis testimony of Thelma Greco is admissible under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993), with respect to the signature on Debtor’s mortgage. If Debtor signed the document, then Debtor is liable to Morequity and Morequity’s claim will be allowed.

Mary Jo Townsend (Debtor) and her parents owned a home located in Monroeville, Pennsylvania, in which she and her husband lived with her children. Debtor’s husband applied to borrow $81,723.68 in order to refinance the existing mortgage loan and to put the home in the name of the Debtor and her husband only, to pay certain taxes, and to finance an addition to the house. Both Debtor and her husband appeared at the closing and signed and acknowledged all documents that were necessary. Debtor filed a Complaint to Determine Secured Status, Adv. No. 01-2362, seeking to have the mortgage with Morequity, Inc., avoided as unperfected and unsecured on the basis that her signature on the mortgage is a forgery. Debtor also objected to Morequity’s claim, asserting that it should be disallowed “because the claim is unperfected due to the forgery of [Debtor’s] signature.” Objection to Claim Filed by Morequity, Inc., Bankr. Dkt. No. 47, at ¶ 4.

Debtor offered the expert testimony of Thelma Greco to prove that Debtor’s husband forged her signature to receive the loan. Ms. Greco purports to be a forensic documents examiner whose education in the field consists of a certificate from the Andrew Bradley Course, obtained in 1994. She also took a course in 1999 offered through the National Association of Document Examiners (“NADE”). She testified that she had been “asked to be qualified as an expert witness” on two occasions, Transcript of May 28, 2003 (hereafter “Tr. 5/28/03”), Dkt. No. 117, at 14. When asked if she had ever been appointed by a court, she testified that she had “always been privately engaged.” Id. at 17. Ms. Greco testified that she started doing document analysis as a professional in 1991, before completing the Bradley Course, and that she has examined hundreds of documents for authenticity since 1992. Although she initially testified that she is “certified” as a forensic document examiner, it was established at trial that she is not yet a candidate for board certification by NADE. Tr. 5/28/03 at 11-12, 21-22, 38. This court permitted Ms. Greco to testify subject to consideration of the weight to be given her testimony or its being stricken in the event she does not qualify as an expert. We find that she does not so qualify and we therefore strike her testimony.

Ms. Greco first examined unique characteristics of the questioned handwriting to determine whether Debtor’s signature was a forgery. In addition, she employed the “cross check” system which involves placing a dot above each letter to determine top and bottom letter pattern, slant pattern and space pattern. Id. at 46-47. According to Ms. Greco, the cross check system is a way to validate the conclusion reached through examination of unique characteristics. Id. at 47. By using this method, Ms. Greco came to the conclusion that the Debtor’s signature on the mortgage was a forgery.

Morequity, Inc.’s (Morequity) position is that Ms. Greco’s testimony should not be admissible for two reasons:

first because she does not possess the requisite qualifications of a questioned document examiner to offer an opinion as to the genuineness of Debtor’s signature on the mortgage; and secondly, on the basis that the methodology she employed for evaluating the questioned document was flawed and has not been accepted by her peers in the questioned document community.

Brief in Support of Motion to Strike Expert Testimony (hereafter “Defendant’s Brief”), Adv. Dkt. No. 31, at 2. Morequity claims that Ms. Greco does not have the requisite qualifications because her certificate from the “Bradley Course” states that she has the “knowledge and ability to conduct basic document examination ... and to create a foundation for qualification as an expert in this field.” Defendant’s Brief, Adv. Dkt. No. 31, at 3 (emphasis omitted). This is the only education she has completed in the field, although she is in the process of completing other courses.

Morequity argues that Ms. Greco’s experience is extremely “thin” because she has been involved in only two court cases, and testified in only one of those cases, in the Orphans’ Court Division of the Court of Common Pleas of Allegheny County. In addition, her involvement in the first case took place before she obtained her certificate from the Bradley Course. Tr. 5/28/03 at 19. Morequity also points out that Ms. Greco has never been accepted as an expert in federal court.

Morequity’s expert witness, J. Wright Leonard, sits on the board of directors of the National Association of Document Examiners, and is certified by that organization as well as by the American Board of Forensic Examiners. Ms. Leonard was accepted as an expert at the hearing on May 28, 2003. Ms. Leonard testified that she had never heard of Ms. Greco’s “cross check” system prior to this case. Tr. 5/28/03 at 151. She further stated that doing a slant pattern or top and bottom of the letter analysis would not always give accurate information because the spacing within words can vary. Id. at 153. She testified that, because of the confined space within which the questioned signature was made, the top and bottom of the letter pattern method used by Ms. Greco would not have been of value, as the tops and bottoms of the letters could differ from the writer’s standard. Ms. Greco testified that although she had many samples of Debtor’s handwriting, she did not perform a “dots and ... line spacing” analysis with respect to samples in which the signature was in a confined space. Id. at 94.

Ms. Greco testified that in applying her method of analysis she used transparent paper, drawing lines on it in relation to the top and bottom of the letters. Id. at 51. Ms. Leonard testified that although performing slant stroke analysis is not of great value except in rare instances, glass grid plates with slant lines of various angles should be used, as well as protractors and certain types of rulers. Id. at 161-62. Using the grid provides uniform results. Ms. Leonard testified that if Ms. Greco drew the lines herself using transparent paper and then measured, it would not be an acceptable method of measuring a slant. Id. at 162.

DISCUSSION

We find that under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592-93, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993), Ms. Greco’s testimony is inadmissible and will be stricken. “Daubert holds that expert testimony is admissible under Rule 702 of the Federal Rules of Evidence only if ‘the reasoning or methodology underlying the testimony is scientifically valid and ... that reasoning or methodology properly can be applied to the facts in issue.’ ” In re Armstrong World Industries, Inc., 285 B.R. 864, 870 (Bankr.D.Del.2002), quoting Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 592-93, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993). The court in In re Armstrong World Industries noted that in Daubert the U.S. Supreme Court set forth a non-exclusive list of four factors and that the U.S. Court of Appeals for the Third Circuit expanded the list to eight. Those factors include:

(1) whether a method consists of a testable hypothesis;
(2) whether the method has been subject to peer review;
(3) the known or potential rate of error;
(4) the existence and maintenance of standards controlling the technique’s operation;
(5) whether the method is generally accepted;
(6) the relationship of the technique to methods which have been established to be reliable;
(7) the qualifications of the expert witness testifying based on the methodology; and
(8) the non-judicial uses to which the method has been put.

In re Armstrong World Industries, Inc., 285 B.R. at 870, quoting In re Paoli R.R. Yard PCB Litigation, 35 F.3d 717, 742 n. 8 (3d Cir.1994).

Daubert Factors

There is no evidence of record concerning all the factors mentioned above. We will discuss those that clearly apply.

Factor Number 2

According to Ms. Greco’s testimony, none of her methodologies have been subjected to peer review. Ms. Greco stated that she used the “cross check” system to determine whether or not the Debtor’s signature was forged. Ms. Leonard testified that she had never heard of the “cross check” system prior to reviewing Ms. Greco’s report and that this system had never been presented to her during her training or course work. Tr. 5/28/03 at 153. Ms. Leonard also testified that “doing a slant pattern” or a “top and bottom of the letter analysis and a slant pattern” will not always give accurate information and, because of the confined space allowed for signature on the document in question, such an analysis would prove of little value. Id. at 153-54. See supra. We find Ms. Leonard’s testimony credible and conclude that the methods employed by Ms. Greco were insufficient to complete a reliable analysis. Further, because there has been no peer review, there is no evidence regarding Factors 1, 3, 5, or 8.

Factor Number 5

Ms. Greco testified that she analyzed the slant pattern of the letters on the documents that she reviewed. Tr. 5/28/03 at 48. According to Edna Robertson, whom Ms. Greco cites as an expert in this field, id. at 81, this is a recognized method of examination. However, Ms. Leonard’s testimony, which we accept, is that the methodology used by Ms. Greco was flawed because the test requires that graph paper, among other things, be used as a constant with the sample as an overlay, and that analysis of the slant of the handwriting is not of great value except in rare instances. Id. at 161-62. Ms. Greco did not use any of the appropriate tools or paper or plates in her work. Thus, even if the slant method had been subject to peer review, Ms. Greco did not follow its dictates. We accept Ms. Leonard’s testimony in this regard.

Factor Number 7

Ms. Greco’s qualifications include a certificate from the Bradley Course received in 1994. As noted above, the certificate establishes that she demonstrated the ability to conduct basic document examination and to “create a foundation for qualification as an expert in this field.” Tr. 5/28/03 at 18 (emphasis added). Therefore, she only possesses the foundation of a qualification as an expert, not the actual qualifications. She has not completed the NADE certification course and has limited experience with respect to forgeries.

Ms. Greco’s prior experience includes only two court cases, one of which occurred before she completed the Bradley Course. She testified in only one of them and neither was a federal court case. Tr. 5/28/03 at 19. See also id. at 33-34; Defendant’s Brief, Adv. Dkt. No. 31, at 3. She admitted that she has not completed her apprenticeship and is not a candidate for board certification with NADE. Tr. 5/28/03 at 20-22. Ms. Greco testified that her only prior experience in document examination before taking the Bradley Course was with “graphoanalysis” which concerns the study of character traits and has nothing to do with handwriting analysis of the type at issue in this case. Tr. 5/28/03 at 15-16.

On the other hand, Ms. Leonard testified that she was an apprentice for several years under the guidance of an internationally recognized expert and consultant. Tr. 5/28/03 at 110-11. Her specialty is handwriting identification, and she is certified as a questioned documents examiner by the National Association of Document Examiners and the American Board of Forensic Examiners. Id. at 111. She belongs to various professional organizations, is on the board of directors of the National Association of Document Examiners, and is on the editorial staff of the Forensic Journal Board. Id. at 112. She engages in continuing education, lectures in her field, and has published articles. Id. at 112-13. She has been accepted in approximately 72 court cases as an expert on the issue of questioned documents examination since 1983. Id. at 113. She further testified that out of 75 to 90 cases per year, she is called to testify in approximately 10 to 12 cases. Id. at 114.

Based on the foregoing, we find that Ms. Greco is not qualified to give expert testimony in the field of forensic or questioned document analysis. We also find that her methodology does not meet the Daubert test, as it is not reliable and is not a generally accepted method of performing handwriting analysis. Daubert v. Merrell Dow Pharmaceuticals, Inc., supra, 509 U.S. at 594, 113 S.Ct. at 2797. Accordingly, it is inadmissible and, therefore, will be stricken from the record.

Even if Ms. Greco had the necessary qualifications and if her methods had been subjected to peer review and Ms. Greco had correctly employed the slant analysis procedure, we would give no weight to her testimony and would credit Ms. Leonard’s. Ms. Leonard’s training and experience outweigh Ms. Greco’s. Further, Ms. Leonard’s conclusions are consistent with the credible non-expert witness evidence in the case. We find that Debtor did, in fact, sign the questioned document.

Example of Expert Notice (2)

This is the expert notice for Melissa Gische filed in the Frye fingerprint pleading you have read about. Ms. Gische was also featured in the portion of the film we watched together about fingerprinting. (This document is a pdf, so it is posted on Moodle under "Class 8.") You can skim the document - I just want you to see the types of information that are included. 

2.8 Class 9: Quiz on Unit 2

Writing Reflection #9

Please go to our Moodle Page and under "Class 9" you will find the prompt and submission folder for Writing Reflection #9.