3 Unit 3: Beyond Admissibility: Litigation Strategies

3.1 Class 10: Beyond Admissibility

Limiting Testimony, Discovery, Jury instructions, and Cross-Examination

3.1.1 Limiting testimony

Williams v. United States

Judge Kate Easterly’s concurrence lays out the rationale for seeking a limitation on an examiner’s testimony. 

Focus on Judge Easterly’s concurrence which is highlighted in yellow. Feel free to skim (a) the facts, (b) the footnotes, and (c) even the court’s opinion - unless you want to see a powerful example of the damage that can be done by an unprepared defense attorney and how the standard of review inhibits progress in courts’ assessment of forensic evidence.

Judge Easterly is also one of my favorite writers, in part for writing clear, strong sentences like this one: “As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true.”

Marlon WILLIAMS, Appellant, v. UNITED STATES, Appellee.

No. 13-CF-1312.

District of Columbia Court of Appeals.

Argued Sept. 29, 2015.

Decided Jan. 21, 2016.

Enid Hinkes for appellant.

John Cummings, Assistant United States Attorney, with whom Ronald C. Machen, Jr., United States Attorney at the time the brief was filed, Elizabeth Trosman, John P. Mannarino, and Gary Wheeler, Assistant United States Attorneys, were on the brief, for appellee.

Before THOMPSON and EASTERLY, Associate Judges; and NEBEKER, Senior Judge.

Easterly, Associate Judge:

Marlon Williams was arrested and prosecuted for the shooting death of Min Soo Kang. As no eyewitnesses to the crime were discovered and as Mr. Williams had no known relationship with Mr. Kang, it took a number of investigative steps for the police to connect Mr. Williams with the crime: after finding Mr. Kang’s body, the police located his car; after examining fingerprints recovered from Mr. Kang’s car, the police identified Mr. Williams as a potential suspect; and after searching Mr. Williams’s apartment, the police recovered a gun that, when test-fired, left markings on the bullets that appeared to match the markings on bullets recovered from Mr. Kang’s car. This evidence, in conjunction with the testimony of an individual to whom Mr. Williams had made incriminating statements while they were in the courthouse cellblock, formed the bulk of the government’s case. After considering this evidence, a jury convicted Mr. Williams of first-degree felony murder while armed,1 attempt to commit robbery while armed,2 two counts of possession of a firearm during a crime of violence (PFCV),3 and carrying a pistol without a license.4 He received an aggregate sentence of 480 months’ imprisonment.

On appeal Mr. Williams primarily attacks the firearms and toolmark evidence presented against him, arguing among other things that, although defense counsel never objected, the examiner should not have been permitted to testify that the markings on the bullets recovered from Mr. Kang’s car were “unique” to the gun recovered from Mr. Williams’s apartment and thus that he did not have any doubt of their source. Because, to date, this court has only assumed without deciding that such testimony of absolute certainty is impermissible, we conclude that Mr. Williams has failed to establish that it was plain error for the trial court to permit the jury to hear it. We discern no other error warranting reversal, although we agree that Mr. Williams’s attempted robbery conviction and associated PFCV conviction merge with his felony murder conviction and must be vacated.

I. Facts

In the early morning hours of September 13, 2010, the bullet-riddled body of Min Soo Kang was discovered lying on the side of the road in Southeast D.C. The Metropolitan Police Department (MPD) began investigating and learned that Mr. Kang drove a Cadillac Escalade equipped with OnStar, a service that could remotely disable the vehicle. At MPD’s request, OnStar disabled Mr. Kang’s Escalade by the evening of September 13 and directed MPD officers to the vehicle’s location in Northeast D.C.

An MPD officer inspected the Escalade. He found no damage to the exterior of the car but discovered what he suspected were bullet holes in the backrest of the driver’s seat. The officer cut into the seat and recovered three bullets. He also collected fingerprints from the Escalade.

An MPD fingerprint examiner entered the fingerprints lifted from the Escalade into the national Automated Fingerprint Identification System (AFIS), which connects unknown prints to known prints in a digital database. AFIS identified Mr. Williams as a possible source of the fingerprints. Based on the fingerprint examiner’s preliminary conclusion that the prints on the Escalade belonged to Mr. Williams, MPD applied for and was granted a search warrant for Mr. Williams’s residence. Executing this warrant, MPD officers recovered a High Point brand firearm from Mr. Williams’s bedroom.

At trial,5 the government relied almost exclusively on forensic evidence, presenting expert testimony from a fingerprint examiner and a firearms and toolmark examiner.6 The fingerprint examiner testified to his conclusion that the prints recovered from the Escalade belonged to Mr. Williams. The firearms and toolmark examiner, Luciano Morales, testified on direct examination that when a bullet is fired from a particular gun, the gun leaves “unique” identifying marks, “similar to a fingerprint, basically.” He then testified that he had compared the markings on the bullets recovered from Mr. Kang’s car with the markings on the bullets test-fired from the gun recovered from Mr. Williams’s apartment (manufactured by High Point and admitted as Exhibit No. 58), and he had concluded that the bullets were fired by the same gun. On redirect, when the prosecutor asked whether there was “any doubt in [his] mind” that the bullets recovered from Mr. Kang’s Escalade were fired from the gun found in Mr. Williams’s room, the examiner responded, “[n]o sir.” He elaborated that “[t]hese three bullets were identified as being fired out of Exhibit No. 58. And it doesn’t matter how many firearms High Point made. Those markings are unique to that gun and that gun only.” The prosecutor then asked the examiner whether, “judging from the markings that you find in 58, it’s your conclusion that those three bullets were fired from 58?” The examiner was unequivocal: “Item Number 58 fired these three bullets.”

Counsel for Mr. Williams did not object to any of this testimony. The jury also heard stipulations that a print lifted from the gun did not match Mr. Williams and that the blood and DNA recovered from the gun did not match Mr. Kang or Mr. Williams. The jury convicted Mr. Williams on all charges.

II. Analysis

A. Sufficiency of the Evidence

We first address Mr. Williams’s argument that the government did not present sufficient evidence to support his felony murder conviction because it failed to establish the underlying felony of attempted robbery, and specifically failed to prove that Mr. Williams, and not another person, had stolen Mr. Kang’s Escalade. Reviewing the sufficiency of the evidence de novo, Nero v. United States, 73 A.3d 153, 157 (D.C.2013), we disagree. As Mr. Williams acknowledges in his brief, the government presented the following evidence to support an attempted robbery conviction: (1) testimony by the fingerprint examiner that the fingerprints lifted from both the exterior and interior of Mr. Kang’s Escalade matched Mr. Williams; (2) eyewitness testimony that a person consistent with Mr. Williams’s physical description was seen opening and closing the hood of the Escalade around the time it was disabled; and (3) testimony by the firearms and toolmark examiner that the bullets recovered from the Escalade matched bullets fired from Mr. Williams’s gun. From this evidence, drawing all reasonable inferences in favor of the government as we must, Nero, 73 A.3d at 157, we conclude that the jury reasonably could have determined that Mr. Williams stole Mr. Kang’s car, and thus necessarily committed the crime of attempted robbery.7 See Ray v. United States, 575 A.2d 1196, 1199 (D.C.1990) (“Every completed criminal offense necessarily includes an attempt to commit that offense.”). But see (Richard) Jones v. United States, 124 A.3d 127, 132-34 (D.C.2015) (Beckwith, J., concurring) (highlighting conflicting precedent from this court indicating that for general intent crimes, an attempt conviction requires proof of a higher mens rea than conviction for the completed offense).

B. The Firearms and Toolmark Examiner’s Opinion Testimony

Mr. Williams argues that the firearms and toolmark examiner should not have been able to testify that the markings on the bullets recovered from Mr. Kang’s Escalade were unique or that he was without “any doubt” that these bullets were fired from the gun found in Mr. Williams’s room. Because Mr. Williams did not object at trial to this testimony, we review only for plain error. See (John) Jones v. United States, 990 A.2d 970, 980-81 (D.C.2010). To prevail under this test, it is not enough for an appellant to demonstrate error; the appellant must also show that the error is plain, i.e., that the error is “so egregious and obvious as to make the trial judge and prosecutor derelict in permitting it, despite the defendant’s failure to object.” Id. at 981. We attribute such dereliction to the trial court only when an error is “clear under current law.”8 Conley v. United States, 79 A.3d 270, 289 (D.C.2013) (quoting United States v. Olano, 507 U.S. 725, 734, 113 S.Ct. 1770, 123 L.Ed.2d 508 (1993)). Applying this standard, we cannot say the trial court plainly erred by permitting the jury to hear the examiner’s certainty statements.

There is no precedent in this jurisdiction that limits a toolmark and firearms examiner’s testimony about the certainty of his pattern-matching conclusions. The closest this court has come to addressing this issue was in (Ricardo) Jones v. United States, 27 A.3d 1130 (D.C.2011). In that case the defense argued inter alia that toolmark and firearms examiners could not “stat[e] their conclusions with ‘absolute certainty excluding all other possible firearms.’ ” Id. at 1138. In response, the government assured this court, both in its appellate brief and at oral argument, that it was the government’s policy not to present such testimony. “In light of the government’s representation,” this court “assume[d], without deciding, that such experts should not be permitted to testify that they are 100% certain of a match, to the exclusion of all other firearms.” Id. at 1139. The court then determined that any such error was harmless. Id. Jones did not plainly bar the toolmark examiner in this case from testifying as he did and does not provide a foundation for a determination of plain error.

Nor can we say that the weight of non-binding authority outside this jurisdiction is a sufficient foundation for a determination that the trial court “plainly” erred by not sua sponte limiting the toolmark examiner’s testimony. See Euceda v. United States, 66 A.3d 994, 1012 (D.C.2013) (holding that error cannot be plain where neither this court nor the Supreme Court has decided the issue, and other courts are split on the issue). We are aware of only one state supreme court decision9 and no federal appellate decisions limiting the opinion testimony of firearms and toolmark examiners. Indeed, as one federal district court judge has observed, “[a]lthough the scholarly literature is extraordinarily critical” of toolmark pattern-matching, it appears that courts have made little effort to limit or qualify the admission of such evidence.10 United States v. Green, 405 F.Supp.2d 104, 122 (D.Mass.2005).

Mr. Williams refers us to the policy representation made by the government in Jones. The government concedes that, at Mr. Williams’s trial, it violated its policy “to only elicit firearms examiners’ opinions to a reasonable degree of scientific certainty.” But this concession cannot serve as the sole foundation for a determination of plain error. The government’s internal policy does not constitute binding law11 — let alone a “clear” or “obvious” rule — that a trial court should be presumed to know.12 Cf. Rose v. United States, 49 A.3d 1252, 1256, 1258 (D.C.2012) (holding that a trial court’s error could not be plain when there was “no clear case law” in our jurisdiction and that a published concurrence from a judge of this court, while on point, “is not the law of our jurisdiction”).

Since Mr. Williams has not shown that the state of the law is such that the trial court plainly should have sua sponte precluded or struck the certainty statements of the firearms and toolmark examiner in this case, Mr. Williams’s unpreserved challenge to these certainty statements cannot prevail under our test for plain error.

C. Confrontation Clause and Hearsay Challenges to the Firearms and Toolmark Evidence

Regarding the firearm and toolmark evidence presented in this case, Mr. Williams also challenges the admission, over objection, of two “worksheets” documenting the analysis of the bullets. These worksheets were signed by the firearms and toolmark examiner who testified at trial, Mr. Morales, but they also bore the signature and initials of his colleague, the “lead examiner on that particular case,” Rosalyn Brown.13 The government did not call Ms. Brown to testify because she had since been fired. On appeal, Mr. Williams argues that the admission of the worksheets violated his Sixth Amendment right to confrontation.

The Confrontation Clause of the Sixth Amendment, U.S. Const. amend. VI, prohibits the government from introducing “testimonial” hearsay at a criminal trial, unless the declarant is unavailable and the defendant has had a prior opportunity to cross-examine him. Crawford v. Washington, 541 U.S. 36, 53-54, 124 S.Ct. 1354, 158 L.Ed.2d 177 (2004). A hearsay statement is considered testimonial if it is “ ‘a solemn declaration or affirmation made for the purpose of establishing or proving some fact’ ... in the prosecution or investigation of a crime.” Young v. United States, 63 A.3d 1033, 1039-40 (D.C.2013) (quoting Crawford, 541 U.S. at 51, 124 S.Ct. 1354). Forensic evidence is also subject to the Confrontation Clause, which means a defendant must have an opportunity to cross-examine the analyst who actually conducted or observed the forensic testing. Id. at 1039.

Assuming the ballistics worksheets contained Ms. Brown’s testimonial hearsay statements, we conclude that their erroneous admission was harmless. See Duvall v. United States, 975 A.2d 839, 843 (D.C.2009) (applying the test for harmless error under Chapman v. California, 386 U.S. 18, 87 S.Ct. 824, 17 L.Ed.2d 705 (1967), to the admission of a lab report in violation of the Confrontation Clause). To begin with, the jury never heard any testimony about Ms. Brown’s observations and conclusions in Mr. Williams’s case and thus had no reason to think that the worksheets might document her examination of the bullet and firearm evidence. On the contrary, Mr. Morales testified (without “any doubt,” see supra Part II.B) only as to his own observations and conclusions. Meanwhile, the prosecution made no reference to another examiner in closing or rebuttal. Lastly, nothing on the worksheets themselves indicated that they reflected the independent conclusions of another, absent examiner. Thus, at most, the jury saw an ambiguous extra signature at the bottom of a document that Mr. Morales had testified reflected his work product. Based on these particular facts, we cannot discern any harm to Mr. Williams from the admission of these worksheets at his trial.14

D. Other Issues

With one exception, Mr. Williams’s remaining arguments fail. His unpreserved challenge to the admission of fingerprint evidence fails the third prong of the test for plain error where trial counsel conceded, both in opening and in closing, that the fingerprints on the Escalade belonged to Mr. Williams.15 Mr. Williams’s new argument that he is entitled to a Franks hearing16 also fails; the trial court did not plainly err by overlooking the discrepancy between the affidavit in support of the search warrant for Mr. Williams’s apartment, which cited fingerprint evidence as a basis for probable cause, and the fingerprint examiner’s testimony that he reviewed the prints and linked them to Mr. Williams on a date after the search warrant was executed. Instead, given other documentation indicating that the fingerprint examiner was asked to analyze the latent prints before the police sought and obtained the warrant, it would have been reasonable for the trial court to conclude that the examiner was simply mistaken as to the date on which he first examined the latent prints and connected them to Mr. Williams.

Mr. Williams prevails on his argument that this court must merge his attempted robbery and corresponding PFCV conviction with his felony murder conviction. “[A] person cannot be convicted of both felony murder and the underlying felony that supported the felony murder conviction.” Matthews v. United States, 13 A.3d 1181, 1191 (D.C.2011). Accordingly, we remand the case with instructions for the trial court to vacate Mr. Williams’s convictions for attempted robbery and the associated count of PFCV. See Morris v. United States, 622 A.2d 1116, 1130 (D.C.1993) (holding that when two predicate crimes for PFCV merge into one, the PFCV offenses also merge).

In all other respects, we affirm the judgment of the trial court.

So ordered.

Concurring opinion by Associate Judge EASTERLY.

EASTERLY, Associate Judge,

concurring:

In our adversarial system, we do not expect trial courts to “recognize on [their] own” that an expert’s testimony is “scientifically unorthodox or controversial.” (John) Jones v. United States, 990 A.2d 970, 980-82 (D.C.2010). In the absence of any objection at Mr. Williams’s trial to the admission of the firearms and toolmark examiner’s certainty statements, we could only reverse if the law were clear that the expert could not make these statements. See supra Majority Opinion, Part II.B. As discussed above, the law in this jurisdiction does not clearly preclude a firearms and toolmark examiner from testifying with unqualified, absolute certainty.1 But it should.

A statement that markings on a bullet are “unique” to a particular gun is a statement that the probability of finding another gun that can create identical bullet markings is zero. If purportedly unique patterns on bullets are declared a match, that declaration likewise negates the possibility that more than one gun could have fired the bullets — it is a statement of unqualified certainty that the bullets were fired from a specific gun to the exclusion of all others. Here the firearms and toolmark examiner testified that he had identified matching “unique” patterns; he also declared that he did not have “any doubt” that the bullets recovered from Mr. Kang’s car had been fired by the gun recovered from Mr. Williams’s apartment.

The government has a policy, admittedly violated here, not to elicit such certainty statements. This court was advised of the government’s policy in Jones. At oral argument in that case, in November 2011, counsel for the government stated that, as “concede[d]” in its brief, it was the government’s “position that practitioners should not state their conclusions to 100% scientific certainty.” The government further noted that it had “conceded in every hearing, starting two to three years ago when we first started having Frye hearings on this issue, that firearms examiners should not state their conclusions with absolute certainty.”2 Id. Which raises the question: why did the government adopt a policy to limit the opinion testimony of firearms and toolmark examiners? What happened “two to three” years before the Jones oral argument that prompted the creation of this policy?

In 2008, a committee of scientists and statisticians assembled by the National Research Council (NRC),3 which was in turn acting at the behest of the Department of Justice, issued a report on bullet pattern-matching analysis, Ballistic Imaging.4 Although the NRC Committee’s charge was to assess the feasibility and utility of establishing “a national reference ballistic image database ... that would house images from firings of all newly manufactured or imported firearms,” it recognized that the “[u]nderlying ... question” is “whether firearms-related toolmarks are unique: that is, whether a particular set of toolmarks can be shown to come from one weapon to the exclusion of all others.” Ballistic Imaging, supra note 3, at 1, 3. The NRC Committee determined that there was no data-based foundation to declare, with any certainty, individualization based on toolmark pattern matching.

Specifically, the NRC Committee made a “finding” that the “validity of the fundamental assumptions of uniqueness and reproducibility of firearms-related toolmarks has not yet been fully demonstrated.” Ballistic Imaging, supra note 3, at 3, 81. The NRC Committee noted that “derivation of an objective, statistical basis for rendering decisions [about matches] is hampered by the fundamentally random nature of parts of the firing process. The exact same conditions — of ammunition, of wear and cleanliness of firearms parts, of burning of propellant particles and the resulting gas pressure, and so forth — do not necessarily apply for every shot from the same gun.” Id. at 55. The NRC Committee concluded that “[a] significant amount of research would be needed to scientifically determine the degree to which firearms-related toolmarks are unique or even to quantitatively characterize the probability of uniqueness.” Id. at 3, 82.

The NRC Committee further expressed concern that, notwithstanding the absence of data and the corresponding statistical unknowns, firearms and toolmark examiners “tend to cast their assessments in bold absolutes, commonly asserting that a match can be made ‘to the exclusion of all other firearms in the world.’” Ballistic Imaging, supra note 3, at 82. The NRC Committee denounced this sort of testimony, explaining that “[s]uch comments cloak an inherently subjective assessment of a match with an extreme probability statement that has no firm grounding and unrealistically implies an error rate of zero.” Id. “[S]topping short of commenting on whether firearms toolmark evidence should be admissible” in court, the NRC Committee determined that “[c]onclusions drawn in firearms identification should not be made to imply the presence of a firm statistical basis when none has been demonstrated.” Id. (emphasis in original).

In a subsequent report commissioned by Congress and issued in 2009, Strengthening Forensic Science in the United States: A Path Forward,5 another NRC Committee published similar words of warning regarding firearms and toolmark evidence.6 This Committee explained that “[i]ndividual patterns from manufacture or from wear might, in some cases, be distinctive enough to suggest one particular source.” Id. at 154 (emphasis added). But “[b]ecause not enough is known about the variabilities among individual tools and guns,” the Committee was “not able to specify how many points of similarity are necessary for a given level of confidence in the result.”7 In other words, there is currently no statistical basis to declare with any degree of certainty that toolmarks on a bullet connect that bullet to a particular gun or “match” the markings on other bullets fired from that gun.8

Against this backdrop, there is only one permissible answer to the question left undecided in Jones regarding firearms and toolmark examiners’ assertions of certainty in their pattern-matching conclusions: the District of Columbia courts should not allow them. It is well established that expert opinion evidence is admissible if “it will not mislead the jury and will prove useful in understanding the facts in issue.” Clifford v. United States, 532 A.2d 628, 632 (D.C.1987) (citing Dyas v. United States, 376 A.2d 827, 831 (D.C.1977)); cf. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 589, 113 S.Ct. 2786, 125 L.Ed.2d 469 (1993) (“[T]he trial judge must ensure that any and all scientific testimony or evidence admitted is not only relevant, but reliable.”). Certainty statements such as those elicited by the government in this case are misleading and lack any legitimate utility in criminal trials; they imply a solid statistical foundation for individualization that does not currently (and may never) exist.

The government states in its brief to this court that it is “regrettable” that its expert was permitted to state his pattern-matching conclusion with absolute certainty. It is more than regrettable. It is alarming. We know that faulty forensic evidence, and in particular, objectively unfounded statements of certainty regarding forensic analysis, can contribute to wrongful convictions. See Strengthening Forensic Science, supra note 5, at 45; Brandon L. Garrett, Judging Innocence, 108 Colum. L.Rev. 55, 83-84 (2008).

Take the case of Donald Gates, who was wrongfully convicted of rape and murder and needlessly served twenty-seven years in prison.9 To persuade a jury of Mr. Gates’s guilt, the government relied on the similarly subjective pattern-matching analysis of hair evidence. The hair examiner in Mr. Gates’s case testified with only slightly more restraint than the firearms and toolmark examiner in this case, acknowledging that “it cannot be said that a hair came from one person to the exclusion of all others,” but nonetheless asserting that it was “ ‘highly unlikely’ that the hair found on the victim came from someone other than [Mr. Gates].” Brief for Appellee at 8, Donald E. Gates v. United States, 481 A.2d 120 (D.C.1984) (transcript citations omitted). But, just as in this case, there was no data-based foundation for the expert’s expression of certainty in his opinion.10

The use of these subjective certainty statements not only implicates the government’s “duty to refrain from improper methods calculated to produce a wrongful conviction,”11 it also calls into question the “fairness, integrity [and] public reputation of judicial proceedings.”12 Courts are our society’s chosen forum for ascertaining guilt in criminal cases. Our justice system can only function if it maintains the trust of the community. We rely on judges, as the umpires in our adversarial system, to prohibit the admission of evidence that is clearly without foundation. As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases, where we demand proof — real proof — beyond a reasonable doubt, precisely because the stakes are so high. To uphold the public’s trust, the District of Columbia courts must bar the admission of these certainty statements, whether or not the government has a policy that prohibits their elicitation. We cannot be complicit in their use.

Motion to limit examiner’s testimony in United States v. Antwan Holcomb

This is an example of an excellent motion prepared by defense attorneys who are seeking a limitation on the examiner’s testimony. (This is the reply to the government’s opposition to the defense’s original motion.) (Because this is a pdf document it is posted on Moodle under "Class 10.")

Feel free to skim pages 4-11, which cover arguments we’ve already studied, such as: (a) whether Frye is limited to “novel” methods; (b) how the relevant scientific community should be defined; and (c) how an admissibility standard should be applied to non-scientific evidence. Start reading closely on page 11.

You’ll note the overlap between some of the arguments attorneys make when seeking to exclude an expert, and when arguing in favor of a limitation on the expert’s testimony.

Note that one of the attorneys on this brief, Katerina Semyonova, is a CUNY Law graduate!

 

Writing Reflection #10

Please go to our Moodle Page and under "Class 10" you will find the prompt and submission folder for Writing Reflection #10.

3.1.2 Omitted for Spring 2023: Discovery

Judge Rakoff’s resignation letter

As you know, the now disbanded National Commission on Forensic Science was formed in response to the 2009 NRC Report and was charged with adopting recommendations for the Department of Justice. 

One of the Commission members was Judge Rakoff. At one point, Judge Rakoff abruptly resigned from the commission because of the position the government adopted regarding discovery.  (He later rejoined after DOJ reversed its position.)

This is the full text of Judge Rakoff’s letter; note the emphasis he places on the relationship between discovery and scrutinizing forensic science.

Dear Fellow Commissioners:

Last evening, January 27, 2015, I was telephonically informed that the Deputy Attorney General of the U.S. Department of Justice has decided that the subject of pre-trial forensic discovery — i.e., the extent to which information regarding forensic science experts and their data, opinions, methodologies, etc., should be disclosed before they testify in court — is beyond the “scope” of the Commission’s business and therefore cannot properly be the subject of Commission reports or discussions in any respect. Because I believe that this unilateral decision is a major mistake that is likely to significantly erode the effectiveness of the Commission — and because I believe it reflects a determination by the Department of Justice to place strategic advantage over a search for the truth — I have decided to resign from the Commission, effective immediately. I have never before felt the need to resign from any of the many committees on which I have served over the years; but given what I believe is the unsupportable position now taken by the Department of Justice, I feel I have no choice.

This issue first arose last October when the Subcommittee on Reporting and Testimony, which I have the honor to co-chair along with Wyoming prosecutor Matt Redle, presented to the full Commission for discussion a draft report, authored by Prof. Paul Giannelli, recommending, in essence, that federal prosecutors go beyond what is presently required by federal criminal rules and make available in cases in which they intend to call forensic experts the same particularized information that forensic experts are required to provide in federal civil cases. The Commission then debated the draft report on the merits, and many helpful suggestions were offered, reflecting the broad composition of the Commission and its ability, unlike judicial rule-making bodies or the like, to ascertain what makes sense in the specialized area of forensic science. However, the Department’s co-chair of the Commission, having expressed his view that the entire discussion was beyond the Commission’s scope, then determined that the issue, not of the merits but of whether such discovery matters could even be considered by the Commission, would be put to the Deputy Attorney General for decision. Matt Redle and I then requested the opportunity to submit a memorandum stating our views; this was permitted (a copy is here attached), and, as I understand, was attached as one of several appendices to a memorandum taking the opposite view that was submitted to the Deputy Attorney General in late November but never shared with Matt, me, our Subcommittee, or the Commission. After a substantial delay, the Deputy Attorney General adopted the view that any discussion of discovery changes was entirely outside the Commission’s purview, and this decision, without further explanation, was telephonically conveyed to me last night.

The notion that pre-trial discovery of information pertaining to forensic expert witnesses is beyond the scope of the Commission seems to me clearly contrary to both the letter and the spirit of the Commission’s Charter. That Charter specifies six duties that the Commission is commanded to fulfill. The third of these duties is “To develop proposed guidance concerning the intersection of forensic science and the courtroom.” A primary way in which forensic science interacts with the courtroom is through discovery, for if an adversary does not know in advance sufficient information about the forensic expert and the methodological and evidentiary bases for that expert’s opinions, the testimony of the expert is nothing more than trial by ambush. Indeed, from the standpoint of improving forensic science and making its application to criminal prosecutions more accurate (which were key reasons for the very creation of the Commission), discovery is probably the most important area of intersection between forensic science and the courtroom, because it is only through adequate discovery that forensic science can be meaningfully scrutinized in any specific case. The notion that improved discovery, going beyond what is minimally required by the federal rules of criminal procedure (which were drafted without any consideration of the difficulties unique to forensic science), is somehow outside the scope of the Commission’s work thus runs counter to both the mandate of the Commission’s Charter and the Commission’s overall purpose.

One might add that it seems unlikely that the Commission, at its very first meeting, would have created a Subcommittee on “Reporting and Testimony” if it were not concerned with how information about a forensic expert’s opinions was reported in advance of his testifying, i.e., discovery. And the written instruction that was sent by the Department of Justice’s liaison to the Subcommittee expressly stated that the Subcommittee should consider, inter alia, “legal issues inherent in reporting and testimony, such as discovery.”

As the federal rules of criminal procedure now stand, prosecutors who intend to call forensic experts to testify do not have to supply the same full pre-trial discovery about those experts and the methodological and evidentiary bases for their opinions that parties calling forensic experts in civil cases are required to supply under federal rules of civil procedure. But none of these rules focuses on the unique problems presented by forensic science, where there is much greater variance in standards, credentials, testing, and the like than in other scientific disciplines. That is why this Commission, which has such a broad range of participants in the field, is so well suited to consider whether, under the circumstances, greater pre-trial discovery, even though not required, should be embraced by the Department of Justice, both as a matter of fairness and also to help insure the determination of the truth. Does the Department have to be reminded of the many cases of grossly inaccurate forensic testimony that led to the creation of the Commission?

It is hard to escape the conclusion, therefore, that the Department’s determination that pre-trial discovery relating to forensic expert testimony is beyond the “scope” of the Commission is chiefly designed to preserve a courtroom advantage by avoiding even the possibility that Commission discussion might expose it as unfair. Prior to this decision, I have felt privileged to have been part of the Commission, not least because of the many wonderful fellow Commissioners with whom I have had a chance to work. I have also felt that, as the sole federal judge on the Commission, I could perhaps provide a useful perspective. But I cannot be a party to this maneuver by the Department to cabin the Commission’s inquiries, and I therefore must resign in protest.

Jed S. Rakoff

New York Criminal Procedure Law Regarding Discovery

§ 245.20

This section includes excerpts from the discovery statute that are most relevant to experts and forensics.

You can find the full text of the statute here.

You can learn more about the recent discovery reforms reflected in this statute here.

§ 245.20 Automatic discovery

  1. Initial discovery for the defendant. The prosecution shall disclose to the defendant, and permit the defendant to discover, inspect, copy, photograph and test, all items and information that relate to the subject matter of the case and are in the possession, custody or control of the prosecution or persons under the prosecution's direction or control, including but not limited to:

. . .

  (f) Expert opinion evidence, including the name, business address, current curriculum vitae, a list of publications, and a list of proficiency tests and results administered or taken within the past ten years of each expert witness whom the prosecutor intends to call as a witness at trial or a pre-trial hearing, and all reports prepared by the expert that pertain to the case, or if no report is prepared, a written statement of the facts and opinions to which the expert is expected to testify and a summary of the grounds for each opinion. This paragraph does not alter or in any way affect the procedures, obligations or rights set forth in section 250.10 of this title. If in the exercise of reasonable diligence this information is unavailable for disclosure within the time period specified in subdivision one of section 245.10 of this article, that period shall be stayed without need for a motion pursuant to subdivision two of section 245.70 of this article; except that the prosecution shall notify the defendant in writing that such information has not been disclosed, and such disclosure shall be made as soon as practicable and not later than sixty calendar days before the first scheduled trial date, unless an order is obtained pursuant to section 245.70 of this article. When the prosecution's expert witness is being called in response to disclosure of an expert witness by the defendant, the court shall alter a scheduled trial date, if necessary, to allow the prosecution thirty calendar days to make the disclosure and the defendant thirty calendar days to prepare and respond to the new materials.

  . . .

  (j) All reports, documents, records, data, calculations or writings, including but not limited to preliminary tests and screening results and bench notes and analyses performed or stored electronically, concerning physical or mental examinations, or scientific tests or experiments or comparisons, relating to the criminal action or proceeding which were made by or at the request or direction of a public servant engaged in law enforcement activity, or which were made by a person whom the prosecutor intends to call as a witness at trial or a pre-trial hearing, or which the prosecution intends to introduce at trial or a pre-trial hearing. Information under this paragraph also includes, but is not limited to, laboratory information management system records relating to such materials, any preliminary or final findings of non-conformance with accreditation, industry or governmental standards or laboratory protocols, and any conflicting analyses or results by laboratory personnel regardless of the laboratory's final analysis or results. If the prosecution submitted one or more items for testing to, or received results from, a forensic science laboratory or similar entity not under the prosecution's direction or control, the court on motion of a party shall issue subpoenas or orders to such laboratory or entity to cause materials under this paragraph to be made available for disclosure. The prosecution shall not be required to provide information related to the results of physical or mental examinations, or scientific tests or experiments or comparisons, unless and until such examinations, tests, experiments, or comparisons have been completed.

(k) All evidence and information, including that which is known to police or other law enforcement agencies acting on the government's behalf in the case, that tends to: (i) negate the defendant's guilt as to a charged offense; (ii) reduce the degree of or mitigate the defendant's culpability as to a charged offense; (iii) support a potential defense to a charged offense; (iv) impeach the credibility of a testifying prosecution witness; (v) undermine evidence of the defendant's identity as a perpetrator of a charged offense; (vi) provide a basis for a motion to suppress evidence; or (vii) mitigate punishment. Information under this subdivision shall be disclosed whether or not such information is recorded in tangible form and irrespective of whether the prosecutor credits the information. The prosecutor shall disclose the information expeditiously upon its receipt and shall not delay disclosure if it is obtained earlier than the time period for disclosure in subdivision one of section 245.10 of this article.

3.1.2.1 Read ONE of the three discovery requests below:

Facial Recognition Discovery Request

Face recognition discovery

  1. The report produced by the analyst or technician who ran the facial recognition software, including any notes made about the possible match.
  2. The name and training, certifications, or qualifications of the analyst who ran the facial recognition search query.
  3. All policies and procedures pertaining to the use of facial recognition software or databases.
  4. Any and all validation studies used to test the reliability and validity of the facial recognition system.
  5. Any and all training materials pertaining to the use of facial recognition.
  6. Name & manufacturer of the facial recognition software used to conduct the search in this case, and the algorithm(s) version number(s) and year(s) developed, if available.
  7. The measurements, nodal points, or other unique identifying marks used by the system in creating facial feature vectors, and, if the marks are weighted differently, the scores given to each respective mark.
  8. Error rates for the facial recognition system used, including false accept and false reject rates (also called false match and false non-match rates—FMR and FNMR). Documentation of how the error rates were calculated, including whether they reflect test or operational conditions.
  9. Performance of the algorithm(s) on applicable NIST Face Recognition Vendor Tests, if available.
  10. The original copy of the query or "probe" photo submitted to the face recognition unit.
  11. All edited copies of the query or "probe" photo submitted to the facial recognition system, noting if applicable, which edited copy produced the candidate list the defendant was in, and a list of edits, filters, or any other modifications made to that photo. 
  12. Copy of the database photo matched to the query or "probe" photo and the percentage of the match, rank number, or confidence score assigned to the photo by the facial recognition system in the candidate list. 
  13. A list or description of the rank number or confidence scores produced by the system, including the scale on which the system is based (e.g. percentage, logarithmic, other).
  14. A copy of the complete candidate list returned by the facial recognition system, or the first 20 candidates in the candidate list if longer than 20, in rank order and including the percentage of the match or confidence score assigned to each photo by the facial recognition system.
  15. Parameters of the database used:
    1. How many photos are in the database?
    2. How are the photos obtained?
    3. How long are the photos stored?
    4. How often is the database purged?
    5. What is the process for getting photos removed from the database?
    6. Who has access to the database?
    7. How is the database maintained?
    8. The Privacy Policy for the database.
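The error rates requested in item 8 are simple ratios computed over labeled comparison trials. The sketch below is illustrative only; the scores and the 0.80 threshold are hypothetical, not taken from any actual system:

```python
# Illustrative only: how false match rate (FMR) and false non-match rate
# (FNMR) are computed from labeled comparison trials.
# All scores and the threshold here are hypothetical.

def error_rates(trials, threshold):
    """Each trial is (similarity_score, same_person: bool)."""
    impostor = [s for s, same in trials if not same]  # different-person pairs
    genuine = [s for s, same in trials if same]       # same-person pairs
    # FMR: fraction of different-person pairs the system wrongly "matches"
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    # FNMR: fraction of same-person pairs the system wrongly rejects
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

trials = [(0.95, True), (0.40, True), (0.88, False), (0.20, False)]
fmr, fnmr = error_rates(trials, threshold=0.80)
print(fmr, fnmr)  # 0.5 0.5
```

Note the trade-off: raising the threshold lowers FMR but raises FNMR, which is why item 8 also asks how the rates were calculated and whether they reflect test or operational conditions.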

Stingray Discovery Request

Stingray devices collect cell-site data, so this is a form of digital forensic evidence. To learn more, click here.

STATE OF MINNESOTA

COUNTY OF HENNEPIN

DISTRICT COURT

FOURTH JUDICIAL DISTRICT

State of Minnesota,

                                    Plaintiff,

vs.

David Johnson,

                                    Defendant.

Court File No.: 27-CR-14-16595

DEFENDANT’S DEMAND FOR SPECIFIC DISCOVERY PURSUANT TO RULES OF CRIMINAL PROCEDURE

 

TO:     THE ABOVE-NAMED COURT AND DANIEL ALLARD, ASSISTANT HENNEPIN COUNTY ATTORNEY, COUNSEL FOR THE STATE OF MINNESOTA.

            PLEASE TAKE NOTICE that, pursuant to Rule 9.01, subd. 1 (3) of the Minnesota Rules of Criminal Procedure, defendant hereby demands that the State provide discovery and produce the following items for inspection by defense counsel:

  1. Access to any and all Stingray or Stingray II devices used in connection with this case by law enforcement.
  2. Specifically, copies of any and all reports from any officers who used any Stingray or Stingray II devices during this case.
  3. Specifically, copies of any and all reports outlining what those Stingray or Stingray II devices are that were used in connection with this case by law enforcement.
  4. Specifically, copies of any and all records of training and/or certification those officers have for the use of the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  5. Specifically, copies of any and all training materials in the possession of the law enforcement agencies involved in this case for the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  6. Specifically, copies of any and all nondisclosure agreements in the possession of the law enforcement agencies involved in this case for the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  7. Copies of any and all policies on the use of Stingray or Stingray II devices for the law enforcement agencies involved in this case.
  8. Specifically, copies of any and all policies on the use of Stingray or Stingray II devices for the law enforcement agencies involved in this case for the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  9. Copies of any and all policies on the disclosure of Stingray or Stingray II devices for the law enforcement agencies involved in this case.
  10. Specifically, copies of any and all policies on the disclosure of Stingray or Stingray II devices for the law enforcement agencies involved in this case for the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  11. Copies of any and all warrants applied for by law enforcement during the investigation of this case.
  12. Specifically, copies of any and all warrants applied for by law enforcement during the investigation of this case for the use of the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  13. Copies of any and all warrants granted to law enforcement during the investigation of this case.
  14. Specifically, copies of any and all warrants granted to law enforcement during the investigation of this case for the use of the Stingray or Stingray II devices that were used in connection with this case by law enforcement.
  15. Copies of any and all subpoenas used by law enforcement during the investigation of this case.
  16. Specifically, copies of any and all subpoenas used by law enforcement during the investigation of this case for the use of the cell phone tracking that was used in connection with this case by law enforcement.
  17. Specifically, copies of any and all subpoenas involving telephone number 651-208-2966 used by law enforcement during the investigation of this case for the use of the cell phone tracking that was used in connection with this case by law enforcement.
  18. Specifically, copies of any and all subpoenas involving telephone service provider Verizon Wireless used by law enforcement during the investigation of this case for the use of the cell phone tracking that was used in connection with this case by law enforcement.
  19. Copy of any and all records produced as a result of the subpoenas used by law enforcement during the investigation of this case.
  20. Specifically, copies of any and all records produced as a result of the subpoenas used by law enforcement during the investigation of this case for the cell phone tracking that was used in connection with this case by law enforcement.
  21. Specifically, copies of any and all records produced as a result of the subpoenas involving telephone number 651-208-2966 used by law enforcement during the investigation of this case for the cell phone tracking that was used in connection with this case by law enforcement.
  22. Specifically, copies of any and all records produced as a result of the subpoenas involving telephone service provider Verizon Wireless used by law enforcement during the investigation of this case for the cell phone tracking that was used in connection with this case by law enforcement.

 

 

Date: February 18, 2022                                            Respectfully Submitted,

Fingerprint Discovery Request

FINGERPRINT DISCOVERY REQUEST

INTRODUCTION:

This is a request for disclosure of materials pertaining to friction ridge (fingerprint) analysis performed in the case of (CASE NAME, COUNTY, CASE NUMBER). This request applies to all fingerprint analysis that has been, is currently being, or will be performed in the instant case. The fingerprint analysis includes but is not limited to the collection of all latent prints, the evaluation of the prints, and any comparisons attempted. This request is ongoing. In the event that new materials responsive to this request are obtained, produced, discovered, or otherwise come into the possession of the prosecution or its agents, the materials should be provided to the defendant without delay.

DISCOVERY:

  1. Case file: Please provide a complete copy of the case file including all records made by the laboratory or agency in connection with this case. If the file includes photographs, please include photographic quality copies. If there are separate files for collection, evaluation and comparison, provide all files.
  2. Bench notes: Please provide a copy of all bench notes recorded by latent print examiners in the course of analyzing, uploading and comparing all print evidence in this case.
  3. Protocols: Please provide a copy of all Fingerprint Technician Quality Assurance and Training Guidelines and Protocols (Standard Operating Procedure Manuals) used in the laboratory or by the agency that analyzed the fingerprint evidence.  This includes, but is not limited to, the Quality Manual [see SWGFAST (Scientific Working Group on Friction Ridge Analysis Study and Technology), Quality Assurance Guidelines for Latent Print Examiners, version 2.11 (Aug. 22, 2002), §3]. To minimize any burden of duplicating these items, I invite you to provide them in electronic form.
  4. Chain of custody and current disposition of evidence: Please provide copies of all records that document the collection, handling and storage of fingerprint evidence in this case, from the initial point of collection up to the current disposition. This includes latent prints and exemplars.
  5. Software: Please provide a list of all Automated Fingerprint Identification Systems (AFIS) used in this case, including the name of the software program, manufacturer, and version used in this case.
  6. Data files: If AFIS was used in any way in this case, please provide the following:

(1.1) Latent prints: All electronic images of any and all “latent” prints (prints recovered as evidence in this case) entered into an AFIS in this case in standard (.eft or .wsq) format.

(1.2) Encoding: Please provide the encoding record, indicating the ridge details (or “minutiae”) marked by laboratory personnel prior to any and all AFIS searches.

(1.3) Search results: Hard copy printout or electronic output in easily readable format of the results of any and all AFIS searches run in connection with this case. Information provided should include, but is not limited to:

(1.3.1) ranked list of “candidate matches”

(1.3.2) identification numbers of all images appearing on the “candidate list”

(1.3.3) “match scores” of all images appearing on the “candidate list.”

(1.4) Candidate matches: Electronic images of all items appearing on the candidate list in standard (.eft or .wsq) format.

(1.5) Client’s records: Electronic images of any and all ten-print records associated with or identified to our client in standard (.eft or .wsq) format.

These files should include all data necessary to (i) independently re-analyze the raw data and (ii) reconstruct the analysis performed in this case.

  7. Digital Enhancement: If the fingerprint evidence in this case was digitally enhanced at any time for any reason, please provide all records documenting the enhancement, including the original and enhanced images and the software, settings, and filters used.

  8. Documentation of Corrective Actions for Discrepancies and Errors:

Please provide any and all laboratory records of erroneous individualizations, erroneous verifications, clerical or administrative errors, or missed individualizations committed by the laboratory. [For definitions of these terms, please see SWGFAST, Quality Assurance Guidelines for Latent Print Examiners, version 2.11 (Aug. 22, 2002), §2.2.] Please provide the name(s) of the case(s), the name(s) of the examiner(s) involved, the reported cause(s) of the error(s), the resolution(s) of the case(s), and any corrective action(s) taken. According to SWGFAST, Quality Assurance Guidelines for Latent Print Examiners, version 2.11 (Aug. 22, 2002), §8.2, “The specific policies, procedures, and criteria for any corrective action taken as a result of a discrepancy in a technical case review should be clearly documented in writing.” Please provide a copy of all documentation of any errors of any type and of all corrective actions taken in connection with any errors of any kind made by the print examiners and peer reviewers that performed fingerprint analysis or review in this case. If the laboratory or agency does not comply with the SWGFAST requirement that it maintain this documentation, it is sufficient to respond: “The laboratory does not comply with the SWGFAST requirement that it document corrective actions.”

  9. Accreditation: Please provide copies of all licenses or other certificates of accreditation in fingerprint analysis held by the laboratory, as well as a copy of the lab or agency’s most recent external audit report.
  10. Laboratory personnel: Please provide background information about each person involved in conducting or reviewing the fingerprint analysis performed in this case, including:

(10.1) Current resume or CV

(10.2) A summary of all proficiency test results for the past 5 years.

  11. Validation Studies: Copies of all validation studies conducted by the lab or agency and/or relied on by the lab or agency in connection with fingerprint evaluation or analysis.
  12. Quality Assurance: A copy of the quality assurance manual for the fingerprint unit.
  13. Training Manual: A copy of the training manual for fingerprint examination and comparison.
  14. Testimony Reviews: All documentation prepared in connection with the review of prior testimony by the print examiners who will testify in this case. [See SWGFAST, Quality Assurance Guidelines for Latent Print Examiners, version 2.11 (Aug. 22, 2002), §10.]
  15. Communications: All communications regarding the case between latent print examiners, police officers, and district attorneys, including but not limited to oral communications, reports, letters, and emails.
  16. Initial Communication: A description of all information about the case and the suspect provided to the latent print examiner prior to his or her comparison.


3.1.3 Omitted for Spring 2023: Jury Instructions

Standard jury instruction in New York on expert testimony

This is the default (pattern) jury instruction in New York cases where experts testify.  (If lawyers do not advocate for special instructions, the jury will likely hear only the pattern instruction from the judge.)

EXPERT WITNESS 1

You will recall that (specify) testified

[about certain (scientific), (medical), (technical) matters]

[or specify the field(s)]

and gave an opinion on such matters.

Ordinarily, a witness is limited to testifying about facts and is not permitted to give an opinion. Where, however, scientific, medical, technical or other specialized knowledge will help the jury understand the evidence or to determine a fact in issue, a witness with expertise in a specialized field may render opinions about such matters.

You should evaluate the testimony of any such witness just as you would the testimony of any other witness. You may accept or reject such testimony, in whole or in part, just as you may with respect to the testimony of any other witness.

In deciding whether or not to accept such testimony, you should consider the following:

  • the qualifications and believability of the witness;
  • the facts and other circumstances upon which the witness's opinion was based;
  • [the accuracy or inaccuracy of any assumed or hypothetical fact upon which the opinion was based;]
  • the reasons given for the witness's opinion; and
  • whether the witness's opinion is consistent or inconsistent with other evidence in the case.

Defense Proposed CSI Jury Instruction

Just as I have advised you to avoid any media or publicity about this case, the effort to ensure that you decide this case solely on the evidence presented in this courtroom also puts a limit on getting information from television entertainment. This applies to popular TV shows such as CSI and NCIS, which present the use of forensic methods such as fingerprint comparison to resolve criminal investigations. These and other similar shows will leave you with a false understanding of fingerprint comparison analysis. As far as this case is concerned, you are not prohibited from watching such shows, but you may not rely on any of the information from these CSI-type programs. These programs are works of fiction. The programs often present all forensic methodologies as being scientifically valid. In this case, the scientific validity and reliability of the forensic method used – fingerprint comparison – will be an issue contested by the parties. I will give you further instructions at the close of the evidence to help you evaluate the extent to which the fingerprint examiner’s conclusion was based on a scientifically valid and reliable method. Thus, in this case you must put aside anything you think you know about fingerprint comparison evidence based on what you have seen on television. Instead you must rely on the evidence and the testimony presented in this case and follow the instructions I will provide you on the law and the assessment of evidence.

Model motion requesting a jury instruction on eyewitness identification testimony

This is the type of brief an attorney files in support of a request for a non-default jury instruction.

Just read the bolded text at the beginning and then just skim the remainder to get a sense of the kind of authority lawyers use to support such requests.

  • Note: The references to the “Redbook” instructions are references to that jurisdiction’s default (pattern) jury instructions.

SUPERIOR COURT OF THE DISTRICT OF COLUMBIA

Criminal Division — Felony Branch

UNITED STATES OF AMERICA                      :
                                                                        :           Case No.
                            v.                                          :           Judge
                                                                        :
DEFENDANT                                                  :

MEMORANDUM OF POINTS AND AUTHORITIES IN SUPPORT OF PROPOSED JURY INSTRUCTIONS

 

            [Client], through undersigned counsel, pursuant to [his/her] rights under the Fifth and Sixth Amendments to the United States Constitution, respectfully moves the Court to issue the attached proposed jury instructions on evaluating eyewitness identification evidence.  In this [Offense] case, which turns on a [single] [stranger] identification by the government’s witness, it is vital that the Court instruct the jury “on the importance of carefully evaluating the identification testimony and the circumstances surrounding the identification.”  Smith v. United States, 343 A.2d 40, 42-43 (D.C. 1975). 

The primary purpose of this jurisdiction’s model (“Redbook”) identification instruction is to prevent wrongful convictions stemming from misidentifications, but the instruction is based on scientific and scholarly work that is now several decades old. Because it has not been updated to reflect the considerable advances in science regarding eyewitness identifications, the instruction will fail to provide the jurors with an accurate guide for their consideration of eyewitness identifications. The current instruction does not accurately instruct the jury regarding variables that can affect a witness’s memory and confidence, the value of a witness’s statement of confidence, the nexus between identification and memory and the variables that affect memory, or the effect of factors such as stress, the presence of a weapon, and racial or ethnic differences between the perpetrator and the witness. Providing the Redbook instruction without modifications based on the scientific advancements of the past few decades will inject an unnecessary risk of a wrongful conviction into this trial. This Court should consider the current science on eyewitness identifications, as well as the decisions of the high courts of a number of other jurisdictions that have acknowledged the scientific advancements and that have incorporated the current science into their model jury instructions. This Court should follow those other jurisdictions and provide an accurate and up-to-date identification instruction that minimizes the risk in this case of a wrongful conviction based on misidentification.

The “current” Redbook instruction, initially promulgated by the D.C. Circuit in an appendix to United States v. Telfaire, 469 F.2d 552, 555 (D.C. Cir. 1972) (per curiam) because “[t]he presumption of innocence that safeguards the common law system must be a premise that is realized in instruction and not merely a promise,” is, at its core, based on studies and scholarly works that are all at least sixty years old. Indeed, some of the studies on which it is based are more than eighty years old. The instruction was adopted based on D.C. Circuit opinions from the late 1960s and early 1970s, including Macklin v. United States, 409 F.2d 174 (D.C. Cir. 1969), and Telfaire, 469 F.2d 552; see also Criminal Jury Instructions for the District of Columbia § 5.06 cmt. (2d ed. 1972); Smith v. United States, 343 A.2d 40, 43 & n.5 (D.C. 1975). Macklin and Telfaire were in turn based on a series of cases decided by the Supreme Court in 1967: United States v. Wade, 388 U.S. 218 (1967); Gilbert v. California, 388 U.S. 263 (1967); and Stovall v. Denno, 388 U.S. 293 (1967).[1] Those cases, particularly Wade, relied on what was then the current academic study of the problems affecting eyewitness identifications.[2] For example, the Wade opinion relied on what it called “one of the most comprehensive studies” of pretrial identifications, conducted by Williams and Hammelmann and published in 1963. Wade, 388 U.S. at 235 (quoting Identification Parades, Part I, (1963) Crim. L. Rev. 479, 483). The Court also considered scholarly works from as far back as 1930. Id. at 229 n.7 (citing inter alia F. Gorphe, Showing Prisoners to Witness for Identification, 1 Am. J. Police Sci. 79 (1930)).

In the several decades since the studies that influenced the Redbook identification instruction were conducted, there have been considerable advancements in the science of eyewitness identification. Indeed, the National Academy of Sciences (NAS) has acknowledged that instructions like the current Redbook instruction do not accurately reflect the accepted science on identifications. In its landmark 2014 report, Identifying the Culprit: Assessing Eyewitness Identification (hereinafter “NAS Report”), attached as Exhibit X, the NAS directly criticized the inadequacy of such instructions.[4]

The Redbook instruction’s failure to account for the tremendous scientific advances regarding eyewitness identifications means that it fails to serve its primary purpose: protecting against the risk of wrongful convictions caused by misidentifications. The opinions on which the Redbook instruction is based acknowledged the need for the jury to be instructed on the factors affecting eyewitness identifications in order to avoid wrongful convictions and “the very real danger of mistaken identification as a threat to justice.” Telfaire, 469 F.2d at 555 (recognizing that the risk of a misidentification leading to the conviction of an innocent person weighed in favor of creating a model instruction and concluding that “[t]he presumption of innocence that safeguards the common law system must be a premise that is realized in instruction and not merely a promise”); Macklin, 409 F.2d at 177-78 (“[w]ithout doubt, conviction of the wrong man is the greatest single injustice that can arise in our system of criminal law” and an identification instruction “at least is a step in the right direction” towards limiting the risk of that injustice); see also Smith, 343 A.2d at 43 (“to lessen the chance of … a miscarriage of justice, the jury should be instructed on the importance of carefully evaluating the identification testimony and the circumstances surrounding the identification”).

Eyewitness misidentification is the leading contributor to wrongful convictions in the United States, and it contributed to four of the exonerations in D.C.[5] As the Court of Appeals recognized in 2009, “Every major study of wrongful convictions in the last decade has concluded that eyewitness misidentification is the most common cause of wrongful convictions in America. Of the first 200 DNA-based exonerations, 79% of the cases involved an eyewitness misidentification.” Benn v. United States (“Benn II”), 978 A.2d 1257, 1290 (D.C. 2009) (quoting Professor Cynthia E. Jones, The Right Remedy for the Wrongly Convicted: Judicial Sanctions for the Destruction of DNA Evidence, 77 Fordham L. Rev. 2893, 2929 (May 2009)). “The decisions of the Supreme Court and of this court recognizing the potential for misidentification when the accused is a stranger to the witness are grounded in reality. They provide the legal context in which judicial discretion must be exercised at the trial court level.” Id. Because the Redbook instruction is based on woefully outdated science, it fails to apprise the jury of issues that must be considered to properly evaluate identification testimony and therefore increases the risk of wrongful conviction.

For several years the Redbook Committee has acknowledged in the comments to the identification instruction that its members do not agree about what changes are necessary for the instruction. The Redbook Committee has effectively abdicated any responsibility for the instruction and explicitly left trial courts in charge of determining what identification instruction should be provided. The Court of Appeals has also recently indicated, in Corbin v. United States, 120 A.3d 588 (D.C. 2015), that trial courts have the discretion to either provide the current model instruction or to change the instruction to account for the widely accepted social science regarding eyewitness identifications. 

As a result of the inadequacies identified by the National Academy of Sciences and the Redbook Committee’s inaction, D.C.’s current instruction lags behind several other jurisdictions that have taken steps to respond to both the scientific consensus on the factors that affect reliability and the role misidentifications have played in wrongful convictions. New Jersey and Massachusetts have adopted instructions for use in every case. Alaska and Oregon have adopted revised standards for the admissibility of eyewitness identification testimony, and Alaska has instructed its Criminal Pattern Jury Instructions Committee to develop jury instructions consistent with the scientific principles it relied upon in revising its admissibility standard. In so doing, these jurisdictions have recognized the scientific consensus that exists surrounding certain factors that affect the reliability of eyewitness identifications. [CLIENT] respectfully submits that this Court should share the same concern regarding wrongful convictions, should modify the instruction based on the accepted science, and should use an identification instruction that is consistent with those other jurisdictions.

The proposed instructions are adapted from model instructions formulated in other jurisdictions where courts have carefully and thoroughly evaluated the decades of science underlying eyewitness identifications, as adduced through expert testimony and a multitude of peer-reviewed scientific studies, and concluded that “enhanced jury charges” that reflect the state of the science are necessary to help jurors weigh eyewitness identification evidence.  This Court should rely on the in-depth analyses conducted by the other jurisdictions and update the identification instruction accordingly.[6] 

In particular, the defense’s proposed instructions rely on the Massachusetts Model Jury Instructions on Eyewitness Identification, issued on November 16, 2015, in the wake of Com. v. Gomes, 22 N.E.3d 897 (Mass. 2015); on the predecessor New Jersey model jury instruction that followed State v. Henderson, 27 A.3d 872 (N.J. 2011); and on the principles espoused in the NAS Report, State v. Lawson, 291 P.3d 673 (Or. 2012), and State v. Guilbert, 49 A.3d 705 (Conn. 2012). The proposed instruction relies primarily on the Massachusetts instruction because that instruction is built on the NAS Report as well as the principled decisions of the highest courts of Oregon, Connecticut, and New Jersey.

The NAS Report

The NAS Report was authored by an ad hoc study committee charged with assessing the state of research on eyewitness identifications and making recommendations as appropriate. The committee consisted of federal judges (including multiple judges from the U.S. Court of Appeals for the District of Columbia), state court judges, prominent law and social science professors, members of the National Academy of Sciences, a police chief, a federal defender, and a district attorney. The committee “analyzed relevant published and unpublished research, external submissions, and presentations made by various experts and interested parties.” NAS Report at xiii. The preface of the report recognized:

Basic research has progressed for many decades, is of high quality, and is largely definitive. Research of this category identifies principled and insurmountable limits of vision and memory that inevitably affect eyewitness accounts, bear on conclusions regarding accuracy, and provide a broad foundation for the committee’s recommendations.

 

Id.

And as the D.C. Court of Appeals recently acknowledged (even before the landmark NAS Report was published), “we have learned much to cause us to reexamine our view that average lay persons serving as jurors are well equipped to call upon their common sense knowledge of the reliability of eyewitness identifications, even when aided by cross-examination, to assess the credibility of such testimony.”  Minor v. United States, 57 A.3d 406, 413-14 (D.C. 2012); see also In re L.C., 92 A.3d 290, 296 (D.C. 2014) (“this court [has] recognized that the insights of modern psychological research into the factors influencing eyewitness identifications are not matters of common knowledge or common sense and are, indeed, often counterintuitive.”)

Model Jury Instructions in Massachusetts and New Jersey

State v. Henderson, 27 A.3d 872 (N.J. 2011), is a groundbreaking opinion that largely adopted its court-appointed Special Master’s report extensively documenting the “vast body of scientific research about human memory” in the context of eyewitness identifications.  The Special Master “presided over a hearing that probed testimony by seven experts and produced more than 2,000 pages of transcripts along with hundreds of scientific studies” id. at 877; the research presented represented the “gold standard in terms of the applicability of social science research to the law,” as “[e]xperimental methods and findings have been tested and retested, subjected to scientific scrutiny through peer-reviewed journals, evaluated through the lens of meta-analyses, and replicated at times in real-world settings.”  Id. at 916. 

As a result of the Special Master’s findings, the New Jersey Supreme Court changed its standard for the admissibility of eyewitness identification evidence, id. at 919, and ordered revisions to the model jury instruction on eyewitness identifications that reflected the scientific testimony elicited by the Special Master.  Id. at 878 (“[W]e have asked the Criminal Practice Committee and the Committee on Model Criminal Jury Charges to draft proposed revisions to the current model charge on eyewitness identification and address various system and estimator variables.”)

Building on the work started by the New Jersey Special Master, the Justices of the Massachusetts Supreme Judicial Court (SJC) convened the Study Group on Eyewitness Identification in 2011, which issued its report in July 2013 (hereinafter “Study Group Report”). The Study Group’s members were judges, prosecutors, defenders, law enforcement personnel, and academics. The Study Group “reviewed key scientific research, law review articles, the emerging case law, statutes, and police practices nationwide, among other authorities.” Study Group Report at 2. The Study Group’s “scientifically grounded set of recommendations” were “geared towards reducing juror confusion and increasing judicial involvement in implementing procedures and remedies that reduce the risk of wrongful convictions.” Id. at 11. One of the key recommendations of the Study Group Report was a revision of the Massachusetts eyewitness identification jury instructions. The recommended instructions set out to describe how memory works, to instruct the jury on principles that “are generally accepted within the social science community; that is, the variables that are not substantially in dispute,” and to contain some instructions that would be given in every case and some that the trial judge should give or omit, depending on the evidence in the case. Id. at 55. With those principles in mind, the Study Group drafted revisions to the eyewitness identification jury instructions and submitted those instructions as a recommendation with its report.

After the Study Group Report issued, the SJC decided Commonwealth v. Gomes, 22 N.E.3d 897 (Mass. 2015), on January 12, 2015. In Gomes, the SJC issued a provisional eyewitness identification instruction based on the findings of the Study Group, “conclud[ing] that there are scientific principles regarding eyewitness identification that are ‘so generally accepted’ that it is appropriate in the future to instruct juries regarding these principles so that they may apply the principles in their evaluation of eyewitness identification evidence.” Gomes, 22 N.E.3d at 900.

Notably, the report of the Massachusetts Study Group on Eyewitness Identification[7] found inadequacies in the then-Massachusetts instruction that are likewise present in the current Redbook instruction.[8] The Study Group found that, though the instructions “enumerate several factors that jurors should consider in assessing identification testimony,” there were three deficiencies with the instructions. Ex. X at 54. First, the instructions failed to instruct the jury on some factors “that social scientists have proved can influence the accuracy of an identification, such as stress, the perpetrator’s use of a weapon, and the racial or ethnic difference between the perpetrator and the witness.” Id. Second, the instructions “fail to explain the nexus between identification and memory; that is, the jury is not informed that certain… variables influence memory at its different stages, and therefore affect the reliability of an identification.” Id.  Third, the instructions fail to instruct the jury on the ways in which these “variables can affect a witness’s memory and confidence”—which is important because research shows that jurors rely heavily on a witness’s expressed confidence. Id.

The NAS Report had been published by the time of the Gomes decision. The court in Gomes thus had the benefit of the NAS Report and cited it in some of its findings. The SJC then allowed for a comment period on the instructions and released a model instruction on November 16, 2015. That instruction incorporates many of the uncontroversial findings in the NAS Report, and it informed the attached proposed instruction.

Reforms in Oregon and Alaska

Following the lead of the New Jersey Supreme Court, the Supreme Court of Oregon in 2012 relied on the advances in scientific research underpinning eyewitness identifications to revise its legal test for the admissibility of eyewitness identification evidence and to adopt several additional procedures for determining the admissibility of such evidence.  State v. Lawson, 291 P.3d 673 (Or. 2012).  As in Henderson, the Lawson opinion engaged in extensive and useful discussion of the recent scientific research that establishes the factors known to affect the reliability of eyewitness identifications. The Lawson court appended a “summary of the scientific research and literature this court examined for these cases,” that cited to and explained the most prominent scientific research studies that outlined findings on each factor known to affect eyewitness reliability. Id. at 700.  Oregon courts have not yet incorporated these findings into jury instructions; however, courts do take judicial notice of these findings and can communicate them to jurors accordingly.

And most recently, Alaska’s Supreme Court, relying on the work of the New Jersey Special Master and the Massachusetts Study Group, altered its standard for admissibility of eyewitness identification evidence. Quoting the NAS Report, the Alaska Supreme Court stated that, “‘[t]he past few decades have seen an explosion of additional research that has led to important insights into how vision and memory work, what we see and remember best, and what causes these processes to fail.’” Young v. State, 374 P.3d 395, 414 (Alaska 2016) reh'g denied (July 19, 2016). The Alaska Supreme Court ruled that courts should consider factors, based on decades of scientific research, that could affect the reliability of identifications. The court also found that, “jury instructions specific to eyewitness identifications are necessary for the jury's proper understanding of the issue[,] … [m]any of the factors that affect reliability are counterintuitive and, therefore, not coterminous with common sense[, and] [t]hus while science has firmly established the inherent unreliability of human perception and memory” that reality is outside the understanding of the average juror. Id. at 428. The Court went on to “refer the issue of eyewitness-specific jury instructions to the Criminal Pattern Jury Instructions Committee and ask that it draft a model instruction appropriate for use in future cases, consistent with the principles we announce today.” Id. The Jury Instructions Committee has yet to issue a new instruction.

This Court’s Discretion to Instruct the Jury on Factors that Affect Identification

The Court of Appeals in Corbin recognized that the “trial court has broad discretion in formulating jury instructions.” Corbin v. United States, 120 A.3d 588, 606 (D.C. 2015) (internal quotations and citations omitted). The Court in Corbin held that the trial court in that case did not abuse its discretion in declining to give proposed jury instructions based on New Jersey’s jury instructions. In so holding, the court noted that “the defense counsel's proposed instructions cited to Henderson and several other cases in footnotes without any explanation or citation to the scientific studies cited in those cases. Instead, the instructions referred generally to the results of those studies[.]” Id. at 605. The trial court in Corbin did not have the benefit of the attached NAS Report, the attached Massachusetts Study Group Report, or the numerous studies explained in and attached to this motion. The attached studies are primarily meta-analyses. A meta-analysis is “a statistical technique for combining and contrasting the findings from independent studies[.]” Minor, 57 A.3d at 417. Further, the attached instruction draws mostly from the more recent Massachusetts instruction, developed after another round of study and influenced by the NAS Report.

The trial court in Corbin also expressed its preference to “stick with the Redbook.” Id. at 607. Though revisions to the model Redbook instructions are traditionally made through the Redbook Committee, the Committee recently indicated that it will make no substantive revisions to the identification instruction. The 2016 Redbook release includes an updated comment on the identification instruction: “Because the Committee remains unable to agree about whether, and under what circumstances, additional instruction is necessary, the Committee believes that if any significant changes to the instruction are to be made those changes are best decided outside of this Committee through litigation.” When Corbin was decided, the trial court reasonably could have thought that any change to the Redbook model instruction would be made through the Committee. That is no longer the case. [CLIENT] should not be forced to go to trial with an inadequate instruction that the Redbook Committee will not endorse but also will not change.

It is unnecessary here for this Court to reinvent the wheel in instructing the jury in a manner that better guards against wrongful convictions based on mis-identifications. As Connecticut’s highest court observed of factors contributing to misidentification: “this broad based judicial recognition tracks a near perfect scientific consensus.  The extensive and comprehensive scientific research, as reflected in hundreds of peer reviewed studies and meta-analyses, convincingly demonstrates the fallibility of eyewitness identification testimony and pinpoints an array of variables that are most likely to lead to a mistaken identification.”  State v. Guilbert, 49 A.3d 705, 720-21 (Conn. 2012) (emphasis added). 

 The courts of New Jersey and Massachusetts have already appointed, respectively, a Special Master and a Study Group to review the research, and both have concluded that there is a consensus on the relevance of particular factors.[9] It is efficient and appropriate for this Court to rely on those findings, as other courts have.[10] It would be unnecessarily expensive and time-consuming for defendants, courts, and juries to require expert testimony in every case involving an eyewitness identification, even though the factors identified in the attached instruction are well-established. It is unnecessary and overly burdensome to require the defense to call an expert in every such case to address factors that, in the words of the Connecticut Supreme Court, have reached “a near perfect scientific consensus.”

The Redbook instruction is read in every case in which there is an identification, not just in cases in which expert testimony is presented, because “identification testimony presents special problems of reliability.” Telfaire, 469 F.2d at 555. If the Court is to read identification instructions in every case, whether or not expert testimony is presented, it should read instructions that tell the jury about factors on which researchers have reached a near perfect consensus, instead of the outdated, incomplete, and overly vague Redbook instruction. The current instruction tells the jury to consider some factors related to reliability without any guidance on how to weigh them, and it omits entirely some factors for which other courts have recognized a consensus. Because the Redbook Committee has expressed its unwillingness to change the instruction, this Court should exercise its discretion to give the new, updated instruction proposed by the defense rather than continue to give an instruction based on woefully out-of-date science that puts [CLIENT] at risk of being wrongly convicted.

AUTHORITY FOR PROPOSED INSTRUCTIONS

The following memorandum accompanies the proposed defense jury instructions. The attached instructions are outlined as follows:

Introduction (including description of human memory, the research on which is outlined below at pages X-X)

  1. Opportunity to View the Event
    1. Characteristics of the Witness (including stress, the research on which is outlined below at pages X-X)
    2. Duration (pages X-X)
    3. Weapon Focus (pages X-X)
    4. Distance (pages X-X)
    5. Lighting
    6. Disguise/Changed Appearance (pages X-X)
    7. Alcohol (pages X-X)
  2. Cross-racial Identification (pages X-X)
  3. Passage of Time (pages X-X)
  4. Expressed Certainty (pages X-X)
  5. Exposure to Outside Information (pages X-X)
  6. Identification Procedures (pages X-X)
  7. Failure to Identify or Inconsistent Identification (pages X-X)

Conclusion

 

Human Memory

Redbook Instruction on Human Memory:  None

Proposed Instruction: People have the ability to recognize other people from past experiences and to identify them at a later time, but research has shown that there are risks of making mistaken identifications. That research has focused on the nature of memory and the factors that affect the reliability of eyewitness identifications.

 

            The mind does not work like a video recorder.  A person cannot just replay a mental recording to remember what happened.  Memory and perception are much more complicated.   Remembering something requires three steps.  First, a person sees an event.  Second, the person's mind stores information about the event.  Third, the person recalls stored information.  At each of these stages, a variety of factors may affect -- or even alter -- a person’s memory of what happened and thereby affect the accuracy of a later identification. This can happen without the person being aware of it.

 

            This instruction is adopted from the Massachusetts instruction. A fundamental fact about human memory, accepted by researchers but often not appreciated by lay people, is that memory is not like a videotape. “Research has shown that human memory is not like a video recording that a witness need only replay to remember what happened. Memory is far more complex.” Com. v. Gomes, 22 N.E.3d 897, 919 (2015). The NAS Report recognized that memory is not like a pristinely preserved video recording: “Human vision does not capture a perfect, error-free trace of a witnessed event… . The recognition of one person by another—a seemingly commonplace and unremarkable everyday occurrence—involves complex processes that are limited by noise and subject to many extraneous influences.” NAS Report at 15. The NAS Report additionally described the three stages of remembering and their susceptibility to influence:

Like vision, memory is also beset by noise. Encoding, storage, and remembering are not passive, static processes that record, retain, and divulge their contents in an informational vacuum, unaffected by outside influences. The contents cannot be treated as a veridical permanent record, like photographs stored in a safe. On the contrary, the fidelity of our memories for real events may be compromised by many factors at all stages of processing, from encoding through storage, to the final stages of retrieval. Without awareness, we regularly encode events in a biased manner and subsequently forget, reconstruct, update, and distort the things we believe to be true.

 

Id. at 59-60.

The D.C. Court of Appeals recognized this point in Benn II, admonishing that while “most potential jurors believe that a person’s memory functions like a camera, capable of retrieving a captured image on demand,” memory in fact “is influenced by a variety of factors[.]”  Benn II, 978 A.2d at 1267 n.26.

     The proposed instruction, which addresses these basic precepts and notes that human memory is not foolproof, will counter common but incorrect intuitions that jurors hold about how recall works, and will aid the jury in more accurately weighing eyewitness testimony. See Henderson, 27 A.3d at 895 (“Science has proven that memory is malleable. The body of eyewitness identification research further reveals that an array of variables can affect and dilute memory and lead to misidentifications.”); NAS Report at 119 (“As this report indicates, however, the malleable nature of human visual perception, memory, and confidence; the imperfect ability to recognize individuals; and policies governing law enforcement procedures can result in mistaken identifications with significant consequences.”). The proposed instruction is in line with the way the NAS Report describes the operation of memory. It gives jurors information of which they would not otherwise be aware, as the D.C. Court of Appeals recognized in Benn II, and information that the current Redbook identification instruction does not address at all.

Stress

Redbook Instruction on Stress: None

Proposed Instruction on Stress:

1) a) Characteristics of the Witness:  You should consider the physical and mental characteristics of the witness when the observation was made.  For example, how good was the witness's eyesight?  Was the witness experiencing illness, injury, or fatigue?  Was the witness under a high level of stress?  High levels of stress may reduce a person's ability to make an accurate identification.

 

            This instruction is adopted from the Massachusetts instruction. The science has widely established that a high level of eyewitness stress is an important factor in assessing the reliability of an eyewitness identification. The NAS Report found that “[u]nder conditions of high stress, a witness’ ability to identify key characteristics of an individual’s face (e.g., hair length, hair color, eye color, shape of face, presence of facial hair) may be significantly impaired.” NAS Report at 94 (citing C. A. Morgan III et al., “Misinformation Can Influence Memory for Recently Experienced, Highly Stressful Events,” International Journal of Law and Psychiatry 36(1): 11–17 (2013)). The Massachusetts Study Group Report likewise concluded that “[h]igh levels of stress or fear can have a negative impact on a witness’s ability to make accurate identifications.” Study Group Report at 59.

The 2013 Morgan study cited by the NAS Report, which found that when a witness experiences conditions of high stress, key characteristics of an individual, like hair length and color and facial features, can be harder to identify, is attached as Exhibit X, along with the two most prominent studies on stress and eyewitness identification. The first of these, attached as Exhibit X, is a meta-analysis: Kenneth A. Deffenbacher, Brian H. Bornstein, Steven D. Penrod, and E. Kiernan McGorty, A Meta-Analytic Review of the Effects of High Stress on Eyewitness Memory, 28 L. & Hum. Behav. 687, 699 (2004). This meta-analysis was cited by the Study Group Report at pages 29 and 60, the NAS Report at page 94, Henderson, 27 A.3d at 904, and Lawson, 291 P.3d at 700. The Study Group Report described the results of the study: “A meta analysis of 27 independent studies conducted on the effects of stress on identification accuracy showed that, while 59 percent of the 1,727 participants correctly identified the target individual in a target-present lineup after a low-stress encounter, only 39 percent did so after high-stress encounters.” Ex. X at 59-60. The researchers wrote that the conclusions “regarding the negative effect of stress on eyewitness identification accuracy” and “that heightened stress debilitates eyewitness recall” were “safe” conclusions. Ex. X at 700.

The second study, attached as Exhibit X, examined the effects of high stress on military survival school participants: Charles A. Morgan, Gary Hazlett, Anthony Doran, Stephan Garrett, Gary Hoyt, Paul Thomas, Madelon Baranoski, and Steven M. Southwick, Accuracy of Eyewitness Memory for Persons Encountered during Exposure to Highly Intense Stress, 27 International Journal of Law and Psychiatry 265-279 (2004). The court in Lawson explained the findings:

Military survival school participants were subjected to two 40–minute interrogations, each by different interrogators, following a 12–hour period of confinement without food and sleep in a mock prisoner of war camp. One interrogation was conducted under high-stress conditions, involving physical confrontation, while the other was conducted under low-stress conditions, involving only deceptive questioning. When asked the next day to identify their interrogators, only 30 percent of the participants correctly identified their high-stress interrogator, while 60 percent correctly identified their low-stress interrogator. The study also noted an associated increase in false identifications—56 percent of the participants falsely identified another person as their high-stress interrogator, compared to 38 percent who did so with regard to their low-stress interrogator.

 

Lawson, 291 P.3d at 700-01 (internal citations omitted).

 

In Gomes, the Massachusetts court found that the fact that “high levels of stress can reduce an eyewitness’s ability to make an identification” was one of five recognized principles that were generally accepted in the scientific community. Gomes, 22 N.E.3d at 913. The highest courts in Connecticut and Oregon have similarly accepted the scientific consensus that high stress at the time of observation may adversely affect an eyewitness’s perception and memory, and thus the ability to make an accurate identification. Lawson, 291 P.3d at 687; Guilbert, 49 A.3d at 722 n.14 (citing United States v. Downing, 753 F.2d 1224, 1231 (3d Cir. 1985); United States v. Smith, 621 F. Supp. 2d 1207, 1216 (M.D. Ala. 2009); State v. Chapple, 135 Ariz. 281, 294, 660 P.2d 1208 (1983); Brodes v. State, 279 Ga. 435, 438, 614 S.E.2d 766 (2005); People v. Young, 7 N.Y.3d 40, 43, 850 N.E.2d 623, 817 N.Y.S.2d 576 (2006); State v. Bradley, 181 Ohio App.3d 40, 44, 907 N.E.2d 1205, appeal denied, 122 Ohio St.3d 1480, 910 N.E.2d 478 (2009)).

            In Benn II, and again in Minor, the D.C. Court of Appeals noted that while the “average juror is likely to believe that witnesses remember the details of violent events better than nonviolent ones,” the scientific research in fact establishes that the “opposite is true.” Minor, 57 A.3d at 415 (citing Benn II, 978 A.2d at 1268 & n.36). See also People v. Campbell, 847 P.2d 228, 233 (Colo. App. 1992) (“[C]ontrary to average juror expectations, stress actually decreases rather than increases accuracy of perception. . . .”); United States v. Burton, 1998 U.S. Dist. LEXIS 18730, at *34 (E.D. Tenn. Oct. 8, 1998) (“The results of the psychological research [on stress and accuracy] could well be counter intuitive to a juror, i.e., ‘common sense’ might tell one that if a weapon is present, the recollection of the event, and the face of the perpetrator, would somehow be forever etched in the mind of the witness (known as the ‘flashbulb effect’).”).

            A jury instruction that reflects the prevailing legal and scientific consensus on the relationship between high levels of stress and inaccurate identifications will thus aid jurors by giving them the proper tools to evaluate such identifications.

Exposure Duration (Length of Encounter)

Redbook Instruction on Exposure Duration (Length of Encounter):

A number of factors may affect the accuracy of an identification of the defendant by an alleged eyewitness.

  1. The witness’s opportunity to observe the criminal acts and the person committing them, including but not limited to, the length of the encounter… .

 

Proposed Instruction on Exposure Duration (Length of Encounter):

 

1) b) Duration.  The amount of time an eyewitness has to observe an event may affect the reliability of an identification. Although there is no minimum time required to make an accurate identification, a brief or fleeting contact is less likely to produce an accurate identification than a more prolonged exposure to the perpetrator. In addition, time estimates given by witnesses may not always be accurate because witnesses tend to think events lasted longer than they actually did.

 

            This instruction is adopted from the New Jersey instruction. Research shows that witnesses are often inaccurate in their estimates of the length of encounters. Elizabeth Loftus, Jonathan W. Schooler, Stanley M. Boone, and Donald Kline, Time Went by so Slowly: Overestimation of Event Duration by Males and Females, 1 Applied Cognitive Psychology 3-13 (1987), attached as Exhibit X, has been cited to explain witness time estimation by the Study Group Report at pages 28 and 63; by Lawson, 291 P.3d at 702; and by Henderson, 27 A.3d at 905. The court in Lawson explained the Loftus study: “Studies also show that witnesses consistently and significantly overestimate short durations of time (generally, durations of 20 minutes or less), especially during highly stimulating, stressful, or unfamiliar events.” Lawson, 291 P.3d at 702. The Loftus study summarizes research finding that witnesses routinely overestimated the length of crimes, and then describes the study’s three experiments, in which participants watched videos of simulated crimes and were asked to estimate the length of the simulated crimes. Ex. X at 3-5. In discussing the results, the study found that “[t]aken together, the three experiments show pervasive overestimation of the duration of the videotape.” Id. at 10.

            Research shows that a shorter encounter is less likely to yield an accurate identification than a longer one. Brian Bornstein, Kenneth Deffenbacher, Steven Penrod, and E. Kiernan McGorty, Effects of Exposure Time and Cognitive Operations on Facial Identification Accuracy: A Meta-Analysis of Two Variables Associated with Initial Memory Strength, 18(5) Psychology, Crime, and Law 473-490 (2012), attached as Exhibit X, is cited by the NAS Report at 97-98; the Study Group Report at 28; and Lawson, 291 P.3d at 702. The NAS Report cites the Bornstein study in stating that “[m]eta-analyses on the effects of exposure time have found that relatively long exposure durations produce greater accuracy[.]”

[IF TOTAL DURATION OF EVENT WAS LESS THAN 30 SECONDS: And the Lawson court wrote of the Bornstein study,

Scientific studies indicate that longer durations of exposure (time spent looking at the perpetrator) generally result in more accurate identifications. [The Bornstein] meta-analysis shows that the beneficial effect of longer exposure time on accuracy is greatest between the shortest durations, up to approximately 30 seconds. In contrast, for durations over 30 seconds, only substantial increases in exposure time produced marked improvement in witness performance.

 

Lawson, 291 P.3d at 702 (internal citations omitted).]

Though “length of the encounter” is currently a factor contained in the existing D.C. jury instruction for jurors to consider in evaluating eyewitness evidence, the current language does not reflect the prevailing scientific understanding of how exposure duration affects the reliability of eyewitness identifications. With respect to violent crimes, as the D.C. Court of Appeals has acknowledged in citing the scientific research on exposure duration and the accuracy of identifications, “witnesses most often think that the incident lasted longer than it did. . . . In other words, there is less time for the exposure-duration effect to increase the accuracy of an eyewitness’s identification than a lay person might otherwise assume.” Minor, 57 A.3d at 415 (citing Benn II, 978 A.2d at 1268 & n.37 and studies contained therein).

            The model instruction adopted by the New Jersey Supreme Court integrates this scientific research by noting that time estimates given by witnesses may not always be accurate because witnesses tend to think that events lasted longer than they actually did. See Henderson, 27 A.3d at 905 (“studies have shown, and the Special Master found, ‘that witnesses consistently tend to overestimate short durations, particularly where much was going on or the event was particularly stressful.’”). The instruction also builds on the current language in the D.C. instruction to incorporate the studies establishing that “a brief or fleeting contact is less likely to produce an accurate identification than a more prolonged exposure.” Id. at 905 (citing research). As the New Jersey Supreme Court recognized, an instruction on the actual effects that the time interval for viewing the perpetrator may have on the witness’s ability to make an accurate identification will guide the jury in more effectively weighing that witness’s testimony.

            The proposed instruction would better instruct jurors, based on established research and judicial recognition of that research, on how to consider testimony regarding duration.

Weapon Focus

Redbook Instruction on Weapon Focus: None.

Proposed Instruction on Weapon Focus:

1) c) Weapon Focus. You should consider whether the witness saw a weapon during the event.  If the event is of short duration, the visible presence of a weapon may distract the witness's attention away from the person's face.  But the longer the event, the more time the witness may have to get used to the presence of a weapon and focus on the person's face.

 

            This instruction is adopted from the Massachusetts instruction. “When a visible weapon is used during a crime, it can distract a witness and draw his or her attention away from the culprit. ‘Weapon focus’ can thus impair a witness’s ability to make a reliable identification and describe what the culprit looks like if the crime is of short duration.” Henderson, 27 A.3d at 904-905. Discussing the numerous studies and testimony before the Special Master, the Henderson court found that “when the interaction is brief, the presence of a visible weapon can affect the reliability of an identification and the accuracy of a witness’ description of the perpetrator” and noted that “the longer the duration, the more time the witness has to adapt to the presence of a weapon and focus on other details.” Id. at 905. See also Benn II, 978 A.2d at 1267 n.26 (citing studies showing that memory “is influenced by a variety of factors such as stress, including the stress induced by the presence of a weapon.”). The NAS Report explained weapon focus as “a real-world case in point for eyewitness identification, in which attention is compellingly drawn to emotionally laden stimuli, such as a gun or a knife, at the expense of acquiring greater visual information about the face of the perpetrator[.]” NAS Report at 55.

The Henderson court and the NAS Report both cited an influential meta-analysis involving over 2,000 identifications, Nancy M. Steblay, A Meta–Analytic Review of the Weapon Focus Effect, 16 Law & Hum. Behav. 413, 415–17 (1992), attached as Exhibit X. Dr. Steblay’s meta-analysis is also cited by the NAS Report at 93; the Study Group Report at 29; and Lawson, 291 P.3d at 702. Dr. Steblay explained, “Weapon focus refers to the visual attention that eyewitnesses give to a perpetrator’s weapon during the course of a crime. It is expected that the weapon will draw central attention, thus decreasing the ability of the eyewitness to adequately encode and later recall peripheral details.” Ex. X at 414. Dr. Steblay discussed the results of her extensive analysis: “The weapon focus effect has been found to be relatively robust across variations in stimulus presentation, experimental scenario, and experimenter and subject variables.” Id. at 421.

Notwithstanding the fact that this phenomenon is well-documented and accepted by courts, jurors often believe that the opposite is true—that the presence of a weapon can in fact focus the witness’s attention on the perpetrator. See United States v. Norwood, 939 F. Supp. 1132, 1137 (D.N.J. 1996) (“[W]hile scientific studies have consistently found that witness identifications are notably less accurate when a weapon was present, studies also reveal that many lay people erroneously believe the opposite to be true.”); People v. Allen, 875 N.E.2d 1221, 1232 (Ill. App. Ct. 2007) (“[R]easonable people well might believe an eyewitness will be more accurate when faced with a weapon . . . .”). And not only do laypeople believe the exact converse of the conclusions of the scientific literature on weapon focus, but most jurors remain generally unaware of the extent to which the presence of a weapon can disrupt or distort eyewitness perception or memory. See, e.g., United States v. Mathis, 264 F.3d 321, 342 (3d Cir. 2001) (holding that “the degree and scope of memory distortion that . . . a weapon typically causes for eyewitnesses are not matters that would necessarily be apparent to jurors,” and “it is difficult to comprehend how weapons’ destructive effect on memory might be elucidated through cross-examination”); United States v. Lester, 254 F. Supp. 2d 602, 612 (E.D. Va. 2003) (finding the phenomenon of weapon focus to “fall outside the common sense of the average juror”).

The proposed instruction informs jurors of an effect of which they would otherwise not be aware—and contains proper limiting language, noting that the effect is lessened for longer encounters. This limiting language is present in both the New Jersey and Massachusetts instructions and the proposed instruction is taken from the Massachusetts model instruction.

Distance

Redbook Instruction on Distance:

A number of factors may affect the accuracy of an identification of the defendant by an alleged eyewitness.

  1. The witness’s opportunity to observe the criminal acts and the person committing them, including but not limited to… the distance between the various parties… .

 

Proposed Instruction on Distance:

 

1) d) Distance. A person is easier to identify when close by. The greater the distance between an eyewitness and a perpetrator, the higher the risk of a mistaken identification. In addition, a witness’s estimate of how far he or she was from the perpetrator may not always be accurate because people tend to have difficulty estimating distances.

 

This instruction is adopted from the New Jersey instruction. “Research has shown that the physical distance between the witness and the perpetrator is an important estimator variable, as it directly affects the ability of the eyewitness to discern visual details, including features of the perpetrator.” NAS Report at 92. The NAS Report cites Carla L. Maclean, C.A. Elizabeth Brimacombe, Meredith Allison, Leora C. Dahl, and Helena Kadlec, Post-Identification Feedback Effects: Investigators and Evaluators, 25(5) Applied Cognitive Psychology 739–752 (2011), which is attached as Exhibit X. The Maclean study seated participants closer to or farther from the screen on which they observed a video of a staged crime. Ex. X at 742. The “witnesses with a poor view reported that they had an inferior quality of view of the man who [committed the staged crime] and had a worse ability to make out the features of the culprit’s face.” Id. at 744.

The Redbook instruction informs the jury that it should consider the distance between the witness and the perpetrator but does not tell the jury how to consider that distance. The proposed instruction is taken from the model New Jersey instruction and better instructs the jury on how it should consider an eyewitness’s testimony regarding distance. The court in Henderson cites R.C.L. Lindsay, Carolyn Semmler, Nathan Weber, Neil Brewer, and Marilyn Lindsay, How Variations in Distance Affect Eyewitness Reports and Identification Accuracy, 32 Law & Hum. Behav. 526 (2008), attached as Exhibit X, stating, “Research has also shown that people have difficulty estimating distances.” Henderson, 27 A.3d at 906. The Lindsay study noted that “[p]revious research demonstrates that people have difficulty judging the distance between themselves and inanimate objects and between two inanimate objects” and sought to determine whether the same was true for estimating the distance between oneself and another person. Ex. X at 533. To do this, the researchers had participants view targets at various distances and later asked the participants to make identifications and to estimate the distance between themselves and the targets. Id. at 526. The researchers found that the difficulties in judging distance between inanimate objects were also present when estimating the distance between oneself and another person, and that judgment errors were often “substantial.” Id. at 528. The researchers further found that “[a]ccuracy of witness identification decisions was significantly influenced by the distance between the witness and the target at the time of exposure.” Id. at 533.

The proposed instruction tells the jury how to consider the effect that distance can have on identification, as described in the NAS Report. The instruction further informs jurors of the mistakes that people make in estimating distances. It goes further than the Redbook instruction by giving jurors a set of considerations for evaluating distance, instead of merely telling them that distance should be considered.

Wearing Items that Obscure Appearance

 

Redbook Instruction on Wearing Items that Obscure Appearance: None

 

Proposed Instruction on Wearing Items that Obscure Appearance:

 

1) f) Wearing Items that Obscure Appearance: If the perpetrator wears items that obscure part of his or her appearance, the wearing of those items can affect a witness’s ability both to remember and identify the perpetrator. Items like hats, sunglasses, or masks can reduce the accuracy of an identification.

 

            This instruction was adapted from the New Jersey instruction. “[T]he hair and hairline have been found to be important cues for identification accuracy.” Brian L. Cutler, Ph.D., A Sample of Witness, Crime, and Perpetrator Characteristics Affecting Eyewitness Identification Accuracy, 4 Cardozo Pub. L. Pol'y & Ethics J. 327, 332 (2006), attached as Exhibit X. Dr. Cutler’s article examined data from six studies in which masking of the hair and hairline was manipulated. The studies showed that “[i]n data from over 1300 eyewitnesses, the percentage of correct judgments on identification tests was lower among eyewitnesses who viewed perpetrators wearing hats (44%) than among eyewitnesses who viewed perpetrators whose hair and hairlines were visible (57%).” Id. The trend was “present in each study” and “not qualified by type of lineup[.]”

            Gomes, Lawson, and Henderson all recognized the impact of disguises on one’s ability to make an identification. See Gomes, 22 N.E.3d at 920 n.5; Lawson, 291 P.3d at 703 (“studies show that hats, hoods, and other items that conceal a perpetrator's hair or hairline also impair a witness's ability to make an accurate identification.”); Henderson, 27 A.3d at 907 (“Disguises and changes in facial features can affect a witness' ability to remember and identify a perpetrator.”). The court in Henderson further recognized research finding that jurors are unaware of the effect of a disguise on the ability to make an identification. Henderson, 27 A.3d at 911 (citing a study conducted by Dr. Cutler, attached as Exhibit X, Juror Sensitivity to Eyewitness Identification Evidence, 14 Law & Hum. Behav. 185, 186-87 (1990), in which potential jurors evaluated testimony from a witness who testified either that the robber wore a hat that completely covered his head or that he wore no hat at all, and which found that jurors were “insensitive to the effects of disguise”). The proposed instruction serves to inform jurors of the effects that a disguise can have on the ability to make an identification and to remember a face.

Alcohol

 

Redbook Instruction on Alcohol: None

 

Proposed Instruction on Alcohol:

 

1) g) Alcohol: The influence of alcohol can affect the reliability of an identification. An identification made by a witness under the influence of a high level of alcohol at the time of the incident tends to be more unreliable than an identification by a witness who drank a small amount of alcohol or no alcohol.

 

            This instruction is adopted from the New Jersey instruction. Research shows that alcohol consumption can impair a witness’s ability to make a correct identification. As the Special Master in Henderson found, “the effects of alcohol on identification accuracy show that high levels of alcohol promote false identifications and… low alcohol intake produces fewer misidentifications than high alcohol intake. … That finding is undisputed.” Henderson, 27 A.3d at 906 (internal quotations omitted). Attached as Exhibit X is an article detailing research on which Henderson and Lawson both relied: Jennifer E. Dysart et al., The Intoxicated Witness: Effects of Alcohol on Identification Accuracy from Showups, 87 J. Applied Psychol. 170, 174 (2002). The Dysart study found that “intoxicated participants were more likely than sober participants to make a false identification from a target-absent showup.” Id. at 174. Further, in summarizing other research, Dysart noted that “intoxication while witnessing an event was associated with a lower rate of correct identifications” when the level of arousal was low. Id. at 171. The court in Lawson similarly recognized that “intoxicated witnesses are more likely to misidentify an innocent suspect than their sober counterparts.” Lawson, 291 P.3d at 703.

Cross-Racial Identification

Redbook Instruction on Cross-Racial Identification: None.

Proposed Instruction on Cross-Racial Identification:

  1. Cross-Racial Identification. If the witness and the person identified appear to be of different races (or ethnicities), you should consider that people may have greater difficulty in accurately identifying someone of a different race (or ethnicity) than someone of their own race (or ethnicity).

 

This instruction is adopted from the Massachusetts instruction. Witnesses are better at identifying members of their own race than members of other races, a phenomenon referred to as “own-race bias.” As the NAS Report explained, “[o]wn-race bias occurs in both visual discrimination and memory tasks, in laboratory and field studies, and across a range of races, ethnicities, and ages. Recent analyses revealed that cross-racial (mis) identification was present in 42 percent of the cases in which an erroneous eyewitness identification was made.” NAS Report at 96 (citing The Innocence Project, “What Wrongful Convictions Teach Us About Racial Inequality,” available at: http://www.innocenceproject.org/Content/What_Wrongful_Convictions_Teach_Us_About_Racial_Inequality.php). Further, “the existence of own-race bias is generally accepted[.]” Id.

The leading study on cross-racial identification, Christian A. Meissner and John C. Brigham, Thirty Years of Investigating the Own-Race Bias in Memory for Faces: A Meta-Analytic Review, 7 Psychol. Pub. Pol'y & L. 3-35 (2001), is attached as Exhibit X. The Meissner study is a meta-analysis that analyzed data from 39 research articles, involving 91 independent samples and nearly 5,000 participants. The study is cited by the NAS Report at 96; the Study Group Report at 31, 66; Lawson, 291 P.3d at 703; and Henderson, 27 A.3d at 907. The meta-analysis found that “[p]articipants were 1.56 times more likely to falsely identify a novel other-race face when compared with performance on own-race faces” but “participants were 2.23 times more likely to accurately discriminate an own-race face as new versus old when compared with performance on other-race faces.” Ex. X at 15, 16.

“Despite widespread acceptance of the cross-racial identification effect in the scientific community, fewer than half of jurors surveyed understand the impact of that factor.” Lawson, 291 P.3d at 703-704 (citing Richard S. Schmechel et al., Beyond the Ken? Testing Jurors’ Understanding of Eyewitness Reliability Evidence, 46 Jurimetrics 177, 200 (2006)). The Connecticut Supreme Court found a consensus in the judiciary as well: “Courts across the country now accept that… cross-racial identifications are considerably less accurate than same race identifications.” Guilbert, 49 A.3d at 721-22.

            The Massachusetts Supreme Judicial Court held that, because there is a “near consensus in the relevant scientific community” that “research has shown that people of all races may have greater difficulty in accurately identifying members of a different race than they do in identifying members of their own race,” the model cross-racial identification instruction “must be given in trials… where there is a cross-racial identification.” Com. v. Bastaldo, 32 N.E.3d 873, 880 (Mass. 2015). The SJC further held that “we shall direct that a cross-racial instruction be given unless all parties agree that there was no cross-racial identification.” Id. at 883.

The proposed instruction reflects the scientific and judicial consensus by informing the jury that if people are of different races, the identification may be more difficult. Given the role that cross-racial identification has played in wrongful convictions, it is critical that this instruction be given.

Passage of Time

Redbook Instruction on Passage of Time:

A number of factors may affect the accuracy of an identification of the defendant by an alleged eyewitness. …

  1. Any subsequent identification and the circumstances surrounding that identification, including the length of time that elapsed between the crime and the identification… .

 

Proposed Instruction on Passage of Time:

 

  1. Passage of time. You should consider how much time passed between the event observed and the identification. Generally, memory is most accurate immediately after the event and begins to fade soon thereafter.

 

This instruction is adopted from the Massachusetts instruction. The more time that passes between the crime and the identification, the weaker a witness’s memory. “As time passes, memories become less stable.” NAS Report at 15. The NAS Report further explained,

Retention interval, or the amount of time that passes from initial observation and encoding of a memory to a future time when the initial observation must be recalled from memory, can affect identification accuracy. Laboratory studies have demonstrated that stored memories are more likely to be forgotten with the increasing passage of time and can easily become “enhanced” or distorted by events that take place in this retention interval…. The amount of time between viewing a crime and the subsequent identification procedure can be expected to similarly affect the accuracy of the eyewitness identification, either independently or in combination with other variables.

 

Id. at 98.

 

The leading, extensive meta-analysis of the effect of the retention interval on facial memory, Kenneth A. Deffenbacher, Brian H. Bornstein, and E. Kiernan McGorty, Forgetting the Once-Seen Face: Estimating the Strength of an Eyewitness's Memory Representation, 14 J. Experimental Psychol.: Applied 139-150 (2008), is attached as Exhibit X and is cited by the NAS Report at 98; the Study Group Report at 32, 70; Lawson, 291 P.3d at 705; and Henderson, 27 A.3d at 907. The Deffenbacher meta-analysis examined the results of 53 studies and found that “increased delay of a test for recognition memory for the once-seen face portends decreased probability of correct recognition judgments.” Ex. X at 142. The results of the meta-analysis “confirm that there is indeed a statistically reliable association between longer retention intervals and decreased face recognition memory… . That is, there is an increase in positive forgetting as the delay increases between encoding of a face and test of one’s memory for it.” Id. at 148.

 Henderson, relying on the findings of the Special Master, also stated that “[m]emories fade with time,” that “memory decay is ‘irreversible,’” and that “delays between the commission of a crime and the time an identification is made can affect reliability,” noting that “[t]hat basic principle is not in dispute.” 27 A.3d at 907. See also Guilbert, 49 A.3d at 721-22 (“Courts across the country now accept that… a person’s memory diminishes rapidly over a period of hours rather than days or weeks.”) (citing State v. Chapple, 660 P.2d 1208 (Ariz. 1983); Commonwealth v. Christie, 98 S.W.3d 485, 490 (Ky. 2002); Henderson, 27 A.3d 872).

The current Redbook instruction states that the jury should consider “the length of time that elapsed between the crime and the identification” but gives the jury no guidance on how to consider the length of time. The proposed instruction better reflects the judicial and scientific consensus on the impact of passage of time on memory.

Confidence and Accuracy/Post-Event Information

Redbook Instruction on Confidence and Accuracy/Post-Event Information:

A number of factors may affect the accuracy of an identification of the defendant by an alleged eyewitness.

  1. Any subsequent identification and the circumstances surrounding that identification, including… suggestive circumstances that may have influenced the witness, [and any statements or actions by law enforcement officers concerning the identification]… .

 

Proposed Instruction on Confidence and Accuracy/Post-Event Information:

 

  1. Expressed certainty. You should consider that a witness’s statement of how certain he/she is in an identification, standing alone, is generally not a reliable indicator of the accuracy of the identification, especially where the witness did not describe that level of certainty when the witness first made the identification.

 

  1. Exposure to outside information. You should consider that the accuracy of identification testimony may be affected by information that the witness received between the event and the identification, or received after the identification.   Such information may include identifications made by other witnesses, physical descriptions given by other witnesses, photographs or media accounts, or any other information that may affect the independence or accuracy of a witness's identification.   Exposure to such information before or after the witness makes an identification not only may affect the accuracy of an identification, but also may affect the witness's certainty in the identification and the witness's memory about the quality of his or her opportunity to view the event. The witness may not realize that his or her memory has been affected by this information.

 

An identification made after suggestive conduct by the police or others should be scrutinized with great care. Suggestive conduct may include anything that a person says or does prior to, during, or after an identification procedure that might influence the witness to identify a particular individual.  Suggestive conduct need not be intentional, and the person doing the "suggesting" may not realize that he or she is doing anything suggestive.

 

            These instructions are adopted from the Massachusetts instruction. Though the scientific research has established, and the courts have acknowledged, that “a witness’s level of confidence [at trial], standing alone, may not be an indication of the reliability of the identification,” Henderson, 27 A.3d at 899, eyewitness confidence at trial has an extremely powerful effect on jurors. See id. at 911 (citing research showing that eyewitness confidence is “the most powerful predictor of verdicts regardless of other variables”) (internal citations and quotations omitted); Gomes, 22 N.E.3d at 913 (“there is a near consensus that jurors tend to give more weight to a witness’s certainty in evaluating the accuracy of an identification than is warranted by the research.”) (internal citations and quotations omitted); Lawson, 291 P.3d at 704 (“Despite widespread reliance by judges and juries on the certainty of an eyewitness’s identification, studies show that, under most circumstances, witness confidence or certainty is not a good indicator of identification accuracy.”); Perry v. New Hampshire, 132 S. Ct. 716, 739, 181 L. Ed. 2d 694 (2012) (Sotomayor, J., dissenting) (“Study after study demonstrates that… jurors place the greatest weight on eyewitness confidence in assessing identifications even though confidence is a poor gauge of accuracy; and that suggestiveness can stem from sources beyond police-orchestrated procedures.”).

In Benn II, and again in Minor, the D.C. Court of Appeals cited studies that “concluded that jurors believe that the more confident a witness seems, the more accurate that witness’s testimony will be,”[11] but that the “correlation between a witness’s expression of certainty in an identification and its accuracy is, at a minimum, greatly overstated, and perhaps unwarranted.”  Minor, 57 A.3d at 414; Benn II, 978 A.2d at 1268, 1277 (“with respect to the correlation between confidence and accuracy of an identification . . . the scientific research findings are counterintuitive”).[12]  Certainly, the studies illustrate that eyewitness confidence in the ability to make an identification before viewing a lineup—i.e. before viewing multiple suspects—does not correlate with accuracy.  See Henderson, 27 A.3d at 899 n.7 (citing research).

         One of the reasons that a witness’s level of confidence may not be a reliable indicator of accuracy is that post-identification feedback—confirmatory feedback that signals to the witness that his or her identification is correct—can “increase confidence in an identification, regardless of whether the identification is correct.” NAS Report at 92 (citing A. B. Douglass and N. K. Steblay, “Memory Distortion in Eyewitnesses: A Meta-Analysis of the Post-Identification Feedback Effect,” Applied Cognitive Psychology 20(7): 859–869 (2006)). Thus, “[l]aw enforcement’s maintenance of neutral pre-identification communications—relative to the identification of a suspect—is seen as vital to ensuring that the eyewitness is not subjected to conscious or unconscious verbal or behavioral cues that could influence the eyewitness’ identification.” Id. at 91-92. The Henderson court wrote that post-identification confirmatory feedback can “reduce doubt and engender a false sense of confidence.” Henderson, 27 A.3d at 899 (discussing “substantial research about confirmatory feedback” and confidence malleability). From this, the New Jersey Supreme Court concluded that “[c]onfirmatory feedback can distort memory,” and as a result, “to the extent confidence may be relevant in certain circumstances, it must be recorded in the witness’s own words before any possible feedback. To avoid possible distortion, law enforcement officers should make a full record—written or otherwise—of the witness’ statement of confidence once an identification is made.” Id. at 900. In other words, by requiring that officers assess the eyewitness’s confidence at the time of the identification, the Court codified the research finding that a person’s confidence in his or her identification is likely to increase over time, due to the influence of post-event information, and that this increase in confidence is not necessarily correlated with accuracy.

         The proposed instruction cautions the jury against over-valuing expressed confidence at trial, especially when that confidence was not expressed at the initial identification. This is consistent with the findings of what the Study Group and the Oregon Supreme Court in Lawson called a “much-cited study”: Gary L. Wells and Amy L. Bradfield, “Good, You Identified the Suspect”: Feedback to Eyewitnesses Distorts Their Reports of the Witnessing Experience, 83(3) Journal of Applied Psychology 360-376 (1998). Dr. Wells’s study is attached as Exhibit X and is cited by the SJC Report at 22, 62, 69; Perry, 132 S. Ct. at 739 n.5 (Sotomayor, J., dissenting); Lawson, 291 P.3d at 710-11; and Henderson, 27 A.3d at 872. The Lawson court described the findings of the study:

One much-cited study on the effects of post-identification confirming feedback staged an experiment in which witnesses, after making an incorrect identification from a target-absent lineup, were told either, “Good, you identified the suspect,” “Actually, the suspect was number ____,” or given no feedback at all. The witnesses were then asked to answer questions regarding the incident and the identification task. The study found that the witnesses who received confirming feedback were not only more certain in the accuracy of their identification, but also reported having had a better view of the perpetrator, noticing more details of the perpetrator's face, paying closer attention to the event they witnessed, and making their identifications quicker and with greater ease than participants who were given no feedback or disconfirming feedback.

 

Lawson, 291 P.3d at 710.

 

         Also attached, as Exhibit X, is a more recent (and also frequently cited) meta-analysis that examined the effect of feedback on memory, Douglass and Steblay, Memory Distortion in Eyewitnesses: A Meta-Analysis of the Post-Identification Feedback Effect, 20 Applied Cognitive Psychol. 859, 865-866 (2006), which is cited by the NAS Report at 92; Study Group at 83; Lawson, 291 P.3d at 711; and Henderson, 27 A.3d at 889. The Study Group described the findings of this meta-analysis: “A more recent meta-analysis examining the results of 20 experiments involving over 2,400 participants confirmed that studies on this factor have produced ‘remarkably consistent’ effects and ‘provide dramatic evidence that post identification feedback can compromise the integrity of a witness’s memory.’” Study Group at 83, quoting Ex. X at 865-66.

            An instruction that cautions the jury about the relationship between confidence and accuracy is thus necessary to mitigate jurors’ deeply ingrained and misguided intuitions about a witness who appears absolutely certain in his or her identification on the stand. An instruction that informs jurors that witnesses can be influenced by communication from other sources is necessary for jurors to understand the established connection between inaccurate identifications and communications before and after identification procedures.

  1. Identification Procedures

Redbook Instructions on Identification Procedures:

A number of factors may affect the accuracy of an identification of the defendant by an alleged eyewitness.

  1. Any subsequent identification and the circumstances surrounding that identification, including… suggestive circumstances that may have influenced the witness, [and any statements or actions by law enforcement officers concerning the identification]… .

 

Proposed Instructions on Identification Procedures:

  1. Identification procedures.

 

[a. If there was evidence of a photographic array or a lineup] An identification may occur through an identification procedure conducted by police, which involves showing the witness a (set of photographs) (lineup of individuals).

 

Where a witness identified the defendant from a (set of photographs) (lineup), you should consider all of the factors I have already described about a witness’s perception and memory.

 

You also should consider whether anything about the defendant’s (photograph) (physical appearance in the lineup) made the defendant stand out from the others. A suspect should not stand out from other members of the lineup. The reason is simple: an array of look-alike faces forces witnesses to examine their memory. In addition, a lineup in which the defendant stands out may inflate a witness’s confidence in the identification because the selection process seemed so easy to the witness. It is, of course, for you to determine whether the composition of the lineup had any effect on the reliability of the identification.

 

You should consider whether the person (showing the photographs) (presenting the lineup) knew who was the suspect and could have, even inadvertently, influenced the identification, and whether anything was said to the witness that may have influenced the identification. You should consider that an identification made by picking a defendant out of a group of similar individuals is generally less suggestive than one that results from the presentation of a defendant alone to a witness.]

 

[b.] You have heard that the police showed the witness a number of photographs. The police have photographs of people from a variety of sources, including the Registry of Motor Vehicles. You should not make any negative inference from the fact that the police had a photograph of the defendant.

 

[c. If there was evidence of a showup] In this case, the witness identified the defendant during a “showup,” that is, the defendant was the [only] person shown to the witness at that time. Even though such a procedure is suggestive in nature, it is sometimes necessary for the police to conduct a “showup” or one-on-one identification procedure.  Although the benefits of a fresh memory may balance the risk of undue suggestion, showups conducted a longer time after an event present a heightened risk of misidentification.  Also, police officers must instruct witnesses that the person they are about to view may or may not be the person who committed the crime and that they should not feel compelled to make an identification.  In determining whether the identification is reliable or the result of an unduly suggestive procedure, you should consider how much time elapsed after the witness last saw the perpetrator, whether the appropriate instructions were given to the witness, and all other circumstances surrounding the showup.

 

  1. Biased lineup

This instruction is adopted from the Massachusetts instruction. The court in Lawson wrote that an identification procedure is like a “pseudo-scientific experiment” conducted by law enforcement to test the hypothesis that the suspect is the perpetrator. Lawson, 291 P.3d at 706. But, as in any experiment, “the validity of the results depends largely on the careful design and unbiased implementation of the underlying procedures.” Id. And, if “the suspect stands out from the other subjects in any way that might lead the witness to select the suspect based on something other than her own memory, the experiment fails to achieve its purpose.” Id. “Properly constructed lineups test a witness' memory and decrease the chance that a witness is simply guessing.” Henderson, 27 A.3d at 897. Attached as Exhibit X is a chapter from the Handbook of Eyewitness Psychology—Roy S. Malpass, Colin G. Tredoux, and Dawn McQuiston-Surrett, Lineup Construction and Lineup Fairness, in 2 The Handbook of Eyewitness Psychology: Memory for People 155 (R.C.L. Lindsay et al. eds., 2007). The Malpass chapter is cited by the Study Group at 23; Lawson, 291 P.3d at 706; and Henderson, 27 A.3d at 891. On lineup structure, the authors write, “[d]ecades of empirical research suggests that mistaken eyewitness identifications are more likely to occur when the suspect stands out in a lineup.” Ex. X at 2. The proposed instruction reflects the judicial and scientific recognition that a lineup in which a suspect stands out from the fillers is a biased one and may produce an inaccurate identification.

  1. Blind administration

This instruction is adopted from the Massachusetts instruction. The NAS Report made only eleven recommendations, and one was that lineups be administered blind. NAS Report at 106-07. As the NAS Report explained, “[e]ven when lineup administrators scrupulously avoid comments that could identify which person is the suspect, unintended body gestures, facial expressions, or other nonverbal cues have the potential to inform the witness of his or her location in the lineup or photo array.” Id. at 106. The proposed instruction informs jurors that lineups should be conducted blindly—a principle that is well-established in law enforcement (and part of the Metropolitan Police Department General Order 304.07) as well as recognized by the courts and accepted in the scientific community. The proposed instruction tells jurors to consider whether or not the procedure was blind and informs them that if it was not, the administrator “could have, even inadvertently, influenced the identification[.]” This instruction is precisely in line with the findings of the NAS Report.

  1. Feedback

See Section 9 for a description of the law and research on feedback.

Showup Identifications

Courts have recognized the problems with showups, which are “essentially single-person lineups: a single suspect is presented to a witness to make an identification,” usually “at the scene of a crime soon after its commission.” Henderson, 27 A.3d at 902-903. Indeed, while noting that showups are “sometimes necessary,” the Henderson court observed that the inherent suggestiveness of showups had led some courts to limit their admissibility. Id. (citing State v. Dubose, 699 N.W.2d 582, 584–85 (Wis. 2005); Commonwealth v. Martin, 850 N.E.2d 555, 562–63 (Mass. 2006); State v. Duuvon, 571 N.E.2d 654, 656 (N.Y. 1991)). As the Henderson court discussed, this suggestiveness is due to a number of factors established by the research, including the fact that “showups increase the risk that witnesses will base identifications more on similar distinctive clothing than on similar facial features,” and the fact that showups “fail to provide a safeguard against witnesses with poor memories or those inclined to guess, because every mistaken identification in a showup will point to a suspect.” Id. at 903. See also Lawson, 291 P.3d at 686 (“Police showups are generally regarded as inherently suggestive—and therefore less reliable than properly administered lineup identifications—because the witness is always aware of whom police officers have targeted as a suspect.”); Fields v. United States, 484 A.2d 570, 574 (D.C. 1984) (“It is generally conceded that a degree of suggestivity is inherent in any on-the-scene viewing of a suspect in the custody of police.”).

Moreover, the jury instruction should reflect the possibilities of suggestion by the administrators of showup procedures. Such language would be helpful to the jury because, as other courts have recognized, lay people are generally unaware of the degree to which external factors—such as suggestive statements by the police or the fact that suspects in a showup are often viewed in handcuffs and under a spotlight—can distort both perception and memory and thereby undermine the accuracy of an identification. See Chapple, 660 P.2d at 1221 (average juror is unaware of how “witnesses frequently incorporate into their identifications inaccurate information gained subsequent to the event and confused with the event”); McDonald, 690 P.2d at 720 (holding that “the effects on recall of bias or cues in identification procedures or methods of questioning” are “either not widely known to laypersons or not fully appreciated by them.”)

Research shows and courts recognize that showups conducted shortly after the crime (within minutes) tend to be more reliable than those conducted more than two hours after the crime.

Showups are most likely to be reliable when they occur immediately after viewing a criminal perpetrator in action, ostensibly because the benefits of a fresh memory outweigh the inherent suggestiveness of the procedure. In as little as two hours after an event occurs, however, the likelihood of misidentification in a showup procedure increases dramatically.

State v. Lawson, 291 P.3d 673, 708 (2012). The study on which the Lawson court relied, A. Daniel Yarmey, Meagan J. Yarmey, and A. Linda Yarmey, Accuracy of Eyewitness Identifications in Showups and Lineups, 20(4) Law and Human Behavior (1996), is attached as Exhibit X and is also cited by the Study Group at 26 and Henderson, 27 A.3d at 903. The Yarmey study tested witnesses’ ability to identify in showups compared to six-person lineups, testing immediately after viewing the target, 30 minutes later, 2 hours later, and 24 hours later in two different experiments. Ex. X at 464, 470. The study found that “[i]nnocent suspects were at significantly less risk in being falsely identified in a six-persons lineup than in a one-person lineup, especially with 2-h and 24-h retention intervals.” Id. at 473. The court in Lawson explained, “the immediate showup identification of an innocent suspect produced a misidentification rate of 18 percent (compared to 16 percent in an immediate lineup); a delay of only two hours increased the misidentification rate to 58 percent (compared to 14 percent in a lineup).” Lawson, 291 P.3d at 708.

For the aforementioned reasons, this Court should issue to the jury the proposed instructions relating to [witness’s] eyewitness identification in this case. If the Court determines that it will not give the entirety of Defendant’s instruction or determines that some of the wording of the instruction is unacceptable to the court, Defendant respectfully seeks the opportunity to request instructions on discrete parts of the instruction and/or the opportunity to revise language to satisfy the court’s concerns.

Conclusion

            WHEREFORE for all the reasons stated above and any others that may appear to the Court, [CLIENT] respectfully requests that this motion be granted.

 

Respectfully submitted,

 

 

__________________________

Counsel for Client

Public Defender Service for D.C.

633 Indiana Avenue NW

Washington, DC 20004

CERTIFICATE OF SERVICE

 

            I hereby certify that a copy of the foregoing filing has been served by e-mail on AUSA, United States Attorney’s Office for the District of Columbia, on DATE.

 

                                                                                                                                                            ___________________________

 

 

[1] Macklin, 409 F.2d at 178; Telfaire, 469 F.2d at 555-56.

[2] Crafting jury instructions based on extra-record scientific research is nothing novel.  In developing its jury instruction on how to evaluate evidence of a defendant’s flight, for example, the D.C. Circuit reviewed the “available empirical data,” Miller v. United States, 320 F.2d 767, 772 (D.C. Cir. 1963) (opinion of Bazelon, C.J.), in “numerous psychological authorities,” Austin v. United States, 414 F.2d 1155, 1157 (D.C. Cir. 1969), and concluded that “[t]he observation that feelings of guilt may be present without actual guilt in so-called normal as well as neurotic people has been made by many recognized scholars and is a significant factor in the contemporary view of the dynamics of human behavior,” Miller, 320 F.2d at 772; see also id. at 772 nn. 10–11.  Thus, when evidence of flight is introduced into a case, the jury should be instructed “that flight does not necessarily reflect feelings of guilt, and that feelings of guilt, which are present in many innocent people, do not necessarily reflect actual guilt.”  Id. at 773; see also id. at 774 (opinion of Fahy, J.) (agreeing with Chief Judge Bazelon’s opinion); Austin, 414 F.2d at 1157–58.

[3] Throughout footnotes 5-17, 25, 26, 29, and 30 of Wade, the Supreme Court’s opinion cites to scholarly and social science publications. Some of the cited works describe experiments of staged crimes in classrooms. Marshall Houts, From Evidence to Proof 3-26 (1956); Glanville Williams, Proof of Guilt: A Study of the English Criminal Trial 83-98 (1955); F. Gorphe, Showing Prisoners to Witness for Identification, 1 Am. J. Police Sci. 79 (1930). Others discuss the ways in which scientific research casts light on suggestive police procedures. John Henry Wigmore, Evidence in Trials at Common Law § 786a (3d ed. 1940); Edwin M. Borchard, Convicting the Innocent: Errors of Criminal Justice xiii-xxiv (1932). Another describes how psychological research has demonstrated the unreliability of eyewitness testimony. Daniel E. Murray, Criminal Lineup at Home and Abroad, Utah L. Rev. 610, 610-628 (1966).

 

[4] The NAS Report wrote: “[w]ith the exception of the New Jersey instructions, jury instructions have tended to address only certain subjects, or to repeat the problematic Manson v. Braithwaite language, which was not intended as instructions for jurors.” (The Massachusetts instructions had not yet been finalized at the time of the release of the NAS Report.)  NAS Report at 112. Though the Redbook instruction repeats some of the factors identified for consideration by courts in Manson, the primary defect lies in the instruction’s failure to instruct on a variety of well-established factors known to affect identification, such as stress, presence of a weapon, and identification of a person of another race.

[5] National Registry of Exonerations, A Project of the University of Michigan Law School, Detailed View of Cases at http://www.law.umich.edu/special/exoneration/Pages/detaillist.aspx (last visited DATE)

[6] Because legislative facts are broadly applicable beyond an individual case, courts need not reinvent the wheel each time a ruling calls for analysis of scientific research.  Instead, courts can and do rely on previous analyses of the identical issue by other courts.  See Porter, 618 A.2d at 635 (holding that, in deciding whether a scientific methodology is generally accepted, court may consider, inter alia, “judicial opinions in other jurisdictions”).  

 

 

[7] The Massachusetts Study Group on Eyewitness Identification was convened by the Justices of the Massachusetts Supreme Judicial Court in the fall of 2011. The Study Group’s Report, attached here as Exhibit X, was relied on by the Supreme Judicial Court in Gomes, 22 N.E.3d 897, a significant Massachusetts eyewitness identification decision, discussed in further detail below at pages X-X.

[8] The then-model instruction in Massachusetts came from the Supreme Judicial Court’s decision in Com. v. Rodriguez, 391 N.E.2d 889, 897 (Mass. 1979), which appended instructions taken from United States v. Telfaire, 469 F.2d 552, 558-559 (D.C. Cir. 1972). The current Redbook instruction was adopted from the Telfaire instruction.

 

[9] The Alaska Supreme Court, in responding to an argument from the prosecution that the Court should not consider scientific evidence that was not “subjected to the adversarial process at trial,” stated:

Other states' high courts have followed different procedural paths when modifying their standards for evaluating eyewitness identifications. The special master appointed by the New Jersey Supreme Court “to evaluate scientific and other evidence about eyewitness identifications ... presided over a hearing that probed testimony by seven experts and produced more than 2,000 pages of transcripts along with hundreds of scientific studies,” then issued an extensive report on which the court heavily relied. Other courts, acknowledging the scientific consensus, have not required that the science be tested again in a trial-like process. The Massachusetts Supreme Judicial Court convened a “Study Group” in 2011 to determine how it could improve its model jury instructions for the evaluation of eyewitness identifications. In 2015 the court “review[ed] the scholarly research, analyses by other courts, amici submissions, and the Study Group Report and comments” and adopted new standards. The supreme courts of Connecticut, Hawai'i, Oregon, Utah, and Wisconsin, while noting judicial trends, have also relied directly on the scientific research to explain why their standards should be modified.

Young, 374 P.3d at 414-15 (internal footnotes omitted).

[10] However, this Court also has the authority to appoint its own committee, special master, or study group to review the research and/or the authority to order a hearing on the research.

[11] See also State v. Chapple, 660 P.2d 1208, 1221 (Ariz. 1983) (en banc) (concluding that the average juror is unaware of how “witnesses frequently incorporate into their identifications inaccurate information gained subsequent to the event and confused with the event”); People v. McDonald, 690 P.2d 709, 720 (Cal. 1984), overruled on other grounds by People v. Mendoza, 4 P.3d 265 (Cal. 2000) (holding that “the effects on memory of the witness’s exposure to subsequent information or suggestions, and the effects on recall of bias or cues in identification procedures or methods of questioning” are “either not widely known to laypersons or not fully appreciated by them”).

[12] See also Newsome v. McCabe, 319 F.3d 301, 305 (7th Cir. 2003) (discussing a wide range of social science research that establishes low correlation between a witness’s confidence and the accuracy of the identification); United States v. Downing, 753 F.2d 1224, 1231 (3d Cir. 1985) (noting that lack of correlation between confidence and accuracy “goes beyond what an average juror might know as a matter of common knowledge,” and may “directly contradict ‘common sense.’”); United States v. Lester, 254 F. Supp. 2d 602, 612 (E.D. Va. 2003) (“[T]he . . . correlation (or lack thereof) between confidence and accuracy . . . do[es] seem to fall outside the common sense of the average juror.”); see also United States v. Norwood, 939 F. Supp. 1132, 1139 (D.N.J. 1996) (finding that the fact that “witnesses ofttimes profess considerable confidence in erroneous identifications is fairly counterintuitive”) (quoting United States v. Stevens, 935 F.2d 1380, 1400 (3d Cir. 1991)).

3.1.3.1 Read ONE of the three jury instructions below:

Defense Proposed Eyewitness ID Jury Instruction

PRELIMINARY/CONTEMPORANEOUS INSTRUCTION[1]

You may hear testimony from a witness who has identified the defendant as the person who committed [or participated in] the alleged crime[s]. Where a witness has identified the defendant as the person who committed [or participated in] the alleged crime[s], you should examine the identification with care. As with any witness, you must determine the credibility of the witness, that is, do you believe the witness is being honest? Even if you are convinced that the witness believes his or her identification is correct, you still must consider the possibility that the witness made a mistake in the identification. A witness may honestly believe he or she saw a person, but perceive or remember the event inaccurately. You must decide whether the witness's identification is not only truthful, but accurate.

People have the ability to recognize others they have seen and to accurately identify them at a later time, but research and experience have shown that people sometimes make mistakes in identification. The mind does not work like a video recorder. A person cannot just replay a mental recording to remember what happened. Memory and perception are much more complicated. Generally, memory is most accurate right after the event and begins to fade soon thereafter. Many factors occurring while the witness is observing the event may affect a witness's ability to make an accurate identification. Other factors occurring after observing the event also may affect a witness's memory of that event, and may alter that memory without the witness realizing that his or her memory has been affected. Later in the trial, I will discuss in more detail the factors that you should consider in determining whether a witness's identification is accurate. Ultimately, you must determine whether or not the Commonwealth has proved the charge[s], including the identity of the person who committed [or participated in] the alleged crime[s], beyond a reasonable doubt.

The burden is on the government to prove beyond a reasonable doubt, not only that the offense[s] was/were committed, but also that [name of defendant] is the person who committed it/them. If you are not convinced beyond a reasonable doubt that the defendant is the person who committed [or participated in] the alleged crime[s], you must find the defendant not guilty.[2]

In deciding whether the government has proved beyond a reasonable doubt that [name of defendant] is the person who committed the offense[s], you may consider any evidence relating to the identification of that person.

As with any witness, you must determine the witness’s credibility, that is, do you believe the witness is being honest? Even if you are convinced that the witness believes his or her identification is correct, you must still consider the possibility that the witness made a mistake in the identification. A witness may honestly believe he or she saw a person, but perceive or remember the event inaccurately. You must decide whether the witness’s identification is not only truthful, but accurate.[3]

People have the ability to recognize other people from past experiences and to identify them at a later time, but research has shown that there are risks of making mistaken identifications. That research has focused on the nature of memory and the factors that affect the reliability of eyewitness identifications.[4]

The mind does not work like a video recorder. A person cannot just replay a mental recording to remember what happened. Memory and perception are much more complicated. Remembering something requires three steps. First, a person sees an event. Second, the person's mind stores information about the event. Third, the person recalls stored information. At each of these stages, a variety of factors may affect—or even alter—a person’s memory of what happened and thereby affect the accuracy of a later identification.[5] This can happen without the person being aware of it.[6]

 

A number of factors may affect the accuracy of an identification of [name of the defendant] by an alleged eyewitness. In evaluating this identification, you should consider the observations and perceptions on which the identification was based, the witness’s ability to make those observations and perceive events, and the circumstances under which the identification was made. Although nothing may appear more convincing than a witness’s identification of a perpetrator, you must critically analyze such testimony. Identifications, even if made in good faith, may be mistaken. Therefore, when analyzing such testimony, be advised that a witness’s level of confidence, standing alone, may not be an indication of the reliability of the identification.  In deciding what weight, if any, to give to the identification testimony, you should consider the following factors that are related to the witness, the alleged perpetrator, and the criminal incident itself. 

1. Opportunity to View the Event.[7]  In evaluating the reliability of the identification, you should consider the opportunity the witness had to observe the alleged offender at the time of the event.  For example, how good a look did the witness get of the person and for how long?  How much attention was the witness paying to the person at that time?  How far apart were the witness and the person?  How good were the lighting conditions?  You should evaluate a witness's testimony about his or her opportunity to observe the event with care and should consider the following: 

     a. Characteristics of the Witness.[8]  You should consider the physical and mental characteristics of the witness when the observation was made.  For example, how good was the witness's eyesight?  Was the witness experiencing illness, injury, or fatigue?  Was the witness under a high level of stress?  High levels of stress may reduce a person's ability to make an accurate identification.[9]

     b. Duration.[10]  The amount of time an eyewitness has to observe an event may affect the reliability of an identification. Although there is no minimum time required to make an accurate identification, a brief or fleeting contact is less likely to produce an accurate identification than a more prolonged exposure to the perpetrator. In addition, time estimates given by witnesses may not always be accurate because witnesses tend to think events lasted longer than they actually did.[11]

     c. Weapon Focus.[12]  You should consider whether the witness saw a weapon during the event.  If the event is of short duration, the visible presence of a weapon may distract the witness's attention away from the person's face.  But the longer the event, the more time the witness may have to get used to the presence of a weapon and be able to focus on the person's face.

     d. Distance.[13] A person is easier to identify when close by. The greater the distance between an eyewitness and a perpetrator, the higher the risk of a mistaken identification. In addition, a witness’s estimate of how far he or she was from the perpetrator may not always be accurate because people tend to have difficulty estimating distances.

     e. Lighting.[14] Inadequate lighting can reduce the reliability of an identification. You should consider the lighting conditions present at the time of the alleged crime in this case.

     f. Disguise/Changed Appearance.[15] The perpetrator’s use of a disguise can affect a witness’s ability both to remember and identify the perpetrator. Disguises like hats, sunglasses, or masks can reduce the accuracy of an identification. Similarly, if facial features are altered between the time of the event and a later identification procedure, the accuracy of the identification may decrease.   

     g. Intoxication.[16] The influence of alcohol can affect the reliability of an identification. An identification made by a witness under the influence of a high level of alcohol at the time of the incident tends to be less reliable than an identification by a witness who drank only a small amount of alcohol.

In addition to a witness’s opportunity to view the alleged perpetrator, you should also consider the following factors:

2.  Cross-racial identification.[17] If the witness and the person identified appear to be of different races (or ethnicities), you should consider that people may have greater difficulty in accurately identifying someone of a different race (or ethnicity) than someone of their own race (or ethnicity).

3. Passage of time.[18]  You should consider how much time passed between the event observed and the identification. Generally, memory is most accurate immediately after the event and begins to fade soon thereafter.

4. Exposure to outside information.[19] You should consider that the accuracy of identification testimony may be affected by information that the witness received between the event and the identification, or received after the identification. Such information may include identifications made by other witnesses, physical descriptions given by other witnesses, photographs or media accounts, or any other information that may affect the independence or accuracy of a witness's identification. Exposure to such information before or after the witness makes an identification not only may affect the accuracy of an identification, but also may inflate the witness's certainty in the identification and the witness's memory about the quality of his or her opportunity to view the event.[20] The witness may not realize that his or her memory has been affected by this information.

An identification made after suggestive conduct by the police or others should be scrutinized with great care. Suggestive conduct may include anything that a person says or does prior to, during, or after an identification procedure that might influence the witness to identify a particular individual. Suggestive conduct need not be intentional, and the person doing the "suggesting" may not realize that he or she is doing anything suggestive.[21]

5. Expressed certainty.[22]  You should consider that a witness’s statement of how certain he/she is in an identification, standing alone, is generally not a reliable indicator of the accuracy of the identification, especially where the witness did not describe that level of certainty when the witness first made the identification.

6. Identification procedures.

[a. If there was evidence of a photographic array or a lineup] An identification may occur through an identification procedure conducted by police, which involves showing the witness a (set of photographs) (lineup of individuals).

Where a witness identified the defendant from a (set of photographs) (lineup), you should consider all of the factors I have already described about a witness’s perception and memory.

You also should consider whether anything about the defendant’s (photograph) (physical appearance in the lineup) made the defendant stand out from the others. A suspect should not stand out from other members of the lineup. The reason is simple: an array of look-alike faces forces witnesses to examine their memory. In addition, a lineup in which the defendant stands out may inflate a witness’s confidence in the identification because the selection process seemed so easy to the witness. You should consider that an identification made by picking a defendant out of a group of similar individuals is generally less suggestive than one that results from the presentation of a defendant alone to a witness.[23]

It is, of course, for you to determine whether the composition of the lineup had any effect on the reliability of the identification.[24]

You should consider whether the person (showing the photographs) (presenting the lineup) knew who was the suspect and could have, even inadvertently, influenced the identification, and whether anything was said to the witness that may have influenced the identification.

[The Metropolitan Police Department (MPD) is required to follow certain procedures when administering identifications. In this case, MPD was required, but failed, to:

[i. Instruct the witness prior to the identification procedure that the perpetrator may or may not be present in the identification procedure.]

[ii. Assure the witness that the Department will continue to investigate the offense regardless of whether the witness makes an identification or not.]

[iii. Conduct the identification procedure individually and privately.]

[iv. Not indicate to the witness by words, sounds or actions, directly or indirectly, whether the witness has identified “the right” person or “the wrong” person.]

[v. Conduct the identification procedure using a blind or modified-blind method. When an identification procedure is administered “blind,” the administrator does not know which photograph is of the suspect and which photographs are of fillers. When an identification procedure is administered “modified-blind” the investigator is unable to discern during the identification procedure which photograph the witness is viewing.]

[any other violation of MPD General Order 304.07]

You may consider MPD’s failure to adhere to its own policies in your consideration of the reliability of the identification.

[b.] You have heard that the police showed the witness a number of photographs. The police have photographs of people from a variety of sources, including the Registry of Motor Vehicles. You should not make any negative inference from the fact that the police had a photograph of the defendant.

[c. If there was evidence of a showup] In this case, the witness identified the defendant during a “showup,” that is, the defendant was the [only] person shown to the witness at that time. Even though such a procedure is suggestive in nature, it is sometimes necessary for the police to conduct a “showup” or one-on-one identification procedure.  Although the benefits of a fresh memory may balance the risk of undue suggestion, showups conducted a longer time after an event present a heightened risk of misidentification.  Also, police officers must instruct witnesses that the person they are about to view may or may not be the person who committed the crime and that they should not feel compelled to make an identification.  In determining whether the identification is reliable or the result of an unduly suggestive procedure, you should consider how much time elapsed after the witness last saw the perpetrator, whether the appropriate instructions were given to the witness, and all other circumstances surrounding the showup.

[d. If multiple viewings] You should consider whether the witness viewed the defendant in multiple identification procedures or events. When a witness views the same person in more than one identification procedure or event, it may be difficult to know whether a later identification comes from the witness’s memory of the original event, or from the witness’s observation of the person at an earlier identification procedure or event.

7. Failure to identify or inconsistent identification.  You should consider whether a witness ever failed to identify the defendant, or made an identification that was inconsistent with the identification that the witness made at the trial.[25]  Research has shown that a non-identification of a suspect, or an identification of a known innocent filler, may be evidence of the suspect’s innocence. 

Based upon any identification[s] by the witness[es] and all additional evidence you have heard, you must be satisfied that the government has met its burden of proving beyond a reasonable doubt that [name of defendant] is the person who committed this offense before you may convict him/her. If the evidence concerning the identification of the defendant is not convincing beyond a reasonable doubt, you must find [name of defendant] not guilty.

[1] The preliminary instruction is adopted from the Massachusetts instruction.

[2] This paragraph is adopted from the Massachusetts instruction.

[3] This paragraph is adopted from the Massachusetts instruction.

[4] This paragraph is adopted from the Massachusetts instruction.

[5] NAS Report at 15 (“Human vision does not capture a perfect, error-free “trace” of a witnessed event… The recognition of one person by another—a seemingly commonplace and unremarkable everyday occurrence—involves complex processes that are limited by noise and subject to many extraneous influences.”); NAS Report at 59-60 (“Like vision, memory is also beset by noise. Encoding, storage, and remembering are not passive, static processes that record, retain, and divulge their contents in an informational vacuum, unaffected by outside influences. The contents cannot be treated as a veridical permanent record, like photographs stored in a safe. On the contrary, the fidelity of our memories for real events may be compromised by many factors at all stages of processing, from encoding through storage, to the final stages of retrieval. Without awareness, we regularly encode events in a biased manner and subsequently forget, reconstruct, update, and distort the things we believe to be true.”); NAS Report at 119 (“As this report indicates, however, the malleable nature of human visual perception, memory, and confidence; the imperfect ability to recognize individuals; and policies governing law enforcement procedures can result in mistaken identifications with significant consequences.”); Benn v. United States (Benn II), 978 A.2d 1257, 1267 n.26 (“most potential jurors believe that a person’s memory functions like a camera, capable of retrieving a captured image on demand,” memory in fact “is influenced by a variety of factors[.]”); see Commonwealth v. Gomes, 470 Mass. 352, 369 (2015); Supreme Judicial Court Study Group on Eyewitness Evidence: Report and Recommendations to the Justices 15 (July 25, 2013), available at http://www.mass.gov/courts/docs/sjc/docs/eyewitness-evidence-report-2013.pdf [http://perma.cc/WY4M-YNZN] (Study Group Report), quoting Report of the Special Master, State v. Henderson, N.J. Supreme Ct., No. A-8-08, at 9 (June 10, 2010) (Special Master's Report) ("The central precept is that memory does not function like a videotape, accurately and thoroughly capturing and reproducing a person, scene or event. . . . Memory is, rather[,] a constructive, dynamic and selective process"); State v. Henderson, 208 N.J. 208, 245 (2011); State v. Lawson, 352 Or. 724, 771 (2012) (Appendix).

[6] This paragraph is adopted from the Massachusetts instruction.

[7] This paragraph is adopted from the Massachusetts instruction. The Massachusetts instruction cites for support of this paragraph: D. Reisberg, The Science of Perception and Memory:  A Pragmatic Guide for the Justice System 51-52 (2014) (witnesses may not accurately remember details, such as length of time and distance, when describing conditions of initial observation);  Lawson, 352 Or. at 744 (information that witness receives after viewing event may falsely inflate witness's "recollections concerning the quality of [his or her] opportunity to view a perpetrator and an event").

[8] This paragraph is adopted from the Massachusetts instruction.  

[9] NAS Report at 94 (“High levels of stress or fear can affect eyewitness identification. This finding is not surprising, given the known effects of fear and stress on vision and memory… Under conditions of high stress, a witness’ ability to identify key characteristics of an individual’s face (e.g. hair length, hair color, eye color, shape of face, presence of facial hair) may be significantly impaired.”) (internal citations omitted).

[10] This paragraph is adopted from the New Jersey instruction.

[11] NAS Report at 50-51 (Factors “can restrict the visual information accessible” when witness is viewing “an object of any sort (such as a person) or events involving the object (a criminal act).” One of these factors is “viewing time” which “predictably influence[s] the quantity of information… that a viewer gains from a visual scene, and thus the degree to which the perceptual experience can accurately reflect the properties of the external world. At the extreme, short viewing times… simply reduce the number of correlated photons reaching the retina to the point where they scarcely exceed photon noise, and uncertainty is very high.”); see also NAS Report at 69 (“The committee has reviewed much of this research [on vision and memory], and has identified restrictions on what can be seen under specific environmental and behavioral conditions (e.g., as poor illumination, limited viewing duration, viewing angle)[.]”;  State v. Henderson, 208 N.J. 208, 264 (2011) (“studies have shown, and the Special Master found, ‘that witnesses consistently tend to overestimate short durations, particularly where much was going on or the event was particularly stressful.’” (citing Elizabeth F. Loftus et al., Time Went by So Slowly: Overestimation of Event Duration by Males and Females, 1 Applied Cognitive Psychol. 3, 10 (1987)).

[12] Adopted from Massachusetts instruction; see Study Group Report, supra at 130 ("A weapon can distract the witness and take the witness's attention away from the perpetrator's face, particularly if the weapon is directed at the witness. As a result, if the crime is of short duration, the presence of a visible weapon may reduce the accuracy of an identification. In longer events, this distraction may decrease as the witness adapts to the presence of the weapon and focuses on other details").

[13] Adopted from New Jersey instruction; see Henderson, 27 A.3d at 906 (“Research has also shown that people have difficulty estimating distances”) (citing R.C.L. Lindsay et al., How Variations in Distance Affect Eyewitness Reports and Identification Accuracy, 32 Law & Hum. Behav. 526, 533 (2008)); see also NAS Report at 92 (“Research has shown that the physical distance between the witness and the perpetrator is an important estimator variable, as it directly affects the ability of the eyewitness to discern visual details, including features of the perpetrator…”).

[14] Adopted from New Jersey instruction; see also note 6.

[15] Adopted from New Jersey instruction; See also Lawson, 352 Or. at 775 (Appendix) ("[S]tudies confirm that the use of a disguise negatively affects later identification accuracy.  In addition to accoutrements like masks and sunglasses, studies show that hats, hoods, and other items that conceal a perpetrator’s hair or hairline also impair a witness’s ability to make an accurate identification"); Henderson, 27 A.3d at 907 ("Disguises and changes in facial features can affect a witness'[s] ability to remember and identify a perpetrator"); State v. Clopten, 223 P.3d 1103, 1108 (Utah 2009) ("[A]ccuracy is significantly affected by factors such as the amount of time the culprit was in view, lighting conditions, use of a disguise, distinctiveness of the culprit's appearance, and the presence of a weapon or other distractions"); Wells & Olson, Eyewitness Testimony, 54 Ann. Rev. Psychol. 277, 281 (2003) (Wells & Olson) ("Simple disguises, even those as minor as covering the hair, result in significant impairment of eyewitness identification").  See also Cutler, A Sample of Witness, Crime, and Perpetrator Characteristics Affecting Eyewitness Identification Accuracy, 4 Cardozo Pub. L. Pol'y & Ethics J. 327, 332 (2006) ("In data from over 1300 eyewitnesses, the percentage of correct judgments on identification tests was lower among eyewitnesses who viewed perpetrators wearing hats [44%] than among eyewitnesses who viewed perpetrators whose hair and hairlines were visible [57%]").

[16] Adopted from New Jersey instruction.

[17] Adopted from Massachusetts instruction; see NAS Report at 96 (“Own-race bias occurs in both visual discrimination and memory tasks, in laboratory and field studies, and across a range of races, ethnicities, and ages. Recent analyses revealed that cross-racial (mis) identification was present in 42 percent of the cases in which an erroneous eyewitness identification was made. …. [T]he existence of own-race bias is generally accepted”).

[18] Adopted from Massachusetts instruction; see NAS Report at 15 (“For eyewitness identification to take place, perceived information must be encoded in memory, stored, and subsequently retrieved. As time passes, memories become less stable”).

[19] Adopted from Massachusetts instruction. See Gomes, 470 Mass. at 373-374; Study Group Report, supra at 21-22; Special Master’s Report, supra at 30-31 ("An extensive body of studies demonstrates that the memories of witnesses for events and faces, and witnesses' confidence in their memories, are highly malleable and can readily be altered by information received by witnesses both before and after an identification procedure"); Lawson, 352 Or. at 786 (Appendix) ("The way in which eyewitnesses are questioned or converse about an event can alter their memory of the event").

[20] NAS Report at 92 (“efforts to maintain objectivity and eliminate potentially informative communication will help ensure that eyewitness reports are not contaminated by knowledge or opinions held by others.”)

[21] NAS Report at 91-92 (“Law enforcement’s maintenance of neutral pre-identification communications—relative to the identification of a suspect—is seen as vital to ensuring that the eyewitness is not subjected to conscious or unconscious verbal or behavioral cues that could influence the eyewitness’ identification.”); NAS Report at 92 (“Furthermore, some types of law enforcement communication with a witness, after the witness has made an identification… can increase confidence in the identification, regardless of whether the identification is correct.”)

[22] Adopted from Massachusetts instruction; see NAS Report at 108 (“Evidence indicates that self-reported confidence at the time of trial is not a reliable predictor of eyewitness accuracy. The relationship between the witness’ stated confidence and accuracy of identifications may be greater at the moment of initial identification than at the time of trial. However, the strength of the confidence-accuracy relationship varies, as it depends on complex interactions among such factors as environmental conditions, persons involved, individual emotional states, and more. Expressions of confidence in the courtroom often deviate substantially from a witness’ initial confidence judgment, and confidence levels reported long after the initial identification can be inflated by factors other than the memory of the suspect.”)

[23] Adopted from Massachusetts instruction; see NAS Report at 92 (“use of ‘blinded’ or ‘double-blind’ lineup identification procedures is an effective strategy for reducing the likelihood that a witness will be exposed to cues from interactions with law enforcement (such as feedback) that could influence identifications or confidence in those identifications.”); see Study Group Report, supra at 140, quoting Wells & Quinlivan, supra at 6 ("From the perspective of psychological science, a procedure is suggestive if it induces pressure on the eyewitness to make a lineup identification [a suggestion by commission], fails to relieve pressures on the witness to make a lineup selection [a suggestion by omission], cues the witness as to which person is the suspect, or cues the witness that the identification response was correct or incorrect").

[24] Adopted from the Massachusetts and New Jersey instructions.

[25] Adopted from Massachusetts instruction.

Defense Proposed Likelihood Ratio in DNA Evidence Jury Instruction

During this trial you heard testimony about DNA evidence. The DNA analyst reported a number using a statistic called a likelihood ratio. This statistic is very different from the statistics we use in everyday life, and it should not be confused with familiar figures such as the probability that it will rain next Monday or the probability that a random person would match a DNA profile.

There are certain misunderstandings that can come from a likelihood ratio. To ensure there is no confusion, the following points must be emphasized:

The likelihood ratio does not say how probable it is that Mr. ______ left DNA on an item.

The likelihood ratio does not say how likely it is that Mr. _____ is guilty.

Finally, the likelihood ratio cannot tell you how DNA got onto an item, whether through direct handling, indirect transfer, or laboratory contamination.

What is the likelihood ratio then? The likelihood ratio is the relative probability of observing the DNA evidence under two competing hypotheses. Two possible explanations for the evidence were compared, but they are not the only possible explanations for the evidence, and the likelihood ratio cannot tell you how objectively probable either scenario is. It is important to remember that both hypotheses could be wrong, but one may still be a better explanation than the other. When compared, any two hypotheses will produce a likelihood ratio.
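For readers who find a worked example helpful, the arithmetic behind a likelihood ratio can be sketched in a few lines of Python. The probabilities below are invented solely for illustration; they come from no real case, laboratory, or published figure:

```python
# The likelihood ratio compares how probable the observed evidence (E) is
# under two competing hypotheses. All numbers here are assumptions made up
# for illustration only.
#   H1: the defendant and two unrelated people contributed to the mixture
#   H2: three unrelated people contributed to the mixture

p_evidence_given_h1 = 0.020    # P(E | H1), assumed for illustration
p_evidence_given_h2 = 0.00004  # P(E | H2), assumed for illustration

likelihood_ratio = p_evidence_given_h1 / p_evidence_given_h2
print(round(likelihood_ratio))  # 500

# The correct reading: "the evidence is about 500 times more probable if H1
# is true than if H2 is true." The number is NOT P(H1 | E) -- it is not the
# probability that H1 is true, that the defendant handled the item, or that
# the defendant is guilty, and it says nothing about how the DNA was deposited.
```

Converting a likelihood ratio into a probability that a hypothesis is true would additionally require prior probabilities under Bayes' theorem, which is precisely the inferential step the instruction warns jurors not to take on their own.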

In this case the two hypotheses compared are that the contributors to the DNA are the defendant and two random unrelated individuals, or instead three random unrelated individuals. No other possible hypotheses were considered. Neither of those hypotheses posed the question of whether Mr. _____ handled the _____ or whether Mr. _____ is guilty. That statistic cannot tell you whether Mr. ____ contributed to the DNA mixture on the _____.

As with any scientific test, there are always risks of false positives and false negatives. If you do not believe that any hypothesis proposed by the government when calculating this statistic is true, you may not speculate as to what the likelihood ratio would be under any alternative hypothesis that was not tested.

The likelihood ratio is not commonly used in day-to-day life, and it is understandable if there is confusion about its meaning. You should weigh this evidence with caution rather than assume the statistic's validity, and weigh it along with all the other evidence, or lack of evidence, in determining whether the government has proved its case beyond a reasonable doubt.

Defense Proposed Fingerprint Jury Instruction

Below we offer two alternative “expert” instructions:

  • Approach #1 assumes that the court has admitted the testimony not as scientific evidence, but, instead, as testimony by someone with special skills or experience about a matter that may be helpful to the jury.
  • The language in this instruction assumes specific reasons why the court made its decision not to treat the discipline as science. You should review this language carefully and tailor it to the decision reached by the court.
  • Approach #2 assumes that the court has admitted the testimony as scientific evidence, and that your efforts to exclude the testimony as lacking foundational validity have failed, but that you have made some inroads with the court about potential validity concerns such that the court has permitted cross-examination challenging the accuracy and reliability of the method.

As support for the specific language choices here, we have included a version of the instructions with cites to:

  • The PCAST Report and Addendum: Report to the President, Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature Comparison Methods, Executive Office of the President, President’s Council of Advisors on Science and Technology (Sept. 20, 2016), and An Addendum to the PCAST Report on Forensic Science in Criminal Courts (Jan. 6, 2017);
  • ABA Resolution 101C: American Bar Association Resolution 101C (adopted Feb. 6, 2012).

Approach #1 (not scientific evidence):

An example of an instruction when the court permits opinion testimony based on training and experience but agrees the discipline is not based in science might be:

You are about to hear the testimony of a fingerprint examiner, who claims to have special qualifications in examining and comparing known and latent fingerprints.[1]

Witnesses are usually permitted to testify only about what they directly experienced, such as what they saw or what they did on a particular occasion.[2] Witnesses are not generally allowed to express their opinions. However, some witnesses are permitted to offer their opinions because they have developed a skill, through their training or experience,[3] that few members of the general public possess.

For example, in a lawsuit about a collision between two tractor-trailer trucks, jurors might find it helpful to hear the opinion of a witness who has no personal knowledge about the facts of the case, but who has spent years driving tractor-trailer trucks.[4] No one would think of such an experienced trucker as an expert having what we would call “scientific” knowledge of tractor-trailer trucks. The witness’s opinion is not based on scientific research. Instead, the witness’s knowledge and skill are based on training and experience.

In the same way, fingerprint examiners, as a group, may have skills that members of the general public don’t have and may use those skills to reach nonscientific opinions that are useful to you in your deliberations.[5] A fingerprint examiner may spend a substantial amount of time looking at latent and known fingerprints. In the course of their work, a fingerprint examiner may have developed a skill in identifying similarities and differences between latent and known fingerprints.

I have studied the skill claimed by [name of expert], and I have decided it is more like a practical skill, such as driving a tractor-trailer, and less like a scientific skill, such as might be developed by a chemist or a physicist who conducts experiments.[6] That is, although fingerprint examiners may work in “laboratories,” fingerprint examiners are not scientists. Fingerprint examiners are more like artisans, that is, experienced craftsmen. They are individuals whose opinions are based on their training and experience,[7] but not on scientific research.

The experience-based opinion given by [name of expert] is therefore different from a science-based expert opinion.  Science-based opinions need to be based on methods that have been proven to be valid and reliable through what scientists call “empirical tests,” which are experiments that test how often examiners actually get the right answer using their chosen method.[8]  Training, experience, and education are no substitutes for scientific research when it comes to giving a science-based opinion.[9]  Even if a skilled examiner has done years of casework, such casework, without scientific research, cannot alone establish scientific validity and reliability.[10] And fingerprint comparison, as a field, has not yet been subject to the type of empirical testing that would establish it as a valid and reliable science-based method.

Because the method used by [name of expert] has not been subject to this type of scientific testing, we do not know what the error rate of that method is. Every human endeavor, of course, is subject to errors. When a method has gone through scientific testing, we can “quantify,” or put a number on, the chances that the expert’s opinion is wrong. But we cannot do that with this method because, though it could be subject to such testing, it has not been subjected to a sufficient number of independent studies [and/or because the existing studies have not evaluated the performance of examiners with limited experience; and/or because the quality of the latent print in this instance is materially different from the quality of the prints used in the existing studies, and thus the error rate for fingerprint examination under the circumstances present in this case has not been estimated].[11] Therefore, if the fingerprint examiner expresses a certain level of confidence about [his/her] opinion in this case, you are to take it as a statement of [his/her] personal confidence, not as a statement of accuracy based on scientific research or empirical data.[12]

There have not been a sufficient number of independent studies [and/or studies on less experienced examiners; and/or studies on latent prints of the quality present in this case] to estimate the error rate for fingerprint comparison in this instance.  But the studies that have been done on higher quality fingerprints [and/or more experienced examiners] indicate that the error rates or the false positive rates are “much higher” than you might believe based on the portrayal of fingerprint examination in TV entertainment, or as a result of longstanding claims about accuracy that are unsubstantiated.[13]  
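To make concrete what it would mean to "quantify" an error rate, the sketch below shows how a false positive rate, and an upper confidence bound on it, could be computed from a validation study. The study numbers (5 false positives in 3,000 different-source comparisons) are entirely hypothetical, and the Wilson score interval is only one common choice of bound; PCAST recommended reporting an upper confidence bound rather than a bare point estimate so that small studies do not understate risk:

```python
import math

# Hypothetical validation-study numbers, invented for illustration only:
# examiners made 5 false positive identifications in 3,000 comparisons of
# prints known to come from different people.
false_positives = 5
comparisons = 3000

point_estimate = false_positives / comparisons  # observed false positive rate

def wilson_upper(errors, trials, z=1.96):
    """Upper bound of the 95% Wilson score interval for an error rate."""
    p = errors / trials
    denom = 1 + z**2 / trials
    center = p + z**2 / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (center + margin) / denom

upper = wilson_upper(false_positives, comparisons)
print(f"point estimate:  1 in {round(1 / point_estimate)}")  # 1 in 600
print(f"upper 95% bound: 1 in {round(1 / upper)}")           # 1 in 257
```

The gap between "1 in 600" and "1 in 257" illustrates why the bound matters: a single small study cannot rule out an error rate substantially worse than the one observed, which is the same point the instruction makes about insufficiently studied methods.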

Just because a witness is allowed to give an opinion does not mean that you must believe his or her opinion.[14] As with any other witness, it is up to you to decide whether you believe this opinion and whether you want to rely on it when making decisions about the case. In evaluating the believability of this witness, follow the instructions about the believability of witnesses generally. In addition, you can consider whether the witness has enough training and experience to give the opinion that the witness gave. You may also consider the factors or information that the witness relied on when reaching the opinion, and you may consider the reasoning and judgment the witness used in reaching [his/her] opinion. You may completely or partially disregard any opinion that you find unbelievable, unreasonable, or unsupported by the evidence.[15] If you do not disregard the opinion, you may still determine what weight to give it, in light of its lack of basis in scientific studies.

Approach #2 (scientific evidence):

A second approach uses a more standard expert witness instruction, but modifies it with a list of factors the jury should consider when evaluating “scientific” expert testimony.[16] Factors listed below are grouped into four categories: (1) “Foundational” factors, designed to ferret out whether the method has been “shown, based on empirical studies, to be repeatable, reproducible, and accurate, at levels that have been measured and are appropriate to the intended application”;[17] (2) “Validity as applied” factors, designed to ferret out whether “the method has been reliably applied in practice”;[18] (3) “Expertise” factors, designed to help the jury assess the expert’s expertise; and (4) “Laboratory” factors, designed to help the jury assess the laboratory’s qualifications. This list of factors is in no way exclusive, nor should every factor be given in every case. Instead, the list you ultimately develop should be targeted to the issues in your case to aid the jury without overwhelming it.

Witnesses are usually permitted to testify only about what they directly experienced, such as what they saw or what they did on a particular occasion.[19]  Witnesses are not generally allowed to give their opinions. However, some witnesses are allowed to give their opinions because they have acquired a skill, through their training, education, or experience that few members of the general public possess.  

You are about to hear the testimony of a fingerprint examiner, who claims to have specialized knowledge and experience in comparing known and latent fingerprints.[20] [Name of expert] will testify that this opinion is not based simply on his experience in casework, but rather is based on a method that is “scientific.”

In determining the value of an expert opinion claiming to be based on science, you should consider a number of specific factors, which I will talk about in a moment. In determining the value of any expert opinion, remember that just because a witness is allowed to give his or her opinion does not mean that you must believe his or her opinion. As with any other witness, it is up to you to decide whether you believe this testimony and wish to rely on it when making decisions about the case.

When you decide whether you believe the witness’ opinion, you may consider whether the witness has enough training and experience to give the opinion that you heard. You may also consider the method or technique the witness used, and whether there is a strong enough scientific basis for drawing conclusions from that method or technique. [If hypotheticals were allowed: You may also consider the accuracy or inaccuracy of any assumed or hypothetical fact upon which the opinion was based.]

You may completely or partially disregard the opinion if you decide that there is not a strong enough scientific basis for the opinion, or that the witness used an unreliable method to form [his/her] opinion or used a method that hasn’t yet been proven, or that there are not enough reasons to support the opinion, or that the opinion is not based on enough training, education, or experience, or that the opinion is outweighed or contradicted by other evidence. You should consider this evidence with all the other evidence in the case and give it as much weight as you think it fairly deserves.

In deciding how much weight, if any, to give this opinion, you may consider any factors that you decide are relevant.  Specific factors that you may want to consider include the following:

  1. Factors related to the method’s scientific foundation, to guide you in determining whether the method itself is scientifically valid and reliable:

On whether the method’s validity and reliability have been sufficiently proven through scientific testing:

 

  a) Has fingerprint comparison been studied by individuals or organizations that had no stake in the outcome, who empirically tested the method using an approach where the examiners did not know the right answer, and where a large number of examiners were tested?[21]
  b) If so, did those studies demonstrate that fingerprint examination consistently produces accurate and reliable results?[22]
  c) Did those studies use evidence samples that are similar to the samples in this case, such as: [select the particulars of your case]?[23]

 

On whether the method sufficiently accounts for cognitive bias:

 

  a) Is fingerprint comparison a subjective or objective method? Subjective methods involve significant human judgment based on the examiner’s training and experience.[24] Objective methods can be performed by either an automated system or by human examiners exercising little or no judgment.[25]

 

  b) If fingerprint examination is a subjective method, did the fingerprint comparison in this case take into account that subjective methods are more vulnerable to human error, bias, and variations in performance by different examiners?[26] Specifically, consider the following:

 

    i) Cognitive bias includes the natural tendency of humans to be influenced by outside information and outside pressures.[27] Before conducting [his/her] examination and comparison and documenting the results, did the fingerprint examiner here take steps to avoid learning any information about the facts of the case, and to avoid speaking with others about their opinions of the case, that might have affected [his/her] fingerprint comparison?[28]

 

    ii) Another example of cognitive bias in the forensic testing field is the natural tendency of human examiners to focus on similarities between samples and to downplay differences between samples.[29] Did the examiner in this case limit the effect of such bias by first documenting what [he/she] believed were the relevant points of comparison on the items taken from the crime scene before [he/she] compared those samples to the known items?[30]

 

    iii) Did the laboratory conduct a “blind” verification, meaning that it asked a second examiner, who did not already know what decision the first examiner reached, to also compare the samples and decide whether they are consistent?[31]

 

On whether the method’s error rate is known and sufficiently low to merit trust in the examiner’s opinion:

 

  a) To determine whether a fingerprint comparison method is scientifically valid, you must also consider its error rate, or likelihood of producing an inaccurate conclusion.[32] All fingerprint examination and comparison methods, like any human endeavor, are subject to error and have an error rate that is greater than zero.[33] Not even highly automated tests have a zero error rate.[34] The error rate of a fingerprint examiner’s method cannot be inferred from that examiner’s casework alone,[35] nor from the examiner’s expression of confidence about [his/her] opinion or about the accuracy of the field.[36] Instead, a fingerprint comparison method’s error rate can only be determined by conducting scientific experiments, sometimes called “black-box studies,” that actually test how often fingerprint examiners get the right answer.[37] Here the evidence presented by the government suggests the error rate is [reported as most conservative.][38]

 

  b) Has the kind of “black-box testing” described in paragraph (a) been conducted with respect to [name of expert]’s fingerprint comparison method? Have there been a sufficient number of “black-box” tests to demonstrate reproducibility?[39] Have the studies been conducted by independent parties with no stake in the outcome?[40] If so, what is the method’s error rate? That is, how often does the technique or method reach an incorrect conclusion?[41]

 

  c) To the extent the field of fingerprint comparison has not conducted the necessary empirical testing to estimate the method’s error rate, you may consider the absence of such testing in deciding how much weight, if any, you wish to give the examiner’s opinion in this case.
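Footnote [41] to this instruction quotes PCAST’s conclusion that study false positive rates “could be as high as 1 error in 306 cases in one study and 1 error in 18 cases in the other.” Figures of that form are one-sided upper confidence bounds computed from black-box study counts, not raw error tallies: even a study that observed zero errors cannot rule out a substantial error rate if it included too few comparisons. A minimal sketch of that arithmetic, using an exact binomial (Clopper-Pearson style) upper bound and a purely hypothetical study of 0 false positives in 500 comparisons:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def upper_bound(errors: int, trials: int, alpha: float = 0.05) -> float:
    """One-sided (1 - alpha) upper confidence bound on an error rate:
    the largest rate p at which observing `errors` or fewer errors in
    `trials` comparisons still has probability at least alpha.
    Found by bisection, since binom_cdf is decreasing in p."""
    lo, hi = errors / trials, 1.0
    for _ in range(60):  # 60 halvings of [lo, hi] is ample precision
        mid = (lo + hi) / 2
        if binom_cdf(errors, trials, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical study: 0 observed false positives in 500 comparisons.
# The point estimate is 0, but the 95% upper bound is about 1 in 167,
# so the study cannot rule out an error rate that high.
p = upper_bound(0, 500)
print(f"95% upper bound: {p:.4%} (about 1 in {round(1 / p)})")
```

The study size and error count above are illustrative assumptions, not figures from the PCAST report; the point is only that the bound, not the raw tally, is what a "could be as high as" statement reports.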

 

On whether the method is governed by scientific standards:

 

  a) Does fingerprint comparison have standards that have been developed by reputable scientific organizations?[42]

 

  2. Factors related to the validity of the method as applied, to help you determine whether the method was reliably applied by the examiner:

 

  a) Are there laboratory notes that show that the fingerprint examiner in this case properly followed each of the steps for this method?[43]

 

  b) Were those laboratory notes actually made at the time the examiner was examining the sample, rather than prepared after the comparison was completed?[44]

 

  c) Were there any times when the examiner did not follow the laboratory’s protocols and procedures, and if so, were those departures documented and explained to your satisfaction?

 

 

  3. Factors related to whether the expert in this case is sufficiently qualified to render an accurate and helpful opinion based on the method:

 

On whether the examiner’s skills in applying the method have been sufficiently tested:

 

  a) Has the fingerprint examiner in this case taken proficiency tests, that is, “test[s] that measure how often this examiner reaches the correct answer”?[45]

 

  b) If so, were those tests conducted by a third party who had no incentive to skew the performance, and did the examiner know [he/she] was being tested?[46]

 

  c) Did the tests used measure this examiner’s capacity to handle the complexity of the task in this case, or were the samples in the tests easier than the samples in this case?[47]

 

On the expert’s individual qualifications:

 

  a) Is the expert certified in fingerprint comparison by a recognized organization in the field?[48] Does that organization evaluate the examiner’s skill through testing?[49] Is the testing realistic, in the sense that it tests the ability to accurately analyze the type of samples the examiner faces in real casework?[50] Has the examiner attempted to become certified but failed?

 

  b) Has the expert received training on the standards developed by a reputable scientific organization?[51]

 

  4. Factors related to the qualifications of the fingerprint laboratory:

 

  a) Is the laboratory or facility in which the fingerprint comparison was done accredited?[52]

 

  b) Does that laboratory or facility have Standard Operating Procedures for fingerprint comparison?[53]

 

  c) Does that laboratory or facility have a system for recording and reporting errors or mistakes?[54]

 

 

[1] This paragraph is based on the instruction approved in United States v. Starzecpyzel, 880 F. Supp. 1027, 1050-51 (S.D.N.Y. 1995).

[2] This paragraph is based on the instruction approved in Starzecpyzel, 880 F. Supp. at 1050-51.

[3] You could include education as well as training and experience in those instances when the witness has a relevant advanced degree.

[4] This paragraph is based on the instruction approved in Starzecpyzel, 880 F. Supp. at 1050-51.

[5] This paragraph is based on the instruction approved in Starzecpyzel, 880 F. Supp. at 1050-51.

[6] This paragraph is based on the instruction approved in Starzecpyzel, 880 F. Supp. at 1050-51.

[7] You could include education as well as training and experience in those instances when the witness has a relevant advanced degree.

[8] PCAST Addendum at 1 (“In its report, PCAST noted that the only way to establish the scientific validity and degree of reliability of a subjective forensic feature-comparison method – that is, one involving significant human judgment – is to test it empirically by seeing how often examiners actually get the right answer.”).

[9] PCAST Report at 6 (“We note, finally, that neither experience, nor judgment, nor good professional practices (such as certification programs and accreditation programs, standardized protocols, proficiency testing, and codes of ethics) can substitute for actual evidence of foundational validity and reliability. The frequency with which a particular pattern or set of features will be observed in different samples, which is an essential element in drawing conclusions, is not a matter of ‘judgment.’ It is an empirical matter for which only empirical evidence is relevant.”).

[10] PCAST Report at 32-33 (“Casework is not scientifically valid research, and experience alone cannot establish scientific validity.”).

[11] PCAST Report at 6 (“Without appropriate estimates of accuracy, an examiner’s statement that two samples are similar—or even indistinguishable—is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact.”)

[12] PCAST Report at 6 (“[A]n expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies.”).

[13] PCAST Report at 95 (“The estimated false positive rates are much higher than the general public (and, by extension, most jurors) would likely believe based on long-standing claims about the accuracy of fingerprint analysis.”)

[14] This paragraph, except for the last sentence, is based on the instruction approved in Starzecpyzel, 880 F. Supp. at 1050-51.

[15] This sentence is based on Cal. Crim. Jury Instruction 332.

[16] American Bar Association Resolution 101C (adopted Feb. 6, 2012) (“RESOLVED, That the American Bar Association urges judges and lawyers to consider the following factors in determining the manner in which expert testimony should be presented to a jury and in instructing the jury in its evaluation of expert scientific testimony in criminal and delinquency proceedings: . . . Whether to include in jury instructions additional specific factors that might be especially important to a jury’s ability to fairly assess the reliability of and weight to be given expert testimony on particular issues in the case”).

[17] PCAST Report at 4.

[18] PCAST Report at 5.

[19] This paragraph is based on the instruction approved in United States v. Starzecpyzel, 880 F. Supp. 1027, 1050-51 (S.D.N.Y. 1995).

[20] This paragraph is based on the instruction approved in United States v. Starzecpyzel, 880 F. Supp. 1027, 1050-51 (S.D.N.Y. 1995).

[21] PCAST Report at 143 (describing requirements of scientific validity and reliability); American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12, 13 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the forensic science technique or theory has undergone validation.”).

[22] PCAST Report at 48 (“The method need not be perfect, but it is clearly essential that its accuracy has been measured based on appropriate empirical testing and is high enough to be appropriate to the application. Without an appropriate estimate of its accuracy, a metrological method is useless – because one has no idea how to interpret its results.”); id. at 5 (describing “essential points of foundational validity”); American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12, 13 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . . The known nature of error associated with the forensic science technique or theory.”).

[23] PCAST Report at 66 (forensic examiners must “demonstrate that the samples used in the foundational studies are relevant to the facts of the case”).

[24] PCAST Report at 47 (Subjective methods “involve significant human judgment.”).

[25] PCAST Report at 47 (“Objective techniques or methods consist of procedures that are each defined with enough standardized and quantifiable detail that they can be performed by either an automated system or human examiners exercising little or no judgment.”); id. (“Objective methods are in general preferable to subjective methods. Analyses that depend on human judgment (rather than a quantitative measure of similarity) are obviously more susceptible to human error, bias, and performance variability across examiners. In contrast, objective quantified methods tend to yield greater accuracy, repeatability and reliability, including reducing variation in results among examiners.”).

[26] PCAST Report at 49 (“subjective methods . . . are especially vulnerable to human error, inconsistency across examiners, and cognitive bias”). See also Dror, I. E., Charlton, D., and A. E. Peron, “Contextual information renders experts vulnerable to making erroneous identifications,” Forensic Science International, Vol. 156 (2006): 74-78; Dror, I. E., and D. Charlton, “Why experts make errors,” Journal of Forensic Identification, Vol. 56, No. 4 (2006): 600-16.

[27] PCAST Report at 5 (“In the forensic feature-comparison disciplines, cognitive bias includes the phenomena that, in certain settings, humans . . . may also be influenced by extraneous information and external pressures about a case.”).

[28] PCAST Report at 10 (“Scientific validity as applied then, requires that an expert . . . discloses whether, when performing the examination, he or she was aware of any other facts of the case that might influence the conclusion.”); see also id. at 31 (“Cognitive bias refers to ways in which human perceptions and judgments can be shaped by factors other than those relevant to the decision at hand. It includes ‘contextual bias,’ where individuals are influenced by irrelevant background information; ‘confirmation bias,’ where individuals interpret information, or look for new evidence, in a way that conforms to their pre-existing beliefs or assumptions; and ‘avoidance of cognitive dissonance,’ where individuals are reluctant to accept new information that is inconsistent with their tentative conclusion.”); id. at 32 (“Several strategies have been proposed for mitigating cognitive bias in forensic laboratories, including managing the flow of information in a crime laboratory to minimize exposure of the forensic analyst to irrelevant contextual information . . . .”).

[29] PCAST Report at 5 (“In the forensic feature-comparison disciplines, cognitive bias includes the phenomena that, in certain settings, humans may tend naturally to focus on similarities between samples and discount differences . . . .”); American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the particular forensic science technique or theory relies on human interpretation that could be tainted by error; [and] The extent to which the forensic science examination in this case may have been influenced by the possibility of bias.”).

[30] PCAST Report at 32 (“Several strategies have been proposed for mitigating cognitive bias in forensic laboratories, including . . . ensuring that examiners work in a linear fashion, documenting their findings about evidence from a crime scene before performing comparisons with samples from a suspect.”).

[31] PCAST Report at 90 (noting that the ACE-V method of latent print comparison is problematic in laboratories that do not conduct “blind” “independent examinations,” because “the second examiner knows the first examiner reached a conclusion of proposed identification, which creates the potential for confirmation bias.”).

[32] PCAST Report at 6 (“Without appropriate estimates of accuracy, an examiner’s statement that two samples are similar—or even indistinguishable—is scientifically meaningless: it has no probative value, and considerable potential for prejudicial impact.”); id. at 53 (“[W]ithout appropriate empirical measurement of a method’s accuracy, the fact that two samples in a particular case show similar features has no probative value—and, as noted above, it may have considerable prejudicial impact because juries will likely incorrectly attach meaning to the observation.”).

[33] PCAST Report at 29 (“All laboratory tests and feature-comparison analyses have non-zero error rates . . . . ”).

[34] PCAST Report at 30 (“Even highly automated tests do not have a zero error rate.”).

[35] PCAST Report at 33 (“[O]ne cannot reliably estimate error rates from casework because one typically does not have independent knowledge of the ‘ground truth’ or ‘right answer.’”); id. at 106 (“Because fingerprint analysis is at present a subjective feature-comparison method, its foundational validity can only be established through multiple independent black box studies . . . .”).

[36] PCAST Report at 6 (“[A]n expert’s expression of confidence based on personal professional experience or expressions of consensus among practitioners about the accuracy of their field is no substitute for error rates estimated from relevant studies.”); id. at 55 (same). See also id. at 54-55 (citing Williams v. United States, 130 A.3d 343, 355 (D.C. 2016) (Easterly, J., concurring) (“As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true.”)).

[37] PCAST Report at 5-6 (“For subjective feature-comparison methods, because the individual steps are not objectively specified, the method must be evaluated as if it were a “black box” in the examiner’s head. Evaluations of validity and reliability must therefore be based on “black-box studies,” in which many examiners render decisions about many independent tests (typically, involving “questioned” samples and one or more “known” samples) and the error rates are determined.”); id. at 49 (“Since the black box in the examiner’s head cannot be examined directly for its foundational basis in science, the foundational validity of subjective methods can be established only through empirical studies of examiner’s performance to determine whether they can provide accurate answers; such studies are referred to as “black- box” studies . . . .”).

[39] PCAST Report at 53 (“To ensure that conclusions are reproducible and robust, there should be multiple studies by separate groups reaching similar conclusions.”).

[40] PCAST Report at 97 ("As noted above, study should be designed and conducted in conjunction with third parties with no stake in the outcome. This important feature was not present in the FBI [fingerprint] study.").

[41] PCAST Report at 101 ("Conclusions of a proposed identification may be scientifically valid, provided that they are accompanied by accurate information about limitations on the reliability of the conclusion – specifically, that (1) only two properly designed studies of the foundational validity and accuracy of latent fingerprint analysis have been conducted, (2) these studies found false positive rates that could be as high as 1 error in 306 cases in one study and 1 error in 18 cases in the other, and (3) because the examiners were aware they were being tested, the actual false positive rate in casework may be higher. At present, claims of higher accuracy are not warranted or scientifically justified. Additional black-box studies are needed to clarify the reliability of the method.").

[42] American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the forensic science examination in this case uses operational procedures and conforms to performance standards established by reputable and knowledgeable scientific organizations.”).

[43] PCAST Report at 66 (“[V]alidity as applied requires that: (a) the forensic examiner must have been shown to be capable of reliably applying the method, . . . and must actually have done so, as demonstrated by the procedures actually used in the case, the results obtained, and the laboratory notes, which should be made available for scientific review by others . . . . “); American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the forensic science examiner followed or did not follow the prescribed scientific methodology during the examination.”); id. at 12 (“The extent to which the forensic science examiner in this case followed the prescribed operational procedures and conformed to the prescribed performance standards in conducting the forensic science examination of the evidence.”).

[44] PCAST Report at 99 (noting the importance of ensuring that forensic “examiners . . . document[] their findings about evidence . . . before performing comparisons” with known samples) and at 102 (“Work by FBI scientists has shown that examiners typically alter the features that they initially mark in a latent print based on comparison with an apparently matching exemplar. Such circular reasoning introduces a serious risk of confirmation bias. Examiners should be required to complete and document their analysis of a latent fingerprint before looking at any known fingerprint and should separately document any additional data used during their comparison and evaluation.”); see also National Commission on Forensic Science, Recommendation to the Attorney General National Code of Professional Responsibility for Forensic Science and Forensic Medicine Service Providers (Adopted March 22, 2016) (Forensic examiners “must . . . [m]ake and retain contemporaneous, clear, complete, and accurate records of all examinations, tests, measurements, and conclusions, in sufficient detail to allow meaningful review and assessment by an independent professional proficient in the discipline.”); cf. ABA Standards for Testing and Interpretation of DNA Evidence 16-3.2(b) Commentary (“The lack of contemporaneously prepared case notes can result in erroneous results.”); Office of Inspector General, Dep’t of Justice, The FBI Laboratory: A Review of Protocol and Practice Vulnerabilities 107 (2004) (“If staff members are allowed to delay recording observations and test results until after they have examined all the items for a case or have completed all of their work for the day, their documentation may not be fully accurate. Also, staff members may be unduly influenced by protocol requirements when relying on memory, and document what they know should have occurred when their recollection is vague.”).

[45] PCAST Report at 57 (“[T]he only way to establish scientifically that an examiner is capable of applying a foundationally valid method is through appropriate empirical testing to measure how often the examiner gets the correct answer. Such empirical testing is often referred to as ‘proficiency testing.’”); see also id. at 101 (“Because latent print analysis, as currently practiced, depends on subjective judgment, it is scientifically unjustified to conclude that a particular examiner is capable of reliably applying the method unless the examiner has undergone regular and rigorous proficiency testing. Unfortunately, it is not possible to assess the appropriateness of current proficiency testing because the test problems are not publicly released. (As emphasized previously, training and experience are no substitute, because neither provide any assurance that the examiner can apply the method reliably.)”).

[46] PCAST Report at 57 (“To ensure integrity, proficiency testing should be overseen by a disinterested third party that has no institutional or financial incentive to skew performance.”); id. at 58 (“Finally, proficiency testing should ideally be conducted in a ‘test-blind’ manner—that is, with samples inserted into the flow of casework such that examiners do not know that they are being tested.”).

[47] PCAST Report at 57 (“Proficiency testing should be performed under conditions that are representative of casework and on samples, for which the true answer is known, that are representative of the full range of sample types and quality likely to be encountered in casework in the intended application. ”).

[48] For example, the International Association for Identification (IAI) has a certification program; see https://www.theiai.org/certifications/latent_print/index.php

[49] American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . Whether the forensic science examiner has been certified in the relevant field by a recognized body that evaluates competency by testing.”)

[50] PCAST Report at 57 (“Proficiency testing should be performed under conditions that are representative of casework and on samples, for which the true answer is known, that are representative of the full range of sample types and quality likely to be encountered in casework in the intended application.”) and at 57-59 (discussing proficiency testing).

[51] American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the forensic science examination in this case uses operational procedures and conforms to performance standards established by reputable and knowledgeable scientific organizations; [and] The extent to which the forensic science examiner in this case followed the prescribed operational procedures and conformed to the prescribed performance standards in conducting the forensic science examination of the evidence.”).

[52] American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . Whether the facility is accredited by a recognized body if accreditation is appropriate for that facility.”).

[53] American Bar Association Resolution 101C (adopted Feb. 6, 2012) at 11-12 (“The court should consider whether additional factors such as those set forth below might be especially important to a jury’s ability to fairly assess the reliability of and the weight to be given testimony on a particular issue . . . The extent to which the forensic science examination in this case uses operational procedures and conforms to performance standards established by reputable and knowledgeable scientific organizations.”).

[54] Cf. SWGGUN Quality Assurance Guideline 8.5 Revised Apr. 7, 2009, at 4 (“Documentation of the verification should be indicated in the case record.”); id. at 8.6 (“The laboratory shall have a procedure for addressing situations involving a discrepancy of conclusions.”); id. at 12.3 (“Corrective action procedures shall be established and followed for problems identified during testimony monitoring.”).