Forensics Transition Workshop
May 9-11, 2016
SPEAKER TITLES/ABSTRACTS
Maria Cuellar
Carnegie Mellon University
“Shaken Baby Syndrome On Trial: Sources of Contextual Bias”
Over 1,100 individuals are in prison today on charges related to the diagnosis of Shaken Baby
Syndrome (SBS). In recent years this diagnosis has come under scrutiny, and more than 20 convictions
made on the basis of SBS have been overturned (Medill Justice Project, 2015). The overturned
convictions have fueled a controversy about alleged cases of SBS. I review the arguments made by the
prosecution and defense in cases related to SBS and point out a problem with contextual bias. To
resolve this problem, I suggest that only the task-relevant information be provided to the individual
who determines the diagnosis. I also suggest that in order for this to be possible, there must be a
change in the definition of SBS so it does not include the manner in which the injuries were caused. I
close with recommendations to researchers in statistics and the law about how to attempt to avoid
contextual bias in SBS trials.
Heidi Eldridge
RTI International
“Generic Principles for the Selection of Databases to Represent the Background Population”
Forensic scientists are concerned with the inference of the source of traces of interest through their
comparisons with material of known origin. Since the late 1980s and the advent of forensic DNA
profiling, it has become increasingly common for forensic scientists to support the probative value of their
evidence using some measure of its rarity in a “relevant” population of potential sources. Courts and
scientists’ opinions have regularly diverged on which population to use to estimate this measure of
rarity. For example, in some legal cases, the choice of a database of potential sources similar to the
suspect has been put forward, while in some others, a database representing a population too wide to
be relevant was requested.
In this talk, we propose a formal analysis of this issue, which shows that database selection is driven
by the propositions considered by the prosecution and the defense, as well as by the type of crime and
type of evidence considered. We discuss how best to determine what is "relevant," as well as who is the
appropriate party to make that determination.
Co-authors: Prof. Colin G.G. Aitken, University of Edinburgh and Dr. Cedric Neumann, South Dakota
State University
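For background, the probative value discussed in this abstract is commonly expressed as a likelihood
ratio, a standard formulation from the forensic statistics literature (given here as context, not as this
talk's specific model):

    $LR = \frac{\Pr(E \mid H_p, I)}{\Pr(E \mid H_d, I)}$

where $E$ denotes the observed correspondence between the trace and the material of known origin,
$H_p$ and $H_d$ are the propositions of the prosecution and the defense, and $I$ is the background
information. Under common assumptions, the denominator reduces to the rarity of the corresponding
characteristics in the relevant population of potential sources, which is precisely why the choice of
database used to estimate that rarity matters.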
Karen Kafadar
University of Virginia
“Quantifying the Information Content in Pattern Evidence”
We describe a metric that quantifies the "information content" of individual features ("minutiae") in a
latent fingerprint. Currently, latent print "quality" is based on an overall score for an entire print.
However, minutiae of sufficiently high quality can be useful for identification, even in prints having
large sections of low resolution. We develop a score with a scale of 0-100 (low to high quality) that
characterizes (via gradients) the clarity of a feature, and then show results of quality scores on minutiae
from NIST's public SD27a latent fingerprint database containing prints judged by "experts" as "good,"
"bad," or "ugly." The scores correlate well with the general classification and serve as objective, versus
subject, measures of minutiae "information content".
(Joint work with Dr. Adele Peskin, NIST-Boulder)
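As a rough illustration of a gradient-based clarity measure (a minimal sketch only: the structure-tensor
coherence used here is an assumed stand-in, and the actual metric, its gradient computation, and its
calibration to the 0-100 scale are not specified in this abstract):

    import numpy as np

    def minutia_clarity_score(patch):
        # Hypothetical 0-100 clarity score for a grayscale patch around a
        # minutia. Clear ridge flow yields strongly anisotropic gradients
        # (score near 100); blur or noise yields isotropic gradients (score
        # near 0). An illustrative stand-in, not the talk's actual metric.
        gy, gx = np.gradient(patch.astype(float))
        jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
        gap = np.hypot(jxx - jyy, 2.0 * jxy)   # eigenvalue gap of the tensor
        total = jxx + jyy
        coherence = gap / total if total > 1e-12 else 0.0   # in [0, 1]
        return 100.0 * coherence

    # A synthetic patch with strong parallel ridges scores near 100;
    # pure noise scores low.
    x = np.linspace(0, 8 * np.pi, 64)
    print(minutia_clarity_score(np.tile(np.sin(x), (64, 1))))
    print(minutia_clarity_score(np.random.rand(64, 64)))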
Naomi Kaplan Damary
Hebrew University
"Shoe Prints: An Exploratory Analysis of the Relationship among Accidental Mark Characteristics"
In order to determine the evidential value of shoe print evidence, experts study accidental marks as a
way to determine the degree of the print's rarity. Though the characteristics of such marks are assessed
separately, criminal investigators try to find a way to combine them into an objective measure that can
be presented in court. The 2009 NRC report, Strengthening Forensic Science in the United States: A
Path Forward, calls for the establishment of a scientific basis for forensic procedures including shoe
prints. In a step toward the realization of this goal, this presentation will examine the relationships
among three characteristics of an accidental mark (its location, shape, and orientation or angle), as
well as their connection with a specific shoe, through the use of chi-square independence tests. The
analysis is then repeated taking into account the specific pattern of the shoe sole. It is shown that the
characteristics of an accidental are not independent of one another, and that they are also not
independent of the shoe. Some of these dependencies change when conditioning on the pattern of the
shoe sole. For example, the relationships between orientation and location and between orientation
and the shoe, neither of which is independent, become independent when conditioning on the pattern
of the shoe sole. This suggests that some of the dependencies found are caused, at least partially, by
the elements of these patterns. Other dependencies are probably caused by factors specific to the shoe,
such as walking patterns and ground surface. Thus, a further analysis of the structure of this
dependency must be conducted in order to determine how to calculate an accidental’s degree of rarity.
The result will naturally influence the extent to which a combination of accidentals is seen to be rare,
and hence the rarity of the shoe sole in its entirety.
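As a minimal sketch of the chi-square independence tests mentioned above (the counts and categories
below are invented for illustration and are not from the study):

    from scipy.stats import chi2_contingency
    import numpy as np

    # Hypothetical contingency table: rows = accidental shapes,
    # columns = sole locations; the study's actual categories differ.
    table = np.array([[34, 21, 11],
                      [15, 28, 19],
                      [ 8, 12, 25]])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
    # A small p-value is evidence against independence of shape and location.
    # Conditioning on the sole pattern amounts to running the same test
    # within each pattern's subtable.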
Michael Lavine
University of Massachusetts
“Modeling the Spatial Relationship between Features of Forensic Interest in Fingerprints”
The quantification of the weight of forensic evidence involves characterizing the probability
distribution of complex and heterogeneous vectors representing friction ridge features. In particular,
studies have shown that friction ridge features are not independent and that their spatial relationship is
a key contributor to the probative value of a trace recovered at a crime scene. Several models have
attempted to account for this spatial relationship, but none has proposed a formal solution to this issue.
During this talk, we present a model that formally characterizes the probability distribution of spatial
relationships between friction ridge features and enables the quantification of the weight of fingerprint
evidence. The model can easily be extended to other types of evidence where the spatial relationship
between features is important, such as shoeprint evidence.
Co-author: Dr. Cedric Neumann, South Dakota State University
Lucas Mentch
SAMSI
“Making Sense of 'Making a Murderer'”
At the end of 2015, Netflix released the documentary series 'Making a Murderer' which followed the
murder trial of Steven Avery, a man who had previously been wrongfully convicted of sexual assault.
The series makes the case that much of the evidence against Avery was suspect and the forensic
evidence unreliable. In this talk, we'll discuss a number of forensic-related aspects in the Avery case,
why much of this evidence was unreliable, and how these issues continue to appear in a number of
high-profile cases. We'll focus in particular on the EDTA testing procedures developed by the FBI for
this case and provide updates on our efforts to evaluate the statistical properties of these tests.
Cedric Neumann
South Dakota State University
“Similarity-Based Models for the Quantification of the Weight of Forensic Evidence”
Observations made on items of forensic interest are difficult to summarize and describe
mathematically. In particular, pattern evidence, such as fingerprint or shoeprint evidence, displays
many different types of features that can be represented by different types of variables. The high
dimension and heterogeneous nature of the variables observed on pattern evidence make it difficult
(not to say impossible) to characterize their joint likelihood structure. Since 1892, several models have
been proposed to quantify the probative value of fingerprint evidence. These models have
either heavily relied on the assumption of independence between fingerprint features, or on the
measure of similarity between pairs of observed patterns to reduce the dimension of the
problem. During this talk, we will briefly review the drive behind the need for statistical models to
quantify the weight of forensic evidence, and we will discuss the severe limitations of "similarity-based" statistical models in forensic science. We believe that these limitations are such that
“similarity-based” models should not be used in the currently advocated Bayesian paradigm.
Authors: Dr. Cedric Neumann (together with Madeline Ausdemore, Jessie Hendricks, Damon Bayer,
Douglas Armstrong, Danica Ommen and Dr. Christopher Saunders)
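To make the distinction concrete, here is a standard formulation from the score-based likelihood ratio
literature (given as background; the notation is ours, not the talk's): a feature-based model evaluates
the observations directly, while a "similarity-based" model first reduces the compared patterns to a
scalar score $s$,

    $LR = \frac{\Pr(y, x \mid H_p)}{\Pr(y, x \mid H_d)}$ versus $SLR = \frac{f(s(y, x) \mid H_p)}{f(s(y, x) \mid H_d)}$

where $y$ and $x$ denote the features observed on the trace and on the known-source material, and
$H_p$ and $H_d$ are the competing propositions. The reduction to $s$ makes the densities tractable
but discards information, which is one root of the limitations discussed above.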
“Possible Options to Improve AFIS Workflow and Output Using Fingerprint Statistics”
The notion of “fingerprint statistics” is usually associated with the attempt to demonstrate that friction
ridge characteristics are unique or with the quantification of the probative value of a fingerprint
comparison through the calculation of a so-called likelihood ratio. Recently, research has focused on
the development of “quality metrics” to support the various decisions made by fingerprint examiners
during the multiple stages of the comparison of a latent print to a control impression. However, very
little has been attempted to improve the transparency and overall reliability of the decisions made when
no suspect is available and the latent print must be searched against a database of reference prints.
During this talk, we will present a model aimed at supporting examiners in the AFIS (Automated
Fingerprint Identification System) context. This model can support examiners in making objective
determinations on the suitability of any given print for AFIS purposes. Ultimately, our model can enable
examiners to streamline workflow and dynamically allocate resources depending on workload or case
complexity.
Co-authors: Douglas Armstrong and Teresa Wu
Robin Richter
University of Göttingen
“A Quality Measure Based on the Global 3 Parts Decomposition”
The global 3 parts decomposition, given by $f=u+v+n$, returns a cartoon, a texture, and a residual
image, respectively. Furthermore, it marks a region of interest. The texture component $v$ can be used
as an indicator to mark areas where there is no texture due to smudge and dryness. Guided by the
binarization of $v$, we propose a method that produces two correlated noise models for the "smudge"-noise and the
"dryness"-noise respectively. This can be used as a global or local quality measure and may be linked
with the minutiae distribution of fingerprints to yield a weighted local quality measure.
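A minimal sketch of how a binarized texture component might flag textureless regions (the
decomposition itself, the noise models, and any thresholds are the subject of the talk and are not
reproduced here; the block size and threshold below are illustrative assumptions):

    import numpy as np

    def texture_quality_mask(v, block=16, thresh=0.05):
        # Hypothetical local quality map from a texture component v: blocks
        # whose mean texture energy falls below `thresh` are flagged as
        # textureless (candidate smudge or dryness regions).
        h, w = v.shape
        mask = np.zeros((h // block, w // block), dtype=bool)
        for i in range(mask.shape[0]):
            for j in range(mask.shape[1]):
                patch = v[i * block:(i + 1) * block, j * block:(j + 1) * block]
                mask[i, j] = np.abs(patch).mean() >= thresh
        return mask   # True = textured (usable); False = smudge/dry candidate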
“Decomposing Shoeprint Images”
In forensic analysis of shoeprints, elements and accidentals play the role of ridge line patterns and
minutiae for fingerprints. All analysis is based on reliably extracting elements and accidentals, be it by
specialists, who may, of course, introduce bias, or, more desirably, in a highly automated way. We
have begun developing the ElementSensor to address semi-automated element extraction.
As a first application, this is used to obtain the distribution of contact surfaces in the Jerusalem
Shoeprint Accidentals Dataset, which can then be used to study the distribution of accidentals
conditioned on the elements.
Donia Slack
RTI International
“Technology Transition through the Forensic Technology Center of Excellence”
The National Institute of Justice (NIJ) improves the practice of forensic science by supporting a robust
research and development program, which focuses on basic and applied forensic research. To
strengthen its impact on technology corroboration, evaluation, and adoption, the NIJ has created a
Forensic Technology Center of Excellence (FTCoE). The main mission of this FTCoE is to assist in
the transition of law enforcement technology from the laboratory into practice by first adopters within
the criminal justice community. RTI International has managed the NIJ’s FTCoE since 2011 and in
this role has provided the forensic community with hundreds of resources aimed at addressing
challenges, advancing technology, and sharing knowledge. The FTCoE is known in the forensic
community as a resource that any practitioner can rely on for assistance not only with the
dissemination of their research but also with the transition of their technology. This presentation will
introduce the audience to what the FTCoE is, describe what it has done in the way of technology
transition, outline ways in which the community can use the FTCoE to disseminate knowledge, and
explain how the FTCoE can serve practitioners of the forensic and criminal justice communities.
Cliff Spiegelman
Texas A&M University
“Why there is Reason to be Optimistic about Shoeprints”
This talk shows how the SAMSI shoeprint group collaborated on moving shoeprint research forward.
It discusses not only the open research atmosphere but also the problems that are being addressed and
those that have already been addressed. This workgroup can serve as a template for forensic research
teams.
Co-authors: Sarena Wiesner, Yaron Shor, Yoram Yekutieli, Naomi Kaplan, David Sheets, Stephan
Huckemann, Vered Madar, David Banks
"When Marker Study Designs Fail, the Markers Follow"
This work examines some of the poor study designs used in most non-FDA approved biomarker
studies and draws parallels to analogous indicators utilized in forensic science. We show how poor
sensitivity (sensitivity being the ability to detect a condition) and weak specificity (specificity being
the ability to detect only the targeted condition) both follow from flawed study designs. We discuss
examples such as AHT markers where
the most evident source of bias is that the study populations in these efforts are nonrandom. The study
populations were selected from children’s hospitals with physicians and child abuse specialists who
were aware of and interested in AHT. This is akin to looking for markers for battered women by using
women only at battered women’s shelters.
Co-authors: Maria Cuellar, and Lucas Mentch
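To fix terms with a worked example (the counts below are invented, not data from the studies
discussed): sensitivity and specificity are computed from a confusion matrix, and a marker that looks
strong in an enriched study population can perform poorly in a general one:

    # Hypothetical counts from a marker study.
    tp, fn = 90, 10    # condition present: detected / missed
    tn, fp = 60, 40    # condition absent: correctly cleared / falsely flagged

    sensitivity = tp / (tp + fn)    # ability to detect the condition
    specificity = tn / (tn + fp)    # ability to flag only the condition
    print(sensitivity, specificity)   # 0.9 0.6

    # Applied where the condition is rare, the same marker flags mostly
    # unaffected individuals (Bayes' rule for positive predictive value):
    prevalence = 0.01
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    print(f"PPV at 1% prevalence: {ppv:.3f}")   # ~0.022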
Mark D. Stolorow
NIST
“OSAC Standards Implementation”
The objective of the Organization of Scientific Area Committees for Forensic Science (OSAC) is to
create a sustainable organizational infrastructure that produces consensus documentary standards and
guidelines to improve quality and consistency of work in the forensic science community. To
facilitate implementation of these standards and guidelines, OSAC operates under the premise that the
documents must have input from the user community as well as from academic researchers,
measurement scientists, and statisticians; be written in a way that is implementable by the user
community; and be freely available to the user community. Since its inception, OSAC has focused on
making standards and guidelines readily available to the forensic science community and criminal
justice stakeholders and on encouraging voluntary adoption of these standards and guidelines by crime
laboratory managers, with eventual enforcement by forensic science accrediting bodies.
brief introduction of the infrastructure of OSAC and its operational policies enabling the organization
to implement OSAC standards and guidelines in conformance with its long-term strategic goals.
Finally, the presentation will describe potential enforcement strategies for adopting new OSAC
standards. The presentation will be followed by a question and answer period for SAMSI Forensics
Transition Workshop participants.
Henry Swofford
US Army Criminal Investigation Laboratory
"Development and Evaluation of a Model to Quantify the Weight of Fingerprint Evidence"
This presentation will introduce attendees to a novel method for quantifying the weight of fingerprint
evidence, which has been developed and is currently undergoing validation by the U.S. Army Criminal
Investigation Laboratory.
After attending this presentation, attendees will understand the mathematical concepts underlying this
method, the results of preliminary evaluations against mated and non-mated fingerprints obtained
from a database of several million fingerprints, and the on-going validation efforts to facilitate the
transition of this technology into practice.
Fingerprint analysts are faced with tremendous challenges when performing fingerprint comparisons
and evaluating the significance of their findings. Not only are their analyses and comparisons typically
performed visually without any tools capable of producing quantitative and statistically relevant data to
assist in their interpretation of the evidence, but they must render and defend conclusions of source
attribution based solely on their individual training and experience. Furthermore, these decisions are
made without any formal or nationally accepted criteria or thresholds. Without tools capable of
assisting analysts with their interpretation of the evidence, and without standardized criteria on which
decisions can be based, analysts have no internal quality assurance mechanism to protect them from
making erroneous decisions, especially when faced with comparisons from large database searches,
other than the subjective examination of other analysts, which is valuable but not perfect, as it is
subject to similar limitations. This presentation will discuss a novel, empirically derived approach for
evaluating and
quantifying the weight of fingerprint evidence based on the geospatial arrangement of friction skin
features. Preliminary evaluations of the software against the most similar feature configurations
detected by an Automated Fingerprint Identification System (AFIS) containing approximately one
billion fingerprints demonstrate the technology’s capability to distinguish between similar non-mated
fingerprints derived from database searches and mated fingerprints. On-going validation efforts
involve a large scale, systematic study characterizing the similarity of various feature configurations
between several hundred fingerprints known to have originated from the same source as well as
between the most similar fingerprint configurations from several hundred fingerprints known to have
originated from different sources obtained from a database search of approximately one hundred
million other fingerprints. This technology and underlying data provide a quantitative, transparent
and more objective foundation to demonstrate the weight of fingerprint associations to fact-finders.
Furthermore, such a capability offers the potential to leverage greater investigative information from
partial fingerprints evaluated as “no value” under traditional methods of subjective, non-quantitative
interpretation. These results along with policy guidelines which may be developed from these data will
be presented along with the potential for transferring this technology into practice.
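As a generic sketch of the kind of mated versus non-mated evaluation described above (the scores
below are simulated; the actual scoring method and data are not given in this abstract):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical similarity scores for mated and non-mated comparisons.
    mated = rng.normal(7.0, 1.0, 500)
    non_mated = rng.normal(3.0, 1.0, 500)

    threshold = 5.0   # illustrative decision point
    print(f"TPR = {(mated >= threshold).mean():.3f}, "
          f"FPR = {(non_mated >= threshold).mean():.3f}")
    # Separation between the two score distributions is what lets a system
    # distinguish mated pairs from the most similar non-mated pairs found
    # in a database search.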
Disclaimer: The opinions or assertions contained herein are the private views of the authors and are not
to be construed as official or as reflecting the views of the United States Department of the Army or
United States Department of Defense.
Duy Hoang Thai
SAMSI
“Textured Image Deconvolution and Decomposition”
Approximation theory is at the heart of image analysis, especially image deconvolution and
decomposition. For piecewise smooth images, there are many methods that have been developed
over the past several decades. The goal of this study is to illustrate a difficult issue in texture analysis
of images that has forensic applications (e.g., to fingerprinting, ballistic images, and shoe prints). In
particular, it is known that texture information is almost destroyed by a blur operator, such as the blur
that arises when a ballistic image is captured by a low-cost microscope. The contribution of this work
is twofold.
First, we propose a mathematical model for textured image deconvolution and decomposition into
several meaningful components. That deconvolution uses a fourth-order PDE approach based on the
directional mean curvature. Second, we discover a link between functional analysis and multiscale
sampling theory, as in harmonic analysis and filter banks. This is preliminary work for a challenging
project in estimation of image quality. It requires extensive pre-processing steps and approximation
theory.
(Joint work with Dr. David Banks, Duke University)
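One generic way to pose such a problem (an illustrative formulation consistent with the $f=u+v+n$
decomposition used elsewhere in this program, not necessarily the exact model of this talk) is to write
the observed image as $f = K \ast (u + v) + \epsilon$, with blur kernel $K$, cartoon $u$, texture $v$,
and residual noise $\epsilon$, and to recover the components by minimizing an energy of the form

    $E(u, v) = \|f - K \ast (u + v)\|_2^2 + \lambda_1 R_1(u) + \lambda_2 R_2(v)$

in which $R_1$ and $R_2$ encode the piecewise-smooth prior on $u$ and the oscillatory prior on $v$,
with weights $\lambda_1, \lambda_2 > 0$. A fourth-order PDE based on directional mean curvature, as
mentioned above, would correspond to one particular choice of regularization and descent scheme.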
William Thompson
University of California, Irvine
“Optimizing Human Performance in Crime Laboratories through Testing and Feedback”
A number of forensic laboratories have adopted context management procedures that "blind" bench-level analysts to task-irrelevant case information. While the primary goal of these procedures is
minimizing bias, they also create important new opportunities for blind testing of analysts'
performance. This presentation will discuss the blind testing procedures that are beginning to be
adopted by progressive forensic laboratories as part of rigorous quality assurance efforts. It will
discuss the opportunities these programs create for the validation of forensic methods; for systematic
testing of the strengths, weaknesses, and limits of forensic expertise; and for the evaluation of various
aspects of forensic identification, including database searching.
Sarena Wiesner
Israeli Police Forensic Center
“Shoeprints: The Path from Practice to Science”
At present, the identification of a shoe is based on the comparison by an expert of a shoeprint found at
the crime scene with a test impression made from the suspect's shoe. Significance is given to the
characteristics that result from random processes, for example wear and tear, that cause a change in the
shapes of the shoe sole elements. It is customary to use an ordinal scale to present the level of
confidence that the test impression and the crime scene print originated from the same source. Each
grade describes the degree of correlation between the crime scene print and the print of the suspect's
shoe. Current methods lack formal statistical analyses, and fail to provide a scientific and quantitative
scale for assessing the match between a crime scene print and a suspect's shoe.
This presentation will survey current procedures of evaluating shoe print evidence from the collection
stage to the submission of testimony in court. Rising to the challenge of the National Academy of
Sciences (Strengthening Forensic Science in the United States: A Path Forward, 2009), it will
propose initial steps that we hope will carry us along the path from practice to science.
These steps will include putting together a detailed SOP, improving quality control (blind testing,
performance testing), creating a large representative database, developing a model that explains the
creation of accidentals on shoe soles, reducing bias in the comparison process, studying the various
types of noise that exist in crime scene shoeprints, and performing quality control tests that will allow
the expert to be treated as a black box with a known error rate.
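As a sketch of that final step, treating the examiner as a black box: outcomes of blind and performance
tests can be summarized by a simple binomial error-rate estimate with a confidence interval (the counts
below are invented; a Wilson score interval is one standard choice for rates near zero):

    from statistics import NormalDist

    errors, trials = 3, 200          # hypothetical blind-test record
    p_hat = errors / trials

    # Wilson score interval at 95% confidence.
    z = NormalDist().inv_cdf(0.975)
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half = z * (p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)) ** 0.5 / denom
    print(f"error rate = {p_hat:.3f}, "
          f"95% CI = ({center - half:.3f}, {center + half:.3f})")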