
MINDING YOUR P AND Q VALUES: Unpacking Social Science Research in Forensic Custody Evaluations

Child-related matters in family law are no less complex than financial litigation and require just as much numeracy. That’s right, we cannot escape doing the math, even in custody cases. But the math involved in these kinds of cases is not like the math of our balance sheets, business valuations and compensation analyses. What is required, primarily, is knowledge of and basic facility with two specific branches of mathematics: statistics and probability. Why is this so?


In many forensic custody evaluation reports, the mental health professional will cite social science research conducted by others. Typically, this research will be cited as support for a particular contention relevant to the custody expert’s opinion. For a very simple example: “children in X circumstance have been shown to be Y times more likely to experience Z.” Perhaps it is “children of divorced parents are at 5 times greater risk of X.” Custody experts will place many different pieces of research into a report in order to buttress their conclusions and opinions. But in doing so, the custody expert has also given the trial lawyer a clear opening. And it is one of the most under-appreciated openings in custody litigation.


The opening is this: there is a lot of bad social science research. Even more specifically, there is a lot of bad research in the field of psychology. Many of us are familiar with the so-called “replication crisis” that began getting attention over the course of the last decade. Essentially, in a science, a test or experiment should be reproducible; that is part of the scientific method. Where a test or experiment cannot be reproduced or replicated, there is a problem. Psychology in particular has been at the beating heart of this crisis, for a number of reasons beyond the scope of this writing. Briefly, though, an array of questionable research practices has been exposed over time, including frequently slipshod methodology and dodgy determinations of statistical significance.

Statistics, at its core, is the systematic collecting, organizing, analyzing and interpreting of data. In other words, every element of working with data falls within the discipline. Probability refers to how likely something is to occur or to be true. These two branches of mathematics are directly relevant in a social science where research is done, where data are collected and interpreted, and where assertions of likelihood are expressed.
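To make the significance problem concrete, here is a minimal sketch in Python, using purely simulated data rather than any actual study. It illustrates one well-known mechanism behind unreplicable findings: when many comparisons are run on pure noise, roughly one in twenty will cross the conventional p < 0.05 threshold by chance alone.

```python
# A minimal sketch (simulated data only) of why "p < 0.05" findings can
# fail to replicate: when many comparisons are run on pure noise, about
# 5% of them will come back "statistically significant" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_tests = 20
false_positives = 0

for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution, so any observed
    # "effect" between them is nothing but sampling noise.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} noise-only comparisons were 'significant'")
```

A researcher who runs twenty such comparisons and reports only the one that crosses the threshold has produced a “finding” that is quite unlikely to replicate.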


What makes the issue acute from the standpoint of the custody evaluation is that the custody expert is relying on the research in order to make a kind of probabilistic statement of harm or risk in a very specific case, extrapolating from the general to the particular. Consider a purely hypothetical assertion that could be cited in a report: “a study conducted by Dr. John Doe has shown that children whose parents move more than 50 miles away are 15% more likely to drop out of high school.” Ok. Why is that bit of research in the report? It is there to indicate that the evaluator’s conclusions and opinions are sensitive to and cognizant of the possibility that a relocating child may face a heightened risk of an adverse outcome. Usually, such risk highlighting is not just a one-off in these reports.


The expert wants support. So a string citation will often list a parade of horribles: more likely to use drugs, more likely to run away, more likely to get pregnant early, more likely to go to prison, etc. Occasionally, there will be a list of benefits instead, when that is what buttresses the opinion. In all events, by grounding himself within the research, the expert is telling the Court, which must decide an issue related to little Johnny or Jane (two specific children in the hypothetical case), that experts have determined something relevant and generally applicable about kids who experience a particular variable. Yet even where the research is potentially valid and methodologically sound, at best it (such as the hypothetical above) can only ever be a statement of relative risk.


In other words, a 15% increased likelihood of bad thing X, if Y occurs or is present, only means something if you know the absolute risk of bad thing X. If 0.5% of ALL children experience bad thing X (regardless of Y), then a 15% increase will look very different from how it would look if 50% of all children were susceptible to bad thing X (regardless of Y). In the first case, the relative increase moves the risk from 0.5% to roughly 0.6%; in the second, it moves the risk from 50% to 57.5%.
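For those who want the arithmetic laid out, here is a minimal sketch in Python using the hypothetical figures from the text (the 15% increase and the 0.5% and 50% baselines are illustrative only):

```python
# A worked example of relative vs. absolute risk, using the hypothetical
# numbers discussed above. The same 15% RELATIVE increase translates into
# very different ABSOLUTE changes depending on the baseline rate.
def risk_after_relative_increase(baseline: float, relative_increase: float) -> float:
    """Apply a relative increase (e.g., 0.15 for 15%) to a baseline risk."""
    return baseline * (1.0 + relative_increase)

for baseline in (0.005, 0.50):  # 0.5% vs. 50% baseline risk of "bad thing X"
    new_risk = risk_after_relative_increase(baseline, 0.15)
    print(f"baseline {baseline:.1%} -> {new_risk:.3%} "
          f"(absolute change: {new_risk - baseline:.3%})")
```

On a 0.5% baseline, the absolute change is less than a tenth of a percentage point; on a 50% baseline, it is seven and a half points. The cited “15%” is identical in both cases, but what it means for little Johnny or Jane is not.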


In light of all these considerations, the attorney in a custody matter cannot ever assume that the forensic custody expert has any detailed familiarity with the statistical validity or reliability of the research he or she has cited. Presumably, if she or he did, the citation might not have been made, or it would have been defended proactively in some fashion. That said, the expert could simply be relying on the attorney not going that “deep” into the research. Regardless, given the panoply of issues in social science research, it is advisable for the legal practitioner to take a very jaundiced and cautious view toward ANY citation of such research within a forensic custody report. From the sample size to the methodological design and structure of the research, there are numerous respects in which the research may be fundamentally of little to no use. And this matters. It matters because the court is looking to an expert for answers. That expert has turned to other experts for data points. If those data points are not reliable, then the impact potentially cascades through the entirety of the custody process.
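On the sample size point in particular, a minimal sketch (with hypothetical numbers, not drawn from any real study) shows why a small study deserves skepticism: the same observed rate is far less informative when it rests on a few dozen subjects than on a few thousand.

```python
# A minimal sketch (hypothetical numbers) of how sample size drives the
# uncertainty around an observed rate, via a simple normal-approximation
# 95% confidence interval for a proportion.
import math

def approx_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for an observed proportion."""
    margin = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)

for n in (40, 400, 4000):
    lo, hi = approx_ci(0.15, n)  # the same observed 15% rate at each sample size
    print(f"n = {n:>4}: 95% CI roughly {lo:.1%} to {hi:.1%}")
```

A cited “15%” that could just as easily be 4% or 26% is a very different evidentiary animal from one pinned down within a point or two.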


While the custody expert is truly an expert, he or she typically is NOT an expert in statistics or the proper design and application of research. Indeed, many social scientists who actually generate and conduct research are not, themselves, experts in statistics or probability; they are users of statistical and scientific methods, and often poor ones at that. Because of this, a soft underbelly of forensic custody reports is often found in the weeds of the research, in the data. While we lawyers are usually not conversant with these matters either, we can level the playing field far more than we realize. How? By learning the structure of research, the discipline of basic statistics, and the principles of proper research design. Even a small amount of knowledge in this area can provide outsized returns in deposition and at trial.


In my own litigation experience, I have deposed or cross-examined numerous mental health professionals and exposed fundamental flaws and weaknesses in the research they relied upon. In doing this, we as advocates for our clients not only demonstrate a substantive flaw within the report, but we also implicitly call into question the credibility and reliability of the expert himself. The thorough dismantling of research, therefore, yields material gains beyond the simple refutation of a single data point or conclusion.


And this is where we come to the “art” of unpacking that research. Once one is armed with the information and ability necessary to comprehend, assess and weigh the research independently for oneself, more than half the battle has been fought and won. The practitioner is now equipped to thoroughly examine the custody expert about what level of familiarity he or she has with the research. The litigator can probe the extent of the expert’s own independent assessment of the data; and in doing so, we can drill down as far as necessary to get to the bedrock point where the expert potentially erred. And there is always a bedrock. Why? Because the expert saw fit to include that specific research in his or her report. The expert relied upon that research to some extent and found it relevant and material to the formation of the opinion. If the expert did not think it important for some purpose, it simply would not be in the report. How do we know that? Because by the point we get there in our inquiry or examination, the expert has been locked in and committed to the materiality of EVERYTHING stated in the report.


Of course, HOW one specifically does all of that is the “special sauce” of custody litigation. Every competent trial lawyer will approach the task, in each circumstance and with each expert, in the particular manner he or she has developed and designed to elicit the answers needed to establish or disrupt a particular point. In this way, the examination is mechanically, sequentially and thematically very similar to any destructive or constructive cross-examination. To be sure, there are certain recurring “set pieces” that one will use to build up certain chapters of the examination regarding the research. But overall, the artistic aspect of the unpacking is no different from the art we litigators bring to dismantling the testimony of any expert witness.


As a whole, experienced trial lawyers in family law are adept at ferreting out inconsistencies or gaps in a report. We can detect the over- or under-weighting of certain facts versus others. We can suss out the lack of a nexus between a data point, a conclusion and an opinion. We know how to find and expose personal bias (and can sometimes do the same with respect to cognitive biases). Talented litigators will also be able to critically engage with the psychological testing conducted by the expert. But overall we are, to be honest, less able to engage masterfully and meaningfully with all aspects of the expert’s cited research. It is my hope that we who work in this field sharpen our statistical saws and hone this critical skill, for our personal and professional improvement and for the benefit of our clients’ cases.

