This post may not work, because there is nothing funny about murder. Except Murder She Wrote. That was kind of funny–unintentionally. And Columbo. Peter Falk was very funny.
But real murder isn’t funny at all, which is why this may need to get yanked down faster than a Roseanne Barr tweet about African-American history. Should that be the case, please accept my tearful apology in advance.
I was inspired to write this post by an article in the New York Times Magazine about Joe Bryan. Mr. Bryan, a former high school principal, was convicted in 1985 of murdering his wife of sixteen years. He has been in prison for over three decades.
Mr. Bryan was convicted twice, the second time after an appellate court declared his first trial to be flawed. The evidence the State presented against him on both occasions was very thin. As the Times article states,
“Prosecutors asked the jury to believe that between 9:15 p.m. on Oct. 14, 1985, when the Bryans spoke by phone, and the following morning, when Mickey (Joe’s murdered wife) was found shot to death, Joe slipped out of his hotel in Austin; drove 120 miles to Clifton, at night, through heavy rain, even though he had an eye condition that made night driving difficult; shot his wife, with whom he had no history of conflict; drove 120 miles back to Austin; re-entered the hotel; and stole upstairs to his room—all in time to clean up and attend the conference’s morning session, and all without leaving behind a single eyewitness.”
Furthermore, the crack Texas lawmen investigating the case determined that Mr. Bryan was “queer” because he didn’t play poker or go fishing and instead liked to bake pies. This gay affliction, they concluded, must have driven him to murder his wife through pent-up sexual frustration.
Formidable logic, to be sure, but seemingly not the stuff of quick convictions, even in Texas. What could have caused a jury to throw the book at Joe?
The prosecution presented a police “expert” who testified about a flashlight found in the trunk of Mr. Bryan’s car, four days after the murder. The lens of the flashlight had some tiny flecks of type O blood on it. Mickey’s blood type was O, but that’s true of half the population.
This expert, a Mr. Thorman, “. . . testified that the flecks of blood on the flashlight lens were ‘back spatter’—a pattern that indicated a close-range shooting. He wove a narrative that placed the flashlight in the killer’s hand.”
Jurors found Mr. Thorman’s confident testimony very compelling. What they didn’t know at the time was that his bloodstain-pattern training was limited to a week-long seminar he had attended four months before the murder.
Even if Mr. Thorman had been the world’s foremost expert in bloodstain-pattern science, a jury should have viewed his testimony as just one piece in a complicated tableau of evidence. If Joe Bryan had no reasonable motive, if no physical evidence tied him to the crime, if he had thirty character witnesses testifying that he was incapable of such an act, then they should have felt free to question the veracity of the expert and his science.
This is especially true when the science in question is a pseudo-science, a discipline masquerading as a hard science. I’m not talking about things like climate change skepticism or anti-vaccine hysteria. Those cases involve people willfully ignoring the hard science.
I am instead referring to the application of data or “sciency” descriptions that lure the audience into setting aside all the other evidence, or the collective wisdom of their prior experiences. This is what happened in this tragic case, where the jury valued the bloodstain-pattern testimony over everything else. They did not understand that bloodstain-pattern analysis, as the National Academy of Sciences would describe in a 2009 report, is associated with “enormous uncertainties.”
Which brings me to the current state of marketing.
Marketing has always been about shaping human behavior, about getting consumers to make irrational decisions. This isn’t always true—sometimes a brand is clearly, objectively superior, and the marketer just needs to make consumers aware of its advantages. But often the differences between products are minimal, so marketers hope to convince consumers to buy their brand through “brand affinity,” an emotional connection to the brand that causes consumers to be something other than cold, rational calculators of utility.
This is why there are Coke people and Pepsi people. Or Ford men and Chevy men. Exactly how this happens is still poorly understood—no matter what somebody tells you at some conference.
Because we don’t really know exactly how this works, or perhaps more accurately, because how it works is unique to each individual, marketing effectiveness has always resisted precise measurement. This led to the now-famous adage, “I know half my advertising is wasted; I just don’t know which half.”
The digital revolution promised to change all this. Everything would be measurable now. We would know exactly what messages were seen by whom, we could track the consumer’s subsequent behavior, and we would be able to calculate, with complete precision, our return on our marketing.
Like bloodstain-pattern science, digital marketing uses data and all kinds of “sciency” language to impress us, to get us to ignore everything else we used to know about marketing. We don’t look at digital marketing’s output as a piece of evidence that fits into a tableau; instead, we view it as the only evidence, the only authoritative truth, because it has cold, hard numbers attached to it.
As I’ve written before, those numbers often don’t tell the whole story, or are even deceptive. They are crude simplifications of a complicated reality. At their best, in bottom-of-the-funnel applications, with proper A-B test protocols, they can be quite useful in helping us find what messages are most likely to cause someone to click and buy. At their worst, they absorb resources that should have been dedicated to affinity and consideration, causing us to stop tending the fields that will produce next year’s meal.
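To make the point about proper test protocols concrete, here is a minimal sketch of the kind of math behind a disciplined A-B test — a standard two-proportion z-test. The click counts are entirely hypothetical (not from the original post), and the lesson is the one argued above: a lift that looks real in the raw numbers can still fall short of statistical significance.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of messages A and B.

    Returns the z-statistic; values above ~1.96 suggest significance
    at the conventional 5% level (two-sided).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical campaign: 120 clicks from 10,000 impressions for message A,
# 150 clicks from 10,000 impressions for message B — a 25% apparent lift.
z = ab_test_z(120, 10_000, 150, 10_000)
print(round(z, 2))  # ~1.84: below 1.96, so the "winner" is not yet proven
```

A 25% improvement in click rate sounds decisive, yet at this sample size the test cannot rule out chance — exactly the kind of nuance that a dashboard full of confident-looking numbers tends to hide.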
This is not an appeal to stop marketing through digital channels, or to stop striving to better measure your marketing effectiveness. This is, instead, a call for all of us to start using our brains and respecting our judgment again.
As one bloodstain-pattern analysis critic wrote about Joe Bryan’s case, “If you don’t understand the basic science, then you won’t understand its limitations.”
Amen to that.