Reasons Not to Outsource Report Writing

My team really values delegating various aspects of the research process. We even have a half-day training seminar on how to do it better.

Yes, it’s important to outsource parts of projects to allow you to focus on important elements (or things that are interesting to you), but you run the risk of dispersing the ownership across too many people. If no one has ownership, then usable insights will be overlooked.

I do market research because I want to provide meaningful data to decision makers. But I find that many misinterpret the delegation directive and end up relegating themselves to a project manager role. (NB: I have the utmost respect for project managers, as project management is something I often struggle with.)

A project manager gets from A to B. But a researcher should be concerned about discovering what “B” is.

Nowhere is this mistake more critical than in the reporting phase of a project. I was recently asked to write a report for a study I had no involvement in, but I was assured it wouldn’t take very long because “all of the PowerPoint slides had been populated with data and QC’d.” I accepted, but it soon became very clear that little thought had been put into the development of the report. The slides were simple, poorly thought-out “data dumps”: one slide per question.

I asked for more time, read the original project proposal, developed a deeper understanding of the topic, got excited about writing a good report, and then constructed a report that I thought would be worthy of the client’s review. In other words, I was asked to put words to the data before anyone had provided meaning to the data.

In short, it’s very difficult to outsource the insights of a study. Understanding what is useful requires the researcher to roll up their sleeves, grab some coffee (or their upper of choice), and compare the client’s needs to the data.

I highly encourage other researchers out there who work on a team with an operational philosophy toward report writing to reject that norm. Every part of the report writing process is driven by insights, and treating a report like a car on an assembly line is misguided.

Charlie Rose on the Phrase “That’s a Great Question!”

Stephen Dubner interviewed Charlie Rose on Freakonomics. The topic was “That’s a Great Question!” Here’s a snippet of the transcript (emphasis added). Love hearing how the best interviewers think about developing questions…

DUBNER: How do you take it when someone says – let’s say you’re sitting down with, you know, maybe it was Steve Jobs, maybe it was President Clinton, and you ask a question, and they look at you across the table, and say, “Charlie, you know, that’s a great question.” Does it feel like they’re trying to flatter you? How do you take that?

ROSE: I think they’re trying to flatter me most of the time. Or they believed it, I mean whether it is egotistical and I think it was a good question and I agree with them, because I thought about it and structured it and gave some consideration to it. Or, B: it’s spontaneous and flattering, and less so, it’s simply buying time as they crystallize their thought.

DUBNER: It strikes me that you’re someone who works hard to ask the kind of questions that people think are really good questions, that are really good questions.

ROSE: I do, I mean, it would be clearly naïve of me to say that I don’t think about the craft of the question. I think about that a lot. How to ask the question, what I expect to get from the question. And so how the question is perceived makes a difference to me. I structure the question hoping to get the best possible response. I used to make longer questions. With some assumption that I had to explain the question. I spend more time now simplifying the question.

DUBNER: Take us a little further into that. When you say that you structure the question hoping to get the best possible response, I guess what I want to know from you is, how do you know what the best possible response is? In other words, are you trying to, like a prosecutor, get the answer to a question that you sort of know already?

ROSE: Well, let me do something first. I would be tempted to say, “that’s a very good question that you just asked me.” But because of this conversation I’m not gonna say that.

DUBNER: Okay. Cheers.

ROSE: But it…I would say that because that is the right question. Um, it’s not, for me, that I want them to say something that I think they’ve said before and I want them to repeat it. So, I’m not asking questions to have someone tell me what I want them to say or to tell me something that they’ve already said before. What I want them to do is surprise me with an answer. To go deeper, wider, more interesting than they have before. And there is a kind of moment in which you try to say something that is…that just captures the moment and makes the person be caught up in the question rather than simply, you know, repeating something that they’ve said a thousand times before.

PR Studies: Balancing Believability and Interest

Writing a survey for public release can be tricky, and reporting the results is no easy task either.

People reading the results of any survey question have some probability distribution of expectations. That is, if I hear a question like “how many people wore red pants today?” I’d have some guess (I don’t know, 5%?). If that distribution is normal, there’s a peak at the mean expectation… it would look something like this:


There are three sections to this visual:

Red: Any result here would be close enough to my expectation that it wouldn’t interest me. Say, 3-8%. That would be close enough to my guess.

Orange: This would make me skeptical. Anything below 1% and I think, “c’mon. Some people wore red pants.” And more than 15%, same skepticism; except now I’m thinking that I didn’t see any red pants today…and I saw a lot of people. It can’t be that high.

The sweet spot is the third section, sitting between those two: far enough from my guess that I’m not uninterested, but not so extreme that I’m skeptical.
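Here’s a rough sketch of that idea in code. (The standard-deviation cutoffs and the spread are made-up numbers, chosen only to roughly match the red-pants example.)

```python
# Toy model: classify a result against the audience's expectation distribution.
# The 0.5 / 2.0 standard-deviation cutoffs and the 5% +/- 4-point guess are
# illustrative assumptions, not numbers from any real study.

def reaction(result, mean_guess, spread, boring_z=0.5, skeptical_z=2.0):
    """Return how a reader would likely react to a survey result."""
    z = abs(result - mean_guess) / spread
    if z <= boring_z:
        return "close to my guess (uninteresting)"
    if z >= skeptical_z:
        return "too far from my guess (skeptical)"
    return "sweet spot (surprising but believable)"

# "How many people wore red pants today?" -- mean guess 5%, spread ~4 points.
for observed in (0.05, 0.11, 0.30):
    print(f"{observed:.0%}: {reaction(observed, mean_guess=0.05, spread=0.04)}")
```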

This framing helps when thinking about how to optimize two (of many) dimensions of a successful PR study:

Design

Questions should be crafted so that the audience’s distribution of expectations is “flat” (high standard deviation). This maximizes the likelihood that the result falls into the “interesting” range. [Of course, a researcher must also consider the absolute level of interest, because uncertainty doesn’t imply anything about inherent interest. A question about how many people want their employer to offer better health insurance options might have high expectation uncertainty, but it’s not inherently interesting, so no matter what the outcome, the result won’t be either.]

So, a good question is something that the audience will want to know the results of, but won’t have a pre-defined expectation.

Tougher than it seems, but I think Jean-François Bonnefon et al. did a good job with their paper on autonomous vehicles some months ago. (http://news.mit.edu/2016/driverless-cars-safety-issues-0623).
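To see why a flatter distribution helps, here’s a quick back-of-the-envelope calculation (using the same made-up cutoffs as the sketch above): the width of the “interesting” band grows in direct proportion to the spread of the audience’s expectations.

```python
# Width of the "interesting" band (surprising but still believable), assuming
# the same illustrative z-score cutoffs as the earlier sketch.
boring_z, skeptical_z = 0.5, 2.0

for spread in (0.03, 0.10):  # a peaked vs. a flat expectation distribution
    width = 2 * (skeptical_z - boring_z) * spread  # band on both sides of the mean
    print(f"spread = {spread:.0%}: interesting results span {width:.0%} of the scale")
```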

Reporting 

Again, this is tricky. Strictly speaking, you, as a researcher, can’t pick and choose what to report on, so really you should be reporting results objectively; but in reality, you have a lot of discretion in which results to emphasize, how to emphasize them, and which data to leave in an appendix.

A good report is written so that the voice reflects that of the audience. When you think your audience might be surprised, you should sound surprised too; when you’re skeptical of a result, you do your due diligence in quality checking the results and reassure your audience that it’s right (or not). So, you need a voice that is accessible to your audience. But the report should also be written authoritatively, to command respect and credibility.

All in all, research for public release is a major balancing act. If you read about a study and think “I could have done that,” then that just means the researcher did a good job. 

The best report automation programs?

There are none.

I know I’ve done a poor job if the client or a consultant asks me what program I used to produce a report. (Or whether I used one at all.)

A report is as much a work of art as it is a communication of ideas, theories, evidence, and data. The production of a report can entail any combination of stoicism and artistry. It, of course, depends largely on your audience (a police report, for example, leans heavily toward stoic; an infographic on popular celebrities, on the other hand, depends more on aesthetics).

When I’m asked about the “program” I used to develop a report, it feels as if I leaned too stoic, as if the level of creativity in my report was no greater than that of a computer-generated one.

My goal in a report is to assign meaning to, or draw meaning from, abstract concepts.

There is nothing fundamentally meaningful about a number – or a collection of numbers. Only by providing some subjective qualification to a data point do I bring meaning to the abstract.

Picture this progression…

Stoic

  A. 75% of Americans believe [tech company] has a ‘good reputation.’
  B. With 75% of Americans rating [CPG company’s] reputation as good, [CPG company] has one of the best reputations in the industry.
  C. [Energy company], with 75% of Americans assigning it a ‘good’ reputation, drastically breaks from the expected industry norm (just 50% of Americans have a good impression of the next best competitor), though it fails to achieve the same acclaim internationally, despite being just as well known in the United States.

Artistry

In statement A, the reader is left wondering whether 75% is a big number or a small number. Just a basic report of the value is ok in some contexts, but the researcher’s job is as much to interpret the number as it is to report it.

Statement C, arguably, evokes an emotion and offers the reader something to react to. It offers context and subjective meaning (‘just 50%,’ ‘fails to achieve,’ etc.).

Accepting that market research is as much an industry devoted to providing facts as it is to providing reassurance is an important realization, one that will lead a researcher to write “better” reports (or at least reports that are better suited to the audience).

I love quad charts: Employees Edition

I know people who have a lot of energy and are generally positive and gregarious. It’s nice to be around those people. But sometimes when I work with an analyst like that, their work is sub-par. I appreciate the energy, but you’ve got to be on point here. Our clients are investing a lot of money, and we can’t just try again if something gets screwed up.

Similarly (and I would put myself in this category), there are people who are great at what they do, but not that proactive. They do the work they’re given — usually well. But they won’t be the ones to drum up new work or readily volunteer their services.

So, we have “willingness to try” and “quality of work”… and here’s what I think happens when you intersect these two characteristics:


People in the top left should prefer a “numbers game” environment. They might have a low success rate, but they know that a bigger N means more business. In the lower right you’d expect people who are averse to failure. They’re craftsmen who spend a lot of time on a few things; they don’t do a lot, but they want to make sure they do it well.

This seems to make sense in my head. Salespeople should occupy the top left quad (willing to try, but low quality of work) because it leverages their willingness to meet people and make promises, but deemphasizes the fact that they might not actually be able to do the thing they’re selling.
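If you want the chart in code form, here’s a toy version. (The cut point of 5 and the labels for the two quadrants I didn’t describe above are just placeholders.)

```python
# Map "willingness to try" and "quality of work" (0-10 scores) to a quadrant.
# Only the top-left and lower-right descriptions come from the chart above;
# the 5.0 cut point and the other two labels are placeholders.

def quadrant(willingness: float, quality: float, cut: float = 5.0) -> str:
    if willingness >= cut and quality < cut:
        return "top left: numbers game (salespeople) -- lots of attempts, lower polish"
    if willingness < cut and quality >= cut:
        return "lower right: craftsman -- fewer projects, done well"
    if willingness >= cut and quality >= cut:
        return "top right (placeholder label): does it all"
    return "lower left (placeholder label): needs a different seat"

print(quadrant(willingness=8, quality=3))  # the energetic but sloppy analyst
print(quadrant(willingness=3, quality=8))  # where I'd put myself
```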

For me, life in the lower right can be pretty sweet. Call me when you have a project. (Or just email me.) I take a ton of pride in the work that I do, and I’m pretty good at it. But I have a tougher time finding people who are willing to pay me for what I do.

Market Research: Data without Science

I often wrestle with finding the value in corporate market research. A research engagement is usually a large investment, but it’s difficult to see how the data generated in a research study actually helps my clients make better decisions.

Does knowing that 71% of Americans have read about your company on a news site really give a manager information that would affect their ad buy, marketing mix, communications strategy, etc.?

A good analyst will figure out how to tie a data point back to the original goals of the research, but may stop short of providing a concrete recommendation. I have a tendency to do this because I know too much about the research. The scientist in me is too skeptical to suggest a change in strategy.

But I find that consultants more readily make this leap. Perhaps I’m naive, or perhaps I’m simply too close to the data and too familiar with the data collection process to have that confidence; but part of me knows that the recommendation – even when coming from the consultant – is a bit of a stretch.

The existence of market research as we know it is supported by a philosophy that treats data as a panacea for “business solutions.” (We use the word “solutions,” but would seldom charge our clients with having “problems.”) Managers use data to support, sell, and feel better about their decisions; and a market research study can defend a position, or even shield it from blame if something goes wrong (being able to point to research that supported a decision that turned out to be wrong is a good way to indemnify oneself).

The more I reject the science of market research methods, the more I’m faced with a cynical conclusion about its nature… that market research is more about signaling than it is about content. The market research industry produces data for managers to use as they see fit and to defend their choices and help sell their products. But I think it’s naive to pretend that the methods in market research really reveal truths about the world. In reality, most of the conclusions of market research studies are confirmations of previously held beliefs.

Am I being too cynical or is this a secret about the industry?