Writing a survey for public release can be tricky, and reporting the results is no easy task either.
People reading the results of any survey question have some probability distribution of expectations. That is, if I hear a question like "how many people wore red pants today?" I'd have some guess (idk, 5%?). If the probability distribution is normal, then there's a peak at the mean expectation… it would look something like this:
Red: Any result here would be close enough to my expectation that it wouldn’t interest me. Say, 3-8%. That would be close enough to my guess.
Orange: This would make me skeptical. Anything below 1% and I think, "c'mon, some people must have worn red pants." And at more than 15%, same skepticism; except now I'm thinking that I didn't see any red pants today… and I saw a lot of people. It can't be that high.
The sweet spot is right in the middle. I’m not skeptical but I’m also not uninterested.
This framing helps when thinking about how to optimize two (of many) dimensions of a successful PR study:
Questions should be crafted to have a "flat" distribution (high SD) of expectations. This maximizes the likelihood that a result falls into the "interesting" range. [Of course, a researcher also needs to consider the absolute level of interest, because uncertainty doesn't imply anything about inherent interest. A question about how many people want their employer to offer better health insurance options might have high expectation uncertainty, but it's not inherently interesting, so no matter what the outcome, the result won't be newsworthy.]
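To make this concrete, here's a minimal sketch of the idea. It treats the eventual result as a draw from the reader's own prior (a simplifying assumption for illustration), and the band width and standard deviations are my own made-up numbers, not anything from the red-pants example:

```python
import math

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def p_interesting(mu, sigma, band=0.02):
    # Probability that the result lands OUTSIDE the
    # "close enough to my guess" band around the reader's
    # mean expectation -- i.e., that it surprises them.
    return 1 - (normal_cdf(mu + band, mu, sigma) - normal_cdf(mu - band, mu, sigma))

# Narrow prior: readers feel sure about the answer already
narrow = p_interesting(mu=0.05, sigma=0.01)

# Flat prior (high SD): readers genuinely don't know
flat = p_interesting(mu=0.05, sigma=0.10)

print(narrow, flat)
```

With the flat prior, far more possible results fall outside the "yawn" band, which is exactly why high-uncertainty questions are more likely to produce a reportable number.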
So, a good question is one the audience will want to know the answer to, but for which they have no pre-defined expectation.
Tougher than it seems, but I think Jean-François Bonnefon et al. did a good job with their paper on autonomous vehicles some months ago (http://news.mit.edu/2016/driverless-cars-safety-issues-0623).
Again, this is tricky. Strictly speaking, you, as a researcher, can't pick and choose what to report on, so really you should be reporting results objectively; but in reality, you have a lot of discretion in which results to emphasize, how to emphasize them, and which data to leave in an appendix.
A good report is written so that the voice reflects that of the audience. When you think your audience might be surprised, you should sound surprised too; when you're skeptical of a result, you do your due diligence in quality-checking the results, and reassure your audience that it's right (or not). So, you need a voice that is accessible to your audience. But the report should also be written authoritatively, to command respect and credibility.
All in all, research for public release is a major balancing act. If you read about a study and think “I could have done that,” then that just means the researcher did a good job.