Report Automation is the Nuclear Power of Custom Research


Six years ago, as a young researcher, I was drawn to the prospect of report automation when I was given a PowerPoint template and told to make 12 versions of it for 12 different countries.

I watched YouTube videos about macros, figured out how to use VLOOKUP, and discovered an underground network of Excel geeks. After about a day and a half, I was a little nervous because I had spent hours without making much progress populating the reports, but I knew (hoped) the investment would be worth it.

All in all, what I developed wasn’t perfect. But it was very convenient. I would open up an Excel file, run a macro, and the output would be something I could copy into PowerPoint, so that each report only took about 15 minutes.
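The core of that workflow was a lookup-and-fill pattern: one template, a table of per-country numbers, and a macro that substitutes values for each country. A minimal, dependency-free sketch of the same idea (country names and figures below are hypothetical, and the real version used Excel and PowerPoint rather than plain text):

```python
# A toy version of the lookup-and-fill workflow: one "template",
# a record per country, one rendered report per country.
from string import Template

TEMPLATE = Template(
    "Market Report: $country\n"
    "Awareness: $awareness%\n"
    "Preference: $preference%\n"
)

# Stand-in for the Excel sheet a VLOOKUP would read from.
country_data = {
    "France": {"awareness": 62, "preference": 41},
    "Japan": {"awareness": 55, "preference": 38},
}

def render_report(country: str) -> str:
    """Fill the template for one country -- the 'macro' step."""
    return TEMPLATE.substitute(country=country, **country_data[country])

reports = {c: render_report(c) for c in country_data}
```

The same loop scales from 2 countries to 12; the hard part, as the rest of this post argues, is everything the template can’t capture.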


It turned out, a lot of people were trying to solve the report automation problem. Researchers spent a lot of time copying numbers from one document into another; and report automation was supposed to allow researchers to focus on more profitable tasks. There were special committees aimed at finding the newest report automation program; groups of researchers who would trial the programs; and a general sense of optimism. We could taste it!

Fast forward to today, and we’re no closer to cracking the report automation problem than we were six years ago. And, more frustratingly, the solution has constantly been just out of reach. No matter how many blogs I read or programs I test, no report automation tool really gets it right.

So, why the nuclear power reference?

It’s an old joke in the nuclear power world that fusion as a viable energy source has always been and will always be 30 years away.

To save you the time of checking my profile, I’ll tell you now that I’m not a nuclear engineer. But the issue with viable fusion energy seems to be that advancements generate new, complex questions — often more complex than the questions they answer.

And it seems to be a similar case for report automation:

Much in the way that viable nuclear fusion is perpetually 30 years away, viable report automation has been just out of reach for the past six years, at least.

Report Automation is not Solving a Mechanical Problem

From my experience, report automation programs usually end up creating more work than they save. Changing a little aspect of a graph that was created with a program is like playing Russian roulette with my report… I double click and have no idea whether or not PowerPoint is going to crash.

But really, I believe we have misinterpreted the problem that report automation is aimed at solving.

Although we call the final product a “report,” it’s really a narrative. Personally, I don’t know whether an array of data ought to be displayed as a bar chart or a pie chart or whatever until I’ve had a chance to see both. I need to play around with it and see how different displays complement the overall story the data is telling. Maybe the data doesn’t belong in the report at all? Maybe I need to combine an array of data with some other data point. Maybe I need to add in some desk research to support the data.

While I don’t think I’m an artist, there is a certain level of creativity that goes into developing a report. A creativity that is stifled by report automation programs.

Ultimately, the “custom” in custom research will continue to preempt effective use of automation programs. Sure, automation will be useful in some cases like in the story I described above; but I get the sense that solutions like these are ad hoc and difficult to apply generally.

FTI: Getting Corporate Communications Up to Speed in the Snapchat Era

Interesting from FTI.
The slow, leviathan communications process of corp comm might need to adapt to the new modes of storytelling.

It’s time for Corporate Communications to take a page or two, or three, from their sales and marketing departments — and join the digital age.

Social media allows companies to communicate directly with consumers and clients instantly in a variety of content styles — all at the same time. Want to broadcast quarterly earnings? Twitter’s 140 characters is a good starting place. Share a clip of the CEO ringing the opening bell on the NASDAQ? Try a Facebook Live video feed. In each case, companies can use the mediums to direct users to the full story behind the messaging on the corporate website.

Where I disagree is the demarcation of corp comm from sales/marketing. Whether those boundaries make sense is likely company-specific.
It seems that nearly all industries are aiming for more storytelling — ourselves included in the push towards more narrative-based reporting. But the fast-paced social media world is an environment poorly suited for committee-based communications.

As companies contemplate making greater use of the various platforms, the number of decisions surrounding the quality and frequency of messaging can be overwhelming. It’s easy to get caught up in debating what company presence should look and sound like on a quick-hit, minute-to-minute feed like Twitter, a thought leadership platform like LinkedIn, and an image-driven, disappearing messaging app like Snapchat. Designating tone of voice across each is undoubtedly a challenge.

All of that presents a risk. But the bigger risk may be the failure to adapt.

Caution is understandable in the rapidly shifting digital frontier. But Corporate Communications departments that think of the medium as the message open the door to new possibilities. By exploding their stories out into the world and scattering the breadcrumbs among the various platforms, corporate communications provides a trail for consumers to follow back to their website.

Personally, I think narratives are very compelling. Would I watch a web-series of an analyst’s first 100 days at their company? Maybe, if it were well curated.
Would I then think of that person the next time I needed to do research? Probably.
Stories may be the new biz dev.

Marketing Franchises Pt1: Why We Hated New Coke but Can’t Wait for the iPhone 8

My first in a series of articles on Franchise Marketing.

On a larger scale, food is rarely marketed as a sequential product. Occasionally we might see “improved taste” or “new recipe” on a box; but these are exceptions to the rule. For some reason, food is simply not marketed as a sequential good despite the fact that the flavors are constantly changing to meet new tastes.

This was a major part of my studies during my graduate work at GMU. There is some discussion of the economic notion of product spaces, but readable, I think.

An iPhone, as a utility good, can always be improved in a way that is easy to understand and does not complicate the comparison of one model to the next — more memory is always better, all else equal. Coke is right to continually adjust its flavor profile to best match consumer preferences; but publicly announcing a change disrupts consumers’ decision-making rituals because it’s not obvious to consumers that the change is necessarily better. Unlike memory in a smartphone, more sugar isn’t always preferred to less sugar.

Do read the whole thing.

Qual Solutions for Qual Questions

Quantitative research is appealing because it forces researchers to think in measurable terms. Vague statements like, “people tend to…” or “there’s support for…” are highly scrutinized because statements that start like this either (1) deliberately omit quantitative data or (2) lack strong statistical backing.

But when researchers are accustomed to giving quantitative answers (and clients are accustomed to hearing them), it becomes difficult to identify when questions are fundamentally qualitative. Here are some examples of (paraphrased) objectives I’ve observed in real RFPs that were ultimately addressed with quantitative methods but would have been better addressed qualitatively.

  • “Investigate the issues that matter most to stakeholders…”
  • “Dimensionalize what [CONCEPT] means to various stakeholders…”
  • “Understand what motivates [GROUP] allies to support [GROUP CAUSE]…”

Of course, there are other considerations when determining the optimal research approach, so I can’t necessarily say that a quantitative approach was wrong; the decision needs to weigh several factors. Nevertheless, I think analyzing real RFP objectives is helpful.

Qualitative Myths and Aversions

Here, I describe some myths and aversions clients and researchers have towards qualitative research.

Myth 1: Qualitative Research is Touchy Feely

Please describe to me how a market comes into equilibrium using statistics. Ok, that’s a hard one.

Please describe how a hurricane forms using statistics. Not a meteorologist?

Why are some Super Bowl ads more popular than others? Ok, everyone has an opinion here. But this is still impossible to answer using statistics.

These are all questions that I wouldn’t describe as “touchy feely,” but they are all qualitative in nature.

As scientists, we all deploy a process of thinking about how the world works. Describing how a process works is necessarily a qualitative exercise.

Myth 2: Qualitative Research is Subjective

This is a common misconception which does a great disservice to practitioners of qualitative research. Accusing a researcher of subjectivity is akin to an accusation of bias — it undermines the legitimacy of qualitative methods.

Although the data collected may itself be subjective, the collection, analysis, and reporting of it is methodical and descriptive. Qualitative researchers are not giving their opinions of the data collected.

Myth 3: Use Qualitative Research when the Quant Budget is Limited

Often researchers and/or clients will deploy qualitative methods when they do not have the budget for quantitative sample sizes.

I’ve seen this a few times where clients settle for qualitative work. The irony is that in the few cases I’ve seen this happen, I think the resulting research was successful. But not because qualitative is a substitute for quantitative; rather, because the original project was qualitative in nature.

Regardless, if you intend to use quantitative methods to measure the size of an effect, compute the incidence of a population within a larger one, or determine the more impactful ad (among two similar ads), then qualitative simply won’t give you the information you’re looking for.

I find that clients and researchers alike are averse to qualitative research for several reasons.

Qualitative Aversion 1: Quantitative Complacency

Research buyers are accustomed to contracting work with suppliers when there is a need for measurement or calibration. At the center of the research is typically a decision that needs to be made “at the margin.” By this I mean, for example, “which of two similar ads should we run?”, or “how should investment be allocated across two stakeholder groups with similar impacts on a company’s reputation?” These are marginal questions that need statistical answers.

But somewhere in the research buyer-supplier relationship, the parties became complacent and just continued to use quantitative solutions for qualitative questions (like the ones described above). In short, buyers and suppliers simply expect quantitative methods to be the right method. The comfort and inertia apparently bias researchers to favor quantitative methods.

Qualitative Aversion 2: Need for Statistics

As discussed above, research is often called upon when a “marginal” decision is needed. This, I believe, has led to the expectation for research to provide operational decision making tools.

Typically, when somebody asks for outside help in making a decision, they are on the fence. But qualitative research simply does not provide the quantitative support needed when somebody is on the fence about a decision.

Qualitative Aversion 3: Hubris

Many believe that qualitative research won’t tell the client/researcher anything that they don’t already know. “Why should I do a focus group with industry experts when I am the industry?”

Clients/researchers often think they already have the answers — they already know the possible hypotheses — and all they need is measurement and confirmation.

(But if they were to really analyze their RFPs, they might rethink how well they know the issues key to the research).

Qualitative Aversion 4: Lack of Scalability

Leaders in research firms deemphasize investments in qualitative research solutions because qualitative work is so labor-intensive. It simply does not make sense to spend much time on research solutions that do not have strong returns to scale.

This can lead to research consultants being poorly incentivized to offer qualitative solutions even if the qualitative research is the optimal approach.

Resources

Here are a few resources I’ve found helpful concerning qualitative research. Still, there’s a lot of false information out there. I saw a Udemy blog post suggesting that qualitative research was “subjective,” which is not true. So, caveat emptor.

Yale University’s Dr. Leslie Curry (video):

Qualitative methods can generate a comprehensive description of processes, mechanisms or settings.

QRCA:

-Develop hypotheses for further testing and for quantitative questionnaire development
-Understand the feelings, values, and perceptions that underlie and influence behavior
-Identify customer needs
-Capture the language and imagery customers use to describe and relate to a product, service, brand, etc.
-Add depth to information obtained in a quantitative study and better understand the context/meaning of the data
-Generate ideas for improvements and/or extensions of a product, line, or brand
-Uncover potential strategic directions for branding or communications programs
-Understand how people perceive a marketing message or communication piece
-Develop parameters (i.e., relevant questions, range of responses) for a quantitative study


Five Ways to Get Your Start as a Qualitative Consultant (Plus an Unethical One)

In the past few months, I’ve spent some time talking to independent qualitative consultants to learn more about the industry and how they got their starts.

It’s been a fascinating exercise. Above all, what I’ve learned is that a new qualitative researcher needs to be patient. There are a variety of ways to get one’s foot in the door, but the most successful consultants spend years developing reputations as strong consultants before making a mark in the industry.

It’s been useful for me to survey and organize the variety of ways I’ve heard IQCs got their starts. Here is what I’ve heard so far.

1. Work for a Supplier, then Go Independent

This seems to be the most common approach. A consultant gets their start as a junior moderator at a research firm that does qualitative research. The junior consultant does some training, spends a few years progressing through the ranks, and builds strong rapport with end-clients.

Having rapport with the end-client is very important here because it gives the consultant leverage to leave the research supplier but keep the work, since the end-client would likely want to keep the consultant as the moderator. In effect, the consultant’s first client is their former employer.

To be sure, the consultant should contract with the research company as their first client, not the end-client directly. Cutting out the former employer is the type of subterfuge that is an easy way to get blacklisted in the community.

2. Apprentice

It’s sometimes possible to work closely with an existing consultant in an apprentice-like setting. This is no different than any other apprenticeship: the junior consultant is essentially an employee or intern of the senior consultant. After some time, the senior consultant passes projects in-full to the junior consultant.

This typically requires that the senior consultant have extra work lying around, which often depends more on macroeconomic conditions than anything else. But if the junior consultant can offer something to the senior consultant, then it’s possible for a mutually beneficial relationship to form.

3. Grind it Out

My personal favorite. A junior consultant with some basic research knowledge can just do business development like crazy. Ask around if anyone is looking for research consultation … at a low price. And if the junior consultant can’t lower their price enough, just start doing work for free. For example:

  • Collect some data, write a report, and circulate it within a relevant industry, trying to target research buyers. Maybe the next time a buyer needs research, the junior consultant’s name might be top of mind.
  • Publish on LinkedIn, gain a following on Twitter and tweet information relevant to a particular industry… get the name out there.
  • Ask for overflow work from existing qualitative consultants — recruit, write reports, manage projects, do anything to make the senior consultant’s job a little easier. (This strategy, slightly different from apprenticeship, risks the junior consultant pigeonholing herself.)
  • Network, network, network. Just get the name out there.

In this approach, the junior consultant has to be strategic about what communications are value add (i.e., publishing an article on LinkedIn) and what communications are actual asks. After some time providing value, the junior consultant may have enough of a reputation to solicit business … but this is a long-term strategy. Ideally, the value add communications create a need for new research. This way, the junior consultant is not seen as poaching existing business.

4. Get Lucky

A variation of a theme from points (1) and (2).

Some consultants formerly worked for a research firm that (likely because of the recession) dissolved. The result was a consultant-client relationship without the middle man.

5. Specialize, then back in

In some cases, qualitative consultants were specialists either in an industry vertical or a topic area (like branding or user design).

For these consultants, qualitative work is kind of a secondary role since they are primarily soliciting work as industry or topic-area specialists, but qualitative work often becomes necessary in the consultative process.

6(bonus). Steal

I had to throw this in here because this is a surefire way to get blacklisted in what is a VERY TIGHT-KNIT community. There will likely be opportunities for a junior consultant to furtively contact an end-client, undercut price or disparage an incumbent consultant and steal the business.

The junior consultant might make a quick buck, but long term this is not a good strategy in a business that is marked by strong supplier-supplier relationships.

Overall, what I’ve learned is that the struggle does not end once a junior consultant gets her first client. The first client might require the most patience, but in reality successful consultants constantly work towards improving their methodological approaches, keeping up with technology, branching into different industries, and partnering with other research suppliers in clever ways, all in the effort to keep doing what they love.

Qualitative consulting is a relationship business. The most successful consultants spend years putting in more than they take. This is the approach that nearly everyone seems to agree is the key to sustainable success.

The Next Big Blue-Collar Job Is Coding

From Wired.

All the other millions [of coders]? They’re more like Devon, a programmer I met who helps maintain a ­security-software service in Portland, Oregon. He isn’t going to get fabulously rich, but his job is stable and rewarding: It’s 40 hours a week, well paid, and intellectually challenging. “My dad was a blue-­collar guy,” he tells me—and in many ways, Devon is too.

“GEMO” and What It Means for Your Research

GEMO: An acronym, “Good Enough, Move On.”

I have heard this phrase more and more recently, and it has the potential to become a new buzzword. The meaning? Since the value of each additional labor hour put into a research product decreases, it may not be efficient to make something better than it has to be. In other words, once something is good enough, move on.

What does this mean in real life? Research suppliers will be smarter about how they deploy production resources. Here are some examples as I see them:

  • Instead of spending an extra two hours making sure that Oxford comma use is consistent throughout a report, research suppliers might deliver a report two hours earlier.
  • Instead of correcting for weighting adjustments when stat testing, suppliers will spend more time analyzing data and providing sounder recommendations.
  • Instead of making sure all slide titles are perfectly centered, suppliers may shave a few hundred dollars off the price tag.

In short, suppliers are going to start forgoing a little quality for more value. By “value,” I mean more usefulness per dollar. Research buyers are paying the same hourly rate for an analyst regardless of what he is actually doing. He may be checking if “none” is a singular or plural subject; or he may be running a k-means cluster analysis. In either case, it’s the same rate.

On the face, this is a good thing. By reducing resources in areas unlikely to affect the conclusion of a research study, research suppliers can improve the value of their offering either by reducing price or by reallocating those resources to more valuable tasks.

GEMO: Where Marginal Cost Equals Marginal Benefit

In your Econ 101 class, you probably remember hearing a lot about MC=MB. This is the point where “stuff” should be produced. Here’s Investopedia’s pithy explanation.

Additional units of a good should be produced as long as marginal benefit exceeds marginal cost. It would be inefficient to produce goods when the marginal benefit is less than the marginal cost. Therefore an efficient level of product is achieved when marginal benefit is equal to marginal cost.
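The Investopedia rule above translates into a simple stopping rule for research hours. A toy sketch, where the per-hour figures are made-up assumptions chosen only to illustrate diminishing returns:

```python
# Toy MB = MC stopping rule: keep adding hours while the marginal
# benefit of the next hour exceeds its marginal cost.
marginal_benefit = [500, 300, 180, 120, 90, 40]  # assumed value of each extra hour
marginal_cost = 150                              # flat hourly rate

hours = 0
for mb in marginal_benefit:
    if mb < marginal_cost:   # next hour destroys value: good enough, move on
        break
    hours += 1
# Only the first three hours are worth more than they cost.
```

With these numbers, the fourth hour would cost $150 but add only $120 of value, so GEMO says to stop at three.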

Research buyers face the same hourly rate for an analyst, even if he is simply copy-editing a report. At a certain point, improving the quality of a report is so costly that it simply isn’t worth the extra cost.

The term GEMO is abuzz in the research community because, evidently, research suppliers have been spending too much time on perfection. That is, spending too much time where the marginal cost exceeds the marginal benefit.

When a company or team conducts ANY part of its business where MC>MB, this is quickly corrected (either by the company or by the market). Do not confuse MB<MC with things like “loss leaders.” Loss leaders still represent products for which MB>MC, but the company is charging less than MC. The social benefit still exists because the production cost is less than the benefit realized; the only difference is that the company is not making a direct profit. If a company systematically produces such that MB<MC, then there is a misallocation of resources.

Personally, I think it’s true. Sure, I have thought for a while that my resources were being applied to useless endeavors. But that might be my “millennial” attitude. More objectively, though, I really do observe others putting time towards bids that probably won’t be won, for example. I observe painstaking committee-like editing over client emails. Or conference calls to discuss the merits of various weighting schemes that do not substantively impact the outcome of study results.

When Is “Good Enough” Good Enough?

The problem, of course, is determining when is something actually “good enough.” Maybe catching a spelling mistake really does make a difference in the research buyer’s confidence in the rest of the report. Can you really trust someone who mixes up “their” with “there”?

An efficient market has a very good mechanism to force research suppliers to produce things just at the point of good enough, but no better. As an example, imagine two real estate developers building condos in an up-and-coming neighborhood.

Luxor Properties specializes in high-end luxury condos at $50k each (cost)

Econo Development produces less expensive properties for $25k each (cost)

Now, imagine that the neighborhood is attracting young college graduates with low incomes. Both developers put their properties up for sale.

Luxor condos sell for $45k

Econo condos sell for $30k

Luxor can set and will get a higher price for their condos than will Econo. They have a better product, so it certainly won’t sell for less than Econo condos. But the question is whether building the nicer condo was worth it — or whether they “overbuilt.” At the selling prices above, I’ve implied that it wasn’t: the extra $25k in construction cost bought only an extra $15k in selling price. This is an example where Luxor Properties did not effectively implement “GEMO.” Things were “good enough” at the quality level that sells for $30k; the additional quality cost $25k, but buyers were only willing to pay an additional $15k for it.
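A back-of-the-envelope check of the condo numbers (all figures taken from the example above):

```python
# Costs and realized selling prices from the condo example.
luxor_cost, luxor_price = 50_000, 45_000
econo_cost, econo_price = 25_000, 30_000

luxor_margin = luxor_price - luxor_cost    # negative: Luxor loses money
econo_margin = econo_price - econo_cost    # positive: Econo profits

# The "overbuild": what the extra quality cost vs. what buyers paid for it.
extra_cost = luxor_cost - econo_cost                    # 25,000
extra_willingness_to_pay = luxor_price - econo_price    # 15,000
overbuild_loss = extra_cost - extra_willingness_to_pay  # 10,000 left on the table
```

Luxor loses $5k per unit while Econo makes $5k, and the gap between the $25k quality premium and the $15k buyers would pay for it is exactly where GEMO failed.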

Though real estate may take some time to come into equilibrium, we can see how a market incentivizes producers to produce at the point of just good enough.

But it is difficult for firms to determine the just good enough point because many firms’ operations are conducted outside of “traditional” markets.

Why Do Employee Specialization and Firms Make “GEMO” Difficult?

The connection between GEMO and firms is not immediately obvious. In part, this is because the firm is a surprisingly subtle concept.

The existence of large companies was an economic puzzle for quite a while. Economists had long assumed that most markets were efficient enough so that transactions ought to take place in a free market setting. Instead, we observed people coordinating with each other, agreeing only to work with one another. We observed entrepreneurs hiring people to conduct work instead of contracting out tasks.

On its face, it seems inefficient. Why not just contract someone to do the one thing you needed instead of hiring someone long-term and putting the entrepreneur at the risk of paying for their employee’s time when there is no work to be done?

This question was eventually answered in a seminal paper by Ronald Coase called The Nature of the Firm. Here he describes the puzzle:

Outside the firm, price movements direct production, which is coordinated through a series of exchange transactions on the market. Within a firm, these market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur-coordinator, who directs production. It is clear that these are alternative methods of coordinating production. Yet, having regard to the fact that, if production is regulated by price movements, production could be carried on without any organization at all, might we ask, why is there any organization?

And his answer:

The main reason it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of “organising” production through the price mechanism is that of discovering what the relevant prices are… The costs of negotiating and concluding a separate contract for each exchange … must also be taken into account…

[T]he operation of a market costs something and by forming an organisation and allowing some authority to direct the resources, certain marketing costs are saved.

For example:

If you think a business owner will need to contract a graphic designer for 100 projects throughout the year, then she may decide to just hire a graphic designer. Sure, the designer may have some downtime here and there, but the business owner will save money in the long run by engaging in just one transaction (the hire) instead of 100 transactions.

There are several other factors involved: risk preferences (someone risk averse may be willing to accept a lower-than-market rate for the safety of long-term work), individual management preferences (some people don’t like the market — selling, negotiating — and prefer long-term contracts), search costs (it takes time to find good contractors), and many others. But the general idea is that there are gains to coordination when transaction costs are sufficiently high.
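The hire-vs-contract tradeoff in the graphic designer example can be sketched with numbers. Everything below is an illustrative assumption (the original example only specifies 100 projects), including the idea that an employee accepts a slightly below-market rate in exchange for stable work:

```python
# Illustrative only: 100 one-off contracts vs. one long-term hire,
# where each market transaction carries a fixed transaction cost
# (searching for a contractor, negotiating, writing the contract).
projects = 100
market_rate_per_project = 500   # assumed contractor fee per project
transaction_cost = 150          # assumed cost per transaction

# Contracting: pay the fee plus the transaction cost 100 times.
contract_total = projects * (market_rate_per_project + transaction_cost)

# Hiring: one transaction (the hire), and a below-market effective rate.
employee_rate_per_project = 450
hiring_total = transaction_cost + projects * employee_rate_per_project

hiring_is_cheaper = hiring_total < contract_total
```

With these assumed figures, hiring wins ($45,150 vs. $65,000), which is Coase's point: the firm exists because it economizes on repeated transaction costs, even if the designer sometimes sits idle.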

What does this all have to do with GEMO? Well, in a coordinated environment, employees are specialists. A company might hire a data scientist to be … well … scientific. Science usually means “peer-reviewable” quality — a level of quality that is often above the just-good-enough threshold. So the data scientist is not incentivized to operate under the GEMO paradigm. Maybe in the free market, the data scientist would know at what point the cost of improving the quality of her product exceeded the value added; but in a firm environment, there is no such feedback mechanism other than her manager, who may not be familiar enough with the incremental costs of her work or the consequences of alternative approaches.

And this is just one example. As companies continue to afford employees more autonomy and encourage entrepreneurial culture, it is imperative that the incentives keep up so that employees do not misdirect their resources.

What Does GEMO Mean for Employees?

For an employee, GEMO can mean two things.

Interpretation 1:

Continue to work on a particular product until her own time could be better employed elsewhere.

In this case, a researcher decides to forgo copy-editing a report because she is aware that her salary warrants more valuable work than copy-editing. She chooses to deploy her resources to another task.

Interpretation 2:

Continue to work on a particular product until the incremental benefit to the client is less than the incremental cost to the client.

Here, the researcher is aware that the client will be billed, say, $150 per hour. She chooses not to spend an hour copy-editing a report because she suspects the client would not be willing to spend $150 for a few spelling corrections.

Note that Interpretation 2 is different from Interpretation 1 because, under Interpretation 1, the client may still be willing to “purchase” copy-editing for $150 per hour. The copy-editing should still be done, but it should not be done by an employee who can direct her time towards more profitable tasks.
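The two interpretations can be written as two different decision rules. In this sketch, the $150 billing rate comes from the text; the other two dollar figures are assumptions made up to show a case where the rules agree but for different reasons:

```python
billing_rate = 150             # $/hour billed to the client (from the text)
copyedit_value_to_client = 90  # assumed value the client places on copy-editing
analyst_best_alt_value = 300   # assumed value of the analyst's next-best hour

# Interpretation 2: skip the task if the client's incremental benefit
# is less than the incremental cost billed to them.
skip_interp2 = copyedit_value_to_client < billing_rate

# Interpretation 1: skip the task if the analyst's hour is worth more
# elsewhere -- even if the client would happily pay $150 for copy-editing,
# someone whose time is worth less should do it instead.
skip_interp1 = analyst_best_alt_value > billing_rate
```

Both rules say "skip" here, but Interpretation 1 compares the analyst's opportunity cost while Interpretation 2 compares the client's willingness to pay.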

Considering many companies are encouraging their employees to be more entrepreneurial, my suspicion is that the GEMO directive is likely a combination of both interpretations.

Successful entrepreneurs consistently offer advice on delegating, often citing effective delegation as the grounds for their success. In this sense, they are acutely aware of the value of their own time (a la Interpretation 1).

And successful entrepreneurs, of course, are aware of the cost-benefit split of their products. No successful entrepreneur would sell something for which MB<MC.

What Can Managers Do to Help Employees with GEMO?

Employees aren’t entrepreneurs. So asking an employee to think like one subtly violates an implicit contract. For the employee, working for a company instead of going out on their own is like saying, “I agree to work for a slightly below-market rate, as long as my employer gives me a safe position and tells me what to do.”

But the days of centralized entrepreneurialism are quickly fading. As I explained above, firms exist when transaction costs are prohibitively high. But many online platforms have created marketplaces where buyers of specialized human labor can find specialized labor producers with low transaction costs. In other words, the “linear” production model is being supplanted by a platform model. Networks like Upwork and M-Turk have greatly reduced the cost of transacting contracts. Today, it is much easier for would-be employees to employ a little bit of entrepreneurial spirit (i.e., joining a freelance platform and marketing themselves) and make a decent living.

The sooner managers can help employees understand that this platform model is the new model, the sooner employees will embrace it. Indeed, employees may have no choice.

More tactically, it will be imperative for the manager to communicate what types of work are valuable enough to warrant her employees’ time (GEMO Interpretation 1), as well as communicate the end-clients’ expectations of the work being done (GEMO Interpretation 2).

In short, somebody needs to be deciding which work satisfies MB>MC and which work does not. Of course, this sounds like a trivial conclusion; but the fact that GEMO is a word suggests that there is a complacency trap in overly specialized firms.

For example, an analyst may have high autonomy in writing a report. Once completed, he delivers the report to the client manager/consultant, who reviews and sends to the client. But nowhere in that process did someone really ensure that the analyst was deploying his resources efficiently.

By encouraging more entrepreneurial tendencies in their employees, firms will inadvertently encourage them to consider the value of their own time more strategically.

So What Does This Mean for Research?

In the short term, I suspect research buyers will not notice much of a difference in their end-reports. Perhaps a misspelling or two might slip through, maybe data visualizations may appear more hastily developed, but substantive differences won’t be noticed until later.

In the longer term, the industry may start to see differences in how research is conducted. Here, I give two examples of potential long-term implications of GEMO. I do not claim that these examples apply to all research suppliers, but rather, I describe two cases where I believe “GEMO” will provide the impetus for supplier employees to redirect their resources.

How Necessary is a Truly Representative Sample?

Research panels are costly to maintain. The benefit of using a large panel to source sample for research studies is the confidence research suppliers have in the projectability of their samples.

But competing sample suppliers (Google Surveys, M-Turk, Twitter) are becoming extremely inexpensive. This means that the marginal cost of using panels (that is, the difference in the cost of using a panel vs. the cost of using the next best option) is increasing. And while these “next best” sources may not claim to have the same projectability as carefully curated panels, researchers may soon realize that their clients don’t need the level of projectability that a panel provides … especially with alternative methods falling in price.

Are Research Suppliers Consulting at the Right Times?

Here is a story: I have heard a consultant tell a client, “Listen. Whether I give you my advice or not, you’re still paying me to consult.” This was after the client was less than enthusiastic about listening to the consultant discuss the data.

Although I don’t have experience on the client side of the research engagement, I suspect clients are often just looking for data, and less for analysis and consultative work. This suspicion becomes especially acute when I am forced to awkwardly request 30 minutes of my client’s time to discuss the findings.

I don’t doubt the value of consultation in research, but I do believe the consultation is more valuable in the design phase than in the reporting phase. So, in this sense, “GEMO” may discourage employees from overanalyzing research findings and encourage them to redeploy those resources toward design and execution.

These are just a handful of examples that seem to apply to the work I am most familiar with. The point here, however, is not to make sweeping generalizations about the ways in which “GEMO” may disrupt the research industry, but rather to offer a few possible examples.

A Good or A Bad Thing?

The predictions in the previous section are, of course, speculative. I don’t think representative panels will go away; so buyers looking for that level of representativeness need not worry.

At the end of the day, GEMO will be a good thing for research buyers if deployed correctly. By giving all employees the latitude to determine what is valuable and what is not, employees will be better equipped to use their time in the most efficient way possible. These are the same employees who (1) have deep relationships with their clients and are the best at answering what their clients see as valuable and (2) are typically the best people to assess the value of their time.

Managers operating under the old-time principle that “their employees work for them” must either adopt the new paradigm — the manager works for his employees — or else find themselves with fewer and fewer employees to manage.

At the same time, employees unwilling to take responsibility for managing their own time and ensuring the value of their work will quickly be displaced by employees who will.

Ultimately, GEMO is likely the first in several steps firms will take to decentralize decision-making authority, and empower their employees with the same. Firms that are faced with low-cost competition due to the advent of marketplaces for specialized labor may be forced to start considering their business as more of a platform that matches researchers (employees) and research buyers.

At the same time, research firms with large assets (e.g., panels, a reputation for high quality) will need employees to maintain those assets. For example, the management of a panel is something that ought to have centralized leadership since one employee’s work on one aspect will impact many others. So, I do not think that “GEMO” is the start of a major disruption. I do, however, believe that employees of research firms will soon find themselves responsible for things that they were once not responsible for. And just as importantly, research firms will need to offer incentive structures that reflect this increased decentralization of power.

Missing the Point About Behavioral Economics

Casual observation on LinkedIn or marketing websites reveals articles touting the applicability of behavioral economics to marketing strategies. Some examples include:

  • Applying behavioral economics to research physician decision-making
  • Can behavioral economics inform the ad research process?
  • Why behavioral economics is the future of shopper insights

These articles, while insightful in their own right, fundamentally misunderstand what behavioral economics is. Rather than being a tool for marketers and researchers to use, it is simply an observational discipline: an exercise in modeling and understanding why people behave the way they do, not in suggesting how people should behave.

Behavioral Economists Prefer Observation

There’s a person economists study named Homo Economicus. We love this guy because he’s totally rational and it’s really easy to predict what he’s going to do. To give you an example of how this guy behaves, there’s something called the ultimatum game. This is the basis for many economics experiments and has revealed a lot about how people behave. Here are the rules:

  • Take two people: Person A and Person B.
  • Person A gets $10 and proposes an offer to Person B. In the offer, Person A distributes the money between herself and Person B. (For example, A gets $7; B gets $3).
  • Person B decides whether to accept the offer.
  • If the offer is accepted, the money is split based on what Person A said.
  • If the offer is rejected, nobody gets any money.

Now, the rules are kind of absurd. What person would want to play that game? But leave it to economists (and yes, I’ve done this before) not only to devise the rules but to make people play it.

Think for a moment: how much would you offer if you were Person A? Why not more or less? Is there any offer you would reject as Person B? If someone offered you a penny, would you “punish” them for making such a measly offer?

In experimental settings, we usually see offers below 20% of the initial pie ($2 in the case above) get rejected. And I totally get this. Suppose I’m Person B and Person A offers a $9-$1 split (Person A gets $9; I get $1). Being offered a meager $1 while someone else gets $9 flies in the face of my sense of fairness. So I will give up $1 just to teach you a lesson. Also, I may not want to appear to be a pushover. Maybe getting a reputation as a pushover will result in worse deals in the future.

If, instead of me, Homo Economicus were Person B, though, he would accept an offer as small as one penny. A penny is better than nothing, after all. And Mr. Economicus is probably never going to encounter Person A again, so there’s really no incentive to spend money developing a reputation.
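The two responder types can be sketched in a few lines of code. This is a minimal illustration, not an experimental model; the 20% fairness cutoff is an assumed round number in the spirit of typical lab findings.

```python
# Minimal sketch of the two ultimatum-game responders.
# The 20% fairness cutoff is an illustrative assumption.

PIE = 10  # dollars Person A gets to split

def homo_economicus_accepts(offer):
    """A perfectly rational responder: anything beats nothing."""
    return offer > 0

def human_accepts(offer, fairness_cutoff=0.2):
    """A real-world responder tends to reject offers below ~20% of the pie."""
    return offer >= fairness_cutoff * PIE

offer = 1  # Person A proposes a $9-$1 split
print(homo_economicus_accepts(offer))  # True: a dollar beats nothing
print(human_accepts(offer))            # False: rejected to punish unfairness
```

The divergence between these two functions on low offers is exactly the prediction error discussed next.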

What this has to do with behavioral economics is that the actual results are so different from the predicted results. (Like, sooooo different: we are off by basically a factor of infinity.) We had a model of human behavior, but that model failed to predict what actual humans do. It is the role of the behavioral economist to observe these deviations from prediction and refine or develop a model of human behavior that is consistent with previously accepted doctrine as well as the new insights.

Why Are You Asking Us?

Now, here’s where you have it backwards.

Behavioral economics is NOT a set of directions to marketers. Behavioral economists live in the abstract. Psychological insights may provide intuition or after-the-fact justification for prediction errors, but at the end of the day, a consumer is just another data point to us.

This whole time we economists have sat quietly in the corner of the room watching you and your consumers have a conversation, documenting your behavior – how much stuff is transferred at a given price, how much time do consumers spend looking for a better price, which marketing strategies work better than others, etc. Suddenly, one of us coughed (thanks, Rich), and you’ve spotted us. Now you’re asking what you should say next. Just ignore us. Please. We don’t want our existence to interfere with your natural behavior. (Oh hey, that’s an observation bias reference!).

Seriously, you are way better at marketing than we are. The only thing we have to offer is a (probably incomplete) theory about how you behave. Have you ever watched a nature documentary and seen the tiger ask David Attenborough what to do in the middle of a chase? No. Are you really going to ask a bunch of geeky economists how to talk to your customers? Please don’t.

Ok, Maybe Some of It You Can Use

I admit I’m being a little cheeky. Having someone observe and document your own behavior is probably beneficial for your long-term success. By developing insights on how the best marketers capitalize on human bias, you can find ways to adjust your own strategies.

So, yes, I’m sure there are ways for you (the marketer) to use the research of BE to help your success. But know that for the most part, you were probably doing the things described in the BE literature already. I’m sure there are insights to be applied, but they are potentially overstated.

David Sacks’s Famous Napkin Sketch for Econ 101

This is a total Econ nerd post, but still, I think, informative for a casual reader.

There’s a famous sketch drawn by David Sacks – a Silicon Valley VC. He drew the picture below describing how Uber’s business model was based on a positive feedback loop. I think it’s pretty intuitive… have a look:

[Sacks’s napkin sketch: more drivers –> wider coverage –> faster pickups and lower prices –> more demand –> more drivers]

It’s an example of positive cross-network effects. When there are more suppliers, the product improves and more consumers are drawn into the market. The increase in consumers then has a positive cross-network effect on suppliers (drivers). Drivers have less downtime and complete more fares, so Uber can charge lower prices, assuming drivers expect a fixed hourly rate. (If you wanna get really nerdy, then we can incorporate labor supply elasticity, but that’s another post).
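The loop on the napkin can be sketched as a toy iteration. The multipliers below are purely hypothetical and exist only to show the amplification, not to model Uber’s actual numbers.

```python
# Toy sketch of Sacks's positive feedback loop (all multipliers hypothetical).

def simulate_flywheel(drivers, rounds=5):
    """Each round: more drivers -> wider coverage -> more riders -> more drivers."""
    history = [drivers]
    for _ in range(rounds):
        riders = 2.0 * drivers            # wider coverage draws riders in
        drivers = drivers + 0.1 * riders  # more demand draws drivers in
        history.append(drivers)
    return history

print(simulate_flywheel(100.0))
# With these multipliers, each round grows the driver pool by 20%: 100 -> 120 -> 144 -> ...
```

The specific growth rate is arbitrary; the point is that each pass through the loop amplifies the last one.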

There’s nothing trivial about this sketch. It helps explain why in 2015 VCs were willing to value Uber — a transportation company with no cars and no drivers — at $68 Billion. $68 BILLION.

I’d like to unpack this from an Econ 101 POV.

1. More drivers –> Wider Coverage (Bottom right)

This is the most straightforward piece. Basically, the market expands. In the image below, I assume that the supply and demand curves shift out in a way that is price neutral.

[Figure: supply and demand curves both shift outward; equilibrium price unchanged]

2A. Wider Coverage –> Faster Pickups

This is one of the key parts of network effects. The whole is greater than the sum of its parts. The fact that Uber can quickly and cheaply connect drivers and riders means transaction costs drop dramatically. And secondly, more connections between drivers and riders can be made. This reduces the wait time for a rider and improves the quality of the product. Now riders are willing to pay more since the product is better, and demand increases.

[Figure: demand curve shifts outward; riders pay more for a better product]

2B. Wider Coverage –> Reduced Driver Downtime –> Reduced Fare Price

Drivers get fares more frequently because of the network effects discussed above. This reduced downtime means drivers complete more fares per hour. Wages equal (fares/hour) x (price/fare). But wages are competitive. So, as fares/hour increases, price/fare decreases to keep wages at the competitive labor rate. (This is an important point. The model assumes that drivers are competitive with respect to their time, not the amount of “work” they do. For them, an hour of work is an hour of work, regardless of the number of fares they get.)

To see how this affects the supply curve, imagine that a driver once got 2 fares per hour and Uber charged $10 per fare. Per hour, the driver makes $20, less whatever Uber’s share is.

With network effects, the driver can now get 4 fares per hour. But the driver doesn’t get $40 (less Uber’s share) because wage prices for drivers are competitive. Uber knows this, so they reduce the fare. This is an interesting dynamic, and it results in a “stretching out” of the supply curve. From the market’s perspective, at a given price, there are now twice as many fares.
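The arithmetic in this step reduces to one line: hold the competitive hourly wage fixed and solve for the fare. The $20/hour wage and the fare counts below are the hypothetical numbers from the example above.

```python
# Competitive-wage logic from step 2B: wage = (fares/hour) x (price/fare).
# Competition pins the hourly wage, so higher throughput means lower fares.

def competitive_fare(hourly_wage, fares_per_hour):
    """Fare price that keeps the driver's wage at the competitive hourly rate."""
    return hourly_wage / fares_per_hour

print(competitive_fare(20, 2))  # 10.0 -> before network effects
print(competitive_fare(20, 4))  # 5.0  -> throughput doubles, the fare halves
```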

[Figure: supply curve “stretches out”; at a given price, twice as many fares are supplied]

Very cool stuff.

Why fewer ad hominem attacks on LinkedIn?

There’s an archetype of a hypercompetitive, successful businessperson. I’d expect this person to be kind of cutthroat and brutally honest.

I’d also expect this person to be active on LinkedIn. (At least at one point in LinkedIn’s history).

So, why are ad hominem attacks so uncommon on LinkedIn, and so much more common on basically every other social medium? It seems counterintuitive: someone active on LinkedIn has an incentive to bring down their competitors, and what better way to do that than through ad hominem attacks?

Ok, I’m being admittedly dense right now. But I can think of a few reasons why ad hominem attacks are so rare:

1. Reputation is more important on LinkedIn; so people play it safer

We like to think that we can be different people on different platforms. The person I am on LinkedIn is certainly different than the person I am on Snapchat. But on Snapchat, I don’t have as much to lose.

On LinkedIn, I’m much more risk averse. I play in the middle because alienating somebody potentially means losing business or not getting a job. There’s no reason to stick my neck out if I don’t think the upside is very high.

2. The market acts as an arbiter in disagreements; so there’s no need for nastiness

Say I make a claim on LinkedIn that some new social media platform is going to blow up; everyone should advertise on it, invest in it, etc.

If you disagree with me, and are pretty confident, then who cares? There’s no sense in engaging in a debate with me if you know the market will punish me for being wrong. In fact, it’s probably in your best interest if I keep thinking I’m right — more money on the losing side of the bet for you to gain.

On the other social media platforms, though, disagreements usually are not settled by a market arbiter. If I say, “the ACA is the worst piece of legislation ever,” there’s nowhere for me to put my money where my mouth is. No obvious mechanism to tell me whether I was right or wrong five years down the line. (People might disagree on that point, but realistically, five years from now Dems are going to say it was a success and GOPs are going to say it was a failure regardless of “objective” measures.)

So, my friends call me stupid until either I come around to their way of thinking or I leave the circle of friends who were calling me stupid.

In the end, I think it all comes down to what each platform is trying to achieve. In a truly “social” platform, if the goal is to make friends and form social groups, then those groups need a mechanism to weed out who should be a member and who should not.

On a business platform, though, people’s opinions don’t really matter all that much. I go on LinkedIn for new business opportunities … not to galvanize my social network.