Everyone seems to be talking about “big data” these days — what it is, where it is, why it’s important, who owns it and how to mine it. It’s like a gold rush with numerous prospectors jumping into the game, dreaming of striking it rich and cornering a market before others discover the location and size of their mine. But there’s one small problem: The prospectors often don’t quite know what they’re doing when it comes to pricing and selling their bag of gold dust — or in the case of data miners, their trove of digital data.
They’re not as prepared as they should be when they finally confront a buyer armed with inside market knowledge and other business information the seller lacks. The result: the seller makes fewer sales, and less profit, than hoped. Of course, today’s major data miners, such as Acxiom, Facebook, Google and the credit-rating agencies, are anything but naive sodbusters who threw down their plows for pickaxes.
They’re incredibly sophisticated tech companies at heart, ones that know what they’re doing: where to look for, and how to collect, potentially fascinating and lucrative digital data that corporate buyers might find valuable about their customers, products and overall market prospects. What amazes us, though, is how little companies truly know about how to actually design and price information for sale to others.
Information isn’t like your proverbial widget that a buyer either needs or doesn’t need. Information is more fluid, more abstract — and it’s important for sellers to understand that not all data buyers and their needs are the same. Some buyers have partial information about, say, the online shopping patterns of their customers, while others are relatively clueless about these patterns. How do you price information for such disparate buyers?
That’s one of the questions we sought to answer when we first started the research that ultimately led to our new study. On the surface, what we found appears to be basic common business sense when it comes to selling just about any product: offering multiple versions allows the seller to segment the market and increase profits. But to develop a better, more structured and quantitative framework for analysing the sale of information, one must take into account the uncertainties a data buyer faces when seeking information, and then customise a data product to those needs.
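To see the versioning logic, here is a minimal numerical sketch in Python. The valuations are invented for illustration, not taken from our paper: a “high-end” buyer and a “lower-end” buyer, a full data set and a partial one, and a check that a two-version menu out-earns any single posted price.

```python
# Illustrative two-type versioning example; all numbers are invented.
values = {
    "high": {"full": 10, "partial": 6},   # a "high-end" buyer
    "low":  {"full": 5,  "partial": 4},   # a "lower-end" buyer
}

# Best single price for the full package: charge 5 and sell to both buyers,
# or charge 10 and sell only to the high type. Revenue is 10 either way.
single_price_revenue = max(2 * values["low"]["full"], values["high"]["full"])

# Two-version menu: price the partial package at the low type's value, then
# push the full package's price up to the point where the high type is still
# (weakly) happier buying full: 10 - p_full >= 6 - p_partial.
p_partial = values["low"]["partial"]                                       # 4
p_full = values["high"]["full"] - (values["high"]["partial"] - p_partial)  # 8
# The low type prefers partial (4 - 4 = 0) to full (5 - 8 < 0), so the menu
# self-selects and revenue rises from 10 to 12.
menu_revenue = p_partial + p_full

print(single_price_revenue, menu_revenue)  # 10 12
```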
Think of it in terms of today’s cable companies’ selling different “tiers” of services and programming to home viewers – such as “basic,” “premium,” “super-premium” or “super-duper premium” plans, or whatever they call their tiered cable packages these days. What if, as we’re now learning, those broadly defined cable tiers are not adequate for many, if not most, cable customers?
The trick with big data, we’ve found, is that sellers need to screen more carefully, sometimes on an individual basis, exactly what potential buyers need — including whether they’re “high-end” buyers or “lower-end” buyers. Not only that: sellers themselves have to better mine, and then analyse, the information they’ve gathered, and offer up new and better customised products to customers.
Credit-rating firms, for example, offer banks different levels of information, such as basic “red-flag” products about the credit histories of loan applicants. But they can also sell more detailed information, such as alerts about any late payments or bankruptcy filings by borrowers after their loan applications have been approved. That’s extremely specific, and valuable, information for banks.
The key with partial packages is to be systematic about the types of mistakes the buyer of an incomplete data set can make, having decided to forgo the full information available from the seller. For example, lenders who fail to acquire all available red flags might give a loan to an unworthy borrower, but no good applicant is ever turned down because of it.
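Here is a small simulation of that one-sided structure, under assumptions of our own (four hypothetical red-flag sources and a 20 per cent share of risky applicants), rather than the paper’s formal model:

```python
import random

random.seed(1)
N_SOURCES = 4            # hypothetical red-flag data sets on offer
N_APPLICANTS = 10_000

def simulate(sources_bought):
    """Count (risky borrowers approved, good applicants rejected)."""
    bad_approved = good_rejected = 0
    for _ in range(N_APPLICANTS):
        risky = random.random() < 0.2                 # assumed share of risky applicants
        # A risky applicant's red flag sits in one source, chosen at random.
        flag_source = random.randrange(N_SOURCES) if risky else None
        approved = flag_source not in sources_bought  # approve unless a flag is seen
        if approved and risky:
            bad_approved += 1
        if not approved and not risky:
            good_rejected += 1
    return bad_approved, good_rejected

# Buying half the sources lets some risky loans slip through, but a clean
# applicant carries no flag anywhere, so good applicants are never rejected.
print(simulate({0, 1}))        # roughly (1000, 0)
print(simulate({0, 1, 2, 3}))  # (0, 0): full information, no mistakes
```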
When selling information, it’s absolutely critical not to clutter customised packages with “noise” that merely degrades the product (much as Tesla has sold cars whose battery life was deliberately limited in software). Moreover, the optimal number of customised packages is limited by the complexity of buyers’ needs: if buyers need data to assist with simple decision problems, the seller should offer just a few partial packages, even if buyers are very diverse.
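To make the noise point concrete, here is a toy comparison of our own devising, reusing the assumed 20 per cent risk share from above. A cheaper red-flag package built by omitting flags produces only one-sided mistakes, while one built by adding spurious flags produces both kinds at once:

```python
import random

random.seed(3)
applicants = ["risky" if random.random() < 0.2 else "good"
              for _ in range(10_000)]                 # assumed 20% risky

def mistakes(flagged):
    """Count (risky approved, good rejected) when the lender trusts the flags."""
    bad_approved  = sum(a == "risky" and not f for a, f in zip(applicants, flagged))
    good_rejected = sum(a == "good" and f for a, f in zip(applicants, flagged))
    return bad_approved, good_rejected

# Omission: keep each true flag with probability 1/2, never invent one.
omitted = [a == "risky" and random.random() < 0.5 for a in applicants]

# Noise: the same 1/2 chance of keeping a true flag, plus spurious flags
# on 10% of clean applicants.
noisy = [(a == "risky" and random.random() < 0.5) or
         (a == "good" and random.random() < 0.1) for a in applicants]

print("omission:", mistakes(omitted))  # roughly (1000, 0): one-sided errors
print("noise:   ", mistakes(noisy))    # roughly (1000, 800): both kinds of errors
```

At the same rate of missed flags, the noisy version also turns away good applicants, so it merely degrades the product.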
The bottom line is that data sellers can indeed sell comprehensive packages of data to corporate buyers — but they had better be prepared to sell partial packages as well, tailored to the needs of individual buyers.
♣♣♣
Notes:
- This blog post is based on the authors’ paper The Design and Price of Information, forthcoming in the American Economic Review.
- The post gives the views of the authors, not the position of LSE Business Review or the London School of Economics.
- Featured image credit: Big data, by wynpnt, under a CC0 licence
Dirk Bergemann is Douglass and Marion Campbell Professor of Economics at Yale University. He has secondary appointments as Professor of Computer Science at the School of Engineering and Professor of Finance at the School of Management. He has been the Chair of the Department of Economics since 2013. He joined Yale in 1995 as an assistant professor, having previously served as a faculty member at Princeton University. He has been affiliated with the Cowles Foundation for Research in Economics at Yale since 1996 and is a fellow of the Econometric Society. His research is concerned with game theory, contract theory and mechanism design.
Alessandro Bonatti is an Associate Professor of Applied Economics at the MIT Sloan School of Management. His research focuses on (a) the provision of incentives in research-intensive and creative industries, and (b) the impact of technological advances on firms’ online advertising and pricing strategies. Bonatti holds an MA, an MPhil, and a PhD in economics from Yale University.
Alex Smolin is a Postdoctoral Researcher at the Institute for Microeconomics at the University of Bonn. His research focuses on information design and mechanism design in various economic environments. Smolin holds an MS from the Moscow Institute of Physics and Technology, an MA from the New Economic School, and an MA, an MPhil, and a PhD in economics from Yale University.