Monday, November 30, 2020

Some New Articles, Posts on FRAND, Part 2

 1. Binxin Li and Chuanshu Chu published a post on the Kluwer Patent Blog.  The authors comment on the Xiaomi v. IDC and Huawei v. Conversant decisions, previously discussed on this blog here, and on the potential for global SEP cases to cause problems with regard to international comity.

2.  Deputy Assistant Attorney General Alexander Okuliar delivered a talk in Washington, D.C. on October 28, titled From Edison to 'New Madison':  Division Activity at the Intersection of Innovation, Competition Law, and Technology.  Nothing new here regarding the division's views on FRAND and SEPs, though there are some interesting comments on China's antimonopoly enforcement and its standards development initiatives.


3.  Pier Luigi Parcu and David Silei have posted a paper on SSRN titled An Algorithm Approach to FRAND Contracts.  Here is a link, and here is the abstract:


In the context of standards development, the current mechanism for negotiating FRAND royalties frequently leads to undesirable litigation. This is mainly because a relevant part of the information concerning the standard, required to stipulate complete license contracts, is revealed only after the standard itself has spread in the market. In this respect, we propose a litigation-reducing algorithm to determine the FRAND level of the licensing royalty. Unlike the current negotiation mechanism, this algorithm can be defined ex ante, so as to increase contract completeness, because it includes a Bayesian-updating rule able to address the presence of ex-ante uncertainty. We derive the algorithm from a generic oligopolistic-competition model, so as to make it applicable to both price and quantity competition. Simulations in a linear-Cournot framework suggest the algorithm calculates FRAND royalties and may be usefully applied to real-life cases.
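To make the abstract's idea concrete, here is a toy Python sketch.  It is not the authors' algorithm: it only illustrates recomputing a per-unit royalty in a symmetric linear-Cournot market as a Bayesian belief about the demand intercept is updated from noisy observations.  All parameter values, the normal-conjugate belief, and the fixed-share royalty rule are my own assumptions for illustration.

```python
# Toy sketch (not the paper's algorithm): a per-unit royalty in a linear
# Cournot market, recomputed as beliefs about demand are updated.

def cournot_quantity(a, b, c, r, n):
    """Symmetric Cournot equilibrium output per firm with inverse
    demand P = a - b*Q and marginal cost c + r (r = per-unit royalty)."""
    return max(0.0, (a - c - r) / (b * (n + 1)))

def update_belief(mean, var, obs, obs_var):
    """Conjugate normal Bayesian update of the demand-intercept belief."""
    k = var / (var + obs_var)          # Kalman-style gain
    return mean + k * (obs - mean), (1 - k) * var

def royalty(a_belief, c, share=0.2):
    """Illustrative rule: royalty as a fixed share of the believed
    per-unit margin (a stand-in for the paper's FRAND level)."""
    return share * (a_belief - c)

# Ex-ante belief: demand intercept ~ N(100, 25); marginal cost 10,
# slope 1, four licensees.  Observations are noisy demand signals.
mean, var = 100.0, 25.0
c, b, n = 10.0, 1.0, 4
for obs in (92.0, 89.0, 91.0):
    mean, var = update_belief(mean, var, obs, obs_var=9.0)
r = royalty(mean, c)
q = cournot_quantity(mean, b, c, r, n)
print(round(r, 2), round(q, 2))
```

The point of the sketch is only that the royalty rule can be fixed ex ante while the numbers it produces adjust as market information arrives, which is the contract-completeness idea in the abstract.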


4. David Teece has posted a paper on SSRN titled Patent Counting and the 'Top Down' Approach to Patent Valuations: An Economic and Public Policy Appraisal of Reasonable Royalties.  Here is a link, and here is the abstract:


In many circumstances it is helpful, and sometimes necessary, to assess (possibly even to quantify) the technological prowess of a business enterprise, either overall or with respect to particular fields of application, or possibly with respect to the firm’s relative position in an industry. In such circumstances, it is tempting to use as a measure the number of patents that has been granted to a firm. However, patent counts are an imperfect and unreliable metric. Using them may create an aura of accuracy, but it is false (scientific) accuracy for the reasons discussed in this article. In particular, the “top-down” approach to the valuation of standard-essential patents (SEPs), which relies heavily on patent counting, is a poor surrogate for the determination of the value of patented technologies.


I will start with some basics. In scientific inquiry, precision refers to how closely repeated measurements of a variable agree with one another, while accuracy refers to how close a measurement is to the true value of what is being measured. Precision is, therefore, independent of accuracy. Indeed, it is possible to be precise but highly inaccurate. Accuracy is, of course, more important than precision. In this paper, I will show that patent counting, while having the possibility of being precise, does not always meet that criterion in part because of ambiguities as to scope. For instance, sometimes standards are at issue with patents “reading on” or being “essential” to one or more technical standards. However, there may be ambiguities around how many patents in a given portfolio are in fact essential, versus simply declared essential by the owner or some third party.

In this article, I make two suggestions. First, patent-count metrics are at best poor proxies of technological strength or value. This is not just because of inaccurate patent counts in the numerator or denominator of some index. It is also because there is at best only a weak connection between even well-specified patent indices and the underlying economic value of a patent or patent portfolio. It is often the case that one will have to look downstream to the user to figure out the incremental value that the technology yields to the consumer.

Second, when it comes to valuing intellectual property that “reads on” a standard, the numerical proportionality of standard-essential patents (SEPs) is a bogus measure. It is unlikely to measure the relative value of patents, let alone the value of technology. The problem is compounded because numerical proportionality requires the determination of a “total value” associated with all patents that “read on” a standard, which has typically been arrived at arbitrarily.
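For concreteness, the numerical-proportionality rule Teece criticizes can be written in a few lines.  This sketch uses hypothetical figures (a 6% aggregate rate and invented SEP counts; none come from the paper) to show how sensitive the top-down result is to whether one counts declared SEPs or only those actually found essential.

```python
# Toy illustration of the "top-down" proportionality rule: a firm's
# royalty rate equals the aggregate rate for the standard times the
# firm's share of the SEP count.  All numbers here are hypothetical.

def top_down_royalty(aggregate_rate, firm_seps, total_seps):
    """Numerical proportionality: apportion the aggregate rate by count."""
    return aggregate_rate * firm_seps / total_seps

# Same firm, same standard, two ways of counting:
declared = top_down_royalty(0.06, 2000, 25000)   # all declared SEPs
essential = top_down_royalty(0.06, 600, 7000)    # only truly essential SEPs
print(round(declared, 4), round(essential, 4))
```

Even before reaching Teece's deeper objection (that counts are poor proxies for value), the arithmetic shows the result turns on contestable inputs: the assumed aggregate "total value" rate and which patents make it into the numerator and denominator.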


My initial response to the paper is that, while Teece may be correct in theory, two questions remain: (1) are the methodologies for estimating SEP value at a more granular level reliable, and (2) even if so, are they worth it?  In other words, do the benefits of marginally increasing accuracy outweigh the additional administrative and adjudicative costs?  I'm a bit skeptical, given the lack of evidence regarding the materiality of the patent incentive in this space.
