A few additional thoughts and comments on the article. First, no single set of data provides a perfect measurement unless the aspect one is attempting to quantify is extremely simple: for example, the temperature at a particular place at a particular time. What we are trying to measure here does not fall into that category.
On the other hand, what we are measuring is also not that complicated. The staffing and outlay data reported to HEDC, NCES, and other reputable organizations that record and study national educational statistics list exactly what the institutions themselves have reported. Moreover, any given organization applies the same criteria uniformly when defining categories of employment in a given year. It may happen that a particular classification changes from one year to the next, and that the change artificially affects the numbers reported for a particular institution across those years. In other words, one might see increases or decreases in certain types of staffing that are due primarily to the reclassification of a job category. We have no evidence that this actually occurred with any of the data presented in our study; it is only a theoretical possibility.
Even then, the inter-institutional comparisons remain valid, since any new classification scheme will be applied uniformly to all institutions for which the data are collected. It is, of course, possible that some numbers are misreported, but any error of that nature is the sole responsibility of the institution reporting the data.
Beyond that, when one arrives at the same conclusions from many different angles and from different repositories of institutional data, and when those conclusions coincide with the findings of other, independent studies (as ours did with the University Senate's 2016 FCBC report), a substantial degree of confidence in them is warranted.