Old 09-11-2015, 02:52 PM   #10
Josquin
Minors (Triple A)
 
Join Date: Sep 2015
Location: Toronto
Posts: 228
Quote:
Originally Posted by RonCo View Post
So your definition of accuracy seems to be:

A = Guys he said would be good, and panned out
B = All the guys he said would be good

Accuracy = A/B

I therefore suggest that a measure of a scout's lack of accuracy might be made of two parts--guys he said would be good who don't turn out to be, plus guys he did not identify as potentially good who turned out to be good--divided by all the players he looked at:

C = Guys he said would be good, but did not pan out
D = Guys he looked at and did not identify as potentially good, but who eventually became good
E = All the guys he looked at

Error Rate = (C+D)/E

This is interesting to think about, because the fact that the denominator is different for these two measures helps frame the problem.

At the end of the day, the only thing a team really cares about in the above is A (the number of good players they get from their scouting department). They also want to limit "C" guys, because those represent investment with no return. Folks don't tend to talk much about "D" players except in the abstract.

In the old days, I think scouting staffs distinguished themselves through sweat equity, meaning a scout who drove through podunk cities for days and days to see 1,000 players was a better scout than one who saw only 100--merely because that increased the number of guys he said would be good and, assuming all scouts have roughly the same hit rate, would yield more "A" players.

Today, scouting is ubiquitous. Everyone sees everyone. So the impact of sweat equity is considerably less (except in some foreign scouting areas...though that is changing, too). Given that, I suspect that the actual value of one professional scout (guys who have spent their lives doing this) over another is very, very slim.

In the above, I suggest the best measure of a scout might actually be:

Value = (A-C)/(A+B+C+D) (or some weighted factor of these)

And that the differentiation between experienced scouts is which guys fit into A and D (though the numbers of As and Ds are probably about the same).

Just my complicated .02
If you look at scouting as a pure binary classification problem (i.e. looking at all the prospects and categorizing each one as "good" or "bad"), then there are established measures, like the F1-score, for evaluating how accurate a classifier is. As you discuss, you want to minimize both the false positives (players classified as "good" who don't pan out) and the false negatives (players classified as "bad" who turn out to be good).
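
For concreteness, here's a minimal Python sketch (the counts are invented) of how RonCo's letters map onto the standard classifier terms. Note that his B is just A + C, so his Accuracy = A/B is what classification people call precision:

Code:
# Illustrative counts only: map RonCo's letters onto confusion-matrix terms.
A = 12    # true positives: called "good" and panned out
C = 48    # false positives: called "good" but did not pan out
D = 20    # false negatives: never flagged, but became good
E = 500   # every player the scout looked at

precision  = A / (A + C)   # RonCo's Accuracy = A/B, since B = A + C
recall     = A / (A + D)   # share of the eventual good players the scout caught
f1         = 2 * precision * recall / (precision + recall)
error_rate = (C + D) / E   # RonCo's Error Rate

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"F1={f1:.2f} error_rate={error_rate:.2f}")

The F1-score is the harmonic mean of precision and recall, so it punishes a scout who is strong on one and weak on the other.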

As you point out in another post, this is probably not the right way to look at scouting in OOTP. The better way to look at it is how close the scout's estimates of a prospect's potential ratings come to his "true" ratings (which are visible on the Edit Player screen in Commissioner Mode). There is no binary threshold between "good" and "bad" prospects -- it is more of a continuum, and the scout's estimated ratings may or may not be close to reality. The scout's accuracy would thus be measured as the average deviation between estimated and true ratings.
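
Here's a sketch of that continuous measure, assuming you could somehow pull each prospect's scouted and true potential ratings out of the game (the pairs below are invented, on a 20-80 scouting scale):

Code:
# Hypothetical (scouted, true) potential ratings for one scout's prospects.
pairs = [(55, 60), (70, 45), (40, 40), (65, 70), (50, 35)]

# Mean absolute deviation: lower means a more accurate scout.
mad = sum(abs(scouted - true) for scouted, true in pairs) / len(pairs)
print(f"mean absolute deviation: {mad:.1f} rating points")

One design choice worth flagging: mean absolute deviation treats a miss of 10 as exactly twice as bad as a miss of 5. If you think one huge whiff is worse than several small misses, root-mean-square error would punish the big misses harder.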