October 2015

This will come as a surprise to a lot of people, but in some cases it's possible to detect bias in a selection process without knowing anything about the applicant pool. Which is exciting because among other things it means third parties can use this technique to detect bias whether those doing the selecting want them to or not.

You can use this technique whenever (a) you have at least a random sample of the applicants that were selected, (b) their subsequent performance is measured, and (c) the groups of applicants you're comparing have roughly equal distribution of ability.

How does it work? Think about what it means to be biased. What it means for a selection process to be biased against applicants of type x is that it's harder for them to make it through. Which means applicants of type x have to be better to get selected than applicants not of type x. [1] Which means applicants of type x who do make it through the selection process will outperform other successful applicants. And if the performance of all the successful applicants is measured, you'll know if they do.

Of course, the test you use to measure performance must be a valid one. And in particular it must not be invalidated by the bias you're trying to measure.
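
In code, the test is nothing more than a comparison of means among those who were selected. Here is a minimal sketch in Python; the function name and the sample numbers are invented for illustration, not taken from any real study:

    # A minimal sketch of the test. Note that you only need data about
    # the selected applicants: which group each belongs to, and some
    # later measure of their performance.
    from statistics import mean

    def mean_performance_by_group(selected):
        """selected: (group, performance) pairs for applicants who
        made it through the selection process."""
        by_group = {}
        for group, perf in selected:
            by_group.setdefault(group, []).append(perf)
        return {group: mean(perfs) for group, perfs in by_group.items()}

    # Invented numbers. If the process were unbiased, and the groups
    # had roughly equal distributions of ability, the means should be
    # about equal. A consistently higher mean for one group suggests
    # that group had to clear a higher bar to get selected.
    selected = [("x", 1.5), ("x", 1.7), ("y", 1.0), ("y", 1.1), ("y", 0.9)]
    print(mean_performance_by_group(selected))  # {'x': 1.6, 'y': 1.0}

With real data you'd also want a significance test to rule out noise, but the comparison of means is the whole idea.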
But there are some domains where performance can be measured, and in those detecting bias is straightforward. Want to know if the selection process was biased against some type of applicant? Check whether they outperform the others. This is not just a heuristic for detecting bias. It's what bias means.

For example, many suspect that venture capital firms are biased against female founders. This would be easy to detect: among their portfolio companies, do startups with female founders outperform those without? A couple months ago, one VC firm (almost certainly unintentionally) published a study showing bias of this type. First Round Capital found that among its portfolio companies, startups with female founders outperformed those without by 63%. [2]

The reason I began by saying that this technique would come as a surprise to many people is that we so rarely see analyses of this type. I'm sure it will come as a surprise to First Round that they performed one. I doubt anyone there realized that by limiting their sample to their own portfolio, they were producing a study not of startup trends but of their own biases when selecting companies.

I predict we'll see this technique used more in the future. The information needed to conduct such studies is increasingly available. Data about who applies for things is usually closely guarded by the organizations selecting them, but nowadays data about who gets selected is often publicly available to anyone who takes the trouble to aggregate it.
Notes

[1] This technique wouldn't work if the selection process looked for different things from different types of applicants—for example, if an employer hired men based on their ability but women based on their appearance.

[2] As Paul Buchheit points out, First Round excluded their most successful investment, Uber, from the study. And while it makes sense to exclude outliers from some types of studies, a study of returns from startup investing, which is all about hitting outliers, is not one of them.
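
To see concretely why that's a mistake, here's a toy illustration (invented numbers, not First Round's data). When returns follow a power law, the single best investment can account for nearly all of the mean, so excluding it changes the conclusion entirely:

    # Toy numbers: returns on ten hypothetical startup investments,
    # as multiples of the money invested. One big hit, as is typical.
    returns = [0, 0, 0, 0, 0.5, 0.5, 1, 2, 3, 150]

    print(sum(returns) / len(returns))            # 15.7: a great portfolio
    print(sum(returns[:-1]) / len(returns[:-1]))  # 0.78: drop the outlier
                                                  # and it looks like a loss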
Thanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading drafts of this.