Justification of the Legal Academy and Theory Survey


The legal academy and theory survey polls members of the legal academy about (i) their perception of the “centrality” of different areas of law within the legal academy and (ii) their views on major questions of legal theory. Here we briefly detail our reasoning behind the decision to run the survey, the types of questions and issues to include, and the formulation of the question and answer choices for each part of the survey.

What is the purpose of running the survey?


Broadly speaking, we are interested in learning more about the legal academy: What areas of law do members of the academy see as most central, and which legal theories do members of the academy endorse (or reject)?


One motivation of this study relates to the mismatch between the sheer volume of legal theory scholarship and the dearth of documented academic consensus resulting from this scholarship. Academics have long debated natural law vs. positivism, realism vs. formalism, originalism vs. living constitutionalism, and many other theories. But there is no systematic account of the legal-academic community’s propensity to endorse or reject these views. By surveying legal academics on their beliefs regarding the most often-debated questions, we hope to begin to resolve this mismatch.


We think of this project primarily as a form of sociology of law: we want to know how legal scholars see the field of law and what they think about some of the big questions within it. Prior work in other fields has shown that academics often hold inaccurate sociological beliefs about the distribution of views among their peers (see Bourget & Chalmers, 2014), and answering these sociological questions may be of interest and benefit not only to future legal academics and legal historians but also to legal academics today.


There is also a psychological aspect: the study aims to uncover evidence about why the legal community understands the field and its questions in this way. While we believe this psychological aspect is interesting and informative in its own right, it may also bear on a potential jurisprudential aspect. Legal theory questions are certainly not settled by what 51% of law professors believe, but broad expert consensus in favor of one view might offer some normative weight in favor of that view, particularly if that consensus cannot be accounted for by factors that are irrelevant to expertise, such as gender or race.


Furthermore, law by its nature concerns not just law professors but society at large. The latter may be curious to know what legal theory experts think about the nature of the legal system that surrounds and affects them on a daily basis. Thus, a legal theory survey can provide evidence that is of interest and benefit to a variety of audiences, both within and beyond legal academia.

Justification of the areas of law used in the study and the legal theory topics


The survey includes two parts, which respectively consist of questions about (i) the centrality of different areas of law within the legal academy and (ii) specific issues in law and legal theory. Together, these parts are designed to gather as much sociological, psychological, and jurisprudential insight as possible in order to satisfy the aims laid out above.


With regard to the centrality part, we sought objective “smaller” and “larger” lists of areas of law. We relied on (a) the 18 areas reflected in Jotwell: https://jotwell.com/, and (b) the 107 areas listed by the Association of American Law Schools (AALS) in their FAR recruitment material. We combined these lists, eliminating redundant areas. Each participant is asked about all 18 areas from the “smaller” list and a random subset of 7 from the “larger” list. Each participant may also choose to evaluate one additional area (e.g. if they would like to evaluate an area from the larger list that was not among the 7 randomly presented).
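The per-participant assignment of areas can be sketched as follows. This is a minimal illustration only: the list contents, function name, and area labels are placeholders, not the survey's actual materials.

```python
import random

# Placeholder area lists; the actual survey draws on the 18 Jotwell areas
# and the 107 AALS FAR areas (the names below are illustrative only).
smaller_list = [f"core_area_{i}" for i in range(18)]
larger_list = [f"other_area_{i}" for i in range(107)]

def areas_for_participant(rng=random):
    """Each participant rates all 18 'smaller' areas plus a random
    subset of 7 drawn (without replacement) from the 'larger' list."""
    return smaller_list + rng.sample(larger_list, 7)

assignment = areas_for_participant()
print(len(assignment))  # 25 areas per participant (plus one optional write-in)
```

Randomizing the seven “larger”-list areas spreads coverage of all 107 areas across respondents while keeping each individual questionnaire short.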


Our reasoning behind using these two lists was threefold:

  • First, because these two lists were established independently of this survey and with no knowledge of its hypotheses, using them reduces the potential for personal bias or prejudice that would come with selecting areas ourselves.

  • Second, given the vast number of items on the combined lists, we are able to examine law professors’ beliefs about a wide range of areas, rather than only the main areas (as would be the case with a single smaller list).

  • Third, given that the two lists are well-known and respected within legal academia, we hope that using these lists is understood as a reasonable choice by the legal-academic community.


Nevertheless, there are many other important areas that are not reflected in the smaller list (drawn from Jotwell) or the larger list (drawn from the AALS). The survey welcomes participants to include written feedback, including suggestions of areas to include in future iterations of the survey.


With regard to the legal theory part of the survey, we constructed an initial list of questions and answer choices, with some emphasis on breadth and diversity of issues reflected. The list was circulated to a diverse set of U.S. law professors, and we received and incorporated feedback from approximately 20 legal academics, resulting in a final set of 25 questions. In designing the survey, we tried our best to incorporate questions that were of interest to, and representative of, a wide range of perspectives within legal theory. 


Some areas are over- or under-represented in the survey. In part, this is because some areas of law did not lend themselves as well to questions with a sufficiently small list of most common answers. For example, we solicited feedback on several property law questions, but ultimately none was viewed favorably. The 25 questions include a relatively large number of criminal law questions, which tended to be more comprehensible to those who were not specialists in that field.


Despite our best efforts, we acknowledge that the survey is by no means perfect. The set of issues covered is undoubtedly biased against a certain set of perspectives and questions, particularly those which cannot be succinctly captured via brief labels (see the justification of the question and answer format below), and perhaps biased towards the individual interests of the survey designers. To the extent that the survey is in fact biased against certain views, we hope to address this in future iterations of the survey, and in the meantime we encourage other theorists (with much greater expertise in such areas) to reach out to us with advice and feedback.

Justification of question and answer format


With regard to the format of the questions, we relied heavily on similar work pioneered in philosophy. For Part II we follow Turri (2016), with a few deviations. Turri’s (2016) survey asked participants to rate their agreement, on a scale of 1 to 7, with the statement: “This area is central to the discipline of philosophy,” with respect to 10 different areas of philosophy.


In our own approach, we decided to break centrality into a normative and a descriptive component to avoid potential confusion among respondents. We also adopted an 11-point (0-10) scale rather than a 7-point (1-7) scale, both to potentially measure more subtle differences in mean ratings between areas and because 11-point scales are generally perceived by survey respondents as better allowing them to express their feelings adequately (Preston & Colman, 2000).


For Part III, we follow Bourget & Chalmers (2014), who pioneered a similar study in philosophy. However, we deviate from that model by asking participants to rate each answer choice individually, rather than asking them to give one answer choice per question. For example, whereas Bourget & Chalmers’ (2014) questions followed this format:


Normative ethics: Consequentialism or deontology? 

(1) Accept Consequentialism; 

(2) Lean Consequentialism; 

(3) Lean Deontology; 

(4) Accept Deontology; 

(5) Other (with various options, such as “no fact of the matter”)


Our questions instead follow this format:


Constitutional interpretation

Originalism: (1) Reject; (2) Lean against; (3) Lean towards; (4) Accept; (5) Other

Living Constitutionalism: (1) Reject; (2) Lean against; (3) Lean towards; (4) Accept; (5) Other
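The difference between the two response formats can be sketched as follows. All names and example values here are hypothetical encodings for illustration, not the survey's actual data model.

```python
# Bourget & Chalmers style: one answer choice per question.
bc_style = {"Normative ethics": "Lean Consequentialism"}

# This survey's style: every answer choice is rated individually, so a
# respondent can, e.g., accept one view while also leaning towards another.
scale = ("Reject", "Lean against", "Lean towards", "Accept", "Other")
our_style = {
    "Constitutional interpretation": {
        "Originalism": "Lean against",
        "Living Constitutionalism": "Accept",
    }
}

# Every rating must come from the fixed five-point response scale.
for ratings in our_style.values():
    assert all(r in scale for r in ratings.values())
```

Rating each choice separately avoids forcing the views into mutually exclusive options, since a respondent might accept both, reject both, or hold them to be compatible.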


In designing both the questions and answer choices, we tried to strike a balance between clarity and brevity; that is, we wanted to keep the answer choices as succinct as possible without sacrificing too much complexity or nuance. The obvious benefits of this approach are that (a) it is quicker to read and presumably to answer questions this way, hopefully easing participants’ burden in completing the survey, and (b) it is easier to report results (e.g. “X% of legal academics endorse textualism”).


One major potential downside of this approach is ambiguity: “natural law” may not mean the same thing to academic A as it does to academic B, even if both report accepting it on the survey. This is, of course, a greater concern for some questions than others; where we thought it was a particular concern, we added clarifying language to a question or removed a proposed question altogether, so that the remaining question list would be as free from ambiguity as possible. We also allow participants to respond to particular answer choices by choosing “other: question unclear,” further minimizing this risk, though it is obviously important to be cautious about the interpretability of the results.




David Bourget & David J. Chalmers. 2014. What do philosophers believe? Philosophical Studies, 170(3), 465-500.


Carolyn C. Preston & Andrew M. Colman. 2000. Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1-15.


John Turri. 2016. Perceptions of philosophical inquiry: a survey. Review of Philosophy and Psychology, 7(4), 805-816.
