Project talks are open to academics (faculty, students and other researchers) interested in law and language from within and outside Western. All talks are held at the Faculty of Law, unless otherwise indicated.
Talks
Robert Mullins (Queensland), “Are Sex and Gender Terms Open Textured?”
Tuesday, February 10, 2026 at 12:30–1:50 pm
In-person
Abstract
The purpose of this paper is to offer an account of sex and gender terms as “open textured” predicates. The idea of open texture is familiar to lawyers from H.L.A. Hart’s (1958, 1963) canonical discussions, but it originates in the work of Friedrich Waismann (1945), who (roughly) defined open-textured predicates in terms of their openness to unexpected applications in future cases. My account deploys Sam Cumming’s (2022) and John Horty’s (2024) analyses of open-textured predicates in terms of “semantic reasons”. According to the semantic reasons account, analysis of open-textured predicates involves identifying a set of defeasible rules governing their application, rather than the identification of logically necessary or sufficient conditions. I argue for an analysis of sex and gender terms (like “man”, “male”, “woman”, and “female”) in terms of defeasible, rather than strict and exceptionless, rules. I motivate this form of analysis by examining the failure of attempts to identify either necessary or sufficient conditions for the application of these predicates in legal contexts. I suggest that this form of analysis has important implications for the legal and philosophical understanding of sex and gender. Legally, it suggests that courts, in particular, should abandon interpretive approaches to sex and gender terms that are focussed on the identification of either necessary or sufficient characteristics for their application. Judgments in favour of restricting or extending the application of these predicates should focus on identifying legally relevant reasons for their restriction or extension. Philosophically, the analysis of sex and gender terms as open-textured terms provides an interesting alternative to more canonical contextualist and expressivist accounts. More broadly, I conclude by suggesting that the notion of open texture has an important role to play in debates concerning what David Plunkett and Alexis Burgess (2013) call “conceptual ethics”, in which normative and evaluative considerations seem to have bearing on our semantic choices.
Luciana Moro (Deakin), “Principles of Statutory Interpretation and Parliament’s Intentions: Re-Examining An Overlooked Relationship”
Thursday, November 13, 2025 at 12:30–1:50 pm
In-person and Zoom
Abstract
This paper analyses an often-neglected aspect of the formation of legislative intentions—namely, the role that principles of statutory interpretation play in that process. These principles are usually seen primarily or exclusively as premises of the exercise that judges carry out when they interpret statutes. For example, sceptics about legislative intent see the principles as something that judges rely on to create ‘legislative intentions’ that they then attribute to Parliament. In turn, defenders of the reality of legislative intentions tend to argue that such intentions exist independently of the process of judicial interpretation and the principles that courts apply, and they see the principles as mere evidence of such intentions.
In contrast, this paper examines the principles as premises of the legislative exercise. It shows how legislative drafting and deliberation are informed and shaped by the principles that courts apply when they interpret statutes. It argues that the principles serve as tools for a plurality of legislators to be able to ‘speak as one’. The principles enable this by providing a clear, shared stipulation of what—of all that the members of the group have done, written, said, and read—determines the content of the group’s ‘message’.
The paper argues that this role that the principles play in forming a unified legislative intention whose content is accessible to all legislators (and their advisers and drafters) should feature in any sound account of legislative intent. It argues, further, that this should also be borne in mind by courts when deciding how to develop, revise, or update the principles to make sure they are fit for their role.
Ross Pey (Western), “Issues with Bilingual Legislation and Potential Solutions for Bilingual Welsh Legislation”
Tuesday, October 21, 2025 at 12:30–1:50 pm
In-person and Zoom
Abstract
The UK has recently joined the club of bilingual jurisdictions, as Welsh legislation is equally authentic in Welsh and English. However, unlike many other bilingual Commonwealth jurisdictions, the UK does not have the benefit of local jurisprudence that tackles the issues arising from bilingual legislation. In fact, the issue of bilingual interpretation was only recently raised in a Welsh case and has not been seriously interrogated in the literature. In this talk, I explore and categorise some of the issues raised by bilingual legislation, drawing on a survey of Commonwealth cases and legislative practices. I then identify the issues that may be relevant to Welsh statutes. I also observe that the most problematic issue that could arise is a conflict between the Welsh and English language texts. In this light, I suggest an approach to resolving such conflicts that is grounded in the UK’s purposive method of interpretation.
Cameron Domenico Kirk-Giannini (Rutgers), “Commissioned Voices: Rethinking Authorship in Algorithmic Speech”
Thursday, March 20, 2025 at 12:30–1:50 pm
In-person and Zoom
Abstract
Most legal theorists working on algorithmic speech have held that algorithmically generated text is the speech of the designer of the system that produced it and should therefore be protected. This position has had the unfortunate consequence of limiting regulatory efforts that target social media bot networks designed to influence elections by distorting democratic deliberation. Recently, Peter Salib (2024) has offered an argument against the claim that algorithmically generated text is the speech of the designer of the system that produced it. I scrutinize Salib’s argument and conclude that it is not ultimately convincing. I then offer a framework for thinking about algorithmically generated text that paves the way for a more convincing argument. Central to this framework is the distinction between authoring and merely commissioning a work.
Kevin Tobia (Georgetown), “Reading Law with Linguistics: The Statutory Interpretation of Artifact Nouns”
Thursday, February 13, 2025 at 12:30–1:50 pm
In-person and Zoom
Abstract
Is an airplane a “vehicle”? Is a floating home a “vessel”? Is an unassembled gun a “firearm”? Such questions about “artifact nouns”—nouns that describe human-created entities—are fodder for legal philosophy. They are also common statutory interpretation issues, which today’s textualist courts resolve with linguistic analysis. We propose that textualist courts complement familiar tools, like dictionaries, with insights from linguistics.
We examine as a case study Garland v. VanDerStok, which the Supreme Court will soon decide. It concerns “gun parts kits,” firearm parts that can become operable firearms through combination or part finishing. These kits have been used in several mass shootings, and the case concerns whether such a kit is a “firearm” subject to regulation under the 1968 Gun Control Act. To analyze the statute’s meaning, we apply insights from linguistic theory, new data from language usage, and a survey study of ordinary Americans. This evidence supports that the gun parts kits identified by the government fit within the statutory meaning of “firearm.”
The article’s case study in the legal interpretation of artifact nouns also carries broader implications. We develop lessons for the practice of legal interpretation, statutory interpretation theory, and broader debates in legal philosophy.
Jacques Lamarche (Western), “A Grammar that is Logic and Formal, but is not Formal Semantics nor Generative Grammar”
Wednesday, January 29, 2025 at 4:00–5:30 pm
In-person and Zoom
Abstract
This presentation argues that the function of grammar in the Logic of Labeling (henceforth LL) of Lamarche (2023, 2024) is much simplified in comparison to its traditional function in formal theories of language such as Formal Semantics (henceforth FS) and Generative Grammar (henceforth GG). Not only is this simplification desirable for the design of grammatical theory, but the proposed approach also implies general principles (a ‘code of conduct’ of sorts) for the effective use of linguistic form in context.
The traditional function of grammar assumed in FS and GG can be coined the ‘conveying meaning/thought’ function. Syntax under this view generates (a potentially infinite number of) meaningful complex expressions out of (a finite number of) simpler meaningful expressions. In contrast, the function of grammar in LL is to label external realities: grammar provides the formal means (an audio signal) to identify meaningful realities that are assumed to exist independently outside of grammar. Its syntax thus generates (a potentially infinite number of) constituent labels out of (a finite number of) simpler labels. This difference in function implies that the symbolic apparatus needed to express interpretation (semantic/logical type, predicate-argument structure, and so on) and the distribution of form (grammatical features used to categorize words into V, N, etc.) is ‘declared’ at different levels of analysis in the two approaches. In the traditional conveying-meaning function, the relevant symbolic apparatus is generally declared before the input of syntax (in the lexicon of the grammar for any lexically based model, categorial grammar, and so on) for the benefit of syntactic/compositional rules or, in certain cases, directly in the syntactic rules. With the labeling approach, the symbolic apparatus that distinguishes grammatical values and logical interpretation is only declared at the output of syntax, after input labels have been turned into constituent labels. The assumption is that the lexical semantic distinctions of FS and GG must be outside of grammatical knowledge altogether because their relation to linguistic form is arbitrary. As Saussure (1916) makes clear, because of arbitrariness, semantic distinctions can only be associated with lexical form by relying on social conventions. If linguistic competence, as claimed by Chomsky (1965, 1986, and elsewhere), is a property of individual human beings and not groups, then information that relates to form by conventions established in the community cannot be part of this competence. The assumption is that only the symbolic apparatus that pertains to the phonological description is declared before syntactic rules, in the lexicon of LL. The primitives of linguistic competence are thus labels – sequences of phonemes distinguished by their formal identity – that are recognized as units of the language in use, when they apply to realities in the world under the conventions of a community of speakers.
Illustrating with a few basic constructions of English, the paper shows how the distinctions relevant for distribution and logical interpretation that are declared at the output of syntax are based on endocentric alignment principles governing input forms within constituent labels. I argue that the narrower interpretation of the function of grammar as a labeling system provides a model of linguistic competence that is more plausible than the traditional accounts where grammar’s function consists of conveying meaning. And while the LL is first and foremost a hypothesis about individual linguistic competence, the general conception nevertheless has significant implications for language use in social contexts: for actual words to be usable in context, individual speakers must adhere to a ‘code of conduct’ that ensures that the restrictions imposed by their individual grammars on linguistic form are in sync with the lexical conventions of the language established by the group. Any deviance from this code of conduct introduces uncertainties into the language, which can only be detrimental to the community of speakers and to the efficiency of the labeling system as part of the communication system of language.
Elizabeth Allyn Smith (UQAM), “From Forensic Linguistics to AI: the Consequences of Different Understandings of ‘Ground Truth’ for the Courts”
Tuesday, November 12, 2024 at 12:30–1:50 pm
In-person and Zoom
Abstract
‘Ground truth’ refers to a fundamental truth, or, for data, to “the real or underlying facts; information that has been checked or facts that have been collected at source” (OED). I present the evolution of this concept and its more specific definition in several domains, comparing, in particular, the requirements of a forensic linguist (or other forensic scientist) with those of someone working in (so-called) artificial intelligence. I will draw from my own collaborative work in computational linguistics as well as other studies to illustrate some of the pitfalls of the diverging uses of this term when viewed through an evidentiary lens. After discussing challenges that are likely to become more frequent in our judge-as-gatekeeper legal systems, I conclude with a soft-law proposal for expert witness reports to include data statements.
Legal Philosophy Research Group talks
Western Law’s Legal Philosophy Research Group also hosts some talks on law and language, which are cross-listed here. These talks are open to academics at Western, and they are in-person.
Amin Ebrahimi Afrouzi (UCLA), “Semantic Canons: their Contributions to Meaning and Interpretation”
Tuesday, March 18, 2025 at 12:30–1:50 pm
Martin David Kelly (Edinburgh), “From Instruction to Action: Rethinking Instruction-Governed Decision-Making”
Tuesday, February 4, 2025 at 12:30–1:50 pm
Law and Economics Research Group talks
Western Law’s Law and Economics Research Group also hosts some talks on law and language, which are cross-listed here. These talks are open to academics at Western, and they are in-person.
Simone Sepe (Toronto), “The Logic of Legal Formalism”
Tuesday, March 11, 2025 at 12:30–2:00 pm
