Technolinguistics in Practice: Socially Situating Language in AI Systems
Looking forward to joining other social scientists and linguists at the University of Siegen in May to present:
“What Python Can’t Do: Language Ideologies in Programming Courses for Natural Language Processing”
[snip]:
“Many of the applications that are used in machine learning and natural language processing (NLP) are written in a computer language called Python. Python has become one of the fastest growing programming languages and knowledge of it can be considered a valuable form of social capital (Bourdieu 1977). The structure of Python, explicitly introduced as a language itself, reinforces a language ideology that sees language as a semantic, referential, and universal system (Jablonski n.d.; Lovrenčic et al. 2009; Danova 2017). This contrasts with the position taken by most linguistic anthropologists that sees language as primarily pragmatic, indexical, and highly localized in its ability to convey meaning (Silverstein 1976; 1979; Gal and Irvine 2019; Nakassis 2018; Eckert 2008).”
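To make the contrast concrete, here is a toy Python sketch (my own illustration, not part of the talk; the interpret_here function and its context dictionary are hypothetical). It shows the referential ideology built into the language itself: a Python name denotes exactly one value and evaluates the same way in any context, whereas an indexical word like “here” has no stable referent at all, so modeling it forces us to smuggle the speech situation in as data.

    # In Python, a name's "meaning" is fixed by assignment: the same
    # expression evaluates identically wherever it appears.
    greeting = "hello"
    assert greeting == "hello"  # true in any scope, any context

    # Human indexicals do not work this way: "here" shifts with each
    # speech event. A toy simulation must pass the context in explicitly.
    def interpret_here(speech_context: dict) -> str:
        """Toy model: an indexical's referent depends on the speech situation."""
        return speech_context["location"]

    print(interpret_here({"speaker": "Ana", "location": "Siegen"}))  # Siegen
    print(interpret_here({"speaker": "Ben", "location": "Berlin"}))  # Berlin

The point of the sketch is that the context-dependence linguistic anthropologists treat as primary has to be bolted on as an extra parameter; it is nowhere in the semantics of the language itself.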
Images created by ChatGPT and DALL-E
The Teenage Gaze
This is a quick PechaKucha I put together for the EASA’s Why the World Needs Anthropologists conference in Berlin in September.
I seek to answer the question ‘Why does every generation of adults think their teenagers are the worst?’ Spoiler alert: it’s a process of semiotic drift.
Why AI Will Never Fully Capture Human Language
My latest for Sapiens.org:
“…from the perspective of linguistic anthropology, novel-writing cars and chatbots designed for “natural language processing” simply do not command language at all. Instead, they perform a small subset of language competency—a fact that is often forgotten when the technology media focuses on sensational claims of AI sentience. Language, as it lives and breathes, is far more complicated.”
Read the full article here.
COVID shows why we need the social sciences now more than ever
Understanding why some people reject discovery and innovation is essential to us all. Pure emphasis on ‘STEM’ without wider cultural study leaves society prey to conspiracy theories.
Read the full article in the Ottawa Citizen here.
Photo by Michele Tantussi/Reuters
When we search for the humanity in LaMDA AI, we’re missing the point
In the Globe and Mail on June 25, 2022, some commentary on the Blake Lemoine/sentient AI episode:
[snip]
The whole exchange is essentially a version of the famous “imitation game” proposed in the 1950s by mathematician Alan Turing and designed to see whether a machine, writing responses to human questions, could “pass” as a human. But there has been a bit of a misunderstanding along the way about Turing’s intent: The test was not designed to show whether machines were capable of human-like thought.
In fact, Turing considered the question “can machines think?” to be “too meaningless to deserve discussion.” For him, a far more interesting question was whether a machine could use language to trick an interrogator into thinking it was human. The Turing test, then, is intended to be an inquiry into human suggestibility, rather than some barometer of machine intelligence.
[snip]
Click here for the full article.