- Sep 2024
-
github.com
-
The field I know as "natural language processing" is hard to find these days. It's all being devoured by generative AI. Other techniques still exist but generative AI sucks up all the air in the room and gets all the money. It's rare to see NLP research that doesn't have a dependency on closed data controlled by OpenAI and Google
Robyn Speer says that, in his view, natural language processing as a field has been taken over by #algogens, and most NLP research now depends on closed data from the #algogens providers.
-
- Apr 2024
-
www.newyorker.com
-
Michael Macdonald amassed a vast collection of photographs of these texts and launched a digital Safaitic database, with the help of Laïla Nehmé, a French archeologist and one of the world’s leading experts on early Arabic inscriptions. “When we started working, Michael’s corpus was all on index cards,” Nehmé recalled. “With the database, you could search for sequences of words across the whole collection, and you could study them statistically. It worked beautifully.”
Researcher Michael Macdonald created a card index of Safaitic inscriptions, which he and French archaeologist Laïla Nehmé eventually turned into a digital database that included a collection of photographs of the extant texts.
-
- Feb 2023
-
wordcraft-writers-workshop.appspot.com
-
The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.
Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward, information-theoretic prediction engine?
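The "predict the most likely next word" core is, at its smallest, just frequency counting. A minimal sketch of that information-theoretic baseline (all function names here are hypothetical, for illustration only; LaMDA itself uses a neural network over subword tokens, not raw bigram counts):

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train_bigram_model("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" (follows "the" twice vs. "mat" once)
```

The open question is whether scaling this same objective up by many orders of magnitude produces genuinely "higher-level concepts" or just a much better version of the same table.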
-
-
www.complexityexplorer.org
-
Rhetoric of encomium
How do institutions form around notions of merit?
Me: what about blurbs as evidence of implied social networks? Who blurbs whom? How are these invitations sent/received and by whom?
diachronic: how blurbs evolve over time
Signals, can blurbs predict: - the field of the work - gender - other
Emergence or decrease of signals with respect to time
Imitation of styles and choices. - how does this happen? contagion - I'm reminded of George Matthew Dutcher's admonition:
Imitation to be avoided. Avoid the mannerisms and personal peculiarities of method or style of well-known writers, such as Carlyle or Macaulay. (see: https://hypothes.is/a/ROR3VCDEEe2sZNOy4rwRgQ )
Systematic studies of related words within corpora. (this idea should have a clever name) word2vec, word correlations, information theory
How does praise work?
metaphors within blurbs (eg: light, scintillating, brilliant, new lens, etc.)
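The "systematic studies of related words within corpora" idea above could start with something far simpler than word2vec: co-occurrence statistics. A minimal pointwise mutual information (PMI) sketch over blurb-like sentences (function name and sample data are hypothetical, for illustration):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_pairs(sentences):
    """Pointwise mutual information for word pairs co-occurring in a sentence.

    PMI(a, b) = log2( P(a, b) / (P(a) * P(b)) ); positive values mean the
    pair co-occurs more often than chance, hinting at related vocabulary.
    """
    word_counts = Counter()
    pair_counts = Counter()
    n = len(sentences)
    for sentence in sentences:
        words = set(sentence.lower().split())
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    scores = {}
    for pair, joint in pair_counts.items():
        a, b = tuple(pair)
        scores[pair] = math.log2((joint / n) / ((word_counts[a] / n) * (word_counts[b] / n)))
    return scores

blurbs = [
    "a brilliant scintillating debut",
    "a brilliant scintillating study",
    "a dull book",
    "a dull study",
]
scores = pmi_pairs(blurbs)
# "brilliant" and "scintillating" always appear together, so their PMI is high
```

Run over a real corpus of blurbs, the high-PMI pairs would surface exactly the clusters of praise vocabulary (and metaphors) the questions above are after.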
-
- Jan 2023
-
github.com
-
https://github.com/CopticScriptorium
Tools and technologies for digital and computational research into Coptic language and literature
-
-
genizalab.princeton.edu
-
Local file
-
Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.
The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.
-
More recent additions to the website include a “jigsaw puzzle” screen that lets users view several items while playing with them to check whether they are “joins.” Another useful feature permits the user to split the screen into several panels and, thus, examine several items simultaneously (useful, e.g., when comparing handwriting in several documents). Finally, the “join suggestions” screen provides the results of a technologically groundbreaking computerized analysis of paleographic and codicological features that suggests possible joins or items written by the same scribe or belonging to the same codex.
Computational methods can potentially be used to check or suggest potential "joins" of fragments of historical documents.
An example of some of this work can be seen in the Friedberg Genizah Project and their digital tools.
Tags
- digital humanities
- joins
- textual scholarship
- Friedberg Genizah Project
- information theory
- Friedberg Judeo-Arabic Project
- jigsaw puzzles
- contextual clues
- epigraphy
- Friedberg Jewish Manuscript Society
- graphology
- natural language processing
- artificial intelligence
- contextual extrapolation
- Cairo Geniza
- fragments
- codicology
- corpus linguistics
-
- Nov 2022
-
www.researchgate.net
-
Robert Amsler is a retired computational lexicologist, computational linguist, and information scientist. His Ph.D. was from UT-Austin in 1980. His primary work was in understanding how machine-readable dictionaries could be used to create a taxonomy of dictionary word senses (which served as the motivation for the creation of WordNet) and how a lexicon can be extracted from text corpora. He also invented a new technique in citation analysis that bears his name. His work is mentioned in the Wikipedia articles on Machine-readable dictionary, Computational lexicology, Bibliographic coupling, and Text mining. He currently lives in Vienna, VA and reads email at robert.amsler at utexas.edu. He is currently interested in chronological studies of vocabulary, especially computer terms.
https://www.researchgate.net/profile/Robert-Amsler
Apparently follow my blog. :)
Makes me wonder how we might better process and semantically parse people's personal notes, particularly when they're atomic and cross-linked?
-
- Aug 2022
-
-
Mechanical form. Use standard size (8½ x 11 in.) typewriter paper or the essay paper in standard use at the institution. For typing, use an unruled bond paper of good quality, such as “Paragon Linen” or “Old Hampshire Mills.” At the left of the page leave a margin of 1¼ to 1½ inches; and at the top, bottom, and right of the page, a margin of 1 inch. Write only on one side of the paper. In typing, the lines should be double-spaced. Each chapter should begin on a new page. Theses for honors and degrees must be typed; other essays may be typed or legibly written in ink. Whether the essay is typed or written, the use of black ink is preferable. The original typewritten copy must be presented. In case two copies of a thesis are required, the second copy must be the first carbon and must be on the same quality of paper as the original.
Definitely a paragraph aimed at the student in the manner of a syllabus, but also an interesting tidbit on the potential evolution of writing forms over time.
How does language change over time with respect to the types and styles of writing forms, particularly when they're prescribed or generally standardized? How do these same standards evolve over time and change things at the level of the larger picture?
-
-
Local file
-
I recall being told by a distinguished anthropological linguist, in 1953, that he had no intention of working through a vast collection of materials that he had assembled because within a few years it would surely be possible to program a computer to construct a grammar from a large corpus of data by the use of techniques that were already fairly well formalized.
rose-colored glasses...
-
- Apr 2022
-
-
Yeshiva teaching in the modern period famously relied on memorization of the most important texts, but a few medieval Hebrew manuscripts from the twelfth or thirteenth centuries include examples of alphabetical lists of words with the biblical phrases in which they occurred, but without precise locations in the Bible—presumably because the learned would know them.
Prior to concordances of the Christian Bible there are examples of Hebrew manuscripts in the twelfth and thirteenth centuries that have lists of words and sentences or phrases in which they occurred. They didn't include exact locations with the presumption being that most scholars would know the texts well enough to quickly find them based on the phrases used.
Concordances were later made unnecessary as tools once digital search could dramatically decrease the workload. However, these digital tools might miss the value found in the serendipity of searching through broad word lists.
Has anyone made a concordance search and display tool to automatically generate concordances of any particular texts? Do professional indexers use these? What might be the implications of overlapping concordances of seminal texts within the corpus linguistics space?
Fun tools like the Bible Munger now exist to play around with find and replace functionality. https://biblemunger.micahrl.com/munge
Online tools also have multi-translation versions that will show translational differences between the seemingly ever-growing number of English translations of the Bible.
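On the question of automatically generating concordances: tools for this do exist (NLTK, for example, ships a concordance view on its `Text` objects), and the core keyword-in-context (KWIC) display is small enough to sketch directly. A minimal version, with the function name and formatting choices being my own illustration rather than any particular tool's API:

```python
def concordance(text, keyword, width=30):
    """Keyword-in-context (KWIC) lines for every occurrence of keyword.

    Each line shows up to `width` characters of context on either side,
    with the keyword aligned in a center column.
    """
    words = text.split()
    lines = []
    for i, word in enumerate(words):
        # Strip trailing punctuation so "earth." still matches "earth".
        if word.lower().strip('.,;:!?"') == keyword.lower():
            left = " ".join(words[:i])[-width:]
            right = " ".join(words[i + 1:])[:width]
            lines.append(f"{left:>{width}} {word} {right}")
    return lines

for line in concordance("In the beginning God created the heaven and the earth.", "the"):
    print(line)
```

Run across a whole corpus instead of one verse, the aligned output is exactly the browsable word list whose serendipity the note above worries digital search loses.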
-
- Feb 2022
-
www.robinsloan.com
-
Together: responsive, inline “autocomplete” powered by an RNN trained on a corpus of old sci-fi stories.
I can't help but think, what if one used their own collected corpus of ideas based on their ever-growing commonplace book to create a text generator? Then by taking notes, highlighting other work, and doing your own work, you're creating a corpus of material that's eminently interesting to you. This also means that by subsuming text over time in making your own notes, the artificial intelligence will more likely also be using your own prior thought patterns to make something that, from an information-theoretic standpoint, looks and sounds more like you. It would have your "hand," so to speak.
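The cheapest version of this personal text generator isn't even an RNN: a Markov chain over one's own notes already echoes the source's "hand." A minimal sketch (function names and the toy corpus are hypothetical, for illustration):

```python
import random
from collections import defaultdict

def build_chain(notes, order=2):
    """Map each sequence of `order` words to the words observed to follow it."""
    chain = defaultdict(list)
    words = " ".join(notes).split()
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain; every output word comes from the source corpus."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    output = list(key)
    for _ in range(length):
        options = chain.get(tuple(output[-len(key):]))
        if not options:  # dead end: no observed continuation
            break
        output.append(rng.choice(options))
    return " ".join(output)

chain = build_chain(["the quick brown fox jumps over the lazy dog"])
print(generate(chain, length=5, seed=42))
```

Feeding it a commonplace book instead of a single sentence is exactly the point: the generator can only recombine phrasings its author already collected or wrote.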
-
- Jan 2022
-
vimeo.com
-
from: Eyeo Conference 2017
Description
Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, write at least one short story together with the audience: all of us, and the machine.
Notes
Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.
Some of the idea here is reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22)
Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary
Croatian acapella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4
Writing using the adjacent possible.
Corpus building as an art [~37:00]
Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.
Open questions
How might we use information theory to do this more easily?
What does a person or machine's "hand" look like in the long term with these tools?
Can we use corpus linguistics in reverse for this?
What sources would you use to train your model?
References:
- Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
- Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. "Generating sentences from a continuous space." 2015. arXiv: 1511.06349
- Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text generation." arXiv:1702.02390
- Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 applies neural networks to sound and sound production
-
- Jun 2021
-
www.theatlantic.com
-
The viciousness of church politics can rival pretty much any other politics you can name; the difference is that the viciousness within churches is often cloaked in lofty spiritual language and euphemisms.
It would be interesting to examine some of this language and these euphemisms to uncover the change over time.
-
- Feb 2021
-
psyarxiv.com
-
Sanders, J., Tosi, A., Obradović, S., Miligi, I., & Delaney, L. (2021). Lessons from lockdown: Media discourse on the role of behavioural science in the UK COVID-19 response. PsyArXiv. https://doi.org/10.31234/osf.io/dw85a
-
-
www.nybooks.com
-
Only fifteen of the thirty-seven commonplace books were written in his hand. He might have dictated the others to a secretary, but the nature of his authorship, if it existed, remains a matter of conjecture. A great deal of guesswork also must go into the interpretation of the entries in his own hand, because none of them are dated. Unlike the notes of Harvey, they consist of endless excerpts, which cannot be connected with anything that was happening in the world of politics.
I find myself wondering what this study of his commonplace books would look like if it were digitized and cross-linked? Sadly the lack of dates on the posts would prevent some knowledge from being captured, but what would the broader corpus look like?
Consider the broader digital humanities perspective of this. Something akin to corpus linguistics, but at the level of view of what a single person reads, thinks, and reacts to over the course of their own lifetime.
How much of a person could be recreated from such a collection?
-
-
voyant-tools.org
-
Looks like some serious power hiding in here.
-
- Oct 2020
-
www.nytimes.com
-
To have, but maybe not to read. Like Stephen Hawking’s “A Brief History of Time,” “Capital in the Twenty-First Century” seems to have been an “event” book that many buyers didn’t stick with; an analysis of Kindle highlights suggested that the typical reader got through only around 26 of its 700 pages. Still, Piketty was undaunted.
Interesting use of digital highlights--determining how "read" a particular book is.
-
-
adanewmedia.org
- Nov 2019
-
buttondown.email
-
From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.
And it's not just happening with text; it also happens with speech, as I've written before in Complexity isn't a Vice: 10 Word Answers and Doubletalk in Election 2016. In fact, in that case, looking at transcripts actually helps to reveal that the emperor had no clothes, because there's so much missing from the speech that the text doesn't have enough space to fill in the gaps the way the live speech did.
-
The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.
Circle back around and read this when it comes out.
Similarly, these other references should be an interesting read as well.
-
For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.
This isn't a very difficult problem and the underpinnings of it are well laid out by John R. Pierce in An Introduction to Information Theory: Symbols, Signals and Noise. In it he has a lot of interesting tidbits about language and structure from an engineering perspective including the reason why crossword puzzles work.
close reading, distant reading, corpus linguistics
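Pierce's argument about predictable word sequences rests on measurable redundancy: English carries far less information per symbol than its alphabet could, which is also why crossword puzzles work. The zeroth-order version of that measurement, Shannon entropy over symbol frequencies, is a few lines (the function name is my own, for illustration; Pierce's estimates account for longer-range structure and come out lower still):

```python
import math
from collections import Counter

def entropy_per_symbol(text):
    """Zeroth-order Shannon entropy in bits per symbol, from symbol frequencies.

    H = -sum(p * log2(p)); a 26-letter alphabet used uniformly would give
    log2(26) ≈ 4.7 bits, but real English letter frequencies give less.
    """
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_per_symbol("abcd" * 10))  # prints 2.0: four equally likely symbols
```

The gap between this number and the alphabet's maximum is the redundancy that lets a reader (or GPT-2) fill in missing words, and lets crossword grids constrain each other.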
-
- Sep 2019
-
www.theguardian.com
-
He is now intending to collaborate with Bourne on a series of articles about the find. “Having these annotations might allow us to identify further books that have been annotated by Milton,” he said. “This is evidence of how digital technology and the opening up of libraries [could] transform our knowledge of this period.”
-
- Apr 2019
-
tressiemc.com
-
Digital sociology needs more big theory as well as testable theory.
I can't help but think here about the application of digital technology to large bodies of literature in the creation of the field of corpus linguistics.
If traditional sociology means anything, then a digital incarnation of it should create physical and trackable means that can potentially be more easily studied as a result. Just the same way that Mark Dredze has been able to look at Twitter data to analyze public health data like influenza, we should be able to more easily quantify sociological phenomena in aggregate by looking at larger and richer data sets of online interactions.
There's also likely some value in studying the quantities of digital exhaust that companies like Google, Amazon, Facebook, etc. are using for surveillance capitalism.
-