24 Matching Annotations
  1. May 2025
    1. perceived by oneself “in here.” In this sense, the world consists of objects out there in space (the container that holds them) before me as the perceiving subject.

      for - adjacency - Indyweb dev - natural language - timebinding - parallel vs serial processing - comparison - spoken vs written language

      - What's also interesting is that spoken language is timebinding and sequential, and our written language descended from that.
      - In spite of written language existing in 2D and 3D space, it inherited sequential flow, even though it does not have to.
      - In this sense, the legacy spoken language system constrains written language to be serial, sequential, and timebound instead of parallel.
      - Read any written text and you will observe that the pattern is sequential.
      - We constrain our syntax to "flow" sequentially in 2D space, even though there is no other reason to constrain it to do so.
      - This also reveals another implicit rule about language: it assumes we can only focus our attention on one aspect of reality at a time.

  2. Mar 2025
    1. AI adoption is rapidly increasing across industries and use cases. For natural language technologies, the question is generally: is it better to use NLP approaches or to invest in LLM technologies? The NLP vs. LLM question matters for identifying which technology best fits your specific project requirements.

      Explore the key differences between NLP and LLM in this comparison. Learn how these technologies shape AI-driven applications, their core functionalities, and their impact on applications like chatbots, sentiment analysis, and content generation.

  3. Jul 2024
    1. Whoosh provides methods for computing the “key terms” of a set of documents. For these methods, “key terms” basically means terms that are frequent in the given documents, but relatively infrequent in the indexed collection as a whole.

      A very interesting method and way of looking at the signal: what makes a document exceptional is what is common within it and uncommon outside it.
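      The "frequent here, rare overall" idea can be sketched as a small TF-IDF-style score. This is a minimal illustration of the concept, not Whoosh's actual implementation; the function name, toy document, and document frequencies below are all made up:

      ```python
      from collections import Counter
      import math

      def key_terms(doc_tokens, corpus_doc_freq, n_docs, k=5):
          """Rank terms frequent in this document but rare in the
          collection overall (a TF-IDF-style heuristic; Whoosh's own
          scoring differs in detail)."""
          counts = Counter(doc_tokens)
          scores = {
              term: tf * math.log(n_docs / (1 + corpus_doc_freq.get(term, 0)))
              for term, tf in counts.items()
          }
          return sorted(scores, key=scores.get, reverse=True)[:k]

      # Toy collection: "genizah" is frequent in this document but
      # appears in only 2 of 100 indexed documents, so it ranks first.
      doc = "genizah fragment genizah corpus text text".split()
      df = {"text": 90, "corpus": 40, "fragment": 20, "genizah": 2}
      print(key_terms(doc, df, n_docs=100, k=2))  # → ['genizah', 'fragment']
      ```

      The inverse-document-frequency factor is what downweights terms like "text" that are common everywhere, leaving the terms that make this document distinctive.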

  4. Feb 2024
  5. Jan 2023
    1. a common technique in natural language processing is to operationalize certain semantic concepts (e.g., "synonym") in terms of syntactic structure (two words that tend to occur nearby in a sentence are more likely to be synonyms, etc). This is what word2vec does.

      Can I use some of these sorts of methods with respect to corpus linguistics over time to better identify calcified words or archaic phrases that stick with the language but are heavily limited to narrower(ing) contexts?
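      A count-based analogue of this idea can be sketched in a few lines: represent each word by the words that occur near it, and compare those vectors with cosine similarity. (word2vec instead learns dense vectors with a neural training objective; the toy sentences here are invented for illustration.)

      ```python
      from collections import Counter, defaultdict
      import math

      def cooccurrence_vectors(sentences, window=2):
          """Distributional vectors: each word is represented by counts
          of the words occurring within `window` positions of it."""
          vecs = defaultdict(Counter)
          for sent in sentences:
              for i, w in enumerate(sent):
                  for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                      if i != j:
                          vecs[w][sent[j]] += 1
          return vecs

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          na = math.sqrt(sum(v * v for v in a.values()))
          nb = math.sqrt(sum(v * v for v in b.values()))
          return dot / (na * nb) if na and nb else 0.0

      sents = [
          "the quick fox runs fast".split(),
          "the quick dog runs fast".split(),
          "the old archive holds manuscripts".split(),
      ]
      v = cooccurrence_vectors(sents)
      # "fox" and "dog" share contexts, so they score as near-synonyms here.
      print(cosine(v["fox"], v["dog"]), ">", cosine(v["fox"], v["archive"]))
      ```

      For the diachronic question above, one could in principle build such vectors per time slice of a corpus and watch a word's neighborhood shrink over time, though real studies would need much larger corpora and dense embeddings.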

    2. Friedberg Judeo-Arabic Project, accessible at http://fjms.genizah.org. This project maintains a digital corpus of Judeo-Arabic texts that can be searched and analyzed.

      The Friedberg Judeo-Arabic Project contains a large corpus of Judeo-Arabic text which can be manually searched to help improve translations of texts, but it might also be profitably mined using information theoretic and corpus linguistic methods to provide larger group textual translations and suggestions at a grander scale.

  6. Dec 2022
    1. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

  7. Nov 2022
    1. Robert Amsler is a retired computational lexicologist, computational linguist, and information scientist. His Ph.D. was from UT-Austin in 1980. His primary work was in understanding how machine-readable dictionaries could be used to create a taxonomy of dictionary word senses (which served as the motivation for the creation of WordNet) and how lexicons can be extracted from text corpora. He also invented a new technique in citation analysis that bears his name. His work is mentioned in the Wikipedia articles on Machine-readable dictionary, Computational lexicology, Bibliographic coupling, and Text mining. He currently lives in Vienna, VA and reads email at robert.amsler at utexas.edu. He is currently interested in chronological studies of vocabulary, especially computer terms.

      https://www.researchgate.net/profile/Robert-Amsler

      Apparently he follows my blog. :)

      Makes me wonder how we might better process and semantically parse people's personal notes, particularly when they're atomic and cross-linked?

  8. Oct 2022
    1. https://www.explainpaper.com/

      Another in a growing line of research tools for processing and making sense of research literature including Research Rabbit, Connected Papers, Semantic Scholar, etc.

      Functionality includes the ability to highlight sections of research papers and use natural language processing to explain what those sections mean. There's also a "chat" that allows you to ask questions about the paper and attempts to return reasonable answers: an artificial-intelligence means of having a "conversation with the text".

      cc: @dwhly @remikalir @jeremydean

  9. Aug 2022
  10. Dec 2021
    1. Catala, a programming language developed by Protzenko's graduate student Denis Merigoux, who is working at the National Institute for Research in Digital Science and Technology (INRIA) in Paris, France. It is not often lawyers and programmers find themselves working together, but Catala was designed to capture and execute legal algorithms and to be understood by lawyers and programmers alike in a language "that lets you follow the very specific legal train of thought," Protzenko says.

      A domain-specific language for encoding legal interpretations.

  11. Nov 2021
  12. Jun 2021
  13. Mar 2021
  14. Aug 2020
  15. Jul 2020
  16. May 2020
  17. Apr 2020