17 Matching Annotations
  1. Last 7 days
    1. https://web.archive.org/web/20240929075044/https://pivot-to-ai.com/2024/09/28/routledge-nags-academics-to-finish-books-asap-to-feed-microsofts-ai/

      Academic publishers are pushing authors to speed up delivering manuscripts and articles (including suggesting peer review be done in 15 days) to meet the quotas they promised the AI companies they sold their souls to. Taylor & Francis/Routledge: 75M USD/yr; Wiley: 44M USD. No opt-outs etc. What if you ask those #algogens if this is a good idea?

  2. Sep 2024
    1. I don't think anyone has reliable information about post-2021 language usage by humans. The open Web (via OSCAR) was one of wordfreq's data sources. Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies. Sure, there was spam in the wordfreq data sources, but it was manageable and often identifiable. Large language models generate text that masquerades as real language with intention behind it, even though there is none, and their output crops up everywhere.

      Robyn Speer will no longer update Wordfreq. n:: there is no reliable post-2021 language usage data! Wordfreq was using open web sources, but those are getting polluted by #algogens output
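
      For context: wordfreq is Robyn Speer's Python library for looking up word frequencies aggregated from many sources, including the open web via OSCAR. A minimal usage sketch, assuming `pip install wordfreq`:

      ```python
      # wordfreq lookups draw on data collected up to 2021; per the quote above,
      # there is no plan to refresh it with post-2021 web text.
      from wordfreq import word_frequency, zipf_frequency

      print(word_frequency("the", "en"))  # frequency as a fraction of all English words
      print(zipf_frequency("the", "en"))  # same, on the log-scale Zipf scale (roughly 0-8)
      ```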

  3. Jul 2024
    1. https://web.archive.org/web/20240712174702/https://www.hyperorg.com/blogger/2024/07/11/limiting-ais-imagination/ When I played with the temperature 18 months ago (I don't remember how or in what, but it was an actual setting in the model, probably something from Hugging Face), what stood out for me was that at 0 it was immediately obvious the output was automated, and it yielded the same answer to the same prompt repeatedly, as it stuck to the likeliest outcome for each next token. At higher temperatures it would get wilder, and it struck me as easier to project a human having written it. Since then I almost regard the temperature setting as the fakery/projection-likelihood dial. Although it doesn't take much to trigger projection, as per Eliza. n:: the temperature setting in models is what makes projection possible
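
      To make that dial concrete, a toy sketch (my own illustration, not from the linked post) of what temperature does: it divides the model's logits before sampling, so 0 collapses to the single likeliest token and higher values flatten the distribution:

      ```python
      import numpy as np

      def sample_token(logits, temperature, rng=np.random.default_rng()):
          logits = np.asarray(logits, dtype=float)
          if temperature == 0:
              # Greedy decoding: always the likeliest token, so the same
              # prompt yields the same answer every time.
              return int(np.argmax(logits))
          scaled = logits / temperature
          probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
          probs /= probs.sum()
          # Higher temperature -> flatter probs -> wilder, more varied output.
          return int(rng.choice(len(probs), p=probs))

      logits = [2.0, 1.0, 0.5]
      print([sample_token(logits, 0) for _ in range(5)])    # always the same token
      print([sample_token(logits, 1.5) for _ in range(5)])  # varies call to call
      ```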

  4. Jun 2024
  5. May 2024
    1. And on this side, you see we have this new chat box where the user can engage with the content. In this very first action, the user doesn't have to do anything. They land on the page, and as long as they ran a search, we immediately process a prompt that asks, in effect: why is this document relevant to the query you put in?

      Initial LLM chat prompt: why did this document come up

      Using the patron's keyword search phrase, the first chat shown is the LLM analyzing why this document matched the patron's criteria. Then there are preset prompts for summarizing what the text is about, recommended topics to search, and a prompt to "talk to the document".
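
      A hypothetical reconstruction of that first prompt (the wording is my assumption; the actual JSTOR prompt isn't given in the talk):

      ```python
      def initial_chat_prompt(search_phrase: str, document_text: str) -> str:
          """Build the automatic first prompt: explain why this document matched."""
          return (
              f"A reader searched for: {search_phrase!r}.\n"
              "Briefly explain why the following document is relevant to that search.\n\n"
              + document_text[:4000]  # truncated to fit the model's context window
          )

      # Preset follow-ups described in the talk:
      PRESET_PROMPTS = [
          "Summarize what this text is about.",
          "Recommend related topics to search.",
          # ...plus free-form "talk to the document" questions
      ]
      ```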

    2. Navigating Generative Artificial Intelligence: Early Findings and Implications for Research, Teaching, and Learning

      Spring 2024 Member Meeting: CNI website, YouTube

      Beth LaPensee, Senior Product Manager, ITHAKA

      Kevin Guthrie, President, ITHAKA

      Starting in mid-2023, ITHAKA began investing in and engaging directly with generative artificial intelligence (AI) in two broad areas: a generative AI research tool on the JSTOR platform and a collaborative research project led by Ithaka S+R. These technologies are so crucial to our futures that working directly with them to learn about their impact, both positive and negative, is extremely important.

      This presentation will share early findings that illustrate the impact and potential of generative AI-powered research based on what JSTOR users are expecting from the tool, how their behavior is changing, and implications for changes in the nature of their work. The findings will be contextualized with the cross-institutional learning and landscape-level research being conducted by Ithaka S+R. By pairing data on user behavior with insights from faculty and campus leaders, the session will share early signals about how this technology-enabled evolution is beginning to take shape.

      https://www.jstor.org/generative-ai-faq

    3. The ARL/CNI 2035 Scenarios: AI-Influenced Futures in the Research Environment. Washington, DC, and West Chester, PA: Association of Research Libraries, Coalition for Networked Information, and Stratus Inc., May 2024. https://doi.org/10.29242/report.aiscenarios2024

  6. Jan 2024
    1. Images of women are more likely to be coded as sexual in nature than images of men in similar states of dress and activity, because of widespread cultural objectification of women in both images and its accompanying text. An AI art generator can “learn” to embody injustice and the biases of the era and culture of the training data on which it is trained.

      Objectification of women as an example of AI bias

  7. Nov 2023
    1. One of the ways that ChatGPT is very powerful is that if you're sufficiently educated about computers and you want to make a computer program, and you can instruct ChatGPT in what you want with enough specificity, it can write the code for you. It doesn't mean that every coder is going to be replaced by ChatGPT, but it means that a competent coder with an imagination can accomplish a lot more than she used to be able to; maybe she could do the work of five coders. So there's a dynamic where people who can master the technology can get a lot more done.

      ChatGPT augments, not replaces

      You have to know what you want to do before you can provide the prompt for the code generation.

  8. Sep 2023
    1. considering that Llama-2 has open weights, it is highly likely that it will improve significantly over time.

      I believe the author refers to the openly released weights of the Llama-2 model. They allow quick and targeted fine-tuning of the original big model.
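
      A minimal sketch of why open weights matter here: with the weights downloadable, anyone can run parameter-efficient fine-tuning locally. This assumes Hugging Face `transformers` and `peft`; the hyperparameters are illustrative, not from the source:

      ```python
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import LoraConfig, get_peft_model

      base = "meta-llama/Llama-2-7b-hf"  # gated repo: requires accepting Meta's license
      tokenizer = AutoTokenizer.from_pretrained(base)
      model = AutoModelForCausalLM.from_pretrained(base)

      # LoRA trains small low-rank adapter matrices instead of all ~7B weights,
      # which is what makes "quick and specific" fine-tuning feasible.
      lora = LoraConfig(r=8, lora_alpha=16,
                        target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
      model = get_peft_model(model, lora)
      model.print_trainable_parameters()  # typically well under 1% of the total
      ```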

  9. Jul 2023
    1. AI-generated content may also feed future generative models, creating a self-referential aesthetic flywheel that could perpetuate AI-driven cultural norms. This flywheel may in turn reinforce generative AI’s aesthetics, as well as the biases these models exhibit.

      AI bias becomes self-reinforcing

      Does this point to a need for more diversity in AI companies? Different aesthetic/training choices lead to opportunities for more diverse output. To say nothing of identifying and segregating AI-generated output to keep it from being used in the training data of subsequent models. A toy sketch of the flywheel follows below.
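
      A toy illustration (my own, not from the paper): if each model generation trains on the previous generation's output, and models slightly favor their most probable outputs, diversity shrinks generation by generation:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      mu, sigma = 0.0, 1.0   # stand-in for the spread of human-made aesthetics
      SHARPEN = 0.9          # assumption: models mildly over-sample their likeliest outputs

      for gen in range(1, 11):
          samples = rng.normal(mu, SHARPEN * sigma, 5000)  # this generation's "content"
          mu, sigma = samples.mean(), samples.std()        # next model fits that content
          print(f"generation {gen:2d}: sigma = {sigma:.3f}")
      # sigma decays roughly like 0.9**gen: the self-referential loop
      # narrows the distribution instead of exploring it.
      ```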

  10. May 2023
    1. Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.

      If you use LLMs to improve your mediocre writing, they will help. If you use them to outsource too much of your own cognitive work, you will get the bland SEO texts the LLMs were trained on, and the result will be more mediocrity. Greedy reductionism gets punished.

  11. Dec 2022
    1. every country is going to need to reconsider its policies on misinformation. It’s one thing for the occasional lie to slip through; it’s another for us all to swim in a veritable ocean of lies. In time, though it would not be a popular decision, we may have to begin to treat misinformation as we do libel, making it actionable if it is created with sufficient malice and sufficient volume.

      What to do then when our government reps are already happy to perpetuate "culture wars" and empty talking points?

    2. anyone skilled in the art can now replicate their recipe.

      Well, anyone skilled enough who has $500k for the GPU bill, access to the corpus, and the means to store it... So corporations, I guess... Yay!