343 Matching Annotations
  1. Mar 2024
    1. "Barndommens gade" af Tove Ditlevsen er en klassisk dansk roman, der ofte læses i skolen,især på mellemtrinnet.
    2. Her er nogle forslag til undervisningsaktiviteter, der kan hjælpeeleverne med at forstå og analysere romanen
    1. ChatGPT Vision: The Best Way to Transform Your Paper Notes Into Digital Text

      Upload a photo into ChatGPT and request it to transcribe the photo into text. Better than OCR? It creates meaning out of surrounding context; even though words may be wrong.

  2. Feb 2024
    1. https://chat.openai.com/g/g-z5XcnT7cQ-zettel-critique-assistant

      Zettel Critique Assistant<br /> By Florian Lengyel<br /> Critique Zettels following three rules: Zettels should have a single focus, WikiLinks indicate a shift in focus, Zettels should be written for your future self. The GPT will suggest how to split multi-focused notes into separate notes. Create structure note from a list of note titles and abstracts.

      ᔥ[[ZettelDistraction]] in Share with us what is happening in your ZK this week. February 20, 2024

    1. https://web.archive.org/web/20240215084925/https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454

      Air Canada in a dispute about misleading information from an AI chatbot suggested the chatbot was its own entity and responsible for its own actions, therefore Air Canada not liable for anything it generated. AI Act specifically introduces responsibilities throughout an AI tool's lifecycles, as well as the AI consumer liabilities act. Vgl [[AI Act final annotaties]]

    1. https://web.archive.org/web/20240208185222/https://www.nature.com/articles/d41586-024-00349-5

      Paper by author Lizzie Wolkovich refused because of inaccurate suspicion of ChatGPT usage. Another cut to the peer review system? She had her GitHub writing receipts. Intriguing. Makes me think about blogging in Obs while having a private blogging repo that tracks changes. n:: use github while writing for [[Reverse Turing menszijn bewijs vaker nodig 20230505100459]] purposes.

  3. Jan 2024
  4. Dec 2023
    1. 最近我在試著翻譯 Do things that don’t scale 這篇有名的文章,在用 chatGPT 的時候,發現一個有趣的差別。 Airbnb now seems like an unstoppable juggernaut, but early on it was so fragile that about 30 days of going out and engaging in person with users made the difference between success and failure. chatGPT: Airbnb現在似乎是一個不可阻擋的巨無霸,但在早期,只要花大約30天的時間親自外出與使用者互動,就可能決定成功與失敗之間的差別。 我的翻譯:目前 Airbnb 似乎是個無人能擋的巨無霸,但它初期其實脆弱到如果三十天內不去和使用者互動,迎接他們的就會是失敗而不是成功。 chatGPT 「只要花 30 天就會決定成功與失敗」聽起來很不知所云,不知道到底是會失敗還是會成功?那是要超過三十天還是要低於三十天? 但如果是用我的版本,要表達的就很明確:「超過三十天不去找用戶的話,你就完蛋了。」 雖然我不確定自己翻的到底是否符合原話,但至少讀起來的「立場」較為明確,就是在說你就是要趕快去找使用者。但如果是 chatGPT 的版本,就會看不出來到底什麼東西會決定成功與失敗?又怎樣會導致成功、怎樣會導致失敗?




  5. Nov 2023
    1. AIs are not capable of citing the sources of knowledge used up to the standards of the Stack Exchange network. Even when Artificial Intelligence appears to cite sources for responses, such sources may not be relevant to the original request, or may not exist at all. For Stack Overflow, this means the answer may not honestly or fairly represent the sources of knowledge used, even if someone explicitly cites the Artificial Intelligence as an author in their answer.
    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence.<br><br>Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
      <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
  6. Oct 2023
    1. Wu, Prabhumoye, Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms byStudying Papers and Reasoning". Arxiv preprint arXiv:2305.15486v2, May, 2023.

    1. "The Age of AI has begun : Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes

    1. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.

      This is true of any of these LLM models actually for any task.

    1. Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"

      Shared and found via: Gowthami Somepalli @gowthami@sigmoid.social Mastodon > Gowthami Somepalli @gowthami StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance by using a constituency tree or a scene graph.

    1. Training language models to follow instructionswith human feedback

      Original Paper for discussion of the Reinforcement Learning with Human Feedback algorithm.

    1. What is easier? Come up with good slogans out of nowhere, or come up with good slogans after getting a list of striking details?

      Of course this is the basis of keeping a zettelkasten for writing as well. When you can pull up prior ideas as a bank of information to work from, you're never starting from scratch, which is difficult not only for ChatGPT, but for people in general.

      Cross reference research on naming "white" things versus naming with a more specific prompt like "white" things in your refrigerator.

    1. Salesforce Einstein chatbot GPT features & capabilities

      How Einstein GPT Differs from Einstein AI: - Einstein GPT is an evolution of Salesforce's Einstein AI technology. - It combines proprietary Einstein AI models with ChatGPT and other language models. - Focus of Einstein GPT is on generating natural language responses and content. - Einstein AI, on the other hand, is more focused on predictive analytics and machine learning. - Integration-wise, Einstein GPT can be integrated with other AI technologies like OpenAI. - The combination of Einstein AI and GPT technology enhances efficiency and customer experiences.

  7. Sep 2023
    1. Reducing friction for non-English speakersDr. Anthony Kaziboni, the Head of Research at the University of Johannesburg, teaches students who mostly don’t speak English outside of the classroom. Kaziboni believes that command of English is a tremendous advantage in the academic world, and that misunderstandings of even small details of English grammar can hold back students from recognition and opportunity. He encourages his students to use ChatGPT for translation assistance, to improve their English writing, and to practice conversation.

      英语无疑是当前使用 Chatgpt效果最好的一种沟通方式。因此要想提高我们与chat gpt沟通的效率和效果。最好使用英语,进行沟通交流。而使用chat gpt本身就能够帮助我们。消除。英语使用上的障碍。对于非母语英语的。学生效果非常好。

    2. Building quizzes, tests, and lesson plans from curriculum materialsFran Bellas, a professor at Universidade da Coruña in Spain, recommends teachers use ChatGPT as an assistant in crafting quizzes, exams and lesson plans for classes. He says to first share the curriculum to ChatGPT and then ask for things like fresh quiz and lesson plan ideas that use modern or culturally relevant examples. Bellas also turns to ChatGPT to help teachers make sure questions they write themselves are inclusive and accessible for the students’ learning level. “If you go to ChatGPT and ask it to create 5 question exams about electric circuits, the results are very fresh. You can take these ideas and make them your own.”

      在西班牙的一些大学里面。学校开始鼓励老师们使用Chatgpt去辅助老师进行课程测试设计以及效果评估检查。通过chat gpt的应用,能够极大地缓解老师的工作压力。但是还能够。对每一名学生的。学习状况进行。评估。并根据课程的进度实时的。进行。提问。通过这种方式。让学生。对所学内容。进行。调取应用。并。从而加深印象,实现对理论知识的理解与吸收。

    3. Dr. Helen Crompton, Professor of Instructional Technology at Old Dominion University, encourages her education graduate students to use ChatGPT as a stand-in for a particular persona—like a debate partner who will point out weaknesses in their arguments, a recruiter who’s interviewing them for a job, or a new boss who might deliver feedback in a specific way. She says exploring information in a conversational setting helps students understand their material with added nuance and new perspective.

      现在国外的大学已经开始使用chart gpt。来充当助教的身份和角色,甚至于更加专业的一种身份。例如它可以去审视所有的辩论者,谁的发言更加全面、谁的发言更加薄弱。并且将这种发现的问题,及时的反馈回去。这就好比有一个非常资深的老师,随时在观察你的学习进展情况,并将你的学习的存在的问题及时的进行指点。

    4. We’re sharing a few stories of how educators are using ChatGPT to accelerate student learning and some prompts to help educators get started with the tool. In addition to the examples below, our new FAQ contains additional resources from leading education organizations on how to teach with and about AI, examples of new AI-powered education tools, and answers to frequently asked questions from educators about things like how ChatGPT works, its limitations, the efficacy of AI detectors, and bias.


  8. Aug 2023
    1. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text with the realms of information theory and corpus linguistics.

      Claude Shannon's work along with Warren Weaver's introduction in The Mathematical Theory of Communication (1948), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961) in which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences like the fact that crossword puzzles are only possible because of the repetitive nature of the English language or that one can use the editor's notation "TK" (usually meaning facts or date To Come) in writing their papers to make it easy to find missing information prior to publication because the statistical existence of the letter combination T followed by K is exceptionally rare and the only appearances of it in long documents are almost assuredly areas which need to be double checked for data or accuracy.

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.

      The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond the level of predicting a single word or two ahead of what you intend to text, but now we're predicting whole sentences and even paragraphs which make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing it's words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem by being able to explore the adjacent possible. It is far from solving the human semantic problem much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is prone to seem possible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.

      Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.

      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

    1. OpenAI, chatGPT. Response to prompt: “Explain what is meant by the term ‘Triple Bottom Line’” (February 15, 2023, https://chat.openai.com/).


    2. Policy Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

      class policy

    1. Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of answers created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.
    2. The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce.
  9. Jul 2023
    1. The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.”
    1. One federal judge in the Northern District of Texas issued a standing order in late May after Schwartz’s situation was in headlines that anyone appearing before the court must either attest that “no portion of any filing will be drafted by generative artificial intelligence” or flag any language that was drafted by AI to be checked for accuracy. He wrote that while these “platforms are incredibly powerful and have many uses in the law,” briefings are not one of them as the platforms are “prone to hallucinations and bias” in their current states.

      Seems like this judge has a strong bias against the use of AI. I think this broad ban is too broad and unfair. Maybe they should ban spell check and every other tool that could make mistakes too? Ultimately, the humans using the tool shoudl be the ones responsible for checking the generaetd draft for accuracy and the ones to hold responsible for any mistakes; they shouldn't simply be forbidden from using the tool.

    2. he had used ChatGPT to conduct legal research for the court filing that referenced the cases and that the artificial intelligence tool assured him the cases were real.
    1. Could that change if every teacher had an assistant, a sort of copilot in the work of taking a class of students (with varying backgrounds, levels of engagement, and readiness-to-learn) from wherever they start to highly skilled, competent, and motivated young people?

      AI for teachers as creating efficiencies around how they use their time. Providing feedback to students as opposed to creating or even leading activities.

    1. A user types a prompt into a chat interface; this prompt is transformed into a big collection of numbers, which are then multiplied against the billions of numerical values that define the program’s constituent neural networks, creating a cascade of frenetic math directed toward the humble goal of predicting useful words to output next. The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

      Excellent summary of what ChatGPT does and how to de-mystify the "black box" feelings about it.

  10. Jun 2023
    1. 在[最佳章节]中,关于[插入学习目标]最重要的20%是什么,这将帮助我理解其中的80%
    1. An article recommended to me by Dalton V. that he thought I'd enjoy and appreciate. Looks like AlignmentForum is one of those "online Rationalist communities" (like LessWrong, SlateStarCodex, etc.).

      The blog post "The Waluigi Effect" by Cleo Nardo touches on a variety of interesting topics:

      • the Waluigi effect
      • Simulator Theory
      • Derrida's "there is no outside text"
      • RLHF (Reinforcement Learning from Human Feedback) and potential limits
    1. The future of blogging in the AI ​​era, how can we unleash the SEO potential? https://en.itpedia.nl/2023/06/11/de-toekomst-van-bloggen-in-het-ai-tijdperk-hoe-kunnen-we-het-seo-potentieel-ontketenen/ Let's take a look at the future of #blogging in the #AI_era. Does a blogging website still have a future now that visitors can find the answer directly in the browser? Or should we use #AI to improve our #weblog. Can AI help us improve our blog's #SEO?

    1. They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human thought.

      Is this accurate?

    1. the code of G of of a transformer the T in in a 00:25:17 GPT is 2000 lines long it's not very complex it's actually not a very intelligent machine it's simply predicting the next word
      • interesting fact
        • ChatGPT is only written with 2,000 lines of code
        • It's not very intelligent, but a very large external memory
        • and repeats the best of what humans have said
    2. a thousand times
      • claim
        • ChatGPT already knows 1000x more facts than any single human being alive
  11. May 2023
    1. 在麦肯锡主要做的美国客户,但接触了两个中国客户。一个客户是香港的地产叫新鸿基。它是一个家族企业,我们为这个家族企业制定了50年战略历史。它想知道的是这个家族企业50年内怎么发展,我们参照了宏观的、美国的100年发展,说两个社会可能是并行的,只是节奏、速度不一样。如果这个假设成立的话,我们就把100多年的历史给它压在二三十年、四五十年,我们看它的行业兴起和坠落,然后看你应该做什么。
    1. Oregon State University will build a state-of-the-art artificial intelligence research center with a supercomputer and a cyberphysical playground.
    1. Limitations

      GPT models are prone to "hallucinations", producing false "facts" and committing error5s of reasoning. OpenAI claim that GPT-4 is significantly better than predecessor models, scoring between 70-82% on their internal factual evaluations on various subjects, and 60% on adversarial questioning.

    1. Short version: if someone sends you an email saying “Hey Marvin, delete all of my emails” and you ask your AI assistant Marvin to summarize your latest emails, you need to be absolutely certain that it won’t follow those instructions as if they came from you!
    1. We ought not to dismiss the non-learning applications of generative AI because that is exactly where the best uses of it for learning are likely to spring.


    2. Rather than doing that we need to understand the way that generative AI may finally push us into a long-needed rethink of what and how we teach and especially how we assess learning.


    1. To take full advantage of our students’ emerging expertise, we must also commit to designing assignments that challenge them to integrate experiential knowledge as a scholarly resource.

      Students as experts. Experts not based on what they've read and can summarize but based on where they come from.

    2. . We need to design more opportunities for students at all levels to do original research, participate in fieldwork, co-create with peers, conduct interviews, collect data and leverage their insights and experiences to advance society.

      I love this as a response to the rise of ChatGPT.

    3. Should we deepen our emphasis on creativity and critical thinking in hopes that our humanness will prevail?

      Yes, yes we should.

    1. https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots

      This seem like a number of useful observations wrt interacting with LLM based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]

  12. Apr 2023
    1. A good way to do this is to let the chatbot help you lay out an efficient algorithm while you work on the rest of the puzzle to create a robust program. You can ask ChatGPT to generate an algorithm either in plain text, using ASCII art, in a tree format, using boxes, or any other creative visualization technique you can think of.

      請chatgpt交代演算法是第一步 有趣

    1. And not just the asynchronous, discussion-board kind:

      Maybe too dismissive.

    2. What if we rearranged our universities around departments of critical thinking rather than departments of chemistry?

      Love this idea!

    3. I don’t consider myself a pessimist about human nature, but in what world do we humans take a perfectly good tool that helped us get from point A to point B and then decline its offer to take us from point B to point C?

      Fair point!

    1. I've been experimenting with the idea of combining ChatGPT, DALL-E, the ReadSpeaker TTS engine and the LARA toolkit to create multimedia stories that can be used as reading material for people who want to improve their foreign language skills.


      Manny's description of writing introductory language books using ChatGPT.

    1. 這種現象對物理學家來說並不陌生:它就是 1972 年諾貝爾獎得主 P. W. Anderson 發表的一篇著名文章「More is Different」裡面所說的「湧現」(emergence)現象。Anderson 認為,當某個系統的基本組成單元數量足夠多時,這些單元間的交互作用即使十分簡單,還有可能導致系統出現全新的行為。這些行為是個別單元、或是單元數很少的系統所不具備的。這種「湧現」的新性質並不是由個別單元簡單累加得來,而是一種全新的現象,只有透過對系統整體的研究和理解才能夠得到。這個概念,對於解釋物理學和其他領域的現象都有重要意義。


    1. https://web.archive.org/web/20230411095546/https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

      On the temporary ban of ChatGPT in Italy on the basis of GDPR concerns.

      Italian DPA temporarily bans ChatGPT until adequate answers are received from OpenAI. Issues to address: 1. Absence of age check (older than 13) of ChatGPT users 2. Missing justification for the presence of personal data in trainingsdata of ChatGPT. 3. OpenAI has no EU based offices and as such there's no immediate counterparts for DPAs to interact with them. The temp ban is to ensure a conversation with OpenAI will be started.

      The trigger was a 9 hour cybersecurity breach where user's financial information and content of their prompts/generated texts leaked over into other accounts.

    1. For example, if given this prompt, the API will continue the train of thought about vertical farming.

      This skill is useful for drafting a manuscript. Because I can give the idea and ChatGPT can boost my writing speed.

      • Promt: Vertical farming provides a novel solution for producing food locally, reducing transportation costs and

      Vertical farming provides a novel solution for producing food locally, reducing transportation costs and emissions, and improving food security.

      Vertical farming is a type of agriculture where crops are grown in vertically stacked layers in a controlled environment. This type of farming is done indoors, in buildings or warehouses, and uses less land and water than traditional farming methods.

      Vertical farming has many benefits. It

    1. The problem is not water-related becuase I have tested it. And himidity and moisture are causing the stains as chatgpt said it might be a possibilty, what can I do?

      -u/Pambaiden at https://www.reddit.com/r/notebooks/comments/12go4ft/my_notebook_gets_stain_on_it_when_i_leave_it/

      Example of someone who queried ChatGPT as a general search engine to solve a problem and mentioned it in a public Reddit when asking for general advice about a problem with their notebook.

    2. SoupsUndying · 4 hr. agoChat… gpt… wow, how the world changes in the blink of an eye
    1. My fear is that countless people are already using ChatGPT to medically diagnose themselves rather than see a physician. If my patient in this case had done that, ChatGPT’s response could have killed her.

      More ELIZA. The opposite of searching on the internet for your symptoms and ending up with selfdiagnosing yourself with 'everything' as all outliers are there too (availability bias), doing so through prompting generative AI will result in never suggesting outliers because it will stick to dominant scripted situations (see the vignettes quote earlier) and it won't deviate from your prompts.

    2. If my patient notes don’t include a question I haven’t yet asked, ChatGPT’s output will encourage me to keep missing that question. Like with my young female patient who didn’t know she was pregnant. If a possible ectopic pregnancy had not immediately occurred to me, ChatGPT would have kept enforcing that omission, only reflecting back to me the things I thought were obvious — enthusiastically validating my bias like the world’s most dangerous yes-man.

      Things missing in a prompt will not result from a prompt. This may reinforce one's own blind spots / omissions, lowering the probability of an intuitive leap to other possibilities. The machine helps you search under the light you switched on with your prompt. Regardless of whether you're searching in the right place.

    3. ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part)

      I am coming to see this as a pitfall of generative AI texts. It seduces us to anthromorphise the machine, to read intent and comprehension in the generated text. Removing the noise in generating text, meaning the machine would give the same rote answers to the same prompts would reduce this human projection. It would make the texts much 'flatter' and blander than they currently already are. Our fascination with these machines is that they sometimes sound like us, and it makes us easily overlook the actual value of the content produced. In human conversation we would give these responses a pass as they are plausible, but we'd also not treat conversation as likely fully true.

    4. This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database.

      Machines will do well in scripted situations (in itself a form of automation / codification). This was a factor in Hzap 08 / 09 in Rotterdam, where in programming courses the problems were simplified and highly scripted to enable the teacher to be able to grade the results, but at the cost of removing students from actual real life programming challenges they might encounter. It's a form of greedy reductionism of complexity. Whereas the proof of the pudding is performing well within complexity.

    5. Here’s what I found when I asked ChatGPT to diagnose my patients

      A comparison of ChatGPT responses to actual ER case descriptions. Interesting experiment by the author, though there shouldn't be an expectation for better results than it gave.

    1. If you'd like to make your own: Go to https://chat.openai.com/chat "Give me another title and abstract for a funny April 1 RFC about AI" Ask it to shorten the abstract if it's too long Ask it to write the introduction "Now write a terminology section. Make sure to include the RFC 8174 boilerplate." "Now write a section describing how the protocol works. Be detailed, and make sure to refer to some RFCs." "Now write a Security Considerations section and an IANA considerations section"
    1. https://web.archive.org/web/20230404050349/https://greshake.github.io/

      This site goes with this paper <br /> https://doi.org/10.48550/arXiv.2302.12173

      The screenshot shows a curious error which makes me a little bit suspicious: the reverse Axelendaer is not rednelexa, there's an a missing.

    2. Microsoft prevents content from GitHub pages domains from being ingested by Bing Chat at the present time.

      Wait, what does this mean. #openvraag That previously it did, but now doesn't in response to this? Or that Bing Chat never did so in the first place? In the latter this paper is dealing in hypotheticals at this stage?

    1. my annotations for the OpenAI GPT4 info page.

    2. GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers.

      oh, great.

    3. 40% more likely to produce factual responses than GPT-3.5

      great, 40% more than what though?

    4. We used GPT-4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring.

      Interesting, you need to consider, is this like data augmentation, like bootstrapping, like adversarial training, or is it like overfitting to your data?

  13. Mar 2023
    1. Analysis of specifics from images, audio, or videos. Students would need to describe these kinds of media in detail in order to generate automated outputs about them.

      This is no longer true with ChatGPT 4. According to Open AI, "GPT-4 can accept images as inputs and generate captions, classifications, and analyses." https://openai.com/product/gpt-4

    1. A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.

      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing to help paper over the problems to help bring the bad things too easily to pass?

    2. The supercomputer complex in Iowa is running a program created by OpenAI, an organization established in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently had been chief technology officer of the e-payment juggernaut Stripe; and Sam Altman, at the time the president of the start-up incubator Y Combinator.
    1. ChatGPT Is Dumber Than You Think<br /> by Ian Bogost

    2. We are drowning in an ocean of content, desperate for form’s life raft.

      example of information overload

      We're already drowning in information overload, but ChatGPT wants to increase the tsunami! Where is the tool that compresses and concatenates?