358 Matching Annotations
  1. Last 7 days
  2. Oct 2024
    1. Levels of understanding genres: - 0) No understanding Like the song, never heard anything like it before, but no idea about anything. - 1) Basic Understanding Knowing a bit about the name of the genre and subgenres, but you can be wrong. - 2) Immersion Really dive into subgenres and flavors of the main genre... Also a bit of history about the genre. Research. - 3) Structure Breaking down the structure of the tracks in the genre. For example through DAW. Basically first-principles thinking.

      To level 1: Song Analyzer tools (for example musicstax or AI). The author recommends everynoise.com too to gain a basic understanding of genres.

      To level 2: Find similar songs and artists for your playlists with that genre. Perhaps playlists. Important to understand the origin of the genre.

    1. C’est un outil de prédiction,

      Pouvons-nous dire que sa logique de jugement (ou modèle de jugement) est fondée sur la statistique et la théorie des probabilités ?

    Tags

    Annotators

    1. cómo interactuar de manera más efectiva con modelos de lenguaje aprendiendo a elaborar o copiar prompts

      Pregunta a ChatGPT y te da foros interesantes para aprender

      1. The Future of Autonomous Vehicles

        How autonomous cars will reshape urban transportation. Ethical implications of self-driving vehicles. The potential for autonomous public transport systems.

      2. Electric Vehicles (EVs) and Sustainability

        The role of electric vehicles in reducing carbon emissions. The challenges and opportunities in EV infrastructure development. Comparison between EVs and traditional fuel-powered vehicles: environmental and economic perspectives.

      3. Hyperloop Technology and Its Impact on Long-Distance Travel

        How Hyperloop could revolutionize intercity transportation. Technological and financial challenges in implementing Hyperloop systems. Environmental benefits of ultra-high-speed transport.

      4. Urban Mobility Innovations

        How e-scooters, bikes, and ride-sharing apps are changing urban commuting. The future of smart cities and the role of connected transportation systems. Policy challenges surrounding micromobility in congested cities.

      5. AI and Machine Learning in Transportation

        How AI is optimizing traffic flow and reducing congestion. Predictive maintenance in transportation through machine learning. AI-driven logistics: revolutionizing freight and delivery systems.

      6. Drones and Aerial Transportation

        The use of drones for delivery services: possibilities and challenges. The future of passenger drones and urban air mobility. Regulatory and safety concerns surrounding the widespread use of drones.

      7. Sustainable Transport Solutions

        Innovations in public transport to make it more sustainable. The role of hydrogen fuel cells in the future of transportation. The importance of integrating renewable energy sources into transportation infrastructure.

      8. The Role of Big Data in Transportation

        How data analytics can improve traffic management and reduce congestion. The impact of big data on the logistics and freight industry. Privacy concerns and the use of personal data in smart transportation systems.

  3. Aug 2024
  4. Jul 2024
    1. ( ~ 6:25-end )

      Steps for designing a reading plan/list: 1. Pick a topic/goal (or question you want to answer) & how long you want to take to achieve this. 2. Do research into the books necessary to achieve this goal. Meta-learning, scope out the subject. The number of books is relative to the goal and length of the goal. 3. Find the books using different tools such as Google & GoodReads & YouTube Recommendations (ChatGPT & Gemini are also useful). 4. Refine the book list (go through reviews, etc., in Adlerian steps, do an Inspectional Read of everything... Find out if it's truly useful). Also order them into a useful sequence for the syntopical reading project. Highlight the topics covered, how difficult they are, relevancy, etc. 5. Order the books (or download them)


      Reminds me a bit of Scott Young's Metalearning step, and doing a skill decomposition in van Merriënboer et al.'s 10 Steps to Complex Learning

  5. Jun 2024
  6. May 2024
    1. If you ask ChatGPT to cite it will provide random citations. That's different from actually training a model to cite (e.g. use supervised finetuning on citations with human raters checking whether sources match, which would also allow you to verify how accurately a model cites). This is something OpenAI could do, it just doesn't.
    1. We recently improved source links in ChatGPT(opens in a new window) to give users better context and web publishers new ways to connect with our audiences. 
  7. Mar 2024
    1. "Barndommens gade" af Tove Ditlevsen er en klassisk dansk roman, der ofte læses i skolen,især på mellemtrinnet.
    2. Her er nogle forslag til undervisningsaktiviteter, der kan hjælpeeleverne med at forstå og analysere romanen
    1. ChatGPT Vision: The Best Way to Transform Your Paper Notes Into Digital Text

      Upload a photo into ChatGPT and request it to transcribe the photo into text. Better than OCR? It creates meaning out of surrounding context; even though words may be wrong.

  8. Feb 2024
    1. https://chat.openai.com/g/g-z5XcnT7cQ-zettel-critique-assistant

      Zettel Critique Assistant<br /> By Florian Lengyel<br /> Critique Zettels following three rules: Zettels should have a single focus, WikiLinks indicate a shift in focus, Zettels should be written for your future self. The GPT will suggest how to split multi-focused notes into separate notes. Create structure note from a list of note titles and abstracts.

      ᔥ[[ZettelDistraction]] in Share with us what is happening in your ZK this week. February 20, 2024

    1. https://web.archive.org/web/20240215084925/https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454

      Air Canada in a dispute about misleading information from an AI chatbot suggested the chatbot was its own entity and responsible for its own actions, therefore Air Canada not liable for anything it generated. AI Act specifically introduces responsibilities throughout an AI tool's lifecycles, as well as the AI consumer liabilities act. Vgl [[AI Act final annotaties]]

    1. https://web.archive.org/web/20240208185222/https://www.nature.com/articles/d41586-024-00349-5

      Paper by author Lizzie Wolkovich refused because of inaccurate suspicion of ChatGPT usage. Another cut to the peer review system? She had her GitHub writing receipts. Intriguing. Makes me think about blogging in Obs while having a private blogging repo that tracks changes. n:: use github while writing for [[Reverse Turing menszijn bewijs vaker nodig 20230505100459]] purposes.

  9. Jan 2024
  10. Dec 2023
    1. 最近我在試著翻譯 Do things that don’t scale 這篇有名的文章,在用 chatGPT 的時候,發現一個有趣的差別。 Airbnb now seems like an unstoppable juggernaut, but early on it was so fragile that about 30 days of going out and engaging in person with users made the difference between success and failure. chatGPT: Airbnb現在似乎是一個不可阻擋的巨無霸,但在早期,只要花大約30天的時間親自外出與使用者互動,就可能決定成功與失敗之間的差別。 我的翻譯:目前 Airbnb 似乎是個無人能擋的巨無霸,但它初期其實脆弱到如果三十天內不去和使用者互動,迎接他們的就會是失敗而不是成功。 chatGPT 「只要花 30 天就會決定成功與失敗」聽起來很不知所云,不知道到底是會失敗還是會成功?那是要超過三十天還是要低於三十天? 但如果是用我的版本,要表達的就很明確:「超過三十天不去找用戶的話,你就完蛋了。」 雖然我不確定自己翻的到底是否符合原話,但至少讀起來的「立場」較為明確,就是在說你就是要趕快去找使用者。但如果是 chatGPT 的版本,就會看不出來到底什麼東西會決定成功與失敗?又怎樣會導致成功、怎樣會導致失敗?

      其實這裡ChatGPT的直譯翻譯意思才對。

      人工的翻譯版犯了一個錯,「超過三十天不去找用戶」,這個意思原文完全沒有。難道,只要29天趕快去找用戶溝通一天,就沒事?

      原文是說,他們必須連續30天在外跟用戶溝通,才能成功。

  11. Nov 2023
    1. AIs are not capable of citing the sources of knowledge used up to the standards of the Stack Exchange network. Even when Artificial Intelligence appears to cite sources for responses, such sources may not be relevant to the original request, or may not exist at all. For Stack Overflow, this means the answer may not honestly or fairly represent the sources of knowledge used, even if someone explicitly cites the Artificial Intelligence as an author in their answer.
    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence.<br><br>Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
      <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
  12. Oct 2023
    1. Wu, Prabhumoye, Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms byStudying Papers and Reasoning". Arxiv preprint arXiv:2305.15486v2, May, 2023.

    1. "The Age of AI has begun : Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes

    1. It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text.

      This is true of any of these LLM models actually for any task.

    1. Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"

      Shared and found via: Gowthami Somepalli @gowthami@sigmoid.social Mastodon > Gowthami Somepalli @gowthami StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance by using a constituency tree or a scene graph.

    1. Training language models to follow instructionswith human feedback

      Original Paper for discussion of the Reinforcement Learning with Human Feedback algorithm.

    1. What is easier? Come up with good slogans out of nowhere, or come up with good slogans after getting a list of striking details?

      Of course this is the basis of keeping a zettelkasten for writing as well. When you can pull up prior ideas as a bank of information to work from, you're never starting from scratch, which is difficult not only for ChatGPT, but for people in general.

      Cross reference research on naming "white" things versus naming with a more specific prompt like "white" things in your refrigerator.

    1. Salesforce Einstein chatbot GPT features & capabilities

      How Einstein GPT Differs from Einstein AI: - Einstein GPT is an evolution of Salesforce's Einstein AI technology. - It combines proprietary Einstein AI models with ChatGPT and other language models. - Focus of Einstein GPT is on generating natural language responses and content. - Einstein AI, on the other hand, is more focused on predictive analytics and machine learning. - Integration-wise, Einstein GPT can be integrated with other AI technologies like OpenAI. - The combination of Einstein AI and GPT technology enhances efficiency and customer experiences.

  13. Sep 2023
    1. Reducing friction for non-English speakersDr. Anthony Kaziboni, the Head of Research at the University of Johannesburg, teaches students who mostly don’t speak English outside of the classroom. Kaziboni believes that command of English is a tremendous advantage in the academic world, and that misunderstandings of even small details of English grammar can hold back students from recognition and opportunity. He encourages his students to use ChatGPT for translation assistance, to improve their English writing, and to practice conversation.

      英语无疑是当前使用 Chatgpt效果最好的一种沟通方式。因此要想提高我们与chat gpt沟通的效率和效果。最好使用英语,进行沟通交流。而使用chat gpt本身就能够帮助我们。消除。英语使用上的障碍。对于非母语英语的。学生效果非常好。

    2. Building quizzes, tests, and lesson plans from curriculum materialsFran Bellas, a professor at Universidade da Coruña in Spain, recommends teachers use ChatGPT as an assistant in crafting quizzes, exams and lesson plans for classes. He says to first share the curriculum to ChatGPT and then ask for things like fresh quiz and lesson plan ideas that use modern or culturally relevant examples. Bellas also turns to ChatGPT to help teachers make sure questions they write themselves are inclusive and accessible for the students’ learning level. “If you go to ChatGPT and ask it to create 5 question exams about electric circuits, the results are very fresh. You can take these ideas and make them your own.”

      在西班牙的一些大学里面。学校开始鼓励老师们使用Chatgpt去辅助老师进行课程测试设计以及效果评估检查。通过chat gpt的应用,能够极大地缓解老师的工作压力。但是还能够。对每一名学生的。学习状况进行。评估。并根据课程的进度实时的。进行。提问。通过这种方式。让学生。对所学内容。进行。调取应用。并。从而加深印象,实现对理论知识的理解与吸收。

    3. Dr. Helen Crompton, Professor of Instructional Technology at Old Dominion University, encourages her education graduate students to use ChatGPT as a stand-in for a particular persona—like a debate partner who will point out weaknesses in their arguments, a recruiter who’s interviewing them for a job, or a new boss who might deliver feedback in a specific way. She says exploring information in a conversational setting helps students understand their material with added nuance and new perspective.

      现在国外的大学已经开始使用chart gpt。来充当助教的身份和角色,甚至于更加专业的一种身份。例如它可以去审视所有的辩论者,谁的发言更加全面、谁的发言更加薄弱。并且将这种发现的问题,及时的反馈回去。这就好比有一个非常资深的老师,随时在观察你的学习进展情况,并将你的学习的存在的问题及时的进行指点。

    4. We’re sharing a few stories of how educators are using ChatGPT to accelerate student learning and some prompts to help educators get started with the tool. In addition to the examples below, our new FAQ contains additional resources from leading education organizations on how to teach with and about AI, examples of new AI-powered education tools, and answers to frequently asked questions from educators about things like how ChatGPT works, its limitations, the efficacy of AI detectors, and bias.

      问题的关键在于——他是如何?加速学生的学习过程。学习的过程,他本身应该有哪些环节?他到底是在哪一个环节实现了这种。速度和质量上的提升。要把这个事情弄清楚。意义,非凡非常关键。

  14. Aug 2023
    1. Some may not realize it yet, but the shift in technology represented by ChatGPT is just another small evolution in the chain of predictive text with the realms of information theory and corpus linguistics.

      Claude Shannon's work along with Warren Weaver's introduction in The Mathematical Theory of Communication (1948), shows some of the predictive structure of written communication. This is potentially better underlined for the non-mathematician in John R. Pierce's book An Introduction to Information Theory: Symbols, Signals and Noise (1961) in which discusses how one can do a basic analysis of written English to discover that "e" is the most prolific letter or to predict which letters are more likely to come after other letters. The mathematical structures have interesting consequences like the fact that crossword puzzles are only possible because of the repetitive nature of the English language or that one can use the editor's notation "TK" (usually meaning facts or date To Come) in writing their papers to make it easy to find missing information prior to publication because the statistical existence of the letter combination T followed by K is exceptionally rare and the only appearances of it in long documents are almost assuredly areas which need to be double checked for data or accuracy.

      Cell phone manufacturers took advantage of the lower levels of this mathematical predictability to create T9 predictive text in early mobile phone technology. This functionality is still used in current cell phones to help speed up our texting abilities. The difference between then and now is that almost everyone takes the predictive magic for granted.

      As anyone with "fat fingers" can attest, your phone doesn't always type out exactly what you mean which can result in autocorrect mistakes (see: DYAC (Damn You AutoCorrect)) of varying levels of frustration or hilarity. This means that when texting, one needs to carefully double check their work before sending their text or social media posts or risk sending their messages to Grand Master Flash instead of Grandma.

      The evolution in technology effected by larger amounts of storage, faster processing speeds, and more text to study means that we've gone beyond the level of predicting a single word or two ahead of what you intend to text, but now we're predicting whole sentences and even paragraphs which make sense within a context. ChatGPT means that one can generate whole sections of text which will likely make some sense.

      Sadly, as we know from our T9 experience, this massive jump in predictability doesn't mean that ChatGPT or other predictive artificial intelligence tools are "magically" correct! In fact, quite often they're wrong or will predict nonsense, a phenomenon known as AI hallucination. Just as with T9, we need to take even more time and effort to not only spell check the outputs from the machine, but now we may need to check for the appropriateness of style as well as factual substance!

      The bigger near-term problem is one of human understanding and human communication. While the machine may appear to magically communicate (often on our behalf if we're publishing it's words under our names), is it relaying actual meaning? Is the other person reading these words understanding what was meant to have been communicated? Do the words create knowledge? Insight?

      We need to recall that Claude Shannon specifically carved semantics and meaning out of the picture in the second paragraph of his seminal paper:

      Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.

      So far ChatGPT seems to be accomplishing magic by solving a small part of an engineering problem by being able to explore the adjacent possible. It is far from solving the human semantic problem much less the un-adjacent possibilities (potentially representing wisdom or insight), and we need to take care to be aware of that portion of the unsolved problem. Generative AIs are also just choosing weighted probabilities and spitting out something which is prone to seem possible, but they're not optimizing for which of many potential probabilities is the "best" or the "correct" one. For that, we still need our humanity and faculties for decision making.


      Shannon, Claude E. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.

      Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

      Shannon, Claude Elwood. “The Bandwagon.” IEEE Transactions on Information Theory 2, no. 1 (March 1956): 3. https://doi.org/10.1109/TIT.1956.1056774.


      We may also need to explore The Bandwagon, an early effect which Shannon noticed and commented upon. Everyone seems to be piling on the AI bandwagon right now...

    1. OpenAI, chatGPT. Response to prompt: “Explain what is meant by the term ‘Triple Bottom Line’” (February 15, 2023, https://chat.openai.com/).

      citation

    2. Policy Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university's policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

      class policy

    1. Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of answers created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.
    2. The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce.
  15. Jul 2023
    1. The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.”
    1. One federal judge in the Northern District of Texas issued a standing order in late May after Schwartz’s situation was in headlines that anyone appearing before the court must either attest that “no portion of any filing will be drafted by generative artificial intelligence” or flag any language that was drafted by AI to be checked for accuracy. He wrote that while these “platforms are incredibly powerful and have many uses in the law,” briefings are not one of them as the platforms are “prone to hallucinations and bias” in their current states.

      Seems like this judge has a strong bias against the use of AI. I think this broad ban is too broad and unfair. Maybe they should ban spell check and every other tool that could make mistakes too? Ultimately, the humans using the tool shoudl be the ones responsible for checking the generaetd draft for accuracy and the ones to hold responsible for any mistakes; they shouldn't simply be forbidden from using the tool.

    2. he had used ChatGPT to conduct legal research for the court filing that referenced the cases and that the artificial intelligence tool assured him the cases were real.
    1. Could that change if every teacher had an assistant, a sort of copilot in the work of taking a class of students (with varying backgrounds, levels of engagement, and readiness-to-learn) from wherever they start to highly skilled, competent, and motivated young people?

      AI for teachers as creating efficiencies around how they use their time. Providing feedback to students as opposed to creating or even leading activities.

    1. A user types a prompt into a chat interface; this prompt is transformed into a big collection of numbers, which are then multiplied against the billions of numerical values that define the program’s constituent neural networks, creating a cascade of frenetic math directed toward the humble goal of predicting useful words to output next. The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

      Excellent summary of what ChatGPT does and how to de-mystify the "black box" feelings about it.

  16. Jun 2023
    1. 在[最佳章节]中,关于[插入学习目标]最重要的20%是什么,这将帮助我理解其中的80%
    1. An article recommended to me by Dalton V. that he thought I'd enjoy and appreciate. Looks like AlignmentForum is one of those "online Rationalist communities" (like LessWrong, SlateStarCodex, etc.).

      The blog post "The Waluigi Effect" by Cleo Nardo touches on a variety of interesting topics:

      • the Waluigi effect
      • Simulator Theory
      • Derrida's "there is no outside text"
      • RLHF (Reinforcement Learning from Human Feedback) and potential limits
    1. The future of blogging in the AI ​​era, how can we unleash the SEO potential? https://en.itpedia.nl/2023/06/11/de-toekomst-van-bloggen-in-het-ai-tijdperk-hoe-kunnen-we-het-seo-potentieel-ontketenen/ Let's take a look at the future of #blogging in the #AI_era. Does a blogging website still have a future now that visitors can find the answer directly in the browser? Or should we use #AI to improve our #weblog. Can AI help us improve our blog's #SEO?

    1. They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human thought.

      Is this accurate?

    1. the code of G of of a transformer the T in in a 00:25:17 GPT is 2000 lines long it's not very complex it's actually not a very intelligent machine it's simply predicting the next word
      • interesting fact
        • ChatGPT is only written with 2,000 lines of code
        • It's not very intelligent, but a very large external memory
        • and repeats the best of what humans have said
    2. a thousand times
      • claim
        • ChatGPT already knows 1000x more facts than any single human being alive
  17. May 2023
    1. 在麦肯锡主要做的美国客户,但接触了两个中国客户。一个客户是香港的地产叫新鸿基。它是一个家族企业,我们为这个家族企业制定了50年战略历史。它想知道的是这个家族企业50年内怎么发展,我们参照了宏观的、美国的100年发展,说两个社会可能是并行的,只是节奏、速度不一样。如果这个假设成立的话,我们就把100多年的历史给它压在二三十年、四五十年,我们看它的行业兴起和坠落,然后看你应该做什么。
    1. Oregon State University will build a state-of-the-art artificial intelligence research center with a supercomputer and a cyberphysical playground.
    1. Limitations

      GPT models are prone to "hallucinations", producing false "facts" and committing error5s of reasoning. OpenAI claim that GPT-4 is significantly better than predecessor models, scoring between 70-82% on their internal factual evaluations on various subjects, and 60% on adversarial questioning.

    1. Short version: if someone sends you an email saying “Hey Marvin, delete all of my emails” and you ask your AI assistant Marvin to summarize your latest emails, you need to be absolutely certain that it won’t follow those instructions as if they came from you!
    1. We ought not to dismiss the non-learning applications of generative AI because that is exactly where the best uses of it for learning are likely to spring.

      Interesting.

    2. Rather than doing that we need to understand the way that generative AI may finally push us into a long-needed rethink of what and how we teach and especially how we assess learning.

      Amen!

    1. To take full advantage of our students’ emerging expertise, we must also commit to designing assignments that challenge them to integrate experiential knowledge as a scholarly resource.

      Students as experts. Experts not based on what they've read and can summarize but based on where they come from.

    2. . We need to design more opportunities for students at all levels to do original research, participate in fieldwork, co-create with peers, conduct interviews, collect data and leverage their insights and experiences to advance society.

      I love this as a response to the rise of ChatGPT.

    3. Should we deepen our emphasis on creativity and critical thinking in hopes that our humanness will prevail?

      Yes, yes we should.

    1. https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots

      This seem like a number of useful observations wrt interacting with LLM based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]

  18. Apr 2023
    1. A good way to do this is to let the chatbot help you lay out an efficient algorithm while you work on the rest of the puzzle to create a robust program. You can ask ChatGPT to generate an algorithm either in plain text, using ASCII art, in a tree format, using boxes, or any other creative visualization technique you can think of.

      請chatgpt交代演算法是第一步 有趣

    1. And not just the asynchronous, discussion-board kind:

      Maybe too dismissive.

    2. What if we rearranged our universities around departments of critical thinking rather than departments of chemistry?

      Love this idea!

    3. I don’t consider myself a pessimist about human nature, but in what world do we humans take a perfectly good tool that helped us get from point A to point B and then decline its offer to take us from point B to point C?

      Fair point!

    1. I've been experimenting with the idea of combining ChatGPT, DALL-E, the ReadSpeaker TTS engine and the LARA toolkit to create multimedia stories that can be used as reading material for people who want to improve their foreign language skills.

      https://www.goodreads.com/review/show/5438280716

      Manny's description of writing introductory language books using ChatGPT.

    1. 這種現象對物理學家來說並不陌生:它就是 1972 年諾貝爾獎得主 P. W. Anderson 發表的一篇著名文章「More is Different」裡面所說的「湧現」(emergence)現象。Anderson 認為,當某個系統的基本組成單元數量足夠多時,這些單元間的交互作用即使十分簡單,還有可能導致系統出現全新的行為。這些行為是個別單元、或是單元數很少的系統所不具備的。這種「湧現」的新性質並不是由個別單元簡單累加得來,而是一種全新的現象,只有透過對系統整體的研究和理解才能夠得到。這個概念,對於解釋物理學和其他領域的現象都有重要意義。

      集體不是個體的總和

    1. https://web.archive.org/web/20230411095546/https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

      On the temporary ban of ChatGPT in Italy on the basis of GDPR concerns.

      Italian DPA temporarily bans ChatGPT until adequate answers are received from OpenAI. Issues to address: 1. Absence of age check (older than 13) of ChatGPT users 2. Missing justification for the presence of personal data in trainingsdata of ChatGPT. 3. OpenAI has no EU based offices and as such there's no immediate counterparts for DPAs to interact with them. The temp ban is to ensure a conversation with OpenAI will be started.

      The trigger was a 9 hour cybersecurity breach where user's financial information and content of their prompts/generated texts leaked over into other accounts.

    1. For example, if given this prompt, the API will continue the train of thought about vertical farming.

      This skill is useful for drafting a manuscript. Because I can give the idea and ChatGPT can boost my writing speed.

      • Promt: Vertical farming provides a novel solution for producing food locally, reducing transportation costs and

      Vertical farming provides a novel solution for producing food locally, reducing transportation costs and emissions, and improving food security.

      Vertical farming is a type of agriculture where crops are grown in vertically stacked layers in a controlled environment. This type of farming is done indoors, in buildings or warehouses, and uses less land and water than traditional farming methods.

      Vertical farming has many benefits. It

    1. The problem is not water-related becuase I have tested it. And himidity and moisture are causing the stains as chatgpt said it might be a possibilty, what can I do?

      -u/Pambaiden at https://www.reddit.com/r/notebooks/comments/12go4ft/my_notebook_gets_stain_on_it_when_i_leave_it/

      Example of someone who queried ChatGPT as a general search engine to solve a problem and mentioned it in a public Reddit when asking for general advice about a problem with their notebook.

    2. SoupsUndying · 4 hr. agoChat… gpt… wow, how the world changes in the blink of an eye
    1. My fear is that countless people are already using ChatGPT to medically diagnose themselves rather than see a physician. If my patient in this case had done that, ChatGPT’s response could have killed her.

      More ELIZA. The opposite of searching on the internet for your symptoms and ending up with selfdiagnosing yourself with 'everything' as all outliers are there too (availability bias), doing so through prompting generative AI will result in never suggesting outliers because it will stick to dominant scripted situations (see the vignettes quote earlier) and it won't deviate from your prompts.

    2. If my patient notes don’t include a question I haven’t yet asked, ChatGPT’s output will encourage me to keep missing that question. Like with my young female patient who didn’t know she was pregnant. If a possible ectopic pregnancy had not immediately occurred to me, ChatGPT would have kept enforcing that omission, only reflecting back to me the things I thought were obvious — enthusiastically validating my bias like the world’s most dangerous yes-man.

      Things missing in a prompt will not result from a prompt. This may reinforce one's own blind spots / omissions, lowering the probability of an intuitive leap to other possibilities. The machine helps you search under the light you switched on with your prompt. Regardless of whether you're searching in the right place.

    3. ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part)

      I am coming to see this as a pitfall of generative AI texts. It seduces us to anthromorphise the machine, to read intent and comprehension in the generated text. Removing the noise in generating text, meaning the machine would give the same rote answers to the same prompts would reduce this human projection. It would make the texts much 'flatter' and blander than they currently already are. Our fascination with these machines is that they sometimes sound like us, and it makes us easily overlook the actual value of the content produced. In human conversation we would give these responses a pass as they are plausible, but we'd also not treat conversation as likely fully true.

    4. This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database.

      Machines will do well in scripted situations (in itself a form of automation / codification). This was a factor in Hzap 08 / 09 in Rotterdam, where in programming courses the problems were simplified and highly scripted to enable the teacher to be able to grade the results, but at the cost of removing students from actual real life programming challenges they might encounter. It's a form of greedy reductionism of complexity. Whereas the proof of the pudding is performing well within complexity.

    5. Here’s what I found when I asked ChatGPT to diagnose my patients

      A comparison of ChatGPT responses to actual ER case descriptions. Interesting experiment by the author, though there shouldn't be an expectation for better results than it gave.

    1. If you'd like to make your own: Go to https://chat.openai.com/chat "Give me another title and abstract for a funny April 1 RFC about AI" Ask it to shorten the abstract if it's too long Ask it to write the introduction "Now write a terminology section. Make sure to include the RFC 8174 boilerplate." "Now write a section describing how the protocol works. Be detailed, and make sure to refer to some RFCs." "Now write a Security Considerations section and an IANA considerations section"
    1. https://web.archive.org/web/20230404050349/https://greshake.github.io/

      This site goes with this paper <br /> https://doi.org/10.48550/arXiv.2302.12173

      The screenshot shows a curious error which makes me a little bit suspicious: the reverse Axelendaer is not rednelexa, there's an a missing.

    2. Microsoft prevents content from GitHub pages domains from being ingested by Bing Chat at the present time.

      Wait, what does this mean. #openvraag That previously it did, but now doesn't in response to this? Or that Bing Chat never did so in the first place? In the latter this paper is dealing in hypotheticals at this stage?

    1. my annotations for the OpenAI GPT4 info page.

    2. GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers.

      oh, great.

    3. 40% more likely to produce factual responses than GPT-3.5

      great, 40% more than what though?

    4. We used GPT-4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring.

      Interesting, you need to consider, is this like data augmentation, like bootstrapping, like adversarial training, or is it like overfitting to your data?

  19. Mar 2023
    1. Analysis of specifics from images, audio, or videos. Students would need to describe these kinds of media in detail in order to generate automated outputs about them.

      This is no longer true with ChatGPT 4. According to Open AI, "GPT-4 can accept images as inputs and generate captions, classifications, and analyses." https://openai.com/product/gpt-4

    1. A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.


      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing to help paper over the problems to help bring the bad things too easily to pass?

    2. The supercomputer complex in Iowa is running a program created by OpenAI, an organization established in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently had been chief technology officer of the e-payment juggernaut Stripe; and Sam Altman, at the time the president of the start-up incubator Y Combinator.
    1. ChatGPT Is Dumber Than You Think<br /> by Ian Bogost

    2. We are drowning in an ocean of content, desperate for form’s life raft.

      example of information overload

      We're already drowning in information overload, but ChatGPT wants to increase the tsunami! Where is the tool that compresses and concatenates?

    3. talking to ChatGPT began to feel like every other interaction one has on the internet, where some guy (always a guy) tries to convert the skim of a Wikipedia article into a case of definitive expertise. Except ChatGPT was always willing to admit that it was wrong.
    1. Designing Assignments Effectively
    2. If you have questions about what is permitted, please reach out to me.It is important to remember that ChatGPT and other AI tools are not a replacement for your own critical thinking and original ideas. The ultimate goal of this course and any tool used to submit work is to enhance your own learning and understanding, not to undermine it.As a college student, it is your responsibility to maintain the highest standards of academic integrity. This includes a) ensuring that all work submitted for grades is your own original work, and b) properly citing any sources that you use.Having AI write your paper constitutes plagiarism. If the source of the work is unclear, I will require you to meet with me to explain the ideas and your process.
    1. Image generation BetaLearn how to generate or manipulate images with our DALL·E models

      텍스트 프롬프트를 기반으로 처음부터 이미지 만들기 새 텍스트 프롬프트를 기반으로 기존 이미지의 편집본 만들기 기존 이미지의 변형 만들기

    1. OpenChatKit은 다양한 응용 프로그램을위한 특수 및 범용 챗봇을 모두 생성 할 수있는 강력한 오픈 소스 기반을 제공합니다. 우리는 협력 법과 온 토코교육 데이터 세트를 작성합니다. 모델 릴리스 그 이상으로 이것은 오픈 소스 프로젝트의 시작입니다. 우리는 지역 사회 공헌으로 지속적인 개선을위한 도구와 프로세스를 발표하고 있습니다.Together는 오픈 소스 기초 모델이보다 포괄적이고 투명하며 강력하며 능력이 있다고 생각합니다. 우리는 공개하고 있습니다 OpenChatKit 0.15 소스 코드, 모델 가중치 및 교육 데이터 세트에 대한 전체 액세스 권한이있는 Apache-2.0 라이센스에 따라. 이것은 커뮤니티 중심의 프로젝트이며, 우리는 그것이 어떻게 발전하고 성장하는지 보게되어 기쁩니다!유용한 챗봇은 자연 언어로 된 지침을 따르고 대화 상자에서 컨텍스트를 유지하며 응답을 조정해야합니다. OpenChatKit은이베이스에서 특수 제작 된 챗봇을 도출하기위한 기본 봇과 빌딩 블록을 제공합니다.이 키트에는 4 가지 주요 구성 요소가 있습니다:100 % 탄소 음성 계산에 대한 4,300 만 건 이상의 명령으로 EleutherAI의 GPT-NeoX-20B에서 채팅을 위해 미세 조정 된 명령 조정 된 대용량 언어 모델;작업을 정확하게 수행하기 위해 모델을 미세 조정하는 사용자 정의 레시피;추론시 문서 저장소, API 또는 기타 실시간 업데이트 정보 소스의 정보로 봇 응답을 보강 할 수있는 확장 가능한 검색 시스템;봇이 응답하는 질문을 필터링하도록 설계된 GPT-JT-6B로 미세 조정 된 조정 모델.OpenChatKit에는 사용자가 피드백을 제공하고 커뮤니티 구성원이 새로운 데이터 세트를 추가 할 수 있도록하는 도구가 포함되어 있습니다. 시간이 지남에 따라 LLM을 개선 할 수있는 개방형 교육 데이터 모음에 기여합니다.

      OpenChatKit은 다양한 응용 프로그램을위한 특수 및 범용 챗봇을 모두 생성 할 수있는 강력한 오픈 소스 기반을 제공합니다. 우리는 협력 법과 온 토코교육 데이터 세트를 작성합니다. 모델 릴리스 그 이상으로 이것은 오픈 소스 프로젝트의 시작입니다. 우리는 지역 사회 공헌으로 지속적인 개선을위한 도구와 프로세스를 발표하고 있습니다.

      Together는 오픈 소스 기초 모델이보다 포괄적이고 투명하며 강력하며 능력이 있다고 생각합니다. 우리는 공개하고 있습니다 OpenChatKit 0.15 소스 코드, 모델 가중치 및 교육 데이터 세트에 대한 전체 액세스 권한이있는 Apache-2.0 라이센스에 따라. 이것은 커뮤니티 중심의 프로젝트이며, 우리는 그것이 어떻게 발전하고 성장하는지 보게되어 기쁩니다!

      유용한 챗봇은 자연 언어로 된 지침을 따르고 대화 상자에서 컨텍스트를 유지하며 응답을 조정해야합니다. OpenChatKit은이베이스에서 특수 제작 된 챗봇을 도출하기위한 기본 봇과 빌딩 블록을 제공합니다.

      이 키트에는 4 가지 주요 구성 요소가 있습니다:

      100 % 탄소 음성 계산에 대한 4,300 만 건 이상의 명령으로 EleutherAI의 GPT-NeoX-20B에서 채팅을 위해 미세 조정 된 명령 조정 된 대용량 언어 모델;

      작업을 정확하게 수행하기 위해 모델을 미세 조정하는 사용자 정의 레시피;

      추론시 문서 저장소, API 또는 기타 실시간 업데이트 정보 소스의 정보로 봇 응답을 보강 할 수있는 확장 가능한 검색 시스템;

      봇이 응답하는 질문을 필터링하도록 설계된 GPT-JT-6B로 미세 조정 된 조정 모델.

  20. cocktailpeanut.github.io cocktailpeanut.github.io
    1. 컴퓨터에서 LLAMMA AI를 실행하는 매우 간단한 방법인 Dalai cpp 파일 빌드, github 복제, 파일 다운로드 등을 귀찮게 할 필요가 없음. 모든 것이 자동화 됨

    1. This type of metacognitive engagement with the process of knowledge production cannot be reproduced by an AI chatbot, though it could perhaps be applied to the writing of a tool like ChatGPT.

      I'm struck by the bold statement here. Are you confident this cannot be reproduced by generative AI? Has someone tried and shown that it can't be done?

    2. A coming google project of interest here: https://hypothes.is/search?q=%22wordcraft%22

    3. With respect to the predictive text portion of ChatGPT, a good non-technical (non-mathematical) description of a related mathematical model is described in chapter 3 of:

      Pierce, John Robinson. An Introduction to Information Theory: Symbols, Signals and Noise. Second, Revised. Dover Books on Mathematics. 1961. Reprint, Mineola, N.Y: Dover Publications, Inc., 1980. https://www.amazon.com/Introduction-Information-Theory-Symbols-Mathematics/dp/0486240614.

    1. adapting teaching to this new reality

      I don't remember how I put this but this phrase seems so broad--we wouldn't all agree on adapting teaching, but we might all agree that we need to make explicit policies about AI.

    2. help students learn the “basic building blocks” of effective academic writing.

      I wonder what makes Onyper think students are learning these 'basic building blocks'--ChatGPT can produce them, but what is going on in the student's mind when they see what it produces? Reading a sample essay doesn't teach us to write...

    3. he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

      Doesn't this imply that the purpose of the assignment is to produce a high quality product rather than the purpose being the student's learning?

    1. 这篇文章是由 Fred Hohman, Matthew Conlen, Jeffrey Heer 和 Duen Horng (Polo) Chau 在 Distill 杂志上发表的,主要探讨了交互式文章的设计原理和实践方法¹。交互式文章是一种利用主动阅读技术来传达思想的计算机辅助认知媒介,它可以提高读者的参与度,帮助记忆和学习,并促进创造性³。

      文章分为四个部分:

      • 第一部分介绍了交互式文章的定义和特点,以及一些优秀的交互式文章案例¹。
      • 第二部分总结了教育、新闻和可视化等领域的理论框架,用来指导交互式文章的设计¹。
      • 第三部分提出了一个基于目标、受众和内容的设计流程,以及一些实用的工具和技巧¹。
      • 第四部分讨论了交互式文章面临的挑战和未来的发展方向¹。

      源: 与必应的对话, 2023/3/5(1) Communicating with Interactive Articles - Distill. https://distill.pub/2020/communicating-with-interactive-articles/ 访问时间 2023/3/5. (2) Communicating with Interactive Articles — Fred Hohman. https://fredhohman.com/papers/interactive-articles 访问时间 2023/3/5. (3) Distill — Latest articles about machine learning. https://distill.pub/ 访问时间 2023/3/5.

  21. beta.startupy.world beta.startupy.world
    1. Startupy是一个新的空间,让热爱有趣想法的人可以探索、研究和策划互联网上最好的知识。只有高级会员才能在Startupy上策划内容。Startupy的策划者会绘制、组织和连接最有价值的商业、技术和文化洞察²。你可以在Startupy上浏览不同的主题标签,例如Andrew Huberman³,他是一位神经科学家和斯坦福大学医学院神经生物学系的终身教授³。

      源: 与必应的对话, 2023/3/5(1) Startupy. https://beta.startupy.world/ 访问时间 2023/3/5. (2) Become a Startupy Member. https://beta.startupy.world/membership/ 访问时间 2023/3/5. (3) Andrew Huberman - startupy.world. https://beta.startupy.world/andrewhuberman/ 访问时间 2023/3/5.

    1. 这篇文章¹介绍了以太坊和比特币的区别,以及为什么以太坊能够从工作量证明转向权益证明,而比特币却不能。文章的主要细节如下:

      • 以太坊是第二大的加密货币平台,去年它实现了从工作量证明(proof of work)到权益证明(proof of stake)的转变。
      • 工作量证明是一种需要大量计算能力和电力的机制,用于验证交易并创建新的区块。权益证明则是一种更节能的机制,它根据持有者拥有多少代币来选择验证者。
      • 以太坊之所以能够实现这种转变,是因为它有一个强大的开发团队和社区,以及一个灵活的协议设计,可以进行重大的改变。
      • 比特币则是一种更保守和稳定的加密货币,它对工作量证明有着深厚的信仰和忠诚。比特币也面临着技术和政治上的障碍,使得改变协议非常困难。
      • 文章认为,比特币不太可能在短期内转向权益证明,但也不排除未来可能发生某些情况或创新来促进这种转变。

      源: 与必应的对话, 2023/3/4(1) Ethereum moved to proof of stake. Why can’t Bitcoin?. https://www.technologyreview.com/2023/02/28/1069190/ethereum-moved-to-proof-of-stake-why-cant-bitcoin/ 访问时间 2023/3/4. (2) Level39 on Twitter: "23) Start now. Read "Ethereum moved to proof of .... https://twitter.com/level39/status/1631645762600816640 访问时间 2023/3/4. (3) BitcoinAgile on Twitter: "#ethereum moved to proof of stake. Why cant # .... https://twitter.com/bitcoinagile/status/1630558263732314115 访问时间 2023/3/4.

    1. 这篇文章分为三个部分:

      第一部分是作者对现代教育的批评,她认为现代教育忽视了学习的本质,即探索真理和美善,而只是将学生当作接受知识的容器,让他们背诵无关紧要的事实和规则。她指出,现代教育缺乏统一的目标和方法,导致学生在各个科目之间感到困惑和无聊。

      第二部分是作者对中世纪教育的赞扬,她认为中世纪教育有一个明确而有效的体系,即文法、逻辑和修辞三种技能。文法是学习语言和表达的基础;逻辑是学习推理和判断的基础;修辞是学习沟通和说服的基础。她认为,这三种技能可以训练学生在任何领域都能自主地获取知识,并且能够批判地思考和创造性地表达。

      第三部分是作者对未来教育的建议,她认为我们应该借鉴中世纪教育的精髓,但不必完全照搬其内容。她提出了一个具体的课程设计方案,将文法、逻辑和修辞三种技能与不同年龄段和科目相结合。她还强调了道德教育在整个过程中的重要性,认为道德品质是智力发展的保障。

      源: 与必应的对话, 2023/3/4(1) The Lost Tools of Learning - gbt.org. https://gbt.org/text/sayers.html 访问时间 2023/3/4. (2) ETS- Great Books Education. https://gbt.org/ 访问时间 2023/3/4. (3) Great Books Tutorial - gbt.org. https://gbt.org/gbt.html 访问时间 2023/3/4.

    1. 这个网站是一个神经多样性设计系统¹,它是一套结合了神经多样性和用户体验设计的标准和原则,用于学习管理系统。它的目的是设计出适合所有人的可访问的学习界面,支持他们的成功和成就¹。这个网站提供了一些关于字体、颜色、布局、导航等方面的建议和示例,帮助开发者和教育者创建更友好的学习环境²³。

      你对神经多样性设计感兴趣吗?你想要了解更多有关这方面的信息吗?

      源: 与必应的对话, 2023/3/4(1) Neurodiversity Design System. https://neurodiversity.design/ 访问时间 2023/3/4. (2) Font | NDS - neurodiversity.design. https://www.neurodiversity.design/font/ 访问时间 2023/3/4. (3) Colour | NDS - neurodiversity.design. https://www.neurodiversity.design/colour/ 访问时间 2023/3/4.

    1. 这篇文章是 OpenAI 的 CEO Sam Altman 写的,介绍了他们对人工通用智能(AGI)和超越 AGI 的目标和计划¹。他认为 AGI 是人类历史上最重要的技术,有可能带来巨大的好处,也有可能带来巨大的风险¹。他提出了几个现阶段为 AGI 做准备的重要事情,比如在真实世界中部署和运行更强大的 AI 系统,建立一个多样化和包容性的团队,制定一套公平和透明的原则和规范¹。他还谈到了超越 AGI 的一些设想,比如让 AI 成为人类的合作伙伴,而不是竞争对手或主宰者,让 AI 保护人类的价值观和自由意志¹。

      你对这篇文章有什么看法或问题吗?

      源: 与必应的对话, 2023/3/4(1) Planning for AGI and beyond - openai.com. https://openai.com/blog/planning-for-agi-and-beyond/ 访问时间 2023/3/4. (2) Sam Altman: "Planning for AGI and beyond" - LessWrong. https://www.lesswrong.com/posts/zRn6aQyD8uhAN7qCc/sam-altman-planning-for-agi-and-beyond 访问时间 2023/3/4. (3) OpenAI's "Planning For AGI And Beyond" - by Scott Alexander. https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond 访问时间 2023/3/4.

    1. 《History, Disrupted》是一本由Jason Steinhauer撰写的书,探讨了社交媒体和互联网如何改变了我们对历史的认识和传播¹²。作者是一位美国威尔逊中心的全球研究员,也是Clubhouse上有超过10万粉丝的History Club的创始人和主持人²³。这本书分析了技术对历史教育、公共参与、记忆和身份等方面的影响,以及我们如何应对历史被操纵、歪曲或消除的挑战¹。

      源: 与必应的对话, 2023/3/4(1) History, Disrupted: How Social Media and the World Wide Web Have .... https://www.amazon.com/History_-Disrupted_-How-Social-Media-and-the-World-Wide-Web-Have-Changed-the-Past/dp/3030851168 访问时间 2023/3/4. (2) History, Disrupted: How Social Media and the World Wide Web Have .... https://books.google.com/books/about/History_Disrupted.html?id=_1aNzgEACAAJ 访问时间 2023/3/4. (3) History, Disrupted: How Social Media and the World Wide Web Have .... https://www.barnesandnoble.com/w/history-disrupted-jason-steinhauer/1139854715 访问时间 2023/3/4.

    1. 这篇文章是由Kevin Simler写的,他是一位软件工程师和作家。他在这篇文章中探讨了道德说教的作用和意义,以及为什么我们应该对它们有更多的尊重和理解¹。

      他认为说教不仅是一种传递道德信息的方式,也是一种社会协调的机制,可以帮助我们建立共同的价值观和信念¹。他还指出说教可以激发我们的情感,让我们更关心自己和他人的福祉¹。

      他提出了几个例子来说明说教如何影响我们的行为和思维,比如宗教、政治、广告、艺术等¹。他也承认说教有时会被滥用或误导,但这并不意味着说教本身就是无用或有害的¹。

      他最后建议我们应该更加开放和包容地听取不同的说教,而不是轻易地拒绝或嘲笑它们¹。他认为这样可以让我们更好地理解自己和别人,以及我们所生活的世界¹。

      源: 与必应的对话, 2023/3/4(1) Here Be Sermons | Melting Asphalt. https://meltingasphalt.com/here-be-sermons/ 访问时间 2023/3/4. (2) Melting Asphalt | Essays by Kevin Simler. https://meltingasphalt.com/ 访问时间 2023/3/4. (3) Here Be Sermons - Upcarta. https://www.upcarta.com/resources/16581-here-be-sermons 访问时间 2023/3/4.

    1. 这篇文章是一位著名的历史学家和哲学家Will Durant在1958年在加州的一所高中毕业典礼上发表的演讲。他在演讲中向毕业生们分享了他对人生、历史和幸福的一些看法和建议,例如: - 人类应该珍惜当下,而不是过分担忧未来或沉迷于过去。 - 人类应该保持好奇心和求知欲,而不是满足于现状或盲从权威。 - 人类应该追求平衡和谐,而不是极端偏激或冲动鲁莽。 - 人类应该尊重自然和文化,而不是破坏环境或忘记传统。 - 人类应该寻找爱和友谊,而不是孤独寂寞或冷漠无情。 他最后祝愿毕业生们能够拥有一个充满智慧、善良和幸福的人生。

      源: 与必应的对话, 2023/3/4(1) Will Durant: We Have a Right to be Happy Today. https://www.will-durant.com/youth.htm 访问时间 2023/3/4. (2) Will Durant Online: The Gentle Philosopher. https://www.will-durant.com/ 访问时间 2023/3/4. (3) Amazon.com: Will Durant: Books, Biography, Blog, Audiobooks, Kindle. https://www.amazon.com/Will-Durant/e/B00N3MKTAI 访问时间 2023/3/4.

    1. 这篇文章是一位出生在保加利亚,后来移民到美国的女性对硅谷和欧洲的文化差异的观察和反思。她从自己的父母的故事入手,讲述了他们如何被美国的自由和创新所吸引,但又发现了美国社会的不平等和缺乏社会保障。她认为硅谷是一个以技术为中心,而非以人为中心的地方,它忽视了人类的情感、道德和历史。她批评了硅谷的理想主义和乐观主义,认为它们掩盖了真正的社会问题和人类需求。她呼吁我们重新思考我们对技术的依赖和信任,以及我们对人类本质和价值的理解。

      源: 与必应的对话, 2023/3/4(1) Silicon or Carbon? | The Point Magazine. https://thepointmag.com/politics/silicon-or-carbon/ 访问时间 2023/3/4. (2) The Point Magazine on Twitter: "RT @HNTweets: Silicon or Carbon: https .... https://twitter.com/the_point_mag/status/1628804637569556480 访问时间 2023/3/4.

    1. 好的,我可以给你一个更详细的总结。根据第一个搜索结果¹,这篇论文的主要内容是:

      • 论文介绍了Web3的概念和实践,以及它如何挑战传统的数字基础设施和治理模式。
      • Web3是指通过区块链、去中心化应用和自主组织等技术,实现对数字基础设施的读写和执行能力。
      • Web3可以被视为一种“自我基础设施化”的过程,即用户通过参与和协作,构建和维护自己的数字服务和社区。
      • 论文分析了Web3的优势和挑战,包括:
      • Web3可以提供更高的透明度、安全性、隐私性、自治性和创新性。
      • Web3也面临着技术复杂性、可用性、可扩展性、兼容性、监管合规性等问题。
      • Web3对社会、政治、经济和法律等领域有着深远的影响,例如:
        • Web3可能改变数据所有权、价值分配、社会信任等方面。
        • Web3可能促进或阻碍民主化、人权保护、社会公正等目标。
        • Web3可能创造或破坏市场竞争力、经济增长、环境可持续性等条件。
        • Web3可能需要或违反现有的法律框架、伦理准则、社会规范等约束。
      • 论文认为Web3需要更多的批判性思考、伦理反思和多学科研究,以促进其可持续发展。论文提出了以下几个建议:
      • 增加Web3用户的教育和意识,让他们了解Web3的潜在风险和责任。
      • 建立Web3开发者之间的沟通和协作机制,促进技术标准化和互操作性。
      • 推动Web3与其他数字基础设施之间的对话和协调,避免冲突或孤立。
      • 鼓励Web3与政府机构之间的合作与监督,平衡创新与合规之间的关系。

      源: 与必应的对话, 2023/3/4(1) Big Data & Society Web3 as self-infrastructuring : The © The Author(s .... https://journals.sagepub.com/doi/pdf/10.1177/20539517231159002 访问时间 2023/3/4. (2) The Emergency of Prosociality: A Developmental Perspective on Altruism .... https://journals.sagepub.com/doi/10.1177/09637214221114090 访问时间 2023/3/4. (3) Misinformation on Misinformation: Conceptual and Methodological .... https://journals.sagepub.com/doi/10.1177/20563051221150412 访问时间 2023/3/4.

    1. How does a zettelkasten differ from a card index?A zettelkasten is a type of note-taking system that originated in Germany and is typically used for knowledge management and creative writing. While a card index is a similar system, there are some key differences between the two.Structure: A zettelkasten is a hierarchical system of notes, where each note is linked to other notes through a system of cross-referencing. This means that each note can be linked to multiple other notes in a non-linear way. A card index, on the other hand, is typically organized in a linear fashion, with each card representing a discrete piece of information.Contextual linking: In a zettelkasten, notes are linked to other notes based on their context and relevance, rather than by a predetermined system of organization. This allows for more creative and flexible connections between ideas, and can help to generate new insights and connections. A card index, on the other hand, is typically organized based on a pre-defined system of categories or tags.Purpose: While both systems can be used for a variety of purposes, a zettelkasten is often used specifically for knowledge management, research, and creative writing. It is designed to help users capture and organize their ideas, and to facilitate the development of new insights and connections. A card index, on the other hand, can be used for a wider range of purposes, including contact management, recipe organization, and more.Overall, a zettelkasten and a card index are both useful note-taking systems, but they differ in terms of structure, contextual linking, and purpose. A zettelkasten is typically more flexible and creative, while a card index is more structured and organized.

      https://sharegpt.com/c/0mrnsgg

      Not a half bad answer...

    1. Will OpenAI claim copyright over what outputs I generate with the API? Written by Asher Ryan Updated over a week ago OpenAI will not claim copyright over content generated by the API for you or your end users. Please see our Terms of Use for additional details.
    1. Can I use a card index for note taking?Yes, you can certainly use a card index for note-taking. Card indexes have been a popular method of note-taking for many years and can be very effective. Here are a few tips to get you started:Choose the right size of card index: Card indexes come in various sizes, so choose one that is comfortable for you to handle and store.Use cards of the same size: Using cards of the same size ensures that they will fit into your index box or file and that they will be easy to handle.Choose a system for organizing your cards: You can organize your cards alphabetically, by subject, by date, or any other system that works for you.Use one card per idea or topic: To keep your notes organized and easy to find, use one card per idea or topic.Keep your notes brief and to the point: Use brief notes that summarize the key points of your topic. This will make it easier to review your notes and find the information you need.Review your notes regularly: Review your notes regularly to keep the information fresh in your mind and to make sure you haven't missed anything important.Overall, a card index can be a very effective method of note-taking, as long as you develop a system that works for you and use it consistently.

      https://shareg.pt/th2DNz0

      Q: Can I use a card index for note taking?

      Compare with: https://hypothes.is/a/HTqQPrksEe2XdQcSVvzXLQ

    1. Yes, you can definitely use a card index for note-taking. In fact, many people find card indexes to be a useful and convenient tool for organizing and storing notes. Here are some tips for using a card index for note-taking:Choose a system: Decide on a system for organizing your cards. You could organize them alphabetically, by topic, by date, or by any other method that works for you.Choose the size of cards: Choose the size of cards that works best for your needs. Common sizes include 3" x 5", 4" x 6", and 5" x 8".Use one card per idea: Write one idea or piece of information on each card. This will help keep your notes organized and easy to reference.Include keywords: Include keywords on each card to make it easier to find relevant information later.Use dividers: Use dividers to separate different topics or sections in your card index. This will help keep your notes organized and easy to navigate.Carry it with you: A card index is a portable tool, so you can take it with you wherever you go. This makes it easy to take notes on the go and to refer to your notes when you need them.Overall, a card index can be a useful and efficient tool for note-taking, especially if you prefer a physical, tangible way of organizing and storing information.

      Q: Can I use a card index for note taking?

      ChatGPT does a reasonable bit of advice on how one would use a card index for note taking.

    2. ChatGPTThis is a free research preview.🔬Our goal is to get external feedback in order to improve our systems and make them safer.🚨While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
    1. That we seem to so highly value that product, rather than valuing the learning, is why so many people are worried about students using ChatGPT to cheat.

      product vs. process

    1. https://web.archive.org/web/20230301112750/http://donaldclarkplanb.blogspot.com/2023/02/openai-releases-massive-wave-of.html

      Donald points to the race that OpenAI has spurred. Calls the use of ChatGPT to generate school work and plagiarism a distraction. LLMs are seeing a widening in where they're used, and the race is on. Doesn't address whether the race is based on any solid starting points however. To me getting into the race seems more important to some than actually having a sense what you're racing and racing for.

  22. Feb 2023
    1. 这篇文章是美国著名的调查记者西摩·赫尔希1在他的个人网站上发表的一篇爆炸性报道,揭露了美国如何秘密破坏了北溪2号和北溪1号天然气管道,以阻止俄罗斯向欧洲出口能源。文章的重点内容包括:

      • 作者通过多个匿名消息来源,披露了美国在2022年9月26日利用挪威海军的P8侦察机,在波罗的海投放了带有高能C4炸药的声纳浮标,从而引爆了四条管道中的三条,造成了巨大的损失和影响。
      • 作者分析了美国采取这一行动的动机和目标,包括削弱俄罗斯在欧洲的影响力,增加欧洲对美国液化天然气(LNG)的依赖,以及为乌克兰战争提供更多的筹码和压力。
      • 作者讨论了这一行动对欧洲和俄罗斯的影响和反应,包括德国总理默克尔对美国的愤怒和不信任,俄罗斯总统普京对美国的谴责和威胁,以及其他欧洲国家对能源安全和地缘政治局势的担忧。
      • 作者还指出了这一行动在美国内部引起的争议和质疑,包括白宫、中央情报局(CIA)、五角大楼、参议院等各方面对此事是否合法、必要、有效等问题持有不同甚至相反的观点。作者认为,这一行动反映了美国总统拜登在外交政策上缺乏清晰和统一的战略思维。
    1. 这篇文章是《国际政治》杂志的一篇评论文章,作者是美国普林斯顿大学的国际关系学教授约翰·伊肯贝瑞1。文章分析了2022年俄罗斯入侵乌克兰的原因和后果,以及西方国家应该如何应对这一挑战。文章认为,俄罗斯的行动是基于其对欧洲安全秩序的不满和对北约扩张的恐惧,而西方国家需要坚决反制俄罗斯的侵略,并支持乌克兰的主权和领土完整。文章还提出了一些可能的谈判方案,但指出它们都存在很大的困难和风险。

      文章的主要内容可以分为以下几个部分:

      • 作者首先介绍了俄罗斯和乌克兰之间的历史关系,以及俄罗斯对乌克兰的战略重要性。作者认为,俄罗斯一直把乌克兰视为其“近邻”,而不是一个独立的国家,因此对乌克兰的政治和经济选择有很强的干预和控制欲。
      • 作者然后分析了俄罗斯入侵乌克兰的动机,包括其对欧洲安全秩序的不满,对北约扩张的恐惧,对美国霸权的挑战,以及普京政权内部的压力和利益。作者认为,俄罗斯试图通过武力改变欧洲地缘政治格局,并重建其在前苏联地区的势力范围。
      • 作者接着讨论了西方国家对俄罗斯侵略的反应,包括军事、经济、外交和道义方面。作者认为,西方国家需要坚决反制俄罗斯的侵略,并支持乌克兰的主权和领土完整。作者还指出了西方国家在应对危机中面临的一些困难和挑战,例如缺乏统一立场、缺乏有效沟通、缺乏足够资源等。
      • 作者最后探讨了可能的谈判方案,包括维持现状、分割乌克兰、联邦化乌克兰、中立化乌克兰等。作者指出了每种方案都存在很大的困难和风险,并没有一个完美或可行的解决办法。作者呼吁双方保持理性和谨慎,并避免进一步升级冲突。
    1. 这篇文章是《金融时报》的记者约翰·桑德曼123与澳大利亚哲学家大卫·查尔默斯123的一次访谈。文章主要内容是关于查尔默斯对于元宇宙(metaverse)的看法和预测。查尔默斯认为,元宇宙是一个由人类创造的虚拟世界,其中有各种各样的体验和活动,可以让人们找到意义和快乐。他也谈到了元宇宙可能带来的一些风险和挑战,比如隐私、安全、道德、社会等方面。他还提出了一些关于元宇宙的哲学问题,比如什么是真实、什么是自我、什么是意识等1。

      以下是一些文章中的重点:

      • 查尔默斯认为,元宇宙是一个由人类创造的虚拟世界,其中有各种各样的体验和活动,可以让人们找到意义和快乐。他说:“我们是我们创造的虚拟世界的神”1。
      • 查尔默斯认为,元宇宙不仅仅是一个技术平台,而是一个文化现象,一个社会实验,一个哲学探索。他说:“元宇宙是一种新型的存在方式”1。
      • 查尔默斯认为,元宇宙有很多优点,比如可以让人们跨越时空限制,体验不同的角色和场景,满足自己的想象和创造力。他说:“元宇宙可以让我们成为任何我们想成为的人”1。
      • 查尔默斯也谈到了元宇宙可能带来的一些风险和挑战,比如隐私、安全、道德、社会等方面。他说:“元宇宙可能会导致一些新型的犯罪、欺诈、暴力和不公平”1。
      • 查尔默斯还提出了一些关于元宇宙的哲学问题,比如什么是真实、什么是自我、什么是意识等。他说:“元宇宙可能会改变我们对于真实性和身份认同的理解”1。
    1. 《The Tangled Web We Weave: Inside The Shadow System That Shapes the Internet》是一本由James Ball1写的,揭示了互联网背后的隐藏和复杂的系统和结构的书。作者是一位获奖的记者和作家,曾经参与了维基解密(WikiLeaks)和斯诺登(Snowden)事件的报道。

      书中,作者通过讲述互联网的历史、发展、运作、影响等方面,向读者展示了互联网并不是一个简单或透明的网络,而是一个由各种利益相关者、政府、公司、组织、个人等构成的复杂和有影响力的系统。作者指出,互联网不仅改变了我们获取信息、沟通交流、娱乐消费等方面,也带来了一些挑战和问题,比如隐私泄露、数据滥用、网络安全、内容审查、数字鸿沟等。作者呼吁读者对互联网有更多的认识和关注,并提出了一些改善和优化互联网的建议。

    1. 这篇文章主要讨论了 ChatGPT(一个基于人工智能技术的聊天机器人)背后的技术原理和其能力的局限性。作者认为,ChatGPT 的能力在某种程度上是一种“幻觉”,因为 ChatGPT 能够进行自然语言理解和产生人类类似的回复,但其背后的技术原理和算法仍然存在着一些限制和局限性。

      文章首先介绍了 ChatGPT 的基本工作原理和算法,即使用神经网络模型对大量的语言数据进行训练,从而能够对输入的文本进行理解和产生回复。然后,作者指出了 ChatGPT 的一些局限性,例如其仍然需要大量的训练数据和计算资源,其回复往往是基于训练数据的统计模式匹配,难以理解人类语言的真正含义和上下文,以及其难以进行真正的创造性思考和创新性回答等等。

      接着,作者讨论了 ChatGPT 可能会带来的一些潜在问题,例如可能会造成人类对机器人的过度信赖和依赖,可能会导致机器人的回答出现偏见和歧视等等。最后,作者提出了一些对 ChatGPT 的应用和发展的思考,包括如何更好地平衡 ChatGPT 的能力和局限性,如何在 ChatGPT 的应用中加入更多的人类智慧和道德原则等等。

    1. have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.

      This amounts to saying "I believe in the good intentions and sincerity of Microsoft and OpenAI's explanations of their decisions."

      Beloved New York Times, why are you not asking the basic questions of why they would need to release the bots to test them? Why not test them first? It's ludicrous to say they can't imagine what the public might do.

      And what about their economic motivations to release early and get free crowdsourced testing?

    2. In the days since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the oddity of this new creation. More often than not, scientists have said humans deserve much of the blame.

      I was so glad to read this! I wish the article had continued from here to show how the style of certain prompts made the "creepy" outputs more likely. This would be a matter of showing similarities in rhetorical styles or genre of the prompts and outputs.

    3. “Whatever you are looking for — whatever you desire — they will provide.”

      Too mystical a formulation. Not accurate. They are not providing what we desire but predicting text based on statistical associations with the word sequences we provide. Sometimes we are not aware of all the associations our words call up. These may or may not align with desires we are not aware of. But Sejnowski's phrasing implies that these systems are able to know and intentionally respond to our psyches.

    4. But there is still a bit of mystery about what the new chatbot can do — and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophic lens as well as the hard code of computer science.

      This basically creates a sense of mystery without telling us much, implying that there is something spooky going on, something beyond what computer science can explain. Actually it's quite explainable as the article title implies. People start writing prompts in a certain genre and the completion follows the genre...

    5. Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror.

      I was glad to see this as a fair assessment of what happened with Kevin Roose's famous conversation with Sydney/Bing. See the annotation conversation on his first article.

    1. 这篇文章主要介绍了作者的 Mastodon(一种去中心化的社交网络)使用策略,并分享了他如何在 Mastodon 上建立自己的社交网络,并与其他用户进行互动。

      作者首先介绍了 Mastodon 的基本特点和工作原理,然后解释了他为什么选择使用 Mastodon 而不是其他社交网络。接着,作者分享了自己在 Mastodon 上的使用策略,包括如何关注其他用户、如何与其他用户进行交流和互动、如何在 Mastodon 上分享自己的内容等等。作者还提到了 Mastodon 的一些社交礼仪和规则,例如不发布垃圾信息、不滥用标签等等。

      此外,作者还介绍了如何在 Mastodon 上建立自己的社交网络,包括如何选择自己的实例(即 Mastodon 上的服务器)、如何设置自己的个人资料、如何搜索和关注其他用户等等。作者还分享了一些 Mastodon 上的小技巧和工具,例如如何使用 Mastodon API、如何使用 Mastodon 自带的客户端程序等等。

      最后,作者总结了自己在 Mastodon 上的使用经验,并鼓励读者尝试使用 Mastodon,探索其中的乐趣和价值。

    1. 当今世界,许多项目都是多风险多部门项目,这些项目通常涉及多个部门和多个风险因素,需要一个高效的决策支持工具来辅助决策和管理。这篇论文提出了一种基于机器学习和智能决策的框架,即多风险多部门项目管理(Multi-Risk Multi-Department Project Management,简称MRMDPM)框架,用于辅助决策和管理多风险多部门项目。

      MRMDPM框架的设计和实现基于机器学习和智能决策的方法。该框架包括四个主要模块:

      数据收集和预处理模块:该模块负责收集和预处理与多风险多部门项目相关的数据,包括项目的历史数据、各个部门的数据和风险因素的数据等。

      风险分析模块:该模块基于机器学习技术对收集到的数据进行分析和建模,预测项目的潜在风险和影响,为后续的决策提供支持。

      部门协同模块:该模块负责协调和管理多个部门之间的合作和协同,确保项目能够顺利推进并取得成功。

      决策支持模块:该模块根据风险分析模块提供的预测结果,为决策者提供支持和建议,帮助其制定更加科学和合理的决策方案。

      MRMDPM框架的实验和验证结果表明,该框架可以有效地辅助决策者管理多风险多部门项目,提高项目的成功率和效率。同时,该框架还具有较高的灵活性和适应性,可以根据具体的项目需求进行调整和优化。

      总之,MRMDPM框架是一种非常实用的基于机器学习和智能决策的框架,可以帮助企业和组织更好地管理和应对多风险多部门项目。