36 Matching Annotations
  1. Nov 2023
    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence.<br><br>Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
    1. [[Rolf Kleef]] [[Aldo de Moor]]

      OpenAI's "Democratic Inputs to AI" call: 10 projects, $100k each, 3 months

      [[Rolf Aldo Common Ground AI consensus]] #2023/11/01

  2. Oct 2023
  3. Sep 2023
  4. Aug 2023
  5. Jun 2023
    1. Assistant messages store previous assistant responses, but can also be written by a developer to give examples of desired behavior.

      So the assistant is the role that answers questions?

    2. The user messages provide requests or comments for the assistant to respond to

      And the user is the role that asks questions?

    3. "You are a helpful assistant."

      Default system message.

    4. The system message helps set the behavior of the assistant.

      How to set up a good system message?
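The roles quoted above (system, user, assistant) can be sketched as a message list in the format used by OpenAI's chat API. This is a minimal illustration only: the example questions and answers are invented, and no API call is made here.

```python
# A minimal sketch of the three chat roles described above, using the
# message format from OpenAI's chat API. Example content is invented.

def build_messages(question):
    """Assemble a chat request: the system message sets behavior, a
    developer-written user/assistant pair seeds an example of the
    desired style, and the final user message carries the request."""
    return [
        # System message: sets the assistant's behavior
        # (the default mentioned in the docs).
        {"role": "system", "content": "You are a helpful assistant."},
        # Developer-written example turn: a user request plus the
        # assistant reply demonstrating the desired behavior.
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
        # The actual user message the assistant should respond to.
        {"role": "user", "content": question},
    ]

print(build_messages("What is the capital of Spain?"))
```

In a real request this list would be passed as the `messages` parameter of a chat completion call, along with a model name and API key.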

  6. Apr 2023
  7. Mar 2023
    1. A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.

      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and letting the bad outcomes come to pass too easily?

    2. In June 2021, OpenAI published a paper offering a new technique for battling toxicity in GPT-3’s responses, calling it PALMS, short for “process for adapting language models to society.” PALMS involves an extra layer of human intervention, defining a set of general topics that might be vulnerable to GPT-3’s being led astray by the raw training data: questions about sexual abuse, for instance, or Nazism.
    3. “I think it lets us be more thoughtful and more deliberate about safety issues,” Altman says. “Part of our strategy is: Gradual change in the world is better than sudden change.”

      What are the long term effects of fast breaking changes and gradual changes for evolved entities?

    4. The supercomputer complex in Iowa is running a program created by OpenAI, an organization established in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently had been chief technology officer of the e-payment juggernaut Stripe; and Sam Altman, at the time the president of the start-up incubator Y Combinator.
    1. Still, we can look for telltale signs. Another symptom of memorization is that GPT is highly sensitive to the phrasing of the question. Melanie Mitchell gives an example of an MBA test question where changing some details in a way that wouldn’t fool a person is enough to fool ChatGPT (running GPT-3.5). A more elaborate experiment along these lines would be valuable.

      ChatGPT has memorised MBA test questions: when these are rephrased or certain details are changed, the system fails to answer them.

    2. In fact, we can definitively show that it has memorized problems in its training set: when prompted with the title of a Codeforces problem, GPT-4 includes a link to the exact contest where the problem appears (and the round number is almost correct: it is off by one). Note that GPT-4 cannot access the Internet, so memorization is the only explanation.

      GPT-4 knows the link to the coding contest it was evaluated against, but it doesn't have internet access, so it appears to have memorised this as well.

    3. To benchmark GPT-4’s coding ability, OpenAI evaluated it on problems from Codeforces, a website that hosts coding competitions. Surprisingly, Horace He pointed out that GPT-4 solved 10/10 pre-2021 problems and 0/10 recent problems in the easy category. The training data cutoff for GPT-4 is September 2021. This strongly suggests that the model is able to memorize solutions from its training set — or at least partly memorize them, enough that it can fill in what it can’t recall.

      GPT-4 was only able to solve problems available before September 2021 and failed on newer ones, strongly suggesting that it has simply memorised the answers as part of its training.

    1. https://openai.com/product/dall-e-2

      DALL·E 2 is an AI system that can create realistic images and art from a description in natural language.

    1. Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.

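The quote above describes Whisper's multi-task design (transcription, translation to English, language identification). As a hedged sketch, assuming the open-source `openai-whisper` package, the tasks are selected through keyword arguments to `transcribe()`; the helper below only builds those options, and the actual model call is shown in comments because it needs audio and model weights.

```python
# Hedged sketch of driving Whisper's tasks via keyword arguments,
# assuming the open-source `openai-whisper` package. The audio file
# name below is a placeholder.

def transcribe_options(task="transcribe", language=None):
    """Build keyword arguments for model.transcribe().

    task="transcribe" -> speech to text in the spoken language
    task="translate"  -> speech in any language to English text
    language=None     -> let the model identify the language itself
    """
    if task not in ("transcribe", "translate"):
        raise ValueError("Whisper supports only 'transcribe' and 'translate'")
    return {"task": task, "language": language}

# Usage against the real package (not run here):
#   import whisper
#   model = whisper.load_model("base")
#   result = model.transcribe("speech.mp3", **transcribe_options("translate"))
#   result["language"] then holds the identified source language.
print(transcribe_options("translate"))
```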

    1. Image generation (Beta): Learn how to generate or manipulate images with our DALL·E models

      Create images from scratch based on a text prompt; create an edited version of an existing image based on a new text prompt; create variations of an existing image.
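The note above lists the three DALL·E operations (generation, edit, variation). As a sketch only, the helpers below assemble plain-dict request payloads against the corresponding OpenAI Images API endpoints; parameter names follow that API, but no HTTP request is made and the file paths are placeholders.

```python
# Hypothetical sketch of the three DALL·E operations as request payloads.
# Endpoint paths follow the OpenAI Images API; no request is sent here.

def generation_request(prompt, n=1, size="1024x1024"):
    # Create an image from scratch from a text prompt.
    return {"endpoint": "/v1/images/generations",
            "payload": {"prompt": prompt, "n": n, "size": size}}

def edit_request(image_path, prompt, mask_path=None):
    # Edit an existing image according to a new text prompt;
    # the optional mask marks the region to repaint.
    return {"endpoint": "/v1/images/edits",
            "payload": {"image": image_path, "mask": mask_path,
                        "prompt": prompt}}

def variation_request(image_path, n=2):
    # Produce variations of an existing image (no prompt needed).
    return {"endpoint": "/v1/images/variations",
            "payload": {"image": image_path, "n": n}}

print(generation_request("a white siamese cat")["endpoint"])
```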

  8. Feb 2023
  9. Dec 2022
    1. OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop "friendly" AI in a way that "benefits humanity as a whole". Elon Musk, Peter Thiel and other leading tech figures pledged US$1 billion towards its goals. Their thinking was we couldn't trust for-profit companies to develop increasingly capable AI that aligned with humanity's prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way. In 2019 OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.

      Origins of OpenAI

      First a non-profit, started with funding from Musk, Thiel, and others. It has since transitioned to a "capped for-profit company".

  10. Oct 2022
    1. This phenomenon is characteristic of modern ML models, where an active community creates many new versions based on an original ML model that may enable greater use for different user groups. Each version may have its own license, though some model developers are now requiring all downstream models (derived models from the original model) to at least have the same use restrictions as included in the original license.

      Share-alike applied to use restrictions, assuming a proliferation of different restriction sets.

  11. Jan 2022
    1. He said the new AI tutor platform collects “competency skills graphs” made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interests.

      I saw an AI Tutor demo at ASU+GSV in 2021 and it was still early stage. Today, the features highlighted here are yet to be manifested in powerful ways that are worth utilizing; however, I do believe the aspirations are likely to be realized, and in ways beyond what the product managers are even hyping. (For example, I suspect AI Tutor will one day be able to provide students feedback in the voice/tone of their specific instructor.)

  12. Jun 2021
  13. Feb 2021
    1. OpenAI and other researchers have released a few tools capable of identifying AI-generated text. These use similar AI algorithms to spot telltale signs in the text. It’s not clear if anyone is using these to protect online commenting platforms. Facebook declined to say if it is using such tools; Google and Twitter did not respond to requests for comment.


    2. OpenAI released a more capable version of its text-generation program, called GPT-3, last June. So far, it has only been made available to a few AI researchers and companies, with some people building useful applications such as programs that generate email messages from bullet points. When GPT-3 was released, OpenAI said in a research paper that it had not seen signs of GPT-2 being used maliciously, even though it had been aware of Weiss’s research.


  14. Jul 2020