27 Matching Annotations
  1. May 2023
    1. strengths and weaknesses

      Yes, getting them questioning the output is step one. We have to rediscover critical thinking in the Era of Text Generation.

  2. Mar 2023
    1. adapting teaching to this new reality

      I don't remember how I put this, but the phrase seems so broad--we wouldn't all agree on how to adapt teaching, but we might all agree that we need to make explicit policies about AI.

    2. help students learn the “basic building blocks” of effective academic writing.

      I wonder what makes Onyper think students are learning these 'basic building blocks'--ChatGPT can produce them, but what is going on in the student's mind when they see what it produces? Reading a sample essay doesn't teach us to write...

    3. he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

      Doesn't this imply that the purpose of the assignment is to produce a high quality product rather than the purpose being the student's learning?

  3. Feb 2023
    1. A calculator performs calculations; ChatGPT guesses. The difference is important.

      Thank you! So beautifully and simply put. ChatGPT is also used mostly for tasks where there is no one clear right answer.

    2. However, the article does not take a clear stance on the matter and does not offer a conclusion on whether the capitalization of the word "black" is a good or bad thing.

      This implies that the article should take a stand but doesn't say why. Note that the New York Times article is not an editorial.

      This summary also misses the stand implied by the choice to end on a quote from scholar Crystal Fleming explaining why she capitalizes "Black":

      "'Frankly,” she said, “because I want to. That’s also something that I think we need to see more of in every field — black people doing what we want to do.'"

    3. long overdue

      Why? What value do these people see in the capitalization? This summary ignores the idea of power, which is central in the article. Eligon writes that capitalizing "Black" "confers a sense of power and respect to black people, who have often been relegated to the lowest rungs of society through racist systems, black scholars say."

    4. Some scholars believe that the capitalization of the word "Black" lumps people of African descent into a monolithic group and erases their diversity, while others believe that writing "Black" but not "White" is giving white people a pass on recognizing their racial privilege.

      Here is common academic phrasing to contrast ideas: "some scholars argue...while others believe that..."

      However, it's not a great choice to use this contrast phrasing for the particular ideas described here. The previous sentence used "while" to contrast one view in favor of "Black" with a view against it. But the two parts of this particular sentence are not opposing views. The first idea is against capitalization, whereas the second idea doesn't take a stance on "Black" vs. "black" but rather is weighing in on what we should do about terminology for whiteness if we do capitalize "Black."

      This is an example of how ChatGPT uses academic phrasing in ways that read as natural and even sound intelligent, but on closer examination remind us that ChatGPT has no understanding of the ideas.

    5. John Eligon, a New York Times columnist, writes about the debate surrounding the capitalization of the word "black" to refer to African-Americans. The move to capitalize the word has gained widespread acceptance in recent weeks and has sparked a debate over identity, race, and power. Many news organizations have changed their style to capitalize the word, but not all have embraced it, including The New York Times and The Washington Post, who are still grappling with the decision. The capitalization of black is part of a long-standing struggle to refer to people of African descent in the most respectful and accurate way.

      Here's a sample ChatGPT critical assessment of the NY Times article at https://www.nytimes.com/2020/06/26/us/black-african-american-style-debate.html

      For contrast, see this human-written sample essay from the textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College: https://human.libretexts.org/Bookshelves/Composition/Advanced_Composition/Book%3A_How_Arguments_Work_-A_Guide_to_Writing_and_Analyzing_Texts_in_College(Mills)/04%3A_Assessing_the_Strength_of_an_Argument/4.11%3A_Sample_Assessment_Essays/4.11.02%3A_Sample_Assessment-_Typography_and_Identity

  4. platform.openai.com
    1. upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)

      I'm concerned people will see this and remember it without thinking of all the errors that are described later on in this document.

    2. ChatGPT use in Bibtex format as shown below:

      Glad they are addressing this, and I hope they will continue to offer such suggestions. I don't think ChatGPT should be classed as a journal. We really need a new way to acknowledge its use that doesn't imply that it was written with intention or that a person stands behind what it says.
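
      For reference, here is a minimal sketch of what a BibTeX entry along these lines might look like. The entry type, key, and field values are my own illustration, not necessarily OpenAI's exact suggestion:

      @comment{Illustrative entry only; the key and field values are assumptions,
               not OpenAI's exact wording. The \url command assumes the LaTeX
               document loads the url or hyperref package.}
      @misc{chatgpt2023,
        author       = {{OpenAI}},
        title        = {ChatGPT (Large language model)},
        year         = {2023},
        howpublished = {\url{https://chat.openai.com/chat}}
      }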

    3. will continue to broaden as we learn.

      Since there is a concern about the bias of the tool toward English and developed nations, it would be great if they could include global educators from the start.

    4. As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).

      I wonder how this information will be shared back so that other educators can benefit from it. I maintain a resource list for educators at https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/

    5. one factor out of many when used as a part of an investigation determining a piece of content’s source and making a holistic assessment of academic dishonesty or plagiarism.

      It's still not clear to me how they can be used as evidence of academic dishonesty at all, even in combination with other factors, when they have so many false positives and false negatives. I can see them being used to initiate a conversation with a student and possibly a rewrite of a paper. This is tricky.

    6. Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

      I agree, though I think we should emphasize teaching about the limitations before teaching how to use the models. Critical AI literacy must become part of digital literacy.

    7. Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.

      Glad they mention this since I am skeptical of claims that students need to learn prompt engineering. The rhetorical skills I use to prompt ChatGPT are mainly learned by writing and editing without it.

    8. While tools like ChatGPT can often generate answers that sound reasonable, they can not be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't or by misrepresenting the relative strength of two opposing arguments.

      If we teach about ChatGPT, we might do well to showcase examples of these kinds of problems in output so that students develop an eye for them and an intuitive understanding that the model isn't thinking or reasoning or checking what it says.

    9. While the model may appear to give confident and reasonable sounding answers,

      This is a bigger problem when we use ChatGPT in education than in other arenas because students are coming in without expertise, seeking to learn from experts. They are especially susceptible to considering plausible ChatGPT outputs to be authoritative.

    10. Web browsing capabilities and improving factual accuracy are an open research area that you can learn more in our blog post on WebGPT.

      Try PerplexityAI for an example of this. Google's Bard should be another example when released.

    11. subtle ways.

      Glad they mention this in the first line. People will see the various safeguards and assume that ChatGPT is safe because work has been done on this, but there are so many ways these biases can still surface, and since they are baked into the training data, there's not much prospect of eliminating them.

    12. Verifying AI recommendations often requires a high degree of expertise,

      This is a central idea that I wish were foregrounded. If we are trying to use auto-generated text in a situation in which truth matters, we need to be quite knowledgeable and also invest time in evaluating what that text says. Sometimes that takes more time than writing something ourselves.

    13. students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.

      It strikes me that OpenAI itself is warning of a coming flood of misinformation from language models. I'm glad they are doing so, and I hope they keep investing in improving their AI text classifier so we have some ways to distinguish human writing from machine-generated text.

    14. Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.

      Yes! We must begin to cultivate an ethic of transparency around synthetic text. We can admit to students that we might sometimes be tempted to autogenerate a document without acknowledging the role of ChatGPT (I have certainly felt this temptation).

    15. export their ChatGPT use and share it with educators. Currently students can do this with third-party browser extensions.

      This would be wonderful. Currently we can use the ShareGPT extension for this.

    16. they and their educators should understand the limitations of the tools outlined below.

      I appreciate these cautions, but I'm still concerned that by foregrounding the bulleted list of enticing possibilities, this document will mainly have the effect of encouraging experimentation with only lip service to the cautions.

    17. custom tutoring tools

      I'm concerned that any use of ChatGPT for tutoring would fall under the "overreliance" category as defined below. Students who need tutoring do not usually have the expertise or the time to critically assess or double check everything the tutor tells them. ChatGPT already comes off as more authoritative than it is. It will come across as even more authoritative if teachers are recommending it as a tutor.

  5. Jan 2023