139 Matching Annotations
  1. Oct 2023
    1. What is easier? Come up with good slogans out of nowhere, or come up with good slogans after getting a list of striking details?

      Of course this is the basis of keeping a zettelkasten for writing as well. When you can pull up prior ideas as a bank of information to work from, you're never starting from scratch, which is difficult not only for ChatGPT, but for people in general.

      Cross-reference research on naming "white" things versus naming with a more specific prompt, like "white" things in your refrigerator.

  2. Sep 2023
    1. What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book—a commentary or textbook.)

      This may be an interesting use case for artificial intelligence tools like ChatGPT which can provide the reader of complex material with simplified synopses to allow better penetration of the material (potentially by removing jargon, argot, etc.)

  3. Aug 2023
  4. Jun 2023
    1. tech inevitability standpoint

      I can see correlations between this idea and the "proximate future" idea presented in this article: https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

  5. May 2023
  6. Mar 2023
    1. help students learn the “basic building blocks” of effective academic writing.

      I wonder what makes Onyper think students are learning these 'basic building blocks'--ChatGPT can produce them, but what is going on in the student's mind when they see what it produces? Reading a sample essay doesn't teach us to write...

    2. he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

      Doesn't this imply that the purpose of the assignment is to produce a high quality product rather than the purpose being the student's learning?

  7. Feb 2023
    1. We should be familiarizing ourselves with, and nurturing, our student’s writing styles and lines of inquiry.

      I've seen some pushback on this idea in conversations on Twitter and elsewhere. I've heard some instructors say they don't necessarily have the bandwidth for this kind of intimate pedagogy.

      I'm sympathetic to that challenge--MANY teachers are overworked and overwhelmed--but I still don't think backing off of humanizing education is the right approach. I'd rather focus systematically on freeing up teachers to use this approach.

    2. It’s hard to avoid concerns about plagiarism with the rise of ChatGPT.

      I really struggled with whether to mention plagiarism at all in this post. I didn't want to add to the hype around "cheating students" and the surveillance side of edtech that has profited off it. But I wouldn't be honest if I didn't acknowledge that "plagiarism" is something mentioned by many of the frontline teachers I work with daily.

    1. https://www.cyberneticforests.com/ai-images

      Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.

    1. However, the article does not take a clear stance on the matter and does not offer a conclusion on whether the capitalization of the word "black" is a good or bad thing.

      This implies that it should take a stand but doesn't say why. Note that the New York Times article is not an editorial.

      This summary also misses the stand implied by the choice to end on a quote from scholar Crystal Fleming explaining why she capitalizes "Black":

      "Frankly," she said, "because I want to. That's also something that I think we need to see more of in every field — black people doing what we want to do."

    2. long overdue

      Why? What value do these people see in the capitalization? This summary ignores the idea of power, which is central in the article. Eligon writes that capitalizing "Black" "confers a sense of power and respect to black people, who have often been relegated to the lowest rungs of society through racist systems, black scholars say."

    3. Some scholars believe that the capitalization of the word "Black" lumps people of African descent into a monolithic group and erases their diversity, while others believe that writing "Black" but not "White" is giving white people a pass on recognizing their racial privilege.

      Here is common academic phrasing to contrast ideas: "some scholars argue...while others believe that..."

      However, it's not a great choice to use this contrast phrasing for the particular ideas described here. The previous sentence used "while" to contrast one view in favor of "Black" with a view against it. But the two parts of this particular sentence are not opposing views. The first idea is against capitalization, whereas the second idea doesn't take a stance on "Black" vs. "black" but rather is weighing in on what we should do about terminology for whiteness if we do capitalize "Black."

      This is an example of how ChatGPT uses academic phrasing in ways that read as natural and even sound intelligent, but on closer examination remind us that ChatGPT has no understanding of the ideas.

    4. John Eligon, a New York Times columnist, writes about the debate surrounding the capitalization of the word "black" to refer to African-Americans. The move to capitalize the word has gained widespread acceptance in recent weeks and has sparked a debate over identity, race, and power. Many news organizations have changed their style to capitalize the word, but not all have embraced it, including The New York Times and The Washington Post, who are still grappling with the decision. The capitalization of black is part of a long-standing struggle to refer to people of African descent in the most respectful and accurate way.

      Here's a sample ChatGPT critical assessment of the NY Times article at https://www.nytimes.com/2020/06/26/us/black-african-american-style-debate.html

      For contrast, see this human-written sample essay from the textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College: https://human.libretexts.org/Bookshelves/Composition/Advanced_Composition/Book%3A_How_Arguments_Work_-A_Guide_to_Writing_and_Analyzing_Texts_in_College(Mills)/04%3A_Assessing_the_Strength_of_an_Argument/4.11%3A_Sample_Assessment_Essays/4.11.02%3A_Sample_Assessment-_Typography_and_Identity

  8. platform.openai.com platform.openai.com
    1. ChatGPT use in Bibtex format as shown below:

      Glad they are addressing this, and I hope they will continue to offer such suggestions. I don't think ChatGPT should be classed as a journal. We really need a new way to acknowledge its use that doesn't imply that it was written with intention or that a person stands behind what it says.

    2. As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).

      I wonder how this information will be shared back so that other educators can benefit from it. I maintain a resource list for educators at https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/

    3. one factor out of many when used as a part of an investigation determining a piece of content’s source and making a holistic assessment of academic dishonesty or plagiarism.

      It's still not clear to me how these tools can be used as evidence of academic dishonesty at all, even in combination with other factors, when they have so many false positives and false negatives. I can see them used to initiate a conversation with a student and possibly a rewrite of a paper. This is tricky.

    4. Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

      I agree, though I think we should emphasize teaching about the limitations before teaching how to use the models. Critical AI literacy must become part of digital literacy.

    5. Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.

      Glad they mention this since I am skeptical of claims that students need to learn prompt engineering. The rhetorical skills I use to prompt ChatGPT are mainly learned by writing and editing without it.

    6. While tools like ChatGPT can often generate answers that sound reasonable, they can not be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't or by misrepresenting the relative strength of two opposing arguments.

      If we teach about ChatGPT, we might do well to showcase examples of these kinds of problems in output so that students develop an eye for them and an intuitive understanding that the model isn't thinking or reasoning or checking what it says.

    7. While the model may appear to give confident and reasonable sounding answers,

      This is a bigger problem when we use ChatGPT in education than in other arenas because students are coming in without expertise, seeking to learn from experts. They are especially susceptible to considering plausible ChatGPT outputs to be authoritative.

    8. subtle ways.

      Glad they mention this in the first line. People will see the various safeguards and assume that ChatGPT is safe because work has been done on this, but there are so many ways these biases can still surface, and since they are baked into the training data, there's not much prospect of eliminating them.

    9. Verifying AI recommendations often requires a high degree of expertise,

      This is a central idea that I wish were foregrounded. If we are trying to use auto-generated text in a situation in which truth matters, we need to be quite knowledgeable and also invest time in evaluating what that text says. Sometimes that takes more time than writing something ourselves.

    10. students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.

      It strikes me that OpenAI itself is warning of a coming flood of misinformation from language models. I'm glad they are doing so, and I hope they keep investing in improving their AI text classifier so we have some ways to distinguish human writing from machine-generated text.

    11. Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.

      Yes! We must begin to cultivate an ethic of transparency around synthetic text. We can acknowledge to students that we might sometimes be tempted to autogenerate a document and not acknowledge the role of ChatGPT (I have certainly felt this temptation).

    12. they and their educators should understand the limitations of the tools outlined below.

      I appreciate these cautions, but I'm still concerned that by foregrounding the bulleted list of enticing possibilities, this document will mainly have the effect of encouraging experimentation with only lip service to the cautions.

    13. custom tutoring tools

      I'm concerned that any use of ChatGPT for tutoring would fall under the "overreliance" category as defined below. Students who need tutoring do not usually have the expertise or the time to critically assess or double check everything the tutor tells them. ChatGPT already comes off as more authoritative than it is. It will come across as even more authoritative if teachers are recommending it as a tutor.

    1. Why are people so quick to be impressed by the output of large language models (LLMs)?

      This take-down doesn't actually address this question; it uses it as a dismissal.

      It is a good question, though, and one not to be dismissed; its causes might be interrogated.

      I am impressed (while also skeptical of ChatGPT). Does that make me dumb?

    1. e-Literate isn’t about what I know. It’s about what I’m learning.

      There's an interesting point to be made about process here. Can the same be said for coursework: that writing for a class isn't about what you know but about what you are learning?

    2. students are heavily influenced by whether they believe their teacher cares about their learning.

      Making writing more of a process rather than a product, a process in which the teacher gives regular feedback to the student, would help build that relationship.

    1. ChatGPT doesn’t mark the end of high school English class, but it can mark the end of formulaic, mediocre writing performance as a goal for students and teachers. That end is long overdue, and if ChatGPT hastens that end, then that is good news.

      Provocative argument: ironically, it's the standardization of learning that is killed by AI writing platforms.

    2. Both started with a version of “Work A and Work B have many similarities and many differences,” an opening sentence that I would have rejected from a live student

      So what's the point: that ChatGPT isn't really all that sophisticated in its analysis? That it relies on clichéd structures? Either way, or both, I kind of buy it. It's not a creative writer. It's utilitarian.

      There's also an interesting point to be made here in terms of the prompts teachers provide students for essays. They too need to be sophisticated rather than simply "compare and contrast these two books."

    1. He said it was “very naive” to think it would be possible to impose restrictions on internet platforms, particularly with Microsoft primed to integrate AI into its search engine, Bing. “Are you going to ban Google and Bing?”

      Fair point.

    1. Pedagogically speaking, focusing on the grunt work of trying out ideas—watching them develop, wither, and cede ground to better ones—is the most valuable time we can spend with our students. We surrender that time to Silicon Valley and the messy database that is the internet at the peril of our students.

      This turns into a very traditional argument of the "don't use Wikipedia" variety.

    2. digital utopians might claim that students and teachers will have more opportunities for critical thinking because generating ideas—the grunt work of writing—isn’t taking up any of our time. Along this line of thinking, ChatGPT is just another calculator, but for language instead of numerical calculation.

      I'm still compelled by this idea TBH...

    1. Note that ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.

      Interesting. It almost takes MORE work to use ChatGPT in the context of such a heavily scaffolded writing process.

    2. a process that empowers critical thinking

      Yes, I've never felt I was simply teaching writing when I taught composition. Writing was a visible end product of a lot of other work (reading, thinking, and non-summative pre-writing activities) that I was training students in.

    3. skip the learning and thinking around which their writing assignments are designed.

      Or does it focus the learning? Just as I don't really care if my students know how to spell as long as they use spell check, what does writing with ChatGPT open up in terms of enabling students and instructors to focus on different aspects of writing?

    1. Right now, one of the most powerful things you can learn about ChatGPT is how to write quality prompts.

      Interesting. Writing instructors could start to train students in writing prompts for AI. The rubrics below are not dissimilar from what we traditionally ask students to do in their writing. So maybe ChatGPT isn't the death of the essay!

    1. "I would much rather have ChatGPT teach me about something than go read a textbook."

      What about accuracy? Textbooks go through a rigorous process of composition and editing to ensure accuracy. Most of what exists to be scraped on the internet does not. I realize this is an old Web 2.0 "problem."

      (Would textbooks even be available for scraping by ChatGPT? What does it have access to?)

    2. "We adapted to calculators and changed what we tested for in math class, I imagine.

      What are the implications here for the writing instructor? What "computational" equivalent to basic calculation would then be no longer central to teaching writing?

    1. This framing means that as educators we need to be clear not only about what we hope our students are learning but also about how and why.

      This seems to point to process over product and more formative assessment or scaffolding as part of instruction.

    1. Get support. Consider starting a conversation with other teachers or the child’s family about AI-generated workand the importance of students writing authentically.

      For higher education students, and especially where it's used as a support tool, I think it would be very useful to highlight how they may get the extra support they need. Not everyone cheats because they are lazy or disorganised. Getting into higher education is an achievement, so to throw it away in this way could point to deeper issues around support, well-being, etc.

    2. Set Clear Classroom Expectations For AI-Generated Writing

      I think this is crucial, along with educating the public, parents, students, and other teachers about what it can and can't do. The more familiar people are with it, the more quickly scaremongering and negative attitudes towards its use might be addressed. Everyone is an expert with their own views based on what they have read or been told, so we may as well do what we can to promote ethical and sensible/useful examples of its application to support teaching and learning.

    1. The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise.

      I wonder if there are analogs in math.

      The graphing calculator, for example, must have changed how math was taught, removing the need for that lower-order computation.

    2. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.

      This is probably the scariest of all: ChatGPT as editor rather than author.

    3. What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor.

      Wow, that's a provocative statement! What is meant by better here?

      On some level, I've always felt that a poorly-written, but original essay is better than a well-written, well-analyzed but plagiarized one.

    1. methods of assessment that take into consideration the processes and experiences of learning, rather than simply relying on a single artifact like an essay or exam. The evidence of learning comes in a lot of different packages

      How about Hypothesis social annotation throughout a course and throughout the process of essay composition?

    2. Rather than letting students explore the messy and fraught process of learning how to write, we have instead incentivized them to behave like algorithms, creating simulations that pass surface-level muster

      Annotation shows that messy process.

  9. Jan 2023
    1. In The New Laws of Robotics, legal scholar Frank Pasquale argues for guidance from professional organizations about whether and how to use data-driven statistical models in domains such as education or health care.

      Very interesting. Hypothesis, in its small way, can perhaps help some educators...

    2. we need collaborative processes to seek clarity.

      Indeed!

      And the reminder that writing (and knowledge production more generally) is always collaborative and has an audience, both of which are potentially elided by relying on ChatGPT to generate prose/ideas.

    3. “mathy math,” a model of language sequences built by “scraping” the internet and then, with massive computing, “training” the model to predict the sequence of words most likely to follow a user’s prompt

      A kind of plagiarism in and of itself?

    1. The text is being generated on behalf of the student and is being substituted for the student’s self-generated text. This use of AI is inherently dishonest.

      Could one still argue that it's a component piece of the text/writing that is generated? Just like spelling, grammar, and citation are?

      No doubt it's a lot MORE of the text that is generated and COULD be handed in completely as is in many cases. But could it nonetheless be seen as a kind of starting point for students to then focus on other work, other skills? Like the editing processes mentioned above.

    2. Teaching students to be good critical readers takes time and requires instructors develop activities, such as social annotation assignments, that draw students’ attention to the details of a well-written text.

      Yes! And they ARE writing when they read and annotate, so they can still practice and instructors can still evaluate that skill. It's just a very different writing assignment than a final paper.

    3. So, while effective editors may or may not be exceptional writers, they must be great critical readers.

      I have often wondered (when I was an English teacher), am I teaching writing or reading? Obviously the answer is both.

      The product of so many English courses is paper writing, but that's also meant to be an assessment of a student's reading, right?

      So maybe there's a shift to focus more on reading as a formative assessment that is needed?