- Feb 2023
-
bradfrost.com
-
This article discusses the importance of, and challenges facing, design systems in the age of AI. The author argues that as AI technology develops, design systems have become an indispensable tool: they help designers and developers work more efficiently while improving product consistency and reliability.
The article first defines design systems and their parts. A design system comprises design principles, components, styles, and tooling; its parts include a component library, a style guide, a design language, and supporting tools.
It then explores AI's impact on and challenges for design systems. AI can help designers generate and modify designs faster, improve user experience, better understand user needs and behavior, and deliver personalized recommendations. At the same time, AI raises challenges around algorithmic fairness and privacy.
Next, the article covers implementation and management. Implementing a design system involves component design and development, defining styles and language, and choosing tools; managing one involves maintaining the component library, providing documentation and training, and handling version control and collaboration.
Finally, the article stresses the importance of design systems and their future direction: as AI technology develops, design systems will continue to evolve and mature, delivering better experiences and services to users.
-
-
www.technologyreview.com
-
This article covers the growth and applications of the Rust programming language. The author notes that Rust is among the world's fastest-growing languages, with broad use across domains, particularly cybersecurity, blockchain, and machine learning. The article outlines Rust's main strengths, including memory safety, high performance, and cross-platform support, and considers its future directions and challenges.
Memory safety is one of Rust's defining features and has driven its adoption in security and systems programming. High performance is another: through its memory-management model and optimized code generation, Rust achieves faster execution than many other languages. Its cross-platform support across operating systems and hardware also makes it popular for distributed systems and cloud computing.
The article also surveys Rust in other domains, including blockchain, machine learning, and game development. Its memory safety and performance make it an attractive choice for blockchain application development, and its performance and scalability appeal to machine-learning researchers and developers.
Finally, the article considers Rust's future directions and challenges: broadening its application areas and strengthening community contribution and the ecosystem, while contending with a steep learning curve and gaps in libraries and tooling. With sustained community effort, the author argues, Rust can maintain its leading position and play a larger role across domains.
-
-
-
-
-
web.hypothes.is
-
critique the products of AI writing tools
Maybe start with Kevin Roose's conversation with "Sydney"--the alter-ego of the new AI powered Bing search/chat platform.
-
an intimidating blinking cursor on a blank page
-
We should be familiarizing ourselves with, and nurturing, our student’s writing styles and lines of inquiry.
I've seen some pushback on this idea in conversations on Twitter and elsewhere. I've heard some instructors say they don't necessarily have the bandwidth for this kind of intimate pedagogy.
I'm sympathetic to that challenge--MANY teachers are overworked and overwhelmed--but I still don't think backing off of humanizing education is the right approach. I'd rather focus systematically on freeing up teachers to use this approach.
-
In this moment of generative AI, Hypothesis continues to rely on what we’ve always done: support process-oriented pedagogies that make learning more accessible.
Check out our follow-up post for practical ideas on how to use social annotation to build more scaffolded process into your courses.
-
It’s hard to avoid concerns about plagiarism with the rise of ChatGPT.
I really struggled with whether to mention plagiarism at all in this post. I didn't want to add to the hype about "cheating students" and the surveillance side of edtech that has profited from it. But I wouldn't be honest if I didn't acknowledge that plagiarism comes up with many of the frontline teachers I work with daily.
-
-
ChatGPT could be used as a writing prompt for writers to leverage for their work in much the same way that [[Benjamin Franklin]] rewrote existing works or the major plot point in the movie [[Finding Forrester]] in which Jamal used William's work as a springboard for his own.
-
-
autumm.edtech.fm
-
I am skeptical of the tech inevitability standpoint that ChatGPT is here
inevitability is such an appropriate word here, because it captures a sort of techno-maximalist "any-benefit" mindset that sometimes pervades the ed-tech scene (and the position of many instructional designers and technologists)
-
-
-
I’m feeling anti-social today so I had a chat with ChatGPT about this by Paul Jacobson
I wonder what this sort of therapy looks like at scale?
-
-
www.linkedin.com
-
A calculator performs calculations; ChatGPT guesses. The difference is important.
Thank you! So beautifully and simply put. ChatGPT is also used mostly for tasks where there is no one clear right answer.
-
-
leonfurze.com
-
synthetic writing
Interesting phrase.
-
In PRS, I encouraged teachers to shift their focus from asking questions to teaching students how to ask high-quality questions themselves.
Perhaps in annotations?
-
-
www.youtube.com
-
Local file
-
Ippolito, Daphne, Ann Yuan, Andy Coenen, and Sehmon Burnam. “Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers.” arXiv, November 9, 2022. https://doi.org/10.48550/arXiv.2211.05030.
See also: https://wordcraft-writers-workshop.appspot.com/learn
A Google project that entered public view just as ChatGPT was released and becoming popular.
For additional experiences, see: https://www.robinsloan.com/newsletters/authors-note/
-
-
www.robinsloan.com
-
Author's note by Robin Sloan<br /> November 2022
-
-
sharegpt.com
-
However, the article does not take a clear stance on the matter and does not offer a conclusion on whether the capitalization of the word "black" is a good or bad thing.
This implies that it should take a stand but doesn't say why. Note that the New York Times article is not an editorial.
This summary also misses the stand implied by the choice to end on a quote from scholar Crystal Fleming explaining why she capitalizes "Black":
“Frankly,” she said, “because I want to. That’s also something that I think we need to see more of in every field — black people doing what we want to do.”
-
long overdue
Why? What value do these people see in the capitalization? This summary ignores the idea of power, which is central in the article. Eligon writes that capitalizing "Black" "confers a sense of power and respect to black people, who have often been relegated to the lowest rungs of society through racist systems, black scholars say."
-
John Eligon, a New York Times columnist, writes about the debate surrounding the capitalization of the word "black" to refer to African-Americans. The move to capitalize the word has gained widespread acceptance in recent weeks and has sparked a debate over identity, race, and power. Many news organizations have changed their style to capitalize the word, but not all have embraced it, including The New York Times and The Washington Post, who are still grappling with the decision. The capitalization of black is part of a long-standing struggle to refer to people of African descent in the most respectful and accurate way.
Here's a sample ChatGPT critical assessment of the NY Times article at https://www.nytimes.com/2020/06/26/us/black-african-american-style-debate.html
For contrast, see this human-written sample essay from the textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College: https://human.libretexts.org/Bookshelves/Composition/Advanced_Composition/Book%3A_How_Arguments_Work_-A_Guide_to_Writing_and_Analyzing_Texts_in_College(Mills)/04%3A_Assessing_the_Strength_of_an_Argument/4.11%3A_Sample_Assessment_Essays/4.11.02%3A_Sample_Assessment-_Typography_and_Identity
Tags
Annotators
URL
-
-
twitter.com
-
docs.google.com
-
https://docs.google.com/document/d/1E8b-aY6R-CUMgXe0UTCsdyHWHDatBa1DaQBvdcuA_Kk/edit
AI in Education Resource Directory
<small><cite class='h-cite via'>ᔥ <span class='p-author h-card'>Hypothesis</span> in Liquid Margins 38: The rise of ChatGPT and how to work with and around it : Hypothesis (<time class='dt-published'>02/09/2023 16:11:54</time>)</cite></small>
-
-
docs.google.com
-
https://docs.google.com/document/d/1WpCeTyiWCPQ9MNCsFeKMDQLSTsg1oKfNIH6MzoSFXqQ/preview<br /> Policies related to ChatGPT and other AI Tools
<small><cite class='h-cite via'>ᔥ <span class='p-author h-card'>Hypothesis</span> in Liquid Margins 38: The rise of ChatGPT and how to work with and around it : Hypothesis (<time class='dt-published'>02/09/2023 16:11:54</time>)</cite></small>
-
-
platform.openai.com
-
Educator considerations for ChatGPT<br /> https://platform.openai.com/docs/chatgpt-education
<small><cite class='h-cite via'>ᔥ <span class='p-author h-card'>Hypothesis</span> in Liquid Margins 38: The rise of ChatGPT and how to work with and around it : Hypothesis (<time class='dt-published'>02/09/2023 16:11:54</time>)</cite></small>
-
upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)
I'm concerned people will see this and remember it without thinking of all the errors that are described later on in this document.
-
ChatGPT use in Bibtex format as shown below:
Glad they are addressing this, and I hope they will continue to offer such suggestions. I don't think ChatGPT should be classed as a journal. We really need a new way to acknowledge its use that doesn't imply that it was written with intention or that a person stands behind what it says.
-
will continue to broaden as we learn.
Since there is a concern about the bias of the tool toward English and developed nations, it would be great if they could include global educators from the start.
-
As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).
I wonder how this information will be shared back so that other educators can benefit from it. I maintain a resource list for educators at https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/
-
one factor out of many when used as a part of an investigation determining a piece of content’s source and making a holistic assessment of academic dishonesty or plagiarism.
It's still not clear to me how these detectors can be used as evidence of academic dishonesty at all, even in combination with other factors, when they have so many false positives and false negatives. I can see them used to initiate a conversation with a student and possibly a rewrite of a paper. This is tricky.
-
Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.
I agree, though I think we should emphasize teaching about the limitations before teaching how to use the models. Critical AI literacy must become part of digital literacy.
-
Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.
Glad they mention this since I am skeptical of claims that students need to learn prompt engineering. The rhetorical skills I use to prompt ChatGPT are mainly learned by writing and editing without it.
-
While tools like ChatGPT can often generate answers that sound reasonable, they can not be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't or by misrepresenting the relative strength of two opposing arguments.
If we teach about ChatGPT, we might do well to showcase examples of these kinds of problems in output so that students develop an eye for them and an intuitive understanding that the model isn't thinking or reasoning or checking what it says.
-
While the model may appear to give confident and reasonable sounding answers,
This is a bigger problem when we use ChatGPT in education than in other arenas because students are coming in without expertise, seeking to learn from experts. They are especially susceptible to considering plausible ChatGPT outputs to be authoritative.
-
Web browsing capabilities and improving factual accuracy are an open research area that you can learn more about in our blog post on WebGPT.
Try PerplexityAI for an example of this. Google's Bard should be another example when released.
-
subtle ways.
Glad they mention this in the first line. People will see the various safeguards and assume that ChatGPT is safe because work has been done on this, but there are so many ways these biases can still surface, and since they are baked into the training data, there's not much prospect of eliminating them.
-
Verifying AI recommendations often requires a high degree of expertise,
This is a central idea that I wish were foregrounded. If we are trying to use auto-generated text in a situation in which truth matters, we need to be quite knowledgeable and also invest time in evaluating what that text says. Sometimes that takes more time than writing something ourselves.
-
students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.
It strikes me that OpenAI itself is warning of a coming flood of misinformation from language models. I'm glad they are doing so, and I hope they keep investing in improving their AI text classifier so we have some ways to distinguish human writing from machine-generated text.
-
Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.
Yes! We must begin to cultivate an ethic of transparency around synthetic text. We can acknowledge to students that we might sometimes be tempted to autogenerate a document and not acknowledge the role of ChatGPT (I have certainly felt this temptation).
-
export their ChatGPT use and share it with educators. Currently students can do this with third-party browser extensions.
This would be wonderful. Currently we can use the ShareGPT extension for this.
-
they and their educators should understand the limitations of the tools outlined below.
I appreciate these cautions, but I'm still concerned that by foregrounding the bulleted list of enticing possibilities, this document will mainly have the effect of encouraging experimentation with only lip service to the cautions.
-
custom tutoring tools
I'm concerned that any use of ChatGPT for tutoring would fall under the "overreliance" category as defined below. Students who need tutoring do not usually have the expertise or the time to critically assess or double check everything the tutor tells them. ChatGPT already comes off as more authoritative than it is. It will come across as even more authoritative if teachers are recommending it as a tutor.
-
-
hypothes.is
-
garymarcus.substack.com
-
Scaling neural network models—making them bigger—has made their faux writing more and more authoritative-sounding, but not more and more truthful.
Yes -- distinguishing the more realistic from the more truthful. That's where the conversation should be.
-
-
-
Why are people so quick to be impressed by the output of large language models (LLMs)?
This take-down doesn't actually address this question; it uses it as a dismissal.
It is a good question, though, and one not to be dismissed; its causes are worth interrogating.
I am impressed (while also skeptical of ChatGPT). Does that make me dumb?
-
-
eliterate.us
-
So will AI text generation tools revolutionize or kill college writing? Both! Neither! For sure! Probably! Eventually! Somewhat! It’s…complicated.
Nice summary of the discourse on ChatGPT!
-
e-Literate isn’t about what I know. It’s about what I’m learning.
There's an interesting point to be made about process here. Can the same be said for coursework: that writing for a class isn't about what you know, it's about what you are learning?
-
Particularly if used judiciously as part of the writing curriculum rather than the whole thing, it could be quite useful.
Very sensible.
-
students are heavily influenced by whether they believe their teacher cares about their learning.
Making writing more of a process rather than a product, a process in which the teacher gives regular feedback to the student, would help build that relationship.
-
Then I would have edited the output
Interesting. Collaborating with the bot in composition. It gets you started, but you are still needed.
-
-
www.forbes.com
-
ChatGPT doesn’t mark the end of high school English class, but it can mark the end of formulaic, mediocre writing performance as a goal for students and teachers. That end is long overdue, and if ChatGPT hastens that end, then that is good news.
Provocative argument: ironically, it's the standardization of learning that is killed by AI writing platforms.
-
Both started with a version of “Work A and Work B have many similarities and many differences,” an opening sentence that I would have rejected from a live student
So what's the point: ChatGPT isn't really all that sophisticated in its analysis? Relies on clichéd structures? Either way, or both, I kind of buy it. It's not a creative writer; it's utilitarian.
There's also an interesting point to be made here about the prompts teachers provide students for essays. They too need to be sophisticated, rather than simply "compare and contrast these two books."
-
If they put a great degree of thought into designing a prompt, would that not mean that they were doing something involving real learning?
Yes!
-
I suspect that test runs with ChatGPT depend in part on the richness of the prompt given,
Writing good prompts could be something we teach students.
-
And the algorithm cannot manage supporting its points with quotes from the works, a pretty fundamental part of writing about literature.
ChatGPT not good at integration of quotes, a key piece of writing from evidence.
-
-
www.theguardian.com
-
He said it was “very naive” to think it would be possible to impose restrictions on internet platforms, particularly with Microsoft primed to integrate AI into its search engine, Bing.“Are you going to ban Google and Bing?”
Fair point.
-
-
www.mckinsey.com
-
Ready to take your creativity to the next level? Look no further than generative AI!
Talk about "hype"!
-
-
www.insidehighered.com
-
AI is increasingly used for assessments of student learning. These are all valuable and enhance our efficiency and effectiveness.
But at what cost?
-
-
slate.com
-
At the same time, we need to continue building activities and assessments to make classroom work more specific and experiential.
Yes! Though I'm not sure that means banning AI as a tool, which is what this essay ends up arguing.
-
Pedagogically speaking, focusing on the grunt work of trying out ideas—watching them develop, wither, and cede ground to better ones—is the most valuable time we can spend with our students. We surrender that time to Silicon Valley and the messy database that is the internet at the peril of our students.
This turns into a very traditional argument of the don't use Wikipedia variety.
-
digital utopians might claim that students and teachers will have more opportunities for critical thinking because generating ideas—the grunt work of writing—isn’t taking up any of our time. Along this line of thinking, ChatGPT is just another calculator, but for language instead of numerical calculation.
I'm still compelled by this idea TBH...
-
-
criticalai.org
-
Analysis of recent events not in the training data for the system.
Wouldn't analysis and commentary on recent events be readily available on the Internet?
-
Note that ChatGPT can produce outputs that take the form of “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.
Interesting. It almost takes MORE work to use ChatGPT in the context of such a heavily scaffolded writing process.
-
get a better sense of their thinking
And if we're reading more of their writing through social annotation or other "steps" in the process, we also become familiar with their thinking.
-
a process that empowers critical thinking
Yes, I've never felt I was simply teaching writing when I taught composition. Writing was a visible end product of a lot of other work (reading, thinking, and non-summative pre-writing activities) that I was training students in.
-
students who feel connected to their writing will be less interested in outsourcing their work to an automated process.
Love this idea. Teaching students to own and enjoy their writing.
-
skip the learning and thinking around which their writing assignments are designed.
Or does it focus the learning? Just as I don't really care whether my students can spell as long as they use spell check, what might writing with ChatGPT open up, in terms of enabling students and instructors to focus on different aspects of writing?
-
-
leonfurze.com
-
Augmenting teachers, not replacing them
Amen!
-
There’s a line somewhere between using ChatGPT in collaboration, and getting it to do all the work.
Important point.
-
ChatGPT is not an original thinker, but you are.
This is important to remind students of too. And maybe a key area for teachers to focus on what students could contribute to a writing process that includes ChatGPT.
-
using the model’s suggestions as a starting point
Perhaps the same with students. Not using ChatGPT to write the essay, but perhaps in the brainstorming process.
-
Right now, one of the most powerful things you can learn about ChatGPT is how to write quality prompts.
Interesting. Writing instructors could start to train students in writing prompts for AI. The rubrics below are not dissimilar from what we traditionally ask students to do in their writing. So maybe ChatGPT isn't the death of the essay!
-
Beyond the media hype about cheating,
I think it's important to move past the plagiarism aspect of the debates around ChatGPT, but I don't think it's just "hype." Teachers are concerned.
-
-
www.businessinsider.com
-
"I would much rather have ChatGPT teach me about something than go read a textbook."
What about accuracy? Textbooks go through a rigorous process of composition and editing to ensure accuracy. Most of what exists to be scraped on the internet does not. I realize this is an old Web 2.0 "problem."
(Would textbooks even be available for scraping by ChatGPT? What does it have access to?)
-
the company has also heard from them that the chat bot can be "an unbelievable personal tutor for each kid," Altman said.
ChatGPT as a tutor. Perhaps with the same guardrails in place so that tutors don't do the work for the students.
-
"We adapted to calculators and changed what we tested for in math class, I imagine.
What are the implications here for the writing instructor? What "computational" equivalent to basic calculation would then be no longer central to teaching writing?
-
-
chat.openai.com
-
, an arithmetic operator in Python is not a function. An arithmetic operator is a symbol that performs
A test: I am trying to see if we can cite ChatGPT through Hypothes.is annotations.
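The excerpt above is cut off, but the distinction it draws is easy to demonstrate: Python's arithmetic operators are syntax, not values, while the standard-library `operator` module exposes their functional equivalents. A minimal sketch:

```python
import operator

# `+` is an operator: a symbol the parser understands, not a function
# object. Its functional counterpart is operator.add, which in turn
# dispatches to the __add__ special method on the left operand.
assert 2 + 3 == operator.add(2, 3) == (2).__add__(3)

# A function can be passed around as a value; the bare symbol cannot
# (`f = +` is a SyntaxError).
f = operator.add
assert f(4, 5) == 9
```

So ChatGPT's claim holds up here: the symbol itself is syntax, even though a function sits behind it.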
-
-
joshbrake.substack.com
-
This framing means that as educators we need to be clear not only about what we hope our students are learning but also about how and why.
This seems to point to process over product and more formative assessment or scaffolding as part of instruction.
-
The main goal of transparent teaching is simple: to promote students’ conscious understanding of how they learn.
So metacognition?
-
The educational issues surrounding ChatGPT are similar in kind to those we've seen with the growing power of the web
Yeah, is this even a new thing? Is this the same debate we've always had?
-
-
s3.amazonaws.com
-
Note that students will not be able to cite ChatGPT using a link to their generated response; instead, ask students to repeat the exact language of their search query in the footnotes in lieu of a link
Actually, citation is possible with this extension.
-
formulaic syntax
Interesting. So creativity is not its strength. It's imitative.
-
These tools, along with a range of other practices,
Yes, the practices are key! I doubt the battle of algorithms can be won by either side.
-
-
docdrop.org
-
scrape any website using tab gbt
= webscraping with ChatGPT
-
-
news.ycombinator.com
-
I've been using ChatGPT pretty consistently during the workday and have found it useful for open-ended programming questions, "cleaning up" rough bullet points into a coherent paragraph of text, etc. Whether it's $20/month useful is questionable, though, especially with all the filters. My "in between" solution has been to configure BetterTouchTool (Mac app) with a hotkey for "Transform & Replace Selection with Javascript". This is intended for doing text transforms, but putting an API call in instead seems to work fine. I highlight some text, usually just an open-ended "prompt" I typed in the IDE, or Notes app, or an email body, hit the hotkey, and ~1s later it adds the answer underneath. This works... surprisingly well. It feels almost native to the OS. And it's cheaper than $20/month, assuming you aren't feeding it massive documents' worth of text or expecting paragraphs in response. I've been averaging like 2-10 cents a day, depending on use.
Here is the JavaScript if anyone wants to do something similar. I don't know JS really, so I'm sure it could be improved, but it seems to work fine. You can even add your own hard-coded prompt if you want.

```javascript
async (clipboardContentString) => {
  try {
    // Call the OpenAI completions endpoint with the highlighted text
    const response = await fetch("https://api.openai.com/v1/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR API KEY HERE"
      },
      body: JSON.stringify({
        model: "text-davinci-003",
        prompt: `${clipboardContentString}.`,
        temperature: 0,
        max_tokens: 256
      })
    });
    const data = await response.json();
    const text = data.choices[0].text;
    // Replace the selection with the original text plus the completion
    return `${clipboardContentString} ${text}`;
  } catch (error) {
    return "Error";
  }
}
```
-
-
wcet.wiche.edu
-
create assessments that “take into consideration the processes and experiences of learning.”
Annotation!
-
Ask students to engage in metacognitive reflection that has them articulate what they have learned, how they have learned it, and why the knowledge is valuable.
Students annotating their own writing?
-
-
www.theatlantic.com
-
Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?
Great question!
-
The question isn’t “How will we get around this?” but rather “Is this still worth doing?”
Somewhat defeatist. Quit rather than evolve?
-
The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise.
I wonder if there are analogs in math.
The graphing calculator, for example, must have changed how math was taught, removing the need for that lower-order computation in math.
-
Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.
This is probably the scariest of all: ChatGPT as editor rather than author.
-
nor does it successfully integrate quotations from the original texts
Interesting. Probably easy for AI to develop this skill; more a current gap than a fundamental limit of the technology.
But, for now, maybe a good indicator of more sophisticated writing.
-
What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor.
Wow, that's a provocative statement! What is meant by better here?
On some level, I've always felt that a poorly-written, but original essay is better than a well-written, well-analyzed but plagiarized one.
-
-
www.insidehighered.com
-
methods of assessment that take into consideration the processes and experiences of learning, rather than simply relying on a single artifact like an essay or exam. The evidence of learning comes in a little of different packages
How about Hypothesis social annotation throughout a course and throughout the process of essay composition?
-
The fact that the AI writes in fully fluent, error-free English with clear structure virtually guarantees it a high score on an AP exam
Yikes!
-
ChatGPT may be a threat to some of the things students are asked to do in school contexts, but it is not a threat to anything truly important when it comes to student learning.
Great line, powerful claim.
-
an opportunity to re-examine our practices and make sure how and what we teach is in line with our purported pedagogical values.
Love this.
-
Rather than letting students explore the messy and fraught process of learning how to write, we have instead incentivized them to behave like algorithms, creating simulations that pass surface-level muster
Annotation shows that messy process.
-
- Jan 2023
-
sharegpt.com
-
The meaning of life is
Be a lot cooler if it just said 42.
-
-
www.insidehighered.com
-
In The New Laws of Robotics, legal scholar Frank Pasquale argues for guidance from professional organizations about whether and how to use data-driven statistical models in domains such as education or health care.
Very interesting. Hypothesis, in its small way, can perhaps help some educators...
-
we need collaborative processes to seek clarity.
Indeed!
And the reminder that writing (and knowledge production more generally) is always collaborative and has an audience, both of which are potentially elided by relying on ChatGPT to generate prose/ideas.
-
slow thinking,
Love it! Social annotation certainly helps slow reading, IMO.
-
Should I ask students to prompt a language model and then critique its output?
Great assignment idea!
-
preferences of data scraped from internet sites hardly renowned for their wisdom or objectivity.
Something else we try to teach our students, right?
-
“mathy math,” a model of language sequences built by “scraping” the internet and then, with massive computing, “training” the model to predict the sequence of words most likely to follow a user’s prompt
A kind of plagiarism in and of itself?
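The quoted description of a "model of language sequences" trained "to predict the sequence of words most likely to follow a user's prompt" can be made concrete with a toy sketch of my own (real LLMs use neural networks over tokens, not lookup tables, so this is only the statistical intuition):

```python
# Toy next-word prediction: count word bigrams in a tiny corpus,
# then return the most frequent follower of the prompt's last word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# followers["the"] ends up as Counter({"cat": 2, "mat": 1})
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(prompt: str) -> str:
    last = prompt.split()[-1]
    return followers[last].most_common(1)[0][0]

print(predict_next("on the"))  # "cat" -- the most frequent word after "the"
```

The "plagiarism in and of itself" question lands here: every prediction is assembled from frequencies of other people's text.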
-
What a contrast to the masochistic persistence I had practiced for so many years and preached to my struggling students.
So true. Writing is hard, isn't it? ChatGPT sometimes makes it look easy. What will students make of that!?
-
-
www.insidehighered.com
-
Back in the early 2000s, I used to demonstrate to students how EasyBib often gets it wrong when it comes to MLA formatting.
This is a great analogy. I remember feeling the same way about EasyBib when teaching comp.
-
having students socially annotate the paper, practicing their editing and fact-checking skills.
Yes! Would love to see an example of such an assignment.
-
The text is being generated on behalf of the student and is being substituted for the student’s self-generated text. This use of AI is inherently dishonest.
Could one still argue that it's a component piece of the text/writing that is generated? Just like spelling, grammar, and citation are?
No doubt it's a lot MORE of the text that is generated and COULD be handed in completely as is in many cases. But could it nonetheless be seen as a kind of starting point for students to then focus on other work, other skills? Like the editing processes mentioned above.
-
Teaching students to be good critical readers takes time and requires instructors develop activities, such as social annotation assignments, that draw students’ attention to the details of a well-written text.
Yes! And they ARE writing when they read and annotate, so they can still practice and instructors can still evaluate that skill. It's just a very different writing assignment than a final paper.
-
So, while effective editors may or may not be exceptional writers, they must be great critical readers.
I have often wondered (when I was an English teacher), am I teaching writing or reading? Obviously the answer is both.
The product of so many English courses is paper writing, but that's also meant to be an assessment of a student's reading, right?
So maybe there's a shift to focus more on reading as a formative assessment that is needed?
-
-
sites.google.com sites.google.com
-
Pedagogy: Some Ideas on How to Use GPT in Teaching
Tags
Annotators
URL
-
-
www.inverse.com www.inverse.com
-
I could instead present students with ChatGPT’s response alongside some marking instructions and ask them to provide a critique on what grade the automated response deserves and why.
What a great assignment idea (and Hypothesis could be used). Would really help students reflect on what writing is and what techniques/skills are needed to be an effective writer, some modeled by ChatGPT, some not.
-
do we really need all students to be writing the same essays and responding to the same questions?
Hmm
-
an opportunity to improve the way we assess
Twist!
-
articulate its inability to fully replicate the expertise and real-world experience that human teachers bring to the classroom
Learning from the discourse over the past 6 weeks?
-
If ChatGPT is used to grade assignments or exams,
Cheating for teachers?
-
making it capable of engaging in natural language conversations.
Is it conversation?
Tags
Annotators
URL
-
-
arxiv.org arxiv.org
-
Figure 3. The average drop in log probability (perturbation discrepancy) after rephrasing a passage is consistently higher for model-generated passages than for human-written passages. Each plot shows the distribution of the perturbation discrepancy d(x, pθ, q) for human-written news articles and machine-generated articles of equal word length from the models GPT-2 (1.5B), GPT-Neo-2.7B (Black et al., 2021), GPT-J (6B; Wang & Komatsuzaki, 2021), and GPT-NeoX (20B; Black et al., 2022). Human-written articles are a sample of 500 XSum articles; machine-generated text is generated by prompting each model with the first 30 tokens of each XSum article, sampling from the raw conditional distribution. Discrepancies are estimated with 100 T5-3B samples.
Quite striking here is the fact that larger, more powerful models are more capable of generating unusual, "human-like" responses, judging by the overlap in the log-likelihood distributions.
-
if we apply small perturbations to a passage x ∼ pθ, producing x̃, the quantity log pθ(x) − log pθ(x̃) should be relatively large on average for machine-generated samples compared to human-written text.
By applying small changes to a text sample x, we can compare the log probabilities of x and the perturbed example; the delta should be noticeably larger for machine-generated examples.
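The perturbation discrepancy described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `toy_log_prob` and `toy_perturb` are hypothetical stand-ins for the source model's log-probability scorer and the T5 mask-filling perturbation.

```python
import random

def perturbation_discrepancy(text, log_prob, perturb, n_perturbations=100):
    """Estimate d(x) = log p(x) - mean_i log p(x_i), where each x_i is a
    small perturbation of the input. DetectGPT flags a passage as likely
    machine-generated when this discrepancy is large."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# Hypothetical toy stand-ins. The paper scores with the source model's
# log-probabilities and perturbs with a T5 mask-filling model; these
# placeholders only illustrate the mechanics.
def toy_log_prob(text):
    return -0.1 * len(text)  # pretend shorter strings are more probable

def toy_perturb(text):
    words = text.split()
    i = random.randrange(len(words))
    words[i] += "ish"  # crude stand-in for rephrasing one word
    return " ".join(words)

random.seed(0)
d = perturbation_discrepancy("the cat sat on the mat", toy_log_prob, toy_perturb)
# every toy perturbation lengthens the text by 3 chars, so d is ~0.3
```

With real models, `log_prob` would score the passage under the model suspected of generating it, and `perturb` would mask and re-fill short spans with T5.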
-
As in prior work, we study a 'white box' setting (Gehrmann et al., 2019) in which the detector may evaluate the log probability of a sample, log pθ(x). The white box setting does not assume access to the model architecture or parameters. While most public APIs for LLMs (such as GPT-3) enable scoring text, some exceptions exist
The authors assume white-box access to the log probability of a sample, \(\log p_\theta(x)\), but do not require access to the model's actual architecture or weights.
-
Empirically, we find predictive entropy to be positively correlated with passage fake-ness more often than not; therefore, this baseline uses high average entropy in the model's predictive distribution as a signal that a passage is machine-generated.
This makes sense and aligns with GLTR: humans add more entropy to sentences by making unusual vocabulary choices that a model would not.
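The entropy baseline is simple to state: average the Shannon entropy of the model's next-token distributions across the passage and treat a high average as evidence of machine generation. A minimal sketch, where the distributions are toy inputs rather than real model outputs:

```python
import math

def mean_predictive_entropy(token_distributions):
    """Average Shannon entropy (in nats) of the model's next-token
    distribution at each position of a passage. The baseline treats a
    high average as a signal that the passage is machine-generated."""
    def entropy(dist):
        return -sum(p * math.log(p) for p in dist if p > 0)
    return sum(entropy(d) for d in token_distributions) / len(token_distributions)

# A uniform next-token distribution carries maximal entropy (log 4 nats
# for four tokens); a fully peaked one carries none.
uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [1.0, 0.0, 0.0, 0.0]
avg = mean_predictive_entropy([uniform, peaked])  # (log 4 + 0) / 2 = log 2
```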
-
We find that supervised detectors can provide similar detection performance to DetectGPT on in-distribution data like English news, but perform significantly worse than zero-shot methods in the case of English scientific writing and fail altogether for German writing.
Supervised detection methods fail on out-of-domain examples, whereas DetectGPT seems to be robust to changes in domain.
-
extending DetectGPT to use ensembles of models for scoring, rather than a single model, may improve detection in the black box setting
DetectGPT could be extended to use ensembles of models, allowing it to work in black-box settings where the log probs are unknown.
-
While in this work we use off-the-shelf mask-filling models such as T5 and mT5 (for non-English languages), some domains may see reduced performance if existing mask-filling models do not well represent the space of meaningful rephrases, reducing the quality of the curvature estimate.
The approach requires access to language models that can meaningfully and accurately rephrase (perturb) the outputs of the model under evaluation. If these do not align well, the method may underperform.
-
For models behind APIs that do provide probabilities (such as GPT-3), evaluating probabilities nonetheless costs money.
This does cost money to do for paid APIs and requires that log probs are made available.
-
We simulate human revision by replacing 5-word spans of the text with samples from T5-3B until r% of the text has been replaced, and report performance as r varies.
I question the trustworthiness of this simulation - human edits are probably going to be more sporadic and random.
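The revision procedure above can be sketched as follows. This is a simplified illustration, not the paper's code: `fill` is a hypothetical placeholder for the T5-3B span sampler the paper actually uses.

```python
import random

def simulate_edits(words, r, span=5, fill=None):
    """Replace random `span`-word windows until at least r% of the words
    have been changed, mimicking the paper's simulated human revision.
    `fill` defaults to a placeholder that inserts marker tokens instead
    of T5-3B samples."""
    if fill is None:
        fill = lambda n: ["<edit>"] * n
    words = list(words)
    edited = set()
    target = (r / 100) * len(words)
    while len(edited) < target:
        start = random.randrange(len(words) - span + 1)
        replacement = fill(span)
        for offset in range(span):
            words[start + offset] = replacement[offset]
            edited.add(start + offset)
    return words, len(edited) / len(words)

random.seed(0)
sample = [f"w{i}" for i in range(100)]
edited_words, frac = simulate_edits(sample, r=20)  # at least 20% replaced
```

Because spans are drawn at random, overlapping windows mean the loop may overshoot the target fraction slightly, which matches the paper's "until r% of the text has been replaced" phrasing.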
-
Figure 5. We simulate human edits to machine-generated text by replacing varying fractions of model samples with T5-3B generated text (masking out random five-word spans until r% of text is masked to simulate human edits to machine-generated text). The four top-performing methods all generally degrade in performance with heavier revision, but DetectGPT is consistently most accurate. Experiment is conducted on the XSum dataset.
DetectGPT shows 95% AUROC for texts that have been modified by about 10%, dropping to about 85% when up to 24% of the text is changed.
-
DetectGPT's performance in particular is mostly unaffected by the change in language from English to German
Performance of this method is robust against changes between languages (e.g. English to German)
-
Because the GPT-3 API does not provide access to the complete conditional distribution for each token, we cannot compare to the rank, log rank, and entropy-based prior methods
The GPT-3 API does not expose the conditional probabilities for each token, so we can't compare to some of the prior methods. That seems to suggest this method can be used with limited knowledge of the probabilities.
-
improving detection of fake news articles generated by the 20B-parameter GPT-NeoX
The authors test their approach on GPT-NeoX. The question is whether we can get hold of the log probs from ChatGPT to do the same.
-
This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5)
The novelty of this approach is that it is cheap to set up as long as you have the log probabilities generated by the model of interest.
-
See ericmitchell.ai/detectgpt for code, data, and other project information.
Code and data available at https://ericmitchell.ai/detectgpt
Tags
Annotators
URL
-
-
-
The real danger is not to people who are experts in their fields. Super experts in every field will continue to do what they have always done. All of us, however, are novices in almost everything we do. Most of us will never be experts in anything. The vast majority of the human experience of learning about something is done at the novice level. That experience is about to be autotuned.
This. And, connected to the perverse incentives of views and engagement, the flood of autotuned songs, adequate enough to pass through filters, is upon us.
-
Starting this year, we’re going to be returned a mishmash of all the information that is available on the Internet, sorted by mysterious practices
We're already there, really. The question isn't whether ChatGPT output is seen as, or mistaken for, human to a degree convincing to everyone (though eventually it will be), but that it is already indistinguishable from the lower-tier content, the shite that clogs up search.
-
-
irisvanrooijcogsci.com irisvanrooijcogsci.com
-
www.theatlantic.com www.theatlantic.com
-
but an opinion is different from a grounded understanding.
Preach! Here maybe we're approaching at the limits of AI writing chatbots and the horizons of where we need to push student writing.
-
Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated?
Scary!
-
-
every.to every.to
-
It can summarize things you’ve said to it in new language that helps you look at yourself in a different light and reframe situations more effectively.
This IS fascinating. Is something lost here, though?
I keep thinking about the journey versus the destination. There's no doubt a car gets you places faster and more efficiently than a bike. But riding a bike does open up a physical and geographic awareness less accessible in an automobile.
-
Journaling in GPT-3 feels more like a conversation, so you don’t have to stare at a blank page or feel silly because you don’t know what to say.
Is this the generative (and sometimes frustrating) part of journaling?
In general, this article seems rather utilitarian in its understanding of journaling. But I don't journal regularly so maybe I'm not one to talk.
-
If you know how to use it correctly and you want to use it for this purpose, GPT-3 is pretty close, in a lot of ways, to being at the level of an empathic friend
Interesting. In other contexts, AI has been aligned with the unfeeling?
-
-
wcet.wiche.edu wcet.wiche.edu
-
more and more jobs involve the use of generative AI for everything from discovering new drug molecules to developing ad copy,
Working with ChatGPT is preparing students for the workplace.
-
more creative assessments that require students to demonstrate application of knowledge rather than simply the ability to produce information.
More creative and more formative.
-
It forces us to reconsider what is distinctly human about intelligence if a machine can generate human language complete with analysis.
Really situates this moment in history.
-
I fully believe that the fact that the essay was written by AI and not a live person would be undetectable for many college admissions committees.
Yikes!
-
synthesize
Is AI really not synthesizing?
-
-
www.newsweek.com www.newsweek.com
-
deeper humanistic questions like, what is truth? What is. beauty? How do we know what we know?
Like the calculator opened up other areas of math education. Let chat bots do some of the grunt work?
-
I'm happy to say good riddance to the college essay and other "skills" that we've come to see as proving the value of the humanities.
Again with the throwing things away...
-
one that demands students find out something about themselves and tell it to you in a voice that is their own.
I do think student voice is an interesting place to focus attention on in this debate.
-
-
www.nytimes.com www.nytimes.com
-
One high school teacher told me that he used ChatGPT to evaluate a few of his students’ papers, and that the app had provided more detailed and useful feedback on them than he would have, in a tiny fraction of the time.
Interesting concern: AI writing chat bot replacing teacher. Like concerns over Perusall's algo-grading.
-
They’ll need to know their way around these tools — their strengths and weaknesses, their hallmarks and blind spots — in order to work alongside them. To be good citizens,
And good workers!
-
-
threadreaderapp.com threadreaderapp.com
-
blog.tesol.org blog.tesol.org
-
If my goal as a teacher is to help students learn, then should I withhold access to the same information until the timing is convenient for me? Some teachers might look at this and ask if they’re even relevant anymore. I would suggest that the answer is yes, but our focus will shift on helping students develop even deeper critical thinking skills in English. Less motivated teachers who just want to teach grammar may be weeded out over time, but those who are ready to take deeper dives will get the opportunity to mentor students on their path to becoming confident and autonomous English users.
-
ChatGPT can be used as a writing assistance tool. ELLs can use ChatGPT to generate ideas and receive feedback on their writing. ChatGPT can also provide grammar and spelling assistance, which can be particularly helpful for ELLs who are still learning the rules of the English language.
-
you can use pretty much any device with a microphone to transcribe your spoken English into the prompt box. Once the responses are generated, students can use screen reading software to verbalize the response.
-
-
chat.openai.com chat.openai.comChatGPT1
-
In OOP, objects are used to represent real-world entities, and the methods (i.e., functions) and attributes (i.e., v
This is a test to see if Hypothes.is works on ChatGPT.
-
- Dec 2022
-
-
At the end of the day, Copilot is supposed to be a tool to help developers write code faster, while ChatGPT is a general-purpose chatbot. It can still streamline the development process, but GitHub Copilot wins hands down when the task is coding focused!
GitHub Copilot is better at generating code than ChatGPT
-