10 Matching Annotations
  1. Jan 2024
    1. While I agree with the general sentiment of Malesic’s point, there is an implication that students, perhaps in general, are not willing to learn

      I agree with this point. The learning that occurs in schools is unappealing to many students not because they don't care about learning at all, but because the topics covered aren't ones that interest them and because they are made to demonstrate that learning in specific ways that may not be fun or engaging.

    2. Because of this, it is safer for students to write in ways that express the already known, to produce what I call writing-related simulations.

      I agree with this! It has been noted by many, myself included, that the type of writing encouraged by education systems is more of a template to copy over and over with different topics than an actual way to think through and approach what one needs to write about.

      This ties into Warner's Substack article and his thoughts on people 'thinking like economists' about AI. There, he discussed how people treat AI as an assistant to labour and a way of becoming profitable; here, he points out that, in a sense, this has already been happening. Students are often required to write because it's the work they have to do, not because their writing is expected to contribute any new or revolutionary ideas. Similarly, many people hope AI will replace the work they have to do in everyday life rather than genuinely believing it will change the world (and, often, they only hope it changes the world just enough to replace their work, without necessarily contributing anything else that could benefit others).

    3. As I say ad nauseam, here, there and everywhere, writing is thinking, which means the act and experience of writing is both the expression and the exploration of an idea (or ideas).

      I feel that this is why so many people are against AI. People can use it to write something for them instead of exercising the brainpower they've developed over the years to think of original material themselves, thus making people 'dumber' (less prone to or interested in original thinking).

    4. Like MOOCs, some folks seem to believe the technology possesses some power beyond what we allow it to have. ChatGPT for sure sheds some light on what we’ve been up to, school- and teaching-wise, but the AI isn’t in control—we are. If ChatGPT is the end of high school English as we know it, well … that course in that form didn’t deserve to live anyway.

      He mentioned this a bit in his Substack article: Warner doesn't believe AI is as incredible as it's made out to be. He sees it as a tool, but not as a technology powerful enough to replace anything humans can come up with. His view is less pessimistic than that of many others, who believe that AI will be used in place of real creativity or intelligence, or that we'll never be able to tell AI writing apart from human writing.

    1. the chief legacy of McNamara’s approach was not the attempt to bring rationality to war, but instead to treat war as an “economic” rather than a “political” problem. Internal bells rang as I realized that this too is Andreessen’s analytic framework. Increased speed, efficiency, productivity, these are the values by which he is defining “saving the world.” This is what Elizabeth Popp Berman calls the “economic style of reasoning,” a style which has become dominant in the post-Cold War era of the country.

      Here, Warner prepares to introduce the downsides of how people think about AI. His view leans anti-capitalist in that he dislikes people treating AI as a means of profit. I agree with this to some extent, and I think it's a relevant point related to how he thinks about ChatGPT (and AI as a whole). He's not against its existence, but he dislikes that what could be used as a tool for learning and creativity is instead being treated as yet another form of labor.

    2. what happens? More questions appear, better questions, deeper questions that add facets to the book, and perhaps most importantly, keep me interested in the process.

      This is a relatively important aspect of writing that, although touched on in educational settings, I don't think is used enough. Formulating questions about the 'why' of a written piece helps writers work towards a conclusion and create a comprehensive narrative. It works in full-length books and in smaller works, like essays.

    1. This gets complicated because the same pipe often feeds into multiple faucets. So it takes careful thought to figure out which valves to tighten and which ones to loosen, and by how much.

      The way this is described makes it sound like the LLM's process is basically a digitized reconstruction of the human mind. This is more or less how human brains figure out which words to say in what order, and even though some language models can do this quickly, it happens far faster in the human brain than we can notice. I'm a bit conflicted on whether I find this really interesting and revolutionary and capable of great things, or whether I find it unnecessary: it sounds like a lot of work just to create technology that, so far, appears able to do only a small part of what a human brain can, even if the human scope of knowledge is limited in comparison.
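
      To make the "valves" analogy concrete for myself, here's a toy sketch (made-up numbers, and just plain gradient descent on a single parameter, not the full backpropagation the article describes): one shared weight stands in for a pipe that feeds two outputs, and the training step decides how much to tighten or loosen it.

      ```python
      # Toy sketch (not from the article): one shared weight ("pipe") feeds two
      # outputs ("faucets"). Gradient descent measures how far each output is
      # from its target and uses the combined pull to decide how much to
      # tighten or loosen the shared weight.
      import numpy as np

      w = 0.5                          # the shared "valve" setting
      inputs = np.array([1.0, 2.0])    # each input drives a different faucet
      targets = np.array([0.8, 1.2])   # what we want each faucet to produce

      learning_rate = 0.1
      for step in range(50):
          outputs = w * inputs                  # both faucets depend on the same w
          errors = outputs - targets
          grad = np.mean(2 * errors * inputs)   # combined pull from both faucets on w
          w -= learning_rate * grad             # adjust the shared valve a little

      print(f"final w = {w:.3f}, outputs = {w * inputs}")
      ```

      Because both outputs pull on the same weight, the final setting (about 0.64 here) is a compromise that leaves neither exactly on target, which seems to be why the article stresses how careful the adjustments have to be.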

    2. For example, the most powerful version of GPT-3 uses word vectors with 12,288 dimensions—that is, each word is represented by a list of 12,288 numbers. That’s 20 times larger than Google’s 2013 word2vec scheme. You can think of all those extra dimensions as a kind of “scratch space” that GPT-3 can use to write notes to itself about the context of each word. Notes made by earlier layers can be read and modified by later layers, allowing the model to gradually sharpen its understanding of the passage as a whole.

      I find this part interesting. I never thought about what the 'intelligence' part of AI entailed, and I didn't know that it could be self-sustaining: my knowledge was limited to the idea that there are people constantly updating and coding AI programs, and that the AI itself was mostly imitating things it picked up or that had been programmed into it.
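
      If it helps to picture the "scratch space" idea, here's a toy sketch (illustrative only; the update is random noise, nothing like real transformer math): each word carries one long list of numbers, and each layer reads that list and writes a small update back into it, so later layers can build on the notes earlier ones left. The 12,288 comes from the article; the 96-layer count is the figure usually cited for the largest GPT-3.

      ```python
      # Toy sketch (illustrative only; the "update" is random noise, not real
      # attention or feed-forward math). Each word carries one 12,288-number
      # vector, and each layer reads it and writes a small update back into it,
      # so later layers see the notes left by earlier ones.
      import numpy as np

      rng = np.random.default_rng(0)
      d_model = 12_288                           # vector length quoted in the article

      word_vector = rng.normal(size=d_model)     # the word's starting representation

      def toy_layer(vec):
          """Read the whole vector and add a small 'note' back into it."""
          update = 0.01 * rng.normal(size=vec.shape)   # stand-in for the real layer math
          return vec + update

      for _ in range(96):                        # 96 layers in the largest GPT-3
          word_vector = toy_layer(word_vector)

      print(word_vector.shape)                   # still one list of 12,288 numbers
      ```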

    3. For example, in some word vector models, doctor minus man plus woman yields nurse. Mitigating biases like this is an area of active research.

      How do these biases appear in word vectors? Do people not have control over which word vectors appear closer together? And how do AI programs pick up these biases if they're relying on a variety of different sources?
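
      Here's a toy sketch of the vector arithmetic behind "doctor minus man plus woman" (the 2-D coordinates are made up; real embeddings are learned automatically from huge amounts of text rather than set by hand, which seems to be how biases creep in without anyone choosing which vectors sit close together):

      ```python
      # Toy sketch with made-up 2-D coordinates (real embeddings are learned from
      # text and have hundreds or thousands of dimensions; nobody sets them by
      # hand). It shows how "doctor - man + woman" can land nearest to "nurse"
      # if the learned geometry associates those words that way.
      import numpy as np

      words = {
          "man":     np.array([1.0, 0.0]),
          "woman":   np.array([0.0, 1.0]),
          "doctor":  np.array([1.0, 0.3]),   # imagined bias: "doctor" sits near "man"
          "surgeon": np.array([0.9, 0.35]),
          "nurse":   np.array([0.1, 1.2]),   # imagined bias: "nurse" sits near "woman"
      }

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      query = words["doctor"] - words["man"] + words["woman"]
      candidates = [w for w in words if w not in {"doctor", "man", "woman"}]
      print(max(candidates, key=lambda w: cosine(words[w], query)))   # -> nurse
      ```

      With only a handful of hand-placed words this is contrived, but it shows the mechanism: the arithmetic lands wherever the learned geometry puts it, so if the training text ties 'nurse' to 'woman' more strongly than 'doctor', the biased answer falls out on its own.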

    4. Language models take a similar approach: each word vector represents a point in an imaginary “word space,” and words with more similar meanings are placed closer together. For example, the words closest to cat in vector space include dog, kitten, and pet. A key advantage of representing words with vectors of real numbers (as opposed to a string of letters, like “C-A-T”) is that numbers enable operations that letters don’t.

      I don't entirely understand this. Do the word vectors function as coordinates in a vector space? And what is the 'space', exactly? From what I can tell, it's a non-physical "space" that helps language models 'map things out' and find words in relation to other ones, but I don't actually know.

      With the 'cat' example, the words closest to it are either similar or are immediately associated with cats in the human mind. That makes it sound like words that are close to each other are assigned numbers that are close to each other, and the AI can associate these words with one another because of their numbers... but I don't know if there's more to it, or why there are so many different ways to represent individual words, such as 'cat', with so many different word vectors.
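
      Here's a toy sketch of what the 'space' seems to mean (3-D coordinates I made up; real models learn thousands of dimensions): each word's vector literally is a set of coordinates, and 'closest words' just means the points with the highest similarity score, e.g. cosine similarity.

      ```python
      # Toy sketch with made-up 3-D coordinates (real word vectors are learned and
      # far longer). Each word's vector literally is a point in a numeric "space",
      # and "closest to cat" just means the points with the highest similarity.
      import numpy as np

      word_space = {
          "cat":    np.array([0.9, 0.8, 0.1]),
          "dog":    np.array([0.8, 0.9, 0.2]),
          "kitten": np.array([0.95, 0.7, 0.1]),
          "pet":    np.array([0.7, 0.7, 0.3]),
          "car":    np.array([0.1, 0.2, 0.9]),
      }

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

      neighbors = sorted((w for w in word_space if w != "cat"),
                         key=lambda w: cosine(word_space[w], word_space["cat"]),
                         reverse=True)
      print(neighbors)   # ['kitten', 'dog', 'pet', 'car'] -- 'car' ends up farthest away
      ```

      As for why the same word gets so many different vectors: my understanding is that each model (word2vec, GPT-3, and so on) learns its own coordinates from its own training data, so 'cat' ends up with a different vector in each of them.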