20 Matching Annotations
  1. Last 7 days
    1. As you do this you might consider personality differences (such as introverts and extroverts), and neurodiversity, the ways people’s brains work and process information differently (e.g., ADHD, Autism, Dyslexia, face blindness, depression, anxiety). But be careful generalizing about different neurotypes (such as Autism), especially if you don’t know them well. Instead, try to focus on specific traits (that may or may not be part of a specific group) and the impacts on them (e.g., someone easily distracted by motion might…, or someone sensitive to loud sounds might…, or someone already feeling anxious might…).

      This is a really thoughtful way to approach it—focus on specific traits and needs, not labels. Different people can react very differently to the same app feature: autoplay videos, constant alerts, or crowded layouts might overwhelm one person but not another. Designing with flexibility (like mute options, reduced motion, clear layouts, and notification control) helps more people feel comfortable and included.

    1. Many have anecdotal experience with their own mental health and that of the people they talk to. For example, cosmetic surgeons have seen how photo manipulation on social media has influenced people’s views of their appearance.

      This is such a real issue—when edited photos become the “normal” standard, people can start feeling like their unfiltered face or body isn’t good enough. Cosmetic surgeons are noticing that some patients now bring filtered selfies as goals, which shows how social media can reshape self-image in unhealthy ways. It’s a good reminder that what we see online is often curated or altered, not a fair baseline for how real people should look.

  2. Feb 2026
    1. To use loop variables, we create a variable before our loop, and give it an initial value (often 0). Then within the loop over each item in our list, we can optionally add something to our loop variable. After the loop, our variable will have our final result.

      This example shows how a loop variable acts like a running total: it starts at 0, updates each time a condition is met, and stores the final count after the loop ends. The code is especially clear because it combines iteration (for letter in "Mississippi") with a conditional check (if letter == "i"), which is a common beginner pattern. You could make it even stronger by noting that this same structure works for counting anything in a list or string, not just letters.
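
      For reference, here is a minimal sketch of the pattern the comment describes, in the Python the textbook example implies (the variable name count is my own choice):

      ```python
      # Loop variable pattern: initialize before the loop, update inside, read after.
      count = 0
      for letter in "Mississippi":
          if letter == "i":
              count += 1  # only update when the condition is met
      print(count)  # prints 4
      ```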

    1. When social media platforms show users a series of posts, updates, friend suggestions, ads, or anything really, they have to use some method of determining which things to show users. The method of determining what is shown to users is called a recommendation algorithm, which is an algorithm (a series of steps or rules, such as in a computer program) that recommends posts for users to see, people for users to follow, ads for users to view, or reminders for users.

      This passage clearly explains that recommendation algorithms decide what users see by applying programmed rules to rank content like posts, friends, and ads. It also hints at why these systems matter: they shape users’ attention and experience on social media, often influencing what people believe is popular or important. You could strengthen it by briefly noting that these algorithms are usually optimized for engagement, not necessarily for accuracy or well-being.
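
      As a toy illustration of that point (not any platform’s actual algorithm; the posts and scoring weights here are invented), one recommendation step can be as simple as scoring and sorting:

      ```python
      # Toy recommendation step: rank posts by a crude engagement score.
      posts = [
          {"id": 1, "likes": 120, "replies": 4},
          {"id": 2, "likes": 30, "replies": 25},
          {"id": 3, "likes": 500, "replies": 1},
      ]

      def engagement_score(post):
          # Weighting replies more than likes is an arbitrary example choice.
          return post["likes"] + 10 * post["replies"]

      feed = sorted(posts, key=engagement_score, reverse=True)
      print([p["id"] for p in feed])  # [3, 2, 1]: highest predicted engagement first
      ```

      Whatever engagement_score rewards is what the feed amplifies, which is exactly the “optimized for engagement, not accuracy” worry.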

    1. What assumptions do the site and your device make about individuals or groups using social media, which might not be true or might cause problems? List as many as you can think of (bullet points encouraged).

      It shows how social media is built around a “default user” who has stable internet, a newer phone, lots of attention, and no accessibility needs. The assumptions about things like vision, hearing, motor control, and even privacy and safety are the ones that can cause real harm, because if the platform gets those wrong, people don’t just have a worse experience; they can get excluded or put at risk. I also love that it points out how “defaults” (autoplay, public sharing, tracking) are basically assumptions forced onto everyone unless they know how to fight the settings.

    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group might just be “normal” in another.

      This really makes me think about how “disability” isn’t just about someone’s body or mind—it’s also about whether the environment is built to include them. If a society assumes everyone can do things one specific way (like stairs everywhere or no captions), then people get labeled “disabled” mostly because the world isn’t designed for them.

    1. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      Yeah, it’s kind of a weird trade—we hand over a ton of personal info because we assume the platform will protect it, but breaches still happen all the time. It makes “trust” feel more like a gamble than a guarantee. What also bugs me is that once your data is out there, you can’t really take it back, even if the company apologizes or improves security later. It definitely makes me think twice about what I share and what permissions I give apps.

    1. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      That’s such a good reminder that “private” messaging usually just means “not public,” not “only between two people.” It makes me want to be more careful about what I assume is truly confidential when I DM. I also think it’s wild how much of this is automated—like filtering, scanning, or flagging messages without us noticing. It raises a real question of how much privacy we’re actually trading for convenience.
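
      As a deliberately naive sketch of what automated scanning could look like (real systems are proprietary and far more sophisticated, so the watchlist and logic here are purely hypothetical):

      ```python
      # Hypothetical server-side keyword flagging over "private" messages.
      flag_terms = {"password", "wire transfer"}  # invented watchlist

      def flag_message(text):
          # Flag the message if any watched term appears anywhere in it.
          lowered = text.lower()
          return any(term in lowered for term in flag_terms)

      print(flag_message("Hey, what's your password?"))  # True
      ```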

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people could get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out by mostly one narrow demographic, preventing many of the datasets from being used as intended.

      Yeah, that’s such a good example of “poisoning” that isn’t even malicious — it’s just the internet doing internet things. A dataset can get totally skewed if one group floods it, because suddenly the results stop representing the wider population the researchers were trying to study.
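
      With made-up numbers, it’s easy to see how one viral wave changes what a sample represents:

      ```python
      # Invented numbers showing how one group flooding a survey skews the sample.
      before = {"18-24": 50, "25-44": 60, "45+": 40}        # responses pre-viral
      after = {"18-24": 50 + 2000, "25-44": 60, "45+": 40}  # post-viral flood

      for sample in (before, after):
          total = sum(sample.values())
          share = sample["18-24"] / total
          print(f"18-24 share: {share:.0%} of {total} responses")
      # 18-24 share: 33% of 150 responses
      # 18-24 share: 95% of 2150 responses
      ```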

    1. One of the main goals of social media sites is to increase the time users are spending on their social media sites. The more time users spend, the more money the site can get from ads, and also the more power and influence those social media sites have over those users. So social media sites use the data they collect to try and figure out what keeps people using their site, and what can they do to convince those users they need to open it again later.

      Totally, and I think what makes it kinda sneaky is that the “product” isn’t really the app, it’s our attention. Once a platform learns what keeps you scrolling (likes, outrage, drama, cute videos, whatever), it can keep feeding you more of that so you stay longer without even realizing it.

  3. Jan 2026
    1. Every “we” implies a not-“we”. A group is constituted in part by who it excludes. Think back to the origin of humans caring about authenticity: if being able to trust each other is so important, then we need to know WHICH people are supposed to be entangled in those bonds of mutual trust with us, and which are not from our own crew. As we have developed larger and larger societies, states, and worldwide communities, the task of knowing whom to trust has become increasingly large. All groups have variations within them, and some variations are seen as normal. But the bigger groups get, the more variety shows up, and starts to feel palpable. In a nation or community where you don’t know every single person, how do you decide who’s in your squad?

      This made me think about how “we” isn’t just a warm, inclusive word—it’s also a boundary. In big communities where you can’t personally know everyone, people start using shortcuts (language, style, beliefs, even humor) to decide who feels “safe” or “one of us,” but those shortcuts can get unfair really fast.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction.

      I like that this definition highlights inauthenticity as part of trolling—because the troll isn’t trying to have a real conversation, they’re trying to derail one. It also makes me think about how trolling can change the vibe of a platform over time, since people start assuming bad faith and get more defensive.

    1. Authenticity is a concept we use to talk about connections and interactions when the way the connection is presented matches the reality of how it functions. An authentic connection can be trusted because we know where we stand. An inauthentic connection offers a surprise because what is offered is not what we get. An inauthentic connection could be a good surprise, but usually, when people use the term ‘inauthentic’, they are indicating that the surprise was in some way problematic: someone was duped.

      Authenticity is basically “what you see is what you get,” so you can trust the vibe and know where you stand. When something’s inauthentic, it’s not just different than expected; it’s different in a way that feels misleading, like you got played. And yeah, even if the surprise could be harmless, “inauthentic” usually implies someone was tricked or taken advantage of.

    2. This is not to say that there is no room for appreciating connections that are not fully honest, transparent, and earnest all the time. Social media spaces have allowed humor and playfulness to flourish, and sometimes humor and play are not, strictly speaking, honest. Often, this does not bother us, because the kind of connection offered by joke accounts matches the jokey way they interact on social media. We get to know a lot about public figures and celebrities, but it is not usually considered problematic for celebrity social media accounts to be run by publicist teams. As long as we know where we stand, and the kind of connection being offered roughly matches the sort of connection we’re getting, things go okay.

      I like this point because it feels realistic — social media isn’t always about being 100% “authentic,” and sometimes the whole vibe is obviously playful or curated. Joke accounts work because everyone’s in on the bit, and celebrity accounts run by teams don’t feel weird as long as it’s clear what kind of relationship we’re actually getting. The problem starts when the account pretends it’s a genuine personal connection but it’s really marketing or manipulation.

    1. The 1980s and 1990s also saw the emergence of more instant forms of communication with chat applications. Internet Relay Chat (IRC) let people create “rooms” for different topics, and people could join those rooms and participate in real-time text conversations with the others in the room.

      Yeah, IRC was basically the blueprint for a lot of what we still use now — it’s like an early version of Discord/Slack with topic-based “rooms” and live chat. I also think it’s cool how it shifted internet communication from slower, one-to-one stuff (like email) to real-time group conversations where communities could form and evolve on the fly.

    2. One of the early ways of social communication across the internet was email, which originated in the 1960s and 1970s. It allowed people to send messages to each other and to check whether any new messages had been sent to them.

      Totally — email was basically the OG “DM.” It’s kind of wild that something created back in the 1960s/70s is still a core way we communicate today, just with nicer interfaces and way more spam.

    1. In our example tweet we can see several places where data could be saved in lists:

      This section makes it click that a “tweet” isn’t just one thing—it’s basically a bundle of lists (a list of images, a list of likes, a list of replies, etc.). Thinking of it that way also helps explain why social media data gets huge fast, because each post can point to multiple growing lists. It’s kind of wild that even something simple like “who liked this” is literally stored as a list of accounts behind the scenes.
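
      To picture that, here is a hypothetical tweet record; the field names are illustrative, not Twitter’s actual schema:

      ```python
      # Hypothetical shape of a tweet record: one post bundling several lists.
      tweet = {
          "text": "Example post",
          "images": ["photo1.jpg", "photo2.jpg"],  # a list of attached media
          "liked_by": ["@user_a", "@user_b"],      # a list of accounts
          "replies": [                             # a list of more posts...
              {"text": "Nice!", "liked_by": []},   # ...each with its own lists
          ],
      }
      print(len(tweet["liked_by"]))  # 2
      ```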

    1. Metadata is information about some data. So we often think about a dataset as consisting of the main pieces of data (whatever those are in a specific situation), and whatever other information we have about that data (metadata). For example: If we think of a tweet’s contents (text and photos) as the main data of a tweet, then additional information such as the user, time, and responses would be considered metadata. If we download information about a set of tweets (text, user, time, etc.) to analyze later, we might consider that set of information as the main data, and our metadata might be information about our download process, such as when we collected the tweet information, which search term we used to find it, etc. Now that we’ve looked some at the data in a tweet, let’s look next at how different pieces of this information are saved.

      This part made me realize how “metadata” can be just as revealing as the main content, even if it seems harmless—like time, location, or who replied. In social media, you can sometimes learn more from patterns in metadata (posting frequency, networks, timing) than from what someone actually said. It also feels like a big privacy issue, because people might not realize they’re “sharing” all that extra info just by using a platform.
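
      One way to picture the data/metadata split the passage describes, using a structure and field names of my own:

      ```python
      # A downloaded set of tweets: the tweets are the main data,
      # and the notes about the download process are the metadata.
      download = {
          "data": [
              {"text": "Example post", "user": "@user_a", "time": "2026-01-05"},
          ],
          "metadata": {
              "collected_at": "2026-01-06T09:00:00",  # when we collected it
              "search_term": "#example",              # how we found it
          },
      }
      print(download["metadata"]["search_term"])  # #example
      ```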

    1. We would also like to point out that there are fake bots as well, that is, real people pretending their work is the result of a bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character.

      The “fake bot” idea is wild because it flips the usual problem—now humans can pretend to be AI to get attention, seem mysterious, or dodge accountability (“it wasn’t me, it was the bot”). That makes trust even harder, since it blurs what’s real automation versus just performance. It also makes me think platforms might need clearer disclosure norms, because otherwise people get rewarded for deception either way.

    2. As one example, in 2016, Rian Johnson, who was in the middle of directing Star Wars: The Last Jedi, got bombarded by tweets that all originated in Russia (likely making at least some use of bots).

      It’s kinda scary how coordinated bot/troll activity can make a backlash look way bigger (and more “real”) than it actually is, which can totally warp what people think the general public believes. The stat about a lot of negative tweets being politically motivated or not even human really shows why “online outrage” isn’t always a reliable measure of real opinion. It also feels unfair to creators, because they can get pressured or harassed by something that’s basically manufactured.