17 Matching Annotations
  1. Last 7 days
    1. For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”

      This shows how moderation and user behavior constantly adapt to each other. When platforms try to filter certain language, people often respond creatively, which makes the system harder to manage. It also raises questions about whether removing certain words actually addresses harm, or just shifts how people express it.
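
      To make that cat-and-mouse dynamic concrete, here is a minimal sketch of a naive keyword filter and how a coined euphemism slips past it (the blocked-term list and example posts are hypothetical, not any real platform's rules):

      ```python
      # Naive keyword-based content filter: flag a post if it contains a blocked term.
      BLOCKED_TERMS = {"suicide", "kill myself"}

      def is_flagged(post: str) -> bool:
          """Return True if the post contains any blocked term (case-insensitive)."""
          text = post.lower()
          return any(term in text for term in BLOCKED_TERMS)

      print(is_flagged("I've been thinking about suicide"))  # True: exact term matches
      print(is_flagged("I've been feeling unalive lately"))  # False: euphemism slips through
      ```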

  2. Feb 2026
    1. Some disabilities are visible disabilities that other people can notice by observing the disabled person (e.g., wearing glasses is an indication of a visual disability, or a missing limb might be noticeable). Other disabilities are invisible disabilities that other people cannot notice by observing the disabled person (e.g., chronic fatigue syndrome, contact lenses for a visual disability, or a prosthetic for a missing limb covered by clothing). Sometimes people with invisible disabilities get unfairly accused of “faking” or “making up” their disability (e.g., someone who can walk short distances but needs to use a wheelchair when going long distances).

      This really stood out to me because it shows how much we rely on visibility to decide what we believe. If a disability doesn’t match people’s expectations of what it “should” look like, they’re quick to doubt it. It highlights how harmful those assumptions can be, especially when people already have to manage their condition privately.

    1. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      This is a good reminder that “private” on social media usually just means private from other users, not from the platform itself. It feels like the label creates a sense of safety that isn’t really accurate, especially when companies can still scan or access messages behind the scenes.
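
      As a rough illustration of what that automated searching could look like, here is a sketch of a program scanning stored messages (the storage layout and flagged-term list are invented for illustration, not any real platform's design):

      ```python
      # Hypothetical sketch: "private" messages stored server-side, scanned by a program.
      stored_messages = [
          {"from": "user_a", "to": "user_b", "text": "See you at 6pm?"},
          {"from": "user_c", "to": "user_d", "text": "Buy cheap followers here!"},
      ]

      FLAGGED_TERMS = {"cheap followers", "free crypto"}

      def scan_messages(messages):
          """Return messages matching any flagged term, which an employee
          with the right permissions could then read directly."""
          return [m for m in messages
                  if any(t in m["text"].lower() for t in FLAGGED_TERMS)]

      for match in scan_messages(stored_messages):
          print(match["from"], "->", match["to"], ":", match["text"])
      ```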

    1. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie rating datasets, but at least some users’ data could be traced back to them.

      This part stood out to me because it shows how fragile “anonymization” actually is. Even when obvious identifiers are removed, patterns in the data can still point back to real people. It makes anonymized data feel a lot less safe than it’s usually presented to be.
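
      To see how those patterns can point back to real people, here is a toy sketch of a linking attack in the spirit of the Netflix case (both tables and all names below are made up): anonymized rows are matched against named reviews posted publicly elsewhere that share the same quasi-identifiers.

      ```python
      # Toy linking attack: join "anonymized" rows to named public rows
      # on quasi-identifiers (movie, rating, date) instead of names.
      anonymized_ratings = [
          {"user_id": "u1087", "movie": "Brazil", "rating": 5, "date": "2005-03-14"},
          {"user_id": "u2231", "movie": "Heat", "rating": 3, "date": "2005-06-02"},
      ]

      public_reviews = [  # e.g., reviews posted under a real name on another site
          {"name": "Jane Doe", "movie": "Brazil", "rating": 5, "date": "2005-03-14"},
      ]

      def deanonymize(anon_rows, named_rows):
          """Pair anonymized user IDs with names sharing the same quasi-identifiers."""
          return [(a["user_id"], p["name"])
                  for a in anon_rows for p in named_rows
                  if (a["movie"], a["rating"], a["date"]) ==
                     (p["movie"], p["rating"], p["date"])]

      print(deanonymize(anonymized_ratings, public_reviews))  # [('u1087', 'Jane Doe')]
      ```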

    1. So those Redditors suggested they spam the site with fake applications, poisoning the job application data, so Kellogg’s wouldn’t be able to figure out which applications were legitimate or not (we could consider this a form of trolling). Then Kellogg’s wouldn’t be able to replace the striking workers, and they would have to agree to better working conditions.

      This example really blurs the line between harm and protest. Poisoning data is usually framed as something unethical, but here it’s being used as a tactic to support workers and push back against corporate power. It made me think about how the intent behind data poisoning matters a lot for how we judge it ethically, even if the technical action is the same.

  3. Jan 2026
    1. In 2011, a group on 4chan started spreading a plan for making a “Forever Alone Involuntary Flashmob.” You can see their instructions below:

      This example already feels unsettling even before seeing the instructions. Calling it a “flashmob” makes it sound harmless or playful, but the intent to target people who are already isolated shows how trolling can turn loneliness into a joke. It really highlights how collective anonymity can normalize cruelty that most individuals wouldn’t openly own.

    1. If the immediate goal of the action of trolling is to cause disruption or provoke emotional reactions, what is it that makes people want to do this disruption or provoking of emotional reactions?

      I think a lot of it comes down to power and attention. Provoking emotional reactions is an easy way to feel influential or visible, especially in spaces where you might otherwise feel ignored. Trolling also seems lower risk when there’s anonymity, so people can get that sense of control or amusement without facing real consequences.

    1. Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      I like that this complicates the usual “anonymity is bad” take. It makes sense that anonymity can actually increase authenticity for people who feel constrained offline, especially for marginalized identities. It feels less like anonymity causes inauthenticity by default, and more like it amplifies whatever social pressures already exist.

    1. To describe something as authentic, we are often talking about honesty, in that the thing is what it claims to be. But we also describe something as authentic when we want to say that it offers a certain kind of connection.

      This sentence really clarified authenticity for me. It’s not just about whether something is “real” or “fake,” but whether expectations line up with reality. That helps explain why joke accounts or surprise parties don’t feel inauthentic in a bad way because the mismatch isn’t harmful or misleading.

    1. 4chan has various image-sharing bulletin boards, where users post anonymously. Perhaps the most infamous board is the “/b/” board for “random” topics. This board emphasizes “free speech” and “no rules” (with exceptions for child pornography and some other illegal content). In these message boards, users attempt to troll each other and post the most shocking content they can come up with. They also have a history of collectively choosing a target website or community and doing a “raid” where they all try to join and troll and offend the people in that community.

      This part really shows how anonymity plus a “no rules” mindset changes behavior. It feels like the goal shifts from sharing ideas to just getting reactions, no matter the harm. Calling it “free speech” sounds ideal in theory, but here it mostly seems to reward whoever can be the most shocking or offensive.

    1. Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      This comparison really stood out to me. It makes today’s misinformation problem feel less like a brand new crisis and more like a return to an older media pattern, just amplified by speed and scale. The idea that the 1900s were the “unusual” period, not the norm, kind of flips how I usually think about media history.

    1. If we look at a data field like gender, there are different ways we might try to represent it. We might try to represent it as a binary field, but that would exclude people who don’t fit within a gender binary. So we might try a string that allows any values, but taking whatever text users end up typing might make data that is difficult to work with (what if they make a typo or use a different language?). So we might store gender using strings, but this time use a preset list of options for users to choose from, perhaps with a way of choosing “other,” and only then allow the users to type their own explanation if our categories didn’t work for them. Perhaps you question whether you want to store gender information at all.

      I like how this example shows that even something that seems simple like a “data field” actually involves a lot of value judgments. Every way of storing gender has tradeoffs between inclusivity, usability, and data cleanliness, and there isn’t a purely technical solution. It also made me stop and think about whether collecting certain data is even necessary in the first place.
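
      As a sketch of the last design the passage describes, a preset list plus a free-text “other” might look something like this (the option list and validation rules are illustrative choices, not a standard):

      ```python
      # Sketch: gender stored as a preset choice, with free text only for "other".
      from dataclasses import dataclass
      from typing import Optional

      GENDER_OPTIONS = {"woman", "man", "nonbinary", "prefer not to say", "other"}

      @dataclass
      class Profile:
          gender: str                          # must be one of GENDER_OPTIONS
          gender_other: Optional[str] = None   # free text, used only when gender == "other"

      def make_profile(gender: str, gender_other: Optional[str] = None) -> Profile:
          if gender not in GENDER_OPTIONS:
              raise ValueError(f"gender must be one of {sorted(GENDER_OPTIONS)}")
          if gender != "other" and gender_other is not None:
              raise ValueError("free-text description only applies when gender is 'other'")
          return Profile(gender, gender_other)

      print(make_profile("nonbinary"))
      print(make_profile("other", "genderfluid"))
      ```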

    1. Binary, consisting of 0s and 1s, makes it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.

      This makes sense to me, especially how binary maps so cleanly onto True/False. It’s interesting that something as abstract as “truth” in code ultimately comes down to 0s and 1s, which feels very different from how messy and ambiguous truth can be in real life.
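
      A few lines of Python show the mapping directly: in Python, bool is literally a subclass of int, so True and False behave as 1 and 0.

      ```python
      # True and False are Python's built-in boolean values,
      # and bool is a subclass of int: True acts as 1, False as 0.
      print(int(True), int(False))   # 1 0
      print(True + True)             # 2, because each True counts as 1
      print(isinstance(True, int))   # True: bool is a subtype of int
      ```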

    1. “Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.”

      The donkey example actually made this click for me. It’s a really clear way to show how intention and action can be separated, and how responsibility gets blurred when the “actor” doesn’t understand what it’s doing. Seeing bots framed this way helps me think less about blaming the account itself and more about the people behind it: whoever designed it, deployed it, or benefits from it, even if they’re far removed from the actual action.

    1. Note that sometimes people use “bots” to mean inauthentically run accounts, such as those run by actual humans who are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm.

      I like this clarification because “bot” gets thrown around so loosely online. It makes sense to draw the line at whether an account is actually run by software versus a human, even if that human is being paid and tightly scripted. Calling click-farm workers “human computers” is kind of unsettling, but it does a good job of showing how inauthentic behavior isn’t always automated in a technical sense.
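
      Under this definition, even a trivial program like the sketch below counts as a bot, because software rather than a person decides what gets posted (post_message here is a hypothetical stand-in for a real platform API call):

      ```python
      # Minimal bot sketch: the program, not a person, decides what and when to post.
      import time

      def post_message(text: str) -> None:
          print("posted:", text)  # a real bot would call a platform API here

      for hour in range(3):
          post_message(f"Automated weather update #{hour + 1}")
          time.sleep(1)  # a real bot might wait an hour between posts
      ```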

    1. When we (the authors) were young, as internet-based social media was starting to become commonly used, a popular sentiment we often heard was: “The internet isn’t real life.” This was used as a way to devalue time spent on social media sites, and to dismiss harms that occurred on them. Versions of this phrase are still around, such as in this tweet from statistician Nate Silver:

      I found this idea really interesting because it shows how outdated the phrase “the internet isn’t real life” has become. Online spaces now directly affect people’s mental health, relationships, and even job opportunities, so dismissing harm just because it happens online feels disconnected from reality. It makes me think about how language like this allows people and platforms to avoid taking responsibility for real consequences.

    1. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means. Utilitarianism: “It is the greatest happiness of the greatest number that is the measure of right and wrong.”

      I get why consequentialism is appealing, especially for platforms that rely on metrics like engagement or growth, but it also feels kind of risky. If decisions are made only based on what benefits the majority, harm to smaller or more vulnerable groups can easily be overlooked. In social media, this could mean justifying toxic or harmful content as long as it keeps most users active or entertained.
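
      A toy version of the utility calculus makes that worry concrete; the numbers below are invented purely to show how summing utilities can hide concentrated harm to a minority:

      ```python
      # Toy utility calculus: compare two actions by the sum of everyone's utility.
      keep_feature = [+1] * 95 + [-10] * 5   # 95 people mildly pleased, 5 badly harmed
      remove_feature = [0] * 100             # nobody helped, nobody harmed

      print(sum(keep_feature))    # 45: "keep" wins on total utility...
      print(sum(remove_feature))  # 0: ...even though 5 people bear serious harm
      ```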