16 Matching Annotations
  1. Feb 2026
    1. We want to provide you, the reader, with a chance to explore mental health further. We want you to consider potential benefits and harms to the mental health of different people (benefits like reducing stress, feeling part of a community, finding purpose, etc., and harms like unnecessary anxiety or depression, opportunities and encouragement of self-bullying, etc.). As you do this you might consider personality differences (such as introverts and extroverts), and neurodiversity, the ways people’s brains work and process information differently (e.g., ADHD, Autism, Dyslexia, face blindness, depression, anxiety). But be careful about generalizing about different neurotypes (such as Autism), especially if you don’t know them well. Instead, try to focus on specific traits (which may or may not be part of a specific group) and the impacts on them (e.g., someone easily distracted by motion might…, or someone sensitive to loud sounds might…, or someone already feeling anxious might…). We will be doing a modified version of the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). While the CIDER method normally assumes that making a tool accessible to more people is morally good, if that tool is potentially harmful to people (e.g., giving people unnecessary anxiety), then making the tool accessible to more people might be morally bad. So instead of just looking at the assumptions made about people and groups using a social media site, we will also be looking at potential harms to different people and groups using a social media site. Open a social media site on your device. Then do the following (preferably on paper or in a blank computer document):

      I like that this design analysis explicitly treats “accessibility to more people” as not automatically morally good if the underlying feature or platform dynamics can cause harm (e.g., unnecessary anxiety). That framing pushes us to evaluate both who benefits and who pays the costs, rather than assuming growth or engagement is neutral. It also made me think good mental-health-oriented design should be measured by outcomes like reduced harm and increased user agency—not just “time on site,” and that those metrics might differ across groups with different vulnerabilities.

    1. For example, Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”

      The idea of platforms trying to “detect” mental health states raises a tough ethical trade-off: even well-intentioned interventions can easily become surveillance if users didn’t meaningfully consent. The section’s examples make it clear that detection systems can be repurposed by employers or other actors in harmful ways, and they also risk false positives/negatives that may escalate anxiety or stigma rather than help. Because mental health signals are so context-dependent, I think platforms should be extremely cautious about automation here—prioritizing user control, transparency, and strict limits on how any detected “risk” can be used or shared.
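
      To make the evasion point concrete for myself: the quoted example of users coining “unalive” shows how easily keyword-based detection gets sidestepped. Here is a purely illustrative sketch (not Facebook’s actual system, which the cited article describes as machine learning over many signals) of a hypothetical keyword filter and how a coined word slips past it:

      ```python
      # Minimal, hypothetical keyword filter -- not any platform's real detector.
      RISK_KEYWORDS = {"suicide", "kill myself"}  # invented watchlist for illustration

      def flags_post(text: str) -> bool:
          """Return True if the post contains any watched keyword."""
          lowered = text.lower()
          return any(keyword in lowered for keyword in RISK_KEYWORDS)

      print(flags_post("I've been thinking about suicide"))     # True: keyword matched
      print(flags_post("I've been feeling so unalive lately"))  # False: coined word slips past
      ```

      Real systems are more sophisticated, but the cat-and-mouse dynamic the quote describes still applies: detection vocabularies lag behind the language users invent.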

    1. Virality and Intention

      The “virality vs. intention” discussion really helped me see that going viral isn’t a single outcome—content can spread in ways that align with the creator’s intent, or in ways that are antagonistic or humiliating. Ethically, that shifts some responsibility onto sharers and platforms: sharing is not “neutral,” and platform features (like duets/quote-tweets) can amplify misinterpretations or pile-ons that the original creator never consented to.

    1. Ethical Analysis of Recommendation Algorithms

      What I took from this chapter is that recommendation algorithms create structural outcomes that can’t be reduced to “just make better personal choices.” When platform owners frame algorithmic effects as purely user behavior (e.g., blaming people for interacting with content), it feels like responsibility is being shifted away from the system design—even though the system is shaping what interactions are even possible or likely.
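
      To see why “system design shapes outcomes” isn’t just a slogan, here is a minimal, hypothetical sketch of an engagement-ranked feed; the posts, fields, and scores are all invented, not any platform’s real algorithm. The key design choice is the objective being maximized, which is set by the platform, not by individual users:

      ```python
      # Hypothetical engagement-maximizing ranker; every name and number is invented.
      posts = [
          {"topic": "local news article",   "predicted_engagement": 0.12},
          {"topic": "outrage-bait post",    "predicted_engagement": 0.48},
          {"topic": "friend's life update", "predicted_engagement": 0.21},
      ]

      def rank_feed(posts):
          """Order posts by the platform's objective (predicted engagement),
          which determines what a user is even shown."""
          return sorted(posts, key=lambda post: post["predicted_engagement"], reverse=True)

      for post in rank_feed(posts):
          print(post["topic"], post["predicted_engagement"])
      ```

      Swapping the sort key (say, toward some measure of user-reported satisfaction) changes outcomes at the system level, which is exactly the kind of responsibility that “blame the users” framing obscures.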

    1. Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use them [2]. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

      The comparison between Universal Design and ability-based design helped me see two different ways of shifting the burden: either build multiple options into the environment, or make the system adapt to each user. I’m curious about the trade-off when the system “detects and adapts”—it could improve inclusion, but it might also introduce privacy risks or misclassification, so it seems important that users can control and override those adaptations.

    1. A disability is an ability that a person doesn’t have, but that their society expects them to have [1]. For example: If a building only has staircases to get up to the second floor (it was built assuming everyone could walk up stairs), then someone who cannot get up stairs has a disability in that situation. If a physical picture book was made with the assumption that people would be able to see the pictures, then someone who cannot see has a disability in that situation. If tall grocery store shelves were made with the assumption that people would be able to reach them, then people who are short, or who can’t lift their arms up, or who can’t stand up, all would have a disability in that situation. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation.

      I like the framing that disability is not just an individual trait, but something created by a mismatch between a person and the assumptions built into an environment (e.g., stairs-only buildings or picture books assuming sight). This makes accessibility feel like an ethical design obligation: when platforms assume one “default user,” they silently decide who gets full participation and who has to work around barriers.

    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

      This section makes the privacy vs. security distinction feel very concrete: users may accept some privacy trade-offs, but they still expect platforms to protect the data they collect. The examples about password storage and breaches underline that security failures aren’t just “technical accidents”—they’re governance choices (process, incentives, access controls), and it’s why protections like unique password hashing and 2-factor authentication matter in practice.
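
      What the quote calls “a special individual encryption process” is commonly implemented as salted, slow password hashing. Below is a minimal sketch using Python’s standard library (PBKDF2); it illustrates the idea rather than prescribing a production setup:

      ```python
      import hashlib, hmac, os

      def hash_password(password: str) -> tuple[bytes, bytes]:
          """Store a random salt plus a slow hash, never the password itself."""
          salt = os.urandom(16)  # unique per user, so identical passwords hash differently
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return salt, digest

      def check_password(password: str, salt: bytes, digest: bytes) -> bool:
          """Re-run the same hash and compare; the original password is never recovered."""
          candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return hmac.compare_digest(candidate, digest)

      # Two users with the same password get different stored values,
      # so the database can't even tell that the passwords match.
      salt_a, digest_a = hash_password("hunter2")
      salt_b, digest_b = hash_password("hunter2")
      print(digest_a == digest_b)                         # False
      print(check_password("hunter2", salt_a, digest_a))  # True
      ```

      The high iteration count is what makes each brute-force guess slow, which is the “each guess takes a lot of time” point in the quote.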

    1. We might want a conversation or action that happens in one context not to be shared in another (context collapse)

      I like how this section lists different reasons people want privacy, especially “context collapse,” because it shows privacy is often about controlling the audience rather than hiding wrongdoing. The point about “private” messages still being stored and searchable by the company is also a useful reminder that many privacy features are more like interface promises than true confidentiality—platform design should make those boundaries clearer and give users stronger control over retention and access.

    1. Spurious Correlations

      The discussion of spurious correlations is a great caution that data mining can “find” patterns that look convincing but are actually coincidences or driven by hidden variables. It makes me think ethical data work should require more than technical correctness—researchers (and platforms) should be explicit about uncertainty, avoid overclaiming causality, and consider how misleading correlations can shape public narratives or policy decisions.
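
      A quick way to see the problem: if you compare enough unrelated series, some pairs will correlate strongly by pure chance. This sketch uses nothing but random noise and Python’s standard library (statistics.correlation needs Python 3.10+):

      ```python
      # Generate many short series of pure noise and search for the "best" correlation.
      import random
      import statistics  # statistics.correlation requires Python 3.10+

      random.seed(0)
      n_points, n_series = 10, 200
      series = [[random.gauss(0, 1) for _ in range(n_points)] for _ in range(n_series)]

      best = max(
          ((i, j, statistics.correlation(series[i], series[j]))
           for i in range(n_series) for j in range(i + 1, n_series)),
          key=lambda pair: abs(pair[2]),
      )
      n_pairs = n_series * (n_series - 1) // 2
      print(f"Strongest correlation among {n_pairs} pairs of pure noise: r = {best[2]:.2f}")
      # Many pairs will look "strongly correlated" even though every series is random.
      ```

      Searching tens of thousands of comparisons guarantees impressive-looking r values somewhere, which is why “we found a strong correlation” means little without saying how many comparisons were searched.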

    1. Platforms also collect information on how users interact with the site.

      This section made me realize how “social media data” includes far more than what users explicitly post—things like what we click, what we pause on, location, and even direct messages can all become part of the data pipeline. Ethically, that feels like a consent and transparency problem: even if the platform’s terms mention data collection, users rarely understand the scope or downstream uses, and the fact that platforms may also collect information about non-users makes the boundary of consent even blurrier.
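
      To make the scope concrete, here is a hypothetical example of what a single logged interaction event might contain; every field name is invented for illustration, not taken from any real platform’s schema:

      ```python
      # Hypothetical interaction-event record a platform might log (invented schema).
      interaction_event = {
          "user_id": "u_184203",
          "event_type": "video_pause",        # not a post -- just a pause
          "post_id": "p_9981",
          "watch_time_ms": 7340,              # how long the user lingered
          "timestamp": "2026-02-03T18:42:07Z",
          "device": {"os": "Android", "app_version": "412.0"},
          "approx_location": {"lat": 47.61, "lon": -122.33},  # coarse location
          "referrer": "recommendation_feed",  # how the user arrived at this post
      }
      ```

      Nothing in this record is something the user “posted”; it is all behavioral and contextual metadata, which is why consent framed only around posts undersells what is actually collected.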

    1. Don’t feed the trolls

      The contrast between “don’t feed the trolls” and the critique that ignoring doesn’t stop persistent harassment really highlights a burden-shifting problem: it asks victims to do the work of preventing abuse. This makes me think the more scalable solution is platform governance and design: skilled moderation, clear rules, and meaningful enforcement, instead of relying mainly on individual self-defense strategies.

    1. Why troll?

      I appreciate how the chapter defines trolling as inauthentic posting aimed at disruption or provoking an emotional reaction, and then breaks down motivations like “lulz,” gatekeeping, power, and even “making a point.” What feels tricky is that “disruption” can sometimes be framed as satire or protest, so the ethical evaluation probably has to consider who bears the cost (targets vs. bystanders) and whether the disruption increases accountability or mainly produces harm.

    1. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching.

      I like the point that code-switching and “putting on a persona” can still be authentic because different communities have different norms for what sincere expression looks like. Context collapse on social media seems like a platform-caused pressure toward a single “flattened” self; it would be interesting to discuss which design features (audience controls, friction for resharing, clearer context cues) could reduce that pressure without isolating people into echo chambers.

    1. Authenticity is a rich concept, loaded with several connotations. To describe something as authentic, we are often talking about honesty, in that the thing is what it claims to be. But we also describe something as authentic when we want to say that it offers a certain kind of connection. A knock-off designer item does not offer the purchaser the same sort of connection to the designer brand that an authentic item does. Authenticity in connection requires honesty about who we are and what we’re doing; it also requires that there be some sort of reality to the connection that is supposedly being made between parties. Authentic connections frequently place high value on a sense of proximity and intimacy. Someone who pretends to be your friend, but does not spend time with you (proximity) or does not open themselves up to trusting mutual interdependence (intimacy) is offering one kind of connection (being an acquaintance) under the guise of a different kind of connection (friendship).

      What stood out to me is how “authenticity” here is not only about factual truth, but about whether the kind of relationship being offered matches what the audience thinks they’re getting (e.g., lonelygirl15 and the idea of being “duped”). It makes me wonder if platforms should treat authenticity as a design problem of signaling—for example, clearer disclosures for staged/fictional/AI-mediated content—so users can calibrate trust without banning playfulness or performance.

  2. Jan 2026
    1. The user interface of a computer system (like a social media site) is the part that you view and interact with. It’s what you see on your screen and what you press or type or scroll over. Designers of social media sites have to decide how to lay out information for users to navigate and decide how the user performs various actions (like, retweet, post, look up user, etc.). Some information and actions will be made larger and easier to access, while others will be smaller or hidden in menus or settings.

      The affordances vs. friction framing is really helpful for noticing “hidden” design choices. Infinite scroll is a great example: it removes natural stopping points, so it can quietly push people to keep consuming content longer than they intended. I like the idea of adding intentional friction for high-impact actions (like sharing), because it can reduce impulsive spread without banning the action.
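
      As a toy illustration of “intentional friction” (entirely hypothetical, not modeled on any specific platform’s reshare flow), here is a sketch where a share action pauses to ask whether the user has actually opened the link:

      ```python
      # Hypothetical reshare flow with one deliberate extra step (friction).
      def reshare(post_id: str, user_opened_link: bool, confirm) -> bool:
          """Ask for confirmation before amplifying a link the user hasn't opened."""
          if not user_opened_link:
              if not confirm("You haven't opened this link yet. Share anyway?"):
                  return False  # the friction gave the user a natural stopping point
          # ... proceed with the actual share (omitted in this sketch)
          return True

      # In a real interface `confirm` would be a dialog; here it's a stub that declines.
      shared = reshare("post_123", user_opened_link=False, confirm=lambda prompt: False)
      print(shared)  # False: the extra step interrupted an impulsive share
      ```

      The action isn’t banned; it just stops being the path of least resistance, which is the same lever infinite scroll pulls in the opposite direction.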

    1. This history is all very US focused. In future versions of this book, I hope to incorporate a more global history of social media.

      I appreciate the note that this history is very US-focused. A more global timeline could also highlight early/parallel social media ecosystems outside the US (for example, major messaging + social platforms that shaped “super-app” behavior and different norms around identity). It would be interesting to compare how business models and regulation in different regions changed what features became “standard.”