22 Matching Annotations
  1. Last 7 days
    1. So you might find a safe space online to explore part of yourself that isn’t safe in public (e.g., Trans Twitter and the beauty of online anonymity). Or you might find places to share or learn about mental health (in fact, from seeing social media posts, Kyle realized that ADHD was causing many more problems in his life than just having trouble sitting still, and he sought diagnosis and treatment). There are also support groups for various issues people might be struggling with, like ADHD, or having been raised by narcissistic parents.

      Online spaces can offer a powerful sense of safety and belonging, especially for people who feel unable to express certain parts of themselves in public. Anonymity can create room for exploration, honesty, and connection that might not otherwise be possible offline. At the same time, social media can also serve as an entry point to self-understanding, as people encounter language and experiences that help them recognize patterns in their own lives. Support groups and online communities show how digital platforms, despite their flaws, can meaningfully reduce isolation and encourage people to seek help.

    1. Doomscrolling is: “Tendency to continue to surf or scroll through bad news, even though that news is saddening, disheartening, or depressing. Many people are finding themselves reading continuously bad news about COVID-19 without the ability to stop or step back.” Merriam-Webster Dictionary

      This issue became especially severe during the COVID-19 pandemic. Large-scale prevention and control measures intensified people’s sense of uncertainty and anxiety, which in turn led many to lose trust in governments and other perceived “authorities,” weakening public credibility overall. At the same time, heightened stress seemed to push people toward more extreme positions, fostering a kind of defensive aggression in public discourse. As a result, hostility and resentment became more visible in everyday interactions.

  2. Feb 2026
    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      The way accessible design is often discussed creates an artificial divide between “designers” and “disabled users,” which reinforces exclusion rather than inclusion. Dr. Cynthia Bennett’s research highlights how disabled people are frequently treated as consultants or test subjects instead of being recognized as designers in their own right. This framing ignores lived experience as a form of expertise and limits what accessibility can become. If disabled people are not seen as real designers, then accessibility will always be partial and shaped by outsiders’ assumptions rather than real needs.

    1. Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use it2. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

      Universal Design shifts responsibility from individuals to systems, which feels fair on the surface, but fairness is not always the same as justice. In social media, treating everyone the same may still disadvantage people who start from very different positions. True justice may require platforms to offer different tools, protections, or visibility to different users based on their needs. So the goal should not only be “fairness,” but a more thoughtful form of equity that actively reduces harm rather than assuming equal access works for everyone.

    1. When we use social media platforms though, we at least partially give up some of our privacy. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly. In some cases we might want a social media company to be able to see our “private” messages, such as if someone was sending us death threats. We might want to report that user to the social media company for a ban, or to law enforcement (though many people have found law enforcement to be not helpful), and we want to open access to those “private” messages to prove that they were sent.

      I have always been skeptical about whether privacy on social media is truly “private.” In many cases, so-called private messages are still accessible to platform developers or automated systems, which means users are trusting companies to protect their privacy rather than actually controlling it themselves. While this access can be helpful in situations like reporting threats or harassment, it also raises questions about who ultimately benefits from this arrangement. If only social media companies are able to see and manage my private data, I am not sure that this kind of “privacy” genuinely serves users’ interests rather than the platforms’ own priorities.

    1. For example, the proper security practice for storing user passwords is to use a special individual encryption process for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

      It is interesting that using symbols, uppercase letters, and numbers does not significantly increase the difficulty of brute-force attacks, while increasing the length of a password dramatically raises the cost of cracking it. However, many social media platforms still emphasize “complex” password rules rather than encouraging longer passwords. This can create a false sense of security for users, who may believe their passwords are strong when they are not. Ironically, these complexity requirements can even make passwords harder to remember, leading users to reuse them or choose predictable patterns, which ultimately gives attackers more opportunities.

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      This example shows how datasets can be unintentionally poisoned by social dynamics rather than malicious intent. When a single TikTok video goes viral, it can dramatically change who participates in a survey, skewing the data toward one narrow demographic. Even though the data may look large and complete, it no longer represents the population the researchers originally intended to study. This highlights how data collection is shaped by platforms and visibility, and why researchers must think carefully about how and where their data is gathered.

    1. It turns out that if you look at a lot of data, it is easy to discover spurious correlations where two things look like they are related, but actually aren’t. Instead, the appearance of being related may be due to chance or some other cause. For example:

      When working with large data sets, it becomes clear how easy it is to find patterns that are misleading. Two variables may appear to move together, but this relationship can be caused by chance or by other hidden factors. Psychology has a similar idea captured by the phrase “correlation does not imply causation.” This reminds me that seeing a pattern in data is not the same as understanding why it exists, and that conclusions should be made carefully rather than based on surface-level relationships.
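      The "look at enough data and patterns appear" effect can be simulated directly. The sketch below (my own toy example; the series counts and lengths are arbitrary) generates fully independent random series, then scans every pair for the strongest correlation, which comes out "strong" purely by chance:

```python
import math
import random
import statistics

def correlation(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# 50 independent random series of 10 points each: no real relationships.
series = [[random.random() for _ in range(10)] for _ in range(50)]

# Scanning all ~1,225 pairs still turns up an apparently strong correlation.
strongest = max(
    abs(correlation(a, b))
    for i, a in enumerate(series)
    for b in series[i + 1:]
)
print(f"strongest |r| among unrelated series: {strongest:.2f}")
```

The more variables you compare, the more likely some pair will look related by accident, which is exactly why a pattern found by scanning large datasets needs an independent check before drawing conclusions.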

  3. Jan 2026
    1. Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan: You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up. The Alt-Right Playbook: The Card Says Moops by Innuendo Studios. While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character). Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling.
But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith11. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      This reading helped me see that trolling in spaces like 4chan isn’t just about “free speech” or joking, but about a lack of care for truth and harm. The idea that arguments are made without concern for whether they are true explains why these communities drift toward extreme misogyny and racism. While trolls may claim a nihilistic or egoist stance, this feels less like a genuine ethical position and more like a shield to avoid responsibility. Under almost any serious moral framework, the deliberate disruption and harm caused by trolling is ethically wrong, especially when it consistently targets marginalized groups.

    1. and their extreme misogyny: Rule 30. There are no girls on the internet Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out]

      Misogyny on the internet seems to be more severe than in real life—especially in the realm of online gaming. At first, I thought this was because gaming is a space that glorifies skill and power, where authority is tied almost exclusively to “game performance,” and stereotypes about women being worse at games lead to a loss of discursive power. However, if misogyny was already pervasive in the early internet, then I think there must be other contributing factors and explanations as well.

    1. Anonymity encouraging inauthentic behavior# Anonymity can encourage inauthentic behavior because, with no way of tracing anything back to you1, you can get away with pretending you are someone you are not, or behaving in ways that would get your true self in trouble.

      Anonymity can encourage people to act in ways they normally would not if their real identity were known. When there are no clear consequences, it becomes easier to pretend to be someone else or to say and do things that might cause trouble in real life. This lack of accountability often lowers people’s sense of responsibility toward others. Over time, anonymity can make online spaces feel less honest and more hostile, even though it can sometimes protect users as well.

    1. On social media, context collapse is a common concern, since on a social networking site you might be connected to very different people (family, different groups of friends, co-workers, etc.). Additionally, something that was shared within one context (like a private message), might get reposted in another context (publicly posted elsewhere).

      In fact, many young people today are already used to this situation. We are accustomed to showing different sides of ourselves in different contexts, and we strongly hope that these contexts do not interfere with one another. For example, we might act “cool” or relaxed around friends in order to fit in, while presenting ourselves as serious and hardworking in front of parents, teachers, or in school settings. When these different contexts merge or collide, people often feel a strong sense of discomfort, betrayal, or even resentment. This is not strange or abnormal—just as most people would not want their parents to closely observe their behavior at a party with friends. Because of this, people usually do not see themselves as fake or feel guilty for behaving differently across contexts; instead, it feels like a normal and necessary way of managing social life.

    1. Fig. 5.2 A newer bulletin board system. In this one you can click on the thread you want to view, and threads can include things like images.

      Although bulletin board systems originated decades ago, many websites today still use this forum-style structure. For example, the Dungeons & Dragons (D&D) community relies heavily on well-known bulletin board forums where players share discussions, homebrew rules, and game resources. Similarly, the fighting game MUGEN, which comes from the arcade era, has dedicated forums where users upload character models, stages, and other custom assets. These modern bulletin board systems show how this format continues to support niche communities by organizing discussions and resources in a clear, thread-based way.

    1. In the mid-1990s, some internet users started manually adding regular updates to the top of their personal websites (leaving the old posts below), using their sites as an online diary, or a (web) log of their thoughts. In 1998/1999, several web platforms were launched to make it easy for people to make and run blogs (e.g., LiveJournal and Blogger.com). With these blog hosting sites, it was much simpler to type up and publish a new blog entry, and others visiting your blog could subscribe to get updates whenever you posted a new post, and they could leave a comment on any of the posts.

      Although blogs are often seen as a product of the early internet, many middle-aged and older users still use them as an important way to communicate and share ideas. I once followed a professor from one of my classes who regularly posted his personal observations and reflections on his blog. For example, he noticed that people in the cafeteria were less likely to choose orange trays than brown ones, and he speculated that there might be a scientific or psychological explanation behind this preference. What stood out to me was that many commenters—who appeared to be from the same age group based on their usernames and profile pictures—actively engaged with his posts, showing that blogs still function as a thoughtful and community-oriented space for discussion.

    1. In fact, I have always been puzzled by the collection of information such as “region” and “age.” Is it really “necessary” for companies to collect it? These details do not guarantee that an account belongs to a real person, since fake accounts can randomly generate plausible combinations of them, but collecting them does increase the risk of user information being leaked.

    1. Or if I want to see for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.” Or for the person who posted it? Those might not be the same.

      As an international student, I have experienced this firsthand. Many social media platforms display a post’s timestamp in the viewer’s time zone (i.e., the device’s time zone) rather than the poster’s. That is, if you view the same tweet from different time zones, its displayed posting time changes.
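      The ambiguity of “yesterday” is easy to demonstrate with Python’s standard datetime tools. This is a toy sketch: the timestamp and the two viewer time zones are arbitrary choices, but the pattern (store one UTC instant, render it per viewer) is how such displays typically work.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One tweet posted at a single instant, stored internally in UTC.
posted_utc = datetime(2026, 2, 1, 2, 30, tzinfo=timezone.utc)

# The same instant rendered for viewers in two different time zones:
seattle_view = posted_utc.astimezone(ZoneInfo("America/Los_Angeles"))
beijing_view = posted_utc.astimezone(ZoneInfo("Asia/Shanghai"))

print(seattle_view.date())  # 2026-01-31: for this viewer the tweet is from "yesterday"
print(beijing_view.date())  # 2026-02-01: for this viewer the same tweet is from "today"
```

So a query like “how much did this account tweet yesterday?” has no single answer until you pin down whose calendar day counts, the viewer’s, the poster’s, or UTC’s.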

    1. What bots do you dislike?

      I dislike bot armies that are used to manipulate opinions or flood comment sections. In situations where people are supposed to think critically and make their own judgments, these bots create a lot of noise and confusion. They make it harder to tell what real people actually believe. Overall, they turn online discussions into a messy and unhealthy environment instead of a meaningful conversation.

    2. What bots do you find surprising? What bots do you like?

      Many bots on today’s video platforms are designed to recognize background music in videos and help users download audio or video clips. Even though these bots sometimes go against the rules of the platforms, they can be very useful for people who want to find songs or save content for personal use. I find these bots surprising because they can quickly identify music with high accuracy. I also like them because they make it much easier to explore and reuse media that would otherwise be hard to access.

    1. 18.3.2. Schadenfreude# Another way of considering public shaming is as schadenfreude, meaning the enjoyment obtained from the troubles of others. A 2009 satirical article from the parody news site The Onion satirizes public shaming as being for objectifying celebrities and being entertained by their misfortune: Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy. Nation Demands Fresh Celebrity Meat - The Onion

      I think this phenomenon is even more common on today’s internet, where almost everything—including suffering—is turned into entertainment. For example, in Sex Education, there is a storyline in which a school bully is revealed to be closeted and secretly in love with the gay student he targets. One scene, in which the bully publicly exposes his own physical insecurity as a form of “confession,” is played for humor and often provokes laughter. While the scene may be entertaining, I find it problematic because it turns school bullying into spectacle, overlooking the lasting psychological trauma experienced by the victim. The victim’s suffering cannot simply be erased or healed by understanding the bully’s motivations, even when those motivations are framed romantically.

    1. What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?

      Kumail Nanjiani reflects on his experiences visiting tech companies and expresses concern that many developers give little to no thought to the ethical consequences of their innovations. He highlights how powerful technologies—such as privacy-invasive tools or manipulated media—are often created with a “can we do this?” mindset rather than “should we do this?”. The lack of prepared responses to ethical questions suggests that these issues are rarely discussed within tech culture. Nanjiani ultimately warns that once technology is released, its harms cannot easily be undone, making ethical responsibility crucial.