13 Matching Annotations
  1. Jan 2026
    1. 7.3.4. RIP trolling: RIP trolling is where trolls find a memorial page and then all work together to mock the dead person and the people mourning them. Here’s one example from 2013: A Facebook memorial page dedicated to Matthew Kocher, who drowned July 27 in Lake Michigan, had attracted a group of Internet vandals who mocked the Tinley Park couple’s only child, posting photos of people drowning with taunting comments superimposed over the images. One photo showed a submerged person’s hand breaking through the water with text reading “LOL u drowned you fail at being a fish,” according to a screen grab of the page shared with the Tribune after the post was removed. Cruel online posts known as RIP trolling add to Tinley Park family’s grief from the Chicago Tribune

      This is a particularly disturbing online behavior because it directly targets people experiencing profound grief. Families who have lost loved ones simply seek a quiet space to commemorate and mourn, only to be abruptly invaded by strangers who provoke them with mockery and malicious jokes. Such actions go beyond mere attention-seeking; they constitute collective harm. When confronting RIP trolling, the priority should not be demanding that victims endure it, but prompt platform intervention: swiftly removing content, restricting accounts, or even imposing permanent bans, so that mourning spaces are safeguarded from abuse.

    1. One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: Do not argue with trolls - it means that they win But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”

      Many people often say that the best way to deal with online trolls is to “just ignore them.” It sounds reasonable—as if by not responding, the troll will lose interest and disappear. But reality doesn't always play out that way. For some trolls, being ignored only fuels their desire to “make their presence felt.” They become more abusive and attack more frequently, just to provoke a response.

      This advice to “ignore trolls” actually places a huge burden on the person being harassed. It implies that if things escalate, it is the victim's fault for not handling the situation properly, rather than the harasser's fault for behaving abusively. In contrast, shifting the focus to how platforms detect and handle rule-breaking accounts seems more reasonable. Clearer rules, more timely moderation, and real consequences such as account suspensions or feature restrictions would make trolls pay a price, rather than forcing victims to endure in silence.

    1. In 2016, when Donald Trump was running a campaign to be the US President, one twitter user pointed out that you could see which of the Tweets on Donald Trump’s Twitter account were posted from an Android phone and which from an iPhone, and that the tone was very different. A data scientist decided to look into it more and found: “My analysis … concludes that the Android and iPhone tweets are clearly from different people, “posting during different times of day and using hashtags, links, and retweets in distinct ways, “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).” (Read more in this article from The Guardian) Note: we can no longer run code to check this ourselves because first, Donald Trump’s account was suspended in January 2021 for inciting violence, then when Elon Musk decided to reinstate Donald Trump’s account (using a Twitter poll as an excuse, but how many of the votes were bots?), Elon Musk also decided to remove the ability to look up a tweet’s source.

      This analysis intrigued me; it was the first time I realized that a data scientist could reasonably infer which tweets likely came from Trump himself and which from his campaign staff. It shows that even seemingly simple metadata can carry very strong behavioral signals. It also made me realize that platforms are not neutral technological spaces, but systems shaped by power, economic interests, and individual decisions.
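
      The kind of analysis the data scientist ran can be sketched roughly like this. A minimal Python illustration with entirely made-up tweet records (the real `source` field is no longer exposed, as the excerpt notes), grouping by posting device and comparing posting hour and a crude negative-word rate:

      ```python
      from collections import defaultdict

      # Hypothetical tweet records; only the method is real, not the data.
      tweets = [
          {"source": "Twitter for Android", "hour": 7,  "text": "Sad loser! Total disaster!"},
          {"source": "Twitter for Android", "hour": 8,  "text": "Crooked media, so dishonest!"},
          {"source": "Twitter for iPhone",  "hour": 14, "text": "Join us in Ohio tonight #MAGA"},
          {"source": "Twitter for iPhone",  "hour": 15, "text": "Thank you Tampa! Photos below"},
      ]

      # A toy sentiment lexicon standing in for a real sentiment analysis step.
      NEGATIVE_WORDS = {"sad", "loser", "disaster", "crooked", "dishonest"}

      # Group tweets by the device that posted them.
      by_source = defaultdict(list)
      for t in tweets:
          by_source[t["source"]].append(t)

      # Compare behavioral signals between the two groups.
      for source, group in by_source.items():
          avg_hour = sum(t["hour"] for t in group) / len(group)
          neg_rate = sum(
              any(w.strip("!,.").lower() in NEGATIVE_WORDS for w in t["text"].split())
              for t in group
          ) / len(group)
          print(f"{source}: avg hour={avg_hour:.1f}, negative share={neg_rate:.0%}")
      ```

      Even this toy version separates the two groups cleanly, which is the core of the original finding: device metadata plus simple aggregate statistics was enough to distinguish two authors on one account.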

    1. 6.6.2. Anonymity encouraging authentic behavior: Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know. 6.6.3. Is authentic self-expression good? We can next ask if authentic self-expression is a good thing or not. But that depends, what is the authentic thing about yourself that you would be expressing? Are you authentically expressing hate or cruelty? If so, perhaps authentic self-expression is morally bad. Are you part of an oppressed or marginalized group that has been restricted from self-expression? Then perhaps expressing yourself is morally good. (See Trans Twitter and the beauty of online anonymity)

      Anonymity itself is neither good nor bad; it is more like an amplifier of whatever the person is trying to express.

      If a person uses anonymity to express hatred and attack others, then this expression of the "true self" is clearly morally questionable because it directly harms others. But on the other hand, for some people who are repressed and marginalized in real life, anonymity may be the only safe way for them to express their true selves. In this case, anonymity is not a shirking of responsibility, but a form of self-protection.

    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      I think infinite scroll is a classic example of “friction-reducing design,” but its impact is actually a bit scary. In the past, with formats like search results, you had to click “next page” after finishing one page. While this action was a bit cumbersome, it provided a pause point, reminding you, “Should I stop now?” Infinite scroll completely removes that barrier. You just keep scrolling down, and content automatically loads, making you completely unaware of how long you've been scrolling.

      I think this is also why scrolling through social media is so addictive: it's not because we genuinely want to look for that long, but because the design eliminates every opportunity to stop.
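
      The design difference described above can be sketched as two loops. A toy Python model (the page size and fetch function are invented for illustration): paged results require an explicit action per page, while infinite scroll is effectively an endless generator with no built-in stopping point.

      ```python
      import itertools

      def fetch_page(page_num, page_size=3):
          """Toy stand-in for a server call that returns one page of posts."""
          start = page_num * page_size
          return [f"post {i}" for i in range(start, start + page_size)]

      # Paged design: each new page requires an explicit user action,
      # which acts as a natural pause point ("friction").
      def paged_feed(pages_requested):
          posts = []
          for page_num in range(pages_requested):  # one request per button press
              posts.extend(fetch_page(page_num))
          return posts

      # Infinite scroll: the feed is an endless generator; content keeps
      # arriving for as long as the user keeps scrolling.
      def infinite_feed():
          page_num = 0
          while True:
              yield from fetch_page(page_num)
              page_num += 1

      print(paged_feed(2))                                 # stops after 2 pages
      print(list(itertools.islice(infinite_feed(), 10)))   # only islice makes it stop
      ```

      The point of the sketch is that in the second design the stopping condition lives entirely with the user; the system itself never supplies one.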

    2. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad:

      I find this example particularly relatable because I frequently encounter it myself when playing mobile games: the “X” button for closing ads is designed to be so tiny that it is incredibly difficult to tap, and sometimes you accidentally click through to the ad page instead. On the surface this seems like a minor design detail, but it is a deliberate tactic to increase friction and make it harder for users to leave the ad. For advertisers and platforms, this keeps users engaged longer and even generates accidental clicks, boosting revenue. From the user's perspective, though, the design is downright annoying: it exploits our attention and imprecise taps to “force” us into unwanted actions. I believe this goes beyond ordinary design; it is manipulative design.

    1. When we think about how data is used online, the idea of a utility calculus can help remind us to check whether we’ve really got enough data about how all parties might be impacted by some actions. Even if you are not a utilitarian, it is good to remind ourselves to check that we’ve got all the data before doing our calculus. This can be especially important when there is a strong social trend to overlook certain data. Such trends, which philosophers call ‘pernicious ignorance’, enable us to overlook inconvenient bits of data to make our utility calculus easier or more likely to turn out in favor of a preferred course of action.

      When I think about how data is used on the web, the concept of a "utility calculus" is genuinely useful, because it reminds us to ask: did we really see all the data before deciding whether something is "more beneficial than harmful"? Often we work only with the information we have, but the missing data may be the most important part. I also agree with the text about "pernicious ignorance": in reality it is easy for people to overlook data that makes them uncomfortable or doesn't fit their position, so the result ends up supporting the choice they already wanted to make. This is especially true with social media and algorithmic recommendations, where what we see may already be filtered; if we don't ask "what's missing?", our utility calculations may be biased.
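
      A toy illustration of that point, with entirely made-up utility numbers: a calculus run over only the stakeholders we happen to have data for can reach the opposite conclusion from one that includes everyone affected.

      ```python
      # Hypothetical utilities (+ = benefit, - = harm) of some proposed action.
      visible_stakeholders = {
          "platform": +10,
          "advertisers": +5,
      }
      # Data it is socially convenient to overlook ("pernicious ignorance").
      overlooked_stakeholders = {
          "harassed users": -20,
      }

      partial = sum(visible_stakeholders.values())
      full = partial + sum(overlooked_stakeholders.values())

      print(f"Calculus with only convenient data: {partial:+d}")  # looks net positive
      print(f"Calculus with all the data:        {full:+d}")      # actually net negative
      ```

      The arithmetic is trivial; the ethical work is in noticing which rows are missing from the first dictionary.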

    1. Gender: Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:

      I've noticed that different websites offer very different gender options. Gender is a hard field to collect well: often the option a person actually identifies with isn't on the form at all. To be fair and treat each user equally, we need to offer options that actually fit them.
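
      One commonly recommended pattern (my own illustration, not from the text) is to pair a short list with a free-text self-description and a "prefer not to say" option. A minimal sketch of such a form field, with validation:

      ```python
      from dataclasses import dataclass
      from typing import Optional

      GENDER_CHOICES = [
          "woman",
          "man",
          "non-binary",
          "prefer to self-describe",  # paired with a free-text field
          "prefer not to say",
      ]

      @dataclass
      class GenderResponse:
          choice: str
          self_description: Optional[str] = None  # only for "prefer to self-describe"

          def __post_init__(self):
              if self.choice not in GENDER_CHOICES:
                  raise ValueError(f"unknown choice: {self.choice!r}")
              if self.choice == "prefer to self-describe" and not self.self_description:
                  raise ValueError("self-description is required for this choice")

      print(GenderResponse("prefer to self-describe", "genderfluid"))
      ```

      The design choice worth noting is that the free-text field keeps the schema open-ended, so the form never forces users into a category that doesn't describe them.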

    1. Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the “Gender Pay Gap Bot” bot on Twitter is connected to a database on gender pay gaps for companies in the UK. Then on International Women’s Day, the bot automatically finds when any of those companies make an official tweet celebrating International Women’s Day and it quote tweets it with the pay gap at that company:

      The bot is "confrontational", but it serves a social-justice purpose: using automation to counter the "pseudo-equality" messaging of corporate marketing and put the real structural problem (the pay gap) in front of the public. This example shows that some antagonistic bots can instead become tools for holding power accountable.
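
      The bot's core matching logic can be sketched roughly like this. The company handles, pay-gap figures, and function name below are all invented; only the overall shape (look up the tweeting company in a pay-gap database, reply if it is celebrating International Women's Day) comes from the excerpt.

      ```python
      # Hypothetical slice of a pay-gap database (median hourly pay gap, %).
      pay_gap_db = {
          "@ExampleCorp": 18.5,
          "@AcmeLtd": 4.2,
      }

      def make_quote_tweet(author_handle, tweet_text):
          """If a company with a known pay gap tweets about International
          Women's Day, return the quote-tweet text; otherwise return None."""
          gap = pay_gap_db.get(author_handle)
          if gap is None or "women's day" not in tweet_text.lower():
              return None
          return (f"In this organisation, women's median hourly pay is "
                  f"{gap}% lower than men's. {author_handle}")

      print(make_quote_tweet("@ExampleCorp", "Happy International Women's Day!"))
      ```

      What makes the real bot effective is that the reply is attached directly to the company's own celebratory tweet, so the contrast between message and data is visible in one place.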

    2. Bots might have significant limits on how helpful they are, such as tech support bots you might have had frustrating experiences with on various websites. 3.2.2. Antagonistic bots: On the other hand, some bots are made with the intention of harming, countering, or deceiving others.

      A bot itself is not good or bad; what matters is what it is designed for and how the platform's rules constrain it. Friendly bots (automatic captioning, vaccine-progress updates, red panda images) essentially improve access to information and the user experience. Antagonistic bots (spam, fake followers, astroturfing), however, can manufacture false public opinion, making people believe that "many people support/oppose a certain view", which directly distorts public judgment.

    1. Confucianism: Being and becoming an exemplary person (e.g., benevolent; sincere; honoring and sacrificing to ancestors; respectful of parents, elders, and authorities; taking care of children and the young; generous to family and others). These traits are typically displayed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), producing a harmonious society. Key figures: Confucius, China, ~500 BCE; Mencius, China, ~350 BCE; Xunzi, China, ~300 BCE. Taoism: Act in harmony with the natural cycles of the universe, taking unforced, natural action; trying to force things to happen will likely backfire. Rejects the Confucian focus on ceremony and ritual, favoring spontaneity and playfulness instead. Like water (soft and yielding), which over time can cut through rock. Key figures: Laozi, China, ~500 BCE; Zhuangzi, China, ~300 BCE.

      I thought it was interesting that many of these frameworks try to describe what makes a “good person,” but they don’t always agree about what that actually looks like. For example, Confucianism emphasizes rituals and social roles, while Taoism encourages doing less and letting things unfold naturally. Reading them side-by-side made me realize that ethical behavior can depend a lot on what a culture values, not just on universal rules.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look.

      I think this idea shows how social media can distort people's lives. When all we see is mostly good news, filters, and edited photos, it's easy to compare ourselves to something that was never real. Over time, this can affect our self-esteem and our expectations of what is "normal". This reminds me that we often forget that social media is more like a highlight reel than real life.

    1. Platforms can be minimalist, like Yo, which only lets you say “yo” to people and nothing else. Platforms can also be tailored for specific groups of people, like a social media platform for low-income blind people in India.

      I think it was interesting to put a minimalist platform like Yo together with a platform specifically for low-income blind people. The former looks "simple and easy to use," but the fact that it has fewer features means it can't do much. Specialized platforms, on the other hand, are more complex but really help those who need it most. It made me think: Platform design really depends on who you want to serve.