18 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Mayo Clinic Staff. Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) - Symptoms and causes. 2023. URL: https://www.mayoclinic.org/diseases-conditions/chronic-fatigue-syndrome/symptoms-causes/syc-20360490 (visited on 2023-12-07).

      I chose this source because it makes invisible disability easier to understand. ME/CFS can seriously affect a person even if other people cannot see it, and symptoms can change over time. That connects well to the chapter’s point that not all disabilities are obvious from the outside.

  3. social-media-ethics-automation.github.io
    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.

      I liked this definition because it makes disability feel less like a personal defect and more like a mismatch between people and the way society is designed. That stood out to me because it shifts the focus from “what is wrong with this person” to “what assumptions are built into this space or system.”

    1. Right to privacy. November 2023. Page Version ID: 1186826760. URL: https://en.wikipedia.org/w/index.php?title=Right_to_privacy&oldid=1186826760 (visited on 2023-12-05).

      I thought this source was interesting because the “right to be forgotten” shows that privacy is not just about secrets. It is also about whether people can move on from old information that stays online for a long time. That connects well to this chapter’s point that privacy matters for many different reasons.

  4. social-media-ethics-automation.github.io
    1. When we use social media platforms though, we at least partially give up some of our privacy

      I thought this part was interesting because “private” messages on social media are not really fully private. They are private from other users, but not necessarily from the company. That feels important because a lot of people probably do not think about that difference very much when they use these apps.

  5. Apr 2026
  6. social-media-ethics-automation.github.io
    1. Web tracking. October 2023. Page Version ID: 1181294364. URL: https://en.wikipedia.org/w/index.php?title=Web_tracking&oldid=1181294364 (visited on 2023-12-05).

      This source says websites track what users do online and use it to understand behavior. I didn’t realize that includes things like what you click on or watch. It feels a bit uncomfortable because it means a lot of personal information is being collected without our really noticing.

  7. social-media-ethics-automation.github.io
    1. Additionally, spam and output from Large Language Models like ChatGPT can flood information spaces (e.g., email, Wikipedia) with nonsense, useless, or false content, making them hard to use or useless.

      I agree with this part. A lot of AI writing sounds correct at first, but sometimes it is empty or wrong, and that can really lower the quality of a website.

    1. Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction

      I like this definition because it shows trolling is more than just being rude online. The main thing is that the person is trying to mess up the conversation or get a reaction out of people. I think that makes it different from normal disagreement, even if both can look offensive sometimes.

  8. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      I thought this source was interesting because it shows trolling is not always just for fun or to make people mad. Sometimes it is also used to prove a point, especially by showing how easily people believe weird stories online. That connects well to this chapter.

  9. social-media-ethics-automation.github.io
    1. Jonah E. Bromwich and Ezra Marcus. The Anonymous Professor Who Wasn’t. The New York Times, August 2020. URL: https://www.nytimes.com/2020/08/04/style/college-coronavirus-hoax.html (visited on 2023-11-24).

      This source seems especially relevant to the chapter because it shows how anonymity online can become ethically complicated very quickly. From the title alone, it already suggests a gap between the identity a person presented online and who they really were. That connects directly to this chapter’s discussion of how anonymity can support either authentic self-expression or harmful inauthentic behavior. What interests me most is that anonymity is not just about privacy; it can also affect trust, credibility, and harm when other people believe the identity being performed is real.

    1. Anonymity can also encourage authentic behavior. If there are aspects of yourself that you don’t feel free to share in your normal life (thus making your normal life inauthentic), then anonymity might help you share them without facing negative consequences from people you know.

      I think this part is really interesting because people often talk about anonymity as if it is automatically harmful, but this section shows that it can also help people express their real selves. In my opinion, anonymity is not good or bad by itself. It depends on the situation and on what kind of behavior it makes possible. For some people, especially those who fear judgment, punishment, or discrimination, anonymity can create space for honest self-expression that would be difficult in everyday life. That makes me think the ethical question is less about anonymity itself and more about what kinds of actions and communities anonymity supports.

  10. social-media-ethics-automation.github.io
    1. Bulletin board system. November 2023. Page Version ID: 1184559343. URL: https://en.wikipedia.org/w/index.php?title=Bulletin_board_system&oldid=1184559343 (visited on 2023-11-24).

      I found this source interesting because bulletin board systems feel like an early version of features that are still common today, especially threads and community discussion spaces. Even though the technology was much simpler, the basic social pattern is familiar: one person posts, others reply, and a conversation forms around shared interests. This source made me realize that many features we associate with modern social media actually have much older roots.

    1. The Web 1.0 internet had some features that make it stand out compared to later internet trends:

      I found this part interesting because it shows that the design of a platform affects how people interact. When personal webpages and communication tools were separate, online social life seems like it was more divided and maybe more intentional. Today, many platforms put everything together in one space, which makes interaction easier, but also creates more pressure to always be visible and active.

  11. social-media-ethics-automation.github.io
    1. Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).

      This source caught my attention because the title is surprising, but it makes an important point. I think the article helps show that numbers are not always as simple or objective as they first appear. In real-world situations, the meaning of a number often depends on definitions, assumptions, and context. That connects strongly to this chapter, especially the discussion of how measuring Twitter bots depends on how people define what they are counting.

    1. We have to be aware that we are always making these simplifications, try to be clear about what simplifications we are making, and think through the ethical implications of the simplifications we are making.

      The sentence “all data is a simplification of reality” really stood out to me. I like this point because it reminds us that data is never just a perfect copy of the real world. The apple example was simple, but it clearly showed that counting something as “one” can hide important differences. I think this also connects strongly to the Twitter bot example, because the result depends a lot on how people define words like “user” or “spam bot.” This made me realize that when we look at data, we should not only ask whether it is correct, but also ask what has been simplified or left out.
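
      The idea that a count depends on the definition behind it can be sketched in a few lines of Python. The sample accounts and the two "bot" definitions below are invented for illustration, not taken from the chapter.

```python
# Invented sample of accounts: (posts_per_day, posts_only_via_api)
accounts = [
    (300, True),   # high-volume, fully automated account
    (300, False),  # high-volume human poster
    (2, True),     # low-volume scheduled poster
    (5, False),    # ordinary human account
]

# Definition A: a "bot" is any account that posts only through the API
bots_a = sum(1 for posts, api_only in accounts if api_only)

# Definition B: a "bot" must also be high-volume (over 100 posts/day)
bots_b = sum(1 for posts, api_only in accounts if api_only and posts > 100)

# The same data gives different answers depending on the counting rule
print(bots_a, bots_b)
```

      Neither definition is "the" correct one; the point is that the reported number changes with the counting rule, which is exactly why bot estimates for a platform can differ so much.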

  12. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL:

      This source caught my attention because the title already suggests that bias in bots is a real design problem, not just a technical mistake. It connects to the chapter by reminding us that even simple program structures can still produce harmful outcomes if the data, rules, or assumptions behind them are biased.

    1. One of the most common events to program for is around time: We can also tell programs to wait for a period of time, or start at a given time.

      This part made me think about how simple scheduling can make a bot feel much more active and intentional, even when it is doing a very basic task. A bot that posts at regular times may look more “human” or organized, which also raises ethical questions about transparency and whether other users should know they are interacting with automation.
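
      As a rough sketch of the time-based behavior quoted above, here is a minimal Python example. The `post_update` function is a hypothetical stand-in for a real platform API call, and the messages and interval are invented.

```python
import time

posted = []  # record of (timestamp, message) pairs so we can inspect the run

def post_update(message):
    # Hypothetical stand-in for a real social-media API call
    posted.append((time.monotonic(), message))

def run_scheduled_bot(messages, interval_seconds):
    """Post each message in turn, pausing a fixed interval between posts.

    Even this trivial schedule can make a bot look regular and
    intentional, which is part of the ethical point above."""
    for message in messages:
        post_update(message)
        time.sleep(interval_seconds)  # wait before the next post

run_scheduled_bot(["hello", "still here"], interval_seconds=0.1)
```

      In a real bot, `post_update` would call a platform library's posting function, and the interval would more likely be minutes or hours than seconds.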

  13. Mar 2026
    1. Just because we use an ethics framework to look at a situation doesn’t mean that we will come out with a morally good conclusion.

      I was most interested in the part saying ethics frameworks do not guarantee moral goodness. I agree because people can use the same framework to defend very different actions. This reminded me that ethical thinking in technology is not just about picking one theory, but about staying critical, comparing perspectives, and asking who might be harmed by a decision.

    2. Focuses on responsibilities and relational issues in the relationships you are invested in.

      I want to add a point about the ethics of care. The reading says it focuses on responsibilities in relationships, but I think it is also useful for social media because it highlights emotional harm that rule-based frameworks may miss. For example, even if a platform follows the same rule for everyone, it may still fail vulnerable users who need more protection and support.