34 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Luddite. December 2023. Page Version ID: 1189255462. URL: https://en.wikipedia.org/w/index.php?title=Luddite&oldid=1189255462 (visited on 2023-12-10).

      explains how a group of 19th century English textile workers protested against machines that were taking their jobs. They destroyed equipment to stand up for their rights in a rapidly changing world. Over time, “Luddite” has come to describe anyone skeptical of new technology.

    1. How have your views on ethics changed (or been reinforced)?

      My views on ethics have deepened over time. I used to think ethics was just about knowing what’s right and wrong, but now I see it’s more complicated than that. Real-life situations often don’t have clear answers, and doing the “right” thing can still feel uncomfortable or messy. I’ve learned that empathy, listening, and understanding different perspectives matter just as much as the rules. It’s made me more thoughtful and careful about the choices I make and how they affect others.

  3. May 2025
  4. social-media-ethics-automation.github.io
    1. Supply and demand. December 2023. Page Version ID: 1189274291. URL: https://en.wikipedia.org/w/index.php?title=Supply_and_demand&oldid=1189274291 (visited on 2023-12-10).

      This website explains how prices are set by the balance between what consumers want to buy and what producers want to sell. It also notes that real markets are more complex and influenced by factors like policy and competition.

    1. CEOs of companies (like Mark Zuckerberg of Meta) are often both wage-laborers (they get a salary, Zuckerberg gets a tiny symbolic $1/year) and shareholders (they get a share of the profits, Zuckerberg owns 16.8%)

      I was surprised to learn that Mark Zuckerberg only takes a $1 annual salary. At first, that seemed like a noble or humble gesture, but after reading this section, I realize it's actually a strategic move that reflects how power and wealth work in capitalism. Since he owns such a large share of the company, he doesn’t need a salary; he still holds power and profits through ownership. It really highlights how, in capitalism, ownership matters far more than labor in terms of control and influence.

  5. social-media-ethics-automation.github.io
    1. Guilt–shame–fear spectrum of cultures. November 2023. Page Version ID: 1184808072. URL: https://en.wikipedia.org/w/index.php?title=Guilt%E2%80%93shame%E2%80%93fear_spectrum_of_cultures&oldid=1184808072 (visited on 2023-12-10).

      The guilt–shame–fear spectrum describes how different cultures shape behaviour. Guilt cultures rely on an internal sense of right and wrong. Shame cultures focus on social image, and people avoid wrongdoing to prevent embarrassment. Fear cultures use threats of punishment and authority to maintain order.

    1. What do you consider to be the most important factors in making an instance of public shaming bad?

      Public shaming turns harmful when the reaction is way too harsh for what the person did, especially if people don’t know the full story. It gets worse when a big group joins in, and the person being shamed has no chance to explain, grow, or make things right. It’s even more unfair when it targets someone with little power.

  6. social-media-ethics-automation.github.io
    1. Doxing. December 2023. Page Version ID: 1189390304. URL: https://en.wikipedia.org/w/index.php?title=Doxing&oldid=1189390304 (visited on 2023-12-10).

      Doxing refers to the process of publicly releasing a person's private or personal data without consent, often as a way of bullying, intimidating, or threatening them. It is most commonly used as a form of internet bullying or revenge and carries with it some rather serious real-life consequences such as stalking or emotional damage. The article also identifies the legal and ethical issues of doxing and its growing popularity in online culture and online disputes.

    1. Do you believe crowd harassment is ever justified?

      No, I don't think crowd harassment is ever justified. Even if someone has done something wrong, a group of people attacking that person does more harm than good. It risks dehumanising the person and spreads fear and misinformation. Accountability should come through fair processes, not through public shaming.

  7. social-media-ethics-automation.github.io
    1. Crowdsourcing. December 2023. Page Version ID: 1188348631. URL: https://en.wikipedia.org/w/index.php?title=Crowdsourcing&oldid=1188348631#Historical_examples (visited on 2023-12-08).

      This source from Wikipedia provides a deeper history of crowdsourcing, showing that the practice existed long before the internet. One interesting example is the Oxford English Dictionary project in the 19th century, which relied on thousands of volunteers to mail in examples of word usage, essentially a pre-digital form of crowdsourcing. It also illustrates how the British government employed crowds to solve technical problems, like the longitude problem for navigation.

    1. Some of the different characteristics that means of communication can have include (but are not limited to):

      This highlights how the design of a communication tool shapes the way people interact. For example, anonymous and asynchronous tools can produce more reflective or honest contributions, while synchronous tools prioritise speed and urgency.

  8. social-media-ethics-automation.github.io
    1. Wikipedia. URL: https://www.wikipedia.org/ (visited on 2023-12-08).

      Wikipedia is an open-source, crowdsourced encyclopedia. Its technical model relies on a wiki-based content management system, allowing decentralised, real-time collaboration. While it enables collaborative learning, it also raises questions about content moderation, reliability, and bias.

    1. What would be considered bad actions that need to be moderated?

      From a Relational Ethics perspective, bad actions that need moderation are those that harm relationships and community well-being. This includes dehumanising speech like racism or sexism, spreading misinformation that endangers others, and behaviours like harassment. Instead of focusing just on individual freedom, this framework emphasises protecting the community.

  9. social-media-ethics-automation.github.io
    1. Anya Kamenetz. Facebook's own data is not as conclusive as you think about teens and mental health. NPR, October 2021. URL: https://www.npr.org/2021/10/06/1043138622/facebook-instagram-teens-mental-health (visited on 2023-12-08).

      This article discusses how Facebook's internal research found that Instagram negatively impacts the mental health of teenagers, yet the company continued to let teens use the platform without making any meaningful changes. This highlights an important issue: despite having data showing harm, Facebook chose not to take stronger action, likely due to business interests.

    1. But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed:

      I agree with the point being made here because blaming technology itself feels like a shortcut that lets us avoid dealing with the harder questions, such as how we use social media, who controls its design, and how we might shape it differently. Maybe instead of escaping the digital world through detoxes, we should be working toward building healthier online communities and advocating for platforms to prioritise user well-being.

  10. social-media-ethics-automation.github.io
    1. Evolution of cetaceans. November 2023. Page Version ID: 1186568602. URL: https://en.wikipedia.org/w/index.php?title=Evolution_of_cetaceans&oldid=1186568602 (visited on 2023-12-08).

      It's fascinating to think about how the concept of evolution in cetaceans can relate to viral content. Just as whales and dolphins evolved from land animals to ocean dwellers through gradual changes over millions of years, viral content adapts over time through user modifications and platform algorithms. This makes me wonder if understanding this "evolutionary" process could help predict which types of content are more likely to go viral.

    1. Sometimes content goes viral in a way that is against the intended purpose of the original content.

      This made me think about how little control people actually have once they post something on the internet. Small memes and jokes can turn into something serious just because people interpret them differently. I have seen this happen with videos that were meant to be heartfelt, but one small detail is off, and suddenly the whole video is being shared as comedy.

  11. social-media-ethics-automation.github.io
    1. Systemic bias. November 2023. Page Version ID: 1185361788. URL: https://en.wikipedia.org/w/index.php?title=Systemic_bias&oldid=1185361788 (visited on 2023-12-07).

      This article explains well how biases can be built into systems even when no one intends it. It notes that these patterns often come from the technologies themselves rather than from individual users.

    1. YouTube will and has funnel creators and viewers around in ways that reflect the biases and prejudices of the population it serves.

      This section stood out to me because it highlights a dangerous feedback loop: when the recommendation algorithm learns from user preferences, and those preferences are shaped by societal biases, it risks reinforcing discrimination rather than challenging it.
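
      To make that loop concrete, here is a toy simulation (a sketch with made-up numbers, not any real platform's algorithm): the recommender favours whatever already has clicks, users click with a slight pre-existing bias, and the gap compounds well beyond the initial bias.

      ```python
      import random

      # Toy popularity-based recommender: recommend in proportion to past
      # clicks. The audience carries a slight societal bias toward channel_A.
      clicks = {"channel_A": 51, "channel_B": 49}          # small initial gap
      click_prob = {"channel_A": 0.55, "channel_B": 0.45}  # biased audience

      for _ in range(10_000):
          # The algorithm "learns from user preferences": more past clicks
          # means more exposure.
          shown = random.choices(list(clicks), weights=list(clicks.values()))[0]
          if random.random() < click_prob[shown]:
              clicks[shown] += 1

      print(clicks)  # channel_A typically ends up far past a 55/45 split
      ```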

  12. Apr 2025
  13. social-media-ethics-automation.github.io
    1. Social model of disability. November 2023. Page Version ID: 1184222120. URL: https://en.wikipedia.org/w/index.php?title=Social_model_of_disability&oldid=1184222120#Social_construction_of_disability (visited on 2023-12-07).

      I found this really interesting because it shifts the focus from an individual’s body being “broken” to society not accommodating different needs. I liked how it explained that what makes something a "disability" is often the way environments are designed without thinking about a range of people.

  14. social-media-ethics-automation.github.io
    1. A disability is an ability that a person doesn’t have, but that their society expects them to have.[1] For example:

      I found it really interesting how this chapter emphasised that disability is socially defined rather than just a physical or medical condition. It made me think about how often we blame individuals for not fitting into systems that were never designed with everyone in mind. For example, when public spaces aren’t accessible, we often treat it as the disabled person's problem rather than a design failure.

    1. Social engineering (security). December 2023. Page Version ID: 1188634196. URL: https://en.wikipedia.org/w/index.php?title=Social_engineering_(security)&oldid=1188634196 (visited on 2023-12-08).

      I found it fascinating how psychological manipulation is such a powerful tool in cyberattacks. One detail that really stood out was how attackers often rely more on tricking people than on hacking systems. For example, phishing emails that pretend to be from a trusted source can lead people to unknowingly give up their login credentials.

  15. social-media-ethics-automation.github.io
    1. So for example, Facebook stored millions of Instagram passwords in plain text [i8], meaning the passwords weren’t encrypted and anyone with access to the database could simply read everyone’s passwords.

      The section about Facebook storing Instagram passwords in plain text was honestly shocking. I always assumed that big tech companies would follow basic security practices like encrypting passwords. It’s frustrating to realise that even when we try to use strong, unique passwords and enable 2-factor authentication, our data can still be compromised because of the company's negligence.
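
      For contrast, the standard practice the passage says Facebook skipped is to store only a salted hash, never the password itself. A minimal sketch using Python's standard library (hashlib.pbkdf2_hmac; the iteration count is just an illustrative choice):

      ```python
      import hashlib
      import hmac
      import os

      def hash_password(password: str) -> tuple[bytes, bytes]:
          """Return (salt, digest); only these go in the database."""
          salt = os.urandom(16)  # unique random salt per user
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return salt, digest

      def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
          digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
          return hmac.compare_digest(digest, stored)  # constant-time compare

      salt, stored = hash_password("correct horse battery staple")
      print(verify_password("correct horse battery staple", salt, stored))  # True
      print(verify_password("guess", salt, stored))                         # False
      ```

      With this scheme, someone who reads the database sees only salts and digests; recovering the original passwords requires a costly brute-force attack rather than a simple lookup.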

  16. social-media-ethics-automation.github.io
    1. Kurt Wagner. This is how Facebook collects data on you even if you don’t have an account. Vox, April 2018. URL: https://www.vox.com/2018/4/20/17254312/facebook-shadow-profiles-data-collection-non-users-mark-zuckerberg (visited on 2023-12-05).

      The article sheds light on something that is both invasive and unsettling. Facebook's ability to collect information about people who haven't even created an account serves as a reminder that our digital footprints are not always in our control. It’s not just about cookies and trackers anymore; it’s about being mapped and profiled through others’ digital activity. What makes this source so powerful is the way it brings an ethical issue to life: even opting out of a platform doesn’t mean opting out of surveillance.

  17. social-media-ethics-automation.github.io
    1. Then Sean Black, a programmer on TikTok saw this and decided to contribute by creating a bot that would automatically log in and fill out applications with random user info, increasing the rate at which he (and others who used his code) could spam the Kellogg’s job applications:

      This shows how automation can rapidly scale data poisoning. The use of a bot to submit fake data highlights a vulnerability in systems that lack strong input validation. It also shows how low barriers to entry for automation can turn small-scale protests into large-scale disruptions.
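
      As a rough illustration of that validation gap (a hypothetical form handler, not Kellogg's actual system), even basic server-side checks would reject most random-string submissions; real systems add rate limiting and CAPTCHAs on top:

      ```python
      import re

      def looks_plausible(application: dict) -> bool:
          """Hypothetical sanity checks a bot submitting random text would fail."""
          email_ok = re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}",
                                  application.get("email", ""))
          phone_ok = re.fullmatch(r"\+?[\d\s()\-]{7,15}",
                                  application.get("phone", ""))
          name_ok = application.get("name", "").replace(" ", "").isalpha()
          return bool(email_ok and phone_ok and name_ok)

      print(looks_plausible({"name": "Ada Lovelace",
                             "email": "ada@example.com",
                             "phone": "555-0100"}))   # True
      print(looks_plausible({"name": "xK9#q",
                             "email": "not-an-email",
                             "phone": "???"}))        # False
      ```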

  18. social-media-ethics-automation.github.io
    1. Film Crit Hulk. Don’t feed the trolls, and other hideous lies. The Verge, July 2018. URL: https://www.theverge.com/2018/7/12/17561768/dont-feed-the-trolls-online-harassment-abuse (visited on 2023-12-05).

      This article argues that the common advice to not feed the trolls often does not help victims of online harassment. Instead of trolls getting bored and going away, they sometimes even become more aggressive. The article suggests that victims should be given more power through strong moderation, setting clear rules and removing abusive users from platforms rather than asking the victim to stay silent.

    1. Have you witnessed different responses to trolling? What happened in those cases?

      Yes, I have seen different responses to trolling, especially from influencers. Sometimes when hate or threats go too far, these influencers call out the trolls publicly, often in stories or posts. What I find interesting is that once these trolls are called out in public, they often issue an apology. It shows how easy it is to say harmful things while hiding behind a screen without being held accountable for them.

  19. social-media-ethics-automation.github.io
    1. COVID-19 pandemic. November 2023. Page Version ID: 1186598722. URL: https://en.wikipedia.org/w/index.php?title=COVID-19_pandemic&oldid=1186598722 (visited on 2023-11-24).

      The article shows how the crisis reshaped digital spaces, especially social media. A notable detail is how platforms became central to both official public health messaging and the rapid spread of misinformation. This dual role not only affected public perception and behaviour but also underscored the growing influence and responsibility of online platforms in shaping real-world outcomes.

    1. Where do you see parasocial relationships on social media?

      Parasocial relationships can be seen across a range of social media platforms where the creators release personal content such as behind-the-scenes footage and life updates. These platforms foster an impression of intimacy through the ability to interact directly with fans via comments, live streaming, and Q&A sessions. Followers, in turn, may feel a personal connection to the creator, forming a one-sided bond where they believe they know the person, even though the creator may not be aware of their existence.

  20. social-media-ethics-automation.github.io
    1. Web 2.0. October 2023. Page Version ID: 1179906793. URL: https://en.wikipedia.org/w/index.php?title=Web_2.0&oldid=1179906793#Web_1.0 (visited on 2023-11-24).

      The Web 2.0 article highlights how the internet shifted from static pages to platforms where users could actively create and share content. It’s interesting to see how this change reflects what Standage points out—today’s social media isn’t entirely new but a digital take on the old ways people have always communicated and interacted, like through pamphlets or public forums.

    1. Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories [e2]. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      I hadn’t really thought about graffiti or handwritten books as early forms of social media before reading this. It’s fascinating to see how people have always found ways to share information, opinions, or even gossip long before the internet. It makes me think that our need to communicate publicly and socially is deeply human, not just a modern trend driven by technology.

  21. social-media-ethics-automation.github.io
    1. Julia Evans. Examples of floating point problems. January 2023. URL: https://jvns.ca/blog/2023/01/13/examples-of-floating-point-problems/ (visited on 2023-11-24).

      Julia Evans walks through common issues with floating point numbers, showing how operations like 0.1 + 0.2 may not equal 0.3 due to how computers store decimals in binary. She highlights that these small errors can lead to unexpected bugs, especially in programs that rely on precise math.
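
      Her headline example is easy to reproduce, along with the usual mitigations (a tolerance-based comparison, or decimal arithmetic when exact decimal results matter):

      ```python
      import math
      from decimal import Decimal

      print(0.1 + 0.2)         # 0.30000000000000004: binary floats can't store 0.1 exactly
      print(0.1 + 0.2 == 0.3)  # False, the classic surprise

      # Mitigation 1: compare within a tolerance instead of testing equality.
      print(math.isclose(0.1 + 0.2, 0.3))  # True

      # Mitigation 2: use decimal arithmetic for money-like values.
      print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly
      ```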

    1. Now, there are many reasons one might be suspicious about utilitarianism as a cheat code for acting morally, but let’s assume for a moment that utilitarianism is the best way to go. When you undertake your utility calculus, you are, in essence, gathering and responding to data about the projected outcomes of a situation. This means that how you gather your data will affect what data you come up with.

      The idea that utility calculus relies on the data we choose to include really made me think. It highlights how easy it is to justify decisions as ethical when we only focus on the data that supports our goals. For example, social media platforms might prioritise engagement while overlooking the negative impact on users' mental health.

  22. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL: https://www.vice.com/en/article/mg7g3y/how-to-make-a-not-racist-bot (visited on 2023-12-02).

      I looked into the article, and it added some really interesting context to the section on Microsoft Tay. The author points out that if you design a system that learns from the internet without any filters, you are basically guaranteeing it will also reflect the worst parts of it.

    1. 3.2.3. Corrupted bots: As a final example, we wanted to tell you about Microsoft Tay, a bot that got corrupted. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day. Read more about what went wrong from Vice How to Make a Bot That Isn’t Racist [c14]

      Reading about the Tay bot really stuck with me. It's wild how quickly it went from an experiment to a disaster because of how people interacted with it. It makes me wonder if it is possible for a bot to learn safely from the internet without picking up the worst traits of human behaviour. Even though it was an experiment in machine learning, it also shows how much responsibility developers, and the audiences that interact with bots, carry once those bots are put out in public spaces.
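
      A minimal sketch of the kind of input gate Jeong's article argues for (the blocklist and learning loop here are hypothetical simplifications, not Microsoft's code): filter what the bot is allowed to learn from before it ever repeats anything.

      ```python
      # Hypothetical "learn from users, but filter first" loop. A real system
      # would need much more (classifiers, human review, rate limits), but
      # even a crude gate changes what the bot can absorb.
      BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

      learned_phrases = []

      def learn_from(tweet: str) -> None:
          words = set(tweet.lower().split())
          if words & BLOCKLIST:
              return  # refuse to learn from flagged input
          learned_phrases.append(tweet)

      learn_from("have a great day")
      learn_from("badword1 badword1")  # silently dropped
      print(learned_phrases)           # ['have a great day']
      ```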