19 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Ted Chiang. Will A.I. Become the New McKinsey? The New Yorker, May 2023. URL: https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey (visited on 2023-12-10).

      I read this article by Ted Chiang called “Will A.I. Become the New McKinsey?” and it got me thinking. Basically, Chiang says that AI is super good at crunching tons of data and spotting patterns, but it can’t really deal with the messy stuff like office politics, ethical questions, or understanding a company’s moral dilemmas. Companies have tried using it for big decisions, but the results have been hit or miss. Chiang thinks AI will probably end up helping consultants instead of taking over their jobs; it’s more like a powerful sidekick than a full replacement.

    1. If you could magically change anything about how people behave on social media, what would it be?

      Honestly, I’ve noticed online chats can turn ugly so fast. I wish people would take a moment before hitting post/send and think about how their actions would affect other people’s mental and emotional state. If we all imagined the person behind the screen, we’d probably choose kinder words. I feel a lot of arguments happen because we forget there’s a real person reading. So, I’d love for empathy to just kick in automatically, like a little reminder that there’s an actual human on the other side. If that happened, I think trolling and harassment would drop significantly, and conversations would feel a lot more supportive.

    1. What if government regulations said that social media sites weren’t allowed to make money based on personal data / targeted advertising? What other business models could they use? How would social media sites be different?

      Without targeted ads, they’d need other funding, maybe through subscriptions or contextual ads based on what you’re currently looking at instead of your personal profile. Your feed would likely show posts by time or group activity rather than the stuff they predict you’ll click on. Smaller, privacy-friendly apps could pop up, and big platforms might lose users if they start charging. You’d get better privacy, but some people might be left out if they can’t afford a fee.

  3. May 2025
  4. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Face (sociological concept). November 2023. Page Version ID: 1184174814. URL: https://en.wikipedia.org/w/index.php?title=Face_(sociological_concept)&oldid=1184174814 (visited on 2023-12-10).

      The article explores “face” as a universal social concept tied to a person’s dignity, reputation, and standing within a community. It defines "face" and shows how it must be constantly managed in everyday interactions. The source then talks about cultural variations: in China, “mianzi” (prestige) and “lian” (moral integrity) play distinct roles, while many Asian, Slavic, Middle Eastern, and African societies have their own face-related customs and expressions.

    1. What do you consider to be the most important factors in making an instance of public shaming bad?

      When people come together to shame someone online without knowing the full story, it turns nasty. Rumors spread, context gets lost, and the person on the receiving end never even gets a chance to explain themselves. Before long, strangers are digging up private details or piling on insults, and it stops being about a mistake; it becomes personal. That kind of pressure can lead to serious anxiety or depression, all for a moment of viral outrage. And because everyone’s hiding behind a screen name, there’s no real accountability or empathy, just cruelty.

  5. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Devin Coldewey. Study finds Reddit's controversial ban of its most toxic subreddits actually worked. TechCrunch, September 2017. URL: https://techcrunch.com/2017/09/11/study-finds-reddits-

      Researchers at Georgia Tech looked into what happened after Reddit shut down r/coontown and r/fatpeoplehate. They found that hate speech from those users dropped by 80–90%, and many simply left the site. A handful resurfaced on platforms like Gab, but they didn’t spread more hate there. In a nutshell, the study suggests that banning toxic communities really does clean things up.

    1. So how can platforms and individuals stop themselves from being harassed?

      I get that muting and blocking can give you a quick break, but it hardly stops a pile‑on. Going legal is even tougher since the harassment tactics change so fast. I wish platforms would step up by shutting down toxic groups, improving reporting, and backing tools like mass‑blocking apps. When users and platforms team up, that’s when I think it really works.

  6. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Crowdsourcing. December 2023. Page Version ID: 1188348631. URL: https://en.wikipedia.org/w/index.php?title=Crowdsourcing&oldid=1188348631#Historical_examples (visited on 2023-12-08).

      The Wikipedia article defines crowdsourcing as sourcing ideas, services, or funds from large online groups. It traces roots back to the 1714 Longitude Prize and the Oxford English Dictionary before walking through modern platforms like Amazon Mechanical Turk and Kickstarter. It also breaks down models like crowd voting, creative contests, and citizen science to show how joint effort drives innovation and problem solving.

    1. Do you think there are ways a social media platform can encourage good crowdsourcing and discourage bad crowdsourcing?

      I think platforms can really steer crowdsourcing in a good direction by shining a spotlight on helpful posts, like featured comments or nifty badges for top contributors. Running fun, well-moderated question-and-answer rounds or community projects with clear rules keeps everyone on the same page. And to nip the bad stuff in the bud, they should have straightforward policies against misinformation and hate speech, plus quick verification prompts like “Are you sure this is verified?” before you hit share. That way we get a more collaborative space.

  7. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Maggie Fick and Paresh Dave. Facebook's flood of languages leaves it struggling to monitor content. Reuters, April 2019. URL: https://www.reuters.com/article/idUSKCN1RZ0DL/ (visited on 2023-12-08).

      Facebook’s platform is offered in over 110 languages, yet its detailed “community standards” rules exist in only about half of them. The company relies on roughly 15,000 human reviewers and machine-learning tools to police content, but these systems only cover a few languages, allowing harmful posts in under-served tongues to slip through. In practice, hateful content in languages like Burmese or Amharic has helped fuel ethnic violence in Myanmar and racist rants during Fiji’s elections. Governments from Australia to Singapore are warning of hefty fines or even jail time if Facebook doesn’t fix this fast.

    1. Have you ever faced consequences for breaking social media rules (or for being accused of it)?

      Yes, my Facebook Marketplace account was under review for almost two weeks. I am still not sure what community guidelines I did not follow properly for it to result in that. I never got access to that account again and had to create a new account altogether.

  8. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Anya Kamenetz. Selfies, Filters, and Snapchat Dysmorphia: How Photo-Editing Harms Body Image. Psychology Today, February 2020. URL: https://www.psychologytoday.com/us/articles/202002/selfies-filters-and-snapchat-dysmorphia-how-photo-editing-harms-body-image (visited on 2023-12-08).

      In her Psychology Today piece, Kamenetz explains how adding filters to our selfies, especially on Snapchat and Instagram, can warp what we think is beautiful, leaving us anxious about our own looks and even pushing some people to consider real-life cosmetic tweaks just to match their filtered faces. This trend is called "Snapchat dysmorphia." It aligns with what experts say about how filter usage contributes to broader mental health challenges.

    1. In what ways have you found social media bad for your mental health and good for your mental health?

      Seeing everyone’s best moments makes me compare myself, get FOMO, and feel down. Late-night scrolling, just before I go to bed, wrecks my sleep and leaves me feeling drained. But it’s not all bad: I can stay close to friends who live far away, and even though my parents are thousands of miles away, I am still a part of everything in their lives only because of social media. When I mute toxic accounts and set a timer on my apps, social media actually becomes a fun way to learn and connect instead of stressing me out.

  9. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition edition, October 2013. ISBN 978-1-62040-283-2.

      Reading how Roman couriers spread news just like we share TikToks today made me laugh. History really is repeating itself in the craziest way imaginable. It got me thinking: what really spreads ideas is the simple act of sharing with your friends. Luther’s German pamphlets going gangbusters in the 1500s feel a lot like a meme blowing up now, just with more ink and fewer likes.

    1. 12.1. Evolution and Memes

      The trio of replication, variation, and selection immediately made me think of TikTok trends: someone copies a video (that's replication), gives it their own spin (that's variation), and then the algorithm pushes the versions that get the most likes (that's selection!). It’s crazy how these cultural memes really do evolve just like living things. I wonder how the fitness of a meme changes when it jumps from one platform to another...
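      The replication–variation–selection loop described above can even be sketched as a toy Python simulation. Everything here is hypothetical (the meme names, the "spin" suffixes, and a stand-in fitness function), just to illustrate the analogy, not how any real algorithm works:

      ```python
      import random

      # Toy sketch of meme evolution: replication, variation, selection.
      # All names and the fitness rule are made up for illustration.

      def mutate(meme):
          """Variation: each copy gets its own random 'spin'."""
          spins = ["-remix", "-duet", "-parody"]
          return meme + random.choice(spins)

      def fitness(meme):
          """Selection pressure: a stand-in for 'likes' on a version."""
          return len(meme)  # pretend longer remix chains get more engagement

      # Replication: everyone starts by copying the same video.
      population = ["dance-challenge"] * 4

      for generation in range(3):
          population = [mutate(m) for m in population]   # variation
          population.sort(key=fitness, reverse=True)     # selection
          population = population[:2] * 2                # top versions get re-copied

      print(population[0])  # the most 'fit' meme lineage after 3 rounds
      ```

      Even this tiny loop shows the dynamic: the versions that score highest get copied again, so their traits accumulate over generations, exactly the pattern the chapter describes for cultural memes.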

  10. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      In his article, Whittaker discloses how Facebook doesn’t even give us the choice to keep our phone number private in its look-up feature, so basically anyone can potentially find us just by typing in our digits. It really shows how default settings can sneakily push us into sharing more than we want.

    1. Consider impact vs. intent. For example, consequentialism only cares about the impact of an action. How do you feel about the importance of impact and intent in the design of recommendation algorithms?

      I think impact matters most: even well-intentioned algorithms can fuel outrage or misinformation if their real-world effects aren’t tracked. So platforms should watch actual outcomes, not just code; they should allow users to easily flag unwanted content and potentially test tweaks on small groups first. By focusing on impact while keeping good intent, recommendations can help more than they harm.

  11. Apr 2025
  12. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Inclusive design. December 2023. Page Version ID: 1188074097. URL: https://en.wikipedia.org/w/index.php?title=Inclusive_design&oldid=1188074097 (visited on 2023-12-07).

      I really like how this Wikipedia page challenges the notion of "fixing" users by pointing out that there is a disconnect between designs and what people actually want. The part about attracting actual users right away and providing various methods to engage, such as through customizable interfaces, got me thinking about all the little details we frequently overlook. Compared with trying to force everyone into a single solution, it feels more imaginative and human.

  13. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Many of the disabilities we mentioned above were permanent disabilities, that is, disabilities that won’t go away. But disabilities can also be temporary disabilities, like a broken leg in a cast, which may eventually get better. Disabilities can also vary over time (e.g., “Today is a bad day for my back pain”). Disabilities can even be situational disabilities, like the loss of fine motor skills when wearing thick gloves in the cold, or trying to watch a video on your phone in class with the sound off, or trying to type on a computer while holding a baby.

      I really connected with the idea of situational disabilities. During spring break, I fractured my ankle and suddenly doing things in my kitchen became a real struggle. I couldn’t reach the top shelf or bend down to grab things on the lower racks. It made me notice how much of our world assumes we’re all fully able-bodied. I keep thinking: what if every space was made for everyone from the start? No more top-shelf hassles or tight staircases, just something that works for everyone.