18 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Ableism. December 2023. Page Version ID: 1188412565. URL: https://en.wikipedia.org/w/index.php?title=Ableism&oldid=1188412565

      This article on ableism explains that it is a form of discrimination and social prejudice against people with disabilities. It highlights how ableism operates through stereotypes, assumptions, and societal structures that limit opportunities and shape how disabled people are perceived and treated. It also shows that ableism exists at multiple levels, including individual attitudes, cultural beliefs, and institutional systems like education and healthcare. In other words, it is deeply embedded in society and affects many aspects of life for people with disabilities.

    1. In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

      This part of the reading stood out to me: the contrast between putting the burden on individuals versus on designers was interesting. The section on coping strategies made me realize how often people are forced to quietly adapt themselves to systems that were never designed with them in mind, like students sitting at the front of the classroom without knowing why they struggle to see. That idea felt frustrating because it normalizes the expectation that individuals should adjust rather than questioning the design itself. I personally think the shift toward universal design and ability-based design is more ethical and sustainable. It reminds me of discussions in UX design about designing for edge cases, which ends up benefiting everyone as a whole. I feel that companies should treat accessibility as a core requirement instead of an extra feature.

    1. Emma Bowman. After Data Breach Exposes 530 Million, Facebook Says It Will Not Notify Users. NPR, April 2021. URL: https://www.npr.org/2021/04/09/986005820/after-data-breach-exposes-530-million-facebook-says-it-will-not-notify-users (visited on 2023-12-06).

      This source is an article about Facebook’s data breach affecting over 530 million users, and it highlights how the company chose not to notify individuals whose data was exposed. The article explains that even though sensitive information like passwords was not leaked, details such as phone numbers can still pose serious risks because they act as key identifiers that can be used for scams or identity misuse. What stood out to me is how this connects to the chapter’s discussion of privacy as a form of protection rather than just secrecy. If users are not informed, they lose the chance to take action to protect themselves. I personally think this raises ethical concerns about how much responsibility companies should have to be transparent with users, especially when the consequences of exposure can still be harmful.

  3. social-media-ethics-automation.github.io
    1. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      This line from the reading stood out to me because it shows that private messaging on social media isn’t actually fully private, which feels like an invasion of privacy. It makes me think about how often we trust platforms with personal conversations without really considering who else might have access. From a user perspective, there’s a tradeoff: we want privacy, but we also want safety features like being able to report harassment or threats. That tension reminds me of discussions in this course about data mining and surveillance, where companies justify access to data for protection or improvement, but it can also blur ethical boundaries. I also wonder where the line should be drawn ethically for these social media companies.

  4. Apr 2026
  5. social-media-ethics-automation.github.io
    1. Nicole Nguyen. Here's Who Facebook Thinks You Really Are. September 2016. Section: Tech. URL: https://www.buzzfeednews.com/article/nicolenguyen/facebook-ad-preferences-pretty-accurate-tbh (visited on 2024-01-30).

      This source connects well to the chapter; the article is about Facebook’s ad preferences. What stood out to me is how Facebook builds a profile of you not just from what you post, but from things like the pages you interact with, the posts you like, and the websites you visit, through tools like the Facebook Pixel. It’s really surprising that these profiles can be so detailed and personal, yet also feel inaccurate at the same time. This connects back to the chapter on data mining and how platforms can infer things like political views or interests without users explicitly sharing them. It feels invasive knowing that Facebook might be tracking activity outside of its own app. At the same time, it shows how powerful data mining is in shaping what we see online.

    1. By looking at enough data in enough different ways, you can find evidence for pretty much any conclusion you want.

      This part of the chapter really stood out to me because if you look at enough data, you can basically find support for almost any conclusion or interpretation you want. It made me think about how often I see graphs and statistics online and just assume they mean something important without really questioning them. It reminds me of what we’ve been learning about data visualization and how easy it is to make something look convincing depending on what data you choose to show. It also made me realize that data mining isn’t just technical but ethical as well. If people can find patterns that match what they already believe, then it’s easy to spread misleading ideas without realizing it. I wonder if platforms should do more to limit this kind of thing or if it’s just on users to think more critically about the data they see online.
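The chapter's point about finding evidence for any conclusion can be made concrete with a toy simulation (entirely made-up data, not from the reading): if you compare enough unrelated variables, some pair will look strongly correlated by pure chance.

```python
import random
import statistics

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 50 completely unrelated "metrics", each with 20 random observations.
metrics = [[random.random() for _ in range(20)] for _ in range(50)]

# Search every pair for the strongest-looking relationship.
best = max(
    abs(correlation(metrics[i], metrics[j]))
    for i in range(len(metrics))
    for j in range(i + 1, len(metrics))
)
print(f"strongest 'correlation' found among pure noise: {best:.2f}")
```

With over a thousand pairs to choose from, the best-looking pair appears meaningfully correlated even though every series is random noise, which is exactly how cherry-picked comparisons can support any story.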

  6. social-media-ethics-automation.github.io
    1. Troll (slang). December 2023. Page Version ID: 1188437550. URL: https://en.wikipedia.org/w/index.php?title=Troll_(slang)&oldid=1188437550 (visited on 2023-12-05).

      The research discussed in this article argues that internet trolling is strongly linked to traits like sadism, meaning that trolls often genuinely enjoy causing emotional harm to others rather than just joking around. I found this especially interesting because it challenges the common idea that trolling is harmless or just for fun. Instead, it suggests that for some people, trolling reflects deeper personality tendencies, which makes the behavior more concerning.

    1. Additionally, the inauthentic arguments have long been observed, and were memorably explored by Jean-Paul Sartre as “Bad Faith” [g12]. “Bad faith” here means pretending to hold views or feelings, while not actually holding them (this may be intentional, or it may be through self-deception).

      What stood out to me in this part of the reading is the idea that trolling didn’t actually start with the internet but has always been part of human behavior. The quote about “boys throwing stones at frogs for fun” really made me pause because it highlights how people can find enjoyment in causing harm even when the other side is genuinely affected. It made me think about how online spaces amplify something that already exists rather than creating it from scratch. The connection to the concept of “bad faith” is especially interesting, since it frames trolling not just as joking or pranking, but as intentionally dishonest argumentation meant to disrupt others. It makes me question whether responding to trolls is even productive, since they’re not trying to engage in respectful, good-faith discussion in the first place; it’s almost like ragebait.

  7. social-media-ethics-automation.github.io
    1. Zoe Schiffer. She created a fake Twitter persona — then she killed it with COVID-19. The Verge, September 2020. URL: https://www.theverge.com/21419820/fake-twitter-persona-covid-death-munchausen-metoostem-co-founder (visited on 2023-11-24).

      This article is about the fake Twitter persona @sciencing_bi, describing how BethAnn McLaughlin created an entirely fictional identity, even going as far as to announce the persona’s death from COVID-19. This led to real grief and emotional responses from followers. The story highlights how inauthenticity online can go beyond harmless performance and actually manipulate people’s trust in harmful ways. People believed they were forming a genuine connection with a marginalized academic voice, when it was all fake. It raises the question of where the line should be drawn between acceptable online personas and harmful deception, especially when real people are affected.

  8. social-media-ethics-automation.github.io
    1. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress.

      One idea that stood out to me is how authenticity isn’t just about being real; it’s about whether the connection being presented actually matches reality. It made me rethink influencers and online personalities, because even if the content is curated, it can still feel personal and trustworthy in a way that isn’t entirely accurate. I also found the link between authenticity, vulnerability, and trust interesting. The chapter focuses on the idea that we value authenticity because our well-being becomes tied to others when we form connections, which helps explain why people feel so upset when they realize they’ve been misled. It’s not just being tricked, but the feeling that your trust was misplaced. At the same time, I don’t fully agree that inauthenticity is entirely negative in all cases. Things like meme accounts or online personas can still build meaningful communities when people understand the type of interaction they’re getting into.

  9. social-media-ethics-automation.github.io
    1. Text messaging. November 2023. Page Version ID: 1184681792. URL: https://en.wikipedia.org/w/index.php?title=Text_messaging&oldid=1184681792 (visited on 2023-11-24).

      I found this source interesting; it talks about SMS texting and how it evolved into a dominant communication tool. It mentions that text messaging originally came from the Short Message Service, which was constrained to about 160 characters, leading users to abbreviate words into the shorthand we still use today, like “lol” or “u”. Something that started as a technical limitation ended up influencing language and culture globally. Early communication tools like texting weren’t just about sending messages; they also shaped how people interact and express themselves. Design constraints in tech can unintentionally create long-term behavioral changes.
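The 160-character constraint mentioned above can be sketched in a few lines of Python. This is a naive illustration: real concatenated SMS reserves some characters per part for segmentation headers (roughly 153 usable characters per part in GSM-7 encoding), which this toy version ignores.

```python
# Toy illustration of the classic SMS constraint: a single GSM-7 message
# holds 160 characters, so longer texts must be split into parts.
# (Real concatenated SMS reserves header space per part; ignored here.)
SMS_LIMIT = 160

def split_sms(text, limit=SMS_LIMIT):
    """Split a message into naive fixed-size segments."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

msg = "brb " * 50  # 200 characters of very 1990s shorthand
parts = split_sms(msg)
print(len(msg), "chars ->", len(parts), "parts")
```

Every character over the limit costs a whole extra message, which is exactly the pressure that made abbreviations like "u" and "lol" worth using.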

    1. In the 1980s and 1990s, Bulletin board system (BBS) [e6] provided more communal ways of communicating and sharing messages.

      This system stood out to me in this chapter because it shows how much effort and intention went into communication during the Web 1.0 era compared to today. For example, having to create your own personal webpage or actively join specific spaces like a BBS meant that users had to be far more deliberate about where and how they interacted. This made me think about how different it was from now, where content is constantly pushed to us through algorithms: back then you had to go find conversations, whereas now the conversations find you. I feel like this shift has contributed to things like doomscrolling because there is far less friction in accessing so much content. I do wonder whether the web back then created more or less meaningful interaction than it does now, because there weren’t as many features, but people actively chose to be a part of something.

  10. social-media-ethics-automation.github.io
    1. Caroline Delbert. Some People Think 2+2=5, and They’re Right. Popular Mechanics, October 2023. URL: https://www.popularmechanics.com/science/math/a33547137/why-some-people-think-2-plus-2-equals-5/ (visited on 2023-11-24).

      From this article, the writer argues that math depends on context, not just the fixed rules we’ve all known it by. For example, rounding or real-world situations can make something like 2 + 2 equal 5 in practice. This anecdote stood out to me because it shows that numbers and metrics aren’t always fully objective but can be subjective. Things like ratings are shaped by how we define and measure them, which makes me more cautious about trusting data at face value because of how easily it can be manipulated.
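The article's rounding argument can be checked in a couple of lines: two measured values of 2.4 each display as 2 once rounded, but their true sum (4.8) rounds to 5.

```python
# Rounding makes "2 + 2 = 5" plausible: each measurement of 2.4 rounds
# down to 2, while the actual sum (4.8) rounds up to 5.
a, b = 2.4, 2.4
print(f"{round(a)} + {round(b)} = {round(a + b)}")  # prints "2 + 2 = 5"
```

The reported numbers are all "correct" individually, which is what makes displayed metrics like rounded ratings subtly misleading.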

    1. All data is a simplification of reality. We’ve talked about how we represent data on a computer, but let’s now step back and think about the nature of data itself.

      I thought this statement that all data is a simplification of reality was really interesting, and I could relate it to my own experience. It made me rethink how I usually treat numbers and datasets as objective truth. For example, in my own work with data, like in a SQL database, I’ve always focused on getting the “correct” metric. But this reading made me realize that even before analysis begins, there are already subjective decisions being made, like what counts as a user, a transaction, or even an active account. That’s similar to the Twitter bot example, where changing the definition of a spam bot can completely change the final percentage. I also think this connects to product and tech decisions: in product management, metrics like engagement or retention seem straightforward, but they’re actually based on how we define user behavior. If those definitions are flawed or overly simplified, then the decisions we make based on them can also be misleading. So it’s not just a technical issue but an ethical one too, because simplifications can shape real outcomes.
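The definition-sensitivity point can be shown with a tiny sketch. The event log and both definitions of "active user" below are entirely hypothetical, invented for illustration; the point is that the same data yields different headline numbers depending on which reasonable definition you pick.

```python
from datetime import date

# Hypothetical event log: (user, day, action). Made up for illustration.
events = [
    ("ana",  date(2024, 3, 1), "post"),
    ("ana",  date(2024, 3, 2), "like"),
    ("ben",  date(2024, 3, 1), "login"),
    ("cara", date(2024, 3, 3), "login"),
    ("cara", date(2024, 3, 4), "login"),
]

# Definition 1: "active" means logged in at least once.
active_v1 = {user for user, _, action in events if action == "login"}

# Definition 2: "active" means actually posted or liked something.
active_v2 = {user for user, _, action in events if action in ("post", "like")}

print("active (any login):", len(active_v1))
print("active (engaged):  ", len(active_v2))
```

Neither definition is wrong, yet one reports twice as many "active users" as the other, which mirrors how redefining "spam bot" changes the Twitter percentage in the chapter's example.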

  11. social-media-ethics-automation.github.io
    1. Sean Cole. Inside the weird, shady world of click farms. January 2024. URL: https://www.huckmag.com/article/inside-the-weird-shady-world-of-click-farms (visited on 2024-03-07).

      This article explains how dystopian-looking click farms use large numbers of phones and accounts to artificially boost likes, follows, and engagement on social media. This tricks the algorithm into promoting content and makes it seem more popular than it really is. The article highlights how this can spread misinformation, influence what people believe and see on the internet, and ultimately cheat the system for visibility online.

    1. [Morten] Bay found that 50.9% of people tweeting negatively about “The Last Jedi” were “politically motivated or not even human,” with a number of these users appearing to be Russian trolls. The overall backlash against the film wasn’t even that great, with only 21.9% of tweets analyzed about the movie being negative in the first place. https://www.indiewire.com/2018/10/star-wars-last-jedi-backlash-study-russian-trolls-rian-johnson-1202008645/ [c11]

      This specific statistic about the backlash to "The Last Jedi" was really surprising to me, especially that over half of the negative tweets weren’t even from real people. This anecdote made me rethink how I interpret online reactions, knowing they could be fake. It also made me realize how easy it is to assume what we see on social media is definitive, real public opinion, when it can be heavily manipulated by fake accounts. It makes me question how often I’ve formed opinions based on something that wasn’t representative of real people. This was an eye-opening reminder not to take online backlash at face value, since it might not even be real.

  12. Mar 2026
    1. In this class, you will be building up a ‘toolbox’ for thinking about ethics.

      I really like this line because it expresses that ethics isn’t something with just one right answer. I like this framing, but it also made me a little uncomfortable, because if ethics is just a set of tools we choose from, does that mean people can pick the framework that justifies what they already want to do? For example, a social media company could use a consequence-based approach to justify something like data collection, arguing that it benefits people overall while ignoring the violation of privacy. Someone else could use a rights-based framework to argue the opposite. It almost feels like ethics can be bent, a flexibility that could be helpful yet also kind of dangerous. In my real life, like when I work on group projects, sometimes people aren’t actually disagreeing on the facts, but on what "matters more" ethically. The toolbox metaphor captures this well, but it makes me wonder to what extent we allow people to use ethics to selectively support their own interests.

    1. There are many more ethics frameworks that we haven’t mentioned here. You can look up some more here.

      Out of curiosity, I followed the link to see other ethics frameworks and came across the Social Networking and Ethics page, which felt really relevant to this course. I found it interesting how it separates ethical impacts into direct, indirect, and structural categories. In practice, though, these categories blur together far more than the framework suggests: misinformation on social media isn’t just a direct harm between users; it’s also shaped by platform algorithms and structure, and amplified through indirect user behavior. I’m curious whether treating these as distinct categories oversimplifies how responsibility is shared, like when we discussed in class who is responsible for coding the Bluesky bot versus running it. Harm seems to come from individual users, but in reality, the design of the platform and its business models are just as responsible. So instead of treating these categories as separate, as the article does, I think it’s more useful to see them as responsibilities that entail one another, making it easier to see how much power platforms have when certain behaviors show up on their sites.