38 Matching Annotations
  1. Jun 2025
  2. social-media-ethics-automation.github.io
    1. Shoshana Zuboff. The age of surveillance capitalism: the fight for a human future at the new frontier of power. 2019. URL: https://orbiscascade-washington.primo.exlibrisgroup.com/permalink/01ALLIANCE_UW/8iqusu/alma99162177355601452.

      Zuboff's work reveals the ethical ramifications of unchecked innovation. Her theory of "surveillance capitalism" positions data harvesting as the organizing principle of an emerging economic order built on the commodification of human experience. This raises ethical questions about consent, autonomy, and privacy, questions that inventors and tech firms typically do not consider before releasing new technologies.

    1. 21.2. Ethics in Tech

      This chapter points out how society consistently ignores ethical concerns with every new wave of technology. Inventors tend to adore their inventions without considering the consequences. These ethical issues are not abstract; they have the potential to harm well-being, freedom, or jobs.

  3. social-media-ethics-automation.github.io
    1. Olivia Solon. 'It's digital colonialism': how Facebook's free internet service has failed its users. The Guardian, July 2017. URL: https://www.theguardian.com/technology/2017/jul/27/facebook-free-basics-developing-markets (visited on 2023-12-10).

      I think Solon's framing of this as digital colonialism is especially powerful: it points out that tech companies, under the guise of benevolence, often replicate patterns of control, dependency, and cultural dominance that echo historic colonization.

    1. Example: One Laptop Per Child

      A revealing aspect of the OLPC example is how it exposes the tech industry's habit of exporting solutions without importing perspectives. The failure of OLPC wasn't just about hardware flaws or poor instructions—it stemmed from a deeper colonialist mindset that presumed innovation must flow from the Global North to the Global South.

  4. social-media-ethics-automation.github.io
    1. Noah Kulwin. Shoshana Zuboff Talks Surveillance Capitalism’s Threat to Democracy. Intelligencer, February 2019. URL: https://nymag.com/intelligencer/2019/02/shoshana-zuboff-q-and-a-the-age-of-surveillance-capital.html (visited on 2023-12-10).

      Zuboff's model matters not merely because it explains how Meta and other firms work, but because it explains how their business model inherently alters the balance of power between corporations and citizens. Her concept of "instrumentarian power" is a great way of explaining that platforms aren't just serving consumers anymore—they're shaping behavior at scale, using prediction and modification to dictate outcomes. The interesting thing is how Zuboff frames this as a crisis for democracy, rather than merely a privacy crisis.

    1. Surveillance Capitalism

      Meta's surveillance capitalism business model reflects a deep moral tension between maximizing profit and respecting user autonomy. Although it may be rational from a capitalist point of view to harvest behavioral surplus and predict user behavior, the fact that Meta profits from personal data collected under conditions users are rarely fully informed about raises genuine concerns about digital rights and privacy.

  5. May 2025
  6. social-media-ethics-automation.github.io
    1. Paul Billingham and Tom Parr. Enforcing social norms: The morality of public shaming. European Journal of Philosophy, 28(4):997–1016, December 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/ejop.12543 (visited on 2023-12-10), doi:10.1111/ejop.12543.

      What's fascinating about this paper is that it doesn't entirely dismiss public shaming, but instead tries to figure out when it could be morally permissible. Their insistence on reintegration as a necessary condition (that the point of shaming should in fact be to bring the person back into society) is interesting and genuinely difficult to think through. In today's cancel culture, where permanent exclusion is the norm, this principle runs counter to the trend and makes me question the purpose of condemning bad behavior.

    1. Normal People

      People participate in public shaming more for catharsis, attention, or social solidarity than for justice. This performative dimension, magnified by algorithms, too often makes public shaming a matter of spectacle rather than moral accountability. It raises the ethical question of how to ensure that calls for justice are not hijacked by viral outrage more concerned with clicks than fairness.

  7. social-media-ethics-automation.github.io
    1. Alice E. Marwick. Morally Motivated Networked Harassment as Normative Reinforcement. Social Media + Society, 7(2):20563051211021378, April 2021. URL: https://doi.org/10.1177/20563051211021378 (visited on 2023-12-10), doi:10.1177/20563051211021378.

      What's interesting in this source is the way it contradicts the common assumption that online harassment is always trolling or hate. Here, it identifies how some users believe their harassment is a positive force, as if they're enforcing common values or punishing wrongdoing. This perspective explains things like "cancel culture" and those mob attacks on celebrities, where people believe they're doing justice.

    1. Harassment is behavior which uses a pattern of actions which are permissible by law, but still hurtful.

      This is an important but underrated truth about harassment: its power lies in repetition and ambiguity. That harassment often takes the form of legally innocent actions, repeated over time in a manner that constitutes real harm, highlights how ill-suited legal systems are to deal with diffuse but relentless abuse. This is especially so on the internet, where particular comments or acts may be innocuous in isolation but collectively constitute an organized or relentless campaign of harm.

  8. social-media-ethics-automation.github.io
    1. Jim Hollan and Scott Stornetta. Beyond being there. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '92, 119–125. New York, NY, USA, June 1992. Association for Computing Machinery. URL: https://dl.acm.org/doi/10.1145/142750.142769 (visited on 2023-12-08), doi:10.1145/142750.142769.

      What is innovative in this paper is its bold rejection of the assumption that the purpose of communication technologies is to replicate face-to-face communication. Instead, it encourages system designers to take advantage of the unique strengths of digital environments—e.g., asynchronous communication, flexible accessibility, and record-keeping—that can better address the flaws of in-person communication.

    1. There have been many efforts to use computers to replicate the experience of communicating with someone in person, through things like video chats, or even telepresence robots [p5]. But there are ways that attempts to recreate in-person interactions inevitably fall short and don’t feel the same. Instead though, we can look at different characteristics that computer systems can provide, and find places where computer-based communication works better, and is Beyond Being There [p6] (pdf here [p7]).

      Instead of always attempting to imitate face-to-face conversation, it contends we should instead capitalize on the unique affordances of digital tools—such as asynchronous collaboration, anonymity, and archiving—as their strengths rather than weaknesses. For example, Wikipedia would not be successful as a face-to-face gathering but shines in a digital format since people can contribute at their own rate, anonymously, and with a permanent record.

  9. social-media-ethics-automation.github.io
    1. David Gilbert. Facebook Is Ignoring Moderators’ Trauma: ‘They Suggest Karaoke and Painting’. Vice, May 2021. URL: https://www.vice.com/en/article/m7eva4/traumatized-facebook-moderators-told-to-suck-it-up-and-try-karaoke (visited on 2023-12-08).

      I think Gilbert exposes how Facebook's outsourced moderators are left to deal with violent and psychologically damaging content on a regular basis with minimal mental health support. The suggestion that painting or karaoke can heal trauma caused by long-term exposure to violence, child abuse, or suicide is not just tone-deaf—it's reflective of a deep structural contempt for worker well-being. This piece strongly conveys that ethical content moderation is not just about what is removed, but about whom corporations step on in the process.

    1. 15.1. Types of Content Moderator Set-Ups

      One of the main issues this part raises is the invisibility and emotional toll of human content moderators, especially those in specialized moderation units. These workers, often outsourced and underpaid, habitually view gruesome, violent, or offensive content so that others don't have to, yet they are rarely discussed within the discourse of platform safety. So even though Facebook says it has all these moderation systems in place, the truth is that much of the mental labor falls on poorly trained, low-paid workers from low-income nations who receive little support or appreciation.

  10. social-media-ethics-automation.github.io
    1. Karen Hao. How Facebook got addicted to spreading misinformation. MIT Technology Review, March 2021. URL: https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/ (visited on 2023-12-08).

      This is the opposite of the typical script: Facebook is not just struggling to police bad content; it is algorithmically incentivized to disseminate it. That raises essential ethical questions about platform design itself rather than content policy. It also suggests that hiring more moderators or adjusting community standards will never be enough unless the platforms themselves change how their algorithms rank and suggest content.

    1. free-speech

      Something that gets me thinking here is how even the websites that say they're all about "free speech" still end up moderating things—like, for example, spam. It shows you a pretty big contradiction: total free speech is usually not actually possible. Even the most laid-back sites have to put up some boundaries, not because of morals or the law, but just to make things work. In this respect, then, the ban on spam is not ideological but economic—without some moderation, the platform buckles under its own weight.

  11. social-media-ethics-automation.github.io
    1. Robinson Meyer. Everything We Know About Facebook’s Secret Mood-Manipulation Experiment. The Atlantic, June 2014. URL:

      What truly surprises me in this article is not only the sheer moral collapse of conducting an experiment without the specific consent of the users involved, but even more the utterly nonchalant manner in which powerful social networking platforms are able to exert influence over people's emotions. This goes far beyond targeted advertising or engagement metrics—it enters the more sinister realm of actively manipulating individuals' moods and states of mind without any hint of transparency or accountability.

    1. Digital Detox

      Detox culture treats technology as a toxin that we can simply wash away, rather than something that's present in our social, economic, and emotional landscapes. This is a symptom of a larger issue in digital ethics: if we're moralizing about individual behavior like urging people to "just log off", we're at risk of ignoring the structural problems—like profit-maximizing algorithms, exploitative content design, or the psychological manipulation embedded in app design.

  12. social-media-ethics-automation.github.io
    1. Tom Standage. Writing on the Wall: Social Media - The First 2,000 Years. Bloomsbury USA, New York, 1st edition, October 2013. ISBN 978-1-62040-283-2.

      I find this book so interesting because it shows how social media is not some completely new thing but is instead part of a continuing tradition of how people share information—like Roman graffiti, pamphlets, and coffeehouse chatter. Bringing this up when talking about how social media has "evolved" really drives the point home that although technology changes fast, the underlying human activity behind it—like sharing, adapting, and amplifying messages—has been around for thousands of years. It's a reminder that social media did not invent virality; it merely accelerated and automated it.

    1. 12.3. Evolution in social media

      One of the more interesting sections of this chapter is how it frames social media sites as digital ecosystems in which content evolves much as organisms do in the wild. Not only does this metaphor make the mechanics of viral content more comprehensible, but it also invites a more fundamental ethical inquiry: if replication and selection are so heavily determined by platform design, like algorithms or frictionless sharing, then platform designers have immense influence to shape culture.

  13. social-media-ethics-automation.github.io
    1. Kashmir Hill. How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did. Forbes, February 2012. URL: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/ (visited on 2023-12-07).

      This article illustrates the intrusive nature of recommendation algorithms. Target used purchasing patterns to predict pregnancies and deliver targeted advertising—so accurately that a teen's pregnancy was revealed to her father through it. It raises ethical concerns about business efficiency versus personal privacy, and it made me realize that harmless data points, like buying lotion or vitamins, can lead to deeply personal inferences made without consent.
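
      As a toy sketch of how that kind of inference works (the products and weights here are invented for illustration; Target's actual model was far more sophisticated), individually innocuous purchases can add up to a sensitive prediction:

      ```python
      # Invented weights: each product nudges a "pregnancy likelihood" score upward.
      SIGNAL_WEIGHTS = {
          "unscented lotion": 0.20,
          "calcium supplements": 0.25,
          "cotton balls": 0.15,
          "large tote bag": 0.10,
      }

      def pregnancy_score(purchases):
          """Sum the weights of signal products; any single item means little alone."""
          return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

      basket = ["unscented lotion", "calcium supplements", "cotton balls"]
      if pregnancy_score(basket) > 0.5:
          # The shopper never consented to this inference, let alone the mailer.
          print("Flag household for baby-product coupons")
      ```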

    1. Sometimes though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter's recommendation algorithm, blames the users for the results:

      This is a perfect example of deflecting responsibility from the creators of systems to the users who merely use them. This viewpoint does not take into account that most users are not fully aware of how recommendation algorithms work; therefore, blaming them for interacting "wrongly" with a system they had no hand in creating seems incredibly unfair.

  14. Apr 2025
  15. social-media-ethics-automation.github.io
    1. C. L. Lynch. Invisible Abuse: ABA and the things only autistic people can see. NeuroClastic, March 2019. URL: https://neuroclastic.com/invisible-abuse-aba-and-the-things-only-autistic-people-can-see/ (visited on 2023-12-07).

      This is a robust source since it challenges the prevailing presupposition that ABA is an uncontroversial "gold standard" of support for autistic individuals. On the contrary, Lynch unveils how ABA can be immensely hurtful in ways that are often invisible to non-autistic individuals, noting that compliance-based therapies can cause long-term emotional trauma.

    1. Accessible Design

      What is notable is that Universal Design and Ability-Based Design move the burden towards designers and society as a whole. Still, even these methods are not flawless, as needs may conflict, and accessibility is not an instantaneous solution but a continuous, adaptive process.

  16. social-media-ethics-automation.github.io
    1. Karen Hao. How to poison the data that Big Tech uses to surveil you. MIT Technology Review, March 2021.

      This article explores how individuals are attempting to thwart corporate surveillance, including by injecting fake or noisy data into the systems that track them. The most engaging aspect is the framing of data poisoning as a tool for privacy activists—turning around the term's generally negative connotation.

  17. social-media-ethics-automation.github.io
    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended.

      One of the compelling aspects of this chapter is that it resists the presumption that data is inherently neutral or trustworthy. Accidental and intentional data poisoning both betray a fundamental vulnerability in our heavy reliance on datasets for making decisions. The case of TikTok virality skewing scientific surveys is particularly fascinating—it demonstrates how digital virality can unintentionally compromise data integrity, revealing that even robust research methodology can be susceptible to trends and online culture.

  18. social-media-ethics-automation.github.io
    1. Whitney Phillips. Internet Troll Sub-Culture's Savage Spoofing of Mainstream Media [Excerpt]. Scientific American, May 2015. URL: https://www.scientificamerican.com/article/internet-troll-sub-culture-s-savage-spoofing-of-mainstream-media-excerpt/ (visited on 2023-12-05).

      Whitney Phillips' Scientific American article discusses how online trolls mock mainstream media for entertainment and disruption. Interestingly, she posits that trolls employ "meta-commentary" to critique the media through mimetic aggression. This positions trolling as an ideologically invested performative practice instead of a mere shock value exercise.

    1. What is trolling

      While trolling is often associated with cruelty or chaos, this chapter shows that it can also function as political commentary or social critique, like Jaboukie's impersonation of the FBI. His tweet, while provocative and inauthentic, actually draws attention to real historical grievances and critiques the FBI’s past treatment of civil rights leaders like Martin Luther King Jr.

    1. In this activity, you will be looking at Facebook’s name policy, and thinking through who this version of authenticity works for, and who it doesn’t[1].

      Facebook's policy on names represents how websites mandate a thin, Western definition of authenticity—privileging legal nomenclature above cultural, social, or security-driven naming practices. The policy assumes everyone has one official name used "in everyday life," without attention to marginalized groups like Indigenous peoples (whose names may include non-Latin alphabets), transgender individuals (whose legal name might not represent their identity), or activists (who need aliases for protection).

  19. social-media-ethics-automation.github.io
    1. (lonelygirl15 [f1])

      Though people generally value openness and trust, new media seems to blur the line between constructed identity and real relationship, as in the cases of lonelygirl15 and Jennifer Lawrence. This takes us to an overarching question: does authenticity prosper where self-presentation is unavoidably performative, or does it just transform into an extension of "strategic realness"? The outcry over lonelygirl15 shows that dishonesty violates our expectation of truth, but the channel's sustained popularity demonstrates that viewers are willing to believe fiction provided that its nature is eventually revealed.

    1. Read A People’s History of Black Twitter [e16]

      This article got me thinking deeply about Black Twitter. It's a cultural phenomenon rather than a formal platform. One detail that stood out to me was how Black Twitter often served as an early warning system for mainstream media, breaking stories before traditional outlets caught on.

    1. Bulletin board system (BBS)

      BBSes remind me of the early MUDs I learned about in another course, which were also social spaces. It's cool how text-only interfaces fostered such rich communities; it makes me think modern apps could learn from their focus on content over flashy design.

    1. Sound can also be represented in other ways, such as music being represented by lists of which instrument should play which note at which times (e.g., MIDI files [d12]), which is closer to how Ada Lovelace imagined computers representing music in her 1842 quote we included in chapter 2.3.3 (Computers Speak Binary).

      It's cool that MIDI's format shows how prescient Ada's idea was, but MIDI's practicality is even cooler: it's a lingua franca for computer music because it separates composition from sound production (i.e., a piano MIDI track can be played as a synth if you swap out the instrument preset). Modern software like Ableton or MuseScore draws on this, but I think MIDI's real legacy is democratizing musical composition—anyone can edit notes like they're words, no recording studio necessary.
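
      To make that concrete, here is a minimal sketch (plain Python, no actual MIDI library; the tuple layout is just illustrative) of music stored as note data rather than recorded sound, in the spirit of the "lists of which instrument should play which note at which times" description above:

      ```python
      # A "song" as data about notes rather than recorded audio.
      # Each event: (instrument, MIDI pitch number, start beat, duration in beats).
      song = [
          ("piano", 60, 0.0, 1.0),  # middle C
          ("piano", 64, 1.0, 1.0),  # E above middle C
          ("piano", 67, 2.0, 2.0),  # G, held twice as long
      ]

      def transpose(events, semitones):
          """Shift every pitch up or down; trivial because notes are data, not audio."""
          return [(inst, pitch + semitones, start, dur)
                  for inst, pitch, start, dur in events]

      def swap_instrument(events, new_instrument):
          """Re-voice the whole piece, e.g. play the piano part on a synth preset."""
          return [(new_instrument, pitch, start, dur)
                  for _, pitch, start, dur in events]

      print(swap_instrument(transpose(song, 2), "synth"))
      ```

      Editing the piece really is as easy as editing a list, which is exactly the "edit notes like they're words" point.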

    1. Data and Metadata

      This metadata definition gets me thinking about how often we overlook its importance in everyday tech experiences, especially in terms of privacy. While the content of a tweet might be harmless, its metadata can reveal patterns about a person's life, routines, or even identity. It's scary how companies can use this "secondary" information for profiling or advertising.
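
      As a hypothetical illustration (these field names are invented for the example, not any real platform's API), the visible text of a post is one short string, while the metadata around it quietly accumulates into a profile:

      ```python
      # The visible content is harmless; the metadata around it is the sensitive part.
      post = {
          "content": "Nice weather today!",
          "metadata": {
              "timestamp": "2023-11-02T07:14:09Z",  # repeated over months: daily routine
              "geotag": (47.6062, -122.3321),       # repeated over months: home and workplace
              "device": "iPhone 12",                # rough proxy for income bracket
              "client": "ExampleApp for iOS",
          },
      }

      # No one needs to read the content to learn where this person sleeps:
      # just collect the geotags attached to their late-night posts.
      night_locations = [post["metadata"]["geotag"]]  # imagine hundreds of these
      print(night_locations)
      ```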

    1. Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them.

      The distinction between bots and automated accounts run by actual people is an important one to make, yet I find myself wondering whether the difference between the two becomes blurred in the real world. For instance, some "human computers" working in click farms may use scripts or automation tools to handle multiple accounts simultaneously. This is an instance where human action and algorithmic process overlap, making it difficult to delineate what is purely human and what depends on automated systems.
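
      As a minimal sketch of what that overlap looks like (the `SocialMediaClient` class and its `post` method are hypothetical stand-ins, not a real platform library), a person can queue posts once and let a script publish them, so the account's output is part human, part automation:

      ```python
      import time

      class SocialMediaClient:
          """Hypothetical stand-in for a real platform API client."""

          def __init__(self, api_key):
              self.api_key = api_key

          def post(self, text):
              # A real client would make an authenticated API call here.
              print(f"Posted: {text}")

      # A human writes the posts once; the bot publishes them on a schedule.
      client = SocialMediaClient(api_key="...")
      queued = ["Good morning!", "Midday update", "Good night!"]
      for text in queued:
          client.post(text)
          time.sleep(1)  # in practice: hours apart, or driven by a scheduler like cron
      ```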

    1. “Gender Pay Gap Bot [c12]

      I am blown away by the way the same technology—bots—can be either so harmful or so beneficial for justice. The Gender Pay Gap Bot uses automation to call out hypocrisy and promote accountability, especially on performative days like International Women's Day. It really pushes back against the notion that bots are inherently "bad"—sometimes they're actually necessary to surface the truth in ways that humans alone might not accomplish as effectively.

    1. What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?

      Tech workers really do need to think about the ethics of what they create because tech is not neutral: it reinforces human biases, changes societies, and can actually hurt people. Though companies usually care more about speed and profit, it's up to individual engineers, designers, and product managers to take ownership and ask tough questions like, "Who could this hurt?" instead of leaving legal teams to clean it up or treating ethics as somebody else's problem. This entails both individual responsibility (reporting problems, refusing unethical assignments) and structural change (placing ethicists within teams, protecting whistleblowers).

    1. What enabled someone to be able to get a photo of her checking the phone at the airport?

      This makes me realize that the public reaction raises its own ethical concerns. Even if Sacco's tweet was intended as an ironic joke, were the Twitter users who tracked her down, trended the hashtag, and photographed her at the airport driven by justice, or by schadenfreude?