22 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Matt Stopera. Monica Lewinsky Has Been Making Jokes About The Clinton Impeachment For Years, And It Really Is Funny Every Single Time. BuzzFeed, September 2021. URL: https://www.buzzfeed.com/mjs538/monica-lewinsky-twitter-comebacks (visited on 2023-12-08).

      As much as I respect her efforts to make light of her situation, part of me feels like this is an attempted grab at relevance. The Clinton scandal is so old now, and in my opinion, many of her joke tweets are unfunny and not meaningful in any way. It just feels like she's bringing up something that many Internet users today weren't even alive for.

    1. Users can also create intentionally bad or offensive content in an attempt to make it go viral (which is a form of trolling). So when criticism of this content goes viral, that is in fact aligned with the original purpose.

      I find this part very interesting. The concept of "cringe" was so prevalent in the 2010s, but now the lines are blurred, as content many people enjoy can also be widely mocked as cringe. On top of that, intentionally cringe content (which often goes viral and finds lots of success) makes the distinction even harder to draw. But is this a good thing? It facilitates more diverse discourse and steers us away from homogeneous thinking, since now there is room for disagreement, which I think might actually be good for internet culture.

  3. social-media-ethics-automation.github.io
    1. Zack Whittaker. Facebook won't let you opt out of its phone number 'look up' setting. TechCrunch, March 2019. URL: https://techcrunch.com/2019/03/03/facebook-phone-number-look-up/ (visited on 2023-12-07).

      I found this to be pretty messed up and definitely unethical. Under Kant's deontology, this is unethical because it violates trust and free will, using the personal data of users to further the gain of those managing the platform at the cost of user autonomy.

    1. Elon Musk’s view expressed in that tweet is different than some of the ideas of the previous owners, who at least tried to figure out how to make Twitter’s algorithm support healthier conversation [k6].

      This reminds me a lot of the original story of Justine Sacco's tweet. And as much as I vehemently disagree with Musk's views and rhetoric, his openness about how the Twitter/X algorithm works is interesting to me. I previously said that social media doesn't benefit from positive interactions so much as from being the paper on which arguments are written. Musk outright states that interaction, positive or negative, is interaction, and the algorithm will boost content in that vein whether you like it or not, so as to elicit more participation from you.

  4. Oct 2025
  5. social-media-ethics-automation.github.io
    1. Meg Miller and Ilaria Parogni. The Hidden Image Descriptions Making the Internet Accessible. The New York Times, February 2022. URL: https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html (visited on 2023-12-07).

      I found the presentation of this article super creative. I loved the choice to actually use textual descriptions within the article itself to demonstrate their function and how useful they can be. On the topic of disability and accessibility, I think the concept of alt text is great, since it's a relatively minor addition to the user interface, yet can drastically change the experience certain users might have on the internet.

  6. social-media-ethics-automation.github.io
    1. Disabilities can be accepted as socially normal, like is sometimes the case for wearing glasses or contacts, or it can be stigmatized [j5] as socially unacceptable, inconvenient, or blamed on the disabled person. Some people (like many with chronic pain) would welcome a cure that got rid of their disability. Others (like many autistic people [j6]), are insulted by the suggestion that there is something wrong with them that needs to be “cured,” and think the only reason autism is considered a “disability” at all is because society doesn’t make reasonable accommodations for them the way it does for neurotypical [j7] people.

      I'm actually glad we got this chapter. I'm taking Disability Studies right now, where we dive into how disability is viewed, treated, and functions in our society, and the conflict between different ideals held on disability. The medical model sees disability as an illness to be cured, while the social model sees disability as a side effect of the inaccessibility already present in society.

    1. Lyra Hale. New Book Says Facebook Employees Abused Access to Track and Stalk Women. The Mary Sue, July 2021. URL: https://www.themarysue.com/facebook-employees-abused-access-target-women/ (visited on 2023-12-06).

      This article didn't really surprise me. We know Facebook's precursor, Facemash, was a site for rating women's attractiveness, so news of women being exploited by Facebook employees isn't unexpected from this company. The article brings up two specific examples of men using their power as Facebook employees to track the locations of women in real time, but also notes that 52 employees in total were fired for abusing their access to users' information.

    1. Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them [i19]. When Elon Musk purchased Twitter, he also was purchasing access to all Twitter Direct Messages [i20]

      While I in no way think these invasions of privacy are ethical or justified, I do think there's something to be said about taking accountability online. What I mean is that some of these consequences feel intuitive on a platform as universal as the Internet. The work system example feels very obvious to me, and I would never send or say something that isn't work-appropriate on my work or school email. On the other hand, all private messages being completely accessible to the owners of a social media platform is kind of ridiculous.

  7. social-media-ethics-automation.github.io
    1. Mia Jankowicz. A TikToker said he wrote code to flood Kellogg with bogus job applications after the company announced it would permanently replace striking workers. Business Insider, December 2021. URL: https://www.businessinsider.com/tiktoker-wrote-code-spam-kellogg-strike-busting-job-ad-site-2021-12 (visited on 2023-12-05).

      Is this not a beneficial form of trolling? At the end of the day, trolling is a disruption of established order we are expected to follow. While this order is generally important for us to follow, it is created by those above us to control us. And when we use the internet, an invention of those above us, to disrupt the order that benefits them, I see that as a positive action.

    1. “Bad faith” here means pretending to hold views or feelings, while not actually holding them (this may be intentional, or it may be through self-deception).

      As much as I enjoy the concept of trolling and feel that it's one of the most unique parts of the internet, bad faith arguing has gotten so out of control and soiled so much of the discourse that takes place online. The fact that people now engage in discussions or arguments while positing opinions and ideas they don't actually hold completely derails the concept of debate in the first place. So while I enjoy a bit of trolling here and there, I believe bad faith arguing is just unhealthy.

  8. social-media-ethics-automation.github.io
    1. Jordan Pearson. Your Friends’ Online Connections Can Reveal Your Sexual Orientation. Vice, September 2014. URL: https://www.vice.com/en/article/gvydky/your-friends-online-connections-can-reveal-your-sexual-orientation (visited on 2023-12-05).

      This is just silly to me, ethically and logically. The first issue is obvious: someone who isn't on social media at all should retain the right to keep their information private from massive tech corporations. The idea that companies are using active users as a workaround for mining the personal info of non-users is ridiculous, because why do they need info on non-users in the first place? And secondly, the method of this data collection is so clunky. It all hinges on the assumption that we're only friends with people we're similar to, so someone with gay friends must therefore be gay? That's just not how the world works.
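
      The inference method I'm describing could be sketched as a toy majority vote over friends' declared attributes; all names and labels below are invented for illustration, and the real models described in the article are surely more sophisticated:

```python
# Toy sketch of the homophily assumption: guess a hidden attribute of a
# non-user from the attributes their declared friends have made public.
def guess_attribute(friend_attributes):
    # Naive majority vote: whatever most friends report, assume for the user
    return max(set(friend_attributes), key=friend_attributes.count)

# Invented example: two friends report "A", one reports "B"
friends = ["A", "A", "B"]
guess = guess_attribute(friends)
print(guess)  # "A" -- regardless of whether it's actually true of the person
```

The weakness I'm pointing at is visible right in the code: the guess is driven entirely by who your friends are, never by anything about you.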

    1. After looking at your ad profile, ask yourself the following: What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?

      This was really cool to look at. There weren't as many detailed and invasive categories as I initially imagined, but the fact that Google keeps track of whether I'm single or not is kind of weird. I can understand why it might help them to know whether or not I'm a parent, but I'm not so sure about my relationship status. I also realized some of the inaccurate data points stem from my parents' household information, as well as my occasional dishonesty when answering questions online. For example, Google Ads thinks I'm 25-34, likely because I've watched YouTube videos or TV shows aimed at audiences older than me, or sometimes lied about my age to sign up for a website.

  9. social-media-ethics-automation.github.io
    1. Jasper Jackson. Donald Trump 'writes angrier and more negative Twitter posts himself'. The Guardian, August 2016. URL: https://www.theguardian.com/media/2016/aug/10/donald-trump-twitter-republican-candidate-android-iphone (visited on 2023-11-24).
    2. [f6] X (formerly Twitter). Permanent suspension of @realDonaldTrump. January 2021. URL: https://blog.twitter.com/en_us/topics/company/2020/suspension (visited on 2023-11-24).

      I remember when this happened, back when X was still Twitter. So many people cried out in anger when Trump was banned, claiming that free speech was being infringed upon. But Twitter was a private company that had the right to ban whoever it wanted. The reaction from so many people, including Elon Musk and Trump himself, feels in hindsight like the writing on the wall for Musk's eventual purchase of Twitter. But also, looking into the question of authenticity, and seeing that Trump may have had a team specifically hired to post inflammatory rhetoric, really speaks to how difficult moderation is on the internet, and how much the platform has changed since Musk took over.

  10. social-media-ethics-automation.github.io
    1. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      Why does authenticity bother us when watching something? We sit down and watch fictional movies for hours at a time, and give hour-long episodes of fictional TV shows our full attention every week, but what about watching someone online tell stories bothers us? I think it's likely associated with mistrust and dishonesty, like the passage says. Influencers now lie all the time, but are much more careful and covert about how they do it, and as a result, when they're caught, the consequences are more dramatic compared to inauthenticity with little effort made to cover it up.

  11. social-media-ethics-automation.github.io
    1. What is user friction? Why you're losing users and how to stop. August 2023. URL: https://www.fullstory.com/user-friction/ (visited on 2023-11-24).

      I actually found this article really interesting, since it spoke to things I feel myself and many other users have experienced online before. Many of us have rage clicked at old websites that refuse to load, even though there's no logical indication that brute force will somehow force the program to work. And cognitive or emotional friction is a very real issue, as sometimes when the website or UI is frustrating enough it's easier to just abandon it altogether.

    1. While the Something Awful forums had edgy content, one 15-year-old member of the Something Awful forum called “Anime Death Tentacle Rape Whorehouse” was frustrated by content restrictions on Something Awful, and created his own new site with less restrictions: 4Chan.

      Genuinely mindblowing name. I thought this story would be a one-off, but the twist is that it ended up becoming the massive platform we know as 4chan. Is the point of social media to allow complete and unrestricted socialization, or something else entirely? The point I'm trying to make, and I think the major takeaway we can glean from 4chan now that we're a few years removed, is that a complete lack of restriction on the internet usually serves to enable people to engage in violent or degenerate behavior with significantly fewer consequences than there would be in the real world.

  12. social-media-ethics-automation.github.io
    1. Matt Binder. The majority of traffic from Elon Musk's X may have been fake during the Super Bowl, report suggests. February 2024. Section: Tech. URL: https://mashable.com/article/x-twitter-elon-musk-bots-fake-traffic (visited on 2024-03-31).

      I notice lots of people responded to this source, because it's really telling and ironic how Elon proclaimed that the bots on Twitter were a big problem discouraging him from purchasing the platform, yet we're seeing reports that his acquisition has only increased bot usage on the platform. And even though I feel that this data is likely accurate, is there a chance that, in line with our discussion of data being a simplification of reality, this bot traffic may be overestimated? Or even underestimated? It's something to think about.

    1. As you can see, TurboTax has a limit on how long last names are allowed to be, and people with names that are too long have different strategies for dealing with not fitting in the system. Gender: Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:

      I wonder, why does this happen? Is it some kind of shortcut or automation to make the development process smoother for the developers, at the cost of how user-friendly the interface ends up being? Essentially, I feel like these results indicate that developers use cost-cutting practices to finish development quicker. This ultimately benefits the large majority of people who fall into easy categories, but is to the detriment of people who are outliers.
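
      A minimal sketch of the kind of hard field limit I'm imagining; the limit value and the silent-truncation behavior are assumptions for illustration, not TurboTax's actual rules:

```python
# Toy illustration: a form field with a fixed maximum length.
# The limit and truncation strategy here are invented, not TurboTax's.
MAX_LAST_NAME_LENGTH = 12

def store_last_name(name):
    # Anything past the limit is silently cut off, which is how long
    # names end up mangled in systems designed around "typical" users
    return name[:MAX_LAST_NAME_LENGTH]

print(store_last_name("Smith"))                     # fits: "Smith"
print(store_last_name("Wolfeschlegelsteinhausen"))  # truncated: "Wolfeschlege"
```

A fixed-width field is cheap for the developer (simple schema, predictable storage), and the cost lands entirely on the users whose names don't fit.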

  13. social-media-ethics-automation.github.io
    1. Brian Whitaker. Oman's Sultan Qaboos: a classy despot. The Guardian, March 2011. URL: https://www.theguardian.com/commentisfree/2011/mar/04/oman-sultan-qaboos-despot (visited on 2023-11-17).

      I found myself interested in the image of a ruler posing as benevolent and cultured while really being ignorant of and dismissive toward their people. Specifically, the detail of how difficult it is for the people of Oman to assemble and speak out makes me understand the connection between the Sultan and social media bots. If social media congregation is the only reasonable way for people to speak out against a neglectful government, the ethical question of automated bots becomes a bit more complicated.

    1. # Go through the tweets to see which ones have curse words
       for mention in mentions.data:
           # check if the tweet has a curse word
           if predict(mention.text)[0] == 1:
               # if it did have a curse word, put it in the cursing mentions list
               cursing_mentions.append(mention)

      I remember learning about some of this stuff in AP Comp Sci Principles. When the chapter described automated bots that go through social media and take specific actions, and then provided the steps and code to make that happen, I started trying to put the pieces of the code together in my mind. I figure you need a for loop to iterate through the posts, checking each one against a list of particular phrases you set in advance. Where I start to get lost is when I think about scaling that up.
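
      A rough sketch of the loop I'm picturing; the posts and the word list below are made up for illustration, and a real bot would be pulling live data from an API instead:

```python
# Hypothetical posts pulled from a social media feed
posts = [
    "what a great day",
    "this update is darn awful",
    "love this community",
]

# Words we want to flag (illustrative placeholders)
flagged_words = ["darn", "awful"]

# Iterate through every post and collect the ones containing a flagged word
flagged_posts = []
for post in posts:
    if any(word in post.lower() for word in flagged_words):
        flagged_posts.append(post)

print(flagged_posts)  # only the post containing "darn" and "awful"
```

Scaling this up is exactly where it gets hard: a real system deals with millions of posts, rate-limited APIs, and language far messier than exact substring matches.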

    1. Bots are computer programs that act through a social media account. We will talk about them more in the next chapter (Chapter 3). There are also various applications that are made to help users interact with social media. For example, there are social media manager programs that help people schedule posts and let multiple people use the same account (particularly useful if you are something like a news organization).

      The kinds of bots we've used so far seem pretty simple. It's telling a computer to send a post to social media. But nowadays, we have an overwhelming amount of bots, to the point that a decent chunk of the content I see online is reposted stuff on a clear bot page that I just have to scroll through. Even though we as a class are a bot farm, it's obviously way less consequential. It gets really crazy when you think about the creators who have their content stolen, and reposted across dozens of different burner accounts, just to amass a following on at least one. I think nowadays it's gone too far.

    1. Actions are judged on the sum total of their consequences (utility calculus). The ends justify the means. Utilitarianism: "It is the greatest happiness of the greatest number that is the measure of right and wrong." That is, what is moral is to do what makes the most people the most happy.

      I'd say of all the frameworks provided, Consequentialism has the most direct application and parallels to the ethical questions and debates we often have about social media. The game that gets played on social media is data and numbers, and we see developers measure value, success, and popularity largely through the numbers they get fed. And just as one could argue that mindset is flawed, you could say the same flaws exist in Consequentialism. As much as looking at final outcomes can be a rational way to make decisions, it ultimately strips some of the humanity and nuance away from those decisions in the short term. I found the parallels between these two mindsets very interesting.
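
      The "sum total of consequences" idea from the quote can be sketched as simple arithmetic; the actions and happiness scores below are invented purely for illustration:

```python
# Toy utility calculus: score each action by summing the (invented)
# happiness changes it causes for everyone affected.
def total_utility(effects):
    return sum(effects.values())

# Two hypothetical actions and their effects on two people
action_a = {"alice": +3, "bob": -1}  # net utility: +2
action_b = {"alice": +1, "bob": +2}  # net utility: +3

# Under this framework, the "more moral" action is simply the higher total,
# even though action_a helps Alice more -- the individual detail is lost
best = max([action_a, action_b], key=total_utility)
print(total_utility(best))  # 3
```

The flattening I'm describing is visible here: once the effects are summed into one number, who gained and who lost disappears from the decision.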