Social media is a huge component of the digital public sphere. It’s important that we as users participate in that public sphere responsibly, and considering ethics frameworks can help us do that.
- May 2023
-
social-media-ethics-automation.github.io
-
Understanding the various ethics frameworks is useful for grasping the effects that the digital sphere can have on individuals. It also helps shape the personal ethical decisions you have to make online.
-
Or they might let advertisers make ads go only to “Jew Haters” (which is ethically very bad, and something Meta allowed).
How is this allowed? Do advertisers send in targeted ad requests? I’d like to know more about how this works. I fully believe they allowed that, but wtf?
-
if the richest man in the world offers to buy out a social media site for more than it’s worth, then it is the fiduciary duty of the leaders of the social media site to accept that offer.
What happens if a company violates its fiduciary duty? Does that become a legal issue? If not, did Twitter have the choice not to sell the company to Musk?
-
I think there needs to be more of a separation between people who are “canceled” for major crimes vs people who are canceled for trivial mistakes. I think that grouping all of those offenses into one phenomenon of “cancel culture” diminishes the significance and impact of actions that are truly harmful.
-
I never thought about the specific differences between shame and guilt. I think it’s very important to learn how to separate the two and realize that doing a bad thing doesn’t inherently make you a bad person. It also makes me realize that the phrase “shame on you” is kind of messed up.
-
Some people’s construction of individual liberty as a free pass to harass others is stupid as hell. James Madison had no idea the internet was going to exist when he wrote that amendment.
-
The ask.fm phenomenon of the mid-2010s was, I would say, one of the most prominent sources of harassment at the time. It was also very accessible because people put it in their own social media bios.
-
Twitter is definitely known for its users’ ability to “solve” problems. Whenever someone wants to speak out against an injustice they faced, be it by another person, entity, corporation, store, etc., the first piece of advice many people offer is “take this to Twitter.”
-
I think there need to be conversations about where the line falls between crowdsourcing and exploiting contributors. Using Wikipedia as an example, I think most editors fall under the category of crowdsourcing because they are passionate about the particular topic they’re moderating or editing. But if they had specific tasks delegated to them (which I don’t know is how Wikipedia works, but for the sake of argument I’ll say it is) and they weren’t being paid for that, it might fall more into the territory of exploitation.
-
What dangers are posed for languages that have limited or no content moderation?
I don’t care what your views on freedom of speech are, every social media website should have some sort of basic content moderation system in place. I believe that as an owner of a large social media website, you are morally obligated to protect the users who choose to use your site.
-
Reddit is valued at more than ten billion dollars, yet it is extremely dependent on mods who work for absolutely nothing. Should they be paid, and does this lead to power-tripping mods?
I think mods should be paid at least a small amount. While it is true that most mods volunteer their time because they are passionate about the subject of the subreddit they moderate, just because you enjoy the work you do doesn’t mean you shouldn’t be paid for it. Also, it’s not like Reddit needs to pay them a huge salary or anything; even a small fee for moderators would be better than nothing.
-
Some philosophers even suggested that it is hard to think about what is rational or reasonable without our take being skewed by our own aims and egos.
I agree. I think it’s hard to come up with a definition of rational or reasonable, as most people have internal biases or lived experiences that contribute to their interpretations of what is morally correct.
-
Almost all social media sites (even the ones that claim “free speech”) block spam, mass produced unsolicited messages, generally advertisement, scams, or trolling.
There should definitely be a clarification of what exactly free speech entails, because in my mind spam and scams are not protected under the banner of “free speech.” Spam and scams don’t provide any meaningful dialogue, and you can’t really interact with them in a meaningful way. They don’t contribute in any way whatsoever.
-
Social media can make trauma dumping easier
The idea that you can always just turn off your phone and walk away has sort of led to a subconscious belief among some that the internet isn’t real, or at least not significant. I say this because some of the trauma dumping and oversharing online is unreal, and I can only imagine that the lack of real-life conversation makes that sort of personal information incredibly easy and seemingly inconsequential to share.
-
In 2019 the company Facebook (now called Meta) presented an internal study that found that Instagram was bad for the mental health of teenage girls, and yet they still allowed teenage girls to use Instagram.
It’s interesting to me that Meta conducted this study in the first place. Was this information leaked? How was it presented to the public?
-
Cancel culture is a sort of natural selection. Except also not really, because you can choose not to be a dumbass.
-
Chain letters: the first chain mail. IRL “send this to ten people you know or you will be killed in your sleep tonight.”
-
Pinterest always gives me good recommendations. It’s easy for them to do that because all the input they need is the photos I save, which gives the algorithm a lot to go off of.
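A minimal sketch (not Pinterest’s actual system) of how saved photos could drive recommendations: score each candidate pin by how many tags it shares with pins the user already saved. All pin names and tags here are made up.

```python
# Pins a user has already saved, each described by a set of tags.
saved_pins = [
    {"tags": {"interior", "plants", "cozy"}},
    {"tags": {"plants", "garden"}},
]

# Candidate pins the platform could recommend next.
candidates = {
    "pin_a": {"tags": {"plants", "cozy", "diy"}},
    "pin_b": {"tags": {"cars", "racing"}},
}

def score(candidate_tags, saved):
    # Count tag overlaps between the candidate and every saved pin.
    return sum(len(candidate_tags & pin["tags"]) for pin in saved)

ranked = sorted(
    candidates,
    key=lambda pid: score(candidates[pid]["tags"], saved_pins),
    reverse=True,
)
print(ranked)  # pin_a shares tags with the saved pins, so it ranks first
```

Real recommenders use far richer signals (image contents, clicks, similar users), but the core idea of matching against what you saved is the same.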
-
- Apr 2023
-
It’s interesting that they “don’t allow” it. Why not? What’s prohibiting them from allowing alt text?
-
There are ways in which technology is adapting to be universally designed, such as the option to increase text size or turn on audio descriptions. I am curious about what other accommodations are being implemented into technology.
-
Encrypted or hard-to-guess passwords don’t feel like as accessible a security measure as I think they actually are. Maybe I’m just speaking for myself, but even when Apple presents me with one of its suggested “strong passwords,” I worry that iCloud will lose my stored passwords, in which case I would never be able to access the account unless the password is written down somewhere, which isn’t very secure. I guess writing down strong passwords is better than using the same password for everything?
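For what it’s worth, generating a hard-to-guess password is easy with Python’s standard `secrets` module; this is a minimal sketch, not what Apple’s suggested passwords actually use.

```python
import secrets
import string

# Characters a generated password may draw from.
alphabet = string.ascii_letters + string.digits + string.punctuation

def make_password(length=16):
    # secrets.choice uses a cryptographically secure random source,
    # unlike the random module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(len(pw))  # 16; the characters differ on every run
```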
-
I’ve noticed that the feeling of invisibility that talking into the void online provides makes privacy a lot less of a concern for some people. They aren’t having these conversations face-to-face with real people, so people say a lot of things behind screens that they probably wouldn’t share in real life.
-
It’s kind of alarming how many websites ask you if you want to allow data tracking across other websites. It may seem surprising to be shown so much web content (ads, search results, etc.) related to something you were just thinking about or searching, but with the amount of data you provide, even unconsciously, it really isn’t surprising at all.
-
It is easier now than ever to put your personal data out onto the internet. We use so many sites nowadays, and we forget that every site we plug our personal information into can potentially be visible for anyone to access with a quick Google search.
-
Additionally, the enjoyment of causing others pain or distress (“lulz”) has also been part of the human experience for millennia:
They were just throwing stones at frogs for the lulz. Interesting take on humanity’s motivation to troll.
-
I had no idea “doing it for the lulz” was an official term for trolling out of amusement. Same with the term “trolling the newbies.”
-
In these systems, someone would start a “thread” by posting an initial message. Others could reply to the previous set of messages in the thread.
So, an early Reddit, lol. What was usually discussed in these threads? Did the nature of those conversations differ from what we see online today? (Obviously so, but to what extent? Was internet hostility less of an issue, and how was that a result of what and how conversations were had?)
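The thread structure the quote describes can be sketched as a very simple data structure; this is an illustrative Python toy (names and messages made up), not how any historical bulletin board was actually implemented.

```python
# A "thread" starts with an initial message; replies are appended in order.
# Each message is just an (author, text) pair.
thread = [("alice", "Has anyone tried the new compiler?")]

def reply(thread, author, text):
    # Replying adds a message to the end of the thread.
    thread.append((author, text))

reply(thread, "bob", "Yes, it's faster but the error messages are worse.")
reply(thread, "carol", "Agreed, I filed a bug about that.")

for author, text in thread:
    print(f"{author}: {text}")
```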
-
I had no idea that email originated in the 60s and 70s. What did this early form of email look like? On what platform was it operated?
-
I assume there must be dictionaries within dictionaries: would the user’s page be considered a dictionary? What is the threshold for what counts as a data dictionary? Could Twitter itself be considered a dictionary, or is that set of data too broad?
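For context, “dictionaries within dictionaries” can be sketched directly in Python: a tweet whose `user` field is itself a dictionary. The field names here are made up, loosely modeled on what a social media API might return.

```python
# A tweet as a dictionary; the "user" value is a nested dictionary.
tweet = {
    "text": "hello world",
    "retweet_count": 3,
    "user": {
        "name": "example_account",   # hypothetical account name
        "followers_count": 120,
    },
}

# Chained square brackets reach into the inner dictionary.
print(tweet["user"]["name"])  # example_account
```

In that sense a user’s whole page, or even the site’s full dataset, really could be modeled as one enormous dictionary of dictionaries; there’s no formal size threshold.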
-
How big does a number have to be to take up a significant amount of space? Does “big number” mean an actual digit count that is displayed on the screen?
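In Python, at least, “big” is about the digits the number actually stores, not how it is displayed on screen; a quick sketch using the standard `sys.getsizeof`:

```python
import sys

# Python integers grow in memory as they gain digits.
small = 7
big = 10 ** 100  # a googol: 1 followed by 100 zeros, so 101 digits

print(sys.getsizeof(small))  # e.g. 28 bytes on CPython
print(sys.getsizeof(big))    # noticeably larger
print(len(str(big)))         # 101 digits when rendered for display
```

The exact byte counts vary by interpreter and platform, but a hundred-digit integer reliably occupies more memory than a one-digit one.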
-
How do we regulate the actions of bots? If a bot/AI is supposedly capable of anything, is it possible to limit its capabilities to be confined to that which is considered ethical? And if that is possible, can we claim that the intentions of the human programmers are actually good if they are not taking these preventative measures?
-
It seems nefarious that a bot like this was created without programming some sort of hate detection into its code. Is it hard to regulate bot hate speech? With the rise of AI, this seems like a prevalent conversation to be having.
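The crudest form of “hate detection” a bot could ship with is a keyword blocklist; this Python sketch is purely illustrative (real moderation systems use far more sophisticated classifiers, and the blocklist terms here are placeholders, not real slurs).

```python
# Placeholder terms standing in for an actual list of banned words.
BLOCKLIST = {"slur1", "slur2"}

def is_allowed(message):
    # Normalize: lowercase each word and strip trailing punctuation,
    # then check for any overlap with the blocklist.
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not (words & BLOCKLIST)

print(is_allowed("Hello there!"))      # True
print(is_allowed("you are a slur1"))   # False
```

Even this trivial filter shows why the problem is hard: it misses misspellings, coded language, and context, which is part of why bot hate speech is difficult to regulate.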
-
- Mar 2023
-
Meaning: only follow rules that you are ok with everyone else following.
Kind of like the golden rule but while the golden rule is focused on treating others with kindness under the assumption that that’s how everyone wants to be treated, deontology seems to be suggesting that if there is a rule you don’t want others to follow, you’re not obligated to follow it yourself. This could be interpreted in contexts other than kindness.
-
What things about the design of Twitter enabled these events to happen?
I argue that the public display of users’ likes and retweets is primarily what fuels so much Twitter discourse. On platforms such as Instagram, where your likes are private, the spread of posts and discourse is limited. You can comment, but unless you share the post to your story, that comment is only seen by other users who stumble across the same post. Twitter is also more about sharing written posts than just images, which makes it a platform centered on dialogue, as opposed to sharing mundane pictures of one’s life, which doesn’t spark as much conversation.
-