34 Matching Annotations
  1. Mar 2026
    1. As a potential worker in the tech industry, you might someday find yourself in a position where you have influence over how social media platforms are designed, programmed, or operated (e.g., you could be a programmer, or designer, or content moderator). We hope that if you find yourself in one of these positions, you consider the ethics of what you are doing. We hope you could then bring those concerns into how you design and implement automated systems for social media sites.

      I believe that society would be better off if everyone who went into the tech industry, or any industry that interacts with technology, was required to have a thorough understanding of standardized ethics. Furthermore, these ethics need to be upheld, especially when capitalism and colonialism intersect with technology.

    1. As a social media user, we hope you are informed about things like: how social media works, how they influence your emotions and mental state, how your data gets used or abused, strategies in how people use social media, and how harassment and spam bots operate.

      Social media overall would probably be much better if everyone had a strong understanding of and care for ethics. I know that after this course, I will look at and engage with social media differently.

    1. The tech industry is full of colonialist thinking and practices, some more subtle than others.

      Colonialism, eurocentrism, and androcentrism are also very prominent issues in tech and the world in general. Products are often designed to meet the needs of white men, since white men are often the ones designing these products. This issue can be addressed by fixing systemic discrimination in hiring.

    1. To increase profits, Meta wants to corner the market on social media. This means they want to get the most users possible to use Meta (and only Meta) for social media.

      In my opinion, late-stage capitalism is the reason social media is unhealthy and less fun. Apps are centered on ads and sponsorships while being designed to maximize the amount of time you spend scrolling, making social media highly addictive.

    1. On February 6, 2022, Jeremy Schneider became the Twitter “main character of the day” for posting the following Tweet, which was widely condemned as being mean and not understanding other people’s experiences:

      This is a good example of how public outrage can prompt an individual to rethink their actions and become more mindful. While his joke was a relatively minor controversy, his response shows how public reaction can lead someone to change their mind, something that is rare on the internet, where individuals often feel pressure to double down on their claims.

    1. We can also consider events in the #MeToo movement as at least in part public shaming of sexual harassers (but also of course solidarity and organizing of victims of sexual harassment, and pushes for larger political, organizational, and social changes).

      I'd argue that the #MeToo movement is powerful rather than harmful compared to the Halloween prank example because it didn't shame individuals. Rather, it shamed the culture around sexual harassment/assault, focusing on the issue as a whole rather than putting energy toward the perpetrators.

    1. When do you think crowd harassment is justified (or do you think it is never justified)?

      Crowd harassment can start with the right intentions, but it almost always goes too far. Think of cancel culture with celebrities. Even when harassing CEOs or politicians, the harassment ends up becoming noise instead of productive action.

    1. Do you believe crowd harassment is ever justified?

      No, crowd harassment is not justified because it can quickly become harmful and unfair. Even when people are upset for good reasons, it’s better to handle problems in safer and more constructive ways.

  2. Feb 2026
    1. When tasks are done through large groups of people making relatively small contributions, this is called crowdsourcing. The people making the contributions generally come from a crowd of people that aren’t necessarily tied to the task (e.g., all internet users can edit Wikipedia), but then people from the crowd either get chosen to participate, or volunteer themselves.

      I think crowdsourcing is one of the best ways to moderate and maintain online spaces, especially in tandem with a group of trained human moderators and AI moderators. Crowdsourcing is central to moderation because it values the opinions of users, the people the sites are created for.

    1. When looking at who contributes in crowdsourcing systems, or with social media in general, we almost always find that we can split the users into a small group of power users who do the majority of the contributions, and a very large group of lurkers who contribute little to nothing. For example, Nearly All of Wikipedia Is Written By Just 1 Percent of Its Editors, and on StackOverflow “A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.” We see the same phenomenon on Twitter (Fig. 16.3: Summary of Twitter use, by Pew Research Center). This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      This is an interesting phenomenon. I wonder why there is such an imbalance, and why it tends to be so prevalent across different platforms. It makes sense on platforms like Twitter and Instagram, where many users choose not to post and instead just view others' posts (aka lurkers), but on platforms where all or most users post, I wonder why this happens.

    1. What support should content moderators have from social media companies and from governments?

      Moderation is a field that contains a lot of gray area — it is important to consider prioritizing what needs to be taken down first, and then determining what is even worth removing. Clearer legislation is the first step to making moderators’ lives easier.

    1. Reddit is composed of many smaller discussion boards, called subreddits. These subreddits range from friendly to very toxic, with different moderators in charge of each subreddit. Reddit as a larger platform decided to ban and remove some of its most toxic and hateful subreddits, including r/c***town (note: I censored out a racial slur for Black people), and r/fatpeoplehate. In a study of what happened after this ban: Post-ban, hate speech by the same users was reduced by as much as 80-90 percent. […] “Members of banned communities left Reddit at significantly higher rates than control groups. […] Migration was common, both to similar subreddits (i.e. overtly racist ones) and tangentially related ones (r/The_Donald). […] However, within those communities, hate speech did not reliably increase, although there were slight bumps as the invaders encountered and tested new rules and moderators.

      An important thing to consider when imposing censorship upon a certain platform is the likelihood of certain groups/communities of just migrating to another platform with less censorship. Is banning these groups the most effective way to curb hate speech?

    1. “Incel” is short for “involuntarily celibate,” meaning they are men who have centered their identity on wanting to have sex with women, but with no women “giving” them sex. Incels objectify women and sex, claiming they have a right to have women want to have sex with them. Incels believe they are being unfairly denied this sex because of the few sexually attractive men (”Chads”), and because feminism told women they could refuse to have sex. Some incels believe their biology (e.g., skull shape) means no women will “give” them sex. They will be forever alone, without sex, and unhappy. The incel community has produced multiple mass murderers and terrorist attacks.

      The internet certainly accelerates dangerous communities, especially when users are lonely and struggle with mental health. The incel community has only continued to expand, developing into the looksmaxxing and blackpill communities of today and even connecting with major right-wing content creators and politicians.

    1. “If [social media] was just bad, I’d just tell all the kids to throw their phone in the ocean, and it’d be really easy. The problem is it - we are hyper-connected, and we’re lonely. We’re overstimulated, and we’re numb. We’re expressing our self, and we’re objectifying ourselves. So I think it just sort of widens and deepens the experiences of what kids are going through. But in regards to social anxiety, social anxiety - there’s a part of social anxiety I think that feels like you’re a little bit disassociated from yourself. And it’s sort of like you’re in a situation, but you’re also floating above yourself, watching yourself in that situation, judging it. And social media literally is that. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it, watch people watch them, watch people watch them watch them. My sort of impulse is like when the 13 year olds of today grow up to be social scientists, I’ll be very curious to hear what they have to say about it. But until then, it just feels like we just need to gather the data.” Director Bo Burnham On Growing Up With Anxiety — And An Audience - NPR Fresh Air (10:15-11:20)

      This quote shows why it’s so hard to address most issues around social media. It’s almost a necessary evil, as it is an essential part of all of our lives. It is difficult to keep up with real life if you’re not connected online too, so solutions to social media addiction or negative effects must be found on the other end — the developers’ end.

    1. Building off of the amplification polarization and negativity, there are concerns (and real examples) of social media (and their recommendation algorithms) radicalizing people into conspiracy theories and into violence.

      Echo chambers form rapidly, especially on platforms whose designs and recommendation algorithms accelerate them. This can be very dangerous.

    1. Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

      It seems like most social media platforms nowadays prioritize reactivity and anger in their algorithms. A lot of platforms show me discourse or controversial videos in order to maximize engagement, but these social media structures appear to be affecting real-life behavior, especially of children.

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for. And unfortunately, as researcher Dr. Cynthia Bennett points out, disabled people are often excluded from designing for themselves, or even when they do participate in the design, they aren’t considered to be the “real designers.” You can see Dr. Bennett’s research talk on this in the following YouTube video:

      I think this shows the importance of representation in programming and of testing technology before it is launched. Even better is having representation among the people developing these products. When people with marginalized identities can’t achieve success in programming, others who share those identities end up with products they can’t fully enjoy either.

    1. Which abilities are expected of people, and therefore what things are considered disabilities, are socially defined. Different societies and groups of people make different assumptions about what people can do, and so what is considered a disability in one group, might just be “normal” in another. There are many things we might not be able to do that won’t be considered disabilities because our social groups don’t expect us to be able to do them. For example, none of us have wings that we can fly with, but that is not considered a disability, because our social groups didn’t assume we would be able to. Or, for a more practical example, let’s look at color vision:

      In a world of ever-growing diversity, it is vital to keep in mind all the different types of people who might be using a certain technology. Many websites and apps now have accessibility settings that account for disabilities and accessibility needs, but these are inconsistent and constantly being revamped. Standardizing and requiring accessibility settings would be helpful, since it would ensure that everyone can use different platforms and have a good experience.

    1. Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women

      Internally, there should be stronger safeguards to prevent employees from abusing their access. There are many ways to do this technologically and socially: enforce strict access rules, audit who views what data, and use stronger encryption so that sensitive data isn’t casually readable.

    1. For example, a social media application might offer us a way of “Private Messaging” (also called Direct Messaging) with another user. But in most cases those “private” messages are stored in the computers at those companies, and the company might have computer programs that automatically search through the messages, and people with the right permissions might be able to view them directly.

      This makes me think of the Tea app data leak, in which almost all users had their information exposed because the developer allegedly stored it all in a public Google Drive. Certain security and privacy standards should be mandatory when creating a social platform, even (or especially) if a developer is inexperienced.

    1. For example, social media data about who you are friends with might be used to infer your sexual orientation. Social media data might also be used to infer people’s race, political leanings, interests, susceptibility to financial scams, and proneness to addiction (e.g., gambling). Additionally, groups keep trying to re-invent old debunked pseudo-scientific (and racist) methods of judging people based on facial features (size of nose, chin, forehead, etc.), but now using artificial intelligence.

      With so much information about individuals publicly available in the age of social media, it is very easy to find information about anyone, whether you are a hacker who can access specific metadata or just a passing user who views a profile. It is interesting to observe how people’s behavior has changed over the last decade as our lives become more and more public.

    1. Social media platforms collect various types of data on their users.

      Data collection has a variety of gains and drawbacks for both parties, the platform and the users. But it is important to maintain ethics when collecting data, by asking questions such as: what are the legal limits on how much data can be collected, and how can that data be used?

  3. Jan 2026
    1. 7.3.4. RIP trolling: RIP trolling is where trolls find a memorial page and then all work together to mock the dead person and the people mourning them. Here’s one example from 2013: A Facebook memorial page dedicated to Matthew Kocher, who drowned July 27 in Lake Michigan, had attracted a group of Internet vandals who mocked the Tinley Park couple’s only child, posting photos of people drowning with taunting comments superimposed over the images. One photo showed a submerged person’s hand breaking through the water with text reading “LOL u drowned you fail at being a fish,” according to a screen grab of the page shared with the Tribune after the post was removed. (“Cruel online posts known as RIP trolling add to Tinley Park family’s grief,” from the Chicago Tribune.) 7.3.5. Flooding Police app with K-pop videos: To go in a different direction for our last example, let’s look at an example of trolling as a form of protest. In the Black Lives Matter protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could, eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at police surveillance.

      Comparing these two examples (RIP trolling and protest trolling) highlights the stark differences between different kinds of trolling. Trolling, like many other things on the internet, isn’t inherently good or bad; it is driven by intention, and it can be used for good or for ill.

    1. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that both outlined their trolling philosophy: Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it

      There are lots of examples of widespread/mass trolling on 4chan changing entire communities or corners of the internet. When interacting with an individual troll, the consequences don’t go much further than mild annoyance, but when several people come together to collectively troll, they can cause real-world consequences.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      Naturally, it’s impossible for anyone to truly capture their entire selves in a social media profile. Everything is some type of performance to a certain degree, whether that’s informed by social media trends and behaviors or digital interactions.

    1. There are many ways inauthenticity shows up on internet-based social media, such as: Catfishing: creating a fake profile that doesn’t match the actual user, usually in an attempt to trick or scam someone. Sockpuppet (or a “burner” account): creating a fake profile in order to argue a position (sometimes intentionally argued poorly to make the position look bad).

      I think inauthenticity is just an inevitable thing about social media if not a defining factor. Nowadays with so many bots rampant across different platforms, and AI generated, automated content (i.e. the Dead Internet Theory), social media has become characterized by fake stories and lies. Inauthenticity ranges from someone lying about an anecdote to full-blown fake news.

    1. One famous example of reducing friction was the invention of infinite scroll.

      Ethically, it is important to consider the implications of reducing friction in web design. While it makes the user experience more convenient and comfortable, it also promotes overuse of social media, creating a “doomscrolling trap” that contributes to compulsive technology use.
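      The friction-removal idea can be made concrete with a small sketch. This is a hypothetical simulation, not any platform's real code: all the names (`fetch_page`, `on_scroll`) and the stand-in post data are invented. It shows how infinite scroll replaces an explicit "next page" click with an automatic load as the user nears the bottom of the feed.

      ```python
      # Hypothetical sketch of infinite scroll: the next page is loaded
      # automatically, so the user never hits a stopping point.
      POSTS = [f"post {i}" for i in range(100)]  # stand-in content
      PAGE_SIZE = 10

      def fetch_page(page_num):
          """Return one page of posts, like a paginated API would."""
          start = page_num * PAGE_SIZE
          return POSTS[start:start + PAGE_SIZE]

      def on_scroll(feed, position):
          """When the user scrolls within a few posts of the end, silently
          append the next page -- the friction of clicking 'next' never occurs."""
          if position >= len(feed) - 3:
              next_page = len(feed) // PAGE_SIZE
              feed.extend(fetch_page(next_page))
          return feed

      feed = fetch_page(0)                 # user opens the app: 10 posts
      feed = on_scroll(feed, position=8)   # nears the bottom: 10 more appear
      print(len(feed))  # 20
      ```

      The design choice to notice: the user never takes a deliberate "load more" action, so the natural stopping point that pagination provided is gone.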

    1. Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites?

      When comparing posts on Twitter/X and Instagram, you can see that an Instagram post must contain an image and the user who posted it, and can include additional info such as a text caption, music, location, etc. A tweet shows the text and the user who posted it but has limitations on what else can be added, although links and images are supported. Both offer similar actions such as likes, comments, shares, and, more recently, reposts.

    1. So all data that you might find is a simplification. There are many seemingly simple questions that in some situations or for some people, have no simple answers, questions like: What country are you from? What if you were born in one country, but moved to another shortly after? What if you are from a country that no longer exists like Czechoslovakia? Or from an occupied territory? How many people live in this house? Does a college student returning home for the summer count as living in that house? How many words are in this chapter? Different programs use different rules for what counts as a “word” E.g., this page has “2 + 2 = 4”, which Microsoft Word counts as 5 words, and Google Docs counts as 3 words.

      Simplifying data may frequently be convenient when creating a widely-applicable program, but it involves leaving at least one group or perspective out. Because of this, simplification of data often contains inherent bias and developers should be aware of this.
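      The chapter's own word-count example is easy to reproduce. Here is a minimal sketch of two plausible counting rules disagreeing on the same string, mirroring the "Word counts 5, Docs counts 3" point above; the rules are my own guesses at what each program might do, not their actual implementations.

      ```python
      import re

      text = "2 + 2 = 4"

      # Rule A: any whitespace-separated token is a word, so "+" and "=" count.
      whitespace_words = len(text.split())

      # Rule B: only runs of letters/digits count as words.
      alnum_words = len(re.findall(r"[A-Za-z0-9]+", text))

      print(whitespace_words)  # 5
      print(alnum_words)       # 3
      ```

      Neither rule is "the" definition of a word; each is a simplification, which is exactly the chapter's point.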

    2. The data in question here is over what percentage of Twitter users are spam bots, which Twitter claimed was less than 5%, and Elon Musk claimed is higher than 5%

      In the modern age, it is important to understand what truly counts as a “bot” considering the intricacies of automation and the philosophical question of autonomous AI. Can we consider bots as a valid representation of general public consensus as they become more prevalent?

    1. How are people’s expectations different for a bot and a “normal” user?

      People typically don’t expect to glean much useful information from bots. In my case, at least, I would typically block or ignore them. Additionally, it’s often easy to identify a bot, but in the age of AI these lines are becoming more blurred.

    1. Antagonistic bots can also be used as a form of political pushback that may be ethically justifiable. For example, the “Gender Pay Gap Bot” bot on Twitter is connected to a database on gender pay gaps for companies in the UK. Then on International Women’s Day, the bot automatically finds when any of those companies make an official tweet celebrating International Women’s Day and it quote tweets it with the pay gap at that company:

      I think it’s interesting how bots are frequently used to push a certain message, often a political one. While this can boost positive movements and spread information, it is also a dangerous capability that can misrepresent where most people actually stand.
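      The Gender Pay Gap Bot's core logic is simple enough to sketch. This is a hypothetical illustration only: the pay-gap figures, company names, trigger phrase, and function name below are all invented, and the real bot uses the UK gender pay gap database and the Twitter API rather than a dictionary lookup.

      ```python
      # Made-up stand-in for the UK pay gap database (% median hourly gap).
      PAY_GAPS = {"ExampleCo": 18.5, "OtherCorp": 9.2}

      def quote_tweet_reply(author, tweet_text):
          """If a company in the database tweets about Women's Day,
          return the quote-tweet text; otherwise return None."""
          if author in PAY_GAPS and "Women's Day" in tweet_text:
              return (f"In this organisation, women's median hourly pay is "
                      f"{PAY_GAPS[author]}% lower than men's.")
          return None

      reply = quote_tweet_reply("ExampleCo", "Happy International Women's Day!")
      print(reply)
      ```

      The interesting property is that the bot adds no opinion of its own: it only juxtaposes a company's celebratory tweet with that company's own reported data.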

    1. We can’t give every example, but here is a range of different things social media platforms do (though this is all an oversimplification).

      A major issue with “social media” and trying to apply ethical safeguards to it, is that social media is a vast collection of digital platforms. This vastness makes social media hard to define, and thus it becomes difficult to address issues on a wide scale, for example through law. Additionally, social media varies across different regions, peoples, and languages, and similarly, so do ethics.

    1. Natural Rights# Locke: Everyone has a right to life, liberty, and property Jefferson in the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Discussions of “human rights” fit in the Natural Rights ethics framework

      Natural rights are certainly the basis for most modern ethics. Everyone is created equal, and therefore in designing laws and regulations, we must ensure that everyone is treated equally. Of course, this isn’t always effective, because “equal” or “equitable” treatment isn’t easy to define, and those who create the regulations/laws have their own biases.