24 Matching Annotations
  1. Last 7 days
    1. 13.5.4. Search through news submissions and only display good news

Now we will make a different version of the code that computes the sentiment of each submission title and only displays the ones with positive sentiment.

# Look up the subreddit "news", then find the "hot" list, getting up to 10 submissions
submissions = reddit.subreddit("news").hot(limit=10)

# Turn the submission results into a Python list
submissions_list = list(submissions)

# Go through each Reddit submission
for submission in submissions_list:
    # Calculate the sentiment of the submission title
    title_sentiment = sia.polarity_scores(submission.title)["compound"]
    if title_sentiment > 0:
        print(submission.title)
        print()

Fake praw is pretending to select the subreddit: news
Breaking news: A lovely cat took a nice long nap today!
Breaking news: Some grandparents made some yummy cookies for all the kids to share!

13.5.5. Try it out on real Reddit

If you want, you can skip the fake_praw step and try it out on real Reddit, from whatever subreddit you want. Did it work like you expected? You can also show only negative sentiment submissions (sentiment < 0) if you want to see only bad news.
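As a minimal sketch of that negative-sentiment variant, only the comparison needs to flip (this assumes the same reddit and sia objects set up earlier in the chapter):

# Show only submissions whose title sentiment is negative
for submission in reddit.subreddit("news").hot(limit=10):
    title_sentiment = sia.polarity_scores(submission.title)["compound"]
    if title_sentiment < 0:
        print(submission.title)
        print()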

      This demo was interesting because it shows how algorithms can intentionally shape what users see. Filtering for only positive news might seem helpful for improving mental health, but it also raises questions about whether hiding negative information creates a distorted view of reality. I also noticed that sentiment analysis is a simplified way to judge content, since tone and context can be more complex than just positive or negative. Overall, this example shows how small design decisions in algorithms can significantly influence users’ emotional experiences online.

    1. 13.1.1. Digital Detox?

Some people view internet-based social media (and other online activities) as inherently toxic and therefore encourage a digital detox, where people take some form of a break from social media platforms and digital devices. While taking a break from parts or all of social media can be good for someone’s mental health (e.g., doomscrolling is making them feel more anxious, or they are currently getting harassed online), viewing internet-based social media as inherently toxic and trying to return to an idyllic time from before the Internet is not a realistic or honest view of the matter.

In her essay “The Great Offline,” Lauren Collee argues that this is just a repeat of earlier views of city living and the “wilderness.” As white Americans were colonizing the American continent, they began idealizing “wilderness” as being uninhabited land (ignoring the Indigenous people who already lived there, or kicking them out or killing them). In the 19th century, as wilderness tourism was taking off as an industry,

  natural landscapes were figured as an antidote to the social pressures of urban living, offering truth in place of artifice, interiority in place of exteriority, solitude in place of small talk.

Similarly, advocates for digital detox build an idealized “offline” separate from the complications of modern life:

  Sherry Turkle, author of Alone Together, characterizes the offline world as a physical place, a kind of Edenic paradise. “Not too long ago,” she writes, “people walked with their heads up, looking at the water, the sky, the sand” — now, “they often walk with their heads down, typing.” […] Gone are the happy days when families would gather around a weekly televised program like our ancestors around the campfire!

But Lauren Collee argues that by placing the blame on the use of technology itself and making not using technology (a digital detox) the solution, we lose our ability to deal with the nuances of how we use technology and how it is designed:

  I’m no stranger to apps that help me curb my screen time, and I’ll admit I’ve often felt better for using them. But on a more communal level, I suspect that cultures of digital detox — in suggesting that the online world is inherently corrupting and cannot be improved — discourage us from seeking alternative models for what the internet could look like. I don’t want to be trapped in cycles of connection and disconnection, deleting my social media profiles for weeks at a time, feeling calmer but isolated, re-downloading them, feeling worse but connected again. For as long as we keep dumping our hopes into the conceptual pit of “the offline world,” those hopes will cease to exist as forces that might generate change in the worlds we actually live in together.

So in this chapter, we will not consider internet-based social media as inherently toxic or beneficial for mental health. We will be looking for more nuance and where things go well, where they do not, and why.

      This section does a good job showing that the relationship between social media and mental health is complex rather than purely positive or negative. I found the example of Facebook’s mood experiment especially interesting because it raises ethical concerns about consent and manipulation, not just mental health outcomes. The discussion of digital detox was also thoughtful, particularly the idea that blaming technology itself may prevent us from improving how platforms are designed. Overall, this reading encourages a more nuanced understanding of social media’s impact instead of oversimplifying it as either harmful or beneficial.

  2. Feb 2026
    1. 12.7. Activity: Value statements in what goes viral

12.7.1. Choose three scenarios

When content goes viral there may be many people with a stake in its going viral, such as:

  - The person (or people) whose content or actions are going viral, who might want attention, or get financial gain, or might be embarrassed or might get criticism or harassment, etc. Different people involved might have different interests. Some may not have awareness of it happening at all (like a video of an infant).
  - Different audiences might have interests such as curiosity, or desire to bring justice to a situation, or desire to get attention for themselves or their ideas by engaging with the viral content, or desire to troll or harass others.
  - Social networking platforms might have interests such as increased attention to their platform, or increased advertising, or increased or decreased reputation (in the views of different audiences).

List at least three different scenarios of content going viral and list out the interests of different groups and people in the content going viral.

12.7.2. Create value statements

Social media platforms have some ability to influence what goes viral and how (e.g., recommendation algorithms, what actions are available, what data is displayed, etc.), though they only have partial control, since human interaction and organization also play a large role. Still, regardless of whether we can force any particular outcome, we can still consider what you think would be best for what content should go viral, how much, and in what ways. Create a set of value statements for when and how you ideally would want content to go viral. Try to come up with at least 10 value statements. We encourage you to consider different ethics frameworks as you try to come up with ideas.

      This section clearly shows that virality isn’t neutral and always involves tradeoffs between different groups. I liked how the examples highlight that what benefits platforms or audiences can still harm individuals, especially through misinformation or loss of privacy. It also made me think more about how recommendation systems should reflect ethical values, not just engagement metrics.

    1. 12.2.1. Books

The book Writing on the Wall: Social Media - The First 2,000 Years describes how, before the printing press, when someone wanted a book, they had to find someone who had a copy and have a scribe make a copy. So books that were popular spread through people having scribes copy each other’s books. And with all this copying, there might be different versions of the book spreading around, because of scribal copying errors, added notes, or even the original author making an updated copy. So we can look at the evolution of these books: which got copied, and how they changed over time.

12.2.2. Chain letters

When physical mail was dominant in the 1900s, one type of mail that spread around the US was a chain letter. Chain letters were letters that instructed the recipient to make their own copies of the letter and send them to people they knew. Some letters gave as the reason to make copies a pyramid scheme, where you were supposed to send money to the people you got the letter from, and then the people you sent the letter to would give you money. Other letters claimed that if recipients made copies, good things would happen to them, and if not, bad things would, like this:

  You will receive good luck within four days of receiving this letter, providing, you in turn send it on. […] An RAF officer received $70,000 […] Gene Walsh lost his wife six days after receiving the letter. He failed to circulate the letter.

Fig. 12.2 An example chain letter from https://cs.uwaterloo.ca/~mli/chain.html.

The spread of these letters meant that people were putting in effort to spread them (presumably believing making copies would make them rich or help them avoid bad luck). To make copies, people had to manually write or type up their own copies of the letters (or later, with photocopiers, find a machine and pay to make copies). Then they had to pay for envelopes and stamps to send them in the mail. As these letters spread, we could consider what factors made some chain letters (and modified versions) spread more than others, and how the letters got modified as they spread.

      This section is a fun and clear way to show that “going viral” isn’t just an internet phenomenon. The examples of books, chain letters, and sourdough starters nicely illustrate how ideas and practices spread through effort, incentives, and social networks long before digital platforms existed. I especially like the chain letter example because it clearly shows how emotional pressure and fear helped drive sharing, which feels very similar to modern online virality. Overall, this makes the concept of cultural evolution and memes much more concrete and easy to understand.

    1. 11.4.1. Filter Bubbles

One concern with recommendation algorithms is that they can create filter bubbles (or “epistemic bubbles” or “echo chambers”), where people get filtered into groups and the recommendation algorithm only gives people content that reinforces and doesn’t challenge their interests or beliefs. These echo chambers allow people in the groups to freely have conversations among themselves without external challenge. The filter bubbles can be good or bad, such as forming bubbles for:

  - Hate groups, where people’s hate and fear of others gets reinforced and never challenged
  - Fan communities, where people’s appreciation of an artist, work of art, or something else is assumed, and then reinforced and never challenged
  - Marginalized communities, which can find safe spaces where they aren’t constantly challenged or harassed (e.g., a safe space)

11.4.2. Amplifying Polarization and Negativity

There are concerns that echo chambers increase polarization, where groups lose common ground and the ability to communicate with each other. In some ways echo chambers are the opposite of context collapse: contexts are created and prevented from collapsing. Though others have argued that people do interact across these echo chambers, the contentious nature of their interactions increases polarization. Along those lines, if social media sites simply amplify content that gets strong reactions, they will often amplify the most negative and polarizing content. Recommendation algorithms can make this even worse. For example:

  - At one point, Facebook counted the default “like” reaction less than the “anger” reaction, which amplified negative content (see the sketch after this excerpt).
  - On Twitter, one study found (full article on archive.org): “Whereas Google gave higher rankings to more reliable sites, we found that Twitter boosted the least reliable sources, regardless of their politics.”
  - According to another study on Twitter: “An analysis […] suggested that when users swarm tweets to denounce them with quote tweets and replies, they might be cueing Twitter’s algorithm to see them as particularly engaging, which in turn might be prompting Twitter to amplify those tweets. The upshot is that when people enthusiastically gather to denounce the latest Bad Tweet of the Day, they may actually be ensuring more people see it than had they never decided to pile on in the first place. That possibility raises serious questions of what constitutes responsible civic behavior on Twitter and whether the platform is in yet another way incentivizing combative behavior.”

Though this is a big concern about Internet-based social media, traditional media sources also play into this. For example, this study: Cable news has a much bigger effect on America’s polarization than social media, study finds. Note: polarization itself is not necessarily bad (do we want to make everyone believe the exact same thing?), and some argue that in some situations polarization is even a good thing.

11.4.3. Radicalization

Building off of the amplification of polarization and negativity, there are concerns (and real examples) of social media (and their recommendation algorithms) radicalizing people into conspiracy theories and into violence.

Rohingya Genocide in Myanmar

A genocide of the Rohingya people in Myanmar started in 2016, and in 2018 Facebook admitted it was used to ‘incite offline violence’ in Myanmar. In 2021, the Rohingya sued Facebook for £150bn over how Facebook amplified hate speech and didn’t take down inflammatory posts.

The Flat Earth Movement

The flat earth movement (an absurd conspiracy theory that the earth is actually flat, and not a globe) gained popularity in the 2010s. As YouTuber Dan Olson explains it in his (rather long) video In Search of a Flat Earth:

  Modern Flat Earth [movement] was essentially created by content algorithms trying to maximize retention and engagement by serving users suggestions for things that are, effectively, incrementally more concentrated versions of the thing they were already looking at. Bizarre cranks peddling random theories are an aspect of civilization that has always been with us, so it was inevitable that they would end up on YouTube, but the algorithm made sure they found an audience. These systems were accidentally identifying people susceptible to conspiratorial and reactionary thinking and sending them increasingly deeper into Flat Earth evangelism.

Dan Olson then explained that by 2020, the flat earth content was getting fewer views:

  The bottom line is that Flat Earth has been slowly bleeding support for the last several years. Because they’re all going to QAnon.

See also: YouTube aids flat earth conspiracy theorists, research suggests

11.4.4. Discussion Questions

  - What responsibilities do you think social media platforms should have in regards to larger social trends?
  - Consider impact vs. intent. For example, consequentialism only cares about the impact of an action. How do you feel about the importance of impact and intent in the design of recommendation algorithms?
  - What strategies do you think might work to improve how social media platforms use recommendations?
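To make the Facebook reaction example concrete, here is a minimal, hypothetical sketch of reaction-weighted engagement ranking; the weights and post data are invented for illustration and are not Facebook's actual values:

# Hypothetical reaction weights: if "anger" counts more than "like",
# the most anger-provoking posts rise to the top of the ranking
weights = {"like": 1, "anger": 5}

posts = [  # invented example data
    {"text": "Cute puppy!", "like": 900, "anger": 10},
    {"text": "Outrageous news!", "like": 200, "anger": 400},
]

def engagement_score(post):
    # Total up each reaction count multiplied by its weight
    return sum(weights[reaction] * post[reaction] for reaction in weights)

# Rank posts by weighted engagement, highest first:
# the angry post (score 2200) now beats the well-liked one (score 950)
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])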

      This section does a great job showing how recommendation algorithms can unintentionally amplify polarization and even contribute to radicalization. The examples (Facebook reactions, Twitter quote-tweet dynamics, and the flat earth → QAnon pipeline) clearly illustrate how engagement-based systems can reward negativity and extreme content. I also appreciate the nuance at the end that polarization itself isn’t always bad, which keeps the discussion balanced rather than alarmist. Overall, this is a clear, well-supported explanation of why algorithmic design choices have serious social consequences beyond individual user intent.

    1. 11.2. Ethical Analysis of Recommendation Algorithms

When we look at ethics and responsibility in regards to recommendation algorithms, it can be helpful to consider the difference between individual analysis and systemic analysis.

11.2.1. Individual vs. Systemic Analysis

Individual analysis focuses on the behavior, bias, and responsibility an individual has, while systemic analysis focuses on how organizations and rules may have their own behaviors, biases, and responsibility that aren’t necessarily connected to what any individual inside intends. For example, there were differences in US criminal sentencing guidelines between crack cocaine vs. powder cocaine in the 90s. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people. Therefore, when these guidelines were followed, they had racially biased (that is, racist) outcomes regardless of the intent or bias of the individual judges. (See: https://en.wikipedia.org/wiki/Fair_Sentencing_Act)

11.2.2. Recommendation Algorithms as Systems

Similarly, recommendation algorithms are rules set in place that might produce biased, unfair, or unethical outcomes. This can happen whether or not the creators of the algorithm intended these outcomes. Once these algorithms are in place, though, they have an influence on what happens on a social media site. Individuals still have responsibility for how they behave, but the system itself may be set up so that individual efforts cannot overcome the problems in the system.

Fig. 11.1 A tweet highlighting the difference between structural problems (systemic analysis) and personal choices (individual analysis).

Sometimes, though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitter’s recommendation algorithm, blames the users for the results:

Fig. 11.2 A tweet from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior.

Elon Musk’s view expressed in that tweet is different from some of the ideas of the previous owners, who at least tried to figure out how to make Twitter’s algorithm support healthier conversation. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithm’s influence.

      This section clearly explains the difference between individual and systemic responsibility, and the sentencing example makes the idea of systemic bias very concrete and easy to understand. I especially like how recommendation algorithms are framed as systems that can produce harmful outcomes even without bad intent from individual designers or users. The contrast between blaming users and addressing structural problems is effective, and the tweets help connect theory to real-world discourse. Overall, this is a strong and thought-provoking explanation of why ethical analysis of algorithms needs to go beyond individual behavior.

    1. 10.5. Design Analysis: Accessibility

We want to provide you, the reader, a chance to explore accessibility more. In this activity you will be looking at a social media site on your device (e.g., your phone or computer). We will again follow the five-step CIDER method (Critique, Imagine, Design, Expand, Repeat). So open a social media site on your device (the website or app may have additional accessibility settings, but don’t use those for now; just consider how it works as it is currently). Then do the following (preferably on paper or in a blank computer document):

10.5.1. Critique (3-5 minutes, by yourself): What assumptions do the site and your device make about individuals or groups using social media, which might not be true or might cause problems? List as many as you can think of (bullet points encouraged).

10.5.2. Imagine (2-3 minutes, by yourself): Select one of the above assumptions that you think is important to address. Then write a 1-2 sentence scenario where a user faces difficulties because of the assumption you selected. This represents one way the design could exclude certain users.

10.5.3. Design (3-5 minutes, by yourself): Brainstorm ways to change the site or your device to avoid the scenario you wrote above. List as many different kinds of potential solutions you can think of – aim for ten or more (bullet points encouraged).

10.5.4. Expand (5-10 minutes, with others): Combine your list of critiques with someone else’s (or if possible, have a whole class combine theirs).

10.5.5. Repeat the Imagine and Design Tasks: Select another assumption from the list above that you think is important to address. Make sure to choose a different assumption than you used before. Choose one that you didn’t come up with yourself, if possible. Repeat the Imagine and Design steps.

10.5.6. Explore accessibility settings: Now, try to find the accessibility settings on the social media site and on your device. For each setting you see, try to come up with what disabilities that setting would be beneficial for (there may be multiple).

      This activity is a really effective way to make accessibility feel concrete instead of abstract. By starting with critique and assumptions, it highlights how many “default” design choices silently exclude users before accessibility settings are even considered. I especially like how the Imagine and Design steps force you to think through a specific user’s experience and then brainstorm multiple solutions, rather than jumping straight to a single fix. Ending with exploring existing accessibility settings also reinforces that accessibility is often an afterthought in design, even though it should be part of the core system from the beginning.

    1. 10.2. Accessible Design

There are several ways of managing disabilities. All of these ways of managing disabilities might be appropriate at different times for different situations.

10.2.1. Coping Strategies

Those with disabilities often find ways to cope with their disability, that is, find ways to work around difficulties they encounter and seek out places and strategies that work for them (whether realizing they have a disability or not). Additionally, people with disabilities might change their behavior (whether intentionally or not) to hide the fact that they have a disability, which is called masking and may take a mental or physical toll on the person masking, which others around them won’t realize. For example, kids who are nearsighted and don’t realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges)1. This way of managing disabilities puts the burden fully on disabled people to manage their disability in a world that was not designed for them, trying to fit in with “normal” people.

10.2.2. Modifying the Person

Another way of managing disabilities is assistive technology, which is something that helps a disabled person act as though they were not disabled. In other words, it is something that helps a disabled person become more “normal” (according to whatever a society’s assumptions are). For example:

  - Glasses help people with near-sightedness see in the same way that people with “normal” vision do
  - Walkers and wheelchairs can help some disabled people move around closer to the way “normal” people can (though stairs can still be a problem)
  - A spoon might automatically balance itself when held by someone whose hands shake
  - Stimulants (e.g., caffeine, Adderall) can increase executive function in people with ADHD, so they can plan and complete tasks more like how neurotypical people do

Assistive technologies give tools to disabled people to help them become more “normal.” So the disabled person becomes able to move through a world that was not designed for them. But there is still an expectation that disabled people must become more “normal,” and often these assistive technologies are very expensive. Additionally, attempts to make disabled people (or people with other differences) act “normal” can be abusive, such as Applied Behavior Analysis (ABA) therapy for autistic people, or “Gay Conversion Therapy.”

10.2.3. Making an environment work for all

Another strategy for managing disability is to use Universal Design, which originated in architecture. In universal design, the goal is to make environments and buildings have options so that there is a way for everyone to use them2. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor.

In this way of managing disabilities, the burden is put on the designers to make sure the environment works for everyone, though disabled people might need to go out of their way to access features of the environment.

10.2.4. Making a tool adapt to users

When creating computer programs, programmers can do things that aren’t possible with architecture (where Universal Design came out of), that is: programs can change how they work for each individual user. All people (including disabled people) have different abilities, and making a system that can modify how it runs to match the abilities a user has is called ability-based design. For example, a phone might detect that the user has gone from a dark to a light environment, and might automatically change the phone brightness or color scheme to be easier to read. Or a computer program might detect that a user’s hands tremble when they are trying to select something on the screen, and the computer might change the text size, or try to guess the intended selection (see the sketch after this excerpt). In this way of managing disabilities, the burden is put on the computer programmers and designers to detect and adapt to the disabled person.

10.2.5. Are things getting better?

We could look at inventions of new accessible technologies and think the world is getting better for disabled people. But in reality, it is much more complicated. Some new technologies make improvements for some people with some disabilities, but other new technologies are continually being made in ways that are not accessible. And, in general, cultures shift in many ways all the time, making things better or worse for different disabled people.

1 We’ve also noticed many YouTube video essayists have mentioned having ADHD. This is perhaps another job that attracts those who tend to hyperfocus on whatever topic grabbed their attention, and then after releasing their video, move on to something completely different.

2 Universal Design has taken some criticism. Some have updated it, such as in acknowledging that different people’s needs may be contradictory, and others have replaced it with frameworks like Inclusive Design.
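A minimal sketch of the ability-based design idea from 10.2.4, assuming a hypothetical app that notices when taps keep missing their targets; the tremor measure and threshold are invented for illustration:

def adapt_text_size(base_size, recent_tap_misses):
    """Increase text size when taps keep missing their targets,
    a possible sign of hand tremor (hypothetical heuristic)."""
    if recent_tap_misses > 5:
        return base_size * 1.5  # make targets easier to hit
    return base_size

print(adapt_text_size(12, recent_tap_misses=8))  # prints 18.0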

      This section does a great job comparing different ways of managing disability and, more importantly, showing how each approach places responsibility on different people. Coping strategies and modifying the person often shift the burden onto disabled individuals, asking them to adapt or appear “normal” in environments that were not designed for them. In contrast, universal design and ability-based design move that responsibility to designers and programmers, emphasizing systems that work for a wider range of users. I also appreciated the final point that accessibility is not a linear story of progress—new technologies can improve access for some people while creating new barriers for others, making accessibility an ongoing design challenge rather than a solved problem.

    1. 4.1.2. Basic Data Types

First, we’ll look at a few basic data storage types. We’ll also be including some code examples you can look at, though don’t worry yet if you don’t understand the code, since we’ll be covering these in more detail throughout the rest of the book.

Booleans (True / False)

Binary consisting of 0s and 1s makes it easy to represent true and false values, where 1 often represents true and 0 represents false. Most programming languages have built-in ways of representing True and False values.

Fig. 4.4 A blue checkmark is something an account either has or doesn’t, so it can be stored as a binary value.

Booleans are often created when doing some sort of comparison or test, like: Do I have enough money in my wallet to pay for the item? Does this tweet start with “hello” (meaning it is a greeting)?

# Save a boolean value in a variable called does_user_have_blue_checkmark
does_user_have_blue_checkmark = True

# Save a boolean value in a variable based on a comparison.
# The code checks if a wallet has more in it than the cost of the item,
# which will be True or False, and be saved in has_enough_money
has_enough_money = money_in_wallet > cost_of_item

# Save a boolean value in a variable based on a function call.
# The code checks if the text of a tweet (stored in tweet_text) starts
# with "Hello", which will be True or False, and be saved in is_greeting
is_greeting = tweet_text.startswith("Hello")

Numbers

Numbers are normally stored in two different ways:

  - Integers: whole numbers like 5, 37, -10, and 0
  - Floating point numbers: these can represent decimals like 0.75, -1.333, and 3 x 10^8

Fig. 4.5 The number of replies, retweets, and likes can be represented as integer numbers (197.8K can be stored as a whole number like 197,800).
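A minimal sketch of the two number types, in the same style as the boolean example above (the variable names are ours, not from the book):

# Integers store whole numbers exactly, like a count of likes
number_of_likes = 197800

# Floating point numbers can hold decimals, like an average
average_rating = 11.7

# Dividing integers in Python produces a floating point result
likes_per_day = number_of_likes / 30
print(likes_per_day)  # prints 6593.333333333333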

      This section helped me clearly see how different data types represent different kinds of information. Booleans are especially interesting because they force complex situations into true/false decisions, which can oversimplify reality. It also made me realize how choices about numbers and strings affect what computers can accurately store and how much meaning might be lost through rounding or categorization.

    1. Dictionaries

The other method of grouping data that we will discuss here is called a “dictionary” (sometimes also called a “map”). You can think of this as like a language dictionary where there is a word and a definition for each word. Then you can look up any name or word and find the value or definition.

Example: An English Language Dictionary with definitions of three terms:

  - Social Media: An internet-based platform used for people to form connections to each other and share things.
  - Ethics: Thinking systematically about what makes something morally right or wrong, or using ethical systems to analyze moral concerns in different situations.
  - Automation: Making a process or activity that can run on its own without needing a human to guide it.

The Dictionary data type allows programmers to combine several pieces of data by naming each piece. When we do this, the dictionary will have a number of names, and for each of those names a piece of information (called a “value” in this context).

Dictionary:
  Name 1: Value 1
  Name 2: Value 2
  Name 3: Value 3

So if we look at the example tweet, we can combine all the data in a dictionary.

Fig. 4.9 A tweet with photos of a cute puppy! (source)

Dictionary (with some of the data):
  user_name: “WeRateDogs®”
  user_handle: “@dog_rates”
  user_has_blue_checkmark: True
  tweet_text: “This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10”
  number_of_replies: 1533
  number_of_retweets: 26200
  number_of_likes: 197800

# Save some info about a tweet in a variable called tweet_info
tweet_info = {
    "user_name": "WeRateDogs®",
    "user_handle": "@dog_rates",
    "user_has_blue_checkmark": True,
    "tweet_text": "This is Woods. He’s here to help with the dishes. Specifically the pre-rinse, where he licks every item he can. 12/10",
    "number_of_replies": 1533,
    "number_of_retweets": 26200,
    "number_of_likes": 197800
}

Note: We’ll demonstrate dictionaries later in Chapter 5: History of Social Media, and Chapter 8: Data Mining.

Groups within Groups

We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination. So for example, I could make a list of Twitter users. Each Twitter user could be a dictionary with info about that user, and one piece of information it might have is a list of who that user is following.

List of users:
  User 1:
    Username: kylethayer (a String)
    Twitter handle: @kylemthayer (a String)
    Profile Picture: [TODO picture here] (an image)
    Follows: @SusanNotess, @UW, @UW_iSchool, @ajlunited, … (a list of Strings)
  User 2:
    Username: Dr Susan Notess (a String)
    Twitter handle: @SusanNotess (a String)
    Profile Picture: [TODO picture here] (an image)
    Follows: @kylemthayer, @histoftech, @j_kalla, @dbroockman, @qaxaawut, @shengokai, @laniwhatison (a list of Strings)
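As a minimal sketch of the “groups within groups” idea, the nested structure above could be written in Python like this (the data is copied from the example; the elided follows are left as an ellipsis):

# A list of users, where each user is a dictionary,
# and the "follows" entry is itself a list of strings
users = [
    {
        "username": "kylethayer",
        "twitter_handle": "@kylemthayer",
        "follows": ["@SusanNotess", "@UW", "@UW_iSchool", "@ajlunited"]  # …
    },
    {
        "username": "Dr Susan Notess",
        "twitter_handle": "@SusanNotess",
        "follows": ["@kylemthayer", "@histoftech", "@j_kalla",
                    "@dbroockman", "@qaxaawut", "@shengokai", "@laniwhatison"]
    }
]

# Look up who the second user follows
print(users[1]["follows"])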

      I like the dictionary analogy because it makes clear how data gets structured and labeled. By assigning names to values, dictionaries don’t just store information, they also shape how programmers interpret and access it. This made me realize that how data is organized can influence what questions are easy—or hard—to ask later.

    1. 3.4. Bots and Responsibility

As we think about responsibility in ethical scenarios on social media, the existence of bots causes some complications.

3.4.1. A Protesting Donkey?

To get an idea of the type of complications we run into, let’s look at the use of donkeys in protests in Oman:

  “public expressions of discontent in the form of occasional student demonstrations, anonymous leaflets, and other rather creative forms of public communication. Only in Oman has the occasional donkey…been used as a mobile billboard to express anti-regime sentiments. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.”

From Kings and People: Information and Authority in Oman, Qatar, and the Persian Gulf by Dale F. Eickelman1

In this example, some clever protesters have made a donkey perform the act of protest: walking through the streets displaying a political message. But, since the donkey does not understand the act of protest it is performing, it can’t be rightly punished for protesting. The protesters have managed to separate the intention of protest (the political message inscribed on the donkey) and the act of protest (the donkey wandering through the streets). This allows the protesters to remain anonymous and the donkey unaware of its political mission.

3.4.2. Bots and responsibility

Bots present a similar disconnect between intentions and actions. Bot programs are written by one or more people, potentially all with different intentions, and they are run by other people, or sometimes scheduled by people to be run by computers. This means we can analyze the ethics of the action of the bot, as well as the intentions of the various people involved, though those all might be disconnected.

3.4.3. Reflection questions

  - How are people’s expectations different for a bot and a “normal” user?
  - Choose an example social media bot (find one on your own or look at Examples of Bots (or apps)). What does this bot do that a normal person wouldn’t be able to, or wouldn’t be able to do as easily? Who is in charge of creating and running this bot? Does the fact that it is a bot change how you feel about its actions?
  - Why do you think social media platforms allow bots to operate?
  - Why would users want to be able to make bots?
  - How does allowing bots influence social media sites’ profitability?

1 We haven’t been able to get the original chapter to load to see if it indeed says that, but I found it quoted here and here. We also don’t know if this is common or representative of protests in Oman, nor that we fully understand the cultural importance of what is happening in this story. Still, we are using it at least as a thought experiment.

      I found the donkey protest example helpful for understanding how responsibility can be separated from action. Just like the donkey does not understand the protest it carries, bots can perform actions without intention or awareness. This makes it harder to assign responsibility, since the people who design, deploy, or benefit from a bot may all have different roles and intentions.

    1. 3.1. Definition of a bot

There are several ways computer programs are involved with social media. One of them is a “bot,” a computer program that acts through a social media account. There are other ways of programming with social media that we won’t consider bots (and we will cover these at various points as well):

  - The social media platform itself is run with computer programs, such as recommendation algorithms (chapter 12).
  - Various groups want to gather data from social media, such as advertisers and scientists. This data is gathered and analyzed with computer programs, which we will not consider bots, but will cover later, such as in Chapter 8: Data Mining.

Bots, on the other hand, will do actions through social media accounts and can appear to be like any other user. The bot might be the only thing posting to the account, or human users might sometimes use a bot to post for them. Note that sometimes people use “bots” to mean inauthentically run accounts, such as accounts run by actual humans who are paid to post things like advertisements or political content. We will not consider those to be bots, since they aren’t run by a computer. Though we might consider these to be run by “human computers” who are following the instructions given to them, such as in a click farm:

Fig. 3.1 A photo that is likely from a click-farm, where a human computer is paid to do actions through multiple accounts, such as like a post or rate an app. For our purposes here, we consider this a type of automation, but we are not considering this a “bot,” since it is not using (electrical) computer programming.
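As a minimal sketch of “a computer program that acts through a social media account,” here is a tiny Reddit bot using the praw library (the credentials are placeholders you would fill in with your own; r/test is a subreddit meant for experiments):

import praw

# Log in to a Reddit account from code (placeholder credentials)
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_BOT_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="example bot for learning purposes",
)

# The bot acts through the account: it makes a post, like any user could
reddit.subreddit("test").submit(title="Hello!", selftext="I am a bot.")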

      This section helped clarify that not all automation on social media counts as a bot. I found it especially useful that the definition focuses on whether the account is operated by computer code rather than by humans, even if those humans behave mechanically, like in click farms. This distinction makes it easier to think more precisely about responsibility and accountability when automation affects online spaces.

    1. 9.3. Additional Privacy Violations

Besides hacking, there are other forms of privacy violations, such as:

  - Unclear Privacy Rules: Sometimes privacy rules aren’t made clear to the people using a system. For example: If you send “private” messages on a work system, your boss might be able to read them. When Elon Musk purchased Twitter, he also was purchasing access to all Twitter Direct Messages.
  - Others Posting Without Permission: Someone may post something about another person without their permission. See in particular: The perils of ‘sharenting’: The parents who share too much.
  - Metadata: Sometimes the metadata that comes with content might violate someone’s privacy. For example, in 2012, when former tech CEO John McAfee was a suspect in a murder in Belize, he hid out in secret. But when Vice magazine wrote an article about him, the photos in the story contained metadata with his exact location in Guatemala.
  - Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them.
  - Inferred Data: Sometimes information that doesn’t directly exist can be inferred through data mining (as we saw last chapter), and the creation of that new information could be a privacy violation. This includes the creation of Shadow Profiles, which are information about the user that the user didn’t provide or consent to.
  - Non-User Information: Social media sites might collect information about people who don’t have accounts, like how Facebook does.
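To make the metadata example concrete, here is a minimal sketch of reading the EXIF metadata embedded in a photo, assuming the Pillow imaging library is installed and a file named "photo.jpg" exists (the file name is ours):

from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("photo.jpg")
exif = image.getexif()

# Print each metadata tag by its human-readable name.
# Cameras and phones often include a GPSInfo tag recording
# the exact location where the photo was taken.
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)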

      This section made me realize that privacy violations don’t always involve hacking or illegal access. Even data that seems harmless—like metadata or anonymized datasets—can still expose people in ways they never agreed to. I was especially surprised by how companies can infer new information or create shadow profiles about both users and non-users, which shows how limited individual control over personal data really is.

    1. 9.2. Security

While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

For example, the proper security practice for storing user passwords is to store only a scrambled (“hashed”) version of each password, combined with a unique random value (a “salt”) for each individual password. This way the database can only confirm that a password was the right one, but it can’t independently look up what the password is or even tell if two people used the same password. Therefore, if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

But companies don’t always follow that proper practice. For example, Facebook stored millions of Instagram passwords in plain text, meaning the passwords weren’t protected at all and anyone with access to the database could simply read everyone’s passwords. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users.

From a security perspective there are many risks that a company faces, such as:

  - Employees at the company misusing their access, like Facebook employees using their database permissions to stalk women.
  - Hackers finding a vulnerability and inserting, modifying, or downloading information. For example: hackers stealing the names, Social Security numbers, and birthdates of 143 million Americans from Equifax; hackers posting publicly the phone numbers, names, locations, and some email addresses of 530 million Facebook users, or about 7% of all people on Earth.

Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:

  - Password reuse attacks, where if they find out your password from one site, they try that password on many other sites.
  - Hackers tricking a computer into thinking they are another site, for example: the US NSA impersonated Google.
  - Social engineering, where they try to gain access to information or locations by tricking people. For example: phishing attacks, where they make a fake version of a website or app and try to get you to enter your information or password into it. Some people have made malicious QR codes to take you to a phishing site. Many of the actions done by the con-man Frank Abagnale, which were portrayed in the movie Catch Me If You Can, were social engineering.

One of the things you can do as an individual to better protect yourself against hacking is to enable 2-factor authentication on your accounts.
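A minimal sketch of that salted-hashing practice using only Python's standard library; the iteration count is a reasonable modern choice, not something specified in this chapter:

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)  # unique random salt for each password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest    # store these; the password itself is never stored

def check_password(password, salt, digest):
    # Re-run the same slow hash and compare; an attacker with the database
    # still has to brute-force guesses one slow hash at a time
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(guess, digest)

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))  # True
print(check_password("wrong!!", salt, digest))  # False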

      This section helped me realize that security failures are often not just technical problems, but also human and organizational ones. Even when proper security practices are well known, companies still choose convenience or cost-saving over protecting users’ data. What stood out to me most was how easily individuals can become targets through things like password reuse or phishing, which makes personal security practices like two-factor authentication feel necessary rather than optional.

    1. 8.2. Data From the Reddit API

We’ve been accessing Reddit through Python and the “PRAW” code library. The praw code library works by sending requests across the internet to Reddit, using what is called an “application programming interface,” or API for short. APIs have a set of rules for what requests you can make, what happens when you make the request, and what information you can get back.

If you are interested in learning more about what you can do with praw and what information you can get back, you can look at the official praw library documentation. But be warned: it is not organized in a friendly way for newcomers and takes some getting used to before you can figure out what the documentation pages are talking about. You can learn a little more by clicking on the praw models and finding a list of the types of data for each of the models, and a list of functions (i.e., actions) you can do with them. You can also look up information on the data that you can get from the Reddit API by looking at the Reddit API Documentation.

The Reddit API lets you access just some of the data that Reddit tracks, but Reddit and other social media platforms track much more than they let you have access to.
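A minimal sketch of poking at the data praw returns, assuming a configured reddit object as in earlier chapters; each attribute below is one of the pieces of data the API exposes for a submission:

# Fetch a few submissions and inspect some of their data fields
for submission in reddit.subreddit("news").hot(limit=3):
    print(submission.title)         # the title text
    print(submission.score)         # upvotes minus downvotes
    print(submission.num_comments)  # how many comments it has
    print(submission.created_utc)   # when it was posted (Unix time)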

      This section shows how powerful—and dangerous—data mining can be when patterns are taken out of context. The examples make it clear that just because data lines up does not mean it reveals a true cause, especially with spurious correlations. It highlights how easily data can be used to support misleading or biased conclusions, which is especially concerning when these inferences affect real people’s identities and social outcomes.

    1. Media Data

Social media platforms collect various types of data on their users. Some data is directly provided to the platform by the users. Platforms may ask users for information like:

  - email address
  - name
  - profile picture
  - interests
  - friends

Platforms also collect information on how users interact with the site. They might collect information like (they don’t necessarily collect all this, but they might):

  - when users are logged on and logged off
  - who users interact with
  - what users click on
  - what posts users pause over
  - where users are located
  - what users send in direct messages to each other

Online advertisers can see what pages their ads are being requested on, and track users across those sites. So, if an advertiser sees their ad is being displayed on an Amazon page for shoes, then the advertiser can start showing shoe ads to that same user when they go to another website.

Additionally, social media might collect information about non-users, such as when a user posts a picture of themselves with a friend who doesn’t have an account, or a user shares their phone contact list with a social media site, some of whom don’t have accounts (Facebook does this).

Social media platforms then use “data mining” to search through all this data to try to learn more about their users, find patterns of behavior, and in the end, make more money.

      This section clearly shows how much data social media platforms collect, often beyond what users knowingly provide. I was especially surprised by the idea that platforms can collect data about non-users through photos or contact lists. It makes it clear that participation in social media data systems isn’t always a choice, which raises serious concerns about privacy and consent.

  3. Jan 2026
    1. 2.3.5. Compilers and Programming Languages

History

In the early 1950s, Grace Hopper proposed a better way of programming a computer. She suggested creating a “programming language” based on English words with a “compiler” computer program that would turn the computer language code into binary computer instructions.

Photo of Grace Hopper c. 1960, at that time a Commander in the US Navy.

When Hopper’s ideas were mostly ignored, she proceeded to create her own compiler and later helped design some of the most important and influential early programming languages and compilers.

The new set-up for programming

So, thanks to Grace Hopper, we now have a new set-up for computer programming, which is what programmers still use today: When someone wants a computer to perform a task (that hasn’t already been programmed), a human programmer will act as a translator to translate that task into a programming language. Next, a compiler (or interpreter) program will translate the programming language code into the binary code that the computer runs. In this set-up, the programming language acts as an intermediate language the way that French did in my earlier analogy.

In this set-up, a programmer’s basic task is to do these three things:

  - Given a problem, break it down into steps for a computer
  - Write those steps down in a programming language
  - Run the compiler or interpreter, so the computer program can run on the computer

Programming languages

Programming languages (e.g., Python, R, Java) are specially designed languages that attempt to split the difference between how a computer thinks and communicates and how people think and communicate. There are many programming languages, with different specializations and trade-offs. In this book, we will use Python, which is commonly used in data science tasks, and has support for writing programs that work with Reddit.

Compilers / Interpreters

Compilers are special programs that translate code written in a programming language into the binary 0s and 1s that a computer runs. There are two varieties of compilers:

  - Standard compiler: takes a whole computer program and turns it all into binary so it can be run later
  - Interpreter: turns the computer language code into binary as it is running the program

Python uses an interpreter, so when you run a Python program, the interpreter translates the Python code into binary while it’s running it.

Programming in this book

Throughout the rest of this book, we will take ideas for programs written in English and translate them into Python code, and we will look at Python code and translate it back into English descriptions of what the code does. The Python Interpreter will then translate this code into binary instructions, which the computer will then run. Next, let’s look at an example computer program that posts one tweet.
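You can glimpse the translate-then-run step from inside Python itself; this is a minimal sketch using the built-in compile() and exec() functions (note that Python actually translates source into an intermediate "bytecode" that its interpreter runs, rather than directly into binary machine instructions):

# Step 1: a task written down as Python source code (just text here)
source = "print('Hello from a compiled program!')"

# Step 2: translate the source text into runnable bytecode
bytecode = compile(source, "<example>", "exec")

# Step 3: have the interpreter run the translated code
exec(bytecode)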

      Grace Hopper’s work shows how programming languages and compilers make computers more accessible to humans by acting as a bridge between human language and machine code. By introducing higher-level languages and compilers, she shifted programming from thinking only in binary to thinking in structured steps, which made software development more flexible and powerful. This structure also highlights that programmers play a key role in translating human intent into actions computers can execute.

    1. 1.2. Kumail Nanjiani’s Reflections on Ethics in Tech

Kumail Nanjiani was a star of the Silicon Valley TV show, which was about the tech industry. He posted these reflections on ethics in tech on Twitter (@kumailn) on November 1, 2017:

  As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech. Often we’ll see tech that is scary. I don’t mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues. And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech. They don’t even have a pat rehearsed answer. They are shocked at being asked. Which means nobody is asking those questions. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguards will develop.” But tech is moving so fast. That there is no way humanity or laws can keep up. We don’t even know how to deal with open death threats online. Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news. You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end.

  Kumail Nanjiani

1.2.1. Reflection questions:

  - What do you think is the responsibility of tech workers to think through the ethical implications of what they are making?
  - Why do you think the people who Kumail talked with didn’t have answers to his questions?

      I think tech workers have a responsibility to consider the ethical implications of what they create, because technology can shape behavior, privacy, and power in ways that are difficult to reverse. As Kumail Nanjiani points out, once technology is released, it cannot simply be taken back, so ethical thinking should happen before harm occurs.

      I think the people Kumail spoke with lacked answers because ethical reflection is often not prioritized in tech culture. Many developers focus on whether something can be built rather than whether it should be built, and since these questions are rarely asked, they may not be prepared to address them.

    1. 7.6.3. Trolling and Nihilism

While trolling can be done for many reasons, some trolling communities take on a sort of nihilistic philosophy: it doesn’t matter if something is true or not, it doesn’t matter if people get hurt, the only thing that might matter is if you can provoke a reaction. We can see this nihilism show up in one of the versions of the self-contradictory “Rules of the Internet”:

  8. There are no real rules about posting …
  20. Nothing is to be taken seriously …
  42. Nothing is Sacred

Youtuber Innuendo Studios talks about the way arguments are made in a community like 4chan:

  You can’t know whether they mean what they say, or are only arguing as though they mean what they say. And entire debates may just be a single person stirring the pot [e.g., sockpuppets]. Such a community will naturally attract people who enjoy argument for its own sake, and will naturally trend toward the most extreme version of any opinion. In short, this is the free marketplace of ideas. No code of ethics, no social mores, no accountability. … It’s not that they’re lying, it’s that they just don’t care. […] When they make these kinds of arguments they legitimately do not care whether the words coming out of their mouths are true. If they cared, before they said something is true, they would look it up.

  The Alt-Right Playbook: The Card Says Moops by Innuendo Studios

While there is a nihilistic worldview where nothing matters, we can see how this plays out practically, which is that they tend to protect their group (normally white and male), and tend to be extremely hostile to any other group. They will express extreme misogyny (like we saw in the Rules of the Internet: “Rule 30. There are no girls on the internet. Rule 31. TITS or GTFO - the choice is yours”), and extreme racism (like an invented Nazi My Little Pony character).

Is this just hypocritical, or is it ethically wrong? It depends, of course, on what tools we use to evaluate this kind of trolling. If the trolls claim to be nihilists about ethics, or indeed if they are egoists, then they would argue that this doesn’t matter and that there’s no normative basis for objecting to the disruption and harm caused by their trolling. But on just about any other ethical approach, there are one or more reasons available for objecting to the disruptions and harm caused by these trolls! If the only way to get a moral pass on this type of trolling is to choose an ethical framework that tells you harming others doesn’t matter, then it looks like this nihilist viewpoint isn’t deployed in good faith1. Rather, with any serious (i.e., non-avoidant) moral framework, this type of trolling is ethically wrong for one or more reasons (though how we explain it is wrong depends on the specific framework).

      This section helped me think about trolling in a much more nuanced way, especially the idea that disruption itself isn’t automatically good or bad. I found the discussion about group formation and norm enforcement really useful, because it explains why trolling can feel threatening—it challenges the patterns and signals that groups rely on to define who belongs. The comparison between trolling, protest, and revolution also stood out to me, since it shows how moral judgment often depends on whether we see the existing social order as legitimate. Overall, this section made it clear that evaluating trolling ethically requires looking beyond intent or humor and examining what is being disrupted and who is harmed or protected by that disruption.

    1. 7.2. Origins of trolling# While the term “trolling” in the sense we are talking about in this chapter comes out of internet culture, the type of actions that we now call trolling have been happening as far back as we have historical records. 7.2.1. Pre-internet trolling# Before the internet, there were many activities that we would probably now call “trolling”, such as: Hazing (causing difficulty or suffering for people who are new to a group); Satire (e.g., A Modest Proposal), which takes a known form but does something unexpected or disruptive with it; and practical jokes / pranks. A famous example of the latter is a 1957 April Fool’s Day hoax video broadcast by the BBC claiming to show how spaghetti noodles are harvested from trees. Additionally, the enjoyment of causing others pain or distress (“lulz”) has also been part of the human experience for millennia: “Boys throw stones at frogs in fun, but the frogs do not die in fun, but in earnest.” Bion of Borysthenes (Greece ~300 BCE) Additionally, inauthentic arguments have long been observed, and were memorably explored by Jean-Paul Sartre as “Bad Faith”. “Bad faith” here means pretending to hold views or feelings, while not actually holding them (this may be intentional, or it may be through self-deception). Sartre particularly observed this in arguments made by antisemites while he lived in Nazi-controlled Paris: “Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.” Jean-Paul Sartre, 1945 CE, Paris, France 7.2.2. Origins of Internet Trolling# We can trace Internet trolling to early social media in the 1980s and 1990s, particularly in early online message boards and in early online video games. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from. One set of early Internet-based video games was Multi-User Dungeons (MUDs), where you were given a text description of your location and could type where to go (North, South, East, West), and the game would respond with a text description of where you arrived. In these games, you would come across other players and could type messages or commands to attack them. These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGs).
In these MUDs, players developed activities that we now consider trolling, such as “Griefing”, where one player intentionally causes another player “grief” or distress (such as a powerful player finding a weak player and repeatedly killing the weak player the instant they respawn), and “Flaming”, where a player intentionally starts a hostile or offensive conversation. In the 2000s, trolling went from an activity done in some communities to the creation of communities that centered around trolling, such as 4chan (2003), Encyclopedia Dramatica (2004), and some forums on Reddit (2005). These trolling communities eventually started compiling half-joking sets of “Rules of the Internet” that outlined both their trolling philosophy: Rule 43. The more beautiful and pure a thing is - the more satisfying it is to corrupt it and their extreme misogyny: Rule 30. There are no girls on the internet Rule 31. TITS or GTFO - the choice is yours [meaning: if you claim to be a girl/woman, then either post a photo of your breasts, or get the fuck out] You can read more at knowyourmeme and wikipedia.

      This section helped me realize that trolling isn’t just an internet-specific problem, but a behavior that has existed long before online spaces. I found the connection to satire, hazing, and especially Sartre’s idea of “bad faith” really interesting, because it shows how trolling often isn’t about genuine disagreement but about disrupting or provoking others. Understanding these historical roots makes it clearer why trolling is so persistent online today, and why simply asking trolls to “argue rationally” often doesn’t work.
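      To get a feel for the MUD mechanics described in the excerpt above, here is a minimal sketch of a text-based movement loop in Python. This is not code from the book; the room names and layout are invented for illustration, and real MUDs were multiplayer servers with far more commands.

      # A minimal, single-player sketch of MUD-style movement (illustrative only;
      # room names and map are made up). Each room maps a direction to the room
      # that direction leads to.
      rooms = {
          "cave entrance": {"north": "dark tunnel"},
          "dark tunnel": {"south": "cave entrance", "east": "underground lake"},
          "underground lake": {"west": "dark tunnel"},
      }

      location = "cave entrance"
      while True:
          print("You are at the " + location + ".")
          exits = rooms[location]
          print("Exits: " + ", ".join(exits))
          command = input("> ").strip().lower()
          if command == "quit":
              break
          elif command in exits:
              location = exits[command]  # move to the connected room
          else:
              print("You can't go that way.")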

    1. 6.5. Parasocial Relationships# Another phenomenon related to authenticity which is common on social media is the parasocial relationship. Parasocial relationships are when a viewer or follower of a public figure (that is, a celebrity) feels like they know the public figure, and may even feel a sort of friendship with them, but the public figure doesn’t know the viewer at all. Parasocial relationships are not a new phenomenon, but social media has increased our ability to form both sides of these bonds. As comedian Bo Burnham put it: “This awful D-list celebrity pressure I had experienced onstage has now been democratized.” Learn more about parasocial relationships: StrucciMovies’ Fake Friends YouTube series and Sarah Z’s How Fans Treat Creators.

      The example of Mr. Rogers shows that parasocial relationships are not automatically unethical or inauthentic. What seems important here is that he tried to clearly define the limits of the relationship, such as calling viewers “television friends” and explaining that visits were not possible. This transparency helped make the parasocial relationship feel more authentic, even though it was not a real two-way friendship.

    1. 6.1. Authenticity# Early in the days of YouTube, one YouTube channel (lonelygirl15) started to release vlogs (video web logs) consisting of a girl in her room giving updates on the mundane dramas of her life. But as the channel continued posting videos and gaining popularity, viewers started to question if the events being told in the vlogs were true stories, or if they were fictional. Eventually, users discovered that it was a fictional show, and the girl giving the updates was an actress. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      The lonelygirl15 example shows why authenticity matters so much on social media. What upset people was not that the story was fictional, but that the way the connection was presented did not match reality. This makes me think that authenticity is less about whether something is “real” or “fake,” and more about whether audiences clearly understand what kind of relationship or signal they are engaging with.

    1. 5.7. Reflection Activities: Actions on Social Media Designs# 5.7.1. Comparing social media actions# Open two social media sites and choose equivalent views on each (e.g., a list of posts, an individual post, an author page, etc.). List what actions are immediately available. Then explore and see what actions are available after one additional action (e.g., opening a menu), then what actions are two steps away. What do you notice about the similarities and differences in these sites? 5.7.2. Design a social media site# Now it’s your turn to try designing a social media site. Decide on a type of social media site (e.g., a video site like youtube or tiktok, or a dating site, etc.), and a particular view of that site (e.g., profile picture, post, comment, etc.). Draw a rough sketch of the view of the site, and then make a list of: What actions would you want available immediately? What actions would you want one or two steps away? What actions would you not allow users to do (e.g., there is no button anywhere that will let you delete someone else’s account)?

      This activity shows how design choices influence user behavior by making some actions more visible than others. By comparing different platforms, it becomes clear that actions like sharing or liking are often prioritized, while actions like reporting or privacy controls are placed further away. The sketch below shows one way to tally these step counts.
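      To make the “one or two steps away” idea concrete, here is a small Python sketch of one way to record the results of the comparison activity. This is not part of the book’s exercise; the actions and step counts are hypothetical examples for a made-up video site.

      # Hypothetical tally of how many extra steps each action takes to reach
      # from the main view of a (made-up) video site. 0 = visible immediately.
      actions = {
          "like": 0,
          "share": 0,
          "comment": 0,
          "not interested": 1,   # one menu deep
          "report": 2,           # behind a "..." menu, then a submenu
          "block author": 2,
      }

      # Group and print actions by how many steps away they are
      for steps in sorted(set(actions.values())):
          names = [name for name, s in actions.items() if s == steps]
          print(str(steps) + " step(s) away: " + ", ".join(names))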

    1. The first versions of internet-based social media started becoming popular in the late 1900s. The internet of those days is now called “Web 1.0.” The Web 1.0 internet had some features that make it stand out compared to later internet trends: If you wanted to make a profile to talk about yourself, or to show off your work, you had to create your own personal webpage, which others could visit. These pages had limited interaction, so you were more likely to load one thing at a time and look at a separate page for each post or piece of information. Communication platforms were generally separate from these profiles or personal web pages.

      Early Web 1.0 social media required much more technical effort from users, such as creating personal webpages. This likely limited participation to people with more technical knowledge and made online communities smaller and less diverse.