22 Matching Annotations
  1. Last 7 days
    1. While there are healthy ways of sharing difficult emotions and experiences (see the next section), when these difficult emotions and experiences are thrown at unsuspecting and unwilling audiences, that is called trauma dumping. Social media can make trauma dumping easier. For example, with parasocial relationships, you might feel like the celebrity is your friend who wants to hear your trauma. And with context collapse, where audiences are combined, how would you share your trauma with an appropriate audience and not an inappropriate one (e.g., if you re-post something and talk about how it reminds you of your trauma, are you dumping it on the original poster?).

      I have experienced being on the viewer's side of trauma dumping, whether in videos posted publicly for everyone to see and interact with, or in a comment section where no one asked to read about someone's traumatic past. In many ways I can see how this may be therapeutic or even comforting, knowing that there are people out there to listen and talk about your past with; however, there are also downsides. Most users scrolling on social media somewhat expect to come across negative, sad, or frustrating stories, but many may not feel comfortable being told about personal, traumatic events that happened to other people, as it could be triggering given their own past, or leave them wary, fearful, or deeply uncomfortable. There have been various times when, under a person's post about their recovery from an eating disorder, I've seen users in the comments describing all the ways they indulged in their own unhealthy habits, which could not only make other users and the poster uncomfortable, but even push them toward relapsing into those habits. Sharing one's traumatic story online carries so many potential consequences that I think it would be far more beneficial for everyone if people instead spoke to a therapist or to close friends (who are comfortable talking about it), rather than to strangers on the internet who never asked to read or listen to their story.

    1. Some researchers have found that people using social media may enter a dissociation state, where they lose track of time (like what happens when someone is reading a good book).

      Through research I've done in the past, I have found that this sort of "dissociation state" usually occurs because, when people use social media, they are mindlessly scrolling until they find their next dose of short-term dopamine. It's also what keeps users on the platform: a constant search for dopamine from a funny video or a cute dog keeps them scrolling until the hours slip by and it's suddenly 4 a.m.

  2. Feb 2026
    1. Though even modifying a recommendation algorithm has limits in what it can do, as social groups and human behavior may be able to overcome the recommendation algorithms influence.

      I have personally experienced this, especially on Twitter and Threads. These two apps prioritize engagement; it doesn't matter what type it is (positive, negative, or neutral), the algorithm takes it as a sign that you'd like to see more of that content and suggests posts similar to the one you interacted with. I think TikTok's algorithm is designed a bit more cleverly: whenever I leave a negative comment on a negative post I don't agree with, it doesn't show me more of that content, but rather other people also criticizing it. It's smarter about how it shows posts to its users; it seems able to read whether a user's response is positive or negative and adjust accordingly. A toy illustration of that difference is sketched below.
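      As a toy illustration (not any platform's real algorithm), here is a minimal sketch of the difference between ranking purely on raw engagement and ranking with a hypothetical sentiment-aware score; all the post numbers are made up:

      ```python
      # Toy sketch: raw engagement treats every interaction as "show me more of this,"
      # while a sentiment-aware score could discount interactions that signal dislike.

      def raw_engagement_score(post):
          # Every like, comment, and share counts the same, regardless of how the user felt.
          return post["likes"] + post["comments"] + post["shares"]

      def sentiment_aware_score(post):
          # Hypothetical variant: comments read as negative lower the score instead of raising it.
          positive = post["likes"] + post["shares"] + post["positive_comments"]
          return positive - 2 * post["negative_comments"]

      posts = [
          {"id": "outrage post", "likes": 50, "comments": 200, "shares": 5,
           "positive_comments": 20, "negative_comments": 180},
          {"id": "pleasant post", "likes": 120, "comments": 30, "shares": 40,
           "positive_comments": 25, "negative_comments": 5},
      ]

      print(max(posts, key=raw_engagement_score)["id"])   # "outrage post" wins on raw engagement
      print(max(posts, key=sentiment_aware_score)["id"])  # "pleasant post" wins once sentiment counts
      ```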

    1. Now, how these algorithms precisely work is hard to know, because social media sites keep these algorithms secret, probably for multiple reasons: they don’t want another social media site copying their hard work in coming up with an algorithm; they don’t want users to see the algorithm and then be able to complain about specific details; and they don’t want malicious users to see the algorithm and figure out how to best make their content go viral.

      Out of the large variety of social media apps, I believe TikTok is the most renowned for its algorithm and how it recommends videos to users. Just the other day I wanted to watch videos on silent, and the TikTok I came across was captioned with something relating to the audio that was playing; since I couldn't hear it, I went to the comments for clues. To my surprise, I found other users in the same situation, commenting things like "So we're all watching TikTok on mute rn?" or "I'm never on mute, how does TikTok know that everyone who's seeing this video is watching it on mute???" I found it pretty crazy and was a little freaked out, because such a niche scenario was enough for the app to put a video on my page that perfectly matched what I and hundreds of thousands of other people were doing. It wasn't as if I had said some words near my phone and the app decided to show me a related video; this was a situation where I didn't think my phone would have any idea I was on silent.

    1. Additionally, attempts to make disabled people (or people with other differences) act “normal” can be abusive, such as Applied Behavior Analysis (ABA) therapy for autistic people, or “Gay Conversion Therapy.”

      I think it's interesting to see what is and isn't depicted as a disability. As someone who doesn't believe that being part of the LGBT community makes you "disabled" (it's actually quite normal), seeing groups of people organize Gay Conversion Therapy makes me incredibly sad. It puts people in a position where they feel there is something inherently wrong with them that deserves changing. It's like putting people who wear glasses into a "conversion therapy" where they're told to simply change who they are. It's not as simple as changing your clothes; it's something that's a part of you and that makes you, you. It's striking to see the lengths people will go to in order to change someone who's "different" or not part of "the norm."

    1. If an airplane seat was designed with little leg room, assuming people’s legs wouldn’t be too long, then someone who is very tall, or who has difficulty bending their legs would have a disability in that situation.

      Although I wouldn't consider this particular example a "disability," I think it definitely affects a large group of people who don't fit into the "average height" category. As someone who's 5'5", I've never had to deal with this issue, since airplane seats are usually quite comfortable for me, but when I met my boyfriend I found out relatively quickly that booking flights is a big hassle for this reason. A flight of 3 hours or less might be bearable, but traveling around the world on a 15-hour flight is a nightmare for anyone taller than average. You either have to pay extra to choose a seat by the emergency exit, or keep your legs in an awkward position in the aisle for the entire flight. It's easy not to consider the minority of a population when designing something for consumers, but it's essential to keep them in mind, especially since lives could be at risk if the device or technology being designed is for health-related reasons.

    1. Hackers finding a vulnerability and inserting, modifying, or downloading information. For example:

      When reading this section I'm reminded of a pretty infamous case that happened not long ago. The "Tea" app was used by young women all across North America, and it was known as a place where women would post about men they'd been with and their bad, weird, or good attributes, essentially "spilling the tea." It was marketed as a "safe space" for women, where they could post anonymously and warn each other about potential catfishes, offenders, or generally bad men. However, in 2025 a hacker managed to leak users' personal information, reportedly including around 13,000 government ID images submitted for verification, along with selfies and private messages. This happened because the Tea app hadn't properly secured or encrypted the data, allowing the hacker to access virtually all of the users' information.
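      As a minimal sketch of what "properly encrypting" a sensitive field before storing it could look like, here is the `cryptography` package's Fernet recipe; the package choice and the example data are my own assumptions, not anything from the Tea app itself:

      ```python
      # Minimal sketch: store only ciphertext, so a leaked database dump is unreadable
      # without the key. Assumes `pip install cryptography`.
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()   # in practice, kept in a secrets manager, never next to the data
      fernet = Fernet(key)

      stored_value = fernet.encrypt(b"verification_photo_metadata: user 12345")
      print(stored_value)                  # unreadable ciphertext: this is all an attacker would see

      print(fernet.decrypt(stored_value))  # only code holding the key can recover the original bytes
      ```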

    2. While we have our concerns about the privacy of our information, we often share it with social media platforms under the understanding that they will hold that information securely. But social media companies often fail at keeping our information secure.

      I find the concept of security and privacy on social media incredibly intriguing. Users are almost always given the impression that simply having a username and password means their data is completely protected. However, it's relatively easy to get into an account if someone has the right knowledge and knows where to look. And anyone with back-end access to an app or website can go through the data inside it, so a single leak can expose private information you thought no one would ever have.

    1. Datasets can be poisoned unintentionally. For example, many scientists posted online surveys that people can get paid to take. Getting useful results depended on a wide range of people taking them. But when one TikToker’s video about taking them went viral, the surveys got filled out with mostly one narrow demographic, preventing many of the datasets from being used as intended. See more in

      I already knew that datasets could be poisoned both intentionally and unintentionally, but I wasn't aware that something like a TikTok video could have that big of an impact. It's interesting that one influencer's video and her audience, mainly women in their 20s, could effectively ruin much of the researchers' data. A rough illustration of how that kind of skew shows up is sketched below.
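      As a rough, made-up illustration of that kind of skew, here is how the demographic balance of a survey sample might shift once a single audience floods it (all numbers are hypothetical):

      ```python
      # Hypothetical illustration: if one demographic suddenly dominates the responses,
      # estimates meant to describe the general population no longer generalize.
      from collections import Counter

      before_viral = ["18-29"] * 30 + ["30-44"] * 35 + ["45-64"] * 25 + ["65+"] * 10
      after_viral = ["18-29"] * 180 + ["30-44"] * 35 + ["45-64"] * 25 + ["65+"] * 10

      def share(responses, group):
          return Counter(responses)[group] / len(responses)

      print(f"18-29 share before the video went viral: {share(before_viral, '18-29'):.0%}")  # 30%
      print(f"18-29 share after the video went viral:  {share(after_viral, '18-29'):.0%}")   # 72%
      ```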

    1. Social media sites then make their money by selling targeted advertising, meaning selling ads to specific groups of people with specific interests. So, for example, if you are selling spider stuffed animal toys, most people might not be interested, but if you could find the people who want those toys and only show your ads to them, your advertising campaign might be successful, and those users might be happy to find out about your stuffed animal toys. But targeting advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’

      As someone who's on social media pretty often, I can 100% attest that any interaction with an ad or sponsored post will cause your timeline to fill with other ads for similar products, if not the same one. Sometimes an ad does get my attention and I look at the item to check the price, and when I return to my home page I see the same post three or four times in the same hour. I also recognize that gambling advertisements are often shown to children, which I think should absolutely be monitored; children aren't able to grasp the concept of money at their age, and the idea that it's legal to show them these ads while they're using electronics is beyond me.
  3. Jan 2026
    1. Have you witnessed different responses to trolling? What happened in those cases? What do you think is the best way to deal with trolling?

      I actually have seen different responses to trolling that have worked. I recently saw a video on TikTok from a woman who had been receiving harassing comments from a user on her videos; when she realized he lived about three hours away, she went to his workplace and recorded herself confronting him. She told him that if he didn't apologize, she'd tell his wife about the Grindr account he had, and he ended up apologizing and saying he wouldn't do it again. One of my favorite quotes from her was roughly: "You don't know me, I don't know you, but I was the person you left that comment under. I just wanted you to know: you see how easy I found you?" An absolutely deserved consequence for his actions.

    1. In the Black Lives Matters protests of 2020, Dallas Police made an app where they asked people to upload videos of protesters doing anything illegal. In support of the protesters, K-pop fans swarmed the app and uploaded as many K-pop videos as they could eventually leading to the app crashing and becoming unusable, and thus protecting the protesters from this attempt at Police surveillance.

      I find it incredibly interesting that such a nice bit of internet history came from K-pop fans trolling in protest of the police by crashing the app; it's such a small gesture, but it was incredibly impactful, and their voices were heard and seen. I was also around to witness when TikTok users came together to reserve tickets to a Trump rally with no intention of showing up, in an effort to leave the event empty. Watching how powerful the internet can be when users come together is wild, but an important lesson.

    1. The way we present ourselves to others around us (our behavior, social role, etc.) is called our public persona. We also may change how we behave and speak depending on the situation or who we are around, which is called code-switching.

      As a person of color, I find myself code-switching relatively often. The culture I've been surrounded by growing up is incredibly different to others', and when I'm put in situations where I'm not talking with people who share a similar culture I tend to bottle-up and switch to a different version of myself. A version of myself that's more approachable and respectful, a bit more timid. I don't intentionally do so most of the time, it's sort of just turned into a habit for me, as I'm sure it has for other people of color.

    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      Although this incident happened years ago, it's still a repeating pattern we see on social media today. Often, when I come across videos on platforms such as TikTok or Instagram, people in the comments will be debating whether something was fabricated as "rage-bait." Rage-bait as a term is relatively new, but the concept is as old as time; whether in newspapers, storytelling, movies, or music, rage-bait is an effective method to garner criticism and, more importantly, engagement. The more people talk about and critique a work, the more it rises in popularity.

    1. One famous example of reducing friction was the invention of infinite scroll. When trying to view results from a search, or look through social media posts, you could only view a few at a time, and to see more you had to press a button to see the next “page” of results. This is how both Google search and Amazon search work at the time this is written. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      From the perspective of the social media companies, I can see why they'd add infinite scroll to their apps: it keeps users from leaving and lets them engage with more content, watch more ads, and so on. But as a user I find infinite scroll incredibly harmful, especially to children and to people struggling with mental illness. When you're stuck in a scrolling trance it can be hard to stop, and before you know it you've spent the entire day scrolling on TikTok. People can become addicted to their phones, and although the health effects of social media aren't that well studied, it's easy to tell that long-term overuse can negatively impact a person's health. A rough sketch of the paging-versus-infinite-feed difference follows below.
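      Here is a conceptual sketch of that difference (Python illustrating the idea, not any site's real implementation): with pages the user hits a stopping point and has to act to continue, while an infinite feed just keeps producing items.

      ```python
      def fetch_posts(page, page_size=10):
          # Hypothetical stand-in for a request to the server for one "page" of results.
          start = page * page_size
          return [f"post {i}" for i in range(start, start + page_size)]

      def infinite_feed():
          # There is no last page: nearing the bottom simply triggers the next fetch.
          page = 0
          while True:
              for post in fetch_posts(page):
                  yield post
              page += 1

      feed = infinite_feed()
      for _ in range(25):   # the "user" keeps scrolling; the feed never runs out
          print(next(feed))
      ```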

    2. Sometimes designers add friction to sites intentionally. For example, ads in mobile games make the “x” you need to press incredibly small and hard to press to make it harder to leave their ad:

      As a design major I've encountered so many app interfaces that intentionally guide the user somewhere just to push them toward a new feature or an ad. One of the biggest examples I can think of is Spotify, which recently moved the 'My Library' tab and added a 'Create' tab in its place. Replacing icons that users visit so often it becomes muscle memory tricks them into clicking on a feature they didn't mean to. Instagram is also notorious for doing this.

    1. Dates turn out to be one of the trickier data types to work with in practice. One of the main reasons for this is that what time or day it is depends on what time zone you are in. So, for example, when Twitter tells me that the tweet was posted on Feb 10, 2020, does it mean Feb 10 for me? Or for the person who posted it? Those might not be the same. Or if I want to see for a given account, how much they tweeted “yesterday,” what do I mean by “yesterday?” We might be in different time zones and have different start and end times for what we each call “yesterday.”

      I notice this sort of glitch sometimes when I'm using the app BeReal. Even though the notification goes off for everyone at the same moment (no matter what time zone you're in), the way content is displayed to you is based on where you currently are. For example, I have a friend who was visiting South Korea, and when the notification went off for us to take a photo through the app, it showed hers as 17 hours late. It's interesting that there's still a lack of good solutions here even though technology has advanced so fast. The sketch below shows how the same moment can even land on different calendar days.
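      A small sketch of the ambiguity the book describes, using Python's standard datetime and zoneinfo modules (Python 3.9+): the same instant is "Feb 10" in one place and still "Feb 9" in another, so "what day was this posted?" has no single answer without picking a time zone.

      ```python
      from datetime import datetime, timezone
      from zoneinfo import ZoneInfo

      # One single moment in time, recorded in UTC.
      posted = datetime(2020, 2, 10, 2, 30, tzinfo=timezone.utc)

      print(posted.astimezone(ZoneInfo("America/Los_Angeles")).date())  # 2020-02-09
      print(posted.astimezone(ZoneInfo("Asia/Seoul")).date())           # 2020-02-10
      ```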

    2. In addition to the main components of the images, sound, and video data, this information is often stored with metadata, such as: the time the image/sound/video was created; the location where it was taken; the type of camera or recording device used to create it; etc.

      I find it so intriguing that, simply by posting a photo or tweet, a platform can gather immense amounts of data about the user. This metadata is typically accessible to anyone who knows their way around a computer (the short sketch below shows how easily it can be read from an image file), and one can imagine how dangerous it can be in the wrong hands.
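      As a minimal sketch of how readable this metadata is, here is how image EXIF data can be listed with the Pillow library (assumes `pip install pillow`; "photo.jpg" is a made-up example file name):

      ```python
      from PIL import Image
      from PIL.ExifTags import TAGS

      image = Image.open("photo.jpg")
      exif = image.getexif()

      # Print human-readable tag names, e.g. DateTime, Model (the camera), and GPS info if present.
      for tag_id, value in exif.items():
          print(TAGS.get(tag_id, tag_id), value)
      ```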

    1. If you run the code above you will see that the program pauses as it displays the output above. These pauses may come in handy when posting tweets, to make it look like your bot is taking time to type in the text. You will get a chance to try that in the next practice section.

      I always wondered how programmers create these sorts of pauses, and it's cool to know it's done with simple commands like these! I also hadn't realized the examples use a 'display' command to show output on the screen; I had thought 'print' was the main way to do so. A small sketch of the pause is below.
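      A small sketch of the kind of pause the book describes, using Python's time.sleep; in a plain Python script, print shows the output, while display is the Jupyter/IPython helper the book's notebook examples use:

      ```python
      import time

      print("typing the tweet...")
      time.sleep(3)   # pause for 3 seconds, as if the bot were taking time to type
      print("This tweet was posted after a pause!")
      ```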

    1. We also would like to point out that there are fake bots as well, that is real people pretending their work is the result of a Bot. For example, TikTok user Curt Skelton posted a video claiming that he was actually an AI-generated / deepfake character:

      As someone majoring in a creative field, I find it both incredibly interesting and concerning just how advanced AI is getting, and where this rapid innovation will take us in just a few years. It's jarring to watch a video on TikTok or Instagram, fully believe it's real, and then feel the need to dissect it to check whether it actually is. I can't begin to imagine how the job market will change because of AI, but with innovation there (hopefully) comes opportunity.

    1. Something is right or wrong because God(s) said so. Euthyphro Dilemma: “Is the pious [action] loved by the gods because it is pious, or is it pious because it is loved by the gods?” (Socrates, 400s BCE Greece) If the gods love an action because it is morally good, then it is good because it follows some other ethics framework. If we can figure out which ethics framework the gods are using, then we can just apply that one ourselves without the gods. If, on the other hand, an action is morally good because it is loved by the gods, then it doesn’t matter whether it makes sense under any ethics framework, and it is pointless to use ethics frameworks.1

      As someone who grew up in a religious household, I often asked questions challenging this idea. It's interesting to think that almost anything could be reprimanded or praised by your god(s) or religious circle, as long as it was written into the guidelines of a scripture. I also think this way of reasoning is dangerous, because it opens up the possibility for people within a religion to misinterpret or maliciously translate certain texts to push harmful agendas onto a group of people, and the possibility of mistranslation is high since most of these texts were written hundreds of years ago.

    1. We also see this phrase used to say that things seen on social media are not authentic, but are manipulated, such as people only posting their good news and not bad news, or people using photo manipulation software to change how they look

      I think this is an interesting concept, since we're usually conditioned to think that the internet "isn't real" and that most things online are fabricated or exaggerated. However, just because this is common online doesn't mean that "real life" is a place where everyone is completely authentic; some people only want to share the good parts of their lives with friends or family while keeping anything that wouldn't be considered "good" to themselves, and vice versa. I think it's hasty to say that everything we see on social media "is not real," since there are plenty of real people behind the accounts. But because people can sit behind potentially anonymous accounts, it is much easier to fabricate stories or life experiences, or to center one's entire online presence around the one slice of life they want the internet to see, essentially creating an online persona that doesn't reflect who they are in real life.