22 Matching Annotations
  1. Last 7 days
    1. Doomscrolling

      I think doomscrolling can be really damaging to mental health, but I also think that recently it has become about more than just bad news. It's when you're on a social media platform, tell yourself "just 5 more minutes," and then end up scrolling for another hour or so without being able to stop. This loss of self-control can be really hard to grasp and can take a toll on one's mental health.

    1. feeling calmer but isolated, re-downloading them, feeling worse but connected again

      This is such a central comment, because it's so true. I feel like we all at some point feel the need to take a step back, overwhelmed by the media, while also feeling the need to stay connected. So do we have to choose between feeling alone and feeling overwhelmed? It's hard to figure out the best way to work it out.

  2. Feb 2026
    1. For example, author Roxane Gay has said, “Content going viral is overwhelming, intimidating, exciting, and downright scary.”

      I do partly agree with this: it must be overwhelming, but if you don't want people to watch your content, why do you post it? If someone puts themselves on the internet, they must consider the possible consequences. It feels like you always want what you aren't or can't have. It is like the TikTok says, a double-edged sword.

    1. [As I follow YouTube recommendations] It’s far more likely that my biases will be confirmed and possibly even enhanced than they are to be challenged and re-evaluated.

      I remember a real-world case where this resulted in a terrible outcome. Because YouTube kept recommending similar videos (these were from ISIS), the user eventually became a recruit, because of the echo chamber created by the algorithm. In that case, though, it was hard to put blame on the algorithm, given the argument that the user was always free to click away. But who carries responsibility in these cases? The toy sketch below shows how small such a tilt can start and still take over.
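
      To make the feedback loop concrete, here is a minimal toy simulation of my own (not YouTube's actual algorithm; the topic names and the squared weighting are assumptions purely for illustration) of a recommender that favors whatever the user already clicked:

      ```python
      # Toy model: each recommendation is drawn in favor of topics the user
      # already clicked, and every view then counts as one more click.
      import random

      # Start with a slight tilt: one extra click on a single topic.
      click_counts = {"cooking": 1, "news": 1, "extremism": 2}

      def recommend(counts):
          """Pick a topic, strongly weighted toward past clicks."""
          topics = list(counts)
          weights = [counts[t] ** 2 for t in topics]  # self-reinforcing
          return random.choices(topics, weights=weights)[0]

      for _ in range(1000):
          click_counts[recommend(click_counts)] += 1

      print(click_counts)  # the small initial tilt almost always snowballs
      ```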

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for.

      This chapter makes me think of a text I have read several times at my home university regarding this topic: ‘Do Artifacts Have Politics?’ (Winner, 1980). This text is very central in my education because it talks about how the way things are designed can leave people out. It happens everywhere, so even though the text is older and focuses mostly on the height of bridges, built low to keep lower-class people who didn't own cars from reaching certain places, it's still so relevant today. And I do think this is sort of a "wicked problem" (Buchanan, 1992), because when you make an active choice to make something accessible for one group, there is a big chance that you leave out other groups as well.

    2. The following tweet has a video of a soap dispenser that apparently was only designed to work for people with light-colored skin

      I feel like this shows that design is a very biased field. In Denmark we had a similar issue with the train doors between cabins: you have to put your hand up and wave for the door to open, but like the soap dispenser, it didn't react the same way to people with darker skin. It just highlights the gap in accessible design, and the fact that even though a designer might try to include everyone, someone will always be left out, whether that exclusion is unconscious or conscious. The sketch below shows how simply such a failure can arise.
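
      A hedged sketch of the mechanism (the threshold and reflectance numbers are entirely made up for illustration; real sensors are more complex): if a sensor just compares reflected light against a cutoff calibrated only on the light-skinned hands it was tested with, it silently fails for everyone below that cutoff.

      ```python
      # Minimal model of a reflectance-triggered sensor (soap dispenser,
      # door opener). All numbers are invented for illustration.

      THRESHOLD = 0.45  # tuned so it works for the hands it was tested on

      def sensor_triggers(reflected_light: float) -> bool:
          """Trigger (open door, dispense soap) if enough light bounces back."""
          return reflected_light >= THRESHOLD

      # Hypothetical readings: darker skin reflects less light back.
      print(sensor_triggers(0.70))  # lighter skin -> True, sensor works
      print(sensor_triggers(0.30))  # darker skin  -> False, fails silently
      ```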

    1. “right to be forgotten”

      The right to be forgotten in the EU is kind of "funny," because everyone can make use of it, e.g. you can call a company that has your information and demand to be deleted from their systems. As someone who worked in customer support at a big Danish company, I received my fair share of these requests. The thing is, most people make them because they feel they've been treated badly, and when you tell them it's a whole process and not something that can be done at the push of a button, they don't want to go through with it anymore. So even though the right exists, a lot of people opt out because exercising it takes time from both the company and the customer.

    2. such as if someone was sending us death threats.

      This is a particularly important note, in my opinion. There are cases where sex trafficking victims were found and the predator prosecuted because a social media giant had access to private messages and information. Of course this access isn't something platforms should throw around, but when social media is such a big part of our lives, it makes sense that it should help the police in investigations.

  3. Jan 2026
    1. Sometimes a dataset has so many problems that it is effectively poisoned or not feasible to work with.

      I also feel like it's important to mention the choices data cleaners make in creating finished datasets, because those choices really do affect the outcome. If a data cleaner removes incomplete parts of the data, or chooses to use them even though they aren't fully complete, both decisions influence the result, as the small sketch below shows. This really highlights that no dataset is unbiased, even though we tend to see hard data as the most unbiased thing there is.
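
      Here is a small sketch of that point (the survey and all numbers are invented; it assumes the pandas library): two defensible cleaning choices produce two different "facts" from the same raw data.

      ```python
      # Hypothetical survey where two respondents skipped the age field.
      import pandas as pd

      df = pd.DataFrame({
          "user": ["a", "b", "c", "d", "e"],
          "age":  [22, 35, None, 41, None],
      })

      # Choice 1: drop incomplete rows entirely.
      dropped = df.dropna(subset=["age"])
      print(f"dropping rows:     {(dropped['age'] > 30).mean():.0%} over 30")  # 67%

      # Choice 2: keep the rows, filling gaps with the average age.
      filled = df["age"].fillna(df["age"].mean())
      print(f"filling with mean: {(filled > 30).mean():.0%} over 30")          # 80%
      ```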

    1. Try this yourself and see what Google thinks of you!

      This kind of surprised me: I feel like I behave my age on the internet, but apparently I'm 35-44 years old. It also got a lot of other things wrong, like saying I'm a homeowner, in a relationship, and have a high income. I can't help but wonder what I did on the internet that makes the algorithm think that about me.

    1. Below is a fake pronunciation guide on youtube for “Hors d’oeuvres”:

      This example seems more like harmless fun, because it isn't hurting anyone. The worst that can happen is that you mispronounce a word, and then I hope someone will tell you. This is where I feel trolling should stay. I can see the argument that people are being misled and may be shamed for not knowing the word, especially someone less educated, like a child, but it's not actually keeping anyone out of a community, bullying them, or provoking outrage.

    1. In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with.

      This way of trolling the newbies seems like a power play, or some sort of online bullying that is excused as "trolling." I wonder if it's just an excuse to be mean, because if it happened in real life, I don't think people would find it as amusing.

    1. Though Dr. McLaughlin claimed a personal experience as a witness in a Title IX sexual harassment case, through the fake @Sciencing_Bi, she invented an experience of sexual harassment from a Harvard professor. This professor was being accused of sexual harassment by multiple real women, and these real women were very upset to find out that @Sciencing_Bi, who was trying to join them, was not a real person.

      I feel like this case really showcases the need for authenticity on social media, and how we as users need to be critical. But it also highlights the danger of becoming so critical that we don't believe anything anymore and lose our trust in the world and our fellow citizens in it. If a person can create a persona that is entirely inauthentic to reality, hurting so many people in the process, what then is the limit to inauthenticity online, and how can we limit it while still believing in the world?

    1. Trump Tweet Sources

      This way of receiving "news" from a President via a social media site (now Truth Social) is always very interesting to me, as a person from a country where news comes from a public news channel rather than privately owned businesses like in the US. I feel like news in the US is more unreliable, and I feel I have to fact-check everything in order to get news that is truthful and not slanted to support a left- or right-wing political point of view. So I feel like news from my country is far more authentic than the news here.

    1. Affordances are what a user interface lets you do

      I find affordances so important to think about when you're using a web page or an app. It's necessary to consider why developers design things the way they do, to get users to do a specific thing. For example, when you're viewing Instagram stories and an ad pops up, almost anywhere you tap on the screen ends up taking you to the advertiser's webpage.

    2. In 2006, Aza Raskin invented infinite scroll, where you can scroll to the bottom of the current results, and new results will get automatically filled in below. Most social media sites now use this, so you can then scroll forever and never hit an obstacle or friction as you endlessly look at social media posts. Aza Raskin regrets what infinite scroll has done to make it harder for users to break away from looking at social media sites.

      It's funny to read about something that was invented to remove friction and instead ended up making it harder for users to stop their infinite scrolling. It fuels an addiction, but it was created for good. Also, I think that now, with Instagram for example, the infinite scroll isn't even frictionless anymore, because of all the ads that are so hard not to click on while you're scrolling. The sketch below shows how simple the underlying mechanism is.
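
      The mechanics behind infinite scroll are usually just cursor-based pagination; here is a minimal sketch of my own (the function names and the in-memory feed are inventions for illustration, not any platform's real API):

      ```python
      # Pretend database of posts standing in for a real feed.
      POSTS = [f"post {i}" for i in range(1, 101)]

      def fetch_feed(cursor: int = 0, limit: int = 10):
          """Return the next batch of posts plus a cursor for the batch after."""
          batch = POSTS[cursor:cursor + limit]
          next_cursor = cursor + limit if cursor + limit < len(POSTS) else None
          return batch, next_cursor

      # Client-side loop: whenever the user nears the bottom, fetch more.
      cursor = 0
      while cursor is not None:
          batch, cursor = fetch_feed(cursor)
          print(batch[0], "...", batch[-1])
      # There is never a "last page" to land on; that missing stopping
      # point is exactly the friction Raskin regrets removing.
      ```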

    1. So, for example, if we made a form that someone needed to enter their address, we could assume everyone is in the United States and not have any country selection.

      This one is personal for me: as an exchange student, I encountered this problem many times while applying, because forms could not register my Danish address, which is written in a different format. This highlights the fact that all data and all artifacts have politics (see the sketch below). Even if you try to accommodate everyone, you are always forced to make choices that sometimes exclude people entirely. That could be blind or deaf people, but also gender, as mentioned in the next paragraph.
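
      A hedged sketch of the assumption in question (the regexes, function names, and example address are my own illustration): a validator hard-coded for US addresses rejects perfectly valid foreign ones, while letting a country field drive the rules avoids that.

      ```python
      import re

      def validate_us_only(zip_code: str) -> bool:
          """Assumes every user lives in the United States."""
          return bool(re.match(r"^\d{5}(-\d{4})?$", zip_code))

      def validate_with_country(postal_code: str, country: str) -> bool:
          """Minimal fix: let the country drive the rules (only two shown)."""
          rules = {
              "US": r"^\d{5}(-\d{4})?$",
              "DK": r"^\d{4}$",  # Danish postal codes are 4 digits
          }
          pattern = rules.get(country)
          return bool(pattern and re.match(pattern, postal_code))

      print(validate_us_only("2200"))             # False: Danish user locked out
      print(validate_with_country("2200", "DK"))  # True: Copenhagen N
      ```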

    1. Data points often give the appearance of being concrete and reliable, especially if they are numerical.

      I find this especially interesting, because it's true: when we see a number or a percentage, it seems like it must be correct. But it is so important to think about the data collection behind it. To use the bot example: when processing all the users in the system, did the data processors remove all incomplete data (say, if something was missing from a profile), or did they leave it in? I believe that even though data seems unbiased, there are always choices in how it's processed that affect how the outcome looks, just like in the data cleaning sketch above.

    1. Fake Bots

      This example scares me a lot personally, because how are you as a user expected to know what is real and what is fake? If I saw that first video, I would believe it, and I have been taught a lot of source criticism in school. It is becoming increasingly harder for the individual to see through fake content. I know that on TikTok there is a label if the creator has marked a video as AI-generated, but what if the creator wants to fool their audience, like here? I think this example poses a difficult question: how do we recognize what is real and what is fake?

    2. “Gender Pay Gap Bot”

      I really find this kind of bot interesting, because it is both good and bad (mostly good, in my opinion). It is bad for the companies because it outs them and tells people the real story of how they are not actually supporting women. And it is really good at highlighting the inequality that is so very present in the corporate world. I think this is a provocative bot, but I still find it necessary for exposing companies that publicly present themselves as supporting women without really doing so. A rough sketch of how such a bot might work follows below.
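
      This is only my guess at the structure, assuming the tweepy library and an invented pay gap lookup table (the real bot drew on UK government pay gap filings; the company names, numbers, and tweet id below are made up for illustration):

      ```python
      import tweepy

      # Hypothetical median pay gap data (percent less that women are paid).
      PAY_GAPS = {"ExampleCorp": 18.3, "DemoBankUK": 24.1}

      client = tweepy.Client(
          consumer_key="...", consumer_secret="...",      # credentials
          access_token="...", access_token_secret="...",  # elided
      )

      def reply_with_pay_gap(company: str, tweet_id: int) -> None:
          """Reply to a company's International Women's Day tweet with its gap."""
          gap = PAY_GAPS.get(company)
          if gap is None:
              return  # no filing on record: stay quiet rather than guess
          text = (f"In this organisation, women's median hourly pay is "
                  f"{gap}% lower than men's.")
          client.create_tweet(text=text, in_reply_to_tweet_id=tweet_id)

      # e.g. reply_with_pay_gap("ExampleCorp", 1234567890)  # invented id
      ```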

    1. There are many other varieties of social media sites, though hopefully we have at least covered a decent range of them.

      I found the discussion of blurred boundaries between public and private particularly compelling. The text highlights how users may experience social media as an intimate or personal space, while the platform simultaneously functions as a public arena with broader visibility and consequences. This tension helps explain why ethical norms on social media are often unclear or contested, since users may not share the same assumptions about what kind of space they are participating in.

    1. More on Ethics

      One ethical framework that could be added here is care ethics, which focuses on relationships, vulnerability, and responsibility rather than abstract rules or aggregate outcomes. In the context of social media automation, care ethics would draw attention to how bots affect trust, emotional labor, and users’ sense of being in a social space with other humans. This perspective could be especially relevant for platforms like Bluesky, where social interaction and community norms play an important role, and where even seemingly harmless automation might undermine relational trust.