17 Matching Annotations
  1. Last 7 days
    1. A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of:

      This reminds me of something quite silly, but I think it's worth mentioning. While this term was later adapted to refer to what we today call a meme, it circulated in media under this original definition first, which retroactively makes some of that media very comedic now that the word has been redefined. My favorite example of this is the 2013 game Metal Gear Rising: Revengeance, which has plot points revolving around how the only things that truly matter to a person's sense of self and decisions are memes, the ideas that their culture passes on to them. But with our modern definition, all the thoughtful speeches throughout the game become unintentionally very funny.

  2. social-media-ethics-automation.github.io
    1. BBC. YouTube aids flat earth conspiracy theorists, research suggests. BBC, February 2019. URL: https://www.bbc.com/news/technology-47279253 (visited on 2023-12-07).

      YouTube plays a big part in spreading the idea of the flat Earth to people online. YouTube is of course full of information but also misinformation, and its recommendation algorithm makes it all too easy to funnel users down a conspiracy theory rabbit hole. After interviewing people at flat earth conventions, the researchers found that many of them got the idea from YouTube videos. They propose that the only way to fight misinformation on YouTube is to make accurate, informative videos, which I would argue is happening a lot on YouTube today.

    1. Sometimes though, individuals are still blamed for systemic problems. For example, Elon Musk, who has the power to change Twitters recommendation algorithm, blames the users for the results: Fig. 11.4 A tweet [k5] from current Twitter owner Elon Musk blaming users for how the recommendation algorithm interprets their behavior

      This tweet by Elon is interesting because, and this could just be me, it feels like Elon is in favor of this "you get more accounts that you hate" mechanic on the site. That makes sense, since hate and malice are what get people to stay on sites longer, but it's still funny how the person with the most power in this situation is actively blaming the users for outcomes completely within his control.
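
      A minimal sketch of the mechanic being described, assuming a made-up interaction log and a ranker that only counts time spent. This is not Twitter's actual algorithm; it just shows why a system that measures only engagement can't distinguish "I love this" from "I hate this but can't look away."

      ```python
      # Toy engagement-only recommender (illustrative assumption, not Twitter's code).
      from collections import defaultdict

      def rank_topics(interaction_log):
          """interaction_log: list of (topic, seconds_spent) tuples."""
          engagement = defaultdict(float)
          for topic, seconds in interaction_log:
              engagement[topic] += seconds
          # Recommend whatever got the most time, regardless of *why* the user lingered.
          return sorted(engagement, key=engagement.get, reverse=True)

      log = [
          ("cooking", 12.0),
          ("rage-bait politics", 95.0),   # hate-reading the replies still counts
          ("cooking", 20.0),
          ("cat videos", 30.0),
      ]
      print(rank_topics(log))  # ['rage-bait politics', 'cooking', 'cat videos']
      ```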

  3. Oct 2025
  4. social-media-ethics-automation.github.io
    1. Meg Miller and Ilaria Parogni. The Hidden Image Descriptions Making the Internet Accessible. The New York Times, February 2022. URL: https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html (visited on 2023-12-07).

      Alt text for images, as described before, is often essential for certain people to use the internet. It is not always helpful, however: since there are no rules or regulations for writing alt text, it often just says "image" or "jpeg" or the like. The bigger issue is that many people don't include alt text with their images at all, either because it's too inconvenient or because it seems unnecessary. Recently, companies have tried using AI to generate alt text for images, with mixed results.

    1. For example, a building with stairs might also have ramps and elevators, so people with different mobility needs (e.g., people with wheelchairs, baby strollers, or luggage) can access each area. In the elevators the buttons might be at a height that both short and tall people can reach. The elevator buttons might have labels both drawn (for people who can see them) and in braille (for people who cannot), and the ground floor button may be marked with a star, so that even those who cannot read can at least choose the ground floor.

      Another good example of this that I like is curb cuts, those dips in sidewalks that slope down to the street. It's said that they were originally designed just to help people with wheelchairs get on and off the sidewalk. But, as it turns out, they helped not only people in wheelchairs but most people in general, like parents pushing kids in strollers, people transporting stuff with wheeled carts, skateboarders, and roller skaters. Whether this story is true or not, it has inspired the term "the curb-cut effect," where something designed to aid a disabled person ends up aiding everyone.

    1. Alannah Oleson. Beyond “Average” Users: Building Inclusive Design Skills with the CIDER Technique. Bits and Behavior, October 2022. URL: https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d (visited on 2023-12-06).

      The CIDER technique is a five-step analysis method for finding out how your technology can (or can't) serve diverse users whose needs go beyond what an "average" user needs or wants. CIDER stands for Critique, Imagine, Design, Expand, Repeat: critique the assumptions you may have about your design and the people who may use it; imagine how one of those assumptions could exclude a type of user; design possible changes so your design doesn't rely on that assumption; expand your knowledge by sharing ideas with your team and seeing theirs; and repeat the imagine and design steps for other assumptions.

  5. social-media-ethics-automation.github.io
    1. And Adobe encrypted their passwords improperly and then hackers leaked their password database of 153 million users

      I could just read the article, but I'll do that later. Basically, I've always been confused about what happens when hackers release the passwords of a bunch of users of a website. Not so much how they do it (I still don't know how), but more how they share that information. Do they just share the passwords without their respective users? In that case it wouldn't be absolutely terrible, since you still wouldn't know which password goes with which account, though a smart hacker could maybe use a bot to try each of the 153 million passwords on one account (it would still take ages, but at least you have a finite list of passwords to try). Or do hackers put all the passwords along with the usernames in a massive spreadsheet? That would make sense: you could just look up an account and break into it easily. But do they share this on public platforms like Reddit? Do they share it directly with each other? Do they post it on some sort of evil dark web forum? I'll find out, I guess.
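
      On the "encrypted improperly" part of the highlighted passage: the Adobe breach reportedly involved reversibly encrypted passwords without per-user salts, rather than salted one-way hashes. As a rough sketch (standard-library Python, not anything Adobe actually ran), here is what salted, deliberately slow hashing looks like; records stored this way can't simply be decrypted back into passwords, and two users with the same password don't even end up with the same stored value.

      ```python
      # Minimal sketch of salted password hashing (illustration only).
      import hashlib, hmac, os

      ITERATIONS = 600_000  # deliberately slow, to make guessing expensive

      def hash_password(password: str) -> tuple[bytes, bytes]:
          salt = os.urandom(16)  # unique random salt per user
          key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
          return salt, key       # store both; neither reveals the password

      def verify_password(password: str, salt: bytes, key: bytes) -> bool:
          candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
          return hmac.compare_digest(candidate, key)

      salt, key = hash_password("hunter2")
      print(verify_password("hunter2", salt, key))  # True
      print(verify_password("letmein", salt, key))  # False
      ```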

  6. social-media-ethics-automation.github.io
    1. Samantha Cole. People Are Spamming Kellogg’s Job Applications in Solidarity with Striking Workers. Vice, December 2021. URL: https://www.vice.com/en/article/v7dvy9/spamming-kelloggs-job-

      At the time of this article's publication, Kellogg had been trying to hire new workers to replace workers who had gone on strike for better wages and working conditions. To fight this, people from the antiwork subreddit had been spamming the job portal site. The union went on strike after the company refused to meet with it, so Kellogg tried to hire another 1,400 replacement employees. The members of antiwork then repeatedly sent Kellogg fake applications, either for completely made-up people in the cities Kellogg was hiring from or with resumes pulled from Google Images.

    1. You can try social media sites as well. Twitter’s ad profile is located here

      I did this with my Twitter profile and the results were interesting. The Twitter personalized ads profile is organized as a list of everything Twitter thinks you're interested in. I found that my list was massive and overly generous about what it thinks I like: it was filled with a not-insignificant number of things I have never even heard of that Twitter thought interested me. I'm not sure if this is intentional or not, because I'd think having a bunch of stuff I'm not interested in would make it harder to advertise things to me.

    1. Below is a fake pronunciation guide on youtube for “Hors d’oeuvres”: Note: you can find the real pronunciation guide here [g25], and for those who can’t listen to the video, there is an explanation in this footnote[1] In the youtube comments, some people played along and others celebrated or worried about who would get tricked

      This reminds me of the curious case of the popular YouTuber SiIvaGunner. SiIvaGunner has been on the internet since the early 2010s, and their content focuses on uploading high quality versions of various video game songs. If you look at their channel, you'd see just that: videos of video game songs labeled accordingly, or at least that's what it seems. If you actually watch any of these videos, you quickly realize that the songs are slightly, if not very, different from what you would expect. That is the crux of SiIvaGunner: they upload songs that seem to be accurate tracks from the game they're from, but the songs have been altered and remixed to reference, and sound like, another song entirely. This is technically trolling, but in a harmless and fun way, with people loving the altered songs and memes. That is, until the channel got banned by YouTube for "false thumbnails." The channel actually got banned multiple times, and each time the team made a new channel with a similar name (e.g., SilvaGunner, GiIvaSunner). The channel is mostly safe as of now with the workaround they came up with, where they give the songs seemingly plausible but made-up version labels, such as "Beta Mix" or "JP Version."

    1. The 1980s and 1990s also saw an emergence of more instant forms of communication with chat applications. Internet Relay Chat (IRC) [e7] lets people create “rooms” for different topics, and people could join those rooms and participate in real-time text conversations with the others in the room.

      Reading this reminds me a lot of modern day Discord, so you could definitely say that IRC was an ancestor of modern room-based chats like Discord and similar apps. Even the layout shown in this image is almost exactly how Discord is laid out now, with a series of "channels" with different conversations to switch between on the left, the main conversation for the current room in the middle (complete with the handle of whoever said something and when they said it), and the list of users on the right. If it ain't broke, don't fix it, I guess.
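
      For a sense of how bare-bones IRC is compared with Discord, here is a rough client sketch using plain Python sockets. The server, nickname, and channel below are assumptions for illustration, and real networks can be pickier about registration; the point is just that "rooms" are a few lines of text commands over a socket.

      ```python
      # Minimal IRC client sketch; server/nick/channel are illustrative assumptions.
      import socket

      SERVER, PORT = "irc.libera.chat", 6667
      NICK, CHANNEL = "demo_nick_12345", "##demo-room"

      sock = socket.create_connection((SERVER, PORT))

      def send(line: str) -> None:
          sock.sendall((line + "\r\n").encode())

      send(f"NICK {NICK}")                      # pick a nickname
      send(f"USER {NICK} 0 * :demo user")       # register with the server

      joined = False
      buffer = ""
      while True:
          data = sock.recv(4096)
          if not data:
              break
          buffer += data.decode(errors="replace")
          *lines, buffer = buffer.split("\r\n")
          for line in lines:
              print(line)
              if line.startswith("PING"):           # keep-alive: reply or get dropped
                  send("PONG " + line.split(" ", 1)[1])
              elif " 001 " in line and not joined:  # 001 = welcome, registration done
                  send(f"JOIN {CHANNEL}")           # a "room" is just one command away
                  send(f"PRIVMSG {CHANNEL} :hello from a 1980s protocol")
                  joined = True
      ```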

  7. social-media-ethics-automation.github.io
    1. Matt Binder. The majority of traffic from Elon Musk's X may have been fake during the Super Bowl, report suggests. February 2024. Section: Tech. URL: https://mashable.com/article/x-twitter-elon-musk-bots-fake-traffic (visited on 2024-03-31).

      According to a cybersecurity firm, about 75.85% of Twitter traffic during the 2024 Super Bowl may have come from fake bot accounts. At the time, even regular Twitter users could notice the increase in inauthentic content, and this Super Bowl report suggests that impression likely wasn't all wrong.

    1. In most cases, after the initial data representation is created, the computer runs a compression algorithm, which takes the image, sound, or video, and finds a way of storing it in much less computer memory, often losing some of the quality when doing so.

      This is something I kind of want to learn more about: how exactly these compression algorithms work to get this kind of output. Compressed images are always very consistent in how they look and in how they degrade, especially in how they often create these yellowish/greenish tones where there weren't any before (see the yellowish area below the lips in this image).
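
      A small experiment for poking at this, assuming the Pillow library and a placeholder local image file (neither comes from the text): re-saving the same picture at different JPEG quality settings makes the size/quality trade-off visible. The characteristic blocky, off-color patches come from JPEG approximating each 8x8 block of pixels and storing color at lower resolution than brightness.

      ```python
      # Rough lossy-compression experiment (Pillow; "photo.png" is a placeholder path).
      import os
      from PIL import Image

      img = Image.open("photo.png").convert("RGB")  # JPEG can't store transparency

      for quality in (95, 50, 10):
          out = f"photo_q{quality}.jpg"
          img.save(out, "JPEG", quality=quality)    # lower quality = coarser approximation
          print(f"quality={quality:3d} -> {os.path.getsize(out) / 1024:7.1f} KB")
      ```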

  8. social-media-ethics-automation.github.io
    1. Buy TikTok Followers. 2023. URL: https://www.socialwick.com (visited on 2023-12-02).

      This is a site where you can buy followers, not just for TikTok but for a multitude of platforms. They do say on their "why" page, however, that they do not use bot farms and that real people will be liking and following your account if you buy their services, so according to them, they are not a bot farm.

    1. In 2016, Microsoft launched a Twitter bot that was intended to learn to speak from other Twitter users and have conversations. Twitter users quickly started tweeting racist comments at Tay, which Tay learned from and started tweeting out within one day.

      This touches on an issue I've seen that affects bots and generative AI. Often, a bot or AI that is designed to learn and change based on the data fed into it can majorly backfire if that data is biased in some way, like being racist or antisemitic. I want to say a lot of this happens because these bots scrape info from the internet (where else?), but as we all know, people can say some pretty awful stuff on the internet thanks to anonymity and echo chamber communities, so if you're not careful your AI can easily be trained on exactly that data.
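
      As a toy illustration of the "the bot is only its training data" point (a word-level Markov chain, far simpler than anything like Tay, and entirely made up for this note): the generator below can only ever remix whatever text it was fed, polite or otherwise.

      ```python
      # Toy "learns from whatever you feed it" text bot: a word-level Markov chain.
      import random
      from collections import defaultdict

      def train(corpus_lines):
          model = defaultdict(list)
          for line in corpus_lines:
              words = line.split()
              for a, b in zip(words, words[1:]):
                  model[a].append(b)            # remember which word followed which
          return model

      def generate(model, start, length=8):
          word, out = start, [start]
          for _ in range(length):
              if word not in model:
                  break
              word = random.choice(model[word])
              out.append(word)
          return " ".join(out)

      polite_corpus = ["i really love talking with people online",
                       "people online love cats"]
      model = train(polite_corpus)
      print(generate(model, "people"))  # only polite remixes: that's all it has seen

      # Feed it hostile text instead and it will just as faithfully echo that,
      # because nothing in the model distinguishes good input from bad.
      ```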

  9. Sep 2025
    1. Distrust of abstract propositional claims

      This may be a little silly, but the way I interpret or summarize this ethical framework is that a person who practices American Indigenous ethics would not care for hypotheticals. To put it in an exaggerated way, if you were to ask such a person for their solution to the trolley problem, they might disregard the whole question on the grounds that it is based on a highly unlikely hypothetical event, which has little to do with actual reality.

    1. Some platforms are used for sharing text and pictures (e.g., Facebook, Twitter

      Ever since Elon Musk bought Twitter and renamed it to X (I will still be calling it Twitter going forward), Elon has been trying to market Twitter not just as a platform for sharing text and pictures like this passage says, but as an "everything app," as the unofficial slogan goes, based on tweets where Elon has said as much. This is the exact opposite of making a platform minimalist or for a specific group; this is social media maximalism, an attempt to make a social platform as broad and all-encompassing as possible. As someone who does in fact use Twitter often, I can safely say that Elon has not really delivered on that idea.