10 Matching Annotations
  1. Last 7 days
  2. social-media-ethics-automation.github.io
    1. Meg Miller and Ilaria Parogni. The Hidden Image Descriptions Making the Internet Accessible. The New York Times, February 2022. URL: https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html (visited on 2023-12-07).

      This New York Times article by Meg Miller and Ilaria Parogni details "alt text": pieces of digital text attached to images, typically read aloud by screen readers, that describe the image's contents. The article describes alt text as a useful tool for people with varying degrees of impaired vision, but notes that it is limited by the fact that most images posted online simply lack it, and by the fact that AI-generated alt text (while making descriptions more widely available) is often lacking in quality. Still, the article notes that immense progress is being made on this front.

    1. In how we’ve been talking about accessible design, the way we’ve been phrasing things has implied a separation between designers who make things, and the disabled people who things are made for.

      I think this goes back to what we were discussing earlier in the course, particularly on programmed bias in social media. In that earlier topic, we discussed how social media designers, mostly unconsciously, program their own biases into the platforms they build, particularly the algorithms, leading to prejudiced outcomes. Here, it's a similar situation: non-disabled designers create inaccessible social media sites because they are simply unlikely to consider accessibility.

    1. Bruce Schneier. Why 'Anonymous' Data Sometimes Isn't. Wired, December 2007. URL: https://www.wired.com/2007/12/why-anonymous-data-sometimes-isnt/ (visited on 2023-12-06).

      This Wired article explores how it is quite possible to identify, or "de-anonymize," someone using seemingly innocuous pieces of data, with a particular focus on movie preferences, but also Google searches, Amazon purchases, and more. On an emotional level, I personally found this a pretty terrifying article to read. Even as someone who is not the most concerned with online privacy (I don't have much to hide), it is quite scary that my identity can be triangulated via sparse pieces of online info, pieces of data I probably produce in the hundreds every day.
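      The triangulation the article describes can be illustrated with a toy sketch (not from the article; the dataset and field names here are invented): each "innocuous" attribute on its own matches many records, but combining just a few can narrow a dataset to a single person.

      ```python
      # Toy linkage-attack illustration: combining a few quasi-identifiers
      # shrinks the candidate pool rapidly. All data below is made up.
      records = [
          {"zip": "98105", "movie": "Heat", "rating": 5},
          {"zip": "98105", "movie": "Heat", "rating": 4},
          {"zip": "98105", "movie": "Alien", "rating": 5},
      ]

      def candidates(clues):
          """Count records consistent with everything the attacker knows."""
          return sum(all(r.get(k) == v for k, v in clues.items()) for r in records)

      # Zip code alone matches everyone in this tiny dataset...
      print(candidates({"zip": "98105"}))  # 3
      # ...but adding a single movie rating pins down one record.
      print(candidates({"zip": "98105", "movie": "Heat", "rating": 5}))  # 1
      ```

      The point is not the specific fields but the multiplication: each added attribute divides the anonymity set, which is why "anonymized" datasets with many columns are so fragile.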

    1. What incentives do social media companies have to protect privacy?

      Probably the biggest one is customer trust. Even if security breaches affect only a small portion of the user base, the fear they instill can lead people to abandon the site in search of one that offers better privacy. The best way to mitigate this is by offering good privacy and security in the first place, building customer trust in the site.

      I would note, however, that given the highly consolidated nature of social media, many companies don't feel this incentive, as they have no true competitors. The incentive is also mitigated by the fact that longtime users are often quite inelastic in the sites they use: they have built large webs of friends and followers on their favorite sites, which leads them to stay even when better alternatives arise.

  3. Apr 2026
  4. social-media-ethics-automation.github.io social-media-ethics-automation.github.io
    1. Tom Knowles. I’m so sorry, says inventor of endless online scrolling. The Times, April 2019. URL: https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk (visited on 2023-11-24).

      In this article, Tom Knowles writes on the words of Aza Raskin, the inventor of the social media "infinite scroll": it details Raskin's regret that the concept he pioneered has worsened social media addiction and its associated negative externalities (the article notes depression as being linked to excessive social media use). It also details some of Raskin's attempts to remedy his perceived mistake through his Center for Humane Technology, which advocates for social media companies to design sites in ways that lessen media addiction.

    1. This board emphasizes “free speech” and “no rules” (with exceptions for child sexual abuse material [CSAM] and some other illegal content). In

      I also believe that, on occasion, under extreme circumstances, 4chan will ban behavior or content that effectively cripples all other discussion on the site. I do not quite remember what it was, but there was a gimmick on 4chan that was being spammed by real users in every possible discussion board. It got to the point where nothing else could really be said, leading users to complain to mods and administrators until the gimmick was banned. I think that points to one of the follies of the idea of truly "free" speech online: there is speech that is perfectly legal but can, whether intentionally or not, shut out other speech in a variety of ways.

  5. social-media-ethics-automation.github.io
    1. Sarah Jeong. How to Make a Bot That Isn't Racist. Vice, March 2016. URL:

      Using Microsoft's then recently launched @TayandYou Twitter bot as a clear negative example, Sarah Jeong summarizes the practices of veteran bot designers who strive to make their bots behave ethically: these range from creating a blacklist of offensive words and phrases a bot can't say, to using an algorithm to try to ensure a bot cannot compose an offensive sentence, to making sure a bot cannot be mistaken for a human user.
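      The simplest of those techniques, the blocklist, can be sketched in a few lines. This is a hypothetical minimal version (the list entries and function name are mine, not from Jeong's article), and real bot designers layer far more sophisticated checks on top of it:

      ```python
      # Minimal blocklist filter: refuse to publish a candidate post if it
      # contains any forbidden word. Placeholder entries stand in for a real
      # curated list of slurs and offensive phrases.
      BLOCKLIST = {"badword1", "badword2"}

      def is_safe(post: str) -> bool:
          """Return True only if no blocklisted word appears in the post."""
          words = {w.strip(".,!?\"'").lower() for w in post.split()}
          return BLOCKLIST.isdisjoint(words)

      print(is_safe("Hello, world!"))          # True
      print(is_safe("this contains badword1"))  # False
      ```

      A filter like this is easy to evade (misspellings, spacing tricks), which is why the article's other measures, like checking whole sentences algorithmically, matter.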

    1. Why would users want to be able to make bots?

      This is actually relevant for a platform I use a lot: Twitter. A while back, monetization was introduced for users who received a lot of engagement. This created a clear financial incentive to create bot accounts, as they could potentially generate revenue for their owners, and indeed the number of bots noticeably shot up after this change.

    1. When one of us ran the program, who made those posts (me? you? the bot?)?

      I do think that, with this specific instance of a very simple bot, I was ultimately the one who made the post, since the difference is essentially which buttons were pressed where. But I can see how, for more autonomous bots, the question gets more complicated: for example, if the bots were making their posts not from hard-coded inputs but from algorithm-based language models. I would argue there is always at least a particle of the original creator's will even in that case, but it certainly decreases as the bot grows more autonomous.

    1. These traits are often performed and achieved through ceremonies and rituals (including sacrificing to ancestors, music, and tea drinking), resulting in a harmonious society.

      I would like to add that Confucianism has a strong emphasis on orthodoxy and on learning "The Classics," i.e., texts deemed fundamental to Confucian doctrine. In Confucian societies of the past (not so much now), mastery of these classics was a way for elites to gain social and even political influence, and it was generally seen as a way to become a more virtuous person as well. So I think the text is slightly missing something when it doesn't include study of the classics as one of the ways Confucian practitioners sought to better themselves.