70 Matching Annotations
  1. Dec 2022
    1. Alas, lawmakers are way behind the curve on this, demanding new "online safety" rules that require firms to break E2E and block third-party de-enshittification tools: https://www.openrightsgroup.org/blog/online-safety-made-dangerous/ The online free speech debate is stupid because it has all the wrong focuses:

       - Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
       - Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
       - Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
       - Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
       - Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair: https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd

      This list is particularly good.


      Proper regulation of end-to-end services would encourage the creation of filtering and other tools that would tend to benefit users rather than the rent-seeking of the corporations that own the pipes.

  2. Oct 2022
    1. Edgerly noted that disinformation spreads through two ways: The use of technology and human nature. Click-based advertising, news aggregation, the process of viral spreading and the ease of creating and altering websites are factors considered under technology. “Facebook and Google prioritize giving people what they ‘want’ to see; advertising revenue (are) based on clicks, not quality,” Edgerly said. She noted that people have the tendency to share news and website links without even reading its content, only its headline. According to her, this perpetuates a phenomenon of viral spreading or easy sharing. There is also the case of human nature involved, where people are “most likely to believe” information that supports their identities and viewpoints, Edgerly cited. “Vivid, emotional information grabs attention (and) leads to more responses (such as) likes, comments, shares. Negative information grabs more attention than (the) positive and is better remembered,” she said. Edgerly added that people tend to believe in information that they see on a regular basis and those shared by their immediate families and friends.

      Spreading misinformation and disinformation is really easy in this day and age because of how accessible information is and how much of it there is on the web. This is explained precisely by Edgerly. As noted in this part of the article, there is a business built on the spread of disinformation, particularly in our country. There are people who pay what we call online trolls to spread disinformation and capitalize on how “chronically online” Filipinos are, among many other factors (e.g., many Filipinos’ information illiteracy due to poverty and lack of educational attainment, and how easy it is to interact with content we see online, regardless of its authenticity). Disinformation also leads to misinformation through word of mouth. As Edgerly states in this article, “people tend to believe in information… shared by their immediate families and friends”; because it is human nature to trust information shared by loved ones, someone who is not information literate will not question newly received information. Lastly, it certainly does not help that social media algorithms nowadays rely on what users interact with: the more a user interacts with certain information, the more the platform will feed them that information. Not all social media sites have fact-checkers, so users can freely spread disinformation if they choose to.

  3. Aug 2022
  4. Apr 2022
  5. Mar 2022
    1. Prof Peter Hotez MD PhD. (2021, December 30). When the antivaccine disinformation crowd declares twisted martyrdom when bumped from social media or condemned publicly: They contributed to the tragic and needless loss of 200,000 unvaccinated Americans since June who believed their antiscience gibberish. They’re the aggressors [Tweet]. @PeterHotez. https://twitter.com/PeterHotez/status/1476393357006065670

  6. Feb 2022
  7. Jan 2022
  8. Dec 2021
  9. Nov 2021
  10. Oct 2021
  11. Aug 2021
  12. Jul 2021
    1. Early on, circa 2015, there was a while when every first-person writer who might once have written a Tumblr began writing a TinyLetter. At the time, the writer Lyz Lenz observed that newsletters seemed to create a new kind of safe space. A newsletter’s self-selecting audience was part of its appeal, especially for women writers who had experienced harassment elsewhere online.

      What sort of spaces do newsletters create based upon their modes of delivery? What makes them "safer" for marginalized groups? Is there a mitigation of algorithmic speed and reach that helps? Is it a more tacit building of community and conversation? How can these benefits be built into an IndieWeb space?

      How can a platform provide "reach" while simultaneously creating negative feedback for trolls and bad actors?

  13. Jun 2021
  14. May 2021
    1. Charlotte Jee recently wrote a lovely fictional intro to a piece on a “feminist Internet” that crystallized something I can’t quite believe I never saw before; if girls, women and non-binary people really got to choose where they spent their time online, we would never choose to be corralled into the hostile, dangerous spaces that endanger us and make us feel so, so bad. It’s obvious when you think about it. The current platforms are perfectly designed for misogyny and drive literally countless women from public life, or dissuade them from entering it. Online abuse, doxing, blue-tick dogpiling, pro-stalking and rape-enabling ‘features’ (like Strava broadcasting runners’ names and routes, or Slack’s recent direct-messaging fiasco) only happen because we are herded into a quasi-public sphere where we don’t make the rules and have literally nowhere else to go.

      A strong list of toxic behaviors that are meant to keep people from having a voice in the online commons. We definitely need to design these features out of our social software.

    1. Darren Dahly. (2021, February 24). @SciBeh One thought is that we generally don’t ‘press’ strangers or even colleagues in face to face conversations, and when we do, it’s usually perceived as pretty aggressive. Not sure why anyone would expect it to work better on twitter. Https://t.co/r94i22mP9Q [Tweet]. @statsepi. https://twitter.com/statsepi/status/1364482411803906048

  15. Mar 2021
  16. Feb 2021
  17. Jan 2021
  18. Nov 2020
  19. Oct 2020
  20. Sep 2020
  21. Aug 2020
  22. Jul 2020
    1. Bex, F., Lawrence, J., Snaith, M., & Reed, C. (2013). Implementing the Argument Web. Communications of the ACM, 56(10). Retrieved from http://arg-tech.org/people/chris/publications/2013/bexCACM.pdf

  23. Jun 2020
  24. May 2020
  25. Apr 2020
  26. Mar 2019
    1. This is a discussion of informal learning that focuses on ensuring that instances of informal learning are recognized. The discussion portrays it as happening through casual conversations, online discussions, or social media. The page is easy enough to read, though it does not try to be comprehensive. Rating: 2/5

  27. Dec 2016
    1. Ninety-five percent of 12- to 17-year-olds already go online on a regular basis. They use social networks, and create and contribute to websites. Our work is focused on taking full advantage of the kinds of tools and technologies that have transformed every other aspect of life to power up and accelerate students’ learning. We need to do things differently, not just better.

      Hypothes.is nicely bridges the worlds of social media and formal education.