1,617 Matching Annotations
  1. Last 7 days
    1. One can find utility in asking questions of one's own note box, but why not also leverage a broader audience to ask questions of it as well?!

      One of the values of social media is that it allows you to rehearse ideas and potentially get useful feedback on individual ideas which you may be aggregating into larger works.

  2. Jan 2023
    1. “She is likely our earliest Black female ethnographic filmmaker,” says Strain, who also teaches documentary history at Wesleyan University.

      Link to Robert J. Flaherty

      Where does she sit with respect to Robert J. Flaherty and Nanook of the North (1922)? Would she have been aware of his work through Boas? How might her perspective be far more authentic for such a project given her context?

    1. Ryan Randall @ryanrandall@hcommons.social: Earnest but still solidifying #pkm take: The ever-rising popularity of personal knowledge management tools indexes the need for liberal arts approaches. Particularly, but not exclusively, in STEM education. When people widely reinvent the concept/practice of commonplace books without building on centuries of prior knowledge (currently institutionalized in fields like library & information studies, English, rhetoric & composition, or media & communication studies), that's not "innovation." Instead, we're seeing some unfortunate combination of lost knowledge, missed opportunities, and capitalism selectively forgetting in order to manufacture a market.

      https://hcommons.social/@ryanrandall/109677171177320098

    1. social media platform

      This technical jargon, in the context of Cohost.org, means "a website".

    1. is zettelkasten gamification of note-taking?

      reply to u/theinvertedform at https://www.reddit.com/r/Zettelkasten/comments/zkguan/is_zettelkasten_gamification_of_notetaking/

      Social media and "influencers" have certainly grabbed onto the idea and squeezed with both hands. Broadly, while talking about their own versions of rules, tips, tricks, and tools, they've missed the massive history of the broader techniques which have pervaded the humanities for over 500 years. When one looks more deeply at the broader cross section of writers, educators, philosophers, and academics who have used variations on the idea of maintaining notebooks or commonplace books, it becomes a relative no-brainer that it is a useful tool. I touch on some of the history as well as some of the recent commercialization here: https://boffosocko.com/2022/10/22/the-two-definitions-of-zettelkasten/.

    1. Books and Presentations Are Playlists, so let's create a NeoBook this way.

      https://wiki.rel8.dev/co-write_a_neobook

      A sequence of related index cards from a Luhmann-esque zettelkasten could be considered a playlist that comprises an article or a longer work like a book.

      Just as one can create a list of all the paths through a Choose Your Own Adventure book, one could do something similar with linked notes. Ward Cunningham has done something similar to this programmatically with the idea of a Markov monkey.
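As a rough sketch of the idea, one could enumerate every path through a small graph of linked notes with a short depth-first traversal. The note graph and its titles below are purely illustrative, not drawn from any actual zettelkasten:

```python
# Enumerate every path through a small graph of linked notes, much as one
# might list all routes through a Choose Your Own Adventure book.
notes = {
    "start": ["idea-a", "idea-b"],
    "idea-a": ["conclusion"],
    "idea-b": ["idea-a", "conclusion"],
    "conclusion": [],  # a note with no outgoing links ends a path
}

def all_paths(graph, node, path=()):
    """Return every path from `node` to a note with no outgoing links."""
    path = path + (node,)
    if not graph[node]:
        return [path]
    paths = []
    for nxt in graph[node]:
        paths.extend(all_paths(graph, nxt, path))
    return paths

for p in all_paths(notes, "start"):
    print(" -> ".join(p))
```

For an acyclic graph like this one the traversal terminates on its own; a real linked-note collection would likely contain cycles and need a visited-set check.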

    1. Results for the YouTube field experiment (study 7), showing the average percent increase in manipulation techniques recognized in the experimental (as compared to control) condition. Results are shown separately for items (headlines) 1 to 3 for the emotional language and false dichotomies videos, as well as the average scores for each video and the overall average across all six items. See Materials and Methods for the exact wording of each item (headline). Error bars show 95% confidence intervals.

    1. John B. Kelly highlighted this disparity in a memorable passage published in 1973:

      Distance, the filtering of news through so many intermediate channels, and the habitual tendency to discuss and interpret Middle Eastern politics in the political terminology of the West, have all contrived to impart a certain blandness to the reporting and analysis of Middle Eastern affairs in Western countries. ... To read, for instance, the extracts from the Cairo and Baghdad press and radio ... is to open a window upon a strange and desolate landscape, strewn with weird, amorphous shapes cryptically inscribed "imperialist plot," "Zionist crime," "Western exploitation," ... and "the revolution betrayed." Around and among these enigmatic structures, curious figures, like so many mythical beasts, caper and cavort - "enemies," "traitors," "stooges," "hyenas," "puppets," "lackeys," "feudalists," "gangsters," "tyrants," "criminals," "oppressors," "plotters" and "deviationists." ... It is all rather like a monstrous playing board for some grotesque and sinister game, in which the snakes are all hydras, the ladders have no rungs, and the dice are blank.

  3. Dec 2022
    1. [A]nalyze the content of 69,907 headlines produced by four major global media corporations during a minimum of eight consecutive months in 2014. In order to discover strategies that could be used to attract clicks, we extracted features from the text of the news headlines related to the sentiment polarity of the headline. We discovered that the sentiment of the headline is strongly related to the popularity of the news and also with the dynamics of the posted comments on that particular news
    1. "Queer people built the Fediverse," she said, adding that four of the five authors of the ActivityPub standard identify as queer. As a result, protections against undesired interaction are built into ActivityPub and the various front ends. Systems for blocking entire instances with a culture of trolling can save users the exhausting process of blocking one troll at a time. If a post includes a “summary” field, Mastodon uses that summary as a content warning.
    1. Investigating social structures through the use of networks or graphs:

      - Networked structures: usually called nodes (individual actors, people, or things within the network)
      - Connections between nodes: edges or links
      - Focus on relationships between actors in addition to the attributes of actors
      - Extensively used in mapping out social networks (Twitter, Facebook)
      - Examples: Palantir, Analyst Notebook, MISP, and Maltego
    1. Drawing from negativity bias theory, CFM, ICM, and arousal theory, this study characterizes the emotional responses of social media users and verifies how emotional factors affect the number of reposts of social media content after two natural disasters (one predictable and one unpredictable). In addition, results are presented from defining influential users as those with many followers and high activity, and then characterizing how they affect the number of reposts after natural disasters.
    1. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this “illusory truth effect” for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader’s political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem.
    1. Furthermore, our results add to the growing body of literature documenting—at least at this historical moment—the link between extreme right-wing ideology and misinformation [8, 14, 24] (although, of course, factors other than ideology are also associated with misinformation sharing, such as polarization [25] and inattention [17, 37]).

      Misinformation exposure and extreme right-wing ideology appear associated in this report. Others find that it is partisanship that predicts susceptibility.

    2. We also find evidence of “falsehood echo chambers”, where users that are more often exposed to misinformation are more likely to follow a similar set of accounts and share from a similar set of domains. These results are interesting in the context of evidence that political echo chambers are not prevalent, as typically imagined.
    3. And finally, at the individual level, we found that estimated ideological extremity was more strongly associated with following elites who made more false or inaccurate statements among users estimated to be conservatives compared to users estimated to be liberals. These results on political asymmetries are aligned with prior work on news-based misinformation sharing.

      This suggests that misinformation-sharing elites may influence whether their followers become more extreme. There is little incentive not to stoke outrage, as doing so improves engagement.

    4. Estimated ideological extremity is associated with higher elite misinformation-exposure scores for estimated conservatives more so than estimated liberals.

      Political ideology is estimated using accounts followed [10]. b Political ideology is estimated using domains shared [30] (red: conservative, blue: liberal). Source data are provided as a Source Data file.

      Estimated ideological extremity is associated with higher language toxicity and moral outrage scores for estimated conservatives more so than estimated liberals.

      The relationship between estimated political ideology and (a) language toxicity and (b) expressions of moral outrage. Extreme values are winsorized by 95% quantile for visualization purposes. Source data are provided as a Source Data file.

    5. In the co-share network, a cluster of websites shared more by conservatives is also shared more by users with higher misinformation exposure scores.

      Nodes represent website domains shared by at least 20 users in our dataset and edges are weighted based on common users who shared them. a Separate colors represent different clusters of websites determined using community-detection algorithms [29]. b The intensity of the color of each node shows the average misinformation-exposure score of users who shared the website domain (darker = higher PolitiFact score). c Nodes’ color represents the average estimated ideology of the users who shared the website domain (red: conservative, blue: liberal). d The intensity of the color of each node shows the average use of language toxicity by users who shared the website domain (darker = higher use of toxic language). e The intensity of the color of each node shows the average expression of moral outrage by users who shared the website domain (darker = higher expression of moral outrage). Nodes are positioned using directed-force layout on the weighted network.

    6. Exposure to elite misinformation is associated with the use of toxic language and moral outrage.

      Shown is the relationship between users’ misinformation-exposure scores and (a) the toxicity of the language used in their tweets, measured using the Google Jigsaw Perspective API [27], and (b) the extent to which their tweets involved expressions of moral outrage, measured using the algorithm from ref. [28]. Extreme values are winsorized by 95% quantile for visualization purposes. Small dots in the background show individual observations; large dots show the average value across bins of size 0.1, with size of dots proportional to the number of observations in each bin. Source data are provided as a Source Data file.

    1. Exposure to elite misinformation is associated with sharing news from lower-quality outlets and with conservative estimated ideology.

      Shown is the relationship between users’ misinformation-exposure scores and (a) the quality of the news outlets they shared content from, as rated by professional fact-checkers [21], (b) the quality of the news outlets they shared content from, as rated by layperson crowds [21], and (c) estimated political ideology, based on the ideology of the accounts they follow [10]. Small dots in the background show individual observations; large dots show the average value across bins of size 0.1, with size of dots proportional to the number of observations in each bin.

    1. Notice that Twitter’s account purge significantly impacted misinformation spread worldwide: the proportion of low-credible domains in URLs retweeted from U.S. dropped from 14% to 7%. Finally, despite not having a list of low-credible domains in Russian, Russia is central in exporting potential misinformation in the vax rollout period, especially to Latin American countries. In these countries, the proportion of low-credible URLs coming from Russia increased from 1% in vax development to 18% in vax rollout periods (see Figure 8 (b), Appendix).

    2. Interestingly, the fraction of low-credible URLs coming from U.S. dropped from 74% in the vax development period to 55% in the vax rollout. This large decrease can be directly ascribed to Twitter’s moderation policy: 46% of cross-border retweets of U.S. users linking to low-credible websites in the vax development period came from accounts that have been suspended following the U.S. Capitol attack (see Figure 8 (a), Appendix).
    3. Considering the behavior of users in no-vax communities, we find that they are more likely to retweet (Figure 3(a)), share URLs (Figure 3(b)), and especially URLs to YouTube (Figure 3(c)) than other users. Furthermore, the URLs they post are much more likely to be from low-credible domains (Figure 3(d)), compared to those posted in the rest of the networks. The difference is remarkable: 26.0% of domains shared in no-vax communities come from lists of known low-credible domains, versus only 2.4% of those cited by other users (p < 0.001). The most common low-credible websites among the no-vax communities are zerohedge.com, lifesitenews.com, dailymail.co.uk (considered right-biased and questionably sourced) and childrenshealthdefense.com (conspiracy/pseudoscience).
    1. We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior gets affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents started believing in misinformation.
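A minimal agent-based sketch of the kind of belief-value system described above might look like the following. All agent counts, thresholds, and update rules here are illustrative assumptions, not the study's actual parameters:

```python
import random

random.seed(42)

N_REGULAR = 100          # number of regular agents
BELIEF_THRESHOLD = 0.5   # above this, an agent "believes" the misinformation
INFLUENCE = 0.05         # belief nudge per exposure to a malicious agent

def run(n_malicious, steps=50):
    """Simulate belief drift and return how many agents end up believing."""
    beliefs = [0.1] * N_REGULAR  # everyone starts out skeptical
    for _ in range(steps):
        for i in range(N_REGULAR):
            # Each step, an agent encounters a malicious account with
            # probability proportional to the malicious share of the network.
            if random.random() < n_malicious / (N_REGULAR + n_malicious):
                beliefs[i] = min(1.0, beliefs[i] + INFLUENCE)
    return sum(b > BELIEF_THRESHOLD for b in beliefs)

# With no malicious agents nobody crosses the threshold; with them present,
# belief changes slowly per step but compounds into many believers.
print(run(0), run(20))
```

The qualitative result mirrors the paper's finding: the per-exposure change is small, but over many steps the presence of malicious agents produces far more believers than the baseline scenario.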

    1. Therefore, although the social bot individual is “small”, it has become a “super spreader” with strategic significance. As an intelligent communication subject in the social platform, it conspired with the discourse framework in the mainstream media to form a hybrid strategy of public opinion manipulation.
    2. There were 120,118 epidemic-related tweets in this study, and 34,935 Twitter accounts were detected as bot accounts by Botometer, accounting for 29%. In all, 82,688 Twitter accounts were human, accounting for 69%; 2495 accounts had no bot score detected.

      In social network analysis, degree centrality is an index to judge the importance of nodes in the network. The nodes in the social network graph represent users, and the edges between nodes represent the connections between users. Based on the network structure graph, we may determine which members of a group are more influential than others. In 1979, American professor Linton C. Freeman published an article titled “Centrality in social networks conceptual clarification” in Social Networks, formally proposing the concept of degree centrality [69]. Degree centrality denotes the number of times a central node is retweeted by other nodes (or other indicators; only retweets are involved in this study). Specifically, the higher the degree centrality, the more influence a node has in its network. The measure of degree centrality includes in-degree and out-degree.

      Betweenness centrality is an index that describes the importance of a node by the number of shortest paths through it. Nodes with high betweenness centrality are in the “structural hole” position in the network [69]. This kind of account connects group networks lacking communication and can expand the dialogue space of different people. American sociologist Ronald S. Burt put forward the theory of the “structural hole” and said that if there is no direct connection between the other actors connected by an actor in the network, then the actor occupies the “structural hole” position and can obtain social capital through “intermediary opportunities”, thus having more advantages.
    3. We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of the accounts participating in the discussion were social bots. We found evidence that social bots play an essential mediating role in communication networks. Although human accounts have a more direct influence on the information diffusion network, social bots have a more indirect influence. Unverified social bot accounts retweet more, and through multiple levels of diffusion, humans are vulnerable to messages manipulated by bots, driving the spread of unverified messages across social media. These findings show that limiting the use of social bots might be an effective method to minimize the spread of conspiracy theories and hate speech online.
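The in-degree/out-degree idea behind degree centrality can be sketched in a few lines of Python. The toy retweet network below is a made-up example, not the study's data:

```python
from collections import defaultdict

# Toy directed retweet network: an edge (u, v) means u retweeted v,
# so v's in-degree counts how often v was retweeted. All accounts invented.
edges = [
    ("alice", "bot1"), ("bob", "bot1"), ("carol", "bot1"),
    ("bot1", "dave"), ("dave", "erin"),
]

in_degree = defaultdict(int)   # times an account is retweeted by others
out_degree = defaultdict(int)  # times an account retweets others
for u, v in edges:
    out_degree[u] += 1
    in_degree[v] += 1

# The account with the highest in-degree is the most "influential" node
# in the degree-centrality sense described in the passage.
most_retweeted = max(in_degree, key=in_degree.get)
print(most_retweeted, in_degree[most_retweeted])  # bot1 3
```

Betweenness centrality would additionally require shortest-path computation over the whole graph; in practice a library such as networkx (e.g., its `betweenness_centrality` function) is typically used rather than hand-rolled code.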
    1. I want to insist on an amateur internet; a garage internet; a public library internet; a kitchen table internet.

      Social media should be composed of people from end to end. Corporate interests inserted into the process can only serve to dehumanize the system.


      Robin Sloan is in the same camp as Greg McVerry and me.

    1. Alas, lawmakers are way behind the curve on this, demanding new "online safety" rules that require firms to break E2E and block third-party de-enshittification tools: https://www.openrightsgroup.org/blog/online-safety-made-dangerous/

      The online free speech debate is stupid because it has all the wrong focuses:

      - Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
      - Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
      - Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
      - Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
      - Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair: https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd

      This list is particularly good.


      Proper regulation of end-to-end services would encourage the creation of filtering and other tools which would tend to benefit users rather than the rent seeking of the corporations which own the pipes.

    1. Okay, so flash back to the 1920s and the emergence of something called the public interest mandate. Basically, when radio was new, a ton of people wanted to broadcast; the demand for space on the dial outstripped supply. So to narrow the field, the federal government said that any station using the public airwaves needs to serve the public interest. So what did they mean by the public interest? Yeah, right? It's like super vague, right? But the FCC clarified what it meant by public interest in the years following World War Two. They had seen how radio could be used to promote fascism in Europe, and they didn't want U.S. radio stations to become propaganda outlets. And so in 1949, the FCC basically said to stations: in order to serve the public, you need to give airtime to coverage of current events and you have to include multiple perspectives in your coverage. This is the basis of what comes to be known as the fairness doctrine.

      Origin of the FCC Fairness Doctrine

    1. I'd love it to be normal and everyday to not assume that when you post a message on your social network, every person is reading it in a similar UI, either to the one you posted from, or to the one everyone else is reading it in.

      🤗

    1. [https://a.gup.pe/ Guppe Groups], a group of bot accounts that can be used to aggregate social groups within the [[fediverse]] around a variety of topics like [[crafts]], books, history, philosophy, etc.

    1. Musk appears to be betting that the spectacle is worth it. He’s probably correct in thinking that large swaths of the world will not deem his leadership a failure either because they are ideologically aligned with him or they simply don’t care and aren’t seeing any changes to their corner of the Twitterverse.

      How is this sort of bloodsport similar to or different from the news media's coverage of Donald J. Trump in 2015/2016?

      The similarities in creating engagement within a capitalistic framing, along with the need to garner only a minimal audience to support the enterprise, seem to be at play.

      Compare/contrast this with the NBA's conundrum over the politics of entering the market in China.

    2. A lot has changed about our news media ecosystem since 2007. In the United States, it’s hard to overstate how the media is entangled with contemporary partisan politics and ideology. This means that information tends not to flow across partisan divides in coherent ways that enable debate.

      Our media and social media systems, along with the people who use them, have been structured such that debate is stifled because information doesn't flow coherently across the political partisan divide.

    3. I often think back to MySpace’s downfall. In 2007, I penned a controversial blog post noting a division that was forming as teenagers self-segregated based on race and class in the US, splitting themselves between Facebook and MySpace. A few years later, I noted the role of the news media in this division, highlighting how media coverage about MySpace as scary, dangerous, and full of pedophiles (regardless of empirical evidence) helped make this division possible. The news media played a role in delegitimizing MySpace (aided and abetted by a team at Facebook, which was directly benefiting from this delegitimization work).

      danah boyd argued in two separate pieces that teenagers self-segregated between MySpace and Facebook based on race and class and that the news media coverage of social media created fear, uncertainty, and doubt which fueled the split.

      http://www.danah.org/papers/essays/ClassDivisions.html

    1. “The damage commercial social media has done to politics, relationships and the fabric of society needs undoing.
    2. As users begin migrating to the noncommercial fediverse, they need to reconsider their expectations for social media — and bring them in line with what we expect from other arenas of social life. We need to learn how to become more like engaged democratic citizens in the life of our networks.
    1. I have about fourteen or sixteen weeks to do this, so I'm breaking the course into an "intro" section that covers some basic stuff like affordances, and other insights into how tech functions. There's a section on AI which is nothing but critical appraisals on AI from a variety of areas. And there's a section on Social Media, which is the most well formed section in terms of readings.

      https://zirk.us/@shengokai/109440759945863989

      If the individuals in an environment don't understand or perceive the affordances available to them, can the interactions between them and the environment make it seem as if the environment possesses agency?

      cross reference: James J. Gibson book The Senses Considered as Perceptual Systems (1966)


      People often indicate that social media "causes" outcomes among groups of people who use it. Eg: Social media (via algorithmic suggestions of fringe content) causes people to become radicalized.

  4. Nov 2022
    1. The TTRG (time to reply guy) was getting so fast, that I can’t actually remember the last time I tweeted something helpful like a design or development tip. I just couldn’t be arsed, knowing some dickhead would be around to waste my time with whataboutisms and “will it scale”?
    1. 11/30 Youth Collaborative

      I went through some of the pieces in the collection. It is important to give a platform to the voices that are usually missing from the conversation.

      Just a few similar initiatives that you might want to check out:

      Storycorps - people can record their stories via an app

      Project Voice - spoken word poetry

      Living Library - sharing one's story

      Freedom Writers - book and curriculum based on real-life stories

    1. If more Americans were like TV Tropes’ users—that is, if they could spot the recurring motifs in purported political plots—might they also be better at separating fact from fiction?

      Perhaps EIP could partner with On the Media to produce a trope consumer handbook for elections, vaccines, and various conspiracy theory areas?

      Cross reference: https://www.wnycstudios.org/podcasts/otm/projects/breaking-news-consumers-handbook

    2. As part of the Election Integrity Partnership, my team at the Stanford Internet Observatory studies online rumors, and how they spread across the internet in real time.
    1. In late 2006, Eno released 77 Million Paintings, a program of generative video and music specifically for home computers. As its title suggests, there is a possible combination of 77 million paintings where the viewer will see different combinations of video slides prepared by Eno each time the program is launched. Likewise, the accompanying music is generated by the program so that it's almost certain the listener will never hear the same arrangement twice.

      Brian Eno's experiments in generative music mirror some of the ideas of generative and experimental fiction which had been in the zeitgeist and developing for a while.

      Certainly the fictional ideas were influential to the zeitgeist here, but the technology for doing these sorts of things in the musical realm lagged behind the ability to do them in the word realm.

      We're just starting to see some of these sorts of experimental things in the film space and with artificial intelligence they're becoming much easier to do in all of these media spaces.

      In some of the film spaces, they exist, but may tend to be short in nature, in part given the technology and processing power required.

      see also: Deepfake TikTok of Keanu Reeves which I've recently run across (algorithmically) on Instagram: https://www.dailydot.com/debug/unreal-keanu-reeves-ai-deepfake/

      Had anyone been working on generative art? Marcel Duchamp, et al? Some children's toys can mechanically create generative art which can be subtly modified by the children using axes of color, form, etc. Etch-a-sketch, kaleidoscopes, doodling robots (eg: https://www.amazon.com/4M-Doodling-Robot-Packaging-Vary/dp/B002EWWW9O).

    1. The notable exception: social media companies. Gen Zers are more likely to trust social media companies to handle their data properly than older consumers, including millennials, are.

      Gen-Z is more trusting of data handling by social media companies

      For most categories of businesses, Gen Z adults are less likely to trust a business to protect the privacy of their data as compared to other generations. Social media is the one exception.

    1. https://zettelkasten.social/about

      Someone has registered the domain and it is hosted by masto.host, but not yet active as of 2022-11-13

    1. Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.
    2. By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.
    1. "This is a job market that just won't quit. It's challenging the rules of economics," said Becky Frankiewicz, chief commercial officer of hiring company ManpowerGroup in an email after the data was released. "The economic indicators are signaling caution, yet American employers are signaling confidence."

      This article explains the state of the job market. Creating 528,000 jobs is an outstanding result for the American people, but the article also needs to explain the downsides of job creation in this situation. A market that is challenging the rules of economics is not necessarily a better situation; there are also high risks.

    1. That could create even more burdens for businesses because hiking interest rates tends to create higher rates on consumer and business loans, which slows the economy by forcing employers to cut back on spending.

      This article describes the disadvantages of high interest rates. Although there are facts and concerns we need to be aware of, high interest rates also have advantages, and more information about those advantages would balance the piece.

    1. DHS’s mission to fight disinformation, stemming from concerns around Russian influence in the 2016 presidential election, began taking shape during the 2020 election and over efforts to shape discussions around vaccine policy during the coronavirus pandemic. Documents collected by The Intercept from a variety of sources, including current officials and publicly available reports, reveal the evolution of more active measures by DHS. According to a draft copy of DHS’s Quadrennial Homeland Security Review, DHS’s capstone report outlining the department’s strategy and priorities in the coming years, the department plans to target “inaccurate information” on a wide range of topics, including “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”

      DHS pivots as "war on terror" winds down

      The U.S. Department of Homeland Security pivots from externally-focused terrorism to domestic social media monitoring.

  5. Oct 2022
    1. A recent writer has called attention to a passage in Paxson's presidential address before the American Historical Association in 1938, in which he remarked that historians "needed Cheyney's warning . . . not to write in 1917 or 1918 what might be regretted in 1927 and 1928."

      There are lessons in Frederic L. Paxson's 1938 address to the American Historical Association for today's social media culture and the growing realm of cancel culture when he remarked that historians "needed Cheyney's warning... not to write in 1917 or 1918 what might be regretted in 1927 and 1928."

    1. By teaching them all to read, we have left them at the mercy of the printed word.

      Knowing how to read without the associated apparatus of the trivium leaves people open to believing just about anything. You can read words, but you must also know what to do with those words: how to endow them with meaning and how to reason with them. (summarization)


      Oral cultures with knowledge systems ingrained into them would likely have included trivium-esque structures to allow their users not only to better remember but also to better think and argue.

    1. @1:10:20

      With HTML you have, broadly speaking, an experience and you have content and CSS and a browser and a server and it all comes together at a particular moment in time, and the end user sitting at a desktop or holding their phone they get to see something. That includes dynamic content, or an ad was served, or whatever it is—it's an experience. PDF on the other hand is a record. It persists, and I can share it with you. I can deliver it to you [...]

      NB: I agree with the distinction being made here, but I disagree that the former description is inherent to HTML. It's not inherent to anything, really, so much as it is emergent—the result of people acting as if they're dealing in live systems when they shouldn't.

    2. @48:20

      I should actually add that the PDF specification only specifies the file format and very few what we call process requirements on software, so a lot of those sort of experiential things are actually not defined in the PDF spec.

    1. https://glasp.co/home

      Glasp is a startup competitor in the annotations space that appears to be a web-based tool and response to a large portion of the recent spate of note taking applications.

      Some of the first users and suggested users are names I recognize from this tools for thought space.

      On first blush it looks like it's got a lot of the same features and functionality as Hypothes.is, but it also appears to have a slicker user interface as well as a much larger emphasis on social aspects (followers/following) and gamification (graphs of how many annotations you make, how often you annotate, streaks, etc.).

      It could be an interesting experiment to watch the space and see how quickly it both scales as well as potentially reverts to the mean in terms of content and conversation given these differences. Does it become a toxic space via curation of the social features or does it become a toxic intellectual wasteland when it reaches larger scales?

      What will happen to one's data (it does appear to be a silo) when the company eventually closes, shuts down, or is acquihired?

      The team behind it is obviously aware of Hypothes.is as one of the first annotations presented to me is an annotation by Kei, a cofounder and PM at the company, on the Hypothes.is blog at: https://web.hypothes.is/blog/a-letter-to-marc-andreessen-and-rap-genius/

      But this is true for Glasp. Science researchers/writers use it a lot on our service, too.—Kei

      cc: @dwhly @jeremydean @remikalir

    1. Edgerly noted that disinformation spreads through two ways: The use of technology and human nature.Click-based advertising, news aggregation, the process of viral spreading and the ease of creating and altering websites are factors considered under technology.“Facebook and Google prioritize giving people what they ‘want’ to see; advertising revenue (are) based on clicks, not quality,” Edgerly said.She noted that people have the tendency to share news and website links without even reading its content, only its headline. According to her, this perpetuates a phenomenon of viral spreading or easy sharing.There is also the case of human nature involved, where people are “most likely to believe” information that supports their identities and viewpoints, Edgerly cited.“Vivid, emotional information grabs attention (and) leads to more responses (such as) likes, comments, shares. Negative information grabs more attention than (the) positive and is better remembered,” she said.Edgerly added that people tend to believe in information that they see on a regular basis and those shared by their immediate families and friends.

      Spreading misinformation and disinformation is really easy in this day and age because of how accessible information is and how much of it there is on the web. This is explained precisely by Edgerly. As noted in this part of the article, there is a business in the spread of disinformation, particularly in our country. There are people who pay what we call online trolls to spread disinformation and capitalize on how "chronically online" Filipinos are, among many other factors (i.e., most Filipinos' information illiteracy due to poverty and lack of educational attainment, how easy it is to interact with content we see online regardless of its authenticity, etc.). Disinformation also leads to misinformation through word of mouth. As stated by Edgerly in this article, "people tend to believe in information… shared by their immediate families and friends"; because it is human nature to trust information shared by loved ones, if one is not information literate, they will not question newly received information. Lastly, it most certainly does not help that social media algorithms nowadays rely on what users interact with; the more a user interacts with certain information, the more social media platforms will feed them that information. It does not help because not all social media websites have fact checkers, and users can freely spread disinformation if they choose to.

    1. Direct channels > social media: On Instagram, Facebook, and TikTok, algorithms decide which of your content becomes visible to users. Direct distribution channels stand in contrast to this: a new podcast episode lands immediately in the podcatcher, the newsletter in the inbox, the SMS or push notification on the smartphone. A clear win! Nevertheless, the socials remain important for drawing the attention of new target groups and forming initial connections.

      social media

    1. Trolls, in this context, are humans who hold accounts on social media platforms, more or less for one purpose: To generate comments that argue with people, insult and name-call other users and public figures, try to undermine the credibility of ideas they don’t like, and to intimidate individuals who post those ideas. And they support and advocate for fake news stories that they’re ideologically aligned with. They’re often pretty nasty in their comments. And that gets other, normal users, to be nasty, too.

      Not only are programmed accounts created but also troll accounts that propagate disinformation and spread fake news with the intent to cause havoc among people. In short, once they start with a malicious comment, some people will engage with it, which leads to more rage comments and disagreements. That is what they do: they trigger people to engage with their comments so that the comments spread further and produce more fake news. These troll accounts are usually prominent during elections; in the Philippines, for example, some speculate that certain candidates have set up troll farms just to spread fake news all over social media, which some people then engage with.

    2. So, bots are computer algorithms (set of logic steps to complete a specific task) that work in online social network sites to execute tasks autonomously and repetitively. They simulate the behavior of human beings in a social network, interacting with other users, and sharing information and messages [1]–[3]. Because of the algorithms behind bots’ logic, bots can learn from reaction patterns how to respond to certain situations. That is, they possess artificial intelligence (AI). 

      In all honesty, since I don't usually dwell on technology and coding, I thought that when you say "bot" it is controlled by another user, like a legit person. I never knew that it was programmed and created to learn the usual posting patterns of some people, be it on Twitter, Facebook, or other social media platforms. I think it is important to properly understand how bots work to avoid misinformation and disinformation, most importantly during this time of prominent social media use.

  6. Sep 2022
    1. https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

      Good overview article of some of the psychology research behind misinformation in social media spaces including bots, AI, and the effects of cognitive bias.

      Probably worth mining the story for the journal articles and collecting/reading them.

    2. Bots can also accelerate the formation of echo chambers by suggesting other inauthentic accounts to be followed, a technique known as creating “follow trains.”
    3. We observed an overall increase in the amount of negative information as it passed along the chain—known as the social amplification of risk.

      Could this be linked to my FUD thesis about decisions based on possibilities rather than realities?

    4. We confuse popularity with quality and end up copying the behavior we observe.

      Popularity ≠ quality in social media.

    5. “Limited individual attention and online virality of low-quality information,” By Xiaoyan Qiu et al., in Nature Human Behaviour, Vol. 1, June 2017

      The upshot of this paper seems to be "information overload alone can explain why fake news can become viral."

    6. Running this simulation over many time steps, Lilian Weng of OSoMe found that as agents' attention became increasingly limited, the propagation of memes came to reflect the power-law distribution of actual social media: the probability that a meme would be shared a given number of times was roughly an inverse power of that number. For example, the likelihood of a meme being shared three times was approximately nine times less than that of its being shared once.
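      The quoted example is an instance of a discrete power law: if the probability of a meme being shared k times is proportional to k^(-2), then three shares is 3² = 9 times less likely than one share. A minimal sketch of that arithmetic (the function name, exponent, and cutoff are my assumptions for illustration, not values from the paper):

      ```python
      # Hypothetical sketch: P(k shares) ∝ k^(-alpha) under a truncated
      # discrete power law. The article's example implies alpha ≈ 2,
      # since 3^(-2) = 1/9 of 1^(-2).

      def share_probability(k, alpha=2.0, k_max=1000):
          """Probability of exactly k shares under a power law with
          exponent alpha, truncated at k_max."""
          norm = sum(j ** -alpha for j in range(1, k_max + 1))
          return (k ** -alpha) / norm

      # Ratio of one-share to three-share probability; the normalization
      # constant cancels, leaving 3^2 = 9.
      ratio = share_probability(1) / share_probability(3)
      print(round(ratio))  # → 9
      ```

      Note that the ratio depends only on the exponent, not on the truncation point, which is why the single "nine times less likely" example pins down alpha ≈ 2.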
    7. One of the first consequences of the so-called attention economy is the loss of high-quality information.

      In the attention economy, social media is the equivalent of fast food. Just like going out for fine dining or even healthier gourmet cooking at home, we need to make the time and effort to consume higher quality information sources. Books, journal articles, and longer forms of content with more editorial and review which take time and effort to produce are better choices.

    1. After looking at various studies fromthe 1960s until the early 1980s, Barry S. Stein et al. summarises:“The results of several recent studies support the hypothesis that

      retention is facilitated by acquisition conditions that prompt people to elaborate information in a way that increases the distinctiveness of their memory representations.” (Stein et al. 1984, 522)

      Want to read this paper.

      Isn't this a major portion of what many mnemotechniques attempt to do? "increase distinctiveness of memory representations"? And didn't he just wholly dismiss the entirety of mnemotechniques as "tricks" a few paragraphs back? (see: https://hypothes.is/a/dwktfDiuEe2sxaePuVIECg)

      How can one build or design this into a pedagogical system? How is this potentially related to Andy Matuschak's mnemonic medium research?

    1. Is video making podcasts obsolete?

      No.

      I'm two years shy of 30 and I've been listening to "podcasts" - that is, audio programs distributed over RSS - nearly every day since Middle School. In all of that time, the word - not the medium, I'd argue - podcast has been so unnecessarily stretched to oblivion. Before making statements like this... can we please just... try out a few different words to describe what we're talking about here?

      I listen to content because I be Doin Something dog. Cannot and will not look, and the extra production time required to just make a video version has continued to cause friction with the sort of content I actually want to hear.

    1. https://mleddy.blogspot.com/2005/05/tools-for-serious-readers.html

      Interesting (now discontinued) reading list product from Levenger that in previous generations may have been covered by a commonplace book but was quickly replaced by digital social products (bookmark applications or things like Goodreads.com or LibraryThing.com).

      Presently I keep a lot of this sort of data digitally myself using either/both: Calibre or Zotero.

    1. An article about the social media used by cultural institutions and non-governmental organizations; about their role, their character, and approaches to working with the content published there.

      The two basic roles of social media are: - conveying content, - building community.

      An essential element is therefore communication with the audience, as often as a given issue requires, and without unnecessary distance (we address people informally, "on a first-name basis"), that is, with an emphasis on the community character.

      Thus we talk with people, stay close to them, and create with them a friendly space for shared discussion.

    1. the tools can be distributed within static web-pages, which can easily be hosted on any number of exter-nal services, so researchers need not run servers themselves
  7. Aug 2022
    1. Indie sites can’t compete with that. And what good is hosting and controlling your own content if no one else looks at it? I’m driven by self-satisfaction and a lifelong archivist mindset, but others may not be similarly inclined. The payoffs here aren’t obvious in the short-term, and that’s part of the problem. It will only be when Big Social makes some extremely unpopular decision or some other mass exodus occurs that people lament about having nowhere else to go, no other place to exist. IndieWeb is an interesting movement, but it’s hard to find mentions of it outside of hippie tech circles. I think even just the way their “Getting Started” page is presented is an enormous barrier. A layperson’s eyes will 100% glaze over before they need to scroll. There is a lot of weird jargon and in-joking. I don’t know how to fix that either. Even as someone with a reasonably technical background, there are a lot of components of IndieWeb that intimidate me. No matter the barriers we tear down, it will always be easier to just install some app made by a centralised platform.
    1. We’re trapped in a Never-Ending Now — blind to history, engulfed in the present moment, overwhelmed by the slightest breeze of chaos. Here’s the bottom line: You should prioritize the accumulated wisdom of humanity over what’s trending on Twitter.

      Recency bias and social media will turn your daily inputs into useless, possibly rage-inducing, information.