1,269 Matching Annotations
  1. Jan 2023
    1. You’re not going to have a clear picture at the start. So start with a fuzzy one

      This sounds like what I call soft-focusing. Some years ago I let go of being strict with myself, and stopped having defined goals in favor of courses/directions and a vaguer sense of the destination. I also started soft-focusing my inputs (if there's a connection to my running list of interests connected to my sense of direction, it qualifies), and am now trying to soft-focus my outputs. Not blogpost / project A or deliverable B as I would earlier, but more emergent. Then when I have a task / creative thing to do, I use it to formulate questions to my notes and see what comes up. This evolved from doing the same in conversations with clients and colleagues, where the value of that and the resulting associations was clearly visible.

    1. another narrative failure: the inability to imagine a world different than the one we currently inhabit

      are there compelling stories about what comes after?

    2. frames the possibilities in absolutes: if we can’t win everything, then we lose everything

      Oversimplification abounds: this isn't the single cure, so it's not helpful. Whereas we're in a truly complex environment, and per def that means a whole collection of simultaneous interventions is needed. Complexity never has a single answer.

    3. stories of premature defeat are all too common

      stories about not having the solutions / being too late.

    4. we still lack stories that give context. For example, I see people excoriate the mining, principally for lithium and cobalt, that will be an inevitable part of building renewables – turbines, batteries, solar panels, electric machinery – apparently oblivious to the far vaster scale and impact of fossil fuel mining. If you’re concerned about mining on indigenous land, about local impacts or labour conditions, I give you the biggest mining operations ever undertaken: for oil, gas, and coal, and the hungry machines that must constantly consume them.

      stories lack context, discouraging proper comparison (imo often by design)

    5. Greenwashing – the schemes created by fossil fuel corporations and others to portray themselves as on the environment’s side while they continue their profitable destruction – is rampant

      greenwashing is a category of stories

    6. Outright climate denial – the old story that climate change isn’t real – has been rendered largely obsolete (outside social media) by climate-driven catastrophes around the globe and good work by climate activists and journalists.

      social media the last refuge of climate change denial

    7. What the climate crisis is, what we can do about it, and what kind of a world we can have is all about what stories we tell and whose stories are heard

      we are all storytellers or should be, and we're in a power negotiation situation.

    8. climate journalist Mary Heglar writes, we are not short on innovation. “We’ve got loads of ideas for solar panels and microgrids. While we have all of these pieces, we don’t have a picture of how they come together to build a new world. For too long, the climate fight has been limited to scientists and policy experts. While we need their skills, we also need so much more. When I survey the field, it’s clear that what we desperately need is more artists.”

      It might be we have all the pieces, just not the connective and compelling narrative.

    9. change our relationship to the physical world – to end an era of profligate consumption by the few that has consequences for the many – means changing how we think about pretty much everything: wealth, power, joy, time, space, nature, value, what constitutes a good life, what matters, how change itself happens

      Who's the 'we' here? Just the few (rich countries), or also the many? The quote then lists aspects of systems / ethics and what method of change you think to deploy.

      The Ponzi scheme that is western society is centered as cause and its end the remedy. Vgl [[De externe input cheat 20091015070231]] where I formulate the same. We've run out of being able to ignore [[Geexternaliseerde effecten 20200914204533]]. Is this a [[Ethics of Agency 20201003161155]]?

    10. Perhaps we also need to become better critics and listeners, more careful about what we take in and who’s telling it, and what we believe and repeat, because stories can give power – or they can take it away

      This sounds like crap detection and [[Infostrat Filtering 20050928171301]]. What do you amplify, how do you judge sources, how do you shape your info-diet, and are you aware that how/what you share is a feedback loop to those who shared the stuff you're reacting to? Solnit focuses here on the narrative/shape of what you share in response to your intake (in contrast to resharing other people's narratives).

    11. In order to do what the climate crisis demands of us, we have to find stories of a livable future, stories of popular power, stories that motivate people to do what it takes to make the world we need.

      progressive populism? Vgl [[countering the populist narrative 20221103141532]] and https://jarche.com/2022/09/better-stories-for-a-better-world/

    12. adrienne maree brown wrote not long ago that there is an element of science fiction in climate action: “We are shaping the future we long for and have not yet experienced. I believe that we are in an imagination battle.”

      This is how I've read SF for years, both near future and space opera. As mood board and thinking input.

      adrienne maree brown https://en.wikipedia.org/wiki/Adrienne_Maree_Brown in turn inspired by SF author Octavia Butler (have I read her xenogenesis trilogy?)

    1. The stability we observe in the sheer number of disruptive papers and patents suggests that science and technology do not appear to have reached the end of the ‘endless frontier.’

      All this, and in the end a statement that in absolute terms there's stability? Wow, did they miss the demographic factors at play in the scientific community as a possible explanation of the relative effect?

    2. given the limits constraining further research, science will be hard-pressed to make any truly profound additions to the knowledge it has already generated. Further research may yield no more great revelations or revolutions but only incremental returns.

      At the same time this is precisely the argument above wrt the 19th century. Right when you think you know it all, everything gets turned over.

    3. What happens when the cost of a new discovery becomes so high that it simply is not achieved? Horgan saw that day if not already at hand, then certainly right around the corner.

      Kind of like an inverse singularity: a brick wall.

    4. the book’s core idea: We should expect fewer, and less important, scientific discoveries as time goes on. The reasoning behind this was simple. In the beginning, everything was available to discover. Scientists could make a discovery about the scale of the Earth with an upright stick. They could learn about the speed of sound by watching someone chop wood. However, with each passing year, as the big book of facts became more stuffed with learning, the difficulty of making fundamental new discoveries increases. In the 19th century, the electron was discovered by one guy using equipment that might have been found in a high school science lab (or the basement of a wealthy naturalist). To close out the particle zoo with the Higgs Boson took an international effort with an over $4 billion collider.

      Pointing to [[Evolutionair vlak van mogelijkheden 20200826185412]] again, and that new disruptions probably have higher thresholds to cross (resources, cross-disciplinary teams)

    5. There’s an important precursor to this paper that many media seem to have omitted from this discussion, and that’s the 1996 book, The End of Science, by science journalist John Horgan.

      Book to find.

    6. The bulk of the paper is related to how they determined “disruptiveness” of papers and patent filings (which is where many of those offended by the idea find traction in disputing the overall theme), but the thrust of the conclusion is this: The number of publications has increased, many of those papers are very high quality, some remain disruptive, but many only confirm the status quo. Or at best, they offer new insight that leads to little potential for either scientific or economic impact.

      Again this reads (second-hand) more as a quantity dynamic. Many confirming the status quo btw is also K. (Vgl Edison's '999 ways I established that don't work'.)

    7. Overall, our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.

      Rate of disruption, need to check how this rate is determined. Absolute number of big breaks over time, or relative to scientific production in general (which is when it would be expected to slow with rising production).

    8. We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices or field-specific factors.

      Suggesting it isn't in scientific practice. So what did change very much? The volume of scientists -> the volume of publications.

    9. Subsequently, we link this decline in disruptiveness to a narrowing in the use of previous knowledge, allowing us to reconcile the patterns we observe with the ‘shoulders of giants’ view.

      Is the narrowing maybe a relative one? Or is it really being on a branching but ending path in [[Evolutionair vlak van mogelijkheden 20200826185412]]?

    10. We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions. This pattern holds universally across fields and is robust across multiple different citation- and text-based metrics.

      Is this caused by anything in science, or a symptom of the growing scientific community globally, and rising average edu levels? Quantitative phase shifts have qualitative effects.

    11. data they used wasn’t polling of those in the fields, but a survey of patent filings.

      Switching here from publications to patents, which are a very different beast. Patents are transactions (publishing ideas for temporary market exclusivity). There isn't a necessary path from paper to patent, and def not in all scientific fields. By def patents are more engineering-oriented (in the primary meaning) imo: they're about how to (potentially) make things (work). Whys aren't patentable.

    12. while the number of new scientific publications has never been higher, the impact of those publications is constantly declining

      I can see how the rising volume of publications is also a result of broader access to scientific disciplines. By def high impact is rare, so the average impact will decline with volume, if only because by def a lower percentage of eyes will ever see a given paper.

    13. https://web.archive.org/web/20230116221448/https://www.dailykos.com/stories/2023/1/16/2147067/-Are-we-living-in-the-last-days-of-the-Scientific-Age Discusses a recent Nature article looking at how increasing numbers of new patents (a rightly criticized indicator) deal with ideas of decreasing impact. Conclusion is though that the number of disruptive patents remains high, just that the overall number of patents rises. Meaning perhaps more the democratisation of patenting, or perhaps the end of the utility of patenting, than stalling scientific progress.

      Some points from a 1996 book mentioned; vgl [[Evolutionair vlak van mogelijkheden 20200826185412]] wrt scientific progress / increasing niche-specification in the evolutionary plane of possibilities. The book suggests skating to a different place has prohibitive costs and may be out of reach. Vgl local optimisation in complexity, and what breaking loose from a local optimum takes. Is the loss of the Scientific Age discussed here a needed path into chaos to be able to reach other peaks? Check comments on the Nature article to see if this type of aspect gets discussed.

    1. Please join me in resisting and start helping to curb the hype.

      Call to action: curb the hype.

      Does that ever work? (Vgl when I was recently seen as old and negative simply because of listing a range of (pos and neg) real experiences wrt the metaverse from earlier waves of hype for VR, and asking questions that are a litmus test to determine contextual value to a user.)

      One can choose not to participate in hype, and ensure the hypers are never able to be seen at the same table / level of discussion as you (it legitimises the entity of lesser status if a key figure debates a more trivial figure). But can one pour cold water on it when others and other outlets do join in? And without being seen as 'just negative'? Hype shifts perception of the neg-pos spectrum, [[Overton window 20201024155353]] and Trevino scale, which also applies I think in ethics / phil of tech discussions (which in part means circling back to monster theory).

    2. risks of hyped and harmful technology that is made mainstream at a dazzling speed and on a frightening scale

      Speed and mainstreaming are points of contention here, in light of unripe tech and the unprincipled company behind it.

    3. critical thinking skills

      Vgl w 'disinfo inoculation' a la Finland.

    4. In this age of AI, where tech and hype try to steer how we think about “AI” (and by implication, about ourselves and ethics), for monetary gain and hegemonic power (e.g. Dingemanse, 2020; McQuillan, 2022), I believe it is our academic responsibility to resist.

      When hype is used to influence public opinion, there's an obligation to resist. (Vgl [[Crap detection is civic duty 2018010073052]] and [[Progress is civic duty of reflection 20190912114244]].) Also, which realm of [[Monstertheorie 20030725114320]] are we dealing with here with this type of response? In the comments on Masto it's partly positioned as monster slaying, but that certainly isn't it. It's warning against monster embracing. I think the responses fall more into monster adaptation than assimilation, as it aims to retain existing cultural categories although recognising the challenges issued against them. Not even sure the actual LLM is the monster perceived, but rather its origins and the intentions and values of the company behind it. That places it outside the Monster realm entirely.

    5. push to make Large Language Models (LLMs), such as ChatGPT, larger and larger creates a “gigantic ecological footprint” with implications for “our planet that are far from beneficial for humankind”. [quotes translated from Dutch to English, original available in footnote 1].

      Additionally the ecological footprint of the tech is problematic (Vgl the blockchain mining activities footprint discussion).

    6. The willingness to provide free labour for a company like OpenAI is all the more noteworthy given (i) what is known about the dubious ideology of its founders known as `Effective Altruism’ (EA) (Gebru, 2022, Torres, 2021), (ii) that the technology is made by scraping the internet for training data without concern for bias, consent, copyright infringement or harmful content, nor for the environmental and social impact of both training method and the use of the product (Abid, Farooqi, & Zou, 2021; Bender et al., 2021; Birhane, Prabhu, & Kahembwe, 2021; Weidinger, et al., 2021), and (iii) the failure of Large Language Models (LLMs), such as ChatGPT, to actually understand language and their inability to produce reliable, truthful output (Bender & Koller, 2020; Bender & Shah, 2022).

      Claims: Doing free labour for OpenAI is problematic (not expressed: every usage feeds back into the machine and is more free labour put in). Reasons:
      * OpenAI founders are on the utilitarianism-followed-ad-absurdum end of Effective Altruism. The 'Open' bit is open-washing.
      * The provenance of training data is ethically suspect (internet scraping), and not controlled for quality.
      * Externalities aren't taken into account.
      * The social impact of use (including use based on faulty output) is not considered.
      * LLMs are still bad at understanding (and routinely fail tests that contain imprecise references to other words in the sentence, which humans find easy to solve based on real-world knowledge outside the text).

    7. It’s almost as if academics are eager to do the PR work for OpenAI

      buying into the hype equated to doing PR work for OpenAI

    8. prevent that hyped-AI hijacks our attention and dictates our education and examination policies

      Iris van Rooij means to counteract edu sector buying into the AI hype

    1. Dave positions free will as a 1 or 0 thing, and then tends to 0. Is it that binary though? A spectrum (matters of degree across different contexts) would also help explain things.

      Free will is not free of consequences, which is where conditioning kicks in. (and society demanding responsibility or not)

      How about animals? Conditioned by their ecosystem (like us, by ecosystem and culture), yet free to roam which coalesces into patterns (migration, foraging, trails)

      Is emergence then disproving free will? Emergence is the only possible definition of 'they' here, which lacks intention and planning. It still means you can perceive it as forcing you / as hostile towards you individually.

      Evolution. Your starting point on the evolutionary field of possibilities is determined outside of you and limits choices/paths, just as every choice made along the way cuts off certain paths and brings others in reach. One has limited control there by def, and no control for the first phase of life (what little control there is as a child is exercised while ignorant of the consequences down the line wrt options).

      This juxtaposes the individual (having free will or not) and society (imposing full conditioning). The crux of complexity is the group level: groups one is part of by conditioning/birth and by choice/seeking out within available pre-conditioned options. There is no influence-free place, but I doubt that's a prerequisite for free will. Is agency a more useful term, as it does apply to groups, and by extension organisations and countries? Their mention in the context of free will is a bit weird, as if anthropomorphisation is something actual.

    1. Moreover, the decision is fundamentally pointless because it will have zero impact on consumer privacy. Neither Facebook nor Instagram sell user data—they simply use the information on their platform to show users targeted ads. The only change that this decision will cause is that Meta will have to rewrite its privacy policy to use one of the other legal bases provided in the GDPR to operate Facebook and Instagram, including to deliver targeted ads.

      Actually 'delivering targeted ads' based on protected data is inconsistent with the GDPR entirely.

    1. Control + K

      so if I annotate and highlight something at the same time?

    2. I don't presently have plans to expand this into an annotation extension, as I believe that purpose is served by Hypothesis. For now, I see this extension as a useful way for me to save highlights, share specific pieces of information on my website, and enable other people to do the same.

      I wonder if it uses the W3C recommendation for highlighting and annotation though? Which would allow it to interact with other highlighting/annotation results.

      To me highlighting is annotation, though a lightweight form, as the decision to highlight is interacting with the text in a meaningful way. And the pop-up box actually says Annotation right there in the screenshot, so I don't fully grasp what distinction James is making here.
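
      For my own reference: the W3C recommendation in question would be the Web Annotation Data Model, in which a highlight is simply an annotation with the 'highlighting' motivation, and a comment adds a body. A minimal sketch as a Python dict (the page URL and quoted text are made-up examples):

      ```python
      import json

      # Rough sketch of a W3C Web Annotation (https://www.w3.org/TR/annotation-model/).
      # A highlight is an annotation whose motivation is "highlighting"; adding a
      # TextualBody with motivation "commenting" would turn it into a note.
      annotation = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          "type": "Annotation",
          "motivation": "highlighting",
          "target": {
              "source": "https://example.com/some-article",  # hypothetical page
              "selector": {
                  "type": "TextQuoteSelector",
                  "exact": "the highlighted passage",
                  "prefix": "text just before ",
                  "suffix": " text just after",
              },
          },
      }

      print(json.dumps(annotation, indent=2))
      ```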

    1. Sure, this means that the conversations take place on those platforms, but the source of my content – my words – are still on my site, which I control.

      Kev is equating integration with any service with attempts to increase conversation around a post. That is often true but not always. E.g. I'm looking at AP to increase which of my own words I am sharing: e.g. AP for limited-audience postings, and e.g. RSS for a subset of postings that are unlisted for the general public on my site.

    2. While that discourse is very important, the complexity it would add to the site to manage it, just isn’t worth it in my eyes.

      Valid point Kev makes here. A site should do only what its author needs it to do. I want interaction visible on my site, though I probably will cut down on the facepiles.

    1. Adding The Post Title To My “Reply By Email” Button

      I wonder if that would increase responses to my blog as Kev indicates. There might be those who will respond in e-mail, but not in a public comment. Worth a try.

  2. Dec 2022
    1. Niklas Luhmanns »Soziale Systeme«
    2. https://web.archive.org/web/20221020022908/https://www.spiegel.de/kultur/soziologie-flug-ueber-den-wolken-a-1ef2fd09-0002-0001-0000-000013511556

      A discussion by Dirk Käsler of Niklas Luhmann's 'Soziale Systeme', both from 1984. Käsler notes something I hadn't realised yet: Luhmann and Habermas are almost the same age, and Käsler in 1984 dubs them West Germany's best sociological and philosophical contemporary export products respectively. E did her master's thesis on Habermas in comparison with blogs (H never said anything about the internet). The West German context is also something to take note of, I think. Reading the name made me realise how much context / cultural background there is in it. How differently I experience Germany now compared to the 1980s, when even a year before the fall of the Wall that event seemed a generation away. The summers I spent in Germany, all in the west, and the visit to East Germany in 1987. A different world compared to later visits on both sides of the former divide.

    1. Although there is no server governance census, in my own research I found very few examples of more participatory approaches to server governance (social.coop keeps a list of collectively owned instances)

      Tarkowski has only seen a few more participatory server governance set-ups, most are 'benevolent dictator' style. Same. I have seen several growing instances that have changed from 'dictator' to a group of moderators making the decisions. You could democratise those roles by having users propose / vote on moderators. It's what we do offline in communal structures.

    2. the world needs proof that social networks can be maintained and governed in a decentralized and democratic way.

      We have that proof, once you let go of the notion that everyone should be on the same tech platform to count as a social network. Offline social networks are by def decentralised and operate by consensus (or they split, change, move on); we build governance mushrooms on top of them (countries, democracies) that are the scaling layers, but not the building block. Mimic human networks digitally (which the internet fundamentally already is) and it will be fine, messy as humans are, but fine in terms of utility and value derived by people using their tools.

      Tarkowski hits on key points, but seemingly takes the notions of scale and the centrality of a tech platform (and calling that the community) as a given, whereas I see them as the narrative that bigger tech platforms created. For them that narrative was necessary to be able to monetise, to get above the Coasean Floor with funding. There is no presumptive need for any social network to be at scale for the entirety of human interaction to be at scale as a result. Community is not a synonym for platform, or for the combination of a platform and its users.

      Approaching the fediverse from this 'big tech' framing more likely perpetuates the Paradox of Open it starts with than that the resulting suggestions solve it. Escape it by working with the actual human meaning and level of social networks and the tools they deploy to interact, and ensure they can interconnect. E-mail, snail mail, SSB, IPFS make more sense here than Twitter, FB et al.

    3. This is the time to start thinking about the long-term sustainability and governance of the new, bigger Mastodon.

      This again is where it derails: 'a new bigger Mastodon'. There is no Mastodon as such. That's the point to/of leverage.

    4. Build a stronger social and institutional layer. There are many experts in community- and network-building who should be interested in working on such a project.

      Yes. It requires however letting go of starting from the tech as a single plaform or the presumption of scale, and starting from community and network. The group once formed shapes/defines the tech tools used. Then link those tools used up into a federated whole, that in turn may need its own governance structures. Vgl [[Waardevol op zich, waardevoller verbonden 20190513163855]] (valuable on its own, more valuable connected)

    5. Secure greater involvement of public institutions. Dan Hon proposed for organizations to set up their own Mastodon instances and serve as verified, public interest driven, trusted nodes in the network. Public institutions should also bring in resources needed to develop the network – for example invest in public benefit algorithmic solutions for the Fediverse, or a broader range of services (Peer Tube is a good example of a publicly funded Fediverse service).

      This makes sense. Not just for public institutions, also for other types of organisations. Public institutions have a leading by example role here.

    6. Launch a participatory project to define a shared mission for building the digital public space on the basis of the Fediverse.

      Why is the apparent assumption here that it is needed to start from a perspective of global scale? Why one 'shared mission', e.g.?

    7. I am uncertain whether such a shift towards participatory governance is possible. A useful analogy is that of Wikipedia and other Wikimedia projects, which are undergoing a significant “phase shift”, from the culture defined by the community of early contributors, to a broader and more inclusive culture– one centered not just on encyclopedic prowess, but also institutional organizing. This example suggests that such a shift is possible, but hard. It requires both significant resources, which have been invested in the case of Wikimedia, but also strong leadership that is in dialogue with the community and can negotiate together the changes (this has happened to a lesser extent). 

      I'm surprised that the underlying assumption (and tone), not just here but in most tech discussions of this type, is still that 'everything' around a tech tool should be done through that tech tool. Of course you need to organise around it, and professionalise that in the face of growth or of becoming more central to some group's functioning. Obviously you need to leverage other types of governance and decision making than what went into creating a tech at first. Institutionalising is a time-proven way to sustain an effort. Technology = politics. You need to be a politician in your own technology space. A politician in the artisanal and practice/behaviour sense, not in the occupation sense. Vgl [[Mijn werk is politiek 20190921114750]] which mostly implies thinking at different levels of abstraction about your situation simultaneously (Vgl [[Triz denken in systeemniveaus 20200826114731]] but then socially as well as technically).

    8. The mission of building digital public spaces

      PublicSpaces overtones here. They're not wrong. Presumably Paul Keller is involved in it?

    9. The crucial divide, just as with the Mastodon code, is between the programming haves and have-nots, the coders and non-coders. The openness of the ecosystem means that it is in principle a lot more democratic, as it creates meaningful possibilities to shape it by contributing code. But this ability is not available to the majority of users, leading to a sort of caste society, built on top of an open source infrastructure. There is no realistic scenario in which all users learn to code – therefore participatory governance approaches, which take control of the code away from the hands of the coders, and into collective decision-making processes, is the only way forward. 

      This comes back to my 'tech smaller than us': governance and technological control over a tool should reside within the context-specific group of users using it. Which does not mean every single person within the group needs all the skills to control the tool, just that the group has it within itself, and decides as a group on how that control is used. Vgl [[Networked Agency 20160818213155]] and [[Technologie kleiner dan ons 20160818122905]] n:: This would e.g. imply in Tarkowski's text that a community-run instance would ensure having at least 1 member contribute code to Mastodon, and strategically operate to ensure that. Is this also an element wrt the above about the Paradox of Open, as the non-monetary benefits of contributing may well be enumerated as part of the operating costs of a community instance?

    10. Users can also make such decisions at individual level, but at the scale of the network, it’s the servers that count.

      unless as I said servers and individuals are often the same. Not per se to run a Mastodon instance as discussed here, but to have a website that also supports #ActivityPub interaction

    11. The Fediverse is, by design, fractured by server-level decisions that block and cancel access to other parts of the network.

      Indeed, and this is what many don't seem to take into account. E.g. calls for various centralisation efforts. The way 'out' of being under 'benevolent dictators' is to fragment more to the level where you are your own dictator and moderation decisions at individual level and server level are identical, or where you have a cohesive circle of trust in which those decisions take place (i.e. small servers with small groups)

    12. This is a case of what Paul Keller and I have called the Paradox of Open: the existence of power imbalances that leads to, at best, ambivalent outcomes of openness. It is a paradox that the success of open code software, both in terms of the reach of the technology and in economic terms, has happened through underfunded or entirely volunteer work of individual coders. This shows the limits of the open source development model that will affect the future growth of the Fediverse as well. 

      The Paradox of Open (Source) is, as Keller and Tarkowski formulate it, that the clear socio-economic value and tech impact of open source comes from underfunded / volunteer work. Vgl [[Bootstrapping 20201007204011]] and the role of precarity in it. Makes me think of Ronald Coase's 1937 transaction costs, which in [[Here Comes Everybody by Clay Shirky]] are used to derive a Coasean Floor (and Coasean Ceiling). Openness allows you to operate below the Coasean Floor, but it seems that bootstrapping beyond that first stage is harder. Are such projects incapable of finding a spot above the Floor further down the chain, or are they pushed aside (or rather 'harvested') by those already positioned and operating above that Floor? Perhaps your spot above the Floor needs to be part of the design of the work done below the Floor.

      What's the link with my [[Openheid en haar grenzen 20130131154227]], in which I position openness as a necessity to operate in a networked environment, and as necessarily limited by human group dynamics and keeping those healthy? Balancing both is the sweet spot in the complex domain. Are we any good at doing that in other terms than just social group behaviour in a room? #openvraag what's the online/bootstrapping equivalent of it? How did we do that for my company 11 yrs ago (in part by operating something else above the Floor alongside)?

    13. Hans Gerwitz
    14. Mastodon is a sustainable, healthy network that reached – before the migration begun – an equilibrium of around 5 million overall, with half a million active users. So why does it need to grow further? Because millions more people need access to healthy, just, sustainable, user-friendly communication tools. Hans Gerwitz described it as seeing the network’s growth as “souls saved,” instead of “eyeballs captured.”  

      An eye-opening metric: 'souls saved' vs eyeballs. As in, the number of people with access to a community-values-reinforcing platform. Vgl [[EU Digital Rights and Principles]] and [[Digital Services Act 20210319103722]], which are more geared to things that aren't specifically aimed at the culture of interaction, but at things that can be regulated (in the hope that it will impact the culture of interaction).

    1. Actor objects MUST have, in addition to the properties mandated by 3.1 Object Identifiers, the following properties: inbox A reference to an [ActivityStreams] OrderedCollection comprised of all the messages received by the actor; see 5.2 Inbox. outbox An [ActivityStreams] OrderedCollection comprised of all the messages produced by the actor; see 5.1 Outbox.

      An actor has id and type properties and an inbox and outbox. It should also have a number of other things (not all of which I see the use of: why should I disclose followers and following, e.g.?), and it may have a number of things, including token endpoints which might be useful.
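
      As a concrete reminder to myself, a minimal actor object along those lines might look like the sketch below (a Python dict; the domain, username and paths are hypothetical, and only id, type, inbox and outbox are hard requirements):

      ```python
      import json

      # Minimal-ish ActivityPub actor: id, type, inbox and outbox are the essentials;
      # preferredUsername, followers and following are among the 'should have' properties.
      actor = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "id": "https://example.org/users/ton",              # hypothetical actor id
          "type": "Person",
          "preferredUsername": "ton",
          "inbox": "https://example.org/users/ton/inbox",     # where other servers POST to
          "outbox": "https://example.org/users/ton/outbox",   # where my activities are listed
          "followers": "https://example.org/users/ton/followers",
          "following": "https://example.org/users/ton/following",
      }

      print(json.dumps(actor, indent=2))
      ```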

    2. The source property is intended to convey some sort of source from which the content markup was derived, as a form of provenance, or to support future editing by clients. In general, clients do the conversion from source to content, not the other way around.

      this is akin to how Dave Winer added the capability to add source to a feed in the RSS spec. I read this as only of interest if you use the client to potentially edit the source. If I'd use WP to originate AP msgs, this would not be needed, as any changes would be initiated in WP anyway
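
      A hedged sketch of what that looks like for a Note written in Markdown (the text is made up; the source/content pattern follows the quoted spec section):

      ```python
      # 'content' is what gets displayed; 'source' carries the original markup so a
      # client could re-edit it later. If WordPress is the only editor, that round
      # trip adds little.
      note = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Note",
          "content": "<p>I <em>really</em> like ActivityPub.</p>",
          "source": {
              "content": "I *really* like ActivityPub.",
              "mediaType": "text/markdown",
          },
      }
      ```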

    3. dereference the id both to ensure that it exists and is a valid object, and that it is not misrepresenting the object

      Dereferencing = following the link to the source and looking at its content, to see if it matches the representation in the JSON data. Does this imply you could leave it at the id for the object, and leave out the rest as it isn't accepted anyway? Below it says that every object has at least id and type, so yes to the above.

    4. https://social.example/alyssa/followers/

      to address a message to followers
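
      E.g. a Create activity addressed to that followers collection could look roughly like this sketch (reusing the spec's Alyssa example URLs; adding https://www.w3.org/ns/activitystreams#Public to the 'to' list would make it public):

      ```python
      # Addressing works by listing actors or collections in 'to' / 'cc'.
      create = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Create",
          "actor": "https://social.example/alyssa/",
          "to": ["https://social.example/alyssa/followers/"],   # followers collection
          "object": {
              "type": "Note",
              "attributedTo": "https://social.example/alyssa/",
              "content": "Hello to my followers only",
          },
      }
      ```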

    5. It is called out whenever a portion of the specification only applies to implementation of the federation protocol. In addition, whenever requirements are specified, it is called out whether they apply to the client or server (for the client-to-server protocol) or whether referring to a sending or receiving server in the server-to-server protocol.

      One needs to be aware of the distinction between server elements related to client interaction, and server elements related to server-to-server things. A server is basically 2 servers in 1 if it is fully capable of federation.

    6. https://www.w3.org/ns/activitystreams#Public

      A special 'to' group, meaning publicly available (e.g. on your profile page, or to anyone looking in your outbox).

    7. @context": "https://www.w3.org/ns/activitystreams"

      any AP message is in context of ActivityStreams

    8. Alyssa's server looks up Ben's ActivityStreams actor object, finds his inbox endpoint, and POSTs her object to his inbox.

      this is the server to server step
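
      A rough sketch of that delivery step in Python with the requests library (URLs loosely follow the spec's Alyssa and Ben example; real-world servers such as Mastodon additionally expect HTTP Signatures, which are omitted here):

      ```python
      import requests

      # Alyssa's server found Ben's inbox URL in his actor object and now delivers
      # her Create activity there: the server-to-server (federation) step.
      ben_inbox = "https://chatty.example/ben/inbox"   # taken from Ben's actor document

      activity = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Create",
          "actor": "https://social.example/alyssa/",
          "to": ["https://chatty.example/ben/"],
          "object": {"type": "Note", "content": "Say, did you finish reading that book I lent you?"},
      }

      response = requests.post(
          ben_inbox,
          json=activity,
          headers={"Content-Type": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'},
      )
      print(response.status_code)
      ```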

    1. The core Actor Types include: Application Group Organization Person Service

      Actors in AP can be apps, groups, orgs, people, services. It's a choice that Mastodon assumes people? What if I create an actor profile that says it's a group, would Mastodon know how to deal with it?
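
      To test exactly that question, a Group actor would just swap the type while keeping the same required properties; a sketch with hypothetical URLs:

      ```python
      # Same required properties as a Person actor, only the type differs. Whether a
      # given Mastodon instance renders or special-cases this is the open question.
      group_actor = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "id": "https://example.org/groups/indieweb-meetup",   # hypothetical group id
          "type": "Group",
          "preferredUsername": "indieweb-meetup",
          "inbox": "https://example.org/groups/indieweb-meetup/inbox",
          "outbox": "https://example.org/groups/indieweb-meetup/outbox",
      }
      ```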

  3. Nov 2022
    1. Servers should not trust client submitted content, and federated servers also should not trust content received from a server other than the content's origin without some form of verification.

      If it's my client posting to my server then it's of less concern, for the client to server side. Federation is by def not to be trusted.

    2. Servers SHOULD validate the content they receive to avoid content spoofing attacks. (A server should do something at least as robust as checking that the object appears as received at its origin, but mechanisms such as checking signatures would be better if available). No particular mechanism for verification is authoritatively specified by this document, but please see Security Considerations for some suggestions and good practices

      You should do some evaluation of received content, but the spec doesn't provide an authoritative way of verification.
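
      The minimal check the spec does suggest (that the object appears as received at its origin) might be sketched like this; looks_authentic is a hypothetical helper and signature verification is left out:

      ```python
      import requests

      def looks_authentic(received_object: dict) -> bool:
          """Naive verification: dereference the object's id at its origin and compare
          a few key fields. Signature-based checks would be stronger."""
          object_id = received_object.get("id")
          if not object_id:
              return False
          resp = requests.get(object_id, headers={"Accept": "application/activity+json"})
          if resp.status_code != 200:
              return False
          origin = resp.json()
          return all(origin.get(key) == received_object.get(key)
                     for key in ("type", "attributedTo", "content"))
      ```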

    3. ActivityPub defines some terms in addition to those provided by ActivityStreams. These terms are provided in the ActivityPub JSON-LD context at https://www.w3.org/ns/activitystreams.

      https://www.w3.org/ns/activitystreams#activitypub this seems quite a list, and rather central to AP: All terms in this section are described in the [ActivityPub] specification.

      endpoints (@id)
      following (@id)
      followers (@id)
      inbox (@id) - This is an alias of ldp:inbox.
      liked (@id)
      shares (@id)
      likes (@id)
      oauthAuthorizationEndpoint (@id)
      oauthTokenEndpoint (@id)
      outbox (@id)
      preferredUsername
      provideClientKey (@id)
      proxyUrl (@id)
      sharedInbox (@id)
      signClientKey (@id)
      source
      streams (@id)
      uploadMedia (@id)
      
    4. Objects are the core concept around which both [ActivityStreams] and ActivityPub are built. Objects are often wrapped in Activities and are contained in streams of Collections, which are themselves subclasses of Objects. See the [Activity-Vocabulary] document, particularly the Core Classes; ActivityPub follows the mapping of this vocabulary very closely.

      Objects are the core building blocks of AP, following the ActivityStreams spec fully. Objects are wrapped in Activities, and can be part of Collections (themselves Objects). It also follows the same URI conventions.

    5. This specification defines two closely related and interacting protocols: A client to server protocol, or "Social API" This protocol permits a client to act on behalf of a user. For example, this protocol is used by a mobile phone application to interact with a social stream of the user's actor. A server to server protocol, or "Federation Protocol" This protocol is used to distribute activities between actors on different servers, tying them into the same social graph. The ActivityPub specification is designed so that once either of these protocols are implemented, supporting the other is of very little additional effort. However, servers may still implement one without the other

      It is possible to have an AP client-server implementation that does not do the federation part, but it would be little effort to support it. And vice versa. Does this mean that in such a case every other server should come and get information, iow another server should know to come get it? Federation is more about push, it seems. Not doing the federation part means limiting the amount of stuff you get to receive (all the other stuff). For my site it would be good to limit what's being processed. Currently it seems to me that maybe the WP AP plugin isn't doing the federation bits?

    6. Since this is a non-activity object, the server recognizes that this is an object being newly created, and does the courtesy of wrapping it in a Create activity. (Activities sent around in ActivityPub generally follow the pattern of some activity by some actor being taken on some object. In this case the activity is a Create of a Note object, posted by a Person).

      Posting a non-activity object to your outbox is supposed to be treated as a Create-type activity and wrapped as such. Other activities (e.g. Like) should be deliberately expressed, but Create is assumed if no activity is present.
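
      A sketch of that courtesy wrapping: the bare Note a client POSTs to its outbox, and the Create activity the server turns it into (the ids are hypothetical and would be assigned by the server):

      ```python
      # What the client POSTs to its own outbox: a bare, non-activity object.
      bare_note = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Note",
          "content": "Hello fediverse",
          "to": ["https://www.w3.org/ns/activitystreams#Public"],
      }

      # What the server stores and federates after wrapping it in a Create.
      wrapped = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Create",
          "id": "https://example.org/users/ton/activities/1",   # assigned by the server
          "actor": "https://example.org/users/ton",
          "to": bare_note["to"],
          "object": dict(bare_note, id="https://example.org/users/ton/notes/1"),
      }
      ```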

    7. Indeed, federation happens usually by servers posting messages sent by actors to actors on other servers' inboxes.

      Federation comes from POSTing to other servers' inboxes, rather than GETting from servers' outboxes. Is this the source of M being rather traffic heavy?

    8. if that last one (GET'ing from someone's outbox) was the only way to see what people have sent, this wouldn't be a very efficient federation protocol!

      ? So at core, AP would require cycling through people's outboxes to get the stuff that is available there. Such pulling may be enough for just specific circles of people. Would it be enough for dAPplr? Come and get it?

    9. You can POST to someone's inbox to send them a message (server-to-server / federation only... this is federation!) You can GET from your inbox to read your latest messages (client-to-server; this is like reading your social network stream) You can POST to your outbox to send messages to the world (client-to-server) You can GET from someone's outbox to see what messages they've posted (or at least the ones you're authorized to see). (client-to-server and/or server-to-server)

      GET and POST behave differently on the client and server side. GET from the inbox, client side, is reading one's stream. POST to the outbox, client side, is sending messages out. POST to the inbox, server side, is incoming stuff from the fediverse. GET from the outbox, server side, is getting the msgs you're authorised to see. This last one would be how I'd approach e.g. dAPplr (I think I found my working title).
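
      The four combinations as a quick sketch with the requests library (endpoints are hypothetical; authentication and HTTP Signatures are omitted):

      ```python
      import requests

      AS_TYPE = "application/activity+json"
      my_inbox = "https://example.org/users/ton/inbox"       # hypothetical endpoints
      my_outbox = "https://example.org/users/ton/outbox"
      their_inbox = "https://social.example/alyssa/inbox"
      their_outbox = "https://social.example/alyssa/outbox"

      # Server-to-server (federation): deliver an activity to someone else's inbox.
      requests.post(their_inbox,
                    json={"type": "Like", "object": "https://social.example/alyssa/notes/1"},
                    headers={"Content-Type": AS_TYPE})

      # Client-to-server: read my own stream.
      requests.get(my_inbox, headers={"Accept": AS_TYPE})

      # Client-to-server: publish something via my outbox.
      requests.post(my_outbox, json={"type": "Note", "content": "Hello"},
                    headers={"Content-Type": AS_TYPE})

      # Pull what someone else has published, as far as I'm authorised to see it.
      requests.get(their_outbox, headers={"Accept": AS_TYPE})
      ```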

    10. An inbox: How they get messages from the world An outbox: How they send messages to others

      Every actor has an inbox and outbox. These are endpoint URLs listed in the ActivityPub actor's ActivityStreams description.

    11. In ActivityPub, a user is represented by "actors" via the user's accounts on servers. User's accounts on different servers correspond to different actors.

      A user is an actor is an account on a server. Actors can be of different types (org, group, person, app and service). I suppose here person is usually meant, but I can see group and org being useful too, as well as service (different content streams by me) and even app (a source of content creation e.g.).

    12. ActivityPub provides two layers: A server to server federation protocol (so decentralized websites can share information) A client to server protocol (so users, including real-world users, bots, and other automated processes, can communicate with ActivityPub using their accounts on servers, from a phone or desktop or web application or whatever) ActivityPub implementations can implement just one of these things or both of them. However, once you've implemented one, it isn't too many steps to implement the other, and there are a lot of benefits to both

      It seems all current implementations are both, but I wonder if separating them out creates lower-threshold agency, much like how Microsub separates the feed server and the reading client, and Micropub separates the posting server and the writing client. I can see having multiple client-side things to be able to post or see content, even if the base case is my site being my client and my server. Keeping a clear distinction of what's what is always useful.

    13. The ActivityPub protocol is a decentralized social networking protocol based upon the [ActivityStreams] 2.0 data format. It provides a client to server API for creating, updating and deleting content, as well as a federated server to server API for delivering notifications and content.

      In sum, I'd need to:
      * understand activities
      * use my site to create those activities, and display those of others
      * separate the client part on my site and the server part clearly in my mind
      * figure out how the server part processes incoming notifications and content.

    14. federated server to server API for delivering notifications and content

      The servers also have an API between each other to deliver notifications and content.

    15. provides a client to server API for creating, updating and deleting content

      The client creates/updates/deletes content, and uses an API to send that to the server.

    16. The ActivityPub protocol is a decentralized social networking protocol based upon the [ActivityStreams] 2.0 data format.

      ActivityStreams is a fundament for AP. This is why I'm interested, as ActivityStreams contains a much wider range of verbs than just those for basic social interaction. Specifically, it covers things that used to have a dedicated social software service, since gone (e.g. Dopplr, the original Foursquare, Jaiku's way of combining streams into 1). Should read and mine the ActivityStreams spec for understanding too.
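
      For instance, the Activity Vocabulary includes verbs like Travel and Arrive together with Place objects, which is roughly the Dopplr / Foursquare territory; a hedged sketch (actor URL is hypothetical, coordinates approximate):

      ```python
      # An 'Arrive' activity with a Place, drawn from the Activity Vocabulary's wider
      # set of verbs beyond Create/Like/Follow.
      arrive = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Arrive",
          "actor": "https://example.org/users/ton",   # hypothetical actor
          "location": {
              "type": "Place",
              "name": "Utrecht Centraal",
              "latitude": 52.09,
              "longitude": 5.11,
          },
      }
      ```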

    1. These are the specifications produced by the Social Web Working Group. New implementation reports and feedback are always welcome (details for where to submit these are at the top of each document). [Activitypub] JSON(-LD)-based APIs for client-to-server interactions (ie. publishing) and server-to-server interactions (ie. federation). ActivityStreams 2.0 [activitystreams-core] and [activitystreams-vocabulary] The syntax and vocabulary for representing social activities, actors, objects, and collections in JSON(-LD). Linked Data Notifications ([LDN]) A JSON-LD-based protocol for delivery. [Micropub] A form-encoding and JSON-based API for client-to-server interactions (ie. publishing). [Webmention] A form-encoding-based protocol for delivery. [WebSub] A protocol for subscription to any resource and delivery of updates about it. Specifications which are not Social Web Working Group recommendations, but which are nonetheless relevent to the charter deliverables, are described in 8. Related specifcations.

      The various specs by the Social Web Working Group are listed here together. AP, with the purpose of interaction, is mentioned alongside Micropub, WebSub and Webmention. The 'related specs' mention IndieAuth and MF2. Useful to see them presented here as a 'family' rather than as alternatives; it underscores the gap that AP could fill for my otherwise IndieWeb-enabled site.

    1. To "keep things the way they are" is always an option, never the default. Framing this option as a default position introduces a significant conservative bias — listing it as an option removes this bias and keeps a collective evolutionary. To "look for other options" is always an option. If none of the other current options are good enough, people are able to choose to look for better ones — this ensures that there is always an acceptable option for everyone. Every participant can express how much they support or oppose each option. Limiting people to choose their favorite or list their preference prevents them from fully expressing their opinions — scoring clarifies opinions and makes it much more likely to identify the best decision. Acceptance (non-opposition) is the main determinant for the best decision. A decision with little opposition reduces the likelihood of conflict, monitoring or sanctioning — it is also important that some people actively support the decision to ensure it actually happens.

      Four elements to make 'score voting' more of a cooperative effort: the status quo is one of the options to choose, not the default if no decision is made; adding options is always possible (meaning no limitative list of options, which would give a certain power to the maker of the list); everyone marks support/opposition for all options, not just favourites (score voting), and totals are tabulated (#openvraag how does this avoid 'brainless squid' results?); acceptance (meaning no or low opposition) rather than faving is the main factor in decision making. That last one reads as pointing to a balanced dual indicator: the strongest attractor wins given the lowest barrier. So first determine the lowest-barrier options, then the biggest attractor amongst those.
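
      A small worked example of that tabulation as I read it (invented numbers): everyone scores every option for support and for opposition, the options with the least total opposition form the shortlist, and the most supported option within that shortlist wins.

      ```python
      # Toy score-voting tabulation: three voters each give every option a
      # (support, opposition) score pair on a 0-2 scale. Numbers are invented.
      votes = {
          "keep things as they are": [(1, 0), (0, 2), (0, 1)],
          "option A":                [(2, 0), (2, 0), (1, 0)],
          "look for other options":  [(1, 0), (1, 0), (0, 0)],
      }

      totals = {opt: (sum(s for s, _ in v), sum(o for _, o in v)) for opt, v in votes.items()}

      # Shortlist = lowest total opposition; winner = highest support within the shortlist.
      least_opposition = min(o for _, o in totals.values())
      shortlist = [opt for opt, (_, o) in totals.items() if o == least_opposition]
      winner = max(shortlist, key=lambda opt: totals[opt][0])

      for opt, (support, opposition) in totals.items():
          print(f"{opt}: support={support}, opposition={opposition}")
      print("winner:", winner)
      ```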

  4. fasiha.github.io
    1. Yoyogi

      Yoyogi is a tool that taps into your Mastodon account (in your browser, locally) and shows messages by author / thread, not as a timeline. If you'd sort that like Fraidycat, that would be a pretty interesting interface.

    1. We are now seeing such reading return to its former social base: a self-perpetuating minority that we shall call the reading class. — Griswold, McDonnell and Wright, “Reading and the Reading Class in the Twenty-First Century,” Annual Review of Sociology (2005) They see two options for readers in society: Gaining “power and prestige associated with an increasingly rare form of cultural capital” Becoming culturally irrelevant and backwards with “an increasingly arcane hobby”

      Reading is suggested to be potentially waning, maybe becoming more elite or even obsolete. This seems to disregard its counterpart: writing. For everything that can be read, writing has preceded it. Writing, other than direct transcription, is not just creating text; it is a practice that also creates effects/affordances for the writer. Also thinking of Rheingold's definition of literacy as a skill plus a community in which that skill is widely present. Writing/reading started out as bookkeeping, and I assume professional classes will remain text focused (although AR is an 'oral' path here too).

    2. It’s interesting to divide the internet into Word People and Image People because the Internet is a modern evolution of oral culture — and technological/bandwidth limitations have enabled text to serve as the leading means to transfer information online up till now, when more direct oral presentations (podcasts, video streaming, video) become a feasible way to distribute more of the pool of information.

      Tracy Durnell comments on a quote that divides internet users into 'word people' and 'image people' by positioning the entire internet as a modern form of oral culture. The only reason, in that perspective, for the abundance of text is early bandwidth and technology limitations. Nowadays presentations, streaming, videos and podcasts make distributing oral expressions much more directly feasible. When Durnell talks about oral culture, is that because of the style more than the format? Blogs, IRC chats, microblogging and messaging are more oral in tone, whereas 'serious' texts are still in document shape. Reminds me of annotation as conversation and as social interaction.

    1. The last thing Europe wants is its regulation that restricts future innovation, raising barriers to entry for new businesses and users alike. 

      Which is why DSA and DMA target larger entities beyond that start-up scale.

    2. There is no central authority or control that one could point to and hold responsible for content moderation practices; instead, moderation happens in an organic bottom-up manner

      This is I think an incorrect way of picturing it. Moderation isn't bottom-up, that again implies seeing the fediverse as a whole. Moderation is taking place in each 'shop' in a 'city center', every 'shop' has its own house rules. And that is the only level of granularity that counts, the system as a whole isn't a system entity. Like road systems, e-mail, postal systems, internet infra etc. aren't either.

    3. Since moderation in major social media platforms is conducted by a central authority, the DSA can effectively hold a single entity accountable through obligations. This becomes more complex in decentralized networks, where content moderation is predominantly community-driven.

      Does it become more complex in federation? Don't think so as it also means that the reach and impact of each of those small instances is by def limited. Most of the fediverse will never see most of the fediverse. Thus it likely flies under any ceiling that incurs new responsibilities.

    4. what will it mean if an instance ends up generating above EUR 10 million in annual turnover or hires more than 50 staff members? Under the DSA, if these thresholds are met the administrators of that instance would need to proceed to the implementation of additional requirements, including a complaint handling system, cooperation with trusted flaggers and out-of-court dispute bodies, enhanced transparency reporting and the adoption of child protection measures, as well as the banning of dark patterns. Failure to comply with these obligations may result in fines or the geo-blocking of the instance across the EU market. 

      50 people and >10M turnover for a single instance (mastodon.social runs on 50k in donations or so)? I don't see that happening, and if it did, how likely is it that it will be in the European market? Where would such turnover come from anyway? It isn't adverts, so it could only be member fees, as donations don't count. Currently it's hosters that make money, for keeping the infra humming.

    5. Today– given the non-profit model and limited, volunteer administration of most existing instances– all Mastodon servers would seem to be exempt from obligations for large online platforms

      Almost by definition federated instances don't qualify as large platform.

    6. However, based on the categorizations of the DSA, it is most probable that each instance could be seen as an independent ‘online platform’ on which a user hosts and publishes content that can reach a potentially unlimited number of users. Thus, each of these instances will need to comply with a set of minimum obligations for intermediary and hosting services, including having a single point of contact and legal representative, providing clear terms and conditions, publishing bi-annual transparency reports, having a notice and action mechanism and, communicating information about removals or restrictions to both notice and content providers.

      Mastodon instances, other than personal or closed ones, would fall within the DSA. Each instance is its own platform though. Because of that I don't think this holds up very well; are closed Discord servers platforms under the DSA too, then? Most of these instances are small, and many don't encourage new users (meaning the potential reach is very limited). For larger ones like mastodon.nl this probably does apply.

    1. Mastodon is just blogs

      "Mastodon is just blogs and Google Reader, skinned to look like Twitter." That is pretty accurate, microblogging and following does what feedreading does too. In this case commenting is put at the exact same level as the orginal blogpost, akin to how I can reply to posts with a post of my own (like old trackbacks, now webmention)

    1. In 2022 willen we nadrukkelijker in beeld krijgen welke veranderingen veel waarde opleveren voor de gebruikers en daarbij speciale aandacht besteden aan enkele sectoren. Daarnaast willen we volgende stappen zetten in de doorontwikkeling van inhoud, processen en voorzieningen. Hierbij gaat het bijvoorbeeld ook om het structureel in beheer nemen van voorzieningen die de afgelopen jaren zijn gerealiseerd, zodat deze breed gebruikt kunnen worden. Tot slot besteden we aandacht aan de organisatie van het programma, zodat voor iedereen duidelijk is wat waar gebeurt en hoe mensen uit het veld daarbij betrokken kunnen zijn. We blijven iedereen op de hoogte houden via nieuwsbrief, website, DiS-Online-bijeenkomsten en als het weer kan voorlichtingsbijeenkomsten in het land.

      In 2021 DiS Geo was well embedded in the GI Beraad. In 2022 it was about sector-oriented work and low-hanging fruit, and about organising change neatly. How does the EU data strategy fit into this in 2023: are there things you no longer need to do yourself, for example?

    1. The majority of scholarship on platform governance focuses on for-profit, corporate social media with highly centralized network structures. Instead, we show how non-centralized platform governance functions in the Mastodon social network. Through an analysis of survey data, Github and Discourse developer discussions, Mastodon Codes of Conduct, and participant observations, we argue Mastodon’s platform governance is an exemplar of the covenant, a key concept from federalist political theory. We contrast Mastodon’s covenantal federalism platform governance with the contractual form used by corporate social media. We also use covenantal federalist theory to explain how Mastodon’s users, administrators, and developers justify revoking or denying membership in the federation. In doing so, this study sheds new light on the innovations in platform governance that go beyond the corporate/alt-right platform dichotomy.

      Promises to be interesting wrt governance structures in moderation/adminning.

    1. It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform.

      Twitter trains users to behave a certain way. It rewards a specific type of performance. In contrast, until now at least, M is focused on conversation (and the functionality of the apps reinforces that, with how boosts and likes work differently).

    2. Loudly proclaiming that content warnings are censorship, that functionality that has been deliberately unimplemented due to community safety concerns are "missing" or "broken", and that volunteer-run servers maintaining control over who they allow and under what conditions are "exclusionary". No consideration is given to why the norms and affordances of Mastodon and the broader fediverse exist, and whether the actor they are designed to protect against might be you.

      Agreed.

    3. It is the very tools and settings that provide so much more agency to users that pundits claim make Mastodon "too complicated".

      Indeed.

    4. Nevertheless, the basic principles have mostly held up to now: the culture and technical systems were deliberately designed on principles of consent, agency, and community safety. Whilst there are definitely improvements that could be made to Mastodon in terms of moderation tools and more fine-grained control over posting, in general these are significantly superior to the Twitter experience. It's hardly surprising that the sorts of people who have been targets for harrassment by fascist trolls for most of their lives built in protections against unwanted attention when they created a new social media toolchain.

      Agreed, M gives account holders more agency. I see how agency and community safety are part of the technical design. What tech / functionality in M is aimed at consent? You can determine the audience for each message more granularly than elsewhere, but that to me is not an implementation of consent, more one of signalling intent.

    5. The people creating, publishing, and requesting public lists of Mastodon usernames for certain categories of person (journalists, academics in a particular field, climate activists...) didn't appear to have checked whether any of those people felt safe to be on a public list. They didn't appear to have considered that there are names for the sort of person who makes lists of people so others can monitor their communications. They're not nice names.

      Fair point. At the same time Mastodon has, I think, overly relied on 'security by obscurity' for safety, which is always a failing tactic in the face of a sudden influx of people. If you're in the public square you will be seen. If you need private conversation in groups, finding each other in the public square and taking the conversation elsewhere is more sound. Vgl. the e2e encrypted conversations I'm in, various Matrix servers etc. There's a plethora of tools out there. M never was a 'safe' tool in that regard, but it suggested it was because of the paucity of users.

    6. The academics excitedly considering how to replicate their Twitter research projects on a new corpus of "Mastodon" posts didn't seem to wonder whether we wanted to be studied by them.

      This I think is more a matter of the research boards at universities. Especially US universities have a very limited perspective on what constitutes 'human subject' research, for example. Vgl the Princeton GDPR/website study last year, or the CrisisTextLine data sharing that danah boyd thought she could defend.

    7. The people re-publishing my Mastodon posts on Twitter didn't think to ask whether I was ok with them doing that. The librarians wondering loudly about how this "new" social media environment could be systematically archived didn't ask anyone whether they want their fediverse posts to be captured and stored by government institutions.

      This I think is an unfounded expectation.

    8. I hadn't fully understood — really appreciated — how much corporate publishing systems steer people's behaviour until this week. Twitter encourages a very extractive attitude from everyone it touches.

      This stands out indeed.

    9. I was nervously watching the file storage creep up on the ausglam.space wondering if I'd make it to the end of the weekend before the hard drive filled up, and starting to draft new Rules and Terms of Use for the server to make explicit things that previously "everybody knew" implicitly because we previously could acculturate people one by one.

      Author runs a community server. Here he points to how it used to be possible to 'acculturate' new people one by one. That is the lurking/delurking/participating process.

    10. I finally realised on Monday that the word I was looking for was "traumatic". In October I would have interacted regularly with perhaps a dozen people a week on Mastodon, across about 4 or 5 different servers. Suddenly having hundreds of people asking (or not) to join those conversations without having acclimatised themselves to the social norms felt like a violation, an assault. I know I'm not the only one who felt like this.

      Recognisable. The author was accustomed to quiet conversation and suddenly many others joined those conversations without lurking for a while. To me it felt like many T-migrants brought with them the passive-aggressive tone, the streetwise don't-f-with-me attitude, that kept the trolls and baiting away over there. Classically, what one does when joining a new conversation, in a bar, online or wherever, is lurk to observe the setting and context of the conversation, then signal you want to join by injecting an insignificant contribution (to de-lurk), and when acknowledged you join more fully. That is not what has been happening. Various T-migrants came, it seems, with the expectation that they had replicated their existing conversations into a new room, where those already in the room were the new participants, and therefore the ones delurking. The T-migrants weren't butting in, they were continuing their conversation, in their minds, imo. This creates clashes between perspectives on weaker and stronger ties. Vgl [[Lurking Definition 20040204063311]] and [[Lurking Weak Strong Ties 20040204063311]]

    11. Early this week, I realised that some people had cross-posted my Mastodon post into Twitter. Someone else had posted a screenshot of it on Twitter. Nobody thought to ask if I wanted that.

      The author expects to be asked consent before their words are posted in another web venue, here crossposting to Twitter. I don't think that's a priori a reasonable expectation. The entire web is a public sphere, and expressions in it are public expressions. Commenting on them, extending on them, is annotation, and that's fair game imo. Problems arise from how that annotation is used/positioned. If it's part of the conversation with the author and others, that's fine, depending on tone (e.g. forcefully butting in), even if unwelcome. If it is quoting an author and commenting as performance to one's own audience, then the original author becomes an object, a prop in that performance. That is problematic. I can't judge (no links) here which of the two it is.

    12. Like when you're sitting in a quiet carriage softly chatting with a couple of friends and then an entire platform of football fans get on at Jolimont Station after their team lost. They don't usually catch trains and don't know the protocol. They assume everyone on the train was at the game or at least follows football. They crowd the doors and complain about the seat configuration.

      Compares the influx of new people on Mastodon to the sudden crowding of a train by a loud group. I can see what the author means. My timeline has felt like that.

    13. For those of us who have been using Mastodon for a while (I started my own Mastodon server 4 years ago), this week has been overwhelming

      The author has been running his own instance for as long as I have. Not sure whether it's a community server or a personal one; assuming a community one.

    14. Home invasion: Mastodon's Eternal September begins

      About the impact on Mastodon culture of the new influx of people, esp. now that the influx is a significant portion of overall users. Several instances have more than doubled in a week or so, hence the Eternal September reference. In the meantime #twittermigration seems to be levelling off in the last day or so.

    1. But Mastodon instances aren't even competing on that! They seem to all be running the same version of the same software, so aside from some banner images and icons, they are all exactly the same user interface. This is great if you are in the "Federated Feed Reader" camp, less so for the "we are all unique flowers" camp.

      This reads like being confused about what instances are. You don't even need to pick one, you can participate fine without an instance. If you do choose to be part of a group instance there are indeed things to consider, wrt orientation, group traits, culture. Which are as varied as we all are. It's not about the tool or competing on css and interface, it's about choosing a favourite watering hole to chat in. Where others can wander in but also get bounced. And you can frequent multiple watering holes depending on your whim (you may not want to talk work stuff with colleagues on the sports field where your kid is playing Saturday morning). Why are these odd comparisons made within the singular viewpoint of Mastodon as a tool? All comparisons must be made against human social interaction in general. Twitter is the odd one out there: everyone shouting their loudest, all in the exact same place, where anyone can butt into any conversation without the conversers' perspective playing a role. Doug Belshaw describes this dynamic much better: https://dougbelshaw.com/blog/2022/11/12/on-the-importance-of-fediverse-server-rules/

      It's not just 'everything open to all' or 'walled garden', not just the individual or the global population. It's all about the intermediate layers, where the fluidity of humans choosing their groups and places of interaction lives. That is where the complexity lives, and thus the value. Tech isn't neutral in it, and shouldn't be, as it's a human tool, and humans are part of that complexity. Vgl the Technoloog podcast where they were just as confused about the role and purpose of instances.

    2. It's gonna go great!

      It will be as messy as the internet itself, as the web itself. Which works. The aberration imo is centralised website silos on top of a fully federated internet and web. At least AP embraces the underlying structure of the internet, and the underlying structure of human networks. Federation brings the human and tech networks closer in resemblance, which brings more digital affordances, esp. social ones we already have offline.

    3. In the olden days, when someone picked yahoo.com as their email host instead of hotmail.com, it wasn't because they thought to themselves, "I have more friends who use Yahoo than Hotmail, so I definitely want it to be easier to communicate with them." It wasn't because, "The Hotmail brand really speaks to my identity." No, they picked one over the other because it seemed like one of them had a website that sucked less.

      'In the olden days', yes, that is a clear marker of when this remark made sense. Yahoo and Hotmail have no role in this comparison. Having your mail on your own domain is more apt. Or a group's domain (company, sports org, brand, whatever).

    4. Taking something like Mastodon, whose core concept is federation, and then not federating, or limiting federation, is kind of like buying an iPhone and not putting a SIM card in it. Like, yeah, there are use cases where that will work I guess, but if that's what you need there are simpler and more economical ways to get that.

      This is nonsense hyperbole. No one in the world uses their iPhone with a SIM in the expectation of phoning every other phone user in the world. The only expectation is that you can phone the people you want to phone in a given situation. I have a blocklist on my phone as well, and I control who can call me, when and where. Limiting federation is what everyone does in their offline life every single second, and when deciding on every single human interaction.

    5. I know a lot of people who want the Federated Feed Reader version. These are the people who were kinda-ok with Twitter but would prefer it to not be dismantled by a billionaire crybaby, and also fewer nazis if at all possible. The people I know who want the Private Walled Garden version are already using Discord for that. ("Discord: non-federated IRC with emoji-first design.")

      An example of dilemma-phrasing. The world isn't made of dilemmas, it's always a multilemma. It's not either the living room or the public square with the entire globe; there are many spaces in between. I'm mostly in the feed-reader camp in this author's dilemma, but I also definitely want to limit both what I encounter and where things can spread. Just not in an absolute sense, or with absolute control.

    1. How does Guppe work? Guppe groups look like regular users you can interact with using your existing account on any ActivityPub service, but they automatically share anything you send them with all of their followers. Follow a group on @a.gup.pe to join that group. Mention a group on @a.gup.pe to share a post with everyone in the group. New groups are created on demand, just search for or mention @YourGroupNameHere@a.gup.pe and it will show up. Visit a @a.gup.pe group profile to see the group history.

      a.gup.pe is a group mechanism on Mastodon. Works like my email set-up: using an address makes it exist. This means groups are open to all I suppose, so personal curation (blocking, muting accounts) is needed. Like following # in that sense, but then with active distribution, as the group account serves as a repeater. Interesting addition.

    1. Preserving web content never really left my mind ever since taking screenshots of old sites and putting them in my personal museum. The Internet Archive’s Wayback Machine is a wonderful tool that currently stores 748 billion webpage snapshots over time, including dozens of my own webdesign attempts, dating back to 2001. But that data is not in our hands. Should it? It should. Ruben says: archive it if you care about it: The only way to be sure you can read, listen to, or watch stuff you care about is to archive it. Read a tutorial about yt-dlp for videos. Download webcomics. Archive podcast episodes.

      Should people have their own webarchive? A long list of pros and cons comes to mind. For several purposes a third-party archive is key, for others having things locally is good enough. For other situations having an off-site location is of interest. Is this less a question of webarchiving and more a question of how wide the scope should be of one's own 3-2-1 back-up choices? I find myself more frequently thinking about the processes at e.g. the National Archive in The Hague, where a lot comes down to knowing what you will not keep.

    1. To understand what Activity Streams is, think of it as an abstract syntax to represent basically anything that can be an action on social media. The Activity Streams Vocabulary specification defines, amongst other things, three types of objects: Actors: Application, Group, Organization, Person, Service. Activity types: Accept, Add, Announce, Arrive, Block, Create, Delete, Dislike, Flag, Follow, Ignore, Invite, Join, Leave, Like, Listen, Move, Offer, Question, Read, Reject, Remove, TentativeAccept, TentativeReject, Travel, Undo, Update, View. Objects: Article, Audio, Document, Event, Image, Note, Page, Place, Profile, Relationship, Tombstone, Video. To build a valid Activity Streams activity, you pick one of each category and add some metadata to it. You describe that something did something to or with something, and you explain those things in more detail.

      A valid Activity Streams activity combines one from each category: an Actor, an Activity type, and an Object. Me Arrives at Place, Me Travels to Place, Me Announces Event, etc. It's all JSON.
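      As a minimal sketch (my own illustration, not from the quoted source; the names and the Python wrapper are placeholders), a 'Me Arrives at Place' activity picks one of each and adds the JSON-LD context that Activity Streams documents carry:

```python
import json

# One Actor (Person), one Activity type (Arrive), one Object (Place),
# plus the @context that marks this as an Activity Streams 2.0 document.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "summary": "Alex arrived at the office",
    "type": "Arrive",
    "actor": {"type": "Person", "name": "Alex"},
    "location": {"type": "Place", "name": "The office"},
}

print(json.dumps(activity, indent=2))
```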

    1. for the safety of the LGBTQ community here we refused to engage in mass server blocking and instead encouraged our users to block servers on an individual basis and provided access to block lists for them to do so

      This instance encourages its account holders to actively block for themselves, pushing agency into their hands, also by providing existing blocklists to make that easier. After all, it isn't pleasant to have to experience abuse first before you know whom to block.

    2. In fact we added a feature just for them called subscriptions which allowed them to monitor accounts without following them so they could do so anonymously.

      Providing lurking opportunities for security reasons. Very sensible. Example of actively providing tools that create agency for groups to protect themselves.

    3. specifically from the LGBTQ community, onto our server. It turns out many people relied on us not-blocking for their physical safety. There were big name biggots (like milo yanappolus) who were on the network. They used their accounts here to watch his account for doxing so they could warn themselves and their community and protect themselves accordingly

      Having the ability to see what known bigots get up to on social media is a security feature.

    4. allowed people read content from any server (but with strict hate speech rules)

      Blocking means your account holders don't see that part of the fediverse, you're taking away their overview. A decision you're making about them, without them. A block decision isn't only about the blocked server, it impacts your account holders too, and that needs to be part of the considerations.

    5. So there are some servers out there that demand every server int he network block every instance they do, and if a server doesnt block an instance they block then they block you in rettatliation.Their reason for this is quite flawed but it goes like this.. If we federate with a bad actor instance and we boost one of their posts then their users will see it and defeat the purpose of the block. The problem is, this isnt how it actually works. If they block a server and we boost it, they wont see the boost, thats how blocks work.

      There are M instances that block servers that don't block the same servers they do. That seems to defeat the entire concept of federating (and the rationale isn't correct).

    1. Towards a  federated metaverse

      Immers Space is an immersive web / metaverse initiative. It is federated, using ActivityPub. The AP implementation uses the Arrive/Leave/Travel activity types and the Place object type for virtual destinations. Vgl [[ActivityPub voor Check-ins 20221109095516]]

    1. For the type of services I offer and my target audience, Twitter is an unlikely place for me to connect with potential clients

      I've seen it mostly as a place for finding professional peers, like my blog did. But that is the 2006 perspective, pre-algo. I wrote about FB's toxicity and quit it, I removed the LinkedIn timeline. Twitter I did differently: following #'s on Tweetdeck and broadcasting my blogposts. I fight not to be drawn into discussions, unless they're responses to my posts. In the past 4 yrs I have had good conversations on Mastodon. No clients either though, not in my line of work. Some visibility to my existing professional network does very much play an active role though.

    2. Pretending Twitter is the answer to gaining respect for and engagement with my work is an addict’s excuse that removes responsibility from myself.

      Ouch. The metrics of engagement (likes, RTs) make it possible to 'rationalise' this perception of needing it for one's work or career, for example.

    1. Nowhere better to start getting one’s head around this distributed vision than with Mike Masnick’s epic explainer, Protocols, Not Platforms: A Technological Approach to Free Speech.

      Mike Masnick https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech Aug 2019. n:: "Build protocols, not platforms" (reminds me of the convos around Jabber/XMPP I listened in on in the early '00s). Also, IndieWeb / ActivityPub.

    2. Algorithmic choice. Algorithms dictate what we see and who we can reach. We must have control over our algorithms if we’re going to trust in our online spaces. The AT Protocol includes an open algorithms mode so users have more control over their experience.

      I don't have high expectations for Bluesky, mostly because of Jarvis' earlier ref to Mike Masnick's protocols above platforms/apps. But this is the one interesting bit (the others less so, in the sense that portability and interoperability are becoming mandatory soon in the biggest market, and performance is an existing customer-feedback loop pushing for it): being able to choose which algo to let loose on your dataset, or bring your own [[Individuele software agents 20200402151419]], is a novel area, a new [[Evolutionair vlak van mogelijkheden 20200826185412]] adding to [[Networked Agency 20160818213155]]

    3. Mind you, this is not a net without corporations and capitalism; they can use the protocols, too, and I’m glad Google gives us usable email and spam protection. But it need no longer be a net corrupted by the business model of mass media imported online: the attention economy. And it need no longer be a net under sole corporate control — and thus, potentially, the influence of malign actors, whether Musk or his pals Putin or Trump.If we gain this promising future, if we return to the net’s founding principles, keep one thing clearly in mind: It won’t be so easy to blame the bad shit on the corporations and nasty nerd boys anymore. The net will be ours along with the responsibility to build and enforce the expectations and standards we wish for. The net is us, or it can be at last.

      Jeff is clearly harking back to the golden era of blogging wrt which values it (should) promote(s). Is the 'attention economy' only the 'corrupted biz model of mass media imported online'? The algo-induced raging sure is. Otherwise not sure: [[3 kwantitatieve veranderingen 20100420210721]] was in full effect way before it, and all three contribute to [[Aandacht is het schaarst 20201013163120]]. Maybe it's the flipside that's key? n:: Being economic with attention, as the core of what an attention economy really is like. Towards the end Jarvis slips in the responsibilities stemming from the triplet of obligations [[Obligation to explain 20120327173752]] [[Wie deelt bestaat 20130131133926]] and [[Obligation to re-use 20191223194129]]

    1. federated mastodon is neat. that “ericajoy” can exist on any server is going to be a problem, especially around impersonation. a third party “verification” player will be necessary if mastodon gains broad traction.

      Poster implies that a benefit of globally centralised structures like Twitter, FB and LinkedIn is verification. I think impersonation is rife there, and will be less so on Mastodon. Apart from basic measures (rel-me verification against your website, using your own domain for an instance), there are, just as with T/FB/LinkedIn, ways to verify someone outside the platform itself, where people check it's you through a channel they already know is you. Above all, the potential benefit of impersonation does not exist on M: no immediate global audience, no amplification of messages through self-feeding loops of engagement. Your reach is mostly limited to your own follow(er)s, and they won't fall for an impersonation, as you're already there among them. The power asymmetry inherent in T/FB's algos doesn't exist on M. So impersonating would cost the impersonator way more, and become unsustainable to them.
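      As an aside, rel-me verification boils down to a symmetric link check: the profile lists your homepage, and your homepage has to link back to the profile with rel="me". A rough sketch of that check (my own simplification, not Mastodon's actual code; the URLs and function name are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class RelMeParser(HTMLParser):
    """Collects href values of <a>/<link> elements carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("a", "link") and "me" in (attrs.get("rel") or "").split():
            self.links.append(attrs.get("href"))

def homepage_verifies(homepage_url: str, profile_url: str) -> bool:
    # Fetch the homepage and check whether it links back to the profile.
    html = urlopen(homepage_url).read().decode("utf-8", errors="replace")
    parser = RelMeParser()
    parser.feed(html)
    return profile_url in parser.links

# e.g. homepage_verifies("https://example.org/", "https://mastodon.example/@alex")
```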

    1. Plausible, open source webstats, reached $1M annual recurring revenue. 4 people, no marketing, only word of mouth; started late 2018, launched the beta Jan 2019, so they've built that up in 4 years. Became sustainable at $11k monthly revenue, then 2 people. A 3rd was added at $29k monthly revenue, June '21. Useful overview.

    1. The myth of the good venture startup is dead

      Not sure why Werdmüller concludes this only now, in the wake of Musk taking over Twitter. The VC model has been clear for ages, esp. in the US view of shareholder value as the only litmus test. Value extraction, regardless of its consequences. Zebras are more realistic than unicorn chasing. https://medium.com/zebras-unite/zebrasfix-c467e55f9d96 The EU has been developing its own geopolitical proposition wrt digital/data, in contrast with the VC model, for the past years, with two legs: maximising the socio-economic benefit of digital and strengthening citizen and human rights and European values. Not to stifle invention as GAFAM would have it, but to ensure a different set of starting conditions and success criteria for innovation.

      Apart from that: any funding must be closely examined in terms of strings attached, and in terms of its consequences for the possible paths forward in the evolutionary field. (I've seen subsidy conditions that made it impossible to reach the point where you could do without the subsidy, in contradiction to the stated goal of the subsidy.)

  5. Oct 2022
    1. An assessment method for algorithms. In a session this was mentioned in combination with IAMA as methods for assessment.

    1. What if explanations resorting automatically to power, society, discourse had outlived their usefulness and deteriorated to the point of now feeding the most gullible sort of critique? Maybe I am taking conspiracy theories too seriously, but it worries me to detect, in those mad mixtures of knee‐jerk disbelief, punctilious demands for proofs, and free use of powerful explanation from the social neverland many of the weapons of social critique. Of course conspiracy theories are an absurd deformation of our own arguments, but, like weapons smuggled through a fuzzy border to the wrong party, these are our weapons nonetheless. In spite of all the deformations, it is easy to recognize, still burnt in the steel, our trademark: Made in Criticalland.

      Are earlier tools of critiquing obsolete, and now misused by conspiracy fantasists? Criticism as instrument vs criticism as rejection/avoiding change? The first is part of a theory of change, so what's the other, a theory of stasis? Sounds too neutral, it's more destructive than that. Not moving is also a move, in the face of urgencies, usually in the wrong direction. Note this paper is from 2004! Since the early pandemic this is ever more pertinent in our everyday lives.

    2. What’s the real difference between conspiracists and a popularized, that is a teachable version of social critique inspired by a too quick reading of, let’s say, a sociologist as eminent as Pierre Bourdieu (to be polite I will stick with the French field commanders)? In both cases, you have to learn to become suspicious of everything people say because of course we all know that they live in the thralls of a complete illusio of their real motives. Then, after disbelief has struck and an explanation is requested for what is really going on, in both cases again it is the same appeal to powerful agents hidden in the dark acting always consistently, continuously, relentlessly. Of course, we in the academy like to use more elevated causes—society, discourse, knowledge‐slash‐power, fields of forces, empires, capitalism—while conspiracists like to portray a miserable bunch of greedy people with dark intents, but I find something troublingly similar in the structure of the explanation, in the first movement of disbelief and, then, in the wheeling of causal explanations coming out of the deep dark below.

      How to make clear the difference between what the doubting academics do and what the conspiracy fantasists do?

    3. My argument is that a certain form of critical spirit has sent us down the wrong path, encouraging us to fight the wrong enemies and, worst of all, to be considered as friends by the wrong sort of allies because of a little mistake in the definition of its main target. The question was never to get away from facts but closer to them, not fighting empiricism but, on the contrary, renewing empiricism.

      Critique / scepticism is meant to get closer to establishing facts, not to get away from them. Not meant against empiricism (yeah, that's just, like, your opinion dude) but to strengthen empirical factfinding. Establishing how the presented facts came about, and seeing if the method of establishing them can be improved. Vgl [[Data geeft klein deel werkelijkheid slecht weer 20201219122618]]

    1. However there are follow (and boost and like) notifications there if you want them, which contains the seeds of the twitter engagement spiral.

      I don't think they run the risk of spiraling. Favs are not shared back to the fav'er's audience; they are only visible as an action to the OP, and in aggregate under the original message. So a fav doesn't serve as a signal to the fav'er's own audience. Boosts don't allow remarks, just straight boosts (no 'quote-tweeting'), limiting them to sharing only the original message, and only to the booster's audience. Otherwise there are only replies, which are always addressed to the person replied to, favouring interaction. Most of all: there is no algo watching what gains traction and pushing it higher up in all timelines; the timeline is strictly chronological. Meaning most of the time I do not see what the people I follow boost or fav. Only in the moments I dip my toe in the river of messages do I see things pass by.

    2. As with Twitter, and indeed the web in general, we all see a different subset of  the conversation. We each have our own public that we see and address. These publics are semi-overlapping - they are connected, but adjacent. This is not Habermas’s public sphere, but de Certeau's distinction of place and space. The place is the structure provided, the space the life given it by the paths we take through it and our interactions.

      de Certeau, theologian turned cultural philosopher because of 1968. The Dutch Wikipedia speaks of a European and an American Certeau, the first the theologian, the second the theorist of alternative discourse. https://nl.wikipedia.org/wiki/Michel_de_Certeau vs https://en.wikipedia.org/wiki/Michel_de_Certeau Much of his work seems to have been translated into English only after his death, mostly in the 90s/00s, in time for comparison with internet tools. The key work in English seems to be from 1984, https://en.wikipedia.org/wiki/The_Practice_of_Everyday_Life. Is that the source of the place/space distinction?

    3. standards are documentation, not legislation. We have been working in the w3c Social Web Working Group to clarify and document newer, simpler protocols, but rough consensus and running code does define the worlds we see

      Marks says standards should be descriptive (doc) of actual things used/happening, not prescriptive (legislation). Usable distinction. Interesting to me is that the EU data strategy does both, legislation that has a mechanism to point to documentation and declare it a rule or instigate the creation of such documentation.

    4. It may be that the more concrete boundaries that having multiple instances provide can dampen down the cascades caused by the small world network effect. It is an interesting model to coexist between the silos with global scope and the personal domains beloved by the indieweb. In indieweb we have been saying ‘build things that you want for yourself’, but building things that you want for your friends or organisation is a useful step between generations.

      I'd say not just interesting, but crucial. Where T and FB operate at the generic, statistical level (despite FB pages as subgroups), and IndieWeb at the personal (my site, my self-built tool), M works at group level or just above (bigger instances). That middle ground between the singular and the statistical is where complexity resides and where it needs to be addressed and embraced. The network metaphor favors that intermediate level.

    1. The Synthetic Party, a Danish political party with an AI-generated programme drawn from all Danish fringe party programmes since the 70s. Aimed at the 20% of Danes who don't vote. 'Leder Lars', a chatbot residing on a Discord server where you can interact with it, leads the party. An art project.

    1. I often say that my PKM approach is technology-neutral. I do not promote one tool about another. I share my top tools but do not ask others to use them. But it seems I do have a chosen technology — the blog.

      Practice informs tool choice, tools do influence practice in return, and can become 'favourites' temporarily as exploration, but also long term. Here I'd say Harold's blogging is a practice more than a technology.

    2. My understanding of PKM began in 2004 with Lilia Efimova’s blogging of her journey through her doctoral dissertation on personal knowledge management entitled  — Personal productivity in a knowledge intensive environment: A weblog case. So I came to PKM through my blog, Lilia’s blog, and the blogs of the researchers she was observing

      I first encountered PKM in the form of Mick Cope's book Know Your Value, Value What You Know, which did little in terms of system for me, but helped cement setting the individual as key element in KM. That was the summer of 2000. Before that I had read Sveiby's New Organisation Wealth (first published 1997, read it in 2000), which wasn't about PKM, but still put the individual knowledge worker very much at the center of how KM evolves and why it is needed. For me PKM and KM were largely the same thing, with KM the aggregate over an organisation of the PKM of its people. In 2001 I joined KnowledgeBoard where I met Lilia, David Gurteen, Johnnie Moore, and many others active in this field. They reinforced the P in KM for me. It led me to blogging in nov 2002, after publishing an essay on KBoard about trust's role in KM, which very much centered on the P and relationships. In nov 2004 I co-org'd a PKM workshop at KM Europe, together with Lilia and Piers.

    3. The bullshit is believing in a technology silver bullet. We constantly see that BS sells.

      This is the underpinning of the current hypelet. Plus, having forgotten what went before (centuries ago, or as little as two decades ago) obscures how to tap into existing practices, which reinforces the shiny-new-tool effect.

    4. I concluded that PKM is bullshit only when it is technology-centric, and not a set of processes, individually constructed, to help each of us make sense of our world and work more effectively.

      PKM is defined by the P, not by the tools for M. This means an individual system aimed at what is of import to its user. In that system processes and methods come first, then tools. Although all tools in return influence the system. It's an artisanal perspective on tools: to be informed and shaped by the artisan's intent and experience, plus the experience gained of using the tool.

    1. Using Niklas Luhmann’s rough average of six zettels per day working full time for 8 hours a day

      For ZKII this was true. I think this, the full working day, is often overlooked when people talk about L's ZK: it was the core of his working practice, his job was to do research, and this was his tool of choice for it. Whereas for many in the current hypelet it is a tool next to most of their activities.

      For ZKI he had about the same average, but it seems with a less systematic reading approach and a more generic purpose. Still working towards his academic career, coming from another career path.

    2. It bears mention that Vannevar’s influential essay “As We May Think” in the July 1945 issue of The Atlantic is entirely underpinned by the commonplace book and zettelkasten traditions pervading Western thought and culture. Rather than acknowledge this tradition tacitly, he creates the neologism “Memex” which stands in for a networked and connected zettelkasten

      This is an interesting observation. Also because Memex went on to inspire e.g. Doug Engelbart. Was Engelbart aware of the history when he demo'd outlining and notes? Was Nelson when he thought up stretchtext in 67?

    3. Additionally Colleen Kennedy has an excellent 12 page primer she developed for classroom use on how to actively implement and create one’s own commonplace book which takes into account some of the historical practices seen in the literature.
    4. One can’t help but notice the proliferation of specific method names for slightly different practices within the now growing space

      yes, it's a drag.

    5. branded method

      This may well be true for Bush too. Why say commonplace and linked notes when you can claim Memex?

    6. author Steven B. Johnson who wrote frequently about his experiences with note taking, commonplaces, and DevonThink in the early 2000s in The New York Times as well as his blog.

      I did not realise Johnson made note cards (in DevonThink), but I have read his book Emergence, which he probably wrote that way: https://www.zylstra.org/blog/2004/05/the_emergence_o/ I do associate card-based interlinked notes with emergence, Vgl [[Emergente structuur ontdekken is kennisontwikkeling 20200922082048]] 'spotting emergent structure is newly developed knowledge'

    7. TiddlyWiki, first released on September 20, 2004, is a card-based user interface software built by Jeremy Ruston

      I played with this at the time in 2004 https://www.zylstra.org/blog/2004/10/tiddlywiki/

    8. Hypertext Gardens: Delightful Vistas (1998)

      My zettelkasten section of notes is called The Garden of the forking paths, from a 1941 short story by Argentinian author Jorge Luis Borges titled El jardin de senderos que se bifurcan. In 1992 it was worked into Victory Garden, an early hypertext novel, published by Eastgate. Eastgate is Mark Bernstein's company. https://en.wikipedia.org/wiki/Victory_Garden_(novel)

    9. writer, scientist, and engineer Mark Bernstein who created Tinderbox in 2002 as a note taking tool, outliner, and publishing software

      Good to see Mark Bernstein mentioned here. He's definitely strongly aware of the history and legacy he is building on with his software. I met him and came to know Tinderbox in 2004. I have been using Tinderbox since early 2008 when I went independent and started using Mac.

    10. Eco, Umberto. How to Write a Thesis.

      I have the 2015 MIT Press version, see Zotero.

    11. Heyde, Johannes Erich. Technik des wissenschaftlichen Arbeitens: zeitgemässe Mittel und Verfahrungsweisen. Junker und Dünnhaupt, 1931.

      I have a 1969/70 edition

    12. the commonplace tradition

      One of the most fascinating things I find in historical exhibitions or overviews of an artist's work are the surviving note(book)s. Across the centuries it is clear that so much of the work of making sense, of developing practices, of striving for results, consists of making notes. Even if not for re-use, then as a way of being present.

    13. commonplace book kept using index cards

      This is akin to how I kept notes for most of my life. With notable exceptions when I used The Brain and later a local wiki, which made interlinking easy. Before that, it was loose handwritten notes (since I was 10), often bundled in A5 blocks, but still one note per page, or loose txt files on an XT. After that it was Evernote, until early 2020 when I returned to loose notes digitally.

    14. compounded by the lack of appropriate history and context,

      Everything has a lineage, and the one for PKM is centuries deep.

    15. There’s a specific set of objects (cards and boxes or their digital equivalents), but there’s also a spectrum of methods or practices which can be split into two broad categories.

      there's tools and there's practices.

    16. around 2018 during the COVID-19 pandemic

      around 2018 AND later during ... Covid started early 2020, so something is missing here. Was Roam launched 2018? Obsidian is from early 2020 indeed.

    17. The Two Definitions of Zettelkasten

      Great to read this essay, after following the annotations Chris made in h. that fed into his notes that led to this essay. Fun to recognise bits and pieces from his h. feed in recent months.

    1. A reasonable shortcut might be a simple editor app that provides the create modal for all apps, with different template presets (notes, posts, tasks, etc), that then is designed to sync what you’ve written with other applications, whether directly or in conjunction with Alfred. It could also take command line input.

      I see people use Drafts like this for anything text-based. Also to quickly jot things down on mobile and then later have it processed (by Hazel or Alfred) to be placed in the right context and application.
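      A minimal sketch of that capture-then-route pattern (the inbox path and the later processing step are assumptions, not anyone's actual setup): a tiny command-line script appends timestamped lines to one inbox file, which Hazel, Alfred or a cron job can later move into the right context and application.

```python
import sys
from datetime import datetime
from pathlib import Path

# Assumption: a single plain-text inbox file that a later step routes elsewhere.
INBOX = Path.home() / "notes" / "inbox.md"

def capture(text: str) -> None:
    """Append one timestamped line to the inbox for later processing."""
    INBOX.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with INBOX.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {text}\n")

if __name__ == "__main__":
    # Take text from the command line, or fall back to stdin.
    capture(" ".join(sys.argv[1:]).strip() or sys.stdin.read().strip())
```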

    2. What I really want is a user-centered desktop. If I want to save a note, I enter a key combination and a window appears for just as long as I need to save it, superimposed on whatever else I’m doing; then it disappears.

      This comes near the previous annotation about programming portals closing the divide between the terminal and the use of graphical interfaces across applications.

    3. An intent-centered desktop

      the title immediately grabbed me.

    4. The end of Twitter

      Ben Werdmüller sees the Musk take-over as one of several signs that Twitter as we know it is sunsetting. Like FB it is losing its role as the all-in-one communal 'space'. I think the decline is real, but I also think it will be a long drawn-out decline. Early adopters and the early mainstream may well jump ship, if they haven't already some time ago. The rest, including companies, will hang around much longer, if only for the sunk costs (social and capital). An alternative (hopefully a multitude, as Ben suggests) needs to clearly present itself, but hasn't in a way the mainstream recognises, I think. It may well hurt to hold on for many, but if there's no other thing to latch onto people will endure the pain. Boiling frog and all that.

    1. This kind of accessible end-user programming on the web feels like something we've been dancing around for a decade. I really want to someone build pre-Notion. And it's unlikely to be Notion.

      In light of HyperCard, more like 'the industry has been avoiding this on purpose'.

    2. 1. HyperCard HyperCard is the grand OG example of programming portals. Developed by Bill Atkinson at Apple in 1987, its interface married all the accessibility of simple, graphical user interfaces with the power of writing programmatic logic. Its core concepts were the card and the stack.

      murmuratur http://www.loper-os.org/?p=568 #2011/11/29 That's exactly why it went away: breaking the divide between coder and user, with everyone being a coder, a shaper of their computer as a tool, conflicts with the biz model. Fitting that it is listed here as letting user and programmer be the same person.

    1. The information manager was surprised by this, saying something like “and I have these BI specialists who never came up with this kind of use for the data”.

      Internal re-use along the lines of [[Data wat de overheid doet 20141013110101]] means questions being asked of the data that BI teams don't think of (perhaps because of the common disconnect between BI teams and operational/policy teams?). This is a repeating pattern of what can be observed externally with open data as well (Vgl the CBS open data community in the 2010s).

    2. companies are their own objects of sociality as well as their own user group

      Companies are their own objects of sociality (the work, processes, habits etc.); companies are their own self-formed user group. I am placing companies here on the spectrum of communities of interest/learning/practice. #2007/10/26

    3. They were adding social structures and context to the data. Basically adding social software design principles to a large volume of data.

      After letting professionals in a company have access to their internal BI data, they made it re-usable for themselves by:
      - adding social structures (iirc indicating past and present people, depts etc., curating it for specific colleagues, forming subgroups around parts of the data)
      - adding context (iirc linking it to ongoing work and external developments, adding info on data origin)
      Thus they started socially filtering the data, with the employee network as the social network [[Social netwerk als filter 20060930194648]].

    4. they had given a number of their professionals access to their business intelligence data. Because they were gathering so much data nobody really looked at for lack of good questions to ask of the dataset. The professionals put the data to good use, because they could formulate the right questions.

      Most data in a company is collected for a single purpose (indicators, reporting, marketing). Companies usually don't look at how that data about themselves might be re-used by themselves. Vgl [[Data wat de overheid doet 20141013110101]] where I described this same effect for the public sector (based on work for the Court of Audit), without tying it back to this here. n:: re-use of company-internal data

    5. Companies are excellent environments for social filtering. Because they sit on large volumes of data and information, going largely unused. Because organisations are a group of people with shared goals and tasks.

      This never happened in this way. Another example of how #socsoft became marketing almost exclusively. With the exception perhaps of async tools like Slack (2013) or Yammer (2008, still exists as part of MS), although filtering is not their point, their users may use them that way. The whole #socsoft for org-internal k-work never got much traction. Still a lost opportunity imo. Tools probably need to better fit the existing culture/communication styles in an org and be internal, but instead were created as separate external places with their own assumptions.

    6. Social software works well given these conditions because these tools are the internet’s response to the enormous volume of information the internet helped create. Social software is the answer to the internet by the internet. The quantitative change in information availability (going from scarcity to abundance) leads to qualitative changes in our information strategies. Social filtering is one of those changed information strategies. Social software caters to social filtering.

      I wrote this in 2007, just as FB and Twitter took off, so I was thinking not of them but of other social tools (social software rather than media). Is one way #socmed turned toxic that they started filtering for us?

    1. But now let’s layer in some costs.

      This one is just silly. None of those costs are seriously influenced by the way you make your income. The costs of living are always there. The comparison between self-employed and salaried work as an artist is odd too: which artist is salaried as such? Probably meaning journalists here. Taking Kelly literally on his $100k/yr is even odder; that is high-end earning literally everywhere in the world (top 10% in the USA and NL, e.g.). Yes, living in a city is usually more expensive. Kelly's true fans message isn't promising that costs disappear.

    2. Thousand True Fans encourages us to embrace the individual opportunities and whistles past the broader social trends.

      I think this is the key thing. The US-ian idea that there are only 'do it on your own' alternatives in the face of failing larger structures (apart from the 'so let's tear those structures down with gusto' that others conclude from it) breaks down because it only works for people doing so as a minority, within the context of those structures still existing, failing or not. Otherwise there's no comparison to be made. How do I fare on my own, compared to those who still work in 'the industry'?

    3. The creator economy is not good, and it's getting worse.

      This should have been the title! And then perhaps use Kelly's essay to illustrate that people used that nudge to try and avoid the bad state of the creator economy, but that it doesn't solve the underlying problems.

      This feels like 'Kelly's 1000 fans has proven to be a nicely sticky message so let's make it lead' so that people may read about the bad state of the creator economy in general, because that isn't sticky. Piggybacking on Kelly to make a different point altogether.

    4. the essay is meant as a supportive nudge towards attainable dreams

      And that's all it is and ever was, while pointing out there are now other means at hand than pre-web to do that, and to measure your progress. So why set it up as 'theory' in the title?

    5. What bothers me most about the Thousand True Fans concept today is how I see it being deployed by the Web3 crowd

      Being bothered by how people use it is not the same as being bothered by the concept, is it? "Kitchen knives are great for cooking, you can now mix ingredients by chopping things up, it revolutionises the concept of meals and dishes, allowing a variety of taste combinations. We can all be chefs!" "What bothers me most about the 'Kitchen Knife' concept is how nowadays there are bad people stabbing others with it."

    6. As the network gets bigger, the platforms develop algorithms to help people discover what they are looking for/what they want but might not be looking for yet. It results in a power law/rich-get-richer phenomenon, driving attention and audiences toward the biggest successes and away from the niches.

      Indeed. See 'do without intermediaries', and don't make yourself as a consumer merely a passive element in the whole recommendation circus.

    7. A fundamental virtue of a peer-to-peer network (like the web) is that the most obscure node is only one click away from the most popular node. In other words the most obscure under-selling book, song, or idea, is only one click away from the best selling book, song or idea.” This is only true when the peer-to-peer networks are small, though

      That doesn't follow. I don't read Kelly's 'one click' as meaning that the most obscure thing is directly adjacent to the most famous thing as seen from their end. The one-click point is that everything is just one hyperlink away, from the 'consumer's' end. Not everyone in the world is my neighbour, but all people online are indeed one click away (although I may not be aware of that most of the time). For me this points to the importance of self-intermediation, i.e. the weblog-style curation, that 'word-of-hyperlink' propagation of finds. There are now many ways around the intermediaries, even as those evolve or new ones claim their role, and even if they remain dominant.

    8. Theory

      Theory? Indeed, one way of setting things up so you can be seen to take it down.

    9. The culture industry still bends toward the big hit-makers.

      Kelly: do without intermediators. Karpf: intermediators have found ways to keep intermediating. How does that negate the premise as such?

      (The clue is in the word industry, if one's looking for what's wrong with it)

    10. What we’ve mostly experienced with the Internet of the past fifteen years is that the platforms algorithmically funnel everyone’s attention to the same thing

      Yes, if you let them. The flipside of artists, or anyone, looking for true fans is that it requires a certain level of pro-activeness on the side of the audience too, looking for their niche 'true stars'.

    11. Not everyone has a hundred bucks per year to spend on each of their hobbies.

      True. You can't afford to be a True Fan in Kelly's literal wording most of the time, even in affluent societies. If I was a true fan of something in my teens (I wasn't) I wasn't spending money on it. For all the concern W displays for artists trying to make an income off Kelly's rule of thumb, this turn of phrase, turning that creative output into their fans' 'hobby', is probably a put-down for any artist reading this. Thanks, mate.

      It isn't only about 'hobbies' either. The stacking of subscriptions is also problematic. It's the culture intermediators from above doing the exact same thing, thus eroding potential revenue in any actually existing niches (Netflix, Amazon Prime, Spotify: the point is most people want the fat head of the long tail to be available to them, in addition to the niches they're a fan of; the spending likely still starts at the fat head, esp. if it follows the same pattern as niche spending, small amounts regularly), and everybody else doing it too (why does every single piece of software turn into a yearly subscription, without realising that all the umpteen tools on my laptop trying to do the same make that impossible?).

      Yet I've never taken Kelly literally, not about the 1000 people, and not about the $100: you can switch those to any numbers, relevant to any location on earth and any lifestyle, and still be invited to think realistically about the actual reach you need to make a living. In all cases you don't need to be a superstar to make it, nor a global market leader. It always used to be that you could be 'world famous' in your part of the woods; now your part of the woods can be more distributed and does not depend on locality per se.

    1. Of course this would not result in immediate translation of texts into all languages. It would however result in ideas being transmitted throughout the whole system. When enough interest is generated within a certain circle (tipping point like) this subset of people will arrange for translation, on the basis of perceived needs. Once translated a document has become a more ‘spreadable meme’ and will travel through the system once more.

      More recently I have also come to see translation as creative work, not just as placing something in a different conversation, but as a 'language game' a la Wittgenstein: it has different effects when one translates something. When my notes are mixed-language, or when I translate one for a blogpost, that act of translation becomes part of the work of making sense of the notion in the note; it regularly offers new avenues of thought, due to slight differences in meaning between words in languages, or because the etymology of a word in another language offers a new line of thought.

    2. These connectors would be able to mesh the different language-networks

      This is my current practice: reading in multiple languages, seeing myself as a participant in different conversations, where I sometimes carry those conversations to another place (in another language). If I blog in another language, it's because of the conversational context I am placing that posting in; it's never an attempt to offer the content here in translation.

      Meshing is a useful term

    3. I’d go for a decentralized way of looking at it. So it’s up to individuals to create a solution. To connect networks you need connectors, networkstraddlers. Through them knowledge and information can flow between two otherwise seperated networks.

      This is still how I approach multilingual settings. Building a chain (my go-to example is from an early-90s students' meet-up in Hungary, where a Russian spoke Russian to a Bulgarian, who spoke Spanish to a Spanish student, who spoke English and put the Russian's words in front of the rest of us). This is imprecise, but human and forgiving.

    4. I try to point out that centralized solutions to the language divide in my view won’t work. Not adopting one language, and not going for the huge amount of work of having one centralized hub doing all the translation

      I think this is still valid. Enforcing a single language is too rigid, although for a temporary context (say, a single meeting) it may well be the best working solution, and centralised translation is only worth the enormous effort if there's a strong need for reliably translated material (such as the EU laws).

    5. Mechanical translation might offer a solution in the future, but not at the moment.

      We're 19 yrs on now from when I wrote this. Things are better but far from perfect. I'm getting good use out of DeepL, but it still helps to know the other language to understand what is meant.

    1. 9/8j In the Zettelkasten there is a note that contains the argument which refutes the claims on all the other notes. But this note disappears as soon as one opens the Zettelkasten. That is, it takes on a different number, disguises itself, and then cannot be found. A joker.

      Ha! An elusive joker that refutes the concepts in the other notes. Sounds like L could get frustrated in his 'communication with his ZK' when searching for a note. I've definitely had that: being unable to find a thing in my 1,600 or so ZK-type notes that I know is in there somewhere, in some form. Let alone in the 67k notes of his ZKII.

    1. A Zettelkasten with the complicated digestive system of a ruminant. All arbitrary ideas, all chance finds from reading, can be brought in. The internal connectivity (Anschlussfähigkeit) then decides.

      Another metaphor: the ZK has the complicated digestive system of a ruminant. This says something about the work involved, but also about how there's a temporal dimension at play. Things can come back to be used long after being put in. The internal connectivity determines the process.

    1. Behind the Zettelkasten technique stands this experience: without writing, one cannot think – at least not in demanding (anspruchsvollen) contexts (Zusammenhängen) that anticipate selective access to memory. That also means: without notching in differences (einzukerben),

      L sees the ZK as an extension of his personal experience of better thinking through writing, although that seems to mostly cover the input side of the ZK. The mention of anspruchsvollen Zugriff (ambitious/demanding ways of accessing the notes) in constellation (Zusammenhängen) is very much usage/output related. The mention of clearly establishing differences (einzukerben, which is a raw, almost violent choice of words) is notable; it also goes back to the stated purpose of ZKII: to find the imperfections and inadequacies in the concepts studied.

    1. Hence the Zettelkasten becomes productive insofar as it exposes what has been noted to backgrounds that were not noted along with it, and thereby lets information emerge that was not stored as such.

      This sounds a bit like [[Gestalten and Constellations above Crumbs 20200426111123]] wrt exposing context without noting all that context down, but keeping it accessible for when one is going through what was noted.

    1. Feedback effects on reading: one reads differently when paying attention to the possibilities of turning reading into notes (Verzettelung) – not: excerpts!

      L says there's a feedback loop to reading when you make notes. Atomic notes change reading and are not excerpts. Fragmentation and non-linearity, also a form of interdependency interruption?