1,024 Matching Annotations
  1. May 2023
    1. The linked Mastodon thread gives a great example of using Obsidian (but could easily have been Tinderbox of any similar tool) for a journalism project. I can see me do this for some parts of my work too. To verify, see patterns, find omissions etc. Basically this is what Tinderbox is for, while writing keep track of characters, timelines, events etc.

    1. This simple approach to avoiding bad decisions is an example of second-level thinking. Instead of going for the most immediate, obvious, comfortable decision, using your future regrets as a tool for thought is a way to ensure you consider the potential negative outcomes.

      Avoiding bad decisions isn't the same as making a constructive decision though. This here is more akin to postponed gratification.

    2. This visualisation technique can be used for small and big decisions alike. Thinking of eating that extra piece of cake? Walk yourself through the likely thoughts of your future self. Want to spend a large sum of money on a piece of tech you’re not sure yet how you will use? Think about how your future self will feel about the decision

      Note that these are examples that imply that using regret of future self in decision making is mostly for deciding against a certain action (eat cake, buy new toy).

    3. Instead of letting your present self make the decision on their own, ignoring the experience of your future self who will need to deal with the consequences later, turn the one-way decision process into a conversation between your present and future self.

      As part of decision making involve a 'future self' so that different perspective(s) can get taken into account in a personal decision on an action.

    4. Bring your future self in the decision-making process

      Vgl Vinay Gupta's [[Verantwoording aan de kinderen 20200616102016]] as a way of including future selves, by tying consequence evalution to the human rights of children.

    5. In-the-moment decisions have a compound effect: while each of them doesn’t feel like a big deal, they add up overtime.

      Compounding plays a role in any current decision. Vgl [[Compound interest van implementatie en adoptie 20210216134309]] [[Compound interest of habits 20200916065059]]

    6. temporal discounting. The further in the future the consequences, the least we pay attention to them

      Temporal discounting: future consequences are taken into account as an inverse of time. It's based on urgency as a survival trait.

    1. Agent-regret seems a useful term to explore. Also in less morally extreme settings than the accidental killing in this piece.

    1. New to me form of censorship evasion: easter egg room in a mainstream online game that itself is not censored. Finnish news paper Helsingin Sanomat has been putting their reporting on the Russian war on Ukraine inside a level of online FPS game Counter Strike, translated into Russian. This as a way to circumvent Russian censorship that blocks Finnish media. It saw 2k downloads from unknown geographic origins, so the effect might be very limited.

    1. After 29 billion USD in 2 yrs, Metaverse is still where it was and where Second Life already was in 2003 (Linden Labs and their product Second Life still exist and have been profitable since their start.) I warned a client about jumping into this stuff that Meta while the talk and the walk were not a single thing beyond capabilities that have existed for two decades. https://www.zylstra.org/blog/2022/02/was-second-life-ahead-or-metaverse-nothing-really-new/ en https://www.zylstra.org/blog/2021/11/metaverse-reprise/ Good thing they didn't change their name to anything related .....

    1. Where are the thinkers who always have “a living community before their eyes”?

      I suspect within the living community in question. The scientific model of being an outside observer falls flat in a complex environment, as any self-styled observer is part of it, and can only succeed by realising that. Brings me to action research too. If they're hard to find from outside such a living community that's probably because they don't partake in the academic status games that run separate from those living communities. How would you recognise one if you aren't at least yourself a boundary spanner to the living community they are part of?

    2. For intellectuals of this sort, even when they were writing learned tomes in the solitude of their studies, there was always a living community before their eyes

      This quote is about early Christian bishops from The Spirit of Early Christian Thought by Robert Wilken. Not otherwise of interest to me, except this quote that Ayjay lifts from it. 'Always a living community before their eyes' is I realise my take on pragmatism. Goes back to [[Heinz Wittenbrink]] when he wrote about my 'method' in the context of #stm18 https://www.zylstra.org/blog/2018/09/heinz-on-stm18/

    1. Another downside to using Gutenberg’s sidebar panels is that, as long as I want to keep supporting the classic editor, I’ve basically got to maintain two copies of the same code, one in PHP and another in JavaScript.

      Note to self: getting into WP Gutenberg is a shift deeper into JS and less PHP. My usually entry into creating something for myself is to base it on *AMP (MAMP now) so I can re-use what I have in PHP and MySQL as a homecook.

    1. The amount of EVs in Norway is impacting air quality ('we have solved the NOx issue' it says) in Oslo. Mentions electrified building machinery also reducing noise and NOx on building sites. This has been a long time coming: in [[Ljubljana 2013]] there was this Norwegian guy who told me EVs had started leading new car sales. via Bryan Alexander.

      https://web.archive.org/web/20230509045023/https://www.nytimes.com/2023/05/08/business/energy-environment/norway-electric-vehicles.html

    1. https://web.archive.org/web/20230507143729/https://ec.europa.eu/commission/presscorner/detail/en/ip_23_2413

      The EC has designated the first batch of VLOP and VLOSE under the DSA

      consultation on data access to researchers is opened until 25 May. t:: need to better read Article 41? wrt this access. Lots of conspiracytalk around it re censorship, what does the law say?

    1. European digital infrastructure consortia are as of #2022/12/14 a new legal entity. Decision (EU) 2022/2481 of 14 December 2022 establishing the Digital Decade Policy Programme 2030

      Requirement is that Member States may implement a multi-country project by means of an EDIC. The EC will than create them as legal entity by the act of an EC decision on the consortium funding. There is a public register for them.

      No mention of UBO (although if members are publshed, those members will have UBO registered).

    1. Amazon has a new set of services that include an LLM called Titan and corresponsing cloud/compute services, to roll your own chatbots etc.

    1. Databricks is a US company that released Dolly 2.0 an open source LLM.

      (I see little mention of stuff like BLOOM, is that because it currently isn't usable, US-centrism or something else?)

    1. What Obs Canvas provides is a whiteboard where you can add notes, embed anything, create new notes, and export of the result.

      Six example categories of using Canvas in Obsidian. - Dashboard - Create flow charts - Mindmaps - Mapping out ideas as Graph View replacement - Writing, structure an article ([[Ik noem mijn MOCs Olifantenpaadjes 20210313094501]]) - Brainstorming (also a Graph View replacement)

      I have used [[Tinderbox]] as canvas / outliner (as it allows view-switch between them) for dashboards mostly, as well as for braindumping and then mapping it for ideas and patterns.

      Canvas w Excalibur may help escape the linearity of a note writing window (atomic notes are fine as linear texts)

    1. I have decided that the most efficient way to develop a note taking system isn’t to start at the beginning, but to start at the end. What this means, is simply to think about what the notes are going to be used for

      yes. Me: re-usable insights from project work, exploring defined fields of interest to see adjacent topics I may move into or parts to currently focus on, blogposts on same, see evolutionary patterns in my stuff.

      Btw need to find a diff term than output, too much productivity overtones. life isn't 'output', it's lived.

    2. seriously considering moving my research into a different app, or vault to keep it segregated from the slip box

      ? the notes are the research/learning, no? Not only a residue of it. Is this a mix-up between the old stock and flow disc in (P)KM and the sense it needs to be one or the other? Both! That allows dancing with it.

    1. Kate Darling wrote a great book called The New Breed where she argues we should think of robots as animals – as a companion species who compliments our skills. I think this approach easily extends to language models.

      Kate Darling (MIT, Econ/Law from Uni Basel and ETH ZH) https://en.wikipedia.org/wiki/Kate_Darling http://www.katedarling.org/ https://octodon.social/@grok

      antilibrary add [[The New Breed by Kate Darling]] 2021 https://libris.nl/boek?authortitle=kate-darling/the-new-breed--9781250296115#

      Vgl the 'alloys' in [[Meru by S.B. Divya]]

    2. Language models are very good at some things humans are not good at, such as search and discovery, role-playing identities/characters, rapidly organising and synthesising huge amounts of data, and turning fuzzy natural language inputs into structured computational outputs.And humans are good at many things models are bad at, such as checking claims against physical reality, long-term memory and coherence, embodied knowledge, understanding social contexts, and having emotional intelligence.So we should use models to do things we can’t do, not things we’re quite good at and happy doing. We should leverage the best of both kinds of “minds.”

      The Engelbart perspective on how models can augment our cognitive abilities. Machines for search/discovery (of patterns I'd add, and novel outliers), role play (?, NPCs?, conversational partner Luhmann like, learning buddy?), structuring, lines of reasoning, summaries. (Of the last, those may actually be needed human work, go from the broader richer to the summarised outline as part of the internalisation process in learning).

      Human: access to reality, social context, emotional intelligence, access to reality, longterm memory (machines can help here too obvs), embodied K. And actual real world goals / purposes!

    3. Making these models smaller and more specialised would also allow us to run them on local devices instead of relying on access via large corporations.

      this. Vgl [[CPUs, GPUs, and Now AI Chips]] hardware with ai on them. Vgl [[Everymans Allemans AI 20190807141523]]

    4. They're just interim artefacts in our thinking and research process.

      weave models into your processes not shove it between me and the world by having it create the output. doing that is diminishing yourself and your own agency. Vgl [[Everymans Allemans AI 20190807141523]]

    5. One alternate approach is to start with our own curated datasets we trust. These could be repositories of published scientific papers, our own personal notes, or public databases like Wikipedia.We can then run many small specialised model tasks over them.

      Yes, if I could run my own notes of 3 decades or so on an LLM locally (where it doesn't feed the general model), that I would do instantly.

    6. The question I want everyone to leave with is which of these possible futures would you like to make happen? Or not make happen?
      1. Passing the reverse Turing test
      2. Higher standards, higher floors and ceilings
      3. Human centipede epistemology (ugh what an image)
      4. Meatspace premium
      5. Decentralised human authentication
      6. The filtered web

      Intuitively I think 1, 4, and 6 already de facto exist in the pre-generative AI web, and will get more important. Tech bros will go all in on 5, and I do see a role for it (e.g. to vouch that a certain agent acts on my behalf). I can see the floor raising of 2, and the ceiling raising too, but only if it is a temporary effect to a next 'stable' point (or it will be a race we'll loose), grow sideways not only up). Future 3 is def happening in essence, but it will make the web useless so there's a hard stop to this scenario, at high societal cost. Human K as such isn't dependent on the web or a single medium, and if it all turns to ashes, other pathways will come up (which may again be exposed to the same effect though)

    7. A more ideal form of this is the human and the AI agent are collaborative partners doing things together. These are often called human-in-the-loop systems.

      collaborative is different from shifting the locus of agency to the human, it implies shared agency. Also human in the loop I usually see used not for agency but for control (final decision is a human) and hence liability. (Which is often problematic because the human is biased to accept conclusions presented to them. ) Meant as safeguard only, not changing the role of the model agent, or intended to shift agency.

    8. I’m on Twitter @mappletonsI’m sure lots of people think I’ve said at least one utterly sacrilegious and misguided thing in this talk.You can go try to main character me while Twitter is still a thing.

      Ha! :D

    9. I tried to come up with three snappy principles for building products with language models. I expect these to evolve over time, but this is my first passFirst, protect human agency. Second, treat models as reasoning engines, not sources of truth And third, augment cognitive abilities rather than replace them.

      Use LLM in tools that 1. protect human agency 2. treat models as reasoning engines, not source of truth / oracles 3. augment cog abilities, no greedy reductionism to replace them

      I would not just protect human agency, which turns our human efforts into a preserve, LLM tools need to increase human agency (individually and societally) 3 yes, we must keep Engelbarting! lack of 2 is the source of the hype balloon we need to pop. It starts with avoiding anthromorphizing through our idiom around these tools. It will be hard. People want their magic wand, not the colder realism of 2 (you need to keep sorting out your own messes, but with a better shovel)

    10. At this point I should make clear generative AI is not the destructive force here. The way we’re choosing to deploy it in the world is. The product decisions that expand the dark forestness of the web are the problem.So if you are working on a tool that enables people to churn out large volumes of text without fact-checking, reflection, and critical thinking. And then publish it to every platform in parallel... please god, stop.So what should you be building instead?

      tech bro's will tech bro, in short. I fully agree, I wonder if this one sentence is enough to balance the entire talk until now not challenging the context of these tool deployments, but only addressing the symptoms and effects it's causing?

    11. We will eventually find it absurd that anyone would browse the “raw web” without their personal model filtering it.

      yes, it already is that way in effect.

    12. In the same way, very few of us would voluntarily browse the dark web. We’re quite sure we don’t want to know what’s on it.

      indeed, that's what it currently looks like. However....I would not mind my agents going over the darkweb as precaution or as check for patterns. At issue is that me doing that personally now takes way too much time for the small possibility I catch something significant. If I can send out agents the time spent wouldn't matter. Of course at scale it would remove the dark web one more step into the dark, as when all send their agents the darkweb is fully illuminated.

    13. We will have to design this very carefully, or it'll give a whole new meaning to filter bubbles.

      Not just bubble, it will be the FB timeline. Key here is agency, and design for human biases. A model is likely much better than I to manage the diversity of sources for me, if I give it a starting point myself, or to see which outliers to include etc. Again I think it also means moving away from single artefacts. Often I'm not interested in what everyone is saying about X, but am interested in who is talking about X. Patterns not singular artefacts. See [[Mijn ideale feedreader 20180703063626]]

    14. I expect these to be baked into browsers or at the OS level.These specialised models will help us identify generated content (if possible), debunk claims, flag misinformation, hunt down sources for us, curate and suggest content, and ideally solve our discovery and search problems.

      Appleton suggests agents to fact check / filter / summarise / curate and suggest (those last two are more personal than the others, which are the grunt work of infostrats) would become part of your browser. Only if I can myself strongly influence what it does (otherwise it is the FB timeline all over again!)

      If these models become part of the browser, do we still need the browser as a metaphor for a window on the web, or surfing the net? Why wouldn't those models come up with whatever they grabbed from the web/net/darkweb in the right spot in my own infostrats? The browser is itself not a part of my infostrats, it's the starting point of it, the viewer on the raw material. Whatever I keep from browsing is when PKM starts. When the model filters / curates why not put that in the right spots for me to start working with it / on it / processing it? The model not as part of the browser, but doing the actual browsing, an active agent going out there to flag patterns of interest (based on my prefs/current issues etc) and organising it for me for my next steps? [[Individuele software agents 20200402151419]]

    15. Those were all a bit negative but there is some hope in this future.We can certainly fight fire with fire.I think it’s reasonable to assume we’ll each have a set of personal language models helping us filter and manage information on the web

      Yes, agency at the edges. Ppl running their own agents. Have your agents talk to my agents to arrange a meeting etc. That actually frees up time. Have my agent check out the context and background of a text to judge whether it's a human author or not etc. [[Persoonlijke algoritmes als agents 20180417200200]] [[Individuele software agents 20200402151419]]

    16. People will move back to cities and densely populated areas. In-person events will become preferable.

      Ppl never stopped moving into cities. Cities are an efficient form human organisation. [[De stad als efficientie 20200811085014]]

      In person events have always been preferable because we're human. Living further away with online access has mitigated that, but not undone it.

    17. Once two people, they can confirm the humanity of everyone else they've met IRL. Two people who know each of these people can confirm each other's humanity because of this trust network.

      ssl parties etc. Threema. mentioned above. Catfish! Scale is an issue in the sense that social distance will remain social distance, so it still leaves you with the question how to deal with something that is from a far away social distance (as is an issue on the web now: how we solve it is lurking / interacting and then when the felt distance is smaller go to IRL)

    18. As we start to doubt all “people” online, the only way to confirm humanity is to meet offline over coffee or a drink.

      this is already common for decades, not because of doubt, but because of being human. My blogging since 2002 has created many new connections to people ('you imaginary friends' the irl friends of a friend call them teasingly), and almost immediately there was a shared need felt to meet up in person. Online allowed me to cast a wider net for connections, but over time that was spun into something IRL. I visited conferences for this, organised conferences for it, traveled to people's homes, many meet-ups, our birthday unconferences are also a shape of this. Vgl [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]] Dopplr serviced this.

    19. Next, we have the meatspace premium.We will begin to preference offline-first interactions. Or as I like to call them, meatspace interactions.

      meat-space premium, chuckle.

    20. study done this past December to get a sense of how possible this is: Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers" – Catherine Gao, et al. (2022)Blinded human reviewers were given a mix of real paper abstracts and ChatGPT-generated abstracts for submission to 5 of the highest-impact medical journals.

      I think these types of tests can only result in showing human failing at them. Because the test is reduced to judging only the single artefact as a thing in itself, no context etc. That's the basic element of all cons: make you focus narrowly on something, where the facade is, and not where you would find out it's fake. Turing isn't about whether something's human, but whether we can be made to believe it is human. And humans can be made to believe a lot. Turing needs to keep you from looking behind the curtain / in the room to make the test work, even in its shape as a thought experiment. The study (judging by the sentences here) is a Turing test in the real world. Why would you not look behind the curtain? This is the equivalent of MIT's tedious trolley problem fixation and calling it ethics of technology, without ever realising that the way out of their false dilemma's is acknowledging nothing is ever a di-lemma but always a multi-lemma, there are always myriad options to go for.

    21. Takes the replication crisis to a whole new level.Just because words are published in journals does not make them true.

      Agreed, still this was true before generative AI too. There's a qualitative impact to be expected from this quantitative shift [[Kwantiteit leidt tot kwaliteit 20201211155505]], and it may well be the further/complete erosion of scientific publishing in its current form. Which likely isn't bad, as it is way past its original purpose already: making dissemination cheaper so other scientists can build on it. Dissemination has no marginal costs attached anymore since digitisation. Needs a new trusted human system for sharing publications though, where peer network precedes submission of things to a pool of K.

    22. if content generated from models becomes our source of truth, the way we know things is simply that a language model once said them. Then they're forever captured in the circular flow of generated information

      This is definitely a feedback loop in play, as already LLMs emulate bland SEO optimised text very well because most of the internet is already full of that crap. It's just a bunch of sites, and mostly other sources that serve as source of K though, is it not? So the feedback loop exposes to more people that they shouldn't see 'the internet' as the source of all truth? And is this feedbackloop not pointing to people simply stopping to take this stuff in (the writing part does not matter when there's no reader for it)? Unless curated, filtered etc by verifiable human actors? Are we about to see personal generative agents that can do lots of pattern hunting for me on my [[Social Distance als ordeningsprincipe 20190612143232]] en [[Social netwerk als filter 20060930194648]]

    23. We can publish multi-modal work that covers both text and audio and video. This defence will probably only last another 6-12 months.

      Multi-modal output can for now still suggest there's a human at work, not a generative agent. But multi-modal output can soon if not already also be generated. This still seems to focus on the output being the thing authenticated to identify human making. output that is connected to other generated output. There's still no link to things outside the output, into the authors life e.g. Can one fake the human process towards output, which is not a one-off thing (me writing this in a certain way), but a continuous and evolving thing (me writing this in a certain way as part of a certain information process, connected to certain of my work processes etc.). Seen from processes multi-modal output isn't a different media format, it is also work results, projects created, agency in the physical world. In those processes all output is an intermediate result. Because of those evolving processes my [[Blogs als avatar 20030731084659]]. Vgl [[Kunst-artefact is (tussen)uitkomst proces 20140505070232]] There was this article about an artist I can't find back that saw all his outputs over time as intermediate and expression of one narrative. This https://www.flickr.com/photos/tonz/52849988531/in/datetaken/ comes to mind to. Provenance and entanglement as indicators of authenticity.

    24. But some people will realise they shouldn’t be letting language models literally write words for them. Instead, they'll strategically use them as part of their process to become even better writers.They'll integrate them by using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.

      This hints towards prompt-engineering, and the role of prompts in human interaction itself [[Prompting skill in conversation and AI chat 20230301120740]]

      High Q use of generative AI will be about where in a creative / work process you employ to what purpose. Not in accepting the current face presented to us in e.g. chatGPT: give me an input and I'll give you an output. This in turn requires an understanding of one's own creative work processes, and where tools can help reduce friction (and where the friction is the cognitive actual work and must not be taken out)

    25. Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.

      If you use LLMs to improve your mediocre writing it will help. If you use it to outsource too much of your own cognitive work it will get you the bland SEO texts the LLMs were trained on and the result will be more mediocre. Greedy reductionism will get punished.

    26. This raises both the floor and the ceiling for the quality of writing.

      I wonder about reading after this entire section about writing. Why would I ever bother reading generated texts (apart from 'anonymous' texts like manuals? It does not negate the need to be able to identify a human author, on the contrary, but it would also make even the cheapest way of generating too costly if noone will ever read it or act upon it. Current troll farming has effect because we read it, and still assume it's human written and genuine. As soon as that assumption is fully eroded whatever gets generated will not have impact, because there's no reader left to be impacted. The current transitional assymmetry in judging output vs generating it is costly to humans, people will learn to avoid that cost. Another angle is humans pretending to be the actual author of generated texts.

    27. And lastly, we can push ourselves to do higher quality writing, research, and critical thinking. At the moment models still can't do sophisticated long-form writing full of legitimate citations and original insights.

      Is this not merely entering an 'arms race' against our own tools? With the rat race effect of higher demands over time?

      What about moving sideways not up? Bringing in the richness of the layering of our (internal) reality and lives? The entire fabric that makes up our lives, work, communities, societies, indicately more richly in our artefacts. Which is where my sense of beauty is [[Schoonheidsbegrip 20151023132920]] as [[Making sense is deeply emotional 20181217130024]]

    28. On the new web, we’re the ones under scrutiny. Everyone is assumed to be a model until they can prove they're human.

      On a web with many generative agents, all actors are going to be assumed models until it is clear they're really human.

      Maggie Appleton calls this 'passing the reverse Turing test'. She suggests using different languages than English, insider jargon etc, may delay this effect by a few months at most (and she's right, I've had conversations with LLMs in several languages now, and there's no real difference anymore with English as there was last fall.)

    29. When you read someone else’s writing online, it’s an invitation to connect with them. You can reply to their work, direct message them, meet for coffee or a drink, and ideally become friends or intellectual sparring partners. I’ve had this happen with so many people. Highly recommend.There is always someone on the other side of the work who you can have a full human relationship with.Some of us might argue this is the whole point of writing on the web.

      The web is conversation (my blog def is), texts are a means to enter into a conversation, connection. For algogens the texts are the purpose (and human time spend evaluating its utility and finding it generated an externalised cost, assymmetric as an LLM can generate more than one can ever evaluate for authenticity). Behind a generated text there's no author to connect to. Not in terms of annotation (cause no author intention) and not in terms of actual connection to the human behind the text.

    30. This clearly does not represent all human cultures and languages and ways of being.We are taking an already dominant way of seeing the world and generating even more content reinforcing that dominance

      Amplifying dominant perspectives, a feedback loop that ignores all of humanity falling outside the original trainingset, which is impovering itself, while likely also extending the societal inequality that the data represents. Given how such early weaving errors determine the future (see fridges), I don't expect that to change even with more data in the future. The first discrepancy will not be overcome.

    31. This means they primarily represent the generalised views of a majority English-speaking, western population who have written a lot on Reddit and lived between about 1900 and 2023.Which in the grand scheme of history and geography, is an incredibly narrow slice of humanity.

      Appleton points to the inherent severely limited trainingset and hence perspective that is embedded in LLMs. Most of current human society, of history and future is excluded. This goes back to my take on data and blind faith in using it: [[Data geeft klein deel werkelijkheid slecht weer 20201219122618]] en [[Check data against reality 20201219145507]]

    32. But a language model is not a person with a fixed identity.They know nothing about the cultural context of who they’re talking to. They take on different characters depending on how you prompt them and don’t hold fixed opinions. They are not speaking from one stable social position.

      Algogens aren't fixed social entities/identities, but mirrors of the prompts

    33. Everything we say is situated in a social context.

      Conversation / social interaction / contactivity is the human condition.

    34. A big part of this limitation is that these models only deal with language.And language is only one small part of how a human understands and processes the world.We perceive and reason and interact with the world via spatial reasoning, embodiment, sense of time, touch, taste, memory, vision, and sound. These are all pre-linguistic. And they live in an entirely separate part of the brain from language.Generating text strings is not the end-all be-all of what it means to be intelligent or human.

      Algogens are disconnected from reality. And, seems a key point, our own cognition and relation to reality is not just through language (and by extension not just through the language center in our brain): spatial awareness, embodiment, senses, time awareness are all not language. It is overly reductionist to treat intelligence or even humanity as language only.

    35. This disconnect between its superhuman intelligence and incompetence is one of the hardest things to reconcile.

      generative AI as very smart and super incompetent at the same time, which is hard to reconcile. Is this a [[Monstertheorie 20030725114320]] style cultural category challenge? Or is the basic one replacing human cognition?

    36. But there are a few key differences between content generated by models versus content made by humans.First is its connection to reality. Second, the social context they live within. And finally their potential for human relationships.

      yes, all generated content is devoid of an author context e.g. It's flat and 2D in that sense, and usually fully self contained no references to actual experiences, experiments or things outside the scope of the immediate text. As I describe https://hypothes.is/a/kpthXCuQEe2TcGOizzoJrQ

    37. I think we’re about to enter a stage of sharing the web with lots of non-human agents that are very different to our current bots – they have a lot more data on how behave like realistic humans and are rapidly going to get more and more capable.Soon we won’t be able to tell the difference between generative agents and real humans on the web.Sharing the web with agents isn’t inherently bad and could have good use cases such as automated moderators and search assistants, but it’s going to get complicated.

      Having the internet swarmed by generative agents is unlike current bots and scripts. It will be harder to see diff between humans and machines online. This may be problematic for those of us who treat the web as a space for human interaction.

    38. There's a new library called AgentGPT that's making it easier to build these kind of agents. It's not as sophisticated as the sim character version, but follows the same idea of autonomous agents with memory, reflection, and tools available. It's now relatively easy to spin up similar agents that can interact with the web.

      AgentGPT https://agentgpt.reworkd.ai/nl is a version of such Generative Agents. It can be run locally or in your own cloud space. https://github.com/reworkd/AgentGPT

    39. These language-model-powered sims had some key features, such as a long-term memory database they could read and write to, the ability to reflect on their experiences, planning what to do next, and interacting with other sim agents in the game

      Generative agents have a database for long term memory, and can do internal prompting/outputs

    40. Recently, people have taken this idea further and developed what are being called “generative agents”.Just over two weeks ago, this paper "Generative Agents: Interactive Simulacra of Human Behavior" came out outlining an experiment where they made a sim-like game (as in, The Sims) filled with little people, each controlled by a language-model agent.

      Generative agents are a sort of indefinite prompt chaining: an NPC or interactive thing can be LLM controlled. https://www.youtube.com/watch?v=Gz6mAX41fs0 shows this for Skyrim. Appleton mentions a paper https://arxiv.org/abs/2304.03442 which does it for simlike stuff. See Zotero copy Vgl [[Stealing Worlds by Karl Schroeder]] where NPC were a mix of such agents and real people taking on an NPC role.

    41. Recently, people have been developing more sophisticated methods of prompting language models, such as "prompt chaining" or composition.Ought has been researching this for a few years. Recently released libraries like LangChain make it much easier to do.This approach solves many of the weaknesses of language models, such as a lack of knowledge of recent events, inaccuracy, difficulty with mathematics, lack of long-term memory, and their inability to interact with the rest of our digital systems.Prompt chaining is a way of setting up a language model to mimic a reasoning loop in combination with external tools.You give it a goal to achieve, and then the model loops through a set of steps: it observes and reflects on what it knows so far and then decides on a course of action. It can pick from a set of tools to help solve the problem, such as searching the web, writing and running code, querying a database, using a calculator, hitting an API, connecting to Zapier or IFTTT, etc.After each action, the model reflects on what it's learned and then picks another action, continuing the loop until it arrives at the final output.This gives us much more sophisticated answers than a single language model call, making them more accurate and able to do more complex tasks.This mimics a very basic version of how humans reason. It's similar to the OODA loop (Observe, Orient, Decide, Act).

      Prompt chaining is when you iterate through multiple steps from an input to a final result, where the output of intermediate steps is input for the next. This is what AutoGPT does too. Appleton's employer Ought is working in this area too. https://www.zylstra.org/blog/2023/05/playing-with-autogpt/

    42. Most of the tools and examples I’ve shown so far have a fairly simple architecture.They’re made by feeding a single input, or prompt, into the big black mystery box of a language model. (We call them black boxes because we don't know that much about how they reason or produce answers. It's a mystery to everyone, including their creators.)And we get a single output – an image, some text, or an article.

      generative AI currently follows the pattern of 1 input and 1 output. There's no reason to expect it will stay that way. outputs can scale : if you can generate one text supporting your viewpoint, you can generate 1000 and spread them all as original content. Using those outputs will get more clever.

    43. By now language models have been turned into lots of easy-to-use products. You don't need any understanding of models or technical skills to use them.These are some popular copywriting apps out in the world: Jasper, Copy.ai, Moonbeam

      Mentioned copy writing algogens * Jasper * Wordtune * copy.ai * quillbot * sudowrite * copysmith * moonbeam

    44. These are machine-learning models that can generate content that before this point in history, only humans could make. This includes text, images, videos, and audio.

      Appleton posits that the waves of generative AI output will expand the dark forest enormously in the sense of feeling all alone as a human online voice in an otherwise automated sea of content.

    45. However, even personal websites and newsletters can sometimes be too public, so we retreat further into gatekept private chat apps like Slack, Discord, and WhatsApp.These apps allow us to spend most of our time in real human relationships and express our ideas, with things we say taken in good faith and opportunities for real discussions.The problem is that none of this is indexed or searchable, and we’re hiding collective knowledge in private databases that we don’t own. Good luck searching on Discord!

      Appleton sketches a layering of dark forest web (silos mainly), cozy web (personal sites, newsletters, public but intentionally less reach), and private chat groups, where you are in pseudo closed or closed groups. This is not searchable so any knowledge gained / expressed there is inaccessible to the wider community. Another issue I think is that these closed groups only feel private, but are in fact not. Examples mentioned like Slack, Discord and Whatsapp are definitely not private. The landlord is wacthing over your shoulder and gathering data as much as the silos up in the dark forest.

    46. We end up retreating to what’s been called the “cozy web.”This term was coined by Venkat Rao in The Extended Internet Universe – a direct response to the dark forest theory of the web. Venkat pointed out that we’ve all started going underground, as if were.We move to semi-private spaces like newsletters and personal websites where we’re less at risk of attack.

      Cozy Web is like Strickler/Liu's black zones above. Sounds friendlier.

    47. The overwhelming flood of this low-quality content makes us retreat away from public spaces of the web. It's too costly to spend our time and energy wading through it.

      Strickler compares this to black zones as described in [[Three Body Problem _ Dark Forest by Cixin Liu]], withdraw into something smaller which is safe but also excluding yourself permanently from the greater whole. Liu describes planets that lower the speed of light around them on purpose so they can't escape their own planet anymore. Which makes others leave them alone, because they can't approach them either.

    48. It’s difficult to find people who are being sincere, seeking coherence, and building collective knowledge in public.While I understand that not everyone wants to engage in these activities on the web all the time, some people just want to dance on TikTok, and that’s fine!However, I’m interested in enabling productive discourse and community building on at least some parts of the web. I imagine that others here feel the same way.Rather than being a primarily threatening and inhuman place where nothing is taken in good faith.

      Personal websites like mine since mid 90s fit this. #openvraag what incentives are there actually for people now to start their own site for online interaction, if you 'grew up' in the silos? My team is largely not on-line at all, they use services but don't interact outside their own circles.

    49. Many people choose not to engage on the public web because it's become a sincerely dangerous place to express your true thoughts.

      The toxicity made me leave FB and reduce my LinkedIn and Twitter exposure. Strickler calls remaining nonetheless the bowling alley effect: you don't like bowling but you know you'll meet your group of regular friends there.

    50. This is a theory proposed by Yancey Striker in 2019 in the article The Dark Forest Theory of the InternetYancey describes some trends and shifts around what it feels like to be in the public spaces of the web.

      Hardly a 'theory', a metaphor re-applied to experiencing online interaction. (Strickler ipv Striker)

      The internet feels lifeless: ads, trolling factories, SEO optimisation, crypto scams, all automated. No human voices. The internet unleashes predators: aggressie behaviour at scale if you do show yourself to be a human. This is the equivalent of Dark Forest.

      Yancey Strickler https://onezero.medium.com/the-dark-forest-theory-of-the-internet-7dc3e68a7cb1 https://onezero.medium.com/beyond-the-dark-forest-a905e2dd8ae0 https://www.ystrickler.com/

    51. the dark forest theory of the universe

      A specific proposed solution to [[Fermi Paradox 20201123150738]] where is everybody? Dark forest, it's full of life but if you walk through it it seems empty. Universe seems empty of intelligent life to us as well. Because life forms know that if you let yourself be heard/seen you'll be attacked by predators. Leading theme in [[Three Body Problem _ Dark Forest by Cixin Liu]]

    52. Secondly, I’m what we call “very online”. I live on Twitter and write a lot online. I hang out with people who do the same, and we write blog posts and essays to each other while researching. As if we're 18th-century men of letters. This has led to lots of friends and collaborators and wonderful jobs.Being a sincere human on the web has been an overwhelmingly positive experience for me, and I want others to have that same experience.

      True for me (and E) too. For me it largely was because the internet became a thing right around when I entered uni in the late 80s, and it always was about connecting. Blogging esp early in the years 2002-2009 led to a large part of my personal and professional peers network.

      '18th c. men of letters' I've sometimes thought about it like that actually, and treat meet-ups etc like the Salons of old vgl. [[Salons organiseren 20201216205547]]

    53. https://web.archive.org/web/20230503150426/https://maggieappleton.com/forest-talk

      Maggie Appleton on the impact of generative AI on internet, with a focus on it being a place for humans and human connection. Take out some of the concepts as shorthand, some of the examples mentioned are new to me --> add to lists, sketch out argumentation line and arguments. The talk represents an updated version of earlier essay https://maggieappleton.com/ai-dark-forest which I probably want to go through next for additional details.

    1. Ought makes Elicit (a tool I should use more often). Maggie Appleton works here. A non-profit research lab into machine learning systems to delegate open-ended thinking to.

    1. https://web.archive.org/web/20230503191702/https://www.rechtenraat.nl/artikel-10-evrm-en-woo/ Caroline Raat start zaak tegen RvS om toepassing uitzonderingsgronden WOO.

      Argumentatie: - 2017 EHRM arrest stelt dat public watchdogs (journo's, bloggers, ngo's, wetenschap) direct beroep kunnen doen op art 10 EVRM voor toegang tot documenten van de overheid. - Als het zo'n watchdog is die toegang vraagt, gaat EHRM boven WOO/WOB. - Watchdog hoeft alleen aan te tonen dat info nodig voor publieke info-voorlichting - Watchdog hoeft geen bijzondere omstandigheden aan te tonen - Weigering kan alleen als daar dringen maatschappelijk reden toe is - Weigering kan alleen op basis genoemde uitzonderingen in art10lid2 EVRM zelf, en weigering moet als noodzakelijk voor de samenleving gemotiveerd worden - Andere weigeringsgronden in WOB/WOO zijn niet van toepassing.

      Dit zou bijv heel ander verloop van Shell papers kunnen geven.

    1. ICs as hardware versions of AI. Interesting this is happening. Who are the players, what is on those chips? In a sense this is also full circle for neuronal networks, back in the late 80s / early 90s at uni neuronal networks were made in hardware, before software simulations took over as they scaled much better both in number of nodes and in number of layers between inputs and output. #openvraag Any open source hardware on the horizon for AI? #openvraag a step towards an 'AI in the wall' Vgl [[AI voor MakerHouseholds 20190715141142]] [[Everymans Allemans AI 20190807141523]]

    1. https://web.archive.org/web/20230503153010/https://subconscious.substack.com/p/llms-break-the-internet-signing-everything

      Gordon Brander on how Maggie Appleton's point in her talk may be addressed: by humans signing their output (it doesn't preclude humans signing generated output I suppose, which amounts to the same result as not signing) Appleton suggests IRL meet-ups are key here for the signing. Reminds me of the 'parties' where we'd sign / vouch for each others SSL certs. Or how already in Threema IRL meet-ups are used to verify Threema profiles as mutually trusted. Noosphere is more than this though? It would replace the current web with its own layer. (and issues). Like Maggie Appleton mentions Dead Internet Theory

    1. https://web.archive.org/web/20230430194301/https://netzpolitik.org/2023/longtermism-an-odd-and-peculiar-ideology/ The EA/LT reasoning explained in this interview, in a way that allows easy outlining. Bit sad to see Jaan Tallinns existential risk path taking this shape, CSER seemed to be more balanced back in 2012/13 when I briefly met him in the context of TEDxTallinn, with climate change a key existential risk, not a speed bump on the road to advanced AI to provide future humanity.

    1. https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots

      This seem like a number of useful observations wrt interacting with LLM based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]

  2. Apr 2023
    1. You’d think this “independence” might drive a person toward that problematic pioneer fantasy, but it only underlines to me how self-sufficiency is a LARP.

      off grid / prepping is a LARP, very striking observation.

    1. The one-sentence-summary compresses the summary to one sentence (or two). The title is a further compression of the content into a few words. Working on the one-sentence summary and the title is an act of learning itself. You cannot get any understanding of the Method without real content. See this video for further explanations: How to write good titles for your Zettelkasten

      In narrative inquiry I ask people to title the experience they shared after sharing. Similarly I write my own titles usually after the content of a blogpost or a notion. Although when it comes to the internal branching highlighted above I usually start with a temporary title, which captures the jumping off point from the originating note.

    2. he digital Zettelkasten, freed from physical limitations, offers a unique feature: You can flesh out ideas, look at them from different directions, apply different ways of analysis, and use theoretically infinite methods to explore the idea on a single note. As a result, the note grows in size, but then you can refactor it. You refactor the note, move the grown components as new ideas into new notes and make the parent note about the relationship between the new notes.

      I have this regularly, whenever I spend a bit of time on usually two or three related notes. Usually it annoys me because it sometimes feels like the branching goes faster than I can keep up with noting. That's from a 'production' perspective. Here I was aiming to finish a note, reducing the unfinished corpus by one, only to add a bunch of new beginnings to the heap to go through. The internal branching is a more positive phrasing for an effect I regularly treat as 'more work'. Good switch of perspective, as I have a mental image of external explosion that I can't contain, whereas internal branching is like fractals within the same general boundary. Good image.

    1. In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations

      not just trained with, also trained by. is it fully transparent though? Perhaps from the trainers/tools standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA style, and project agency and thus morality on it.

    1. https://web.archive.org/web/20230411095546/https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

      On the temporary ban of ChatGPT in Italy on the basis of GDPR concerns.

      Italian DPA temporarily bans ChatGPT until adequate answers are received from OpenAI. Issues to address: 1. Absence of age check (older than 13) of ChatGPT users 2. Missing justification for the presence of personal data in trainingsdata of ChatGPT. 3. OpenAI has no EU based offices and as such there's no immediate counterparts for DPAs to interact with them. The temp ban is to ensure a conversation with OpenAI will be started.

      The trigger was a 9 hour cybersecurity breach where user's financial information and content of their prompts/generated texts leaked over into other accounts.

    1. This is the space where AI can thrive, tirelessly processing these countless features of every patient I’ve ever treated, and every other patient treated by every other physician, giving us deep, vast insights. AI can help do this eventually, but it will first need to ingest millions of patient data sets that include those many features, the things the patients did (like take a specific medication), and the outcome.

      AI tools yes, not ChatGPT though. More contextualising and specialisation needed. And I'd add the notion that AI might be necessary as temporary fix, on our way to statistics. Its power is in weighing (literally) many more different factors then we could statistically figure out, also because of interdependencies between factors. Once that's done there may well be a path to less blackbox tooling like ML/DL towards logistic regression: https://pubmed.ncbi.nlm.nih.gov/33208887/ [[Machine learning niet beter dan Regressie 20201209145001]]

    2. My fear is that countless people are already using ChatGPT to medically diagnose themselves rather than see a physician. If my patient in this case had done that, ChatGPT’s response could have killed her.

      More ELIZA. The opposite of searching on the internet for your symptoms and ending up with selfdiagnosing yourself with 'everything' as all outliers are there too (availability bias), doing so through prompting generative AI will result in never suggesting outliers because it will stick to dominant scripted situations (see the vignettes quote earlier) and it won't deviate from your prompts.

    3. If my patient notes don’t include a question I haven’t yet asked, ChatGPT’s output will encourage me to keep missing that question. Like with my young female patient who didn’t know she was pregnant. If a possible ectopic pregnancy had not immediately occurred to me, ChatGPT would have kept enforcing that omission, only reflecting back to me the things I thought were obvious — enthusiastically validating my bias like the world’s most dangerous yes-man.

      Things missing in a prompt will not result from a prompt. This may reinforce one's own blind spots / omissions, lowering the probability of an intuitive leap to other possibilities. The machine helps you search under the light you switched on with your prompt. Regardless of whether you're searching in the right place.

    4. My experiment illustrated how the vast majority of any medical encounter is figuring out the correct patient narrative. If someone comes into my ER saying their wrist hurts, but not due to any recent accident, it could be a psychosomatic reaction after the patient’s grandson fell down, or it could be due to a sexually transmitted disease, or something else entirely. The art of medicine is extracting all the necessary information required to create the right narrative.

      This is where complexity comes in, teasing out narratives and recombine them into probes, probing actions that may changes the weights of narratives and mental models held about a situation. Not diagnostics, but building the path towards diagnostics. Vgl [[Probe proberend handelen 20201111162752]] [[Vertelpunt 20201111170556]]

    5. ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part)

      I am coming to see this as a pitfall of generative AI texts. It seduces us to anthromorphise the machine, to read intent and comprehension in the generated text. Removing the noise in generating text, meaning the machine would give the same rote answers to the same prompts would reduce this human projection. It would make the texts much 'flatter' and blander than they currently already are. Our fascination with these machines is that they sometimes sound like us, and it makes us easily overlook the actual value of the content produced. In human conversation we would give these responses a pass as they are plausible, but we'd also not treat conversation as likely fully true.

    6. This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database.

      Machines will do well in scripted situations (in itself a form of automation / codification). This was a factor in Hzap 08 / 09 in Rotterdam, where in programming courses the problems were simplified and highly scripted to enable the teacher to be able to grade the results, but at the cost of removing students from actual real life programming challenges they might encounter. It's a form of greedy reductionism of complexity. Whereas the proof of the pudding is performing well within complexity.

    7. Here’s what I found when I asked ChatGPT to diagnose my patients

      A comparison of ChatGPT responses to actual ER case descriptions. Interesting experiment by the author, though there shouldn't be an expectation for better results than it gave.

    1. Genre is a conversation

      Ha. Annotation Kalir/Garcia positions annotation as genre, and as (distributed) conversation. [[Annotatie als genre of als middel 20220515112227]], [[Annotation by Remi Kalir Antero Garcia]] and [[Gedistribueerde conversatie 20180418144327]]

      The human condition in its entirety is an infinite conversation I suspect.

    1. https://web.archive.org/web/20230404092627/https://newsletter.mollywhite.net/p/feedly-launches-strikebreaking-as

      This entire 'feedly goes into strikebreaking' headline at first didn't make any sense to me when E first mentioned it. First because it sounds extremely out there, in terms of 'service', and second, it's RSS, which I think is hardly suited for the type of claims this service makes (and the article shows that too imo). RSS content hardly shows emergent patterns if you've not defined the network/group you're drawing from imo (e.g. media are not useful for it), and it works at a slower pace than 'let's see if this protest turns violent'. I've worked for orgs that had a 'keep our employees save' coordination centre, and they defintely didn't tap into RSS. They'd send me an sms to avoid a certain part of certain city because of a disease outbreak for instance, or warn met of specific types of crime to watch out for when embarking on a mission, or real time weather warnings for my location.

      I haven't used Feedly, I only mentioned it once on my blog in 2019, because my hoster blocked it as 'bad bot'. Foresight? https://www.zylstra.org/blog/2019/06/feedly-blocked-as-bad-bot-by-my-hoster/ I think that blocking feedly might be not as bad as I thought in 2019

    2. But I also don’t think that a company that creates harmful technology should be excused simply because they’re bad at it.

      Being crap at doing harm doesn't allow you to claim innocence of doing harm.

    1. https://web.archive.org/web/20230404050349/https://greshake.github.io/

      This site goes with this paper <br /> https://doi.org/10.48550/arXiv.2302.12173

      The screenshot shows a curious error which makes me a little bit suspicious: the reverse Axelendaer is not rednelexa, there's an a missing.

    2. If allowed by the user, Bing Chat can see currently open websites.

      The mechanism needs a consent step from the user: to allow Bing Chat to see currently open websites. And one of those websites already open, needs to contain the promptinjection.

    3. Microsoft prevents content from GitHub pages domains from being ingested by Bing Chat at the present time.

      Wait, what does this mean. #openvraag That previously it did, but now doesn't in response to this? Or that Bing Chat never did so in the first place? In the latter this paper is dealing in hypotheticals at this stage?

    1. Somewhat suspicious of timing, but listen to those soundfiles. We're surrounded by Triffids!

      Timing: the info about these sounds is known since 2012 https://gizmodo.com/plants-communicate-with-each-other-by-using-clicking-so-5919973 but this new paper turns to learning models to derive info from the sounds made.

    1. Running BLOOM on a VPS or locally even is either expensive or very slow. Mostly because of the sheer size of the 176B model. Share it in a group of similar users on a VPS set-up? Use the Hugging Face API for BLOOM?

  3. Mar 2023
    1. Donald points rightly to some of the classic monsterisation responses to AI. Although he imo misrepresents the EU AI Act (which in its proposal carefully avoids static tech regulation).

      Vgl [[Monstertheorie 20030725114320]]

    1. I want to bring to your attention one particular cause of concern that I have heard from a number of different creators: these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic

      Generative AI abstracts away the open web that is the substrate it was trained on. Abstracting away the open web means there may be much less incentive to share on the open web, if the LLMs etc never point back to it. Vgl the way FB et al increasingly treated open web URLs as problematic.

    2. he decimation of the existing incentive models for internet creators and communities (as flawed as they are) is not a bug: it’s a feature

      replacing the incentives to share on the open web are not a mere by-effect of it being abstracted away by generative AI, but an aimed for effect. As it may push people to seek the gains of sharing elsewhere, i.e. enclosed web3 services.

    1. https://web.archive.org/web/20230316103739/https://subconscious.substack.com/p/everyone-will-have-their-own-ai

      Vgl [[Onderzoek selfhosting AI tools 20230128101556]] en [[Persoonlijke algoritmes als agents 20180417200200]] en [[Everymans Allemans AI 20190807141523]] en [[AI personal assistants 20201011124147]]

  4. www.nationaalarchief.nl www.nationaalarchief.nl
    1. MDTO (Metagegevens voor duurzaam toegankelijke overheidsinformatie) lijkt ook de norm te worden voor WOO actieve openbaarmaking. #openvraag hoe zit dat voor passieve openbaarmaking? Wordt metadatering beperkt tot de actieve categorieën?

    1. https://web.archive.org/web/20230309111559/https://www.d4d.net/news/ai-and-the-state-of-open-data/

      Tim Davies looks at the bridge between #opendata and #AI. Should go throug the chapter in version 1 of the State of Open Data too. Note: while Tim acknowledges some of the EU data strategy developments (e.g. the dataspaces) it doesn't mention others (e.g. data altruistic non-profit entities) which may fit the call for instutions better. Data space aren't an institution, but a common market

    1. https://web.archive.org/web/20230301112750/http://donaldclarkplanb.blogspot.com/2023/02/openai-releases-massive-wave-of.html

      Donald points to the race that OpenAI has spurred. Calls the use of ChatGPT to generate school work and plagiarism a distraction. LLMs are seeing a widening in where they're used, and the race is on. Doesn't address whether the race is based on any solid starting points however. To me getting into the race seems more important to some than actually having a sense what you're racing and racing for.

    1. Conversation is an art, and we are mostly pretty rubbish at it.We are entering a new era of conversational/constitutional AI. A powerful byproduct could be that we improve our conversations.

      Interesting point by John Caswell. AI prompting is a skill to learn, can we simultaneously learn to prompt better in conversations with other people? Prompting is a key thing in collecting narrated experiences for instance. Or will more conscious prompting lead to instrumentalising your conversation partner? After all AI chat prompting is goal oriented manipulation, what to put int to get the desired output? In collecting narrated experiences the narrator's reality remains a focal point, and only patterns over collections of narrated experiences are abstracted away from the original conversations. n:: [[Prompting skill in conversation and AI chat 20230301120740]] n:: [[Prompting pitfall instrumentalising conversation partner 20230301120937]]

  5. Feb 2023
    1. https://web.archive.org/web/20230226002724/https://medium.com/@ElizAyer/meetings-are-the-work-9e429dde6aa3 Meetings are regular work, so blindly avoiding meetings is damaging.

      Julian Elve follows up https://www.synesthesia.co.uk/2023/02/27/finding-the-real-work-that-meetings-are-good-at/ with lifting out the parts where Ayer discusses the type of meeting that are 'real work' and what they're for. (learning, vgl [[Netwerkleren Connectivism 20100421081941]]

    1. He didn’t just put his notes anywhere, but rather, in a place that made sense at the time, near something related, even if this was not the only or even best place for the note to go in the long term. Again, this difficulty of there being no one, best place for a particular note was addressed through the use of cross-links between notes, making it so that any given note could "exist" in more than one spot.

      Folgezettel are per the linked https://web.archive.org/web/20220125173712/https://omxi.se/2015-06-21-living-with-a-zettelkasten.html posting also a way to create some sort of initial overview in a physical system. In digital systems network maps serve a similar purpose as initial overview to be able to start with something. The outline Lawson mentions as origin is a thing in itself to me, esp as the connections / place in a system of a note can be reconsidered over time. Physical placement is by def a compromise, the question if it is a constraint that has a creative effect?

    2. hough I don’t know for certain, it seems possible that his system is a hybrid of the outlining method from law and the notecard method from history and sociology. The use of copious cross-links between the individual notes stems from his particular project of synthesizing knowledge from multiple disciplines, thus making it difficult to ever place most cards in one and only one spot in the ever-growing outline.

      presumption: L's Folgezettel are a combination of outlining (as common in US maybe not German law edu) and the note cards used in sociology. Cross linking as a way to escape forced categorisation into exclusive buckets. Is there also in cross linking an element perhaps of escaping established idiom while building new (fields) of knowledge? (Vgl. Richard Rorty's struggle when forced to explain pragmatism in the language of Platonic dilemma's. [[Taal als zicht beperkend element 20031104104523]] )

    3. Luhmann’s particular implementation of zettelkasten method should not necessarily be seen as a universal model for all knowledge work because his implementation was tailored to his own project and research questions–i.e. the production of big social theory by drawing on disparate literatures from many disciplines.

      Yes. Any pkm system or method is (or should be) tailored to one's own needs. Vgl [[% Interessevelden 20200523102304]] as [[Macroscope 20090702120700]]

    4. We just need to understand where they (likely) come from and their purpose in the overall system. In short, I believe that they are an artifact of Luhmann’s legal education and serve the purpose of synthesis.

      Lawson thinks L's Folgezettel are a product of his training in law, and they were used by L for synthesis.

    1. They may be right about Lockdown in one way that the concept of it has become big enough and detached from reality enough to house whatever theories or madness anybody wants to house in it. As such, lockdown was a huge psychohistoric event.

      ha! psychohistoric event. Yes, I recognise some of that. I've been in recent sessions that were the 3rd of 4th larger public gathering with the same group, with the group leader still repeating the mantra that we hadn't been able to meet up for so long, where we just did the same thing several times before. A ritualised phrasing to excuse any time spent on catching up. I'd rather put the catching up on the actual program. No excuse needed, psychohistoric or not.

    2. If I lift this one level, the so called “Lockdown” is being used as a scapegoat for anything and everything that people don’t like. Here in Europe the lockdowns felt very long but were brief in retrospect. The longest probably being the 3 month school/daycare closure at the start of the pandemic during which we also suffered immensely. Real hard lockdowns happened in a country like China. Claiming that the relatively mild restrictions that we had for a couple of months (and then twice more) created irreparable damage in the general population is very fucking rich.

      Indeed. 'the lockdown' in various conversations I've been in seems to be an indetermined period between 2019 and now, which serves as explanation for anything that wasn't finished in the past 3 years. As if we were all in actual stasis all that time continuously. Yes it was hard for us at times, I know it was much harder for other people I know at times in other locations, let alone what's been going on in China. But it wasn't constant and everywhere in NL or in EU. The Dutch actual lockdowns were 3 different periods and to very different degrees, with the first being the strictest, but the last one feeling the most difficult to me. I should mark the actual lockdowns and restrictions more clearly in my notes as factcheck.

    1. Why not work on improving a technical solution for Folgezettel?

      Reading this I realise I'm not using Folgezettel really, only linking back to a previous notion. There's some sequencing, esp when I create little 'trains' (a notion, a link to a more abstract notion, a link to a more detailed one, a link to an example). The forward linking I generally not do, except sometimes. L always did forward linking in the sense of placing the index card.

    1. https://web.archive.org/web//https://www.binnenlandsbestuur.nl/bestuur-en-organisatie/io-research/morele-vragen-rijksambtenaren-vaak-onvoldoende-opgevolgd

      #nieuw reden:: Verschillende soorten morele dilemma's volgens I&O bij (rijks)ambtenaren. Let wel, alleen rondom beleidsproces, niet om digitale / data dingen. #openvraag is er een kwalitatief verschil tussen die 2 soorten vragen, of zijn het verschillende verschijningsvormen van dezelfde vragen? Verhouden zich de vragen tot de 7 normen van openbaar bestuur? Waarom is dat niet de indeling dan? #2022/06/30

    1. The philosopher Peter Singer, whose writing is a touchstone for EA leaders,

      Singer, b1946, Australian @ Princeton, applied ethics, utilitarian. EA / LT as utilitarianism ad absurdum

    2. the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”

      Lofty goals can serve as 'moral shield', excusing immoral behaviour in other situations, because the higher ends 'prove' the ultimate morality of the actor.

    1. I have more recently added a ‘Start Here’ page which presents posts according to their labels (categories) as a jumping off point. Each time the page loads it will present the labels in a random order and show three random posts from that label just to mix things up a bit.

      Colin Walker added a start here page to his site to present posts in a more curated way than stream to new / incidental visitors.

    2. It does become problematic (and I wonder if Ben has noticed this) since most people are on their phone, where they won’t notice the multi-column, but rather a stream and the rest of the website underneath 🙁 No idea how to “fix” that yet.

      A comment about the diff between desktop and mobile browsing experience: one might miss the multiple columns. For my site that is true too: all the right column stuff comes at the bottom on mobile. Noone scrolls that far.

    3. As humans, our interests have become wide enough that we can at best peck at what’s flowing through

      individually yes. Feedback loops is the response. It's just that we allowd socmed to base feedback almost entirely on outrage.

    4. come to the conclusion that most of us can no longer follow the stream and make sense of what’s flowing through, or even catch what’s important

      I've always assumed the point of the stream is that you can't drink it all. My [[Infostrat Filtering 20050928171301]] is based on the stream being overwhelming. Never twice into the same river etc. You don't make sense of the stream or catch what's important. Social filtering is the bit you 'drink' from the stream, and what you reshare is feedback into it. Given enough feedback what is important will always resurface.

    5. Social networks are increasingly algorithmically organized, so their stream isn’t really a free-flowing stream

      Metaphor of algo-timeline as a canalised river, vs the free flowing stream that is e.g. a socmed stream like Mastodon.

    1. We had more than one way of presenting our blogs to the readers. Why did we stop that?

      True, not sure I did stop, but a rethink is definitely useful, and extending it.

    2. Recommend stuff to the reader on our platform, our blogs

      blogroll is that too, no?

    3. I generally follow blogs through RSS, where a stream is meaningless

      I don't follow, RSS is the stream I'd say, it's entire design is the reverse chronological order? Or does Amit specifically mean the representation of the stream on the blog front page?

    4. stream is important for me for discoverability

      As is the blogroll.

  6. Jan 2023
    1. https://web.archive.org/web/20221214055312/https://wildrye.com/roundup-of-67-tools-for-thought-to-build-your-second-brain/

      Glad to notice that: - I've heard of / know many of these tools, so have an ok overview of the current space. No surprises in the list. - I have not cycled through all these tools.

      Also interesting that The Brain still exists. Used to be my desktop interface in the late 90s/early 00s.

    1. You’re not going to have a clear picture at the start. So start with a fuzzy one

      This sounds like what I call soft-focusing. Some years ago I let go of being strict with myself, and stopped having defined goals in favor of course/directions and a vaguer sense of the destination. I also started soft-focusing my inputs (if there's a connection to my running list of interests connected to my sense of direction it qualifies), and am now trying to soft-focus my outputs. Not blogpost / project A or deliverable B as I would earlier, but more emergent. Then when I have task / creative thing to do, I use it to formulate questions to my notes and see what comes up. This evolved from doing the same in conversations with clients and colleagues where the value of that and resulting associations was clearly visible.

    1. another narrative failure: the inability to imagine a world different than the one we currently inhabit

      are there compelling stories about what comes after?

    2. frames the possibilities in absolutes: if we can’t win everything, then we lose everything

      oversimplification abounds: this isn't the single cure, so it's not helpful. While we're in a truly complex environment and per def this means a whole collectiojn of simultaneous interventions is needed. Complexity never has a single answer.

    3. stories of premature defeat are all too common

      stories about not having the solutions / being too late.

    4. we still lack stories that give context. For example, I see people excoriate the mining, principally for lithium and cobalt, that will be an inevitable part of building renewables – turbines, batteries, solar panels, electric machinery – apparently oblivious to the far vaster scale and impact of fossil fuel mining. If you’re concerned about mining on indigenous land, about local impacts or labour conditions, I give you the biggest mining operations ever undertaken: for oil, gas, and coal, and the hungry machines that must constantly consume them.

      stories lack context discouraging proper comparison (imo often by design)

    5. Greenwashing – the schemes created by fossil fuel corporations and others to portray themselves as on the environment’s side while they continue their profitable destruction – is rampant

      green washing is a category of stories

    6. Outright climate denial – the old story that climate change isn’t real – has been rendered largely obsolete (outside social media) by climate-driven catastrophes around the globe and good work by climate activists and journalists.

      social media the last refuge of climate change denial

    7. What the climate crisis is, what we can do about it, and what kind of a world we can have is all about what stories we tell and whose stories are heard

      we are all storytellers or should be, and we're in a power negotiation situation.

    8. climate journalist Mary Heglar writes, we are not short on innovation. “We’ve got loads of ideas for solar panels and microgrids. While we have all of these pieces, we don’t have a picture of how they come together to build a new world. For too long, the climate fight has been limited to scientists and policy experts. While we need their skills, we also need so much more. When I survey the field, it’s clear that what we desperately need is more artists.”

      It might be we have all the pieces, just not the connective and compelling narrative.

    9. change our relationship to the physical world – to end an era of profligate consumption by the few that has consequences for the many – means changing how we think about pretty much everything: wealth, power, joy, time, space, nature, value, what constitutes a good life, what matters, how change itself happens

      who's the we here? Just the few (rich countries) or also the many. Then lists aspects systems / ethics and what method of change you think to deploy.

      The Ponzi scheme that is western society is centered as cause and its end the remedy. Vgl [[De externe input cheat 20091015070231]] where I formulate the same. We've run out of being able to ignore [[Geexternaliseerde effecten 20200914204533]]. Is this a [[Ethics of Agency 20201003161155]]?

    10. Perhaps we also need to become better critics and listeners, more careful about what we take in and who’s telling it, and what we believe and repeat, because stories can give power – or they can take it away

      This sounds like crap detection and [[Infostrat Filtering 20050928171301]] What do you amplify, how do you judge sources, how do you shape your info-diet, and are you aware that how/what you share is a feedback loop to those who shared the stuff you're reacting to? Solnit focuses here on the narrative/shape of what you share in response to your intake (in contrast to resharing other people's narratives)

    11. In order to do what the climate crisis demands of us, we have to find stories of a livable future, stories of popular power, stories that motivate people to do what it takes to make the world we need.

      progressive populism? Vgl [[countering the populist narrative 20221103141532]] and https://jarche.com/2022/09/better-stories-for-a-better-world/

    12. adrienne maree brown wrote not long ago that there is an element of science fiction in climate action: “We are shaping the future we long for and have not yet experienced. I believe that we are in an imagination battle.”

      This is how I've read SF for years, both near future and space opera. As mood board and thinking input.

      adrienne maree brown https://en.wikipedia.org/wiki/Adrienne_Maree_Brown in turn inspired by SF author Octavia Butler (have I read her xenogenesis trilogy?)

    1. he stability we observe in the sheer number of disruptive papers and patents suggests that science and technology do not appear to have reached the end of the ‘endless frontier.’

      All this, and in the end a statement that in absolute terms there's stability? Wow, did they miss the demographic factrs at play in scientific community as possible explanation of the relative effect?

    2. given the limits constraining further research, science will be hard-pressed to make any truly profound additions to the knowledge it has already generated. Further research may yield no more great revelations or revolutions but only incremental returns.

      At the same time this is precisely the argument above wrt 19th century. Right when you think you know all, everything gets turned over.

    3. What happens when the cost of a new discovery becomes so high that it simply is not achieved? Horgan saw that day if not already at hand, the certainly right around the corner.

      Kind of like an inverse singularity: a brick wall.

    4. the book’s core idea: We should expect fewer, and less important, scientific discoveries as time goes on. The reasoning behind this was simple. In the beginning, everything was available to discover. Scientists could make a discovery about the scale of the Earth with an upright stick. They could learn about the speed of sound by watching someone chop wood. However, with each passing year, as the big book of facts became more stuffed with learning, the difficulty of making fundamental new discoveries increases. In the 19th century, the electron was discovered by one guy using equipment that might have been found in a high school science lab (or the basement of a wealthy naturalist). To close out the particle zoo with the Higgs Boson took an international effort with an over $4 billion collider.

      Pointing to [[Evolutionair vlak van mogelijkheden 20200826185412]] again and that new disruptions probably have higher thresholds to cross (resources, crossdiscipl teams)

    5. There’s an important precursor to this paper that many media seem to have omitted from this discussion, and that’s the 1996 book, The End of Science, by science journalist John Horgan.

      Book to find.

    6. The bulk of the paper is related to how they determined “disruptiveness” of papers and patent filings (which is where many of those offended by the idea find traction in disputing the overall theme), but the thrust of the conclusion is this: The number of publications has increased, many of those papers are very high quality, some remain disruptive, but many only confirm the status quo. Or at best, they offer new insight that leads to little potential for either scientific or economic impact.

      Again this reads (second hand) more as a quantity-dynamic. Many confirming the status quo btw is also K. (Vgl Edison's '999 ways I established that don't work'.)

    7. Overall, our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.

      Rate of disruption, need to check how this rate is determined. Absolute number of big breaks over time, or relative to scientific production in general (which is when it would be expected to slow with rising production).

    8. We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices or field-specific factors.

      Suggesting it isn't in scientific practice. So what changed very much? Volume of scientists -> volume of publications.

    9. Subsequently, we link this decline in disruptiveness to a narrowing in the use of previous knowledge, allowing us to reconcile the patterns we observe with the ‘shoulders of giants’ view.

      the narrowing maybe a relative one? Or is it really being on a branching but ending path in [[Evolutionair vlak van mogelijkheden 20200826185412]]

    10. We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions. This pattern holds universally across fields and is robust across multiple different citation- and text-based metrics.

      Is this caused by anything in science, or a symptom of the growing scientific community globally, and rising average edu levels? Quantitative phase shifts have qualitative effects.

    11. data they used wasn’t polling of those in the fields, but a survey of patent filings.

      switching here from publications to patents which are a very different beast. Patents are transactions (publishing ideas for temporary market exclusivity) There isn't a necessary path from paper to patent, and def not in all scientific fields. By def patents are more engineering (in the primary meaning) oriented imo, they're about how to (potentially) make things (work). Why's aren't patentable.

    12. while the number of new scientific publications has never been higher, the impact of those publications is constantly declining

      I can see how the volume of publications rising is also result of broader access to scientific disciplines. By def high impact is rare, so the average impact will decline with volume. If only because by def a lower percentage of eyes will ever see a paper.

    13. https://web.archive.org/web/20230116221448/https://www.dailykos.com/stories/2023/1/16/2147067/-Are-we-living-in-the-last-days-of-the-Scientific-Age Discusses a recent Nature article looking at how increasing numbers of new patents (a rightly criticized indicator) deal with ideas of decreasing impact. Conclusion is though that the number of disruptive patents remains high, just that the overall number of patents rises. Meaning perhaps more the democratisation of patenting, or perhaps the end of the utility of patenting, than stalling scientific progress.

      Some points from a 1996 book mentioned vgl [[Evolutionair vlak van mogelijkheden 20200826185412]] wrt scientific progress / increasing niche-specification in the evolutionary plane of possibilities. The book suggests skating to a different place has prohibitive costs and maybe out of reach. Vgl local optimisation in complexity, and what breaking loose from a local optimum takes. Is the loss of the Scientific Age here discussed a needed path into chaos to be able to reach other peaks? Check comments on the Nature article to see if this type of aspect gets discussed.

    1. Please join me in resisting and start helping to curb the hype.

      Call to action: curb the hype.

      Does that ever work? (Vgl when I was recently seen as old and negative simply because of listing a range of (pos and neg) real experiences wrt metaverse from earlier waves of hype for VR, and asking questions that are a lithmus test to determine contextual value to a user.

      One can not participate in hype, and ensuring the hypers are never able to be seen at the same table / level of disucssion as you (it legitimises the entity of lesser status if a key figure debates a more trivial figure) But can one pour cold water on it when others and other outlets do join in? And without being seen as 'just negative' Hype shifts perception of the neg-pos spectrum [[Overton window 20201024155353]] and Trevino scale, which also applies I think in ethics / phil of tech discussions (which in part means circling back to monstertheory).

    2. risks of hyped and harmful technology that is made mainstream at a dazzling speed and on a frightening scale

      speed and mainstreaming are points of contention here, in light of unripe tech, and unprincipled company behind it.

    3. critical thinking skills

      Vgl w 'disinfo innoculation' a la Finland.

    4. In this age of AI, where tech and hype try to steer how we think about “AI” (and by implication, about ourselves and ethics), for monetary gain and hegemonic power (e.g. Dingemanse, 2020; McQuillan, 2022), I believe it is our academic responsibility to resist.

      When hype is used to influence public opinion, there's a obligation to resist. (Vgl [[Crap detection is civic duty 2018010073052]], en [[Progress is civic duty of reflection 20190912114244]]) Also with which realm of [[Monstertheorie 20030725114320]] are we dealing here with this type of response? In the comments on Masto it's partly positioned as monster slaying, but that certainly isn't it. It's warning against monster embracing. I think the responses fall more into monster adaptation than assimiliation, as it aims to retain existing cultural categories although recognising the challenges issued against it. Not even sure the actual LLM is the monster perceived, but its origins and the intentions and values of the company behind it. Placing it outside the Monster realm entirely.

    5. push to make Large Language Models (LLMs), such as ChatGPT, larger and larger creates a “gigantic ecological footprint” with implications for “our planet that are far from beneficial for humankind”. [quotes translated from Dutch to English, original available in footnote 1].

      Additionally the ecological footprint of the tech is problematic (Vgl the blockchain mining activities footprint discussion).

    6. The willingness to provide free labour for a company like OpenAI is all the more noteworthy given (i) what is known about the dubious ideology of its founders known as `Effective Altruism’ (EA) (Gebru, 2022, Torres, 2021), (ii) that the technology is made by scraping the internet for training data without concern for bias, consent, copyright infringement or harmful content, nor for the environmental and social impact of both training method and the use of the product (Abid, Farooqi, & Zou, 2021; Bender et al., 2021; Birhane, Prabhu, & Kahembwe, 2021; Weidinger, et al., 2021), and (iii) the failure of Large Language Models (LLMs), such as ChatGPT, to actually understand language and their inability to produce reliable, truthful output (Bender & Koller, 2020; Bender & Shah, 2022).

      Claims: Doing free labour for OpenAI is problematic. (not expressed: every usage feeds back into the machine and is more free labout put in) Reasons: * OpenAI founders are on the utilitarianism followed ad absurdum end of Effective Altruism. The 'Open' bit is open-washing. * The provenance of training data is ethically suspect (internet scraping), and not controlled for quality. * Externalities aren't taken into account * Social impact of use (including based on faulty output) not considered. * LLMs are still bad at understanding (and routinely fail test that contain imprecise references to other words in the sentence, which humans find easy to solve based on real world knowledge outside the text)

    7. It’s almost as if academics are eager to do the PR work for OpenAI

      buying into the hype equated to doing PR work for OpenAI

    8. prevent that hyped-AI hijacks our attention and dictates our education and examination policies

      Iris van Rooij means to counteract edu sector buying into the AI hype

    1. Dave positions free will as a 1 or 0 thing, and then tends to 0. Is it that binary though? A spectrum (matters of degree across different contexts) would also help explain things.

      Free will is not free of consequences, which is where conditioning kicks in. (and society demanding responsibility or not)

      How about animals? Conditioned by their ecosystem (like us, by ecosystem and culture), yet free to roam which coalesces into patterns (migration, foraging, trails)

      Is emergence then disproving free will? Emergence is the only possible definition of 'they' here. Which lacks intention and planning. Still means you can perceive it as forcing you / hostility compared to you individually.

      Evolution. Your starting point on the evolutionary field of possibilities is determined outside of you, and limits choices/paths, as does every choice made along the way cut off certain paths and brings others in reach. One has limited control there by def, and no control for the first phase of life (what little control as child is there is while ignorant of consequences down the line wrt options).

      This juxtaposes invidivual (having free will or not), and society (imposing full conditioning). Crux of complexity is group level, groups one is part of by conditioning/birth and by choice/seeking out within available pre-conditioned options. There is no influence free place, but doubt it's a prerequisite for free will. Is agency more useful term, as it does apply to groups, and by extension organisations and countries. Bit weird their mention in context of free will as if anthromorphisation is something actual.

    1. Moreover, the decision is fundamentally pointless because it will have zero impact on consumer privacy. Neither Facebook nor Instagram sell user data—they simply use the information on their platform to show users targeted ads. The only change that this decision will cause is that Meta will have to rewrite its privacy policy to use one of the other legal bases provided in the GDPR to operate Facebook and Instagram, including to deliver targeted ads.

      Actually 'delivering targeted ads' based on protected data is inconsistent with the GDPR entirely.

    1. Control + K

      so if I annotate and highlight something at the same time?

    2. I don't presently have plans to expand this into an annotation extension, as I believe that purpose is served by Hypothesis. For now, I see this extension as a useful way for me to save highlights, share specific pieces of information on my website, and enable other people to do the same.

      I wonder if it uses the W3C recommendation for highlighting and annotation though? Which would allow it to interact with other highlighting/annotation results.

      To me highlighting is annotation, though a leightweight form, as the decision to highlight is interacting with the text in a meaningful way. And the pop up box actually says Annotation right there in the screenshot, so I don't fully grasp what distinction James is making here.

    1. Sure, this means that the conversations take place on those platforms, but the source of my content – my words – are still on my site, which I control.

      Kev is equating integration with any service to attempts to increase conversation around a post. That is often true but not always. E.g. I'm looking at AP to increase what own words I am sharing. E.g. AP for limited audience postings, and e.g. RSS for a subset of posting that are unlisted for the general public on my site.

    2. While that discourse is very important, the complexity it would add to the site to manage it, just isn’t worth it in my eyes.

      Valid point Kev makes here. A site should do only what its author needs it to do. I want interaction visible on my site, though I probably will cut down on the facepiles.

    1. Adding The Post Title To My “Reply By Email” Button

      I wonder if that would increase responses to my blog as Kev indicates. There might be those who will respond in e-mail, but not in a public comment. Worth a try.

  7. Dec 2022