1,269 Matching Annotations
  1. Aug 2023
    1. This led me to run a series of psych experiments where my data suggested that people’s ability to be able to navigate 3D VR seems to be correlated with the dominance of certain sex hormones in their system. Folks with high levels of estrogen and low levels of testosterone – many of whom would identify as women – were more likely to get nauseous navigating VR than those who have high levels of testosterone streaming through their body. What was even stranger was that changes to hormonal levels appeared to shape how people respond to these environments.

      estrogen / testosteron levels influence responses to VR environment and increase getting nauseous navigating in VR.

    2. https://web.archive.org/web/20230809191748/http://www.zephoria.org/thoughts/archives/2023/08/06/still-trying-to-ignore-the-metaverse.html

      There are many reasons why Meta's Metaverse is a dud (Vgl https://zylstra.org/blog/2021/11/metaverse-reprise/ and https://www.zylstra.org/blog/2022/02/was-second-life-ahead-or-metaverse-nothing-really-new/ ) but boyd points to a whole other range of reasons: women and men respond entirely different to VR based on hormonal levels.

      Potential antilib [[Making a Metaverse That Matters by Wagner James Au]]

    1. Unlike 20 years ago, the people poised to be early adopters today are those who are most toxic, those who get pleasure from making others miserable. This means that the rollout has to be carefully nurtured

      Interesting observation/postion: current early adopters of new platforms are not motivated by shiny new tech syndrom but are motivated by finding amplification for their toxicity. Sounds intriguing but I wonder about causality and the earlier mentioned norm setting. New platforms may have diff norms they set. Toxicity is an outcome of the norms promoted by tech functionality (amplification/engagement goading) Will that carry over into other things (does it carry over into other non-collapsed contexts e.g. in practice?: sometimes, mostly not I think). Tocivity is probably not intrincis to the people involved, but learned. And can be unlearned, when encountering different social expectations.

    2. I should note that blitzscaling is not the only approach we’re seeing right now. The other (and I would argue wiser) approach to managing dense network formation is through invitation-based mechanisms. Heighten the desire, the FOMO, make participating feel special. Actively nurture the network. When done well, this can get people to go deeper in their participation, to form community.

      This seems a false dichotomy. There are more than two ways to do this, more than 'blitzscaling' and 'invitation-based' (which I have come to see as manipulative and a clear sign to stay away as it makes you the means not the goal right from the start of a platform, talking about norm setting). Federation is e.g. very different (and not even uniform in how it's different from both those options: from open to all to starting from a pre-existing small social graph offline). This like above seems to disregard, despite saying building tools is not the same as building community somewhere above, the body of knowledge about stewarding communities / network that exists outside of tech. Vgl [[Invisible hand of networks 20180616115141]]

    3. context collapse, a term that Alice Marwick and I coined long ago

      huh? Isn't this an 'old' thing from within communication/psychology? I spent quite some time with my therapist in 97/98 discussing why I purposefully avoided context collapse as a kid preventing different circles from overlapping. 2010 is the ref'd paper, I use it in my blog in May 2009 https://www.zylstra.org/blog/2009/05/hate_mailers_un/ (though I may have been aware of boyd or Michael Wesch using it then). Wikipedia https://en.wikipedia.org/wiki/Context_collapse says boyd is credited with coining 'collapsed contexts' (which is both a hedge by WP editors and different from the claim here). Did she already use it when I first encountered her (work) in 2006 during her Phd?

    4. Cuz that’s the thing about social media. For people to devote their time and energy to helping enable vibrancy, they have to gain something from it. Something that makes them feel enriched and whole, something that gives them pleasure (even if at someone else’s pain). Social media doesn’t come to life through military tactics. It comes to life because people devote their energies into making it vibrant for those that are around them. And this ripples through networks.

      boyd here stating what has been a core notion of community stewarding since late 90s knowledge management: participation value to members. (e.g. Wenger 1998/9 and 2002)

  2. Jul 2023
    1. https://web.archive.org/web/20230709085606/https://kolektiva.social/@ophiocephalic/110680030293653277

      Good description of ZAD, zone a defense, not as gatekeeping (keeping others out that would also enjoy what's inside) but as defending a zone (keeping others out to prevent the zone's destruction). ZAD I encountered in Nantes in the area where an airport was planned.

    1. I work in marketing, for my sins. This is mostly why I’m so entirely down on the marketing industry and many of the people who work in it. I also happen to have an MSc in psychology – actual psychology! – with a focus on behaviour change. On day 1 of your class about behaviour change in a science course, you learn that behaviour change is not a simple matter of information in, behaviour out. Human behaviour, and changing it, is big and complex. Meanwhile, on your marketing courses, which I have had the misfortune to attend, the model of changing behaviour is pretty much this: information in, behaviour out.

      Marketing assumes information in means behaviour out, and conveys that in marketing courses. Psychology teaches that behavioural change is not just info in behaviour out, but a complex thing. Marketing has clay feet.

  3. Jun 2023
    1. https://web.archive.org/web/20230625094359/https://orgmode.org/worg/org-syntax.html

      https://braintool.org/2022/04/29/Tools4Thought-should-use-Org-for-interop.html

      Proposal for org-mode syntax as the interoperability standard for tools for thought. The issue with things like markdown and opml is said to be the lack of semantic mark-up. Is that different in org-mode?

    1. https://web.archive.org/web/20230617185715/https://diggingthedigital.com/het-dilemma-van-de-digitale-diversiteit/

      Frank on having a different experience for your site than just a blog timeline.

      Ik herken wat je ze zegt. Ik zou het prettig vinden om meerdere soorten ingangen, tijdslijn, op thema of onderwerp, type content, setjes die onderling linken, etc. te kunnen bieden als een soort spectrum. Met name als voorpagina om niet alleen een blogtijdslijn te bieden aan een toevallige lezer of aan de explorerende lezer. Drie jaar geleden ben ik eens begonnen met een WordPress theme daarvoor. Maar ja, ik kan eigenlijk helemaal geen themes maken. Misschien dat het met Jan Boddez' IndieBlocks nu makkelijker zou gaan, want dan hoef ik in een nieuw theme niet ook nog eens al die IndieWeb dingen te regelen. Maar eens de project notities uit 2020 (toen, want toch thuis) afstoffen voor komend najaar. De zomer wordt dat niks, die is voor lezen.

      Zoals ik https://www.zylstra.org/blog/2020/11/15326/ schreef: The idea is to find a form factor that does not clearly say ‘this is a blog’ or ‘this is a wiki’, but presents a slightly confusing mix of stock and flow / garden and stream, something that shows the trees and the forest at the same time. So as to invite visitors to explore with a sense of wonder, rather than read the latest or read hierarchically. At the back-end nothing will fundamentally change, there still will be blogposts and pages with their current URLs, and the same-as-now feeds for them to subscribe to.

    1. Social software tools are all smaller than us, we control them individually

      Is this my first mention of [[Technologie kleiner dan ons 20160818122905]]? I know I used the concept in my talks back then. Need to relabel my note with correct timestamp.

      Updated [[Technologie kleiner dan ons 20050617122905]]

    1. Overview of how tech changes work moral changes. Seems to me a detailing of [[Monstertheorie 20030725114320]] diving into a specific part of it, where cultural categories are adapted to fit new tech in. #openvraag are the sources containing refs to either Monster theory by Smits or the anthropoligical work of Mary Douglas. Checked: it doesn't, but does cite refs by PP Verbeek and Marianne Boenink, so no wonder there's a parallel here.

      The first example mentioned points in this direction too: the 70s redefinition of death as brain death, where it used to be heart stopped (now heart failure is a cause of death), was a redefinition of cultural concepts to assimilate tech change. Third example is a direct parallel to my [[Empathie verschuift door Infrastructuur 20080627201224]] [[Hyperconnected individuen en empathie 20100420223511]]

      Where Monstertheory is a tool to understand and diagnose discussions of new tech, wherein the assmilation part (both cultural cats and tech get adapted) is the pragmatic route (where the mediation theory of PP Verbeek is located), it doesn't as such provide ways to act / intervene. Does this taxonomy provide agency?

      Or is this another way to locate where moral effects might take place, but still the various types of responses to Monsters still may determine the moral effect?

      Zotero antilib Mechanisms of Techno-moral Change

      Via Stephen Downes https://www.downes.ca/post/75320

    1. https://web.archive.org/web/20230616140838/https://www.theguardian.com/education/2023/jun/16/george-washington-university-professor-antisemitism-palestine-dc

      psychoanalysis was the guided internal journey of individuals, in the nineties CBT displaced this (visible in the sessions I did at the time), and now a new wave of psychoanalysis comes in that doesn't only take the individual as focus, but also the impact of the structures and systems around yourself. That's an interesting evolutionary sketch of the field.

      To me this article is as much about power and generations as it is about a lack of a professional field being able to apply its own expertise to itself.

      culture war as generational war and but also US specific perhaps. Also the culture war seems to be precisely about taking the individual vs the collective influence on the individual. The old guard feeling individually blamed for things that the new guard says is a collective thing to reckon with. Where again the responses of each are seen through the other lens. There's now no way to resolve that easily. Change happens when the old people die said Howard. Seems to be at issue here too.

    1. https://web.archive.org/web/20230613121025/https://www.workfutures.io/p/note-what-do-we-do-when-we-cant-predict

      Stowe says the 'unpredictability' e.g. investors see comes down that there's no way to assess risk in the global network created complexity. Points to older piece on uncertainty risk and ambiguity. https://www.sunsama.com/blog/uncertainty-risk-and-ambiguity explore.

      I would say that in complexity you don't try to predict the future, as that is based on linear causal chains of the knowable an known realms, you try to probe the future, running multiple small probes (some contradictory) and feed those that yield results.

    1. In an ever more unequal world, it is perhaps not surprising that we are splitting into news haves and have-nots. Those who can afford and are motivated to pay for subscriptions to access high-quality news have a wealth of choices: newspapers such as The Times, The Washington Post, The Wall Street Journal and The Financial Times compete for their business, along with magazines such as The New Yorker and The Atlantic. Niche subscription news products serving elite audiences are also thriving and attracting investment — publications like Punchbowl News, Puck and Air Mail. The people who subscribe to these publications tend to be affluent and educated.It bodes ill for our democracy that those who cannot pay — or choose not to — are left with whatever our broken information ecosystem manages to serve up, a crazy quilt that includes television news of diminishing ambition, social media, aggregation sites, partisan news and talk radio. Yes, a few ambitious nonprofit journalism outlets and quality digital news organizations remain, but they are hanging on by their fingernails. Some news organizations are experimenting with A.I.-generated news, which could make articles reported and written by actual human beings another bauble for the Air Mail set, along with Loro Piana loafers and silk coats from the Row.

      Opinion piece on how news is becoming a have/have-not thing. I assume it was always thus, with the exception of public TV/radio news broadcasting and then the web. So how did 'we' deal with it then?

    1. https://web.archive.org/web/20230612101920/https://thefugue.space/thoughts/the-glimmer

      Spatial computing

      what of early insights wrt [[Ambient Findability by Peter Morville]] 2006, and my conclusion 2008 that though adding an info layer while interacting in the physical world was key, we put it all in our pocket. I doubt it will end up as ski goggles on our head much.

      Via [[Boris Mann]] https://blog.bmannconsulting.com/2023/06/08/kharis-oconnell-has.html

    1. I don’t think we have them, except piecemeal and by chance, or through the grace of socially gifted moderators and community leads who patch bad product design with their own EQ

      indeed. Reminds me of Andrew Keen 2009 in Hamburg raging about the lack of community in socmed and then stating, "except Twitter, that's a real community". Disqualifying himself entirely in a single sentence and being laughed at by the audience at Next09. Taking community stewarding aspects as starting point for tools would yield very different results. [[Communitydenken Wenger 20200924110143]]

    2. All this unmobilized love

      Good title, puts humanity front and center in socsoft discussion. Key wrt [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]]

    3. But we also need new generations of user-accountable institutions to realize the potential of new tech tools—which loops back to what I think Holgren was writing toward on Bluesky. I think it’s at the institutional and constitutional levels that healthier and more life-enhancing big-world tools and places for community and sociability will emerge—and are already emerging

      institutionalising as a way for socsoft to become sustainable, other than through for profit structures that have just one aim. Vgl [[2022 Public Spaces Conference]], I have doubts as institutions are slow by design which is what gives them their desirable stability. Vgl [[Invisible hand of networks 20180616115141]] vs markets.

      Also : generations are institutions too. It is needed to repeat these things to new gens, as they take what is currently there as given. Is currently true for things like open data too.

    4. I’ll be speaking with and writing about people working on some of the tools and communities that I think help point ways forward—and with people who’ve built fruitful, immediately useful theories and practices

      Sounds interesting. Add to feeds. Wrt [[Invisible hand of networks 20180616115141]] scaling comes from moving sideways, repetition and replication. And that takes gathering and sharing (through the network) of examples. Vgl [[OurData.eu Open Data Voorbeelden 20090720142847]] but for civic tech, socsoft? What would it look like?

    5. The big promise of federated social tools is neither Mastodon (or Calckey or any of the other things I’ve seen yet) nor the single-server Bluesky beta—it’s new things built in new ways that use protocols like AT and ActivityPub to interact with the big world.

      Vgl [[Build protocols not platforms 20190821202019]] I agree. Kissane says use protocols in new ways for new tools, starting from the premise of actually social software.

    6. we’ve seen weirdly little experimentation with social forms at scale

      yes, we call it social media these days, and the focus is on media, not social. Yet [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]], meaning we should design such tools starting from human social dynamics.

    7. Where are the networks that deeply in their bones understand hospitality vs. performance, safe-to vs. safe-from, double-edged visibility, thresholds vs. hearths, gifts vs. barter, bystanders vs. safety-builders, even something as foundational as power differentials?

      yes!

    8. Even most of the emergent gestures in our interfaces are tweaks on tech-first features—@ symbols push Twitter to implement threading, hyperlinks eventually get automated into retweets, quote-tweets go on TikTok and become duets. “Swipe left to discard a person” is one of a handful of new gestures, and it’s ten years old.

      Author discusses specific socially oriented interface functions (left/right swiping, @-mentions) that are few and old. There's also the personal notes on new connections in Xing and LinkedIn (later), imo. And the groupings/circles in various platforms. Wrt social, adding qualitative descriptions to a connection to be able to do pattern detection e.g. would be interesting, as is moving beyond just hub with spokes (me and my connections) and allowing me to add connections I see between people I'm connected to. All non-public though, making it unlikely for socmed. Vgl [[Personal CRM as a Not-LinkedIn – Interdependent Thoughts 20210214170304]]

    9. https://web.archive.org/web/20230612090744/https://erinkissane.com/all-this-unmobilized-love

      Reminds me of https://www.zylstra.org/blog/2006/09/barcamp_brussel/ #2006/09/24 and the session I did with [[Boris Mann]] on 'all the things I need from social media, they don't provide yet' phrasing [[People Centered Navigation 20060930163901]]. http://barcamp.org/w/page/400567/BarCampBrussels

    1. https://web.archive.org/web/20230609140440/https://techpolicy.press/artificial-intelligence-and-the-ever- receding-horizon-of-the-future/

      Via Timnit Gebru https://dair-community.social/@timnitGebru/110498978394074048

    2. As the EU heads toward significant AI regulation, Altman recently suggested such regulation might force his company to pull out of Europe. The proposed EU regulation, of course, is focused on copyright protection, privacy rights, and suggests a ban on certain uses of AI, particularly in policing — all concerns of the present day. That reality turns out to be much harder for AI proponents to confront than some speculative future

      While wrongly describing the EU regulation on AI, author rightly points to the geopolitical reality it is creating for the AI sector. AIR is focused on market regulation, risk mitigation wrt protection of civic rights and critical infrastructure, and monopoly-busting/level playing field. Threatening to pull out of the EU is an admission you don't want to be responsible for your tech at all. And it thus belies the ethical concerns voiced through proximate futurising. Also AIR is just one piece of that geopolitical construct, next to GDPR, DMA, DSA, DGA, DA and ODD which all consistently do the same things for different parts of the digital world.

    3. In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist, and the other a tech industry veteran, were examining emerging tech developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and general quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.

      Proximate Future is a future that is 'nearly here' but never quite gets here. Ref posits this is a way to distract from issues around a tech now and thus lets technologists dodge responsibility and accountability for the now, as everyone debates the issues of a tech in the near future. It allows the technologists to set the narrative around the tech they develop. Ref: [[Divining a Digital Future by Paul Dourish Genevieve Bell]] 2010

      Vgl the suspicious call for reflection and pause wrt AI by OpenAI's people and other key players. It's a form of [[Ethics futurising dark pattern 20190529071000]]

      It may not be a fully intentional bait and switch all the time though: tech predictions, including G hypecycle put future key events a steady 10yrs into the future. And I've noticed when it comes to open data readiness and before that Knowledge management present vs desired [[Gap tussen eigen situatie en verwachting is constant 20071121211040]] It simply seems a measure of human capacity to project themselves into the future has a horizon of about 10yrs.

      Contrast with: adjacent possible which is how you make your path through [[Evolutionair vlak van mogelijkheden 20200826185412]]. Proximate Future skips actual adjacent possibles to hypothetical ones a bit further out.

    4. Looking to the “proximate future,” even one as dark and worrying as AI’s imagined existential threat, has some strategic value to those with interests and investments in the AI business: It creates urgency, but is ultimately unfalsifiable.

      Proximate future wrt AI creates a fear (always useful dark patterns wrt forcing change or selling something) that always remains unfalsifiable. Works the other way around to, as stalling tactic (tech will save us). Same effect.

    5. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

      What is missing here? The one thing with the highest probability as we are already living the impacts: climate. The phrase itself is not just a strategic bait and switch for the AI businesses, but also a more blatant bait and switch wrt climate politics.

    1. Enthusiasm about Apple's VIsion Pro. Rightly points out we've had 3D software for 3 decades (From Traveler, Wolfenstein3D through SL and now Roblox etc.) But skiing goggles do not a lifestyle make like Apple's ipod, iphone and watch did. It has better capabilities but there's no fundamental difference with the Oculus Rift et al, and the various versions of such devices lying unused gathering dust in my attic. Neck-RSI wave incoming if it does take off. Would you want to be seen wearing one in public? AR and MR are powerful, VR won't be mainstream imo unless as general addiction as per SF tropes.

    1. [[Jaan Tallinn]] is connected to Nick Bostrom wrt the risks of AI / other existential risks, which is problematic. It may be worthwile to map out these various institutions, donors and connections between them. This to have a better grasp of influences and formulate responses to the 'tescreal' bunch. Vgl [[2023-longtermism-an-odd-and-peculiar-ideology]] where I observe same.

    1. We are nowhere near having a self-driving cars on our roads, which confirms that we are nowhere near AGI.

      This does not follow. The reason we don't have self driving cars is because the entire effort is car based not physical environment based. Self driving trains are self driving because of rails and external sensors and signals. Make rails of data, and self driving cars are like trains. No AI, let alone AGI needed. Self driving cars as indicator for AGI make no sense. Vgl https://www.zylstra.org/blog/2015/10/why-false-dilemmas-must-be-killed-to-program-self-driving-cars/ and [[Triz denken in systeemniveaus 20200826114731]]

  4. May 2023
    1. Ooit in NB bij de weg vragen zei een ouder iemand 'straks rechts de macadamweg op', ipv asfaltweg. Macadam roads, named after MacAdam, are a 18th/19th road building concept of layers of stones in decreasing sizes (the top layer smaller than the average wheel), enabling easier road building and maintenance. Tar was used sometimes to reduce dust,, esp after the intro of cars who had much wider tires than carriage wheels and created more dust. Until the top layer stones and the tar were pre-mixed as asphalt. Tarmac= tarred-macadam

      Vgl https://hypothes.is/a/h9luNPx5Ee2ZnxcNCCTotA

    1. Interesting examples of shrinking travel time (and costs) in the UK in the 18th and 19th centuries. These examples fit [[De 19e eeuwse infrastructuren 20080627201224]] [[Sociale effecten van 19e eeuwse infra 20080627201425]] I described at Reboot 10, 2008, where the scale of novel infra allowed a shift of regional perspectives to the aggregation level of a nation state. Stross compares travel times of 18th century roads and 19th century rail to the advent of mass flight in the 20th, which is similar in time/cost. It's also a qualitative shift away from nation to mass and global (but with the nation as go-between and shorthand)

    1. Dave Pollard writes about types of silence and its cultural role in different situations. Prompted by a K-cafe by David Gurteen. Great to see such old network connections still going strong.

      Book mentioned [[The Great Unheard at Work by Mark Cole and John Higgins]] something for the antilib re power assymmetries?

    1. Chatti notes that Connectivism misses some concepts, which are crucial for learning, such as reflection, learning from failures, error detection and correction, and inquiry. He introduces the Learning as a Network (LaaN) theory which builds upon connectivism, complexity theory, and double-loop learning. LaaN starts from the learner and views learning as the continuous creation of a personal knowledge network (PKN).[18]

      Learning as a Network LaaN and Personal Knowledge Network PKN , do these labels give me anything new?

      Mohamed Amine Chatti: The LaaN Theory. In: Personalization in Technology Enhanced Learning: A Social Software Perspective. Aachen, Germany: Shaker Verlag, 2010, pp. 19-42. http://mohamedaminechatti.blogspot.de/2013/01/the-laan-theory.html I've followed Chatti's blog in the past I think. Prof. Dr. Mohamed Amine Chatti is professor of computer science and head of the Social Computing Group in the Department of Computer Science and Applied Cognitive Science at the University of Duisburg-Essen. (did his PhD at RWTH in 2010, which is presumably how I came across him, through Ralf Klamma)

    1. Dave Troy is a US investigative journalist, looking at the US infosphere. Places resistance against disinformation not as a matter of factchecking and technology but one of reshaping social capital and cultural network topologies.

      Early work by Valdis Krebs comes to mind vgl [[Netwerkviz en people nav 20091112072001]] and how the Finnish 'method' seemed to be a mix of [[Crap detection is civic duty 2018010073052]] and social capital aspects. Also re taking an algogen text as is / stand alone artefact vs seeing its provenance and entanglement with real world events, people and things.

    1. The linked Mastodon thread gives a great example of using Obsidian (but could easily have been Tinderbox of any similar tool) for a journalism project. I can see me do this for some parts of my work too. To verify, see patterns, find omissions etc. Basically this is what Tinderbox is for, while writing keep track of characters, timelines, events etc.

    1. This simple approach to avoiding bad decisions is an example of second-level thinking. Instead of going for the most immediate, obvious, comfortable decision, using your future regrets as a tool for thought is a way to ensure you consider the potential negative outcomes.

      Avoiding bad decisions isn't the same as making a constructive decision though. This here is more akin to postponed gratification.

    2. This visualisation technique can be used for small and big decisions alike. Thinking of eating that extra piece of cake? Walk yourself through the likely thoughts of your future self. Want to spend a large sum of money on a piece of tech you’re not sure yet how you will use? Think about how your future self will feel about the decision

      Note that these are examples that imply that using regret of future self in decision making is mostly for deciding against a certain action (eat cake, buy new toy).

    3. Instead of letting your present self make the decision on their own, ignoring the experience of your future self who will need to deal with the consequences later, turn the one-way decision process into a conversation between your present and future self.

      As part of decision making involve a 'future self' so that different perspective(s) can get taken into account in a personal decision on an action.

    4. Bring your future self in the decision-making process

      Vgl Vinay Gupta's [[Verantwoording aan de kinderen 20200616102016]] as a way of including future selves, by tying consequence evalution to the human rights of children.

    5. In-the-moment decisions have a compound effect: while each of them doesn’t feel like a big deal, they add up overtime.

      Compounding plays a role in any current decision. Vgl [[Compound interest van implementatie en adoptie 20210216134309]] [[Compound interest of habits 20200916065059]]

    6. temporal discounting. The further in the future the consequences, the least we pay attention to them

      Temporal discounting: future consequences are taken into account as an inverse of time. It's based on urgency as a survival trait.

    1. Agent-regret seems a useful term to explore. Also in less morally extreme settings than the accidental killing in this piece.

    1. New to me form of censorship evasion: easter egg room in a mainstream online game that itself is not censored. Finnish news paper Helsingin Sanomat has been putting their reporting on the Russian war on Ukraine inside a level of online FPS game Counter Strike, translated into Russian. This as a way to circumvent Russian censorship that blocks Finnish media. It saw 2k downloads from unknown geographic origins, so the effect might be very limited.

    1. After 29 billion USD in 2 yrs, Metaverse is still where it was and where Second Life already was in 2003 (Linden Labs and their product Second Life still exist and have been profitable since their start.) I warned a client about jumping into this stuff that Meta while the talk and the walk were not a single thing beyond capabilities that have existed for two decades. https://www.zylstra.org/blog/2022/02/was-second-life-ahead-or-metaverse-nothing-really-new/ en https://www.zylstra.org/blog/2021/11/metaverse-reprise/ Good thing they didn't change their name to anything related .....

    1. Where are the thinkers who always have “a living community before their eyes”?

      I suspect within the living community in question. The scientific model of being an outside observer falls flat in a complex environment, as any self-styled observer is part of it, and can only succeed by realising that. Brings me to action research too. If they're hard to find from outside such a living community that's probably because they don't partake in the academic status games that run separate from those living communities. How would you recognise one if you aren't at least yourself a boundary spanner to the living community they are part of?

    2. For intellectuals of this sort, even when they were writing learned tomes in the solitude of their studies, there was always a living community before their eyes

      This quote is about early Christian bishops from The Spirit of Early Christian Thought by Robert Wilken. Not otherwise of interest to me, except this quote that Ayjay lifts from it. 'Always a living community before their eyes' is I realise my take on pragmatism. Goes back to [[Heinz Wittenbrink]] when he wrote about my 'method' in the context of #stm18 https://www.zylstra.org/blog/2018/09/heinz-on-stm18/

    1. Another downside to using Gutenberg’s sidebar panels is that, as long as I want to keep supporting the classic editor, I’ve basically got to maintain two copies of the same code, one in PHP and another in JavaScript.

      Note to self: getting into WP Gutenberg is a shift deeper into JS and less PHP. My usually entry into creating something for myself is to base it on *AMP (MAMP now) so I can re-use what I have in PHP and MySQL as a homecook.

    1. The amount of EVs in Norway is impacting air quality ('we have solved the NOx issue' it says) in Oslo. Mentions electrified building machinery also reducing noise and NOx on building sites. This has been a long time coming: in [[Ljubljana 2013]] there was this Norwegian guy who told me EVs had started leading new car sales. via Bryan Alexander.

      https://web.archive.org/web/20230509045023/https://www.nytimes.com/2023/05/08/business/energy-environment/norway-electric-vehicles.html

    1. https://web.archive.org/web/20230507143729/https://ec.europa.eu/commission/presscorner/detail/en/ip_23_2413

      The EC has designated the first batch of VLOP and VLOSE under the DSA

      consultation on data access to researchers is opened until 25 May. t:: need to better read Article 41? wrt this access. Lots of conspiracytalk around it re censorship, what does the law say?

    1. European digital infrastructure consortia are as of #2022/12/14 a new legal entity. Decision (EU) 2022/2481 of 14 December 2022 establishing the Digital Decade Policy Programme 2030

      Requirement is that Member States may implement a multi-country project by means of an EDIC. The EC will than create them as legal entity by the act of an EC decision on the consortium funding. There is a public register for them.

      No mention of UBO (although if members are publshed, those members will have UBO registered).

    1. Amazon has a new set of services that include an LLM called Titan and corresponsing cloud/compute services, to roll your own chatbots etc.

    1. Databricks is a US company that released Dolly 2.0 an open source LLM.

      (I see little mention of stuff like BLOOM, is that because it currently isn't usable, US-centrism or something else?)

    1. What Obs Canvas provides is a whiteboard where you can add notes, embed anything, create new notes, and export of the result.

      Six example categories of using Canvas in Obsidian. - Dashboard - Create flow charts - Mindmaps - Mapping out ideas as Graph View replacement - Writing, structure an article ([[Ik noem mijn MOCs Olifantenpaadjes 20210313094501]]) - Brainstorming (also a Graph View replacement)

      I have used [[Tinderbox]] as canvas / outliner (as it allows view-switch between them) for dashboards mostly, as well as for braindumping and then mapping it for ideas and patterns.

      Canvas w Excalibur may help escape the linearity of a note writing window (atomic notes are fine as linear texts)

    1. I have decided that the most efficient way to develop a note taking system isn’t to start at the beginning, but to start at the end. What this means, is simply to think about what the notes are going to be used for

      yes. Me: re-usable insights from project work, exploring defined fields of interest to see adjacent topics I may move into or parts to currently focus on, blogposts on same, see evolutionary patterns in my stuff.

      Btw need to find a diff term than output, too much productivity overtones. life isn't 'output', it's lived.

    2. seriously considering moving my research into a different app, or vault to keep it segregated from the slip box

      ? the notes are the research/learning, no? Not only a residue of it. Is this a mix-up between the old stock and flow disc in (P)KM and the sense it needs to be one or the other? Both! That allows dancing with it.

    1. Kate Darling wrote a great book called The New Breed where she argues we should think of robots as animals – as a companion species who compliments our skills. I think this approach easily extends to language models.

      Kate Darling (MIT, Econ/Law from Uni Basel and ETH ZH) https://en.wikipedia.org/wiki/Kate_Darling http://www.katedarling.org/ https://octodon.social/@grok

      antilibrary add [[The New Breed by Kate Darling]] 2021 https://libris.nl/boek?authortitle=kate-darling/the-new-breed--9781250296115#

      Vgl the 'alloys' in [[Meru by S.B. Divya]]

    2. Language models are very good at some things humans are not good at, such as search and discovery, role-playing identities/characters, rapidly organising and synthesising huge amounts of data, and turning fuzzy natural language inputs into structured computational outputs.And humans are good at many things models are bad at, such as checking claims against physical reality, long-term memory and coherence, embodied knowledge, understanding social contexts, and having emotional intelligence.So we should use models to do things we can’t do, not things we’re quite good at and happy doing. We should leverage the best of both kinds of “minds.”

      The Engelbart perspective on how models can augment our cognitive abilities. Machines for search/discovery (of patterns I'd add, and novel outliers), role play (?, NPCs?, conversational partner Luhmann like, learning buddy?), structuring, lines of reasoning, summaries. (Of the last, those may actually be needed human work, go from the broader richer to the summarised outline as part of the internalisation process in learning).

      Human: access to reality, social context, emotional intelligence, access to reality, longterm memory (machines can help here too obvs), embodied K. And actual real world goals / purposes!

    3. Making these models smaller and more specialised would also allow us to run them on local devices instead of relying on access via large corporations.

      this. Vgl [[CPUs, GPUs, and Now AI Chips]] hardware with ai on them. Vgl [[Everymans Allemans AI 20190807141523]]

    4. They're just interim artefacts in our thinking and research process.

      weave models into your processes not shove it between me and the world by having it create the output. doing that is diminishing yourself and your own agency. Vgl [[Everymans Allemans AI 20190807141523]]

    5. One alternate approach is to start with our own curated datasets we trust. These could be repositories of published scientific papers, our own personal notes, or public databases like Wikipedia.We can then run many small specialised model tasks over them.

      Yes, if I could run my own notes of 3 decades or so on an LLM locally (where it doesn't feed the general model), that I would do instantly.

    6. The question I want everyone to leave with is which of these possible futures would you like to make happen? Or not make happen?
      1. Passing the reverse Turing test
      2. Higher standards, higher floors and ceilings
      3. Human centipede epistemology (ugh what an image)
      4. Meatspace premium
      5. Decentralised human authentication
      6. The filtered web

      Intuitively I think 1, 4, and 6 already de facto exist in the pre-generative AI web, and will get more important. Tech bros will go all in on 5, and I do see a role for it (e.g. to vouch that a certain agent acts on my behalf). I can see the floor raising of 2, and the ceiling raising too, but only if it is a temporary effect to a next 'stable' point (or it will be a race we'll loose), grow sideways not only up). Future 3 is def happening in essence, but it will make the web useless so there's a hard stop to this scenario, at high societal cost. Human K as such isn't dependent on the web or a single medium, and if it all turns to ashes, other pathways will come up (which may again be exposed to the same effect though)

    7. A more ideal form of this is the human and the AI agent are collaborative partners doing things together. These are often called human-in-the-loop systems.

      collaborative is different from shifting the locus of agency to the human, it implies shared agency. Also human in the loop I usually see used not for agency but for control (final decision is a human) and hence liability. (Which is often problematic because the human is biased to accept conclusions presented to them. ) Meant as safeguard only, not changing the role of the model agent, or intended to shift agency.

    8. I’m on Twitter @mappletonsI’m sure lots of people think I’ve said at least one utterly sacrilegious and misguided thing in this talk.You can go try to main character me while Twitter is still a thing.

      Ha! :D

    9. I tried to come up with three snappy principles for building products with language models. I expect these to evolve over time, but this is my first passFirst, protect human agency. Second, treat models as reasoning engines, not sources of truth And third, augment cognitive abilities rather than replace them.

      Use LLM in tools that 1. protect human agency 2. treat models as reasoning engines, not source of truth / oracles 3. augment cog abilities, no greedy reductionism to replace them

      I would not just protect human agency, which turns our human efforts into a preserve, LLM tools need to increase human agency (individually and societally) 3 yes, we must keep Engelbarting! lack of 2 is the source of the hype balloon we need to pop. It starts with avoiding anthromorphizing through our idiom around these tools. It will be hard. People want their magic wand, not the colder realism of 2 (you need to keep sorting out your own messes, but with a better shovel)

    10. At this point I should make clear generative AI is not the destructive force here. The way we’re choosing to deploy it in the world is. The product decisions that expand the dark forestness of the web are the problem.So if you are working on a tool that enables people to churn out large volumes of text without fact-checking, reflection, and critical thinking. And then publish it to every platform in parallel... please god, stop.So what should you be building instead?

      tech bro's will tech bro, in short. I fully agree, I wonder if this one sentence is enough to balance the entire talk until now not challenging the context of these tool deployments, but only addressing the symptoms and effects it's causing?

    11. We will eventually find it absurd that anyone would browse the “raw web” without their personal model filtering it.

      yes, it already is that way in effect.

    12. In the same way, very few of us would voluntarily browse the dark web. We’re quite sure we don’t want to know what’s on it.

      indeed, that's what it currently looks like. However....I would not mind my agents going over the darkweb as precaution or as check for patterns. At issue is that me doing that personally now takes way too much time for the small possibility I catch something significant. If I can send out agents the time spent wouldn't matter. Of course at scale it would remove the dark web one more step into the dark, as when all send their agents the darkweb is fully illuminated.

    13. We will have to design this very carefully, or it'll give a whole new meaning to filter bubbles.

      Not just bubble, it will be the FB timeline. Key here is agency, and design for human biases. A model is likely much better than I to manage the diversity of sources for me, if I give it a starting point myself, or to see which outliers to include etc. Again I think it also means moving away from single artefacts. Often I'm not interested in what everyone is saying about X, but am interested in who is talking about X. Patterns not singular artefacts. See [[Mijn ideale feedreader 20180703063626]]

    14. I expect these to be baked into browsers or at the OS level.These specialised models will help us identify generated content (if possible), debunk claims, flag misinformation, hunt down sources for us, curate and suggest content, and ideally solve our discovery and search problems.

      Appleton suggests agents to fact check / filter / summarise / curate and suggest (those last two are more personal than the others, which are the grunt work of infostrats) would become part of your browser. Only if I can myself strongly influence what it does (otherwise it is the FB timeline all over again!)

      If these models become part of the browser, do we still need the browser as a metaphor for a window on the web, or surfing the net? Why wouldn't those models come up with whatever they grabbed from the web/net/darkweb in the right spot in my own infostrats? The browser is itself not a part of my infostrats, it's the starting point of it, the viewer on the raw material. Whatever I keep from browsing is when PKM starts. When the model filters / curates why not put that in the right spots for me to start working with it / on it / processing it? The model not as part of the browser, but doing the actual browsing, an active agent going out there to flag patterns of interest (based on my prefs/current issues etc) and organising it for me for my next steps? [[Individuele software agents 20200402151419]]

    15. Those were all a bit negative but there is some hope in this future.We can certainly fight fire with fire.I think it’s reasonable to assume we’ll each have a set of personal language models helping us filter and manage information on the web

      Yes, agency at the edges. Ppl running their own agents. Have your agents talk to my agents to arrange a meeting etc. That actually frees up time. Have my agent check out the context and background of a text to judge whether it's a human author or not etc. [[Persoonlijke algoritmes als agents 20180417200200]] [[Individuele software agents 20200402151419]]

    16. People will move back to cities and densely populated areas. In-person events will become preferable.

      Ppl never stopped moving into cities. Cities are an efficient form human organisation. [[De stad als efficientie 20200811085014]]

      In person events have always been preferable because we're human. Living further away with online access has mitigated that, but not undone it.

    17. Once two people, they can confirm the humanity of everyone else they've met IRL. Two people who know each of these people can confirm each other's humanity because of this trust network.

      ssl parties etc. Threema. mentioned above. Catfish! Scale is an issue in the sense that social distance will remain social distance, so it still leaves you with the question how to deal with something that is from a far away social distance (as is an issue on the web now: how we solve it is lurking / interacting and then when the felt distance is smaller go to IRL)

    18. As we start to doubt all “people” online, the only way to confirm humanity is to meet offline over coffee or a drink.

      this is already common for decades, not because of doubt, but because of being human. My blogging since 2002 has created many new connections to people ('you imaginary friends' the irl friends of a friend call them teasingly), and almost immediately there was a shared need felt to meet up in person. Online allowed me to cast a wider net for connections, but over time that was spun into something IRL. I visited conferences for this, organised conferences for it, traveled to people's homes, many meet-ups, our birthday unconferences are also a shape of this. Vgl [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]] Dopplr serviced this.

    19. Next, we have the meatspace premium.We will begin to preference offline-first interactions. Or as I like to call them, meatspace interactions.

      meat-space premium, chuckle.

    20. study done this past December to get a sense of how possible this is: Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers" – Catherine Gao, et al. (2022)Blinded human reviewers were given a mix of real paper abstracts and ChatGPT-generated abstracts for submission to 5 of the highest-impact medical journals.

      I think these types of tests can only result in showing human failing at them. Because the test is reduced to judging only the single artefact as a thing in itself, no context etc. That's the basic element of all cons: make you focus narrowly on something, where the facade is, and not where you would find out it's fake. Turing isn't about whether something's human, but whether we can be made to believe it is human. And humans can be made to believe a lot. Turing needs to keep you from looking behind the curtain / in the room to make the test work, even in its shape as a thought experiment. The study (judging by the sentences here) is a Turing test in the real world. Why would you not look behind the curtain? This is the equivalent of MIT's tedious trolley problem fixation and calling it ethics of technology, without ever realising that the way out of their false dilemma's is acknowledging nothing is ever a di-lemma but always a multi-lemma, there are always myriad options to go for.

    21. Takes the replication crisis to a whole new level.Just because words are published in journals does not make them true.

      Agreed, still this was true before generative AI too. There's a qualitative impact to be expected from this quantitative shift [[Kwantiteit leidt tot kwaliteit 20201211155505]], and it may well be the further/complete erosion of scientific publishing in its current form. Which likely isn't bad, as it is way past its original purpose already: making dissemination cheaper so other scientists can build on it. Dissemination has no marginal costs attached anymore since digitisation. Needs a new trusted human system for sharing publications though, where peer network precedes submission of things to a pool of K.

    22. if content generated from models becomes our source of truth, the way we know things is simply that a language model once said them. Then they're forever captured in the circular flow of generated information

      This is definitely a feedback loop in play, as already LLMs emulate bland SEO optimised text very well because most of the internet is already full of that crap. It's just a bunch of sites, and mostly other sources that serve as source of K though, is it not? So the feedback loop exposes to more people that they shouldn't see 'the internet' as the source of all truth? And is this feedbackloop not pointing to people simply stopping to take this stuff in (the writing part does not matter when there's no reader for it)? Unless curated, filtered etc by verifiable human actors? Are we about to see personal generative agents that can do lots of pattern hunting for me on my [[Social Distance als ordeningsprincipe 20190612143232]] en [[Social netwerk als filter 20060930194648]]

    23. We can publish multi-modal work that covers both text and audio and video. This defence will probably only last another 6-12 months.

      Multi-modal output can for now still suggest there's a human at work, not a generative agent. But multi-modal output can soon if not already also be generated. This still seems to focus on the output being the thing authenticated to identify human making. output that is connected to other generated output. There's still no link to things outside the output, into the authors life e.g. Can one fake the human process towards output, which is not a one-off thing (me writing this in a certain way), but a continuous and evolving thing (me writing this in a certain way as part of a certain information process, connected to certain of my work processes etc.). Seen from processes multi-modal output isn't a different media format, it is also work results, projects created, agency in the physical world. In those processes all output is an intermediate result. Because of those evolving processes my [[Blogs als avatar 20030731084659]]. Vgl [[Kunst-artefact is (tussen)uitkomst proces 20140505070232]] There was this article about an artist I can't find back that saw all his outputs over time as intermediate and expression of one narrative. This https://www.flickr.com/photos/tonz/52849988531/in/datetaken/ comes to mind to. Provenance and entanglement as indicators of authenticity.

    24. But some people will realise they shouldn’t be letting language models literally write words for them. Instead, they'll strategically use them as part of their process to become even better writers.They'll integrate them by using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.

      This hints towards prompt-engineering, and the role of prompts in human interaction itself [[Prompting skill in conversation and AI chat 20230301120740]]

      High Q use of generative AI will be about where in a creative / work process you employ to what purpose. Not in accepting the current face presented to us in e.g. chatGPT: give me an input and I'll give you an output. This in turn requires an understanding of one's own creative work processes, and where tools can help reduce friction (and where the friction is the cognitive actual work and must not be taken out)

    25. Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.

      If you use LLMs to improve your mediocre writing it will help. If you use it to outsource too much of your own cognitive work it will get you the bland SEO texts the LLMs were trained on and the result will be more mediocre. Greedy reductionism will get punished.

    26. This raises both the floor and the ceiling for the quality of writing.

      I wonder about reading after this entire section about writing. Why would I ever bother reading generated texts (apart from 'anonymous' texts like manuals? It does not negate the need to be able to identify a human author, on the contrary, but it would also make even the cheapest way of generating too costly if noone will ever read it or act upon it. Current troll farming has effect because we read it, and still assume it's human written and genuine. As soon as that assumption is fully eroded whatever gets generated will not have impact, because there's no reader left to be impacted. The current transitional assymmetry in judging output vs generating it is costly to humans, people will learn to avoid that cost. Another angle is humans pretending to be the actual author of generated texts.

    27. And lastly, we can push ourselves to do higher quality writing, research, and critical thinking. At the moment models still can't do sophisticated long-form writing full of legitimate citations and original insights.

      Is this not merely entering an 'arms race' against our own tools? With the rat race effect of higher demands over time?

      What about moving sideways not up? Bringing in the richness of the layering of our (internal) reality and lives? The entire fabric that makes up our lives, work, communities, societies, indicately more richly in our artefacts. Which is where my sense of beauty is [[Schoonheidsbegrip 20151023132920]] as [[Making sense is deeply emotional 20181217130024]]

    28. On the new web, we’re the ones under scrutiny. Everyone is assumed to be a model until they can prove they're human.

      On a web with many generative agents, all actors are going to be assumed models until it is clear they're really human.

      Maggie Appleton calls this 'passing the reverse Turing test'. She suggests using different languages than English, insider jargon etc, may delay this effect by a few months at most (and she's right, I've had conversations with LLMs in several languages now, and there's no real difference anymore with English as there was last fall.)

    29. When you read someone else’s writing online, it’s an invitation to connect with them. You can reply to their work, direct message them, meet for coffee or a drink, and ideally become friends or intellectual sparring partners. I’ve had this happen with so many people. Highly recommend.There is always someone on the other side of the work who you can have a full human relationship with.Some of us might argue this is the whole point of writing on the web.

      The web is conversation (my blog def is), texts are a means to enter into a conversation, connection. For algogens the texts are the purpose (and human time spend evaluating its utility and finding it generated an externalised cost, assymmetric as an LLM can generate more than one can ever evaluate for authenticity). Behind a generated text there's no author to connect to. Not in terms of annotation (cause no author intention) and not in terms of actual connection to the human behind the text.

    30. This clearly does not represent all human cultures and languages and ways of being.We are taking an already dominant way of seeing the world and generating even more content reinforcing that dominance

      Amplifying dominant perspectives, a feedback loop that ignores all of humanity falling outside the original trainingset, which is impovering itself, while likely also extending the societal inequality that the data represents. Given how such early weaving errors determine the future (see fridges), I don't expect that to change even with more data in the future. The first discrepancy will not be overcome.

    31. This means they primarily represent the generalised views of a majority English-speaking, western population who have written a lot on Reddit and lived between about 1900 and 2023.Which in the grand scheme of history and geography, is an incredibly narrow slice of humanity.

      Appleton points to the inherent severely limited trainingset and hence perspective that is embedded in LLMs. Most of current human society, of history and future is excluded. This goes back to my take on data and blind faith in using it: [[Data geeft klein deel werkelijkheid slecht weer 20201219122618]] en [[Check data against reality 20201219145507]]

    32. But a language model is not a person with a fixed identity.They know nothing about the cultural context of who they’re talking to. They take on different characters depending on how you prompt them and don’t hold fixed opinions. They are not speaking from one stable social position.

      Algogens aren't fixed social entities/identities, but mirrors of the prompts

    33. Everything we say is situated in a social context.

      Conversation / social interaction / contactivity is the human condition.

    34. A big part of this limitation is that these models only deal with language.And language is only one small part of how a human understands and processes the world.We perceive and reason and interact with the world via spatial reasoning, embodiment, sense of time, touch, taste, memory, vision, and sound. These are all pre-linguistic. And they live in an entirely separate part of the brain from language.Generating text strings is not the end-all be-all of what it means to be intelligent or human.

      Algogens are disconnected from reality. And, seems a key point, our own cognition and relation to reality is not just through language (and by extension not just through the language center in our brain): spatial awareness, embodiment, senses, time awareness are all not language. It is overly reductionist to treat intelligence or even humanity as language only.

    35. This disconnect between its superhuman intelligence and incompetence is one of the hardest things to reconcile.

      generative AI as very smart and super incompetent at the same time, which is hard to reconcile. Is this a [[Monstertheorie 20030725114320]] style cultural category challenge? Or is the basic one replacing human cognition?

    36. But there are a few key differences between content generated by models versus content made by humans.First is its connection to reality. Second, the social context they live within. And finally their potential for human relationships.

      yes, all generated content is devoid of an author context e.g. It's flat and 2D in that sense, and usually fully self contained no references to actual experiences, experiments or things outside the scope of the immediate text. As I describe https://hypothes.is/a/kpthXCuQEe2TcGOizzoJrQ

    37. I think we’re about to enter a stage of sharing the web with lots of non-human agents that are very different to our current bots – they have a lot more data on how behave like realistic humans and are rapidly going to get more and more capable.Soon we won’t be able to tell the difference between generative agents and real humans on the web.Sharing the web with agents isn’t inherently bad and could have good use cases such as automated moderators and search assistants, but it’s going to get complicated.

      Having the internet swarmed by generative agents is unlike current bots and scripts. It will be harder to see diff between humans and machines online. This may be problematic for those of us who treat the web as a space for human interaction.

    38. There's a new library called AgentGPT that's making it easier to build these kind of agents. It's not as sophisticated as the sim character version, but follows the same idea of autonomous agents with memory, reflection, and tools available. It's now relatively easy to spin up similar agents that can interact with the web.

      AgentGPT https://agentgpt.reworkd.ai/nl is a version of such Generative Agents. It can be run locally or in your own cloud space. https://github.com/reworkd/AgentGPT

    39. These language-model-powered sims had some key features, such as a long-term memory database they could read and write to, the ability to reflect on their experiences, planning what to do next, and interacting with other sim agents in the game

      Generative agents have a database for long term memory, and can do internal prompting/outputs

    40. Recently, people have taken this idea further and developed what are being called “generative agents”.Just over two weeks ago, this paper "Generative Agents: Interactive Simulacra of Human Behavior" came out outlining an experiment where they made a sim-like game (as in, The Sims) filled with little people, each controlled by a language-model agent.

      Generative agents are a sort of indefinite prompt chaining: an NPC or interactive thing can be LLM controlled. https://www.youtube.com/watch?v=Gz6mAX41fs0 shows this for Skyrim. Appleton mentions a paper https://arxiv.org/abs/2304.03442 which does it for simlike stuff. See Zotero copy Vgl [[Stealing Worlds by Karl Schroeder]] where NPC were a mix of such agents and real people taking on an NPC role.

    41. Recently, people have been developing more sophisticated methods of prompting language models, such as "prompt chaining" or composition.Ought has been researching this for a few years. Recently released libraries like LangChain make it much easier to do.This approach solves many of the weaknesses of language models, such as a lack of knowledge of recent events, inaccuracy, difficulty with mathematics, lack of long-term memory, and their inability to interact with the rest of our digital systems.Prompt chaining is a way of setting up a language model to mimic a reasoning loop in combination with external tools.You give it a goal to achieve, and then the model loops through a set of steps: it observes and reflects on what it knows so far and then decides on a course of action. It can pick from a set of tools to help solve the problem, such as searching the web, writing and running code, querying a database, using a calculator, hitting an API, connecting to Zapier or IFTTT, etc.After each action, the model reflects on what it's learned and then picks another action, continuing the loop until it arrives at the final output.This gives us much more sophisticated answers than a single language model call, making them more accurate and able to do more complex tasks.This mimics a very basic version of how humans reason. It's similar to the OODA loop (Observe, Orient, Decide, Act).

      Prompt chaining is when you iterate through multiple steps from an input to a final result, where the output of intermediate steps is input for the next. This is what AutoGPT does too. Appleton's employer Ought is working in this area too. https://www.zylstra.org/blog/2023/05/playing-with-autogpt/

    42. Most of the tools and examples I’ve shown so far have a fairly simple architecture.They’re made by feeding a single input, or prompt, into the big black mystery box of a language model. (We call them black boxes because we don't know that much about how they reason or produce answers. It's a mystery to everyone, including their creators.)And we get a single output – an image, some text, or an article.

      generative AI currently follows the pattern of 1 input and 1 output. There's no reason to expect it will stay that way. outputs can scale : if you can generate one text supporting your viewpoint, you can generate 1000 and spread them all as original content. Using those outputs will get more clever.

    43. By now language models have been turned into lots of easy-to-use products. You don't need any understanding of models or technical skills to use them.These are some popular copywriting apps out in the world: Jasper, Copy.ai, Moonbeam

      Mentioned copy writing algogens * Jasper * Wordtune * copy.ai * quillbot * sudowrite * copysmith * moonbeam

    44. These are machine-learning models that can generate content that before this point in history, only humans could make. This includes text, images, videos, and audio.

      Appleton posits that the waves of generative AI output will expand the dark forest enormously in the sense of feeling all alone as a human online voice in an otherwise automated sea of content.

    45. However, even personal websites and newsletters can sometimes be too public, so we retreat further into gatekept private chat apps like Slack, Discord, and WhatsApp.These apps allow us to spend most of our time in real human relationships and express our ideas, with things we say taken in good faith and opportunities for real discussions.The problem is that none of this is indexed or searchable, and we’re hiding collective knowledge in private databases that we don’t own. Good luck searching on Discord!

      Appleton sketches a layering of dark forest web (silos mainly), cozy web (personal sites, newsletters, public but intentionally less reach), and private chat groups, where you are in pseudo closed or closed groups. This is not searchable so any knowledge gained / expressed there is inaccessible to the wider community. Another issue I think is that these closed groups only feel private, but are in fact not. Examples mentioned like Slack, Discord and Whatsapp are definitely not private. The landlord is wacthing over your shoulder and gathering data as much as the silos up in the dark forest.

    46. We end up retreating to what’s been called the “cozy web.”This term was coined by Venkat Rao in The Extended Internet Universe – a direct response to the dark forest theory of the web. Venkat pointed out that we’ve all started going underground, as if were.We move to semi-private spaces like newsletters and personal websites where we’re less at risk of attack.

      Cozy Web is like Strickler/Liu's black zones above. Sounds friendlier.

    47. The overwhelming flood of this low-quality content makes us retreat away from public spaces of the web. It's too costly to spend our time and energy wading through it.

      Strickler compares this to black zones as described in [[Three Body Problem _ Dark Forest by Cixin Liu]], withdraw into something smaller which is safe but also excluding yourself permanently from the greater whole. Liu describes planets that lower the speed of light around them on purpose so they can't escape their own planet anymore. Which makes others leave them alone, because they can't approach them either.

    48. It’s difficult to find people who are being sincere, seeking coherence, and building collective knowledge in public.While I understand that not everyone wants to engage in these activities on the web all the time, some people just want to dance on TikTok, and that’s fine!However, I’m interested in enabling productive discourse and community building on at least some parts of the web. I imagine that others here feel the same way.Rather than being a primarily threatening and inhuman place where nothing is taken in good faith.

      Personal websites like mine since mid 90s fit this. #openvraag what incentives are there actually for people now to start their own site for online interaction, if you 'grew up' in the silos? My team is largely not on-line at all, they use services but don't interact outside their own circles.

    49. Many people choose not to engage on the public web because it's become a sincerely dangerous place to express your true thoughts.

      The toxicity made me leave FB and reduce my LinkedIn and Twitter exposure. Strickler calls remaining nonetheless the bowling alley effect: you don't like bowling but you know you'll meet your group of regular friends there.

    50. This is a theory proposed by Yancey Striker in 2019 in the article The Dark Forest Theory of the InternetYancey describes some trends and shifts around what it feels like to be in the public spaces of the web.

      Hardly a 'theory', a metaphor re-applied to experiencing online interaction. (Strickler ipv Striker)

      The internet feels lifeless: ads, trolling factories, SEO optimisation, crypto scams, all automated. No human voices. The internet unleashes predators: aggressie behaviour at scale if you do show yourself to be a human. This is the equivalent of Dark Forest.

      Yancey Strickler https://onezero.medium.com/the-dark-forest-theory-of-the-internet-7dc3e68a7cb1 https://onezero.medium.com/beyond-the-dark-forest-a905e2dd8ae0 https://www.ystrickler.com/

    51. the dark forest theory of the universe

      A specific proposed solution to [[Fermi Paradox 20201123150738]] where is everybody? Dark forest, it's full of life but if you walk through it it seems empty. Universe seems empty of intelligent life to us as well. Because life forms know that if you let yourself be heard/seen you'll be attacked by predators. Leading theme in [[Three Body Problem _ Dark Forest by Cixin Liu]]

    52. Secondly, I’m what we call “very online”. I live on Twitter and write a lot online. I hang out with people who do the same, and we write blog posts and essays to each other while researching. As if we're 18th-century men of letters. This has led to lots of friends and collaborators and wonderful jobs.Being a sincere human on the web has been an overwhelmingly positive experience for me, and I want others to have that same experience.

      True for me (and E) too. For me it largely was because the internet became a thing right around when I entered uni in the late 80s, and it always was about connecting. Blogging esp early in the years 2002-2009 led to a large part of my personal and professional peers network.

      '18th c. men of letters' I've sometimes thought about it like that actually, and treat meet-ups etc like the Salons of old vgl. [[Salons organiseren 20201216205547]]

    53. https://web.archive.org/web/20230503150426/https://maggieappleton.com/forest-talk

      Maggie Appleton on the impact of generative AI on internet, with a focus on it being a place for humans and human connection. Take out some of the concepts as shorthand, some of the examples mentioned are new to me --> add to lists, sketch out argumentation line and arguments. The talk represents an updated version of earlier essay https://maggieappleton.com/ai-dark-forest which I probably want to go through next for additional details.

    1. Ought makes Elicit (a tool I should use more often). Maggie Appleton works here. A non-profit research lab into machine learning systems to delegate open-ended thinking to.

    1. https://web.archive.org/web/20230503191702/https://www.rechtenraat.nl/artikel-10-evrm-en-woo/ Caroline Raat start zaak tegen RvS om toepassing uitzonderingsgronden WOO.

      Argumentatie: - 2017 EHRM arrest stelt dat public watchdogs (journo's, bloggers, ngo's, wetenschap) direct beroep kunnen doen op art 10 EVRM voor toegang tot documenten van de overheid. - Als het zo'n watchdog is die toegang vraagt, gaat EHRM boven WOO/WOB. - Watchdog hoeft alleen aan te tonen dat info nodig voor publieke info-voorlichting - Watchdog hoeft geen bijzondere omstandigheden aan te tonen - Weigering kan alleen als daar dringen maatschappelijk reden toe is - Weigering kan alleen op basis genoemde uitzonderingen in art10lid2 EVRM zelf, en weigering moet als noodzakelijk voor de samenleving gemotiveerd worden - Andere weigeringsgronden in WOB/WOO zijn niet van toepassing.

      Dit zou bijv heel ander verloop van Shell papers kunnen geven.

    1. ICs as hardware versions of AI. Interesting this is happening. Who are the players, what is on those chips? In a sense this is also full circle for neuronal networks, back in the late 80s / early 90s at uni neuronal networks were made in hardware, before software simulations took over as they scaled much better both in number of nodes and in number of layers between inputs and output. #openvraag Any open source hardware on the horizon for AI? #openvraag a step towards an 'AI in the wall' Vgl [[AI voor MakerHouseholds 20190715141142]] [[Everymans Allemans AI 20190807141523]]

    1. https://web.archive.org/web/20230503153010/https://subconscious.substack.com/p/llms-break-the-internet-signing-everything

      Gordon Brander on how Maggie Appleton's point in her talk may be addressed: by humans signing their output (it doesn't preclude humans signing generated output I suppose, which amounts to the same result as not signing) Appleton suggests IRL meet-ups are key here for the signing. Reminds me of the 'parties' where we'd sign / vouch for each others SSL certs. Or how already in Threema IRL meet-ups are used to verify Threema profiles as mutually trusted. Noosphere is more than this though? It would replace the current web with its own layer. (and issues). Like Maggie Appleton mentions Dead Internet Theory

    1. https://web.archive.org/web/20230430194301/https://netzpolitik.org/2023/longtermism-an-odd-and-peculiar-ideology/ The EA/LT reasoning explained in this interview, in a way that allows easy outlining. Bit sad to see Jaan Tallinns existential risk path taking this shape, CSER seemed to be more balanced back in 2012/13 when I briefly met him in the context of TEDxTallinn, with climate change a key existential risk, not a speed bump on the road to advanced AI to provide future humanity.

    1. https://web.archive.org/web/20230502113317/https://wattenberger.com/thoughts/boo-chatbots

      This seem like a number of useful observations wrt interacting with LLM based tools, and how to prompt them. E.g. I've seen mention of prompt marketplaces where you can buy better prompts for your queries last week. Which reinforces some of the points here. Vgl [[Prompting skill in conversation and AI chat 20230301120740]] and [[Prompting valkuil instrumentaliseren conversatiepartner 20230301120937]]

  5. Apr 2023
    1. You’d think this “independence” might drive a person toward that problematic pioneer fantasy, but it only underlines to me how self-sufficiency is a LARP.

      off grid / prepping is a LARP, very striking observation.

    1. The one-sentence-summary compresses the summary to one sentence (or two). The title is a further compression of the content into a few words. Working on the one-sentence summary and the title is an act of learning itself. You cannot get any understanding of the Method without real content. See this video for further explanations: How to write good titles for your Zettelkasten

      In narrative inquiry I ask people to title the experience they shared after sharing. Similarly I write my own titles usually after the content of a blogpost or a notion. Although when it comes to the internal branching highlighted above I usually start with a temporary title, which captures the jumping off point from the originating note.

    2. he digital Zettelkasten, freed from physical limitations, offers a unique feature: You can flesh out ideas, look at them from different directions, apply different ways of analysis, and use theoretically infinite methods to explore the idea on a single note. As a result, the note grows in size, but then you can refactor it. You refactor the note, move the grown components as new ideas into new notes and make the parent note about the relationship between the new notes.

      I have this regularly, whenever I spend a bit of time on usually two or three related notes. Usually it annoys me because it sometimes feels like the branching goes faster than I can keep up with noting. That's from a 'production' perspective. Here I was aiming to finish a note, reducing the unfinished corpus by one, only to add a bunch of new beginnings to the heap to go through. The internal branching is a more positive phrasing for an effect I regularly treat as 'more work'. Good switch of perspective, as I have a mental image of external explosion that I can't contain, whereas internal branching is like fractals within the same general boundary. Good image.

    1. In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations

      not just trained with, also trained by. is it fully transparent though? Perhaps from the trainers/tools standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA style, and project agency and thus morality on it.

    1. https://web.archive.org/web/20230411095546/https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

      On the temporary ban of ChatGPT in Italy on the basis of GDPR concerns.

      Italian DPA temporarily bans ChatGPT until adequate answers are received from OpenAI. Issues to address: 1. Absence of age check (older than 13) of ChatGPT users 2. Missing justification for the presence of personal data in trainingsdata of ChatGPT. 3. OpenAI has no EU based offices and as such there's no immediate counterparts for DPAs to interact with them. The temp ban is to ensure a conversation with OpenAI will be started.

      The trigger was a 9 hour cybersecurity breach where user's financial information and content of their prompts/generated texts leaked over into other accounts.

    1. This is the space where AI can thrive, tirelessly processing these countless features of every patient I’ve ever treated, and every other patient treated by every other physician, giving us deep, vast insights. AI can help do this eventually, but it will first need to ingest millions of patient data sets that include those many features, the things the patients did (like take a specific medication), and the outcome.

      AI tools yes, not ChatGPT though. More contextualising and specialisation needed. And I'd add the notion that AI might be necessary as temporary fix, on our way to statistics. Its power is in weighing (literally) many more different factors then we could statistically figure out, also because of interdependencies between factors. Once that's done there may well be a path to less blackbox tooling like ML/DL towards logistic regression: https://pubmed.ncbi.nlm.nih.gov/33208887/ [[Machine learning niet beter dan Regressie 20201209145001]]

    2. My fear is that countless people are already using ChatGPT to medically diagnose themselves rather than see a physician. If my patient in this case had done that, ChatGPT’s response could have killed her.

      More ELIZA. The opposite of searching on the internet for your symptoms and ending up with selfdiagnosing yourself with 'everything' as all outliers are there too (availability bias), doing so through prompting generative AI will result in never suggesting outliers because it will stick to dominant scripted situations (see the vignettes quote earlier) and it won't deviate from your prompts.

    3. If my patient notes don’t include a question I haven’t yet asked, ChatGPT’s output will encourage me to keep missing that question. Like with my young female patient who didn’t know she was pregnant. If a possible ectopic pregnancy had not immediately occurred to me, ChatGPT would have kept enforcing that omission, only reflecting back to me the things I thought were obvious — enthusiastically validating my bias like the world’s most dangerous yes-man.

      Things missing in a prompt will not result from a prompt. This may reinforce one's own blind spots / omissions, lowering the probability of an intuitive leap to other possibilities. The machine helps you search under the light you switched on with your prompt. Regardless of whether you're searching in the right place.

    4. My experiment illustrated how the vast majority of any medical encounter is figuring out the correct patient narrative. If someone comes into my ER saying their wrist hurts, but not due to any recent accident, it could be a psychosomatic reaction after the patient’s grandson fell down, or it could be due to a sexually transmitted disease, or something else entirely. The art of medicine is extracting all the necessary information required to create the right narrative.

      This is where complexity comes in, teasing out narratives and recombine them into probes, probing actions that may changes the weights of narratives and mental models held about a situation. Not diagnostics, but building the path towards diagnostics. Vgl [[Probe proberend handelen 20201111162752]] [[Vertelpunt 20201111170556]]

    5. ChatGPT rapidly presents answers in a natural language format (that’s the genuinely impressive part)

      I am coming to see this as a pitfall of generative AI texts. It seduces us to anthromorphise the machine, to read intent and comprehension in the generated text. Removing the noise in generating text, meaning the machine would give the same rote answers to the same prompts would reduce this human projection. It would make the texts much 'flatter' and blander than they currently already are. Our fascination with these machines is that they sometimes sound like us, and it makes us easily overlook the actual value of the content produced. In human conversation we would give these responses a pass as they are plausible, but we'd also not treat conversation as likely fully true.

    6. This is likely why ChatGPT “passed” the case vignettes in the Medical Licensing Exam. Not because it’s “smart,” but because the classic cases in the exam have a deterministic answer that already exists in its database.

      Machines will do well in scripted situations (in itself a form of automation / codification). This was a factor in Hzap 08 / 09 in Rotterdam, where in programming courses the problems were simplified and highly scripted to enable the teacher to be able to grade the results, but at the cost of removing students from actual real life programming challenges they might encounter. It's a form of greedy reductionism of complexity. Whereas the proof of the pudding is performing well within complexity.

    7. Here’s what I found when I asked ChatGPT to diagnose my patients

      A comparison of ChatGPT responses to actual ER case descriptions. Interesting experiment by the author, though there shouldn't be an expectation for better results than it gave.

    1. Genre is a conversation

      Ha. Annotation Kalir/Garcia positions annotation as genre, and as (distributed) conversation. [[Annotatie als genre of als middel 20220515112227]], [[Annotation by Remi Kalir Antero Garcia]] and [[Gedistribueerde conversatie 20180418144327]]

      The human condition in its entirety is an infinite conversation I suspect.

    1. https://web.archive.org/web/20230404092627/https://newsletter.mollywhite.net/p/feedly-launches-strikebreaking-as

      This entire 'feedly goes into strikebreaking' headline at first didn't make any sense to me when E first mentioned it. First because it sounds extremely out there, in terms of 'service', and second, it's RSS, which I think is hardly suited for the type of claims this service makes (and the article shows that too imo). RSS content hardly shows emergent patterns if you've not defined the network/group you're drawing from imo (e.g. media are not useful for it), and it works at a slower pace than 'let's see if this protest turns violent'. I've worked for orgs that had a 'keep our employees save' coordination centre, and they defintely didn't tap into RSS. They'd send me an sms to avoid a certain part of certain city because of a disease outbreak for instance, or warn met of specific types of crime to watch out for when embarking on a mission, or real time weather warnings for my location.

      I haven't used Feedly, I only mentioned it once on my blog in 2019, because my hoster blocked it as 'bad bot'. Foresight? https://www.zylstra.org/blog/2019/06/feedly-blocked-as-bad-bot-by-my-hoster/ I think that blocking feedly might be not as bad as I thought in 2019

    2. But I also don’t think that a company that creates harmful technology should be excused simply because they’re bad at it.

      Being crap at doing harm doesn't allow you to claim innocence of doing harm.

    1. https://web.archive.org/web/20230404050349/https://greshake.github.io/

      This site goes with this paper <br /> https://doi.org/10.48550/arXiv.2302.12173

      The screenshot shows a curious error which makes me a little bit suspicious: the reverse Axelendaer is not rednelexa, there's an a missing.

    2. If allowed by the user, Bing Chat can see currently open websites.

      The mechanism needs a consent step from the user: to allow Bing Chat to see currently open websites. And one of those websites already open, needs to contain the promptinjection.

    3. Microsoft prevents content from GitHub pages domains from being ingested by Bing Chat at the present time.

      Wait, what does this mean. #openvraag That previously it did, but now doesn't in response to this? Or that Bing Chat never did so in the first place? In the latter this paper is dealing in hypotheticals at this stage?

    1. Somewhat suspicious of timing, but listen to those soundfiles. We're surrounded by Triffids!

      Timing: the info about these sounds is known since 2012 https://gizmodo.com/plants-communicate-with-each-other-by-using-clicking-so-5919973 but this new paper turns to learning models to derive info from the sounds made.

    1. Running BLOOM on a VPS or locally even is either expensive or very slow. Mostly because of the sheer size of the 176B model. Share it in a group of similar users on a VPS set-up? Use the Hugging Face API for BLOOM?

  6. Mar 2023
    1. Donald points rightly to some of the classic monsterisation responses to AI. Although he imo misrepresents the EU AI Act (which in its proposal carefully avoids static tech regulation).

      Vgl [[Monstertheorie 20030725114320]]

    1. I want to bring to your attention one particular cause of concern that I have heard from a number of different creators: these new systems (Google’s Bard, the new Bing, ChatGPT) are designed to bypass creators work on the web entirely as users are presented extracted text with no source. As such, these systems disincentivize creators from sharing works on the internet as they will no longer receive traffic

      Generative AI abstracts away the open web that is the substrate it was trained on. Abstracting away the open web means there may be much less incentive to share on the open web, if the LLMs etc never point back to it. Vgl the way FB et al increasingly treated open web URLs as problematic.

    2. he decimation of the existing incentive models for internet creators and communities (as flawed as they are) is not a bug: it’s a feature

      replacing the incentives to share on the open web are not a mere by-effect of it being abstracted away by generative AI, but an aimed for effect. As it may push people to seek the gains of sharing elsewhere, i.e. enclosed web3 services.

    1. https://web.archive.org/web/20230316103739/https://subconscious.substack.com/p/everyone-will-have-their-own-ai

      Vgl [[Onderzoek selfhosting AI tools 20230128101556]] en [[Persoonlijke algoritmes als agents 20180417200200]] en [[Everymans Allemans AI 20190807141523]] en [[AI personal assistants 20201011124147]]

  7. www.nationaalarchief.nl www.nationaalarchief.nl
    1. MDTO (Metagegevens voor duurzaam toegankelijke overheidsinformatie) lijkt ook de norm te worden voor WOO actieve openbaarmaking. #openvraag hoe zit dat voor passieve openbaarmaking? Wordt metadatering beperkt tot de actieve categorieën?

    1. https://web.archive.org/web/20230309111559/https://www.d4d.net/news/ai-and-the-state-of-open-data/

      Tim Davies looks at the bridge between #opendata and #AI. Should go throug the chapter in version 1 of the State of Open Data too. Note: while Tim acknowledges some of the EU data strategy developments (e.g. the dataspaces) it doesn't mention others (e.g. data altruistic non-profit entities) which may fit the call for instutions better. Data space aren't an institution, but a common market

    1. https://web.archive.org/web/20230301112750/http://donaldclarkplanb.blogspot.com/2023/02/openai-releases-massive-wave-of.html

      Donald points to the race that OpenAI has spurred. Calls the use of ChatGPT to generate school work and plagiarism a distraction. LLMs are seeing a widening in where they're used, and the race is on. Doesn't address whether the race is based on any solid starting points however. To me getting into the race seems more important to some than actually having a sense what you're racing and racing for.

    1. Conversation is an art, and we are mostly pretty rubbish at it.We are entering a new era of conversational/constitutional AI. A powerful byproduct could be that we improve our conversations.

      Interesting point by John Caswell. AI prompting is a skill to learn, can we simultaneously learn to prompt better in conversations with other people? Prompting is a key thing in collecting narrated experiences for instance. Or will more conscious prompting lead to instrumentalising your conversation partner? After all AI chat prompting is goal oriented manipulation, what to put int to get the desired output? In collecting narrated experiences the narrator's reality remains a focal point, and only patterns over collections of narrated experiences are abstracted away from the original conversations. n:: [[Prompting skill in conversation and AI chat 20230301120740]] n:: [[Prompting pitfall instrumentalising conversation partner 20230301120937]]

  8. Feb 2023
    1. https://web.archive.org/web/20230226002724/https://medium.com/@ElizAyer/meetings-are-the-work-9e429dde6aa3 Meetings are regular work, so blindly avoiding meetings is damaging.

      Julian Elve follows up https://www.synesthesia.co.uk/2023/02/27/finding-the-real-work-that-meetings-are-good-at/ with lifting out the parts where Ayer discusses the type of meeting that are 'real work' and what they're for. (learning, vgl [[Netwerkleren Connectivism 20100421081941]]

    1. He didn’t just put his notes anywhere, but rather, in a place that made sense at the time, near something related, even if this was not the only or even best place for the note to go in the long term. Again, this difficulty of there being no one, best place for a particular note was addressed through the use of cross-links between notes, making it so that any given note could "exist" in more than one spot.

      Folgezettel are per the linked https://web.archive.org/web/20220125173712/https://omxi.se/2015-06-21-living-with-a-zettelkasten.html posting also a way to create some sort of initial overview in a physical system. In digital systems network maps serve a similar purpose as initial overview to be able to start with something. The outline Lawson mentions as origin is a thing in itself to me, esp as the connections / place in a system of a note can be reconsidered over time. Physical placement is by def a compromise, the question if it is a constraint that has a creative effect?

    2. hough I don’t know for certain, it seems possible that his system is a hybrid of the outlining method from law and the notecard method from history and sociology. The use of copious cross-links between the individual notes stems from his particular project of synthesizing knowledge from multiple disciplines, thus making it difficult to ever place most cards in one and only one spot in the ever-growing outline.

      presumption: L's Folgezettel are a combination of outlining (as common in US maybe not German law edu) and the note cards used in sociology. Cross linking as a way to escape forced categorisation into exclusive buckets. Is there also in cross linking an element perhaps of escaping established idiom while building new (fields) of knowledge? (Vgl. Richard Rorty's struggle when forced to explain pragmatism in the language of Platonic dilemma's. [[Taal als zicht beperkend element 20031104104523]] )

    3. Luhmann’s particular implementation of zettelkasten method should not necessarily be seen as a universal model for all knowledge work because his implementation was tailored to his own project and research questions–i.e. the production of big social theory by drawing on disparate literatures from many disciplines.

      Yes. Any pkm system or method is (or should be) tailored to one's own needs. Vgl [[% Interessevelden 20200523102304]] as [[Macroscope 20090702120700]]

    4. We just need to understand where they (likely) come from and their purpose in the overall system. In short, I believe that they are an artifact of Luhmann’s legal education and serve the purpose of synthesis.

      Lawson thinks L's Folgezettel are a product of his training in law, and they were used by L for synthesis.

    1. They may be right about Lockdown in one way that the concept of it has become big enough and detached from reality enough to house whatever theories or madness anybody wants to house in it. As such, lockdown was a huge psychohistoric event.

      ha! psychohistoric event. Yes, I recognise some of that. I've been in recent sessions that were the 3rd of 4th larger public gathering with the same group, with the group leader still repeating the mantra that we hadn't been able to meet up for so long, where we just did the same thing several times before. A ritualised phrasing to excuse any time spent on catching up. I'd rather put the catching up on the actual program. No excuse needed, psychohistoric or not.

    2. If I lift this one level, the so called “Lockdown” is being used as a scapegoat for anything and everything that people don’t like. Here in Europe the lockdowns felt very long but were brief in retrospect. The longest probably being the 3 month school/daycare closure at the start of the pandemic during which we also suffered immensely. Real hard lockdowns happened in a country like China. Claiming that the relatively mild restrictions that we had for a couple of months (and then twice more) created irreparable damage in the general population is very fucking rich.

      Indeed. 'the lockdown' in various conversations I've been in seems to be an indetermined period between 2019 and now, which serves as explanation for anything that wasn't finished in the past 3 years. As if we were all in actual stasis all that time continuously. Yes it was hard for us at times, I know it was much harder for other people I know at times in other locations, let alone what's been going on in China. But it wasn't constant and everywhere in NL or in EU. The Dutch actual lockdowns were 3 different periods and to very different degrees, with the first being the strictest, but the last one feeling the most difficult to me. I should mark the actual lockdowns and restrictions more clearly in my notes as factcheck.

    1. Why not work on improving a technical solution for Folgezettel?

      Reading this I realise I'm not using Folgezettel really, only linking back to a previous notion. There's some sequencing, esp when I create little 'trains' (a notion, a link to a more abstract notion, a link to a more detailed one, a link to an example). The forward linking I generally not do, except sometimes. L always did forward linking in the sense of placing the index card.

    1. https://web.archive.org/web//https://www.binnenlandsbestuur.nl/bestuur-en-organisatie/io-research/morele-vragen-rijksambtenaren-vaak-onvoldoende-opgevolgd

      #nieuw reden:: Verschillende soorten morele dilemma's volgens I&O bij (rijks)ambtenaren. Let wel, alleen rondom beleidsproces, niet om digitale / data dingen. #openvraag is er een kwalitatief verschil tussen die 2 soorten vragen, of zijn het verschillende verschijningsvormen van dezelfde vragen? Verhouden zich de vragen tot de 7 normen van openbaar bestuur? Waarom is dat niet de indeling dan? #2022/06/30

    1. The philosopher Peter Singer, whose writing is a touchstone for EA leaders,

      Singer, b1946, Australian @ Princeton, applied ethics, utilitarian. EA / LT as utilitarianism ad absurdum

    2. the problem is particularly acute in EA. The movement’s high-minded goals can create a moral shield, they say, allowing members to present themselves as altruists committed to saving humanity regardless of how they treat the people around them. “It’s this white knight savior complex,” says Sonia Joseph, a former EA who has since moved away from the movement partially because of its treatment of women. “Like: we are better than others because we are more rational or more reasonable or more thoughtful.” The movement “has a veneer of very logical, rigorous do-gooderism,” she continues. “But it’s misogyny encoded into math.”

      Lofty goals can serve as 'moral shield', excusing immoral behaviour in other situations, because the higher ends 'prove' the ultimate morality of the actor.

    1. I have more recently added a ‘Start Here’ page which presents posts according to their labels (categories) as a jumping off point. Each time the page loads it will present the labels in a random order and show three random posts from that label just to mix things up a bit.

      Colin Walker added a start here page to his site to present posts in a more curated way than stream to new / incidental visitors.

    2. It does become problematic (and I wonder if Ben has noticed this) since most people are on their phone, where they won’t notice the multi-column, but rather a stream and the rest of the website underneath 🙁 No idea how to “fix” that yet.

      A comment about the diff between desktop and mobile browsing experience: one might miss the multiple columns. For my site that is true too: all the right column stuff comes at the bottom on mobile. Noone scrolls that far.

    3. As humans, our interests have become wide enough that we can at best peck at what’s flowing through

      individually yes. Feedback loops is the response. It's just that we allowd socmed to base feedback almost entirely on outrage.

    4. come to the conclusion that most of us can no longer follow the stream and make sense of what’s flowing through, or even catch what’s important

      I've always assumed the point of the stream is that you can't drink it all. My [[Infostrat Filtering 20050928171301]] is based on the stream being overwhelming. Never twice into the same river etc. You don't make sense of the stream or catch what's important. Social filtering is the bit you 'drink' from the stream, and what you reshare is feedback into it. Given enough feedback what is important will always resurface.

    5. Social networks are increasingly algorithmically organized, so their stream isn’t really a free-flowing stream

      Metaphor of algo-timeline as a canalised river, vs the free flowing stream that is e.g. a socmed stream like Mastodon.

    1. We had more than one way of presenting our blogs to the readers. Why did we stop that?

      True, not sure I did stop, but a rethink is definitely useful, and extending it.

    2. Recommend stuff to the reader on our platform, our blogs

      blogroll is that too, no?

    3. I generally follow blogs through RSS, where a stream is meaningless

      I don't follow, RSS is the stream I'd say, it's entire design is the reverse chronological order? Or does Amit specifically mean the representation of the stream on the blog front page?

    4. stream is important for me for discoverability

      As is the blogroll.

  9. Jan 2023
    1. https://web.archive.org/web/20221214055312/https://wildrye.com/roundup-of-67-tools-for-thought-to-build-your-second-brain/

      Glad to notice that: - I've heard of / know many of these tools, so have an ok overview of the current space. No surprises in the list. - I have not cycled through all these tools.

      Also interesting that The Brain still exists. Used to be my desktop interface in the late 90s/early 00s.