708 Matching Annotations
  1. Last 7 days
    1. "I had to stand up and create the sub-category for my work because others could not get it right. Creating a space is better than trying to fit into a space that was never made for you. I don't do erasure. #Africanfuturism #Africanjujuism " says author Nnedi Okorafor. Here are 5 authors who coined their own subgenres https://bookriot.com/authors-who-coined-their-own-sub-genres/ e.g. hopepunk, silkpunk, barrio noir, quozy mystery and Okorafor's two.

      Creating a space has harmonics with [[It’s More Logical to Host an Event Than Attend One – Interdependent Thoughts 20210309093335]] and esp Rorty's [[Is het nieuwe uit te leggen in taal van het oude 20031104104340]], labeling your own thing / making a space / creating new language as an act of agency

  2. Sep 2023
    1. Yet the Netherlands' enthusiasm for OGP seems limited. Where other countries sent delegations of dozens of members and some countries delegated ministers, the Netherlands cut a meagre figure with an official delegation of three people (including Open State Foundation).

      Though the cabinet's caretaker status will also have played a part.

    1. https://www.filosofieinactie.nl/blog/2023/9/5/open-source-large-language-models-an-ethical-reflection (archive version not working) Follow-up wrt openness of LLMs, after the publication of the interprovincial ethics committee on ChatGPT usage within the provincial public sector in NL. At the end it mentions the work by Radboud Uni I pointed them to. What are their conclusions / propositions?

  3. Aug 2023
    1. Energiehaven wants to inform grid operators better with the Data Safe House

      Question is what the link with data spaces is. And which one? Energy? GD? Link with DA/DGA?

    2. It is a system of agreements, including a platform on which industrial companies from the Rotterdam port area can securely share datasets about energy carriers from their confidential investment plans with grid operators,

      The system of agreements is apparently leading, then the technical platform.

    3. That is why the Data Safe House is a foundation, non-profit, with which participating parties enter into agreements'.

      Is there a link to be made with data altruism?

    1. Marco on Tiago's book. Finds the many new acronyms for methods and minor tactics that are in fact very old unnecessary, and misses historical awareness in Forte (and Milo et al). Vgl [[Transcript digitale fitheid Tiago Forte]] and [[BASB Building a second brain 20200929164524]]

    1. After running the tests, I ended up with six profiles (three cached, three uncached). I’ve made those public, and you can find links to them below. First up, here’s a TL;DR of the key findings: Across all tests, loading the WebP page had the lowest energy consumption. Across all tests, loading the AVIF page had the highest energy consumption. JPEG was close to WebP in most tests. The uncached tests are fairly consistent. Testing when images are cached, however, has some wild variability. More testing is probably needed there.

      Fershad Irani looked at the power consumption of images in websites. WebP came out on top (to his surprise) and JPG close behind. By the looks of it this is power consumption on the browser side. I suppose on the server side, power correlates with file size. The files used have the JPG at 3.5 times the WebP size and 6 times the AVIF size. Are WebP / AVIF optimised for file compression (faster transmission) and less for rendering time? Does that explain the difference between AVIF and WebP? All in all no biggie to stick with JPG it seems, except for choosing the lowest suitable file sizes (percentage-wise WebP would then achieve less optimisation on the transmission side).

      via Heinz .h feed.
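      The trade-off reasoned about above (smaller files save transmission-side energy even when browser-side decode energy differs) can be made concrete. A back-of-envelope sketch, assuming only the size ratios mentioned in the note and a hypothetical 350 kB JPG:

```python
# Back-of-envelope: transmission-side savings implied by the file sizes
# in the note (JPG about 3.5x the WebP size and 6x the AVIF size).
# The 350 kB JPG is a hypothetical example size; transfer energy roughly
# tracks bytes sent, while rendering energy (the measured part) does not.
jpg_kb = 350.0
sizes_kb = {
    "jpg": jpg_kb,
    "webp": jpg_kb / 3.5,   # JPG is 3.5x the WebP size
    "avif": jpg_kb / 6.0,   # JPG is 6x the AVIF size
}

def transmission_saving(fmt: str) -> float:
    """Percent fewer bytes sent than the JPG version."""
    return round(100.0 * (1.0 - sizes_kb[fmt] / sizes_kb["jpg"]), 1)

for fmt in ("webp", "avif"):
    print(f"{fmt}: {transmission_saving(fmt)}% smaller than JPG")
```

      So AVIF wins on bytes sent (about 83% smaller) even though it cost the most browser-side energy in Irani's tests, which is exactly the tension the note points at.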

    1. The more I learn about her recent activities, however, the less I am able to accept the premise of these questions. They imply that when she went over the edge, she crashed to the ground. A more accurate description is that Wolf marched over the edge and was promptly caught in the arms of millions of people who agree with every one of her extraordinary theories without question, and who appear to adore her. So, while she clearly has lost what I may define as “it”, she has found a great deal more – she has found a whole new world, one I have come to think of as the Mirror World.

      Down the rabbithole there's Mirror World, with its own rewards and sense of community and being welcomed. Vgl conspiracy fantasy as giving you a better position above your environment (I know more, how it really is) and how that gives you standing amongst conspiracy peers.

    2. Conspiracies have always swirled in times of crisis – but never before have they been a booming industry in their own right.

      conspiracy fantasies as genre, as business model and industry (the conspiracy industrial complex as a moniker to describe the graph of media outlets, media personalities and the network of grifters around them?)

    3. In practice, this squared virality meant that if you put out the right kind of pandemic-themed content – flagged with the right mix-and-match of keywords and hashtags (“Great Reset”, “WEF”, “Bill Gates”, “Fascism”, “Fauci”, “Pfizer”) and headlined with tabloid-style teasers (“The Leaders Colluding to Make Us Powerless”, “What They Don’t Want You to Know About”, “Shocking Details Revealed”, “Bill Gates Said WHAT?!?”) – you could catch a digital magic-carpet ride that would make all previous experiences of virality seem leaden in comparison.

      The global attention to Covid meant an easy way to clout by attaching other stuff.

    4. And nothing had ever been nearly so hot, so potentially clout-rich, as Covid-19. We all know why. It was global. It was synchronous. We were digitally connected, talking about the same thing for weeks, months, years, and on the same global platforms. As Steven W Thrasher writes in The Viral Underclass, Covid-19 marked “the first viral pandemic also to be experienced via viral stories on social media”, creating “a kind of squared virality”.

      Reminds me of when tv shows were the talk of the town the next morning: everyone had seen it. You knew others had seen it, because there were just the two channels. This talking about it was a communally binding thing. Media splintered, our interaction splintered, became diffuse. Covid centered everyone's attention on a single thing. Globally, synchronously, on specific platforms, not just in the abstract but with individuals' stories through our digital connections. Vgl [[Schaal van aandacht splitst 20210222161155]] wrt attention diffusion: Covid undid the diffusion.

    5. Because what Wolf turned into over the past decade is something very specific to our time: a clout chaser. Clout is the values-free currency of the always-online age – both a substitute for hard cash as well as a conduit to it. Clout is a calculus not of what you do, but of how much bulk you-ness there is in the world. You get clout by playing the victim. You get clout by victimizing others. This is something that is understood by the left and the right. If influence sways, clout squats, taking up space for its own sake.

      'clout chaser' nice parallel to cloud chaser. Clout as volume of your online engagement, and is a thing in itself, clout is the aim of the work. Conspiracy fantasies a means towards clout.

    6. The big misinformation players may be chasing clout, but plenty of people believe their terrifying stories

      clout as metric/currency. Klein's assumption seems to be that the arsonists don't believe their own stuff and just see it as business. Those on the outside are always left wondering whether that is the case. Maybe they know they are embellishing but are perhaps also gradually falling for their own stuff, as it seems to follow a predictable path further down the rabbithole. Do they catch up with their own BS over time? Or is it full-on cynicism, as on display with Bannon and the school shootings in the defamation case?

    7. Wolf is getting everything she once had and lost – attention, respect, money, power. Just through a warped mirror

      Lost in the early 2010s, refound a decade later indeed. Vindication, like she was always right all along.

    8. At the extreme end, diagonal movements share a conviction that all power is conspiracy.

      The sad thing is that this isn't even extremely wrong. All power should be viewed with suspicion and have actively enforced limits. All organisations, initially merely a tool for structuring and collaboration, begin to work to perpetuate themselves (vgl [[Corporations as Slow AI 20180201210258]]); all power seeks to sustain if not extend itself.

    9. If the claims are coming from the far right, the covert plan is for a green/socialist/no-borders/Soros/forced-vaccine dictatorship, while the new agers warn of a big pharma/GMO/biometric-implant/5G/robot-dog/forced-vaccine dictatorship. With the exception of the Covid-related refresh, the conspiracies that are part of this political convergence are not new – most have been around for decades, and some are ancient blood libels. What’s new is the force of the magnetic pull with which they are finding one another, self-assembling into what the Vice reporter Anna Merlan has termed a “conspiracy singularity”.

      "Conspiracy singularity", ha! Note the two groupings of far right and 'new age' elements, and how they mix, with 'forced vaccine' the linking pin.

    10. couple of months earlier, Wolf had released a video claiming that those vaccine-verification apps so many of us downloaded represented a plot to institute “slavery for ever”. The apps would usher in a “CCP-style social credit score system” in “the West”, she said

      Same here in NL. All temporary instruments would become permanent, they said. None of them ever talks about the fact that that didn't happen. They moved on to the rear-guard fight of how all deaths were not Covid but the vaccines or Ukraine biolabs or climate hoax etc. None of them notices the pattern of how very different topics end up on the same side of the rabbithole divide either.

    11. https://web.archive.org/web/20230827073249/https://www.theguardian.com/books/2023/aug/26/naomi-klein-naomi-wolf-conspiracy-theories

      Also fully downloaded to [[The Other Naomi 20230827093013]]

      I at some point during the pandemic mistook Wolf for Klein too (same first name, fuzzy notion of the last name other than it being short) and remember mentioning it to E as a sad shift (which in both cases is/would be true). Note that Wolf, according to https://en.wikipedia.org/wiki/Naomi_Wolf, landed in the rabbithole a decade before the pandemic. The difference, as Klein points out, is that in the meantime it became an industry Wolf could be successful in, where a decade ago it meant her dropping from a previously high reputation.

    1. Brander and Joel started building Subconscious, a local-first decentralized note-taking app. They began with the protocol that would power the app - Noosphere. Noosphere is permissionless and open source, like HTTP or IMAP; anyone can build on top of it

      If this is a 'return to the web' as stated, then why a new protocol? The web already has its protocols. Creating your own for your app and saying well, if the app goes the protocol is still there for you to build your own, is exactly what silos like Evernote also did (there's always our own XML-based export format, you're not locked in; and they actually are not wrong).

    2. Furthermore, since these centralized apps are walled gardens, your friends and connections are left behind, leaving you missing out on the social aspect of shared note-taking.But the web wasn't always this way.

      Non sequitur: centralised apps <> the web. Evernote isn't on the 'web', Notes idem, Obsidian idem. The step to 'friends and connections' is a sudden thing thrown in. It's not a given you would want 'social' affordances for your notes.

    1. Sortes Vergilianae: taking random quotes from Vergilius and interpreting their meaning either as prediction or as advice. The latter, as a trigger for self-reflection, makes it a #leeswijze #reading manner that is non-linear

      Vgl. [[Skillful reading is generally non-linear 20210303154148]]

      St. Antonius (of Egypt, 3rd century) is said to have read the bible this way (sortes sanctorum it's called if you use it for divination), and Augustinus followed that thus picking up Paul's letter to the Romans and getting converted in the 4th century.

      Is this ripping up of the text into isolated paragraphs to access and read a text an early input into commonplace books and florilegia? As a gathering of such things?

      Mentioned in [[Information edited by Ann Blair]] in lemma 'Readers' p730.

    1. Roland Barthes (1915-1980, France, literary critic/theorist) declared the death of the author (in English in 1967 and in French a year later). An author's intentions and biography are not the means to explain definitively what the meaning of a (fictional, I think) text is. [[Observator geeft betekenis 20210417124703]] i.e. the reader determines meaning.

      Barthes reduces the author to the scriptor, who does not exist beyond the production of the text. The work stands entirely apart from its maker. Came across it in [[Information edited by Ann Blair]] in the lemma on the Reader.

      Don't disagree with the notion that readers glean layers of meaning from a text that the author did not intend. But thinking about the author's intent is one of those layers. Separating the author from their work entirely is cutting yourself off from one source of potential meaning.

      In [[Generative AI detectie doe je met context 20230407085245]] I posit that seeing the author through the text is a necessity as proof of human creation, not #algogen. My point there is that with generated text there's only a scriptor, and no author whose own meaning, intention and existence become visible in the text.

    1. https://www.agconnect.nl/tech-en-toekomst/artificial-intelligence/liquid-neural-networks-in-ai-is-groter-niet-altijd-beter Liquid Neural Networks (liquid, i.e. the nodes in a neural network remain flexible and adaptable after training, different from deep learning and LLMs). They are also smaller. This improves the explainability of their working, and reduces energy consumption (#openvraag is the energy consumption of usage a concern, or rather that of training? Here it reduces the usage energy).

      Number of nodes reduction can be orders of magnitude. Autonomous steering example talks about 4 orders of magnitude (19 versus 100k nodes)

      Mainly useful for data streams like audio/video, real time data from meteo / mobility sensors. Applications in areas with limited energy (battery usage) and real time data inputs.
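      The core idea can be sketched in a few lines. A minimal illustration (my own, not from the article; all parameters arbitrary) of a liquid time-constant (LTC) neuron, the building block of liquid neural networks, where the node's effective time constant depends on its input, which is why the dynamics remain adaptive after training:

```python
import math

# One LTC-style neuron, integrated with Euler steps:
#   dx/dt = -(1/tau + f) * x + f * A,  with f = sigmoid(w*inp + b)
# The input-dependent gate f changes the node's effective time constant
# (1/tau + f), so the node's dynamics adapt to the incoming signal.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(x: float, inp: float, tau: float = 1.0, w: float = 1.0,
             b: float = 0.0, A: float = 1.0, dt: float = 0.01) -> float:
    f = sigmoid(w * inp + b)                      # input-dependent gate
    return x + dt * (-(1.0 / tau + f) * x + f * A)

# Drive the neuron with a constant input; the state converges to the
# bounded equilibrium f*A / (1/tau + f), here about 0.42.
x = 0.0
for _ in range(1000):
    x = ltc_step(x, inp=1.0)
```

      Nothing here explains the 4-orders-of-magnitude node reduction by itself; the claim in the article is that richer per-node dynamics like these let far fewer nodes do the same work.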

    1. https://web.archive.org/web/20230822131150/https://www.nature.com/articles/d41586-023-02600-x I wondered about this EU brain modelling project, as I came across it in a book from the 2010s announcing it. A quick google didn't give me much. Also see paper refs at end.

    1. Project that is the EU part of iBOL, the international barcode of life consortium. A DNA base of life forms. Asked them if they are in touch w any citizen science groups in NL.

    1. Dr Christina Lynggaard, Molecular Ecology and Evolution, does eDNA. Her profile lists a number of additional publications on this topic.

    1. We provide evidence for the spatial movement and temporal patterns of airborne eDNA and for the influence of weather conditions on vertebrate detections. This study demonstrates airborne eDNA for high-resolution biomonitoring of vertebrates in terrestrial systems and elucidates its potential to guide global nature management and conservation efforts in the ongoing biodiversity crisis.

      eDNA not just useful for presence detection but also for movement across space and time.

    1. https://ecoevo.social/@biodiversity/110790626800847007

      Dr Christina Lynggaard, University of Copenhagen, shows an air sampler for DNA. eDNA as a way to do species observation.

    1. eDNA sampling is DNA sampled from the environment, not from organisms. It can be sampled from air. Do I know of eDNA citizen science projects?

    1. [[Information edited by Ann Blair]] bought #2023/08/19 in Groningen at Godert Walter

    1. It should be trivially easy to create a new Activity, and it ought to be possible to create such a workspace even when you’re part-way into already doing the thing. This is a common, frequent need: While working on something (or playing games, reading news,…) I get an email/call from a contact wherein they ask me for some insight into how I might be able to help them. My context has switched, though my PC doesn’t know it yet. I send them an email, some links, documents and so on, some to-and-fro happens via several channels, and suddenly I find myself in the midst of a new Activity that already has some history. I need a way to hotkey a new Project and say to it, “And include these existing artefacts, the links between them, and their history and provenance.”

      One is usually not aware of a new project (as a set of activities) starting, only some time after you have started do you realise it is a project. Meaning that 'starting' a project in your (pkm) system, always includes a bit of existing history. Starting templates / sequences (like making folder structures etc) should incorporate that existing brief history.

      I recognise this, but this description also seems to assume that a project starts in a sort-of vacuum without pre-existing context and notes, until you create the first few steps before realising it is indeed a project. Having an established note making routine (day logs, etc., whatever) means projects are emergent out of ongoing activity, out of an existing ratcheting effect. Vgl [[Vastklik notes als ratchet zonder terugval 20220302102702]] Meaning you can always point back to existing notes, tracing the evolution of something into a project. That can be covered by a few pointers/fields/tags in a new project's template.

    1. The original accident is a concept of the French philosopher Paul Virilio, with which he warns of the unintended consequences of technological development. Ultimately every technology runs into a limit at which an accident will occur, he argues. That is how we learn what needs to be improved. At the same time he grew increasingly worried about the uncontrollability of technological progress. Are we heading for a doomsday?

      Original accident: every technology has an unintended consequence, which ultimately leads to an 'accident'. That way you learn more about the essence of that technology, and what needs improving. Virilio apparently fears that the current pace of tech development is too fast for that process to remain manageable.



      "Accidents reveal the substance"

    1. “historical method” laid out by Ernst Bernheim and later Seignobos/Langlois in the late 1800s.

      [[Lehrbuch der historischen Methode und der Geschichtsphilosophie by Ernst Bernheim]] 1889 https://archive.org/details/lehrbuchderhist03berngoog/mode/1up (1908)

      See also https://philarchive.org/archive/ASSSOH-2 Arthur Alfaix Assis, Schemes of Historical Method in the Late 19th Century, pp. 105-125 in Contributions to Theory and Comparative History of Historiography, German and Brazilian Perspectives, eds. Luiz Estevam de Oliveira Fernandes, Luísa Rauter Pereira and Sérgio da Mata

    1. Thomas Stoffregen and his team

      The virtual reality head-mounted display Oculus Rift induces motion sickness and is sexist in its effects https://pubmed.ncbi.nlm.nih.gov/27915367/ Downloaded to Zotero

    2. I tracked down military reports about gender bias in simulator sickness, much of which dated back to the 1960s

      in the 1960s the US military had reports on gender bias wrt simulator sickness. (Such simulators would likely have been more of the physical kind (rotation, speeds etc.) than virtual (screens / VR))

    3. This led me to run a series of psych experiments where my data suggested that people’s ability to be able to navigate 3D VR seems to be correlated with the dominance of certain sex hormones in their system. Folks with high levels of estrogen and low levels of testosterone – many of whom would identify as women – were more likely to get nauseous navigating VR than those who have high levels of testosterone streaming through their body. What was even stranger was that changes to hormonal levels appeared to shape how people respond to these environments.

      estrogen / testosterone levels influence responses to a VR environment and the likelihood of getting nauseous navigating in VR.

    4. https://web.archive.org/web/20230809191748/http://www.zephoria.org/thoughts/archives/2023/08/06/still-trying-to-ignore-the-metaverse.html

      There are many reasons why Meta's Metaverse is a dud (Vgl https://zylstra.org/blog/2021/11/metaverse-reprise/ and https://www.zylstra.org/blog/2022/02/was-second-life-ahead-or-metaverse-nothing-really-new/ ) but boyd points to a whole other range of reasons: women and men respond entirely differently to VR based on hormonal levels.

      Potential antilib [[Making a Metaverse That Matters by Wagner James Au]]

    1. Unlike 20 years ago, the people poised to be early adopters today are those who are most toxic, those who get pleasure from making others miserable. This means that the rollout has to be carefully nurtured

      Interesting observation/position: current early adopters of new platforms are not motivated by shiny new tech syndrome but by finding amplification for their toxicity. Sounds intriguing, but I wonder about causality and the earlier mentioned norm setting. New platforms may differ in the norms they set. Toxicity is an outcome of the norms promoted by tech functionality (amplification / engagement goading). Will that carry over into other things (does it carry over into other non-collapsed contexts in practice? sometimes, mostly not I think)? Toxicity is probably not intrinsic to the people involved, but learned. And it can be unlearned, when encountering different social expectations.

    2. I should note that blitzscaling is not the only approach we’re seeing right now. The other (and I would argue wiser) approach to managing dense network formation is through invitation-based mechanisms. Heighten the desire, the FOMO, make participating feel special. Actively nurture the network. When done well, this can get people to go deeper in their participation, to form community.

      This seems a false dichotomy. There are more than two ways to do this, more than 'blitzscaling' and 'invitation-based' (which I have come to see as manipulative and a clear sign to stay away, as it makes you the means, not the goal, right from the start of a platform, talking about norm setting). Federation is e.g. very different (and not even uniform in how it differs from both those options: from open to all, to starting from a pre-existing small social graph offline). This, like the above, seems to disregard (despite saying somewhere above that building tools is not the same as building community) the body of knowledge about stewarding communities / networks that exists outside of tech. Vgl [[Invisible hand of networks 20180616115141]]

    3. context collapse, a term that Alice Marwick and I coined long ago

      huh? Isn't this an 'old' thing from within communication/psychology? I spent quite some time with my therapist in 97/98 discussing why I purposefully avoided context collapse as a kid, preventing different circles from overlapping. 2010 is the ref'd paper; I use the term on my blog in May 2009 https://www.zylstra.org/blog/2009/05/hate_mailers_un/ (though I may have been aware of boyd or Michael Wesch using it then). Wikipedia https://en.wikipedia.org/wiki/Context_collapse says boyd is credited with coining 'collapsed contexts' (which is both a hedge by WP editors and different from the claim here). Did she already use it when I first encountered her (work) in 2006 during her PhD?

    4. Cuz that’s the thing about social media. For people to devote their time and energy to helping enable vibrancy, they have to gain something from it. Something that makes them feel enriched and whole, something that gives them pleasure (even if at someone else’s pain). Social media doesn’t come to life through military tactics. It comes to life because people devote their energies into making it vibrant for those that are around them. And this ripples through networks.

      boyd here stating what has been a core notion of community stewarding since late 90s knowledge management: participation value to members. (e.g. Wenger 1998/9 and 2002)

  4. Jul 2023
    1. https://web.archive.org/web/20230709085606/https://kolektiva.social/@ophiocephalic/110680030293653277

      Good description of ZAD, zone à défendre, not as gatekeeping (keeping others out who would also enjoy what's inside) but as defending a zone (keeping others out to prevent the zone's destruction). ZAD I encountered in Nantes, in the area where an airport was planned.

    1. I work in marketing, for my sins. This is mostly why I’m so entirely down on the marketing industry and many of the people who work in it. I also happen to have an MSc in psychology – actual psychology! – with a focus on behaviour change. On day 1 of your class about behaviour change in a science course, you learn that behaviour change is not a simple matter of information in, behaviour out. Human behaviour, and changing it, is big and complex. Meanwhile, on your marketing courses, which I have had the misfortune to attend, the model of changing behaviour is pretty much this: information in, behaviour out.

      Marketing assumes information in means behaviour out, and conveys that in marketing courses. Psychology teaches that behavioural change is not just info in, behaviour out, but a complex thing. Marketing has feet of clay.

  5. Jun 2023
    1. https://web.archive.org/web/20230625094359/https://orgmode.org/worg/org-syntax.html


      Proposal for org-mode syntax as the interoperability standard for tools for thought. The issue with things like markdown and opml is said to be the lack of semantic mark-up. Is that different in org-mode?
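      It is different, at least in part: org-mode carries machine-readable semantics that plain markdown lacks natively. A generic illustration (my own example, not taken from the proposal) showing a task state, a timestamp, tags, a key-value property drawer and a typed link, all of which an org parser exposes as structured data:

```org
* TODO Review the org-syntax proposal       :tft:interop:
  SCHEDULED: <2023-06-30 Fri>
  :PROPERTIES:
  :SOURCE: https://orgmode.org/worg/org-syntax.html
  :END:
  Body text with a [[https://orgmode.org][described link]] and /emphasis/.
```

      Markdown would need front matter or ad-hoc conventions to express the same, which is presumably the argument for org syntax as an interoperability layer.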

    1. https://web.archive.org/web/20230617185715/https://diggingthedigital.com/het-dilemma-van-de-digitale-diversiteit/

      Frank on having a different experience for your site than just a blog timeline.

      I recognise what you say. I would like to be able to offer multiple kinds of entry points, timeline, by theme or topic, content type, sets that link to each other, etc., as a kind of spectrum. Especially as a front page, so as not to offer only a blog timeline to a casual reader or to the exploring reader. Three years ago I once started on a WordPress theme for that. But then, I can't really make themes at all. Maybe it would go easier now with Jan Boddez' IndieBlocks, because then I wouldn't also have to arrange all those IndieWeb things in a new theme. Should dust off the project notes from 2020 (then, because home anyway) for the coming autumn. The summer won't do, that's for reading.

      As I wrote at https://www.zylstra.org/blog/2020/11/15326/ : The idea is to find a form factor that does not clearly say ‘this is a blog’ or ‘this is a wiki’, but presents a slightly confusing mix of stock and flow / garden and stream, something that shows the trees and the forest at the same time. So as to invite visitors to explore with a sense of wonder, rather than read the latest or read hierarchically. At the back-end nothing will fundamentally change, there still will be blogposts and pages with their current URLs, and the same-as-now feeds for them to subscribe to.

    1. Social software tools are all smaller than us, we control them individually

      Is this my first mention of [[Technologie kleiner dan ons 20160818122905]]? I know I used the concept in my talks back then. Need to relabel my note with correct timestamp.

      Updated [[Technologie kleiner dan ons 20050617122905]]

    1. Overview of how tech changes bring about moral changes. Seems to me a detailing of [[Monstertheorie 20030725114320]], diving into a specific part of it, where cultural categories are adapted to fit new tech in. #openvraag do the sources contain refs to either Monster theory by Smits or the anthropological work of Mary Douglas? Checked: they don't, but do cite refs by PP Verbeek and Marianne Boenink, so no wonder there's a parallel here.

      The first example mentioned points in this direction too: the 70s redefinition of death as brain death, where it used to be heart stopped (now heart failure is a cause of death), was a redefinition of cultural concepts to assimilate tech change. Third example is a direct parallel to my [[Empathie verschuift door Infrastructuur 20080627201224]] [[Hyperconnected individuen en empathie 20100420223511]]

      Where Monster theory is a tool to understand and diagnose discussions of new tech, wherein the assimilation part (both cultural categories and tech get adapted) is the pragmatic route (where the mediation theory of PP Verbeek is located), it doesn't as such provide ways to act / intervene. Does this taxonomy provide agency?

      Or is this another way to locate where moral effects might take place, but still the various types of responses to Monsters still may determine the moral effect?

      Zotero antilib Mechanisms of Techno-moral Change

      Via Stephen Downes https://www.downes.ca/post/75320

    1. https://web.archive.org/web/20230616140838/https://www.theguardian.com/education/2023/jun/16/george-washington-university-professor-antisemitism-palestine-dc

      psychoanalysis was the guided internal journey of individuals; in the nineties CBT displaced this (visible in the sessions I did at the time), and now a new wave of psychoanalysis comes in that takes as its focus not only the individual, but also the impact of the structures and systems around you. That's an interesting evolutionary sketch of the field.

      To me this article is as much about power and generations as it is about a professional field's inability to apply its own expertise to itself.

      culture war as generational war, but also perhaps US-specific. Also the culture war seems to be precisely about the individual vs the collective influence on the individual. The old guard feel individually blamed for things that the new guard says are a collective thing to reckon with. Where again the responses of each are seen through the other's lens. There's now no way to resolve that easily. Change happens when the old people die, said Howard. Seems to be at issue here too.

    1. https://web.archive.org/web/20230613121025/https://www.workfutures.io/p/note-what-do-we-do-when-we-cant-predict

      Stowe says the 'unpredictability' that e.g. investors see comes down to there being no way to assess risk in the complexity created by the global network. Points to an older piece on uncertainty, risk and ambiguity: https://www.sunsama.com/blog/uncertainty-risk-and-ambiguity explore.

      I would say that in complexity you don't try to predict the future, as that is based on linear causal chains of the knowable and known realms; you try to probe the future, running multiple small probes (some contradictory) and feeding those that yield results.

    1. In an ever more unequal world, it is perhaps not surprising that we are splitting into news haves and have-nots. Those who can afford and are motivated to pay for subscriptions to access high-quality news have a wealth of choices: newspapers such as The Times, The Washington Post, The Wall Street Journal and The Financial Times compete for their business, along with magazines such as The New Yorker and The Atlantic. Niche subscription news products serving elite audiences are also thriving and attracting investment — publications like Punchbowl News, Puck and Air Mail. The people who subscribe to these publications tend to be affluent and educated. It bodes ill for our democracy that those who cannot pay — or choose not to — are left with whatever our broken information ecosystem manages to serve up, a crazy quilt that includes television news of diminishing ambition, social media, aggregation sites, partisan news and talk radio. Yes, a few ambitious nonprofit journalism outlets and quality digital news organizations remain, but they are hanging on by their fingernails. Some news organizations are experimenting with A.I.-generated news, which could make articles reported and written by actual human beings another bauble for the Air Mail set, along with Loro Piana loafers and silk coats from the Row.

      Opinion piece on how news is becoming a have/have-not thing. I assume it was always thus, with the exception of public TV/radio news broadcasting and then the web. So how did 'we' deal with it then?

    1. https://web.archive.org/web/20230612101920/https://thefugue.space/thoughts/the-glimmer

      Spatial computing

      What of early insights wrt [[Ambient Findability by Peter Morville]] 2006, and my conclusion in 2008 that though adding an info layer while interacting in the physical world was key, we put it all in our pocket. I doubt it will end up as ski goggles on our heads much.

      Via [[Boris Mann]] https://blog.bmannconsulting.com/2023/06/08/kharis-oconnell-has.html

    1. I don’t think we have them, except piecemeal and by chance, or through the grace of socially gifted moderators and community leads who patch bad product design with their own EQ

      indeed. Reminds me of Andrew Keen 2009 in Hamburg raging about the lack of community in socmed and then stating, "except Twitter, that's a real community". Disqualifying himself entirely in a single sentence and being laughed at by the audience at Next09. Taking community stewarding aspects as starting point for tools would yield very different results. [[Communitydenken Wenger 20200924110143]]

    2. All this unmobilized love

      Good title, puts humanity front and center in socsoft discussion. Key wrt [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]]

    3. But we also need new generations of user-accountable institutions to realize the potential of new tech tools—which loops back to what I think Holgren was writing toward on Bluesky. I think it’s at the institutional and constitutional levels that healthier and more life-enhancing big-world tools and places for community and sociability will emerge—and are already emerging

      institutionalising as a way for socsoft to become sustainable, other than through for profit structures that have just one aim. Vgl [[2022 Public Spaces Conference]], I have doubts as institutions are slow by design which is what gives them their desirable stability. Vgl [[Invisible hand of networks 20180616115141]] vs markets.

      Also : generations are institutions too. It is needed to repeat these things to new gens, as they take what is currently there as given. Is currently true for things like open data too.

    4. I’ll be speaking with and writing about people working on some of the tools and communities that I think help point ways forward—and with people who’ve built fruitful, immediately useful theories and practices

      Sounds interesting. Add to feeds. Wrt [[Invisible hand of networks 20180616115141]] scaling comes from moving sideways, repetition and replication. And that takes gathering and sharing (through the network) of examples. Vgl [[OurData.eu Open Data Voorbeelden 20090720142847]] but for civic tech, socsoft? What would it look like?

    5. The big promise of federated social tools is neither Mastodon (or Calckey or any of the other things I’ve seen yet) nor the single-server Bluesky beta—it’s new things built in new ways that use protocols like AT and ActivityPub to interact with the big world.

      Vgl [[Build protocols not platforms 20190821202019]] I agree. Kissane says use protocols in new ways for new tools, starting from the premise of actually social software.

    6. we’ve seen weirdly little experimentation with social forms at scale

      yes, we call it social media these days, and the focus is on media, not social. Yet [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]], meaning we should design such tools starting from human social dynamics.

    7. Where are the networks that deeply in their bones understand hospitality vs. performance, safe-to vs. safe-from, double-edged visibility, thresholds vs. hearths, gifts vs. barter, bystanders vs. safety-builders, even something as foundational as power differentials?


    8. Even most of the emergent gestures in our interfaces are tweaks on tech-first features—@ symbols push Twitter to implement threading, hyperlinks eventually get automated into retweets, quote-tweets go on TikTok and become duets. “Swipe left to discard a person” is one of a handful of new gestures, and it’s ten years old.

      Author discusses specific socially oriented interface functions (left/right swiping, @-mentions) that are few and old. There are also the personal notes on new connections in Xing and (later) LinkedIn, imo. And the groupings/circles in various platforms. Wrt social: adding qualitative descriptions to a connection, e.g. to enable pattern detection, would be interesting, as is moving beyond just hub with spokes (me and my connections) and allowing me to add connections I see between people I'm connected to. All non-public though, making it unlikely for socmed. Vgl [[Personal CRM as a Not-LinkedIn – Interdependent Thoughts 20210214170304]]

    9. https://web.archive.org/web/20230612090744/https://erinkissane.com/all-this-unmobilized-love

      Reminds me of https://www.zylstra.org/blog/2006/09/barcamp_brussel/ #2006/09/24 and the session I did with [[Boris Mann]] on 'all the things I need from social media, they don't provide yet' phrasing [[People Centered Navigation 20060930163901]]. http://barcamp.org/w/page/400567/BarCampBrussels

    1. https://web.archive.org/web/20230609140440/https://techpolicy.press/artificial-intelligence-and-the-ever-receding-horizon-of-the-future/

      Via Timnit Gebru https://dair-community.social/@timnitGebru/110498978394074048

    2. As the EU heads toward significant AI regulation, Altman recently suggested such regulation might force his company to pull out of Europe. The proposed EU regulation, of course, is focused on copyright protection, privacy rights, and suggests a ban on certain uses of AI, particularly in policing — all concerns of the present day. That reality turns out to be much harder for AI proponents to confront than some speculative future

      While wrongly describing the EU regulation on AI, author rightly points to the geopolitical reality it is creating for the AI sector. AIR is focused on market regulation, risk mitigation wrt protection of civic rights and critical infrastructure, and monopoly-busting/level playing field. Threatening to pull out of the EU is an admission you don't want to be responsible for your tech at all. And it thus belies the ethical concerns voiced through proximate futurising. Also AIR is just one piece of that geopolitical construct, next to GDPR, DMA, DSA, DGA, DA and ODD which all consistently do the same things for different parts of the digital world.

    3. In 2010, Paul Dourish and Genevieve Bell wrote a book about tech innovation that described the way technologists fixate on the “proximate future” — a future that exists “just around the corner.” The authors, one a computer scientist, and the other a tech industry veteran, were examining emerging tech developments in “ubiquitous computing,” which promised that the sensors, mobile devices, and tiny computers embedded in our surroundings would lead to ease, efficiency, and general quality of life. Dourish and Bell argue that this future focus distracts us from the present while also absolving technologists of responsibility for the here and now.

      Proximate Future is a future that is 'nearly here' but never quite gets here. Ref posits this is a way to distract from issues around a tech now and thus lets technologists dodge responsibility and accountability for the now, as everyone debates the issues of a tech in the near future. It allows the technologists to set the narrative around the tech they develop. Ref: [[Divining a Digital Future by Paul Dourish Genevieve Bell]] 2010

      Vgl the suspicious call for reflection and pause wrt AI by OpenAI's people and other key players. It's a form of [[Ethics futurising dark pattern 20190529071000]]

      It may not be a fully intentional bait and switch all the time though: tech predictions, including the Gartner hype cycle, put key future events a steady 10 yrs into the future. And I've noticed the same when it comes to open data readiness, and before that in knowledge management (present vs desired state) [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]. It simply seems the human capacity to project oneself into the future has a horizon of about 10 yrs.

      Contrast with: adjacent possible which is how you make your path through [[Evolutionair vlak van mogelijkheden 20200826185412]]. Proximate Future skips actual adjacent possibles to hypothetical ones a bit further out.

    4. Looking to the “proximate future,” even one as dark and worrying as AI’s imagined existential threat, has some strategic value to those with interests and investments in the AI business: It creates urgency, but is ultimately unfalsifiable.

      Proximate future wrt AI creates a fear (always a useful dark pattern wrt forcing change or selling something) that always remains unfalsifiable. Works the other way around too, as a stalling tactic (tech will save us). Same effect.

    5. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

      What is missing here? The one thing with the highest probability as we are already living the impacts: climate. The phrase itself is not just a strategic bait and switch for the AI businesses, but also a more blatant bait and switch wrt climate politics.

    1. Enthusiasm about Apple's Vision Pro. Rightly points out we've had 3D software for 3 decades (from Traveler and Wolfenstein3D through SL and now Roblox etc.). But ski goggles do not a lifestyle make like Apple's iPod, iPhone and Watch did. It has better capabilities but there's no fundamental difference with the Oculus Rift et al., and the various versions of such devices lying unused, gathering dust in my attic. A neck-RSI wave is incoming if it does take off. Would you want to be seen wearing one in public? AR and MR are powerful; VR won't be mainstream imo unless as a general addiction as per SF tropes.

    1. [[Jaan Tallinn]] is connected to Nick Bostrom wrt the risks of AI / other existential risks, which is problematic. It may be worthwhile to map out these various institutions, donors and connections between them. This to have a better grasp of influences and to formulate responses to the 'tescreal' bunch. Vgl [[2023-longtermism-an-odd-and-peculiar-ideology]] where I observe the same.

    1. We are nowhere near having self-driving cars on our roads, which confirms that we are nowhere near AGI.

      This does not follow. The reason we don't have self-driving cars is that the entire effort is car based, not physical-environment based. Self-driving trains are self-driving because of rails and external sensors and signals. Make rails of data, and self-driving cars are like trains. No AI, let alone AGI, needed. Self-driving cars as an indicator for AGI make no sense. Vgl https://www.zylstra.org/blog/2015/10/why-false-dilemmas-must-be-killed-to-program-self-driving-cars/ and [[Triz denken in systeemniveaus 20200826114731]]

  6. May 2023
    1. Once when asking for directions in NB, an older person said 'straks rechts de macadamweg op' (take the macadam road on the right up ahead), instead of asfaltweg (asphalt road). Macadam roads, named after MacAdam, are an 18th/19th century road building concept of layers of stones in decreasing sizes (the top layer smaller than the average wheel), enabling easier road building and maintenance. Tar was used sometimes to reduce dust, esp. after the intro of cars, which had much wider tires than carriage wheels and created more dust. Until the top layer stones and the tar were pre-mixed as asphalt. Tarmac = tarred macadam.

      Vgl https://hypothes.is/a/h9luNPx5Ee2ZnxcNCCTotA

    1. Interesting examples of shrinking travel time (and costs) in the UK in the 18th and 19th centuries. These examples fit [[De 19e eeuwse infrastructuren 20080627201224]] [[Sociale effecten van 19e eeuwse infra 20080627201425]] I described at Reboot 10, 2008, where the scale of novel infra allowed a shift of regional perspectives to the aggregation level of a nation state. Stross compares travel times of 18th century roads and 19th century rail to the advent of mass flight in the 20th, which is similar in time/cost. It's also a qualitative shift away from nation to mass and global (but with the nation as go-between and shorthand)

    1. Dave Pollard writes about types of silence and its cultural role in different situations. Prompted by a K-cafe by David Gurteen. Great to see such old network connections still going strong.

      Book mentioned [[The Great Unheard at Work by Mark Cole and John Higgins]] something for the antilib re power asymmetries?

    1. Chatti notes that Connectivism misses some concepts, which are crucial for learning, such as reflection, learning from failures, error detection and correction, and inquiry. He introduces the Learning as a Network (LaaN) theory which builds upon connectivism, complexity theory, and double-loop learning. LaaN starts from the learner and views learning as the continuous creation of a personal knowledge network (PKN).[18]

      Learning as a Network LaaN and Personal Knowledge Network PKN , do these labels give me anything new?

      Mohamed Amine Chatti: The LaaN Theory. In: Personalization in Technology Enhanced Learning: A Social Software Perspective. Aachen, Germany: Shaker Verlag, 2010, pp. 19-42. http://mohamedaminechatti.blogspot.de/2013/01/the-laan-theory.html I've followed Chatti's blog in the past I think. Prof. Dr. Mohamed Amine Chatti is professor of computer science and head of the Social Computing Group in the Department of Computer Science and Applied Cognitive Science at the University of Duisburg-Essen. (did his PhD at RWTH in 2010, which is presumably how I came across him, through Ralf Klamma)

    1. Dave Troy is a US investigative journalist, looking at the US infosphere. Places resistance against disinformation not as a matter of factchecking and technology but one of reshaping social capital and cultural network topologies.

      Early work by Valdis Krebs comes to mind vgl [[Netwerkviz en people nav 20091112072001]] and how the Finnish 'method' seemed to be a mix of [[Crap detection is civic duty 2018010073052]] and social capital aspects. Also re taking an algogen text as is / stand alone artefact vs seeing its provenance and entanglement with real world events, people and things.

    1. The linked Mastodon thread gives a great example of using Obsidian (but could easily have been Tinderbox or any similar tool) for a journalism project. I can see myself do this for some parts of my work too. To verify, see patterns, find omissions etc. Basically this is what Tinderbox is for: while writing, keep track of characters, timelines, events etc.

    1. This simple approach to avoiding bad decisions is an example of second-level thinking. Instead of going for the most immediate, obvious, comfortable decision, using your future regrets as a tool for thought is a way to ensure you consider the potential negative outcomes.

      Avoiding bad decisions isn't the same as making a constructive decision though. This here is more akin to postponed gratification.

    2. This visualisation technique can be used for small and big decisions alike. Thinking of eating that extra piece of cake? Walk yourself through the likely thoughts of your future self. Want to spend a large sum of money on a piece of tech you’re not sure yet how you will use? Think about how your future self will feel about the decision

      Note that these are examples that imply that using regret of future self in decision making is mostly for deciding against a certain action (eat cake, buy new toy).

    3. Instead of letting your present self make the decision on their own, ignoring the experience of your future self who will need to deal with the consequences later, turn the one-way decision process into a conversation between your present and future self.

      As part of decision making involve a 'future self' so that different perspective(s) can get taken into account in a personal decision on an action.

    4. Bring your future self in the decision-making process

      Vgl Vinay Gupta's [[Verantwoording aan de kinderen 20200616102016]] as a way of including future selves, by tying consequence evalution to the human rights of children.

    5. In-the-moment decisions have a compound effect: while each of them doesn’t feel like a big deal, they add up overtime.

      Compounding plays a role in any current decision. Vgl [[Compound interest van implementatie en adoptie 20210216134309]] [[Compound interest of habits 20200916065059]]

    6. temporal discounting. The further in the future the consequences, the least we pay attention to them

      Temporal discounting: future consequences are taken into account as an inverse of time. It's based on urgency as a survival trait.
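The note above can be sketched with the standard hyperbolic discounting formula (my addition, not from the source); the discount rate `k` is an illustrative assumption:

```python
def discounted_value(amount, delay_days, k=0.05):
    """Hyperbolic discounting: weight falls off as the inverse of delay,
    i.e. 'the further in the future the consequences, the less we pay
    attention to them'."""
    return amount / (1 + k * delay_days)

# The same 100-unit consequence, weighed now vs a year out
print(discounted_value(100, 0))    # immediate: full weight
print(discounted_value(100, 365))  # a year out: a small fraction of the weight
```

The 'conversation with your future self' trick above amounts to deliberately undoing this curve: re-weighing a far-off consequence as if it were immediate.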

    1. Agent-regret seems a useful term to explore. Also in less morally extreme settings than the accidental killing in this piece.

    1. New-to-me form of censorship evasion: an easter egg room in a mainstream online game that itself is not censored. Finnish newspaper Helsingin Sanomat has been putting its reporting on the Russian war on Ukraine inside a level of the online FPS game Counter-Strike, translated into Russian. This as a way to circumvent Russian censorship that blocks Finnish media. It saw 2k downloads from unknown geographic origins, so the effect might be very limited.

    1. After 29 billion USD in 2 yrs, the Metaverse is still where it was, and where Second Life already was in 2003 (Linden Labs and their product Second Life still exist and have been profitable since their start). I warned a client against jumping into this Meta stuff, as the talk and the walk were not a single thing: nothing beyond capabilities that have existed for two decades. https://www.zylstra.org/blog/2022/02/was-second-life-ahead-or-metaverse-nothing-really-new/ and https://www.zylstra.org/blog/2021/11/metaverse-reprise/ Good thing they didn't change their name to anything related .....

    1. Where are the thinkers who always have “a living community before their eyes”?

      I suspect within the living community in question. The scientific model of being an outside observer falls flat in a complex environment, as any self-styled observer is part of it, and can only succeed by realising that. Brings me to action research too. If they're hard to find from outside such a living community that's probably because they don't partake in the academic status games that run separate from those living communities. How would you recognise one if you aren't at least yourself a boundary spanner to the living community they are part of?

    2. For intellectuals of this sort, even when they were writing learned tomes in the solitude of their studies, there was always a living community before their eyes

      This quote is about early Christian bishops from The Spirit of Early Christian Thought by Robert Wilken. Not otherwise of interest to me, except this quote that Ayjay lifts from it. 'Always a living community before their eyes' is I realise my take on pragmatism. Goes back to [[Heinz Wittenbrink]] when he wrote about my 'method' in the context of #stm18 https://www.zylstra.org/blog/2018/09/heinz-on-stm18/

    1. Another downside to using Gutenberg’s sidebar panels is that, as long as I want to keep supporting the classic editor, I’ve basically got to maintain two copies of the same code, one in PHP and another in JavaScript.

      Note to self: getting into WP Gutenberg is a shift deeper into JS and less PHP. My usual entry into creating something for myself is to base it on *AMP (MAMP now), so I can re-use what I have in PHP and MySQL as a homecook.

    1. The number of EVs in Norway is impacting air quality in Oslo ('we have solved the NOx issue', it says). Mentions electrified building machinery also reducing noise and NOx on building sites. This has been a long time coming: in [[Ljubljana 2013]] there was this Norwegian guy who told me EVs had started leading new car sales. Via Bryan Alexander.


    1. https://web.archive.org/web/20230507143729/https://ec.europa.eu/commission/presscorner/detail/en/ip_23_2413

      The EC has designated the first batch of VLOP and VLOSE under the DSA

      consultation on data access to researchers is opened until 25 May. t:: need to better read Article 41? wrt this access. Lots of conspiracytalk around it re censorship, what does the law say?

    1. European digital infrastructure consortia are as of #2022/12/14 a new legal entity. Decision (EU) 2022/2481 of 14 December 2022 establishing the Digital Decade Policy Programme 2030

      Requirement is that Member States may implement a multi-country project by means of an EDIC. The EC will then create them as a legal entity by the act of an EC decision on the consortium funding. There is a public register for them.

      No mention of UBO (although if members are published, those members will have UBO registered).

    1. Amazon has a new set of services that include an LLM called Titan and corresponding cloud/compute services, to roll your own chatbots etc.

    1. Databricks is a US company that released Dolly 2.0 an open source LLM.

      (I see little mention of stuff like BLOOM, is that because it currently isn't usable, US-centrism or something else?)

    1. What Obs Canvas provides is a whiteboard where you can add notes, embed anything, create new notes, and export of the result.

      Six example categories of using Canvas in Obsidian:
      - Dashboard
      - Create flow charts
      - Mindmaps
      - Mapping out ideas as Graph View replacement
      - Writing, structure an article ([[Ik noem mijn MOCs Olifantenpaadjes 20210313094501]])
      - Brainstorming (also a Graph View replacement)

      I have used [[Tinderbox]] as canvas / outliner (as it allows view-switch between them) for dashboards mostly, as well as for braindumping and then mapping it for ideas and patterns.

      Canvas with Excalidraw may help escape the linearity of a note writing window (atomic notes are fine as linear texts)

    1. I have decided that the most efficient way to develop a note taking system isn’t to start at the beginning, but to start at the end. What this means, is simply to think about what the notes are going to be used for

      yes. Me: re-usable insights from project work, exploring defined fields of interest to see adjacent topics I may move into or parts to currently focus on, blogposts on same, see evolutionary patterns in my stuff.

      Btw need to find a diff term than output, too much productivity overtones. life isn't 'output', it's lived.

    2. seriously considering moving my research into a different app, or vault to keep it segregated from the slip box

      ? The notes are the research/learning, no? Not only a residue of it. Is this a mix-up between the old stock-and-flow discussion in (P)KM and the sense it needs to be one or the other? Both! That allows dancing with it.

    1. Kate Darling wrote a great book called The New Breed where she argues we should think of robots as animals – as a companion species who compliments our skills. I think this approach easily extends to language models.

      Kate Darling (MIT, Econ/Law from Uni Basel and ETH ZH) https://en.wikipedia.org/wiki/Kate_Darling http://www.katedarling.org/ https://octodon.social/@grok

      antilibrary add [[The New Breed by Kate Darling]] 2021 https://libris.nl/boek?authortitle=kate-darling/the-new-breed--9781250296115#

      Vgl the 'alloys' in [[Meru by S.B. Divya]]

    2. Language models are very good at some things humans are not good at, such as search and discovery, role-playing identities/characters, rapidly organising and synthesising huge amounts of data, and turning fuzzy natural language inputs into structured computational outputs.And humans are good at many things models are bad at, such as checking claims against physical reality, long-term memory and coherence, embodied knowledge, understanding social contexts, and having emotional intelligence.So we should use models to do things we can’t do, not things we’re quite good at and happy doing. We should leverage the best of both kinds of “minds.”

      The Engelbart perspective on how models can augment our cognitive abilities. Machines for search/discovery (of patterns I'd add, and novel outliers), role play (?, NPCs?, conversational partner Luhmann-like, learning buddy?), structuring, lines of reasoning, summaries. (Of the last, summarising may actually be needed human work: going from the broader, richer material to the summarised outline is part of the internalisation process in learning.)

      Human: access to reality, social context, emotional intelligence, longterm memory (machines can help here too obvs), embodied K. And actual real world goals / purposes!

    3. Making these models smaller and more specialised would also allow us to run them on local devices instead of relying on access via large corporations.

      this. Vgl [[CPUs, GPUs, and Now AI Chips]] hardware with ai on them. Vgl [[Everymans Allemans AI 20190807141523]]

    4. They're just interim artefacts in our thinking and research process.

      Weave models into your processes, don't shove them between you and the world by having them create the output. Doing that is diminishing yourself and your own agency. Vgl [[Everymans Allemans AI 20190807141523]]

    5. One alternate approach is to start with our own curated datasets we trust. These could be repositories of published scientific papers, our own personal notes, or public databases like Wikipedia.We can then run many small specialised model tasks over them.

      Yes, if I could run my own notes of 3 decades or so on an LLM locally (where it doesn't feed the general model), that I would do instantly.

    6. The question I want everyone to leave with is which of these possible futures would you like to make happen? Or not make happen?
      1. Passing the reverse Turing test
      2. Higher standards, higher floors and ceilings
      3. Human centipede epistemology (ugh what an image)
      4. Meatspace premium
      5. Decentralised human authentication
      6. The filtered web

      Intuitively I think 1, 4, and 6 already de facto exist in the pre-generative-AI web, and will get more important. Tech bros will go all in on 5, and I do see a role for it (e.g. to vouch that a certain agent acts on my behalf). I can see the floor raising of 2, and the ceiling raising too, but only if it is a temporary effect towards a next 'stable' point (or it will be a race we'll lose; grow sideways, not only up). Future 3 is def happening in essence, but it will make the web useless so there's a hard stop to this scenario, at high societal cost. Human K as such isn't dependent on the web or a single medium, and if it all turns to ashes, other pathways will come up (which may again be exposed to the same effect though)

    7. A more ideal form of this is the human and the AI agent are collaborative partners doing things together. These are often called human-in-the-loop systems.

      Collaborative is different from shifting the locus of agency to the human; it implies shared agency. Also, human in the loop I usually see used not for agency but for control (final decision is a human) and hence liability. (Which is often problematic because the human is biased to accept conclusions presented to them.) Meant as a safeguard only, not changing the role of the model agent, or intended to shift agency.

    8. I’m on Twitter @mappletonsI’m sure lots of people think I’ve said at least one utterly sacrilegious and misguided thing in this talk.You can go try to main character me while Twitter is still a thing.

      Ha! :D

    9. I tried to come up with three snappy principles for building products with language models. I expect these to evolve over time, but this is my first passFirst, protect human agency. Second, treat models as reasoning engines, not sources of truth And third, augment cognitive abilities rather than replace them.

      Use LLM in tools that 1. protect human agency 2. treat models as reasoning engines, not source of truth / oracles 3. augment cog abilities, no greedy reductionism to replace them

      I would not just protect human agency, which turns our human efforts into a preserve; LLM tools need to increase human agency (individually and societally). 3: yes, we must keep Engelbarting! Lack of 2 is the source of the hype balloon we need to pop. It starts with avoiding anthropomorphizing through our idiom around these tools. It will be hard. People want their magic wand, not the colder realism of 2 (you need to keep sorting out your own messes, but with a better shovel)

    10. At this point I should make clear generative AI is not the destructive force here. The way we’re choosing to deploy it in the world is. The product decisions that expand the dark forestness of the web are the problem.So if you are working on a tool that enables people to churn out large volumes of text without fact-checking, reflection, and critical thinking. And then publish it to every platform in parallel... please god, stop.So what should you be building instead?

      tech bro's will tech bro, in short. I fully agree, I wonder if this one sentence is enough to balance the entire talk until now not challenging the context of these tool deployments, but only addressing the symptoms and effects it's causing?

    11. We will eventually find it absurd that anyone would browse the “raw web” without their personal model filtering it.

      yes, it already is that way in effect.

    12. In the same way, very few of us would voluntarily browse the dark web. We’re quite sure we don’t want to know what’s on it.

      indeed, that's what it currently looks like. However.... I would not mind my agents going over the darkweb as a precaution or as a check for patterns. At issue is that me doing that personally now takes way too much time for the small possibility I catch something significant. If I can send out agents, the time spent wouldn't matter. Of course at scale it would remove the dark web one more step into the dark, as when all send their agents the darkweb is fully illuminated.

    13. We will have to design this very carefully, or it'll give a whole new meaning to filter bubbles.

      Not just bubble, it will be the FB timeline. Key here is agency, and design for human biases. A model is likely much better than I to manage the diversity of sources for me, if I give it a starting point myself, or to see which outliers to include etc. Again I think it also means moving away from single artefacts. Often I'm not interested in what everyone is saying about X, but am interested in who is talking about X. Patterns not singular artefacts. See [[Mijn ideale feedreader 20180703063626]]

    14. I expect these to be baked into browsers or at the OS level.These specialised models will help us identify generated content (if possible), debunk claims, flag misinformation, hunt down sources for us, curate and suggest content, and ideally solve our discovery and search problems.

      Appleton suggests agents to fact check / filter / summarise / curate and suggest (those last two are more personal than the others, which are the grunt work of infostrats) would become part of your browser. Only if I can myself strongly influence what it does (otherwise it is the FB timeline all over again!)

      If these models become part of the browser, do we still need the browser as a metaphor for a window on the web, or surfing the net? Why wouldn't those models come up with whatever they grabbed from the web/net/darkweb in the right spot in my own infostrats? The browser is itself not a part of my infostrats, it's the starting point of it, the viewer on the raw material. Whatever I keep from browsing is when PKM starts. When the model filters / curates why not put that in the right spots for me to start working with it / on it / processing it? The model not as part of the browser, but doing the actual browsing, an active agent going out there to flag patterns of interest (based on my prefs/current issues etc) and organising it for me for my next steps? [[Individuele software agents 20200402151419]]

    15. Those were all a bit negative but there is some hope in this future. We can certainly fight fire with fire. I think it’s reasonable to assume we’ll each have a set of personal language models helping us filter and manage information on the web

      Yes, agency at the edges. Ppl running their own agents. Have your agents talk to my agents to arrange a meeting etc. That actually frees up time. Have my agent check out the context and background of a text to judge whether it's a human author or not etc. [[Persoonlijke algoritmes als agents 20180417200200]] [[Individuele software agents 20200402151419]]

    16. People will move back to cities and densely populated areas. In-person events will become preferable.

      Ppl never stopped moving into cities. Cities are an efficient form of human organisation. [[De stad als efficientie 20200811085014]]

      In person events have always been preferable because we're human. Living further away with online access has mitigated that, but not undone it.

    17. Once two people, they can confirm the humanity of everyone else they've met IRL. Two people who know each of these people can confirm each other's humanity because of this trust network.

      SSL signing parties etc. Threema, mentioned above. Catfish! Scale is an issue in the sense that social distance will remain social distance, so it still leaves you with the question of how to deal with someone at a large social distance (as is an issue on the web now; how we solve it is lurking / interacting, and then when the felt distance is smaller going IRL)

    18. As we start to doubt all “people” online, the only way to confirm humanity is to meet offline over coffee or a drink.

      this is already common for decades, not because of doubt, but because of being human. My blogging since 2002 has created many new connections to people ('your imaginary friends', as the IRL friends of a friend teasingly call them), and almost immediately a shared need was felt to meet up in person. Online allowed me to cast a wider net for connections, but over time that was spun into something IRL. I visited conferences for this, organised conferences for it, traveled to people's homes, many meet-ups; our birthday unconferences are also a shape of this. Cf. [[Menselijk en digitaal netwerk zijn gelijksoortig 20200810142551]] Dopplr serviced this.

    19. Next, we have the meatspace premium. We will begin to preference offline-first interactions. Or as I like to call them, meatspace interactions.

      meat-space premium, chuckle.

    20. study done this past December to get a sense of how possible this is: "Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers" – Catherine Gao, et al. (2022). Blinded human reviewers were given a mix of real paper abstracts and ChatGPT-generated abstracts for submission to 5 of the highest-impact medical journals.

      I think these types of tests can only result in showing humans failing at them. Because the test is reduced to judging only the single artefact as a thing in itself, no context etc. That's the basic element of all cons: make you focus narrowly on something, where the facade is, and not where you would find out it's fake. Turing isn't about whether something's human, but whether we can be made to believe it is human. And humans can be made to believe a lot. Turing needs to keep you from looking behind the curtain / in the room to make the test work, even in its shape as a thought experiment. The study (judging by the sentences here) is a Turing test in the real world. Why would you not look behind the curtain? This is the equivalent of MIT's tedious trolley-problem fixation and calling it ethics of technology, without ever realising that the way out of their false dilemmas is acknowledging nothing is ever a di-lemma but always a multi-lemma; there are always myriad options to go for.

    21. Takes the replication crisis to a whole new level. Just because words are published in journals does not make them true.

      Agreed, still this was true before generative AI too. There's a qualitative impact to be expected from this quantitative shift [[Kwantiteit leidt tot kwaliteit 20201211155505]], and it may well be the further/complete erosion of scientific publishing in its current form. Which likely isn't bad, as it is way past its original purpose already: making dissemination cheaper so other scientists can build on it. Dissemination has no marginal costs attached anymore since digitisation. Needs a new trusted human system for sharing publications though, where peer network precedes submission of things to a pool of K.

    22. if content generated from models becomes our source of truth, the way we know things is simply that a language model once said them. Then they're forever captured in the circular flow of generated information

      This is definitely a feedback loop in play, as LLMs already emulate bland SEO-optimised text very well because most of the internet is already full of that crap. It's just a bunch of sites though, and mostly other sources serve as sources of knowledge, is it not? So the feedback loop exposes to more people that they shouldn't see 'the internet' as the source of all truth? And is this feedback loop not pointing to people simply stopping to take this stuff in (the writing part does not matter when there's no reader for it)? Unless curated, filtered etc. by verifiable human actors? Are we about to see personal generative agents that can do lots of pattern hunting for me, based on my [[Social Distance als ordeningsprincipe 20190612143232]] and [[Social netwerk als filter 20060930194648]]?

    23. We can publish multi-modal work that covers both text and audio and video. This defence will probably only last another 6-12 months.

      Multi-modal output can for now still suggest there's a human at work, not a generative agent. But multi-modal output can soon, if not already, also be generated. This still seems to focus on the output as the thing authenticated to identify human making, output that is connected to other generated output. There's still no link to things outside the output, into the author's life e.g. Can one fake the human process towards output? That process is not a one-off thing (me writing this in a certain way), but a continuous and evolving thing (me writing this in a certain way as part of a certain information process, connected to certain of my work processes etc.). Seen from processes, multi-modal output isn't a different media format; it is also work results, projects created, agency in the physical world. In those processes all output is an intermediate result. Because of those evolving processes my [[Blogs als avatar 20030731084659]]. Cf. [[Kunst-artefact is (tussen)uitkomst proces 20140505070232]] There was this article about an artist, which I can't find again, who saw all his outputs over time as intermediate and as expression of one narrative. This https://www.flickr.com/photos/tonz/52849988531/in/datetaken/ comes to mind too. Provenance and entanglement as indicators of authenticity.

    24. But some people will realise they shouldn’t be letting language models literally write words for them. Instead, they'll strategically use them as part of their process to become even better writers. They'll integrate them by using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.

      This hints towards prompt-engineering, and the role of prompts in human interaction itself [[Prompting skill in conversation and AI chat 20230301120740]]

      High-quality use of generative AI will be about where in a creative / work process you employ it, and to what purpose. Not in accepting the current face presented to us in e.g. ChatGPT: give me an input and I'll give you an output. This in turn requires an understanding of one's own creative work processes, and of where tools can help reduce friction (and where the friction is the actual cognitive work and must not be taken out)

    25. Some of these people will become even more mediocre. They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work. Because that’s exactly the kind of writing language models are trained to do, by definition.

      If you use LLMs to improve your mediocre writing it will help. If you use it to outsource too much of your own cognitive work it will get you the bland SEO texts the LLMs were trained on and the result will be more mediocre. Greedy reductionism will get punished.

    26. This raises both the floor and the ceiling for the quality of writing.

      I wonder about reading, after this entire section about writing. Why would I ever bother reading generated texts (apart from 'anonymous' texts like manuals)? It does not negate the need to be able to identify a human author, on the contrary, but it would also make even the cheapest way of generating too costly if no one will ever read it or act upon it. Current troll farming has effect because we read it, and still assume it's human-written and genuine. As soon as that assumption is fully eroded, whatever gets generated will not have impact, because there's no reader left to be impacted. The current transitional asymmetry between judging output and generating it is a cost to humans; people will learn to avoid that cost. Another angle is humans pretending to be the actual author of generated texts.

    27. And lastly, we can push ourselves to do higher quality writing, research, and critical thinking. At the moment models still can't do sophisticated long-form writing full of legitimate citations and original insights.

      Is this not merely entering an 'arms race' against our own tools? With the rat race effect of higher demands over time?

      What about moving sideways, not up? Bringing in the richness of the layering of our (internal) reality and lives? The entire fabric that makes up our lives, work, communities, societies, indicated more richly in our artefacts. Which is where my sense of beauty is [[Schoonheidsbegrip 20151023132920]] as [[Making sense is deeply emotional 20181217130024]]

    28. On the new web, we’re the ones under scrutiny. Everyone is assumed to be a model until they can prove they're human.

      On a web with many generative agents, all actors are going to be assumed models until it is clear they're really human.

      Maggie Appleton calls this 'passing the reverse Turing test'. She suggests using different languages than English, insider jargon etc, may delay this effect by a few months at most (and she's right, I've had conversations with LLMs in several languages now, and there's no real difference anymore with English as there was last fall.)

    29. When you read someone else’s writing online, it’s an invitation to connect with them. You can reply to their work, direct message them, meet for coffee or a drink, and ideally become friends or intellectual sparring partners. I’ve had this happen with so many people. Highly recommend. There is always someone on the other side of the work who you can have a full human relationship with. Some of us might argue this is the whole point of writing on the web.

      The web is conversation (my blog def is), texts are a means to enter into a conversation, a connection. For algogens the texts are the purpose (and the human time spent evaluating their utility and finding out they're generated is an externalised cost, asymmetric as an LLM can generate more than one can ever evaluate for authenticity). Behind a generated text there's no author to connect to. Not in terms of annotation (cause no author intention) and not in terms of actual connection to the human behind the text.

    30. This clearly does not represent all human cultures and languages and ways of being. We are taking an already dominant way of seeing the world and generating even more content reinforcing that dominance

      Amplifying dominant perspectives, a feedback loop that ignores all of humanity falling outside the original training set, which is impoverishing in itself, while likely also extending the societal inequality that the data represents. Given how such early weaving errors determine the future (see fridges), I don't expect that to change even with more data in the future. The first discrepancy will not be overcome.

    31. This means they primarily represent the generalised views of a majority English-speaking, western population who have written a lot on Reddit and lived between about 1900 and 2023. Which in the grand scheme of history and geography, is an incredibly narrow slice of humanity.

      Appleton points to the inherently severely limited training set and hence perspective that is embedded in LLMs. Most of current human society, of history and future, is excluded. This goes back to my take on data and blind faith in using it: [[Data geeft klein deel werkelijkheid slecht weer 20201219122618]] and [[Check data against reality 20201219145507]]

    32. But a language model is not a person with a fixed identity. They know nothing about the cultural context of who they’re talking to. They take on different characters depending on how you prompt them and don’t hold fixed opinions. They are not speaking from one stable social position.

      Algogens aren't fixed social entities/identities, but mirrors of the prompts

    33. Everything we say is situated in a social context.

      Conversation / social interaction / contactivity is the human condition.

    34. A big part of this limitation is that these models only deal with language. And language is only one small part of how a human understands and processes the world. We perceive and reason and interact with the world via spatial reasoning, embodiment, sense of time, touch, taste, memory, vision, and sound. These are all pre-linguistic. And they live in an entirely separate part of the brain from language. Generating text strings is not the end-all be-all of what it means to be intelligent or human.

      Algogens are disconnected from reality. And, seems a key point, our own cognition and relation to reality is not just through language (and by extension not just through the language center in our brain): spatial awareness, embodiment, senses, time awareness are all not language. It is overly reductionist to treat intelligence or even humanity as language only.

    35. This disconnect between its superhuman intelligence and incompetence is one of the hardest things to reconcile.

      generative AI as very smart and super incompetent at the same time, which is hard to reconcile. Is this a [[Monstertheorie 20030725114320]] style cultural category challenge? Or is the basic one replacing human cognition?

    36. But there are a few key differences between content generated by models versus content made by humans. First is its connection to reality. Second, the social context they live within. And finally their potential for human relationships.

      yes, all generated content is devoid of author context, for example. It's flat and 2D in that sense, and usually fully self-contained: no references to actual experiences, experiments or things outside the scope of the immediate text. As I describe https://hypothes.is/a/kpthXCuQEe2TcGOizzoJrQ

    37. I think we’re about to enter a stage of sharing the web with lots of non-human agents that are very different to our current bots – they have a lot more data on how to behave like realistic humans and are rapidly going to get more and more capable. Soon we won’t be able to tell the difference between generative agents and real humans on the web. Sharing the web with agents isn’t inherently bad and could have good use cases such as automated moderators and search assistants, but it’s going to get complicated.

      Having the internet swarmed by generative agents is unlike current bots and scripts. It will be harder to tell the difference between humans and machines online. This may be problematic for those of us who treat the web as a space for human interaction.

    38. There's a new library called AgentGPT that's making it easier to build these kind of agents. It's not as sophisticated as the sim character version, but follows the same idea of autonomous agents with memory, reflection, and tools available. It's now relatively easy to spin up similar agents that can interact with the web.

      AgentGPT https://agentgpt.reworkd.ai/nl is a version of such Generative Agents. It can be run locally or in your own cloud space. https://github.com/reworkd/AgentGPT

    39. These language-model-powered sims had some key features, such as a long-term memory database they could read and write to, the ability to reflect on their experiences, planning what to do next, and interacting with other sim agents in the game

      Generative agents have a database for long term memory, and can do internal prompting/outputs
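      As a sketch of that architecture (all names here are hypothetical, not taken from the paper's actual code): an agent object with a memory store it writes observations to, a reflection step, and a planning step. In the paper an LLM does the reflecting and planning over retrieved memories; here deterministic stubs keep only the loop's shape visible.

```python
# Toy sketch of the generative-agent loop: memory, reflection, planning.
# All names are illustrative assumptions, not the paper's implementation.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memory: list[str] = []        # long-term memory store

    def observe(self, event: str) -> None:
        self.memory.append(event)          # write an experience to memory

    def reflect(self) -> str:
        # A real agent would prompt an LLM over retrieved memories;
        # here we just summarise by counting them.
        return f"{self.name} remembers {len(self.memory)} event(s)"

    def plan(self) -> str:
        # Planning: pick a next action conditioned on what is in memory.
        return "greet" if any("met" in m for m in self.memory) else "explore"

a = Agent("Klaus")
a.observe("met Maria at the cafe")
print(a.reflect())   # → "Klaus remembers 1 event(s)"
print(a.plan())      # → "greet"
```

      Interacting sim agents are then just several of these objects observing each other's actions as events.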

    40. Recently, people have taken this idea further and developed what are being called “generative agents”. Just over two weeks ago, this paper "Generative Agents: Interactive Simulacra of Human Behavior" came out outlining an experiment where they made a sim-like game (as in, The Sims) filled with little people, each controlled by a language-model agent.

      Generative agents are a sort of indefinite prompt chaining: an NPC or interactive thing can be LLM-controlled. https://www.youtube.com/watch?v=Gz6mAX41fs0 shows this for Skyrim. Appleton mentions a paper https://arxiv.org/abs/2304.03442 which does it for sim-like stuff. See Zotero copy. Cf. [[Stealing Worlds by Karl Schroeder]] where NPCs were a mix of such agents and real people taking on an NPC role.

    41. Recently, people have been developing more sophisticated methods of prompting language models, such as "prompt chaining" or composition. Ought has been researching this for a few years. Recently released libraries like LangChain make it much easier to do. This approach solves many of the weaknesses of language models, such as a lack of knowledge of recent events, inaccuracy, difficulty with mathematics, lack of long-term memory, and their inability to interact with the rest of our digital systems. Prompt chaining is a way of setting up a language model to mimic a reasoning loop in combination with external tools. You give it a goal to achieve, and then the model loops through a set of steps: it observes and reflects on what it knows so far and then decides on a course of action. It can pick from a set of tools to help solve the problem, such as searching the web, writing and running code, querying a database, using a calculator, hitting an API, connecting to Zapier or IFTTT, etc. After each action, the model reflects on what it's learned and then picks another action, continuing the loop until it arrives at the final output. This gives us much more sophisticated answers than a single language model call, making them more accurate and able to do more complex tasks. This mimics a very basic version of how humans reason. It's similar to the OODA loop (Observe, Orient, Decide, Act).

      Prompt chaining is when you iterate through multiple steps from an input to a final result, where the output of intermediate steps is input for the next. This is what AutoGPT does too. Appleton's employer Ought is working in this area too. https://www.zylstra.org/blog/2023/05/playing-with-autogpt/
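      The loop described above (observe, reflect, pick a tool, act, repeat) is mechanically simple. A minimal sketch of such a reasoning loop; it assumes nothing about LangChain's or AutoGPT's actual APIs: `fake_model` is a deterministic stand-in for a real LLM call, and `calculator` is the only tool.

```python
# Minimal sketch of a prompt-chaining / reasoning loop.
# `fake_model` stands in for a real LLM call (normally an API request);
# here it is deterministic so the structure of the loop stays visible.

def calculator(expression: str) -> str:
    """A 'tool' the loop can pick: evaluate simple arithmetic."""
    return str(eval(expression, {"__builtins__": {}}))

def fake_model(goal: str, scratchpad: list[str]) -> tuple[str, str]:
    """Decide the next action from the goal and what is known so far.
    A real implementation would prompt an LLM with the scratchpad."""
    if not scratchpad:
        return ("calculator", "6 * 7")        # decide: use a tool
    return ("finish", f"The answer to '{goal}' is {scratchpad[-1]}")

def run_loop(goal: str, max_steps: int = 5) -> str:
    tools = {"calculator": calculator}
    scratchpad: list[str] = []                # the agent's working memory
    for _ in range(max_steps):
        action, arg = fake_model(goal, scratchpad)   # observe + decide
        if action == "finish":
            return arg
        scratchpad.append(tools[action](arg))  # act; reflect next iteration
    return "gave up"

print(run_loop("what is six times seven?"))
```

      Swapping `fake_model` for a real model call and adding more tools (web search, database query, an API) gives the AutoGPT-style behaviour the note describes.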

    42. Most of the tools and examples I’ve shown so far have a fairly simple architecture. They’re made by feeding a single input, or prompt, into the big black mystery box of a language model. (We call them black boxes because we don't know that much about how they reason or produce answers. It's a mystery to everyone, including their creators.) And we get a single output – an image, some text, or an article.

      generative AI currently follows the pattern of 1 input and 1 output. There's no reason to expect it will stay that way. Outputs can scale: if you can generate one text supporting your viewpoint, you can generate 1000 and spread them all as original content. Using those outputs will get more clever.

    43. By now language models have been turned into lots of easy-to-use products. You don't need any understanding of models or technical skills to use them. These are some popular copywriting apps out in the world: Jasper, Copy.ai, Moonbeam

      Mentioned copywriting algogens:
      * Jasper
      * Wordtune
      * copy.ai
      * quillbot
      * sudowrite
      * copysmith
      * moonbeam

    44. These are machine-learning models that can generate content that before this point in history, only humans could make. This includes text, images, videos, and audio.

      Appleton posits that the waves of generative AI output will expand the dark forest enormously in the sense of feeling all alone as a human online voice in an otherwise automated sea of content.

    45. However, even personal websites and newsletters can sometimes be too public, so we retreat further into gatekept private chat apps like Slack, Discord, and WhatsApp. These apps allow us to spend most of our time in real human relationships and express our ideas, with things we say taken in good faith and opportunities for real discussions. The problem is that none of this is indexed or searchable, and we’re hiding collective knowledge in private databases that we don’t own. Good luck searching on Discord!

      Appleton sketches a layering of dark forest web (silos mainly), cozy web (personal sites, newsletters: public but intentionally less reach), and private chat groups, where you are in pseudo-closed or closed groups. This is not searchable, so any knowledge gained / expressed there is inaccessible to the wider community. Another issue I think is that these closed groups only feel private, but are in fact not. Examples mentioned like Slack, Discord and WhatsApp are definitely not private. The landlord is watching over your shoulder and gathering data as much as the silos up in the dark forest.

    46. We end up retreating to what’s been called the “cozy web.” This term was coined by Venkat Rao in The Extended Internet Universe – a direct response to the dark forest theory of the web. Venkat pointed out that we’ve all started going underground, as it were. We move to semi-private spaces like newsletters and personal websites where we’re less at risk of attack.

      Cozy Web is like Strickler/Liu's black zones above. Sounds friendlier.

    47. The overwhelming flood of this low-quality content makes us retreat away from public spaces of the web. It's too costly to spend our time and energy wading through it.

      Strickler compares this to black zones as described in [[Three Body Problem _ Dark Forest by Cixin Liu]], withdraw into something smaller which is safe but also excluding yourself permanently from the greater whole. Liu describes planets that lower the speed of light around them on purpose so they can't escape their own planet anymore. Which makes others leave them alone, because they can't approach them either.

    48. It’s difficult to find people who are being sincere, seeking coherence, and building collective knowledge in public. While I understand that not everyone wants to engage in these activities on the web all the time, some people just want to dance on TikTok, and that’s fine! However, I’m interested in enabling productive discourse and community building on at least some parts of the web. I imagine that others here feel the same way. Rather than being a primarily threatening and inhuman place where nothing is taken in good faith.

      Personal websites like mine since mid 90s fit this. #openvraag what incentives are there actually for people now to start their own site for online interaction, if you 'grew up' in the silos? My team is largely not on-line at all, they use services but don't interact outside their own circles.

    49. Many people choose not to engage on the public web because it's become a sincerely dangerous place to express your true thoughts.

      The toxicity made me leave FB and reduce my LinkedIn and Twitter exposure. Strickler calls remaining nonetheless the bowling alley effect: you don't like bowling but you know you'll meet your group of regular friends there.

    50. This is a theory proposed by Yancey Striker in 2019 in the article The Dark Forest Theory of the Internet. Yancey describes some trends and shifts around what it feels like to be in the public spaces of the web.

      Hardly a 'theory', more a metaphor re-applied to experiencing online interaction. (Strickler, not Striker)

      The internet feels lifeless: ads, troll factories, SEO optimisation, crypto scams, all automated. No human voices. The internet unleashes predators: aggressive behaviour at scale if you do show yourself to be a human. This is the equivalent of the Dark Forest.

      Yancey Strickler https://onezero.medium.com/the-dark-forest-theory-of-the-internet-7dc3e68a7cb1 https://onezero.medium.com/beyond-the-dark-forest-a905e2dd8ae0 https://www.ystrickler.com/

    51. the dark forest theory of the universe

      A specific proposed solution to [[Fermi Paradox 20201123150738]] where is everybody? Dark forest, it's full of life but if you walk through it it seems empty. Universe seems empty of intelligent life to us as well. Because life forms know that if you let yourself be heard/seen you'll be attacked by predators. Leading theme in [[Three Body Problem _ Dark Forest by Cixin Liu]]

    52. Secondly, I’m what we call “very online”. I live on Twitter and write a lot online. I hang out with people who do the same, and we write blog posts and essays to each other while researching. As if we're 18th-century men of letters. This has led to lots of friends and collaborators and wonderful jobs. Being a sincere human on the web has been an overwhelmingly positive experience for me, and I want others to have that same experience.

      True for me (and E) too. For me it largely was because the internet became a thing right around when I entered uni in the late 80s, and it always was about connecting. Blogging esp early in the years 2002-2009 led to a large part of my personal and professional peers network.

      '18th-c. men of letters': I've sometimes thought about it like that actually, and treat meet-ups etc. like the salons of old, cf. [[Salons organiseren 20201216205547]]

    53. https://web.archive.org/web/20230503150426/https://maggieappleton.com/forest-talk

      Maggie Appleton on the impact of generative AI on internet, with a focus on it being a place for humans and human connection. Take out some of the concepts as shorthand, some of the examples mentioned are new to me --> add to lists, sketch out argumentation line and arguments. The talk represents an updated version of earlier essay https://maggieappleton.com/ai-dark-forest which I probably want to go through next for additional details.

    1. Ought makes Elicit (a tool I should use more often). Maggie Appleton works here. A non-profit research lab into machine learning systems to delegate open-ended thinking to.

    1. https://web.archive.org/web/20230503191702/https://www.rechtenraat.nl/artikel-10-evrm-en-woo/ Caroline Raat starts a case against the Raad van State (RvS) over the application of the WOO's grounds for refusal.

      The argument:
      - A 2017 ruling of the European Court of Human Rights holds that public watchdogs (journalists, bloggers, NGOs, academics) can directly invoke Article 10 ECHR (EVRM) for access to government documents.
      - When such a watchdog requests access, the ECHR takes precedence over the WOO/WOB.
      - The watchdog only has to show that the information is needed to inform the public
      - The watchdog does not have to demonstrate special circumstances
      - Refusal is only possible if there is a pressing societal reason for it
      - Refusal can only be based on the exceptions listed in Article 10(2) ECHR itself, and the refusal must be justified as necessary for society
      - Other grounds for refusal in the WOB/WOO do not apply.

      This could, for example, give the Shell Papers case a very different course.

    1. ICs as hardware versions of AI. Interesting this is happening. Who are the players, what is on those chips? In a sense this is also full circle for neural networks: back in the late 80s / early 90s at uni, neural networks were made in hardware, before software simulations took over as they scaled much better, both in number of nodes and in number of layers between inputs and outputs. #openvraag Any open source hardware on the horizon for AI? #openvraag a step towards an 'AI in the wall' Cf. [[AI voor MakerHouseholds 20190715141142]] [[Everymans Allemans AI 20190807141523]]

    1. https://web.archive.org/web/20230503153010/https://subconscious.substack.com/p/llms-break-the-internet-signing-everything

      Gordon Brander on how Maggie Appleton's point in her talk may be addressed: by humans signing their output (it doesn't preclude humans signing generated output, I suppose, which amounts to the same result as not signing). Appleton suggests IRL meet-ups are key here for the signing. Reminds me of the 'parties' where we'd sign / vouch for each other's SSL certs. Or how in Threema IRL meet-ups are already used to verify Threema profiles as mutually trusted. Noosphere is more than this though? It would replace the current web with its own layer (and issues). Maggie Appleton likewise mentions Dead Internet Theory
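      The 'sign everything' idea is mechanically simple. A minimal sketch using only the Python stdlib; note that a real scheme would use asymmetric signatures (e.g. Ed25519) so anyone can verify with a public key, whereas this HMAC version needs the shared key. All names are illustrative.

```python
import hashlib
import hmac

def sign(text: str, key: bytes) -> str:
    """Produce a signature for a piece of writing."""
    return hmac.new(key, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify(text: str, signature: str, key: bytes) -> bool:
    """Check the text is unchanged and was signed with this key."""
    return hmac.compare_digest(sign(text, key), signature)

key = b"alice-private-key"   # hypothetical; stands in for a real private key
post = "I wrote this myself, no model involved."
sig = sign(post, key)

print(verify(post, sig, key))        # True: intact, key matches
print(verify(post + "!", sig, key))  # False: any edit breaks the signature
```

      The signature only proves who holds the key, not that a human (rather than a model) produced the text, which is exactly the caveat in the note above.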