- Last 7 days
-
www.mcgarrsolicitors.ie www.mcgarrsolicitors.ie
-
First review of the EDPB opinion on personal data in AI models by an Irish lawyer.
-
-
www.edpb.europa.eu www.edpb.europa.eu
-
Opinion by EU Data Protection Board wrt personal data processing in AI models.
-
- Dec 2024
-
www.datalandelijkgebied.nl www.datalandelijkgebied.nl (Over DDF)
-
https://web.archive.org/web/20241214082330/https://www.datalandelijkgebied.nl/pages/over-ddf
Digitale Data Faciliteit DDF
-
The Digitale Data Faciliteit is a service of the partnership of six national implementation organisations (RUO's), commissioned by the Ministry of Agriculture, Fisheries, Food Security and Nature (LVVN). Together we support the provinces in their coordinating role, and other decentralised governments and regional partners, in drawing up and implementing area-specific plans. The six RUO's are: Kadaster, Staatsbosbeheer, Rijksvastgoedbedrijf, Rijkswaterstaat, Rijksdienst voor Cultureel Erfgoed, Rijksdienst voor Ondernemend Nederland.
Six national implementation organisations collaborate in DDF: Kadaster, Staatsbosbeheer, Rijksvastgoedbedrijf, Rijkswaterstaat, RCE, and RVO.
-
-
www.oneworld.nl www.oneworld.nl
-
The nonsense from our far-right cabinet about the non-existent asylum crisis.
-
-
netzpolitik.org netzpolitik.org
-
This seems to describe the proposal awkwardly, because as described it would run afoul of the AI Act. I think it actually says: AI is used in real time to detect suspicious behaviour/movements. Then, with a human decision, a person is followed specifically in video streams, and recordings are used afterwards to fish out a face to be compared with existing databases.
This is not the same as real-time mass identification, which is disallowed in the AI Act. The detection is automated; identification happens later upon human decision.
Also mentions a face database based on public online images for police use.
-
-
artofmemory.com artofmemory.com
-
The Dominic system (after Dominic O'Brien) is a Person-Action image association system for numbers in a specific order. It turns two-digit numbers into famous people, and also associates an action with each famous person, representing the same two-digit number. You can then imagine a four-digit number as a person doing an action; a longer number, like a mobile number, would be several person-action combinations in sequence. The upfront work is memorising the persons and actions as images for each of the 00-99 two-digit numbers. Putting a longer number together then means placing those images in sequence, e.g. in one of your preselected [[Memory palaces 20201007192310]]. The act of remembering is constructing the images and placing them in the memory palace of choice.
There is also a Person-Action-Object (PAO) system, which packs three pairs of digits into one image, covering a million six-digit numbers.
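As a quick sketch of the encoding step in code (the person/action entries below are placeholders, not the canonical Dominic assignments):

```python
# Minimal sketch of Dominic-style encoding: the first digit pair maps to a
# person, the second pair to that person's action. The example tables are
# placeholders; in practice you memorise your own 00-99 associations.
PERSONS = {"07": "James Bond", "15": "Albert Einstein"}       # hypothetical entries
ACTIONS = {"07": "shooting a pistol", "15": "scribbling equations"}

def encode(number: str) -> list[str]:
    """Turn a digit string into person-action images, four digits per image."""
    images = []
    for i in range(0, len(number), 4):
        chunk = number[i:i + 4]
        person = PERSONS.get(chunk[:2], f"person #{chunk[:2]}")
        action = ACTIONS.get(chunk[2:4], f"action #{chunk[2:4]}") if len(chunk) == 4 else ""
        images.append(f"{person} {action}".strip())
    return images

print(encode("0715"))  # ['James Bond scribbling equations']
```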
-
-
tim.blog tim.blog
-
Tim Ferriss posting a text by Gabriel Wyner from 2014 on learning a new language in several steps: 1) hear the novel sounds in the language and how to spell them, 2) learn a list of basic words by connecting them to their image, not their translation, 3) learn (simplified) grammar, 4) continue the game (adding focused vocabulary, reading, listening, speaking etc.).
-
My book, Fluent Forever: How to learn any language fast and never forget it, is an in-depth journey into the language learning process, full of tips, guidelines and research into the most efficient methods for learning and retaining foreign languages.
[[Fluent Forever by Gabriel Wyner]] 2014. vgl [[7 talen in 7 dagen door Gaston Dorren]] which starts more with grammar and reading comprehension actually.
-
Fluency in speech is not the ability to know every word and grammatical formation in a language; it’s the ability to use whatever words and grammar you know to say whatever’s on your mind. When you go to a pharmacy and ask for “That thing you swallow to make your head not have so much pain,” or “The medicine that makes my nose stop dripping water” – THAT is fluency. As soon as you can deftly dance around the words you don’t know, you are effectively fluent in your target language. This turns out to be a learned skill, and you practice it in only one situation: When you try to say something, you don’t know the words to say it, and you force yourself to say it in your target language anyways. If you want to build fluency as efficiently as possible, put yourself in situations that are challenging, situations in which you don’t know the words you need. And every time that happens, stay in your target language no matter what.
speaking fluency comes from staying in the target language.
-
Podcasts and radio broadcasts are usually too hard for an intermediate learner. Movies, too, can be frustrating, because you may not understand what’s going on
suggests podcasts, movies, and radio are too hard to follow at intermediate level.
-
Reading: Books boost your vocabulary whether or not you stop every 10 seconds to look up a word. So instead of torturously plodding through some famous piece of literature with a dictionary, do this: Find a book in a genre that you actually like (The Harry Potter translations are reliably great!) Find and read a chapter-by-chapter summary of it in your target language (you’ll often find them on Wikipedia). This is where you can look up and make flashcards for some key words, if you’d like. Find an audiobook for your book. Listen to that audiobook while reading along, and don’t stop, even when you don’t understand everything. The audiobook will help push you through, you’ll have read an entire book, and you’ll find that it was downright pleasurable by the end.
Reading to deepen understanding: pick any book you like, find online chapter summaries in the target language, and listen to the audiobook while reading along, as it forces you to keep going.
-
Vocabulary Customization: Learning the top 1000 words in your target language is a slam-dunk in terms of efficiency, but what about the next thousand words? And the thousand after that? When do frequency lists stop paying dividends? Generally, I’d suggest stopping somewhere between word #1000 and word #2000. At that point, you’ll get better gains by customizing. What do you want your language to do? If you want to order food at a restaurant, learn food vocabulary. If you plan to go to a foreign university, learn academic vocabulary
Adding vocabulary by frequency of usage has diminishing returns after the first 1k-2k words. Use thematic lists for your own purposes, e.g. [[% Interessevelden 20200523102304]] as a starting point, then go back to the image flashcards used before. I can see building sets like these.
-
Stage 4: The Language Game 3 Months (or as long as you want to keep playing)
Stage 4 is the deepening / getting to fluency bit. Reinforced by actual usage. Either through adding more vocab, reading texts, listening to speakers etc.
-
On its surface, Google Images is a humble image search engine. But hiding beneath that surface is a language-learning goldmine: billions of illustrated example sentences, which are both searchable and machine translatable
Suggests that Google Images captions are a good source of additional example sentences for grammar learning, as the search results include machine translation on mouse-over. He grabs those sentences for flashcards. I think the time spent making the cards may well be the key intervention.
-
How do you learn all the complicated bits of “My homework was eaten by my dog”? Simple: Use the explanations and translations in your grammar book to understand what a sentence means, and then use flashcards to memorize that sentence’s component parts, like this:
Suggests making flashcards for each of the three types of changes in any given example. This allows speeding up compared to the book, as you do them with visuals on flashcards, and spaced repetition weeds out most examples in a grammar book, leaving only the repetition you actually need.
-
In every single language, grammar is conveyed using some combination of three basic operations: grammar adds words (You like it -> Do you like it?), it changes existing words (I eat it -> I ate it), or it changes the order of those words (This is nice -> Is this nice?). That’s it. It’s all we can do. And that lets us break sentences down into grammatical chunks that are very easy to memorize.
Boils grammar down to adding words, changing existing words, changing the order of words. Allows [[Chunking 20210312215715]] that makes it easier to memorise.
-
2-3 months Now it’s time to crack open your grammar book. And when you do, you’ll notice some interesting things: First, you’ll find that you’ve built a rock-solid foundation in the spelling and pronunciation system of your language. You won’t even need to think about spelling anymore, which will allow you to focus exclusively on the grammar. Second, you’ll find that you already know most of the words in your textbook’s example sentences. You learned the most frequent words in Stage 2, after all. All you need to do now is discover how your language puts those words together.
3rd stage is the grammar. Suggests using a book, but with the advantage of already knowing the words and spelling of any examples, allowing focus on the grammar. Takes 2-3 months.
-
To begin any language, I suggest starting with the most common, concrete words,
Suggestion to start learning words with a basic list. Author compiled a list of 625. See [[A Base Vocabulary List for Any Language 20241208160954]]
Suggests the basic list takes about 1-2 months
-
These are words that are common in every language and can be learned using pictures, rather than translations: words like dog, ball, to eat, red, to jump. Your goal is two-fold: first, when you learn these words, you’re reinforcing the sound and spelling foundation you built in the first stage, and second, you’re learning to think in your target language.
Use flashcards with images to learn words in a new language. Skip the translation part. Also reinforces the visual/spatial brain connection. Search images in the target language not with the translation, so subtle diffs in meaning are maintained.
-
Spelling is the easiest part of this process. Nearly every grammar book comes with a list of example words for every spelling. Take that list and make flashcards to learn the spelling system of your language, using pictures and native speaker recordings to make those example words easier to remember.
To learn spelling find a grammar book that has lists of examples. Turn those into flashcards for spelling.
Flashcards are the primary mnemonic tactic in this article.
-
This gives you a few super powers: your well-trained ears will give your listening comprehension a huge boost from the start, and your mouth will be producing accurate sounds. By doing this in the beginning, you’re going to save yourself a great deal of time, since you won’t have to unlearn bad pronunciation habits later on. You’ll find that native speakers will actually speak with you in their language, rather than switching to English at the earliest opportunity.
Tackling hearing and pronunciation upfront makes you sound more fluent. It prevents the effect of never getting a chance to use the language because others switch to yours.
-
Once your ears begin to cooperate, mastering pronunciation becomes a lot easier.
Listening precedes pronouncing. Vgl how I 'suddenly' heard the beginnings and ends of words in Vorarlbergerisch and then quickly learned to speak it too.
-
to rewire your ears to hear new sounds, you need to find pairs of similar sounds, listen to one of them at random (“tyuk!”), guess which one you thought you heard (“Was it ‘gyuk’?”), and get immediate feedback as to whether you were right (“Nope! It was tyuk!”). When you go through this cycle, your ears adapt, and the foreign sounds of a new language will rapidly become familiar and recognizable.
This sounds like an impossible step if you are indeed foreign to a language: how would you ever find such pairings? The video doesn't say, other than describing a feedback system for learning to hear new nuances. I think perhaps using DeepL or some such to read texts to me would help.
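A minimal sketch of such a feedback loop, assuming you supply your own minimal pairs and native-speaker recordings; the pairs below are placeholders and play() is a stub to wire up to an audio player:

```python
import random

# Minimal-pair ear training sketch: pick one sound of a pair at random,
# guess which one you heard, and get immediate feedback on each round.
PAIRS = [("tyuk", "gyuk"), ("ship", "sheep")]  # hypothetical example pairs

def play(word: str) -> None:
    # Replace this stub with real audio playback of a native-speaker clip for `word`.
    print("(imagine the recording playing here)")

def quiz(rounds: int = 10) -> None:
    correct = 0
    for _ in range(rounds):
        a, b = random.choice(PAIRS)
        answer = random.choice((a, b))
        play(answer)
        guess = input(f"Which did you hear, '{a}' or '{b}'? ").strip()
        if guess == answer:
            correct += 1
            print("Right!")
        else:
            print(f"Nope, it was '{answer}'.")
    print(f"Score: {correct}/{rounds}")

if __name__ == "__main__":
    quiz()
```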
-
If I had rushed ahead and started learning words and grammar immediately, I’d have been at a severe disadvantage whenever I learned words with those letter combinations, because I’d be missing the sound connection when trying to build memories for those words
Being familiar with the sounds of the pronunciation helps memorise the words better later, adding a sense to the memory. Vgl [[Fenomenologie Husserl 20200924110518]]
-
Spelling and Sound: Learn how to hear, produce and spell the sounds of your target language
Create a foundation for spelling and sounds, to get a feel/sense of it, making it less 'other'.
-
-
www.youtube.com www.youtube.com
-
Video on learning to hear the difference between novel sounds in a foreign language that you can't easily tell apart. Find such pairs in a language, have a script play them to you randomly, and choose an answer. Feedback will bring you up from random to about 80% correct, rewiring your brain to hear the differences. I bet non-anglophone speakers will find this easier, as they are never accommodated outside their own country.
-
-
artofmemory.com artofmemory.com
-
https://web.archive.org/web/20241207143328/https://artofmemory.com/ Art of memory community platform.
-
-
www.presidency.ro www.presidency.ro
-
The Romanian Presidency has released declassified material looking at the social media role in the unexpected presidential election outcome.
-
-
www.theguardian.com www.theguardian.com
-
Nesrine Malik on changes in the Arab world, pointing to how across the region cultural heritage is disappearing and destroyed. A wider pattern underneath the headlines of late, that has been in place for decades.
I have [[We Need New Stories by Nesrine Malik]] picked up fr Sh&co in [[Paris 2021]].
-
-
modelcontextprotocol.io modelcontextprotocol.io
-
https://web.archive.org/web/20241202062809/https://modelcontextprotocol.io/introduction
Anthropic's Model Context Protocol (MCP) documentation. Includes basic server exercises. At first glance the spec doesn't say much about how resources would actually be connected to an MCP server to serve as context.
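As a concrete reading of those docs, a minimal sketch of exposing a local note file as a resource, assuming the FastMCP helper from the Python SDK; the server name, URI scheme and file path are made up for illustration:

```python
# Minimal sketch: expose a local file as an MCP resource, assuming the FastMCP
# helper from Anthropic's Python SDK. Server name, URI and file path are made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-server")

@mcp.resource("notes://daily")
def daily_note() -> str:
    """Return the contents of a local note so an MCP client can pull it in as context."""
    with open("daily-note.md", encoding="utf-8") as f:
        return f.read()

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, for a local MCP client to connect to
```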
-
-
github.com github.com
-
https://web.archive.org/web/20241202062707/https://github.com/modelcontextprotocol
GitHub repositories for MCP by Anthropic. MIT-licensed at first glance.
-
-
www.forbes.com www.forbes.com
-
Anthropic proposes the 'Model Context Protocol' (MCP) as a standard for connecting local/external information sources to LLMs and agents, to make AI tools more context-aware. The article says MCP is open source. The idea is to attach an MCP server to every source and have that interact over MCP with the MCP client attached to a model and/or tools.
Anthropic is the organisation behind the Claude models.
-
-
writing.bobdoto.computer writing.bobdoto.computer
-
Reinforces the communal nature of knowledge work. All ideas are in communication with and informed by others, regardless of whether we work in direct collaboration with others. A collaborative zettelkasten not only shows this in real time, but allows participants to actively engage with a collective web of insight.
This is key imo: the link between personal knowledge and communal knowledge. In the context of TGL it also means fleshing out the purpose, identity and intent of our work. A step towards and in support of [[Networked Agency 20160818213155]], here functioning as a company. Can I express this to the team?
This is a benefit that Doto does not express (because he stays within the context of the ZK, and this one becomes apparent only if you look at the ZK in the context of the collaborating group). Any non-random and pre-existing group will find their benefit in that context, rather than in the instrument's built-in affordances. Tech + issue = value.
-
Once the ideas have been organized in a way that makes sense, the real writing begins. Bring the ideas and any useful comments into a new writing doc. Decide on who will do what, taking into consideration each participant's strength.
In TGL this would be the different domain teams and project teams, putting the material to their own purposes.
-
simply make note of the connection, state why you've done so, and move on to the next note. No consultation between participants is required.
Agreed, but indeed any connection must be annotated, to be understood by collab partners. Counterexample is the usually meaningless linking in Wikipedia.
-
A Collaborative Zettelkasten for Collaborative Output
Vgl landscapes, IEC writing projects, deskresearch/essays in general.
-
Working with the Collaborative Network of Ideas
For me the purpose of a collab zk would need to be aligned to what drives the collaborators. E.g. how I tie pkm to individual professional activism and autonomy, and extended/aggregated to teamkm it drives the core value of constructive activism of my company, and how we use [[Systems convening denken Wenger Trayner 20230914131102]] to translate that into interventions and desirable client projects. Vgl [[PKM systems convening activisme relatie 20241123085857]] expressing that connection.
-
Participants may or may not have a common output, goal, or project in mind when they start. The only requirements are: all participants add to the collection of main notes all participants establish connections between ideas all participants are free to pull from the zettelkasten for their writing projects
This describes a wiki too. What's the difference? A wiki tends to follow the Wikipedia model perhaps, aiming for completeness / a definitive state? Wikipedia is not atomic in the ZK sense. Also, public wikis (the ones one is by definition aware of) are an output themselves. My internal wiki 2004-2012 was much more atomic and not an output but an instrument. So if a wiki, then more of the instrument style, in other words a ZK by another name. A collective network of meaning and sense-making.
-
-
lmstudio.ai lmstudio.ai
-
LM Studio can run LLMs locally (I have Llama and Phi installed). It also has an API over a localhost webserver. I use that API to make Llama available in Obsidian using the Copilot plugin.
This is the API documentation. #openvraag which other scripts / [[Persoonlijke tools 20200619203600]] can I use this in?
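For reuse in other scripts, a minimal sketch of calling that API directly, assuming LM Studio's default OpenAI-compatible endpoint on localhost:1234; the model id is a placeholder for whatever model is actually loaded locally:

```python
import json
import urllib.request

# Minimal sketch of calling LM Studio's local server from a script, assuming
# the default OpenAI-compatible endpoint on localhost:1234.
URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "llama-3.2-3b-instruct",  # placeholder: use the id LM Studio shows for your loaded model
    "messages": [{"role": "user", "content": "Summarise this note in one line: ..."}],
    "temperature": 0.2,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```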
-
-
www.dreamsongs.com www.dreamsongs.com
-
https://web.archive.org/web/20241201071240/https://www.dreamsongs.com/WorseIsBetter.html
Richard P. Gabriel documents the history behind 'Worse is Better', a talk he held in Cambridge in #1989. The role of Lisp in the AI wave of the time stands out to me, as does the emergence of C++ on Unix and OOP. I remember doing a study project (~'91) with Andre and Martin in C++ v2 because we realised that with OOP it would be easier to solve, and the teacher thought it would be harder for us to use a different language.
via via via Chris Aldrich in h. to Christian Tietze, https://forum.zettelkasten.de/discussion/comment/22075/#Comment_22075 to Christine Lemmer-Webber https://dustycloud.org/blog/how-decentralized-is-bluesky/ to here.
-[ ] find overview of AI history waves and what tech / languages drove them at the time
-
-
-
But perhaps that's too ambitious to suggest taking on for either camp. And maybe it doesn't matter insofar as the real lessons of Worse is Better is that both first mover advantage on a quicker and popular solution outpaces the ability to deliver a more correct and robust position, and entrenches the less ideal system. It can be really challenging for a system that is in place to change itself from its present position, which is a bit depressing.
Succinct description of worse is better
The 'worse' bit moves you along the adjacent possible paths of the [[Evolutionair vlak van mogelijkheden 20200826185412]], whereas the 'better' bit puts you at a peak in the evolutionary landscape from which you can't move and which is hard for others to reach.
via via Chris Aldrich in h. pointing to Christian Tietze comment https://forum.zettelkasten.de/discussion/comment/22075/#Comment_22075 pointing to this Christine Lemmer-Webber post, following it onwards to https://www.dreamsongs.com/WorseIsBetter.html by Richard P. Gabriel
-
- Nov 2024
-
writing.bobdoto.computer writing.bobdoto.computer
-
https://web.archive.org/web/20241130143502/https://writing.bobdoto.computer/how-a-collaborative-zettelkasten-might-work-a-modest-proposal/ [[Bob Doto]] proposes a collaborative note collection. The collaboration is that multiple people add notes and connections, in parallel to their own. Collaboration is both using it as a resource individually and in writing collectively. Read again; think about SC and essay machine contexts for TGL. We already have shared vaults, but no process. Add instructions to Geheugen and link in Notes? Via [[Chris Aldrich]] h.
-
-
annehelen.substack.com annehelen.substack.com
-
https://web.archive.org/web/20241130094952/https://annehelen.substack.com/p/the-kids-are-too-soft
'The kids are too soft': on shifting perceptions of generations, pointing out the obvious lack of attention to cause and effect. Walking barefoot to school in the snow etc., as if it's a sine qua non. Reminds me of a Gulag discussion: 'what doesn't kill you makes you harder' is actually the other way around. The toughest survive, and more might have survived had conditions been better. We see more people survive and mistake it for a lack of selection and rigour, where it's the lifting of more if not all boats.
-
-
dougaldlamont.substack.com dougaldlamont.substack.com
-
https://web.archive.org/web/20241130094714/https://dougaldlamont.substack.com/p/if-mmt-is-wrong-why-is-it-so-much On Modern Monetary Theory.
Vgl [[%Money geld OP]]
-
-
-
Vgl Sunnekloas Ameland
-
-
boffosocko.com boffosocko.com
-
those with a card index or zettelkasten-based reading and note making practice will realize that they’re probably automatically following the advice
Note making does create space for reflection; it is the thinking. But even before notes, annotation does too.
n:: notes as thinking vgl 'writing is thinking' annotation as thinking vs conversation, conversation as thinking through expression
-
followed by a mention that no one does this with the implication that information overload and the pressures of time don’t allow this.
I only feel such overload if I don't pause to reflect and merely keep taking stuff in. [[Information overload 20040327145709]] [[Info overload of overvloed verschil is surprisal 20220810090704]] Information value is not determined by the sender but by the receiver. If I don't pause, the firehose just isn't information. It takes me as an observer to collapse noise into information in my personal reality.
-
Conkin admonished students that for every hour they spend reading, they should spend an hour in reflection.
Paul Conkin (1929-2022, no Wikipedia page), US historian, suggested graduate students spend an hour reflecting for every hour read. Anecdotal quote from a David Blight lecture (link to the video in the post). There is a point here wrt keeping one's own pace and intent behind attention. [[Attention literacy and the value of slow learning 20211209063437]] [[Stuur aandacht met intentie 20220213080032]]
-
-
www.aitidbits.ai www.aitidbits.ai
-
On AI Agents, open source tools. Vgl [[small band AI personal assistant]] these tools need to be small and personal. Not platformed, but local.
-
-
-
Meta's next step in their malicious compliance with the GDPR/DMA is making less-tracking ads (not non-tracking; DOB and location are still used) more annoying than tracking ones. Another round of delaying tactics against clearly spelled out legal requirements: no perceptible difference in service is allowed.
-
-
www.wrecka.ge www.wrecka.ge
-
https://web.archive.org/web/20241127105840/https://www.wrecka.ge/against-the-dark-forest/
This looks very interesting to read (via [[Stephen Downes]]). Any link with Maggie Appleton's use of the dark forest metaphor [[The Expanding Dark Forest and Generative AI]]? Seems to settle on [[People Centered Navigation 20060930163901]] in the end?
-
-
openfuture.eu openfuture.eu
-
Open Future report on exclusivity clauses in the ODD, and the Google book scanning projects. The text seems to say exclusivity clauses have only been regulated since the latest incarnation of the ODD, but in fact such rules have been in place since the very first PSI Directive in 2003, and were steadily tightened in 2010 and 2019, as well as in the DGA. Is the report more nuanced? Report in Zotero.
-
-
techpolicy.press techpolicy.press
-
Data center usage of water and electricity for cooling is inversely connected: either huge amounts of water, or huge amounts of electricity. Beware if plans focus on just one of the two.
-
-
openknowledge.worldbank.org openknowledge.worldbank.org
-
2021 paper looking at data governance legal frameworks in 80 countries globally; the EU and USA are absent (Estonia and the UK are included though), and it lumps Europe and Central Asia together, which leads to phrases like 'the UK and Estonia have this, but elsewhere in the region Kyrgyzstan hasn't', so the region comes out mediocre at best. Apples and oranges. Mentions the GDPR more or less as the single EU framework, despite the 2018 free flow of non-personal data regulation which became applicable in May 2019 (other parts of the EU Data Strategy / single market for data had been announced or proposed by 2021 but were not in place, and aren't mentioned here either).
-
-
wetransform.to wetransform.to
-
WeTransform compares different but similar (D, DK, NL) approaches to adding HVD metadata for harvesting and discovery.
-
-
support.signal.org support.signal.org
-
Signal allows you to set a username. Usernames are unique but temporary (and you can have only one at a time). A username can be used to connect with you without sharing your phone number. Set an optional username in Settings > Profile. They have two digits at the end (which you can set).
Usernames can be shared in three ways:
- tell someone (and then change it so they cannot pass it on further)
- share a QR code
- share a unique URL (which does not contain your username in clear text)
Signal can't 'easily' see which phone number has which username, but given a username it can find the associated phone number. 'Easily' means it can be done though, and thus in both directions.
An old username will become available to others after a week, meaning imo they should not contain any identifiable or associative information.
Found this through someone suggesting that sharing your Signal username through Mastodon would allow private msgs. Yes, but the world will know your username, so you're open to all people who might think it fun to msg you.
-
-
en.wikipedia.org en.wikipedia.org
-
Stafford Beer coined and frequently used the term POSIWID (the purpose of a system is what it does) to refer to the commonly observed phenomenon that the de facto purpose of a system is often at odds with its official purpose
'The purpose of a system is what it does', POSIWID, Stafford Beer 2001. Used as a starting point for understanding a system, as opposed to intention, bias in expectations, moral judgement, or lacking context knowledge.
-
-
ali-alkhatib.com ali-alkhatib.com
-
I’ve come to feel like human-centered design (HCD) and the overarching project of HCI has reached a state of abject failure. Maybe it’s been there for a while, but I think the field’s inability to rise forcefully to the ascent of large language models and the pervasive use of chatbots as panaceas to every conceivable problem is uncharitably illustrative of its current state.
HCI and HCD as fields have failed to respond forcefully to LLM tools and chatbot interfaces being pushed as a generic solution to everything.
-
hegemonic algorithmic systems (namely large language models and similar machine learning systems), and the overwhelming power of capital pushing these technologies on us
author calls LLMs and similar AI tools hegemonic, worsened by capital influx
-
gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm
The author moved from mitigating harm of algorithmic systems to the moral standpoint that actively resisting, sabotaging, and ending AI and its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural category adaptation to monster slaying, cf [[Monstertheorie 20030725114320]]. I empathise, but because of the mention of the attached political projects / structures I also wonder about polarisation in response, with monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.
-
https://web.archive.org/web/20241116074149/https://ali-alkhatib.com/blog/fuck-up-ai Ali Alkhatib (anthropology/informatics academic, QS and HCI) #2024/06/24 call for active resistance against AI / ML
-
-
workforcefuturist.substack.com workforcefuturist.substack.com
-
On AI agents, and the engineering needed to get one going. A few things stand out at first glance: it frames agents as the next hype (Vgl the plateau in model development), says they're for personal tools (which doesn't square with the hype being VC-fuelled; personal tools are not of interest to VCs), and mentions a few personal use cases, e.g. automation. Vgl [[Open Geodag 20241107100937]] Ed Parsons of Google AI on the same topic.
-
-
garymarcus.substack.com garymarcus.substack.com
-
https://web.archive.org/web/20241115134320/https://garymarcus.substack.com/p/confirmed-llms-have-indeed-reached?triedRedirect=true Gary Marcus in a told-you-so piece on algogens hitting a development wall, same as the other piece by Erik Hoel on models plateauing.
-
-
www.theintrinsicperspective.com www.theintrinsicperspective.com
-
https://web.archive.org/web/20241115134446/https://www.theintrinsicperspective.com/p/ai-progress-has-plateaued-at-gpt Erik Hoel notices that LLM development is stalling at the GPT-4 level: no big jumps in recent releases, across the various vendors. Additional scaling is not bringing results. Notice the graph; it might be interesting to see an update in a few months. Mentions overfitting to benchmarks, as in teaching to a specific test.
-
-
dailyyonder.com dailyyonder.com
-
Counternarrative to the rural-urban divide in US politics: data suggests it was city dwellers not showing up for Harris that tipped the balance. Main point: Trump got about the same popular vote numbers, but where Biden got 81M in 2020, Harris got just under 72M, losing the popular vote. The difference is in core metropolitan counties in swing states.
-
-
discussions.apple.com discussions.apple.com
-
defaults write com.apple.mail DisableInlineAttachmentViewing -boolean yes
This worked for me. I had to switch to Apple Mail a few months ago and it is extremely annoying that it previews small attachments inline, at times even resizing them for that and then sending only the smaller version.
-
-
www.zotero.org www.zotero.org
-
A great collection of sources on tools for thought, by the looks of it compiled by [[Chris Aldrich]]. Books, texts and more.
-
-
diginomica.com diginomica.com
-
these teammates
Like MS Teams is your teammate, like your accounting software is your teammate. Do they call their own Atlassian tools teammates too? Do these people at Atlassian get out much? Or don't they realise that the other handles in their Slack channels represent people, not just other bits of software? Did remote work lead to dehumanising co-workers? How else do you come up with this wording? Nothing makes you sound more human than talking about 'deploying' teammates. My money is on this article having been mostly generated; reverse-Turing says it's up to them to say otherwise.
-
There’s a lot to be said for the promise that AI agents bring to organizations.
And as usual in these articles the truth is at the end, it's again just promises.
-
People should always be at the center of an AI application, and agents are no different
At the center of an AI application, like what, mechanical Turks?
-
Don’t – remove the human aspect
After a section celebrating examples doing just that!
-
As various agents start to take care of routine tasks, provide real-time insights, create first drafts, and more, team members can focus on more meaningful interactions, collaboration,
This sentence, preceded by two examples where interactions and collaboration were delegated to bots handing out generated warm feelings, does not convey much positive about Atlassian. It basically says that a lot of human interaction in the org is seen as meaningless: please go do that with a bot, not a colleague. Did their branding AI agent write this?
-
Agents can also help build team morale by highlighting team members' contributions and encouraging colleagues to celebrate achievements through suggested notes
Like Linked-In wants you to congratulate people on their work-anniversary?
-
One of my favorite use cases for agents is related to team culture. Agents can be a great onboarding buddy — getting new team members up to speed by providing them with key information, resources, and introductions to team members.
Welcome in our company, you'll meet your first human colleague after you've interacted with our onboarding-robot for a week. No thanks.
-
inviting a new AI agent to join your team in service of your shared goal
Anthropomorphising should be in this article's don't list. 'Inviting someone onto your team' is a highly social thing. Bringing in a software tool is a different thing.
-
One of our most popular agent use cases for a while was during our yearly performance reviews a few months back. People pointed an agent to our growth profiles and had it help them reframe their self-reflections to better align with career development goals and expectations. This was a simple agent to create an application that helped a wide range of Atlassians with something of high value to them.
An AI agent to help you speak corporate better, because no one actually writes/reflects/talks that way themselves. How did the receivers of these reports perceive this change? Did they think the quality was better, or did all reflections now read the same?
-
Start by practising and experimenting with the basics, like small, repetitive tasks. This is often a great mix of value (time saved for you) and likely success (hard for the agent to screw up). For example, converting a simple list of topics into an agenda is one step of preparing for a meeting, but it's tedious and something that you can enlist an agent to do right away
Low-end tasks for agents don't really need AI, do they? Vgl Ed Parsons last week wrt automation as the AI focus.
-
For instance, a 'Comms Crafter' agent is specialized in all things content, from blogs to press releases, and is designed to adhere to specific brand guidelines. A 'Decision Director' agent helps teams arrive at effective decisions faster by offering expertise on our specific decision-making framework. In fact, in less than six months, we’ve already created over 500 specialized agents internally.
This does not fully chime with my own perception of (AI) agents; at least the titles don't. The tails of the descriptions, 'trained to adhere to brand guidelines' and 'expertise in the internal decision-making framework', make more sense. I suppose I also rail against these being the org's agents, not the team's / professional's agents. Vibes of having an automated political officer in your unit. -[ ] explore the nature and examples of AI agents better for within individual professional scope #ontwikkelingspelen #netag #30mins #4hr
-
-
www.experimental-history.com www.experimental-history.com
-
from 2024/01 by Adam Mastroianni
-
I've been down there enough times to see the same patterns repeat, and sometimes I can even interrupt them. That's why having goofy names for them matters so much, because it reminds me not to believe the biggest bog lie of all: that I'm stuck in a situation unlike any I, or anyone else, has ever seen before
Giving repeating negative patterns around procrastination / not getting into action a silly name helps in defeating the pattern (rather than beating yourself up over it, I suppose).
-
-
untoldmag.org untoldmag.org
-
Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies.There are normalized paths and routine ways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, N.d.R.) are rendered into actual imagery. This process can contribute to dehumanizing people, making them legitimate targets by rendering them invisible.
Ameera Kawash's (artist, researcher) definition of decolonizing AI.
-
-
www.heise.de www.heise.de
-
Exolabs.net experiment running large LLMs locally on 4 combined Mac Minis. Links to a preview and shared code on GitHub. For 6600-9360 you can run a cluster of 4 Minis locally; affordable for SME outfits.
-
-
lexfridman.com lexfridman.com
-
https://web.archive.org/web/20241112122725/https://lexfridman.com/dario-amodei-transcript
Transcript of 5+ hrs (!) of Dario Amodei (CEO Anthropic) talking about AI, AGI and more. Lots to go through it seems. Vgl [[My Last Five Years of Work]] by Amodei's 'chief of staff' whatever that means wrt a CEO other than sounding grandiose.
-
Please note that the transcript is human generated, and may have errors.
Ha, rather than "Please note that the transcript is machine generated and is certain to contain errors".
I'm tempted to run this transcript through Claude for summaries and structure. See how that works.
-
-
interconnected.org interconnected.org
-
That development time acceleration of 4 days down to 20 minutes… that’s equivalent to about 10 years of Moore’s Law cycles. That is, using generative AI like this is equivalent to computers getting 10 years better overnight. That was a real eye-opening framing for me. AI isn’t magical, it’s not sentient, it’s not the end of the world nor our saviour; we don’t need to endlessly debate “intelligence” or “reasoning.” It’s just that… computers got 10 years better.
To [[Matt Webb]] the project using GPT-3 to extract data from web pages saved him 4 days of work (compared to 20 minutes coding up the GPT-3 instructions, and ignoring that GPT-3 then ran overnight). He says that's about 10 years of Moore's law happening to him all at once: 'computers got 10 years better', an enticing thought and framing. It probably depends on the use case; others will lose 10 years of their time making sense of generated nonsense (Vgl the #pke24 experiments I did with text generation: none of it was usable because enough was wrong to not be able to trust anything). For specific niches it's probably true: [[Waar AI al redelijk goed in is 20201226155259]], turning the issue into the time needed to spot those niches for yourself.
-
I was one of the first people to use gen-AI for data extraction instead of chatbots
[[Matt Webb]] used GPT-3 in Feb '23 to extract data from a bunch of webpages. Suggests it's the kernel of the programmatic-AI idea among SV hackers. Vgl Google AI's [[Ed Parsons]] at [[Open Geodag 20241107100937^aiunstructdata]] last week, where he mentioned using AI to turn unstructured (geo) data into structured data. Page found via [[Frank Meeuwsen]] https://frankmeeuwsen.com/2024/11/11/vertragen-en-verdiepen.html
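As a rough sketch of that extraction pattern against a local model (reusing the assumed LM Studio endpoint from above; the model id, fields and page text are made up for illustration):

```python
import json
import urllib.request

# Sketch of the extraction pattern: ask a (local) model to return only JSON
# for a few named fields from a chunk of page text. Endpoint and model id
# reuse the assumed LM Studio defaults; fields and page text are placeholders.
URL = "http://localhost:1234/v1/chat/completions"
PAGE_TEXT = "Acme Conference, 12 March 2025, Utrecht. Tickets from EUR 95."

prompt = (
    "Extract the fields event_name, date, city and price from the text below. "
    "Reply with JSON only.\n\n" + PAGE_TEXT
)
payload = {
    "model": "llama-3.2-3b-instruct",  # placeholder model id
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0,
}
req = urllib.request.Request(URL, data=json.dumps(payload).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    content = json.load(resp)["choices"][0]["message"]["content"]

print(content)  # in practice, parse with json.loads() once you trust the output shape
```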
-
-
european-alternatives.eu european-alternatives.eu
-
Overview of European alternatives for digital products. Not in all relevant categories (e.g. payments), but some novel providers in there.
Explore wrt [[Infosec ladder van techniek en gedrag 20190530190335]] in context TGL.
-
-
theconversation.com theconversation.com
-
On Covid and its long-term impact on cognition.
Vgl [[Wayfinding by Michael Bond]]
-
-
www.baldurbjarnason.com www.baldurbjarnason.com
-
Good and interesting points by Baldur Bjarnason. With arsonists in our own government, and full-blown fascists making a US clean sweep of all branches of government, relevant questions. Vgl [[Mijn werk is politiek 20190921114750]] wrt tech and TGL. What do I take from this? -[ ] read this in detail, compare it with my current work along the same lines.
-
-
berlinwallmap.info berlinwallmap.info
-
Map of the exact location of the Berlin Wall, plotted on a current map of streets and buildings. Useful in pinpointing the exact locations of images I have from my trip in 1987.
-
- Oct 2024
-
-
HedgeDoc is a collaborative markdown tool. Handy for the team for editing Obsidian notes together. It can run on YunoHost, although a 2023 issue/dependency with Chromium is mentioned, of which it is unclear whether it has been resolved.
-[ ] do a test installation of HedgeDoc on YunoHost #webbeheer #tgl
-
-
www.theguardian.com www.theguardian.com
-
The Spanish immigration minister strikes a different tone than other EU member states / Meloni.
-
-
en.wikipedia.org en.wikipedia.org
-
A knower does not stand apart from the universe, but participates personally within it. Our intellectual skills are driven by passionate commitments that motivate discovery and validation. According to Polanyi, a great scientist not only identifies patterns, but also significant questions likely to lead to a successful resolution. Innovators risk their reputation by committing to a hypothesis.
The knower / observer is not separate from the universe, not outside the system boundary. Vgl [[Systems convening landscape als macroscope 20230906115130]] where the convener is an integral part of it too, not an external change agent.
-
-
nl.wikipedia.org nl.wikipedia.org
-
Daniel Clement Dennett (Boston, Massachusetts, 28 March 1942 – Portland, Maine, 19 April 2024) was an American philosopher specialised in questions of consciousness, the philosophy of mind and artificial intelligence.
Hadn't realised Daniel Dennett died last April. I read his The Mind's I (1981), Consciousness Explained (1991) and Darwin's Dangerous Idea (1995) while at university, those last two as they appeared. Have Elbow Room (1984) on the reading stack currently.
-
-
alexwlchan.net alexwlchan.net
-
For a long time, I thought of HTML as a tool for publishing on the web, a way to create websites that other people can look at. But all these websites I’m creating are my local, personal archives – just for me. I’m surprised it took me this long to realise HTML isn’t just for sharing on the web.
Yes. I use lots of small local html/php pages. Also webforms to search websites elsewhere, without going there. I had local pages to browse local image files in the 90s. I started writing html by hand in '93 and still do for local stuff. I do use a local on-device webserver though, as I include php.
-
https://web.archive.org/web/20241017043750/https://alexwlchan.net/2024/static-websites/
I like this idea of having static HTML pages to explore folders; I had that in the 90s to better search image files. The author offers no clues as to how he uses the affordance it provides, though, in terms of 'showing the metadata' they care about and the little bits of extra functionality. And I wonder about the effort involved when adding new files; presumably new files are added manually too, otherwise it's not 'static HTML'. He stores files by year, type and first letter of file name, which makes no immediate sense to me in terms of finding things back. Then again, I never understood why you would have folders per file type: it's like sorting items by the type of box they came in. Good example, though, of making your computer your own.
-
-
www.palladiummag.com www.palladiummag.com
-
Avital Balwit lives in San Francisco and works as Chief of Staff to the CEO at Anthropic
A CoS to a CEO is what? It all reads like your typical mid-20s arrogance before finding out you don't actually know it all, mistaking a hunch/'vision' for reality.
Anthropic is the outfit that made the Claude model.
-
doing more than fairly basic math
Another apples/oranges comparison. We have software that is good at math; regular people call it spreadsheets. What we don't have, also not in algogens, is software that understands what it is doing. 'My model can do sums' is not a useful comparison wrt whether it can do cognitive tasks.
-
the widespread deployment of robotics
Another over-the-horizon precondition for the author's premise mentioned here. The author notices that robots are bound to the laws of nature, and thus develop more slowly than software environments, but doesn't notice the same is true for AI. The difference is that those laws of nature show themselves in every robot, but for AI they get magicked out of sight in data centers etc., although they still apply.
-
Essentially anything that a remote worker can do, AI will do better
Weird notion of remote work as only screen interaction. My team works remotely, meaning they think independently of any screen tasks.
-
Machine learning is a young field,
Young? The author is in their 20s; a case of 'my first encounter with something means it is globally new'?
-
I expect AI to get much better than it is today. Research on AI systems has shown that they predictably improve given better algorithms, more and better quality data, and more computational power. Labs are in the process of further scaling up their clusters—the groupings of computers that the algorithms run on.
Ah, an article based on the assumption of future improvement. Compute and data are limiting factors, and you will end up weighing whether the compute footprint is more efficient than doing it yourself. Data is even more limiting, as the most meaningful stuff is qualitative rather than quantitative, and statistics on the qualitative stuff won't give you meaning (LLMs being a case in point).
-
The shared goal of the field of artificial intelligence is to create a system that can do anything. I expect us to soon reach it.
Is it though? Wrt AGI, that is as far away as before imo. The rainbow never gets nearer, because it depends on your position.
-
The economically and politically relevant comparison on most tasks is not whether the language model is better than the best human, it is whether they are better than the human who would otherwise do that task
True, and that is where this fails outside of bullshit tasks. The unmentioned assumption here is that algogen output can have meaning, rather than just coherence and plausibility.
-
The general reaction to language models among knowledge workers is one of denial.
Equates 'content production' with knowledge work.
-
my ability to write large amounts of content quickly
right. 'content production' where the actual meaning isn't relevant?
-
it can competently generate cogent content on a wide range of topics. It can summarize and analyze texts passably well
'Cogent content' / 'passably well' isn't the quality benchmark for knowledge work though.
-
-
www.sightful.com www.sightful.com
-
Sightful H1 2025 launch on Windows, via John Philpin https://john.philpin.com/2024/10/12/spatial-computing-has.html
Every image change on the page moves focus from this form to the page. Irritating.
-
-
www.felicis.com www.felicis.com
-
https://web.archive.org/web/20241012060204/https://www.felicis.com/insight/the-agent-economy
For the image listing various AI 'agent' services. Some seem dubious as examples, and are not agents but generally 'AI tools'. -[ ] make a list of examples from the illustration [[241012Felicislijstaiagentscorps.png]] #15mins wrt actions/sectors that look interesting. Now, an agent for that task....
-
an AI-first outsourced contact center
So much wrong with that phrase, and a tell for how these corporations view this tech: let's have your customers talk to machines.
-
-
www.theverge.com www.theverge.com
-
imperfect tools for low-stakes tasks.
Seems that way, and likely to mostly remain that way. I'd be curious to incorporate agents into my tasks ([[Aazai CL]] list of such tasks).
Also, burying the lede much? This is the key verdict and it's in the penultimate paragraph.
-
For now, the concept seems to be mostly siloed in enterprise software stacks, not products for consumers.
Real agents would start at the individual level. It all smacks so much of corporations automating away their own direct interaction with customers, because those are a pain to talk to. Blind spot: see the gripes of existing silo customers about the impossibility of getting to talk to someone.
-
a customer service agent
Almost by definition asymmetric, leaving customers talking to a blind wall.
-
The gap between promise and reality also creates a compelling hype cycle that fuels funding
The gap is a constant I suspect. In the tech itself, since my EE days, and in people's expectations. Vgl [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]
-
And they burn more energy than a conventional bot or voice assistant. Their need for significant computational power, especially when reasoning or interacting with multiple systems, makes them costly to run at scale.
Also costly to run at all. If this is to increase the efficiency of a company or individual it needs to be energy efficient too; otherwise doing it yourself is the more efficient option. AI is bound to the same laws of nature as us. [[AI heeft dezelfde natuurwetten 20190715135542]] Hiding the inefficiency in a data center's footprint and abstracting it into a service fee doesn't ultimately change that dynamic.
-
AI agents offer a leap in potential, but for everyday tasks, they aren’t yet significantly better than bots, assistants, or scripts.
Again it's just a promise, which seems to be the AI mantra at every step.
-
Agents frequently run into issues with multi-step workflows or unexpected scenarios
Multi-step is what they're for, no? Automator can do better than agents at this point, it seems.
-
There was another, arguably more immediate problem: the demo didn’t work. The agent lacked enough information and incorrectly recorded dessert flavors, causing it to auto-populate flavors like vanilla and strawberry in a column, rather than saying it didn’t have that information.
Exactly. All promise, no delivery yet. It may work if the other side is equally automated, but if it's a human or a dumb web form it won't. It also reveals, on the side of the human demonstrator, a big lack of reflection on their own preferences, which the AI should attach to its choices.
-
The service is similar to a Google reservation-making bot called Duplex from 2018. But that bot could only handle the simplest scenarios — it turned out a quarter of its calls were actually made by humans.
Vgl the Philips voice automation for train tickets in the 90s: 'Where do you want to go?' 'It's not for me but for my mom.' 'Destination not found: mom.'
-
Huet gave the agent a budget and some constraints for buying 400 chocolate-covered strawberries and asked it to place an order via a phone call to a fictitious shop.
Note this is only 'nice' from the buyer's perspective. The 'phone call' to the shop still means having a human be subjected to a computer call. It also probably means you don't care about what's being bought: no back story to e.g. a gift. Beware [[Spammy handelings asymmetrie 20201220072726]]: you automate sending 10 million things, but each needs to be deleted by a human, for example.
-
Tech companies have been trying to automate the personal assistant since at least the 1970s, and now, they promise they’re finally getting close.
Indeed. [[AI personal assistants 20201011124147]] https://www.zylstra.org/blog/2020/10/narrow-band-digital-personal-assistants/ We should start with the personal here, wrt automation, not with the AI, to get quicker results: [[small band AI personal assistant]], where the personal limits the range of possible inputs and acceptable outputs for a task, leaving a smaller area for an AI agent to do its thing in, and thus making it more effective.
-
For individuals, AI companies are pitching a new era of productivity where routine tasks are automated, freeing up time for creative and strategic work.
Still, how much of that is already available to automate on-device? 'Routine tasks automated' does not need AI. What are examples?
-
Instead of following a simple, rote set of instructions, they believe agents will be able to interact with environments, learn from feedback, and make decisions without constant human input. They could dynamically manage tasks like making purchases, booking travel, or scheduling meetings, adapting to unforeseen circumstances and interacting with systems that could include humans and other AI tools.
Agents are prompt chains that include fetching info (parameters!) from elsewhere for their function. Vgl [[Standard operating procedures met parameters 20200820202042]]. I wonder how you generalise them, other than 'go buy/book', and if, when you do, they rise above what on-device automation can do. In the end individuals need to be able to set the parameters/boundaries of any agent, to make it their own agent rather than some corporation's agent. What I see at the consumer-facing level is not aiding consumers but helping corporations reduce human interaction with consumers. Agents should increase agency: that is the litmus test.
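A minimal sketch of that 'SOP with parameters' reading of an agent step: a template whose parameters come from boundaries the individual sets, with the model call left as a stub. All names and values below are made up for illustration.

```python
# Sketch of an agent step as an SOP template plus user-set parameters.
# The config values, template and the stubbed call_model() are placeholders.
USER_PARAMS = {            # boundaries the individual sets, not the vendor
    "budget_eur": 50,
    "dietary": "vegetarian",
    "max_travel_minutes": 20,
}

SOP_TEMPLATE = (
    "Find a restaurant for tonight. Budget: {budget_eur} EUR per person. "
    "Dietary constraint: {dietary}. Max travel time: {max_travel_minutes} minutes. "
    "Return three options with a one-line reason each."
)

def call_model(prompt: str) -> str:
    # Stub: in practice this would go to a local or hosted LLM endpoint.
    return f"(model response to: {prompt[:60]}...)"

def run_step(params: dict) -> str:
    # Fill the SOP template with the individual's parameters before any model call.
    prompt = SOP_TEMPLATE.format(**params)
    return call_model(prompt)

print(run_step(USER_PARAMS))
```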
-
-
werd.io werd.io
-
For me, it was always a way to build community at scale.
yup
-
The web sits apart from the rest of technology; to me, it’s inherently more interesting. Silicon Valley’s origins (including the venture capital ecosystem) lie in defense technology. In contrast, the web was created in service of academic learning and mutual discovery, and both built and shared in a spirit of free and open access. Tim Berners-Lee, Robert Cailliau, and CERN did a wonderful thing by building a prototype and setting it free.
Ben Werdmüller makes an interesting distinction. Internet tech, and thus Silicon Valley, originated in defense (ARPA etc.), whereas the web originated in academia in a spirit of open academic debate (CERN). Now ARPA etc. had deep ties with academia too, and it's mostly defense funding at play. Still, there may be something to this distinction. You could also say it's perhaps an Atlantic divide: the web originated at CERN in Europe.
-
-
www.dbreunig.com www.dbreunig.com
-
The author says generation isn't a problem for AI to solve; there's enough 'content' as it is. He posits discovery as a bigger problem to solve. The issue there is that discovery is way more personal and less suited to VC-funded efforts to create a generic tool they can scale from the center. Discovery is not a thing, it's an individual act. It requires local stuff, tuned to my interests, networks etc. Curation is a personal thing, providing intent to discovery. That is also why [[Algemene event discovery is moeilijk 20150926120836]], as [[Event discovery is sociale onderhandeling 20150926120120]]. Still, it's doable, but more agent-like than a central tool.
-
-
-
[[Peter Rukavina]] on how his blog is something others come across and make connections through. I commented that [[Hoe emergence tot stand komt 20040513173612]] comes from longer traces. My PKM system leaves those traces for me, my blog for me and others. My blog, being 22+ years old, is the longest trace I'm leaving publicly for others to connect around.
-
-
tracydurnell.com tracydurnell.com
-
The more friction existed, the higher the stakes felt to me, and the more it seemed like I needed to have something very important and worthwhile to say before I could (should) blog about it.
Friction can forestall writing. I've moved to blogging from inside my Obsidian notes through Micropub, rather than using the WP back-end. This gives me two things: where the back-end pushed me to only write when I had time to finish, the notes allow multiple pieces of writing in parallel, and publishing is one key click.
-
A “shit blog” is a thing of power.
Vgl [[Sturgeons Law most is crap 20190328205135]] https://www.zylstra.org/blog/2019/03/90-of-everything-is-crap-a-source-of-imposter-syndrome/ Provides agency and ratchet too.
-
Tracy Durnell on the personal affordances of keeping a blog. Haven't read it, just glanced, but she usually makes interesting points; marked to read. -[ ] Read this wrt #pkm #15mins
via [[Euan Semple]]
-
-
-
https://web.archive.org/web/20241002103957/https://whyy.org/segments/is-giftedness-a-form-of-neurodivergence/ mentioned in the Mensa HeurekaSIG mailing list. Article from #2024/05/20 on seeing high IQ as neurodivergence, wrt 'gifted burnout'.
-
-
forum.obsidian.md forum.obsidian.md
-
Adding to what clemp wrote: structure or categorisation is earned imo and emergent from working with my material. Any categorisation, indexing, or tagging is also personal imo, meaning no external standard as to how things should be organised applies in any way. Structures are personal tools and can be temporary. Which ones do you need and can you add to over time while you're interacting with your material? That way there's a ratchet effect, but no need to structure everything as a separate task. I start everything I do with a search in my stuff. To the things I find that seem relevant at that time, I add as tags the things I was searching for. If I found a piece about gardening while searching for things about health, I will add that health relation as a tag, or as a link to another note. This lengthens the traces of my work with my material, and longer traces I'm more likely to cross. Over time I will see the stuff emerge that is most relevant to me. The start for me is that when I save something external I always add two things: the reason I wanted to save it, what made me interested, in my own words (might include some tags); and always a link to something already in my notes that I associate it with. For me the switch in mindset is that there is no intrinsic information contained in anything I keep; all meaning is in my own eyes when I use it later. Any structuring reflects that, and I work from the assumption that there are no objective descriptors I must use as categories or tags etc. Rather than organise/structure during note taking, I organise/structure during note using, with my initial remark and internal link as curation to help me on my way.
My comment, in response to someone getting lost in upfront organising of notes and ending up in a 'mess'. Embrace the mess, lengthen the traces to stumble upon, earn structure (structures are a personal tool, not an outside standard or demand). Organise during note usage rather than during note taking, except for curation when saving something external: a remark (sometimes tags) and an internal link.
-
-
-
Spruce Pine is a key resource for high-grade quartz; there are only a few places this is available. It has been hit by a tropical storm in North Carolina and is now temporarily closed. The length of the closure may impact semiconductor production.
-
-
data.europa.eu data.europa.eu
-
The EU data portal has an ERPD data flag for Chapter II of the DGA. For NL, data.overheid.nl has to provide this in cooperation with CBS. One year after it became applicable there are ~1400 datasets, many Czech, but I also see Dutch ones from CBS itself and RIVM. 1045
-[ ] ask #bzkdga and #cbs how this comes about, whether or not via data.overheid.nl? #10mins #prio
-
- Sep 2024
-
-
Google’s chaos makes Apple’s control seem reasonable. I can already hear John and Seb typing: “…and this is why the EU shouldn’t turn Apple into Google.” Let’s be real—Google Play and the App Store don’t compete. They collaborate. Same rates, same model, same unchecked power. Call it a monopoly, call it a duopoly. They share the mobile market without too much crossfire: Apple takes those who can or want to pay, Google takes the rest. Google Play is not an alternative to the App Store. It’s not “Go there if you don’t like Apple.” Google Play is a very lazy, very sloppy carbon copy of the App Store. Their collaboration is not metaphorical. It goes beyond the way their shared control over the mobile app market. Apple collects privacy points, then cashes them in by making Google the default search on iPhone. A lot of that privacy-free Search money flows right back from Google to Apple. 20 Billion USD in 2022. In 2020, “Google’s payments to Apple constituted 17.5% of the iPhone maker’s operating income.” (Bloomberg) And no one really cares, as long as it’s convenient. But as a developer in Europe, we’re glad that the EU does. ↩
iA Writer pointing out that from their perspective the Google and Apple app stores don't compete but divvy things up between them. A 20B USD deal, 17.5% of Apple's operating income, makes it tangible. They say they appreciate the DMA because of it.
-
-
pivot-to-ai.com pivot-to-ai.com
-
Academic publishers are pushing authors to speed up delivering manuscripts and articles (incl suggesting peer review be done in 15d) to meet the quota they promised the AI companies they sold their soul to. Taylor&Francis/Routledge 75M USD/yr, Wiley 44M USD. No opt-outs etc. What if you ask those #algogens if this is a good idea?
-
-
deadsuperhero.com deadsuperhero.com
-
We have to break this illusion that the organization is anything other than hired people sponsored by corporations working on some common shared goals together
well yes that is standardisation. If you exclude intended users from its creation it won't be one. A standard gets created by the intended field of users, who commit to adopting it once created. It's not idealism or altruism, it's industry.
-
So, what’s the problem?
This entire piece gave me nothing to understand 'what's the problem' other than a personal beef with a key figure, a dislike of organisation, and not understanding standardisation as an industry effort. So I see the author's problems, but still don't know anything about the Social Web Foundation, other than that many people seem to feel left out.
-
Why is he like this?
This entire thing indeed seems to be about the author's personal perspective on Evan Prodromou
-
My growing concern is over what place the community will have in the governance process, or any decision-making process. As the echelons of power consolidate into a handful of decision-makers, as the emphasis focuses more on making a profit, as the gap widens between “leadership” and the poor sods hanging around at the bottom, the mutual aspect of community welfare gives way towards a dynamic very reminiscent of what we were all trying to get away from at one point or another: a fucking mall on the Internet, where people used to hang out.
organising is suspect by def then?
-
I understand the argument that “having too many standards can hinder innovation and hurt collaborative efforts”, and while I don’t completely agree with it, I can see some validity in how the case can be made. However, telling people they’re wrong because their standard didn’t get a seal of approval
a standard is only a standard if it is adopted by those in the user group. Creating your 'own' by definition isn't a standard, at most it's a method or protocol.
-
The creation of The Social Web Foundation deftly and carefully subverts that context, in such a way that the term “Social Web” only equals “Fediverse”. It even goes as far as wringing out the Fediverse’s own historical context as a multiprotocol polyglot network, by equating the Fediverse to just the ActivityPub
The Social Web Foundation, by naming itself thus, reduces the social web to the fediverse and then to AP only.
-
The term “Social Web” has been used on and off for a little while now, most prominently being offered as a simpler, cleaner name than “Fediverse”. Unfortunately, the term is a bit vague, in that it simply puts two words in a blender and mixes them together. During a discussion prior to FediForum March 2024, I proposed an alternate name: “Womp Triangle”, because it holds just about as much meaning and insight
And 'Fediverse' is what? Some trekkie couldn't decide between universe and The Federation. Idk what the problem is yet w the foundation, but this is the lamest critique thinkable.
-
-
www.theguardian.com www.theguardian.com
-
Typhoid of an antibiotic-resistant variety has taken hold in Pakistan. International action is required in the face of superbugs.
-
-
www.cnn.com www.cnn.com
-
23andMe's mail-order DNA profiling is slowly collapsing. The CEO wants to take it private, and the board has resigned in protest. This is one of the corps that went public using a SPAC to capitalise at the height of the hype. Founded 2006, SPAC in 2021. Revenue is down, and money should run out soon. Seems there's no business model on top of the one-time purchase of a DNA test. Key asset obv is the data, so I think we can wait for it to be sold to whoever bids most.
-
-
wandering.shop wandering.shop
-
Charlie Stross provides a personal anecdotal data point to [[Changes in memory and cognition during the SARS-CoV-2 human challenge study]]: revising his writing after covid showed him cognitive issues he didn't realise as he was writing. Comments/responses add to it. Personally I use a type of puzzle and a timer to gauge my concentration, and have done so for several years. Since my last Covid and in my burnout I now at times don't finish or make mistakes in a puzzle, where I would consistently be among the 5% fastest solvers before.
-
-
nlnet.nl nlnet.nl
-
Maemo Leste is intended to be a mobile OS independent and completely separate from Android and iOS. e/OS in comparison I think is more a degoogled Android. Its origins are in a Nokia project called Maemo (never heard of it).
-
-
www.baldurbjarnason.com www.baldurbjarnason.com
-
Glorious rant by Baldur Bjarnason, but not much surprisal here. As with other stuff, be it agile, scrum, getting things done, or any of the pitched perfect ways to make notes: whenever the process becomes the thing, rather than a tool in the hand of a knowledge artisan, it becomes useless and boring.
It's about output, not in units or volume, but in quality. Needing to know why you are making these notes, and weaving your network of meaning.
The people who do things with their system usually don't talk about it much. I've done it on occasion and am happy to share and show how/why I do things, but never with the intention to convince another to do the same or similar.
-
-
www.trend-mill.com www.trend-mill.com
-
Stephen Moore rants about the internet he does not like. Incl. the deterioration of search, the adtech, growth hacking for engagement etc. Now no longer even human-created but generated slop from bots. Calls it an addictive dopamine machine without joy.
-
-
donaldclarkplanb.blogspot.com donaldclarkplanb.blogspot.com
-
Has ChatGPTo1 just become a 'Critical Thinker'?
What was that old news editor adage again? Never use a question mark in the title because it signals the answer is 'No'. (If it were demonstrably yes, the title would be affirmative. Iow a question means you're hedging and nevertheless choose the uncertain sensational for the eyeballs.)
-
-
www.thelancet.com www.thelancet.com
-
(Even mild) covid cases are associated with persistent cognitive damage. Empirical data from 2021/2022. Largest diff between measured groups wrt memory and executive functions. No volunteers self-reported cognitive symptoms. Iow covid is associated with cognitive damage, but you won't notice yourself.
Trender et al 2024.
-
-
wordpress.org wordpress.org
-
https://web.archive.org/web/20240923064617/https://wordpress.org/news/2024/09/wp-engine/
Matt Mullenweg calls WP Engine 'not WordPress' bc it disables key features to save hosting costs. Calls it a VC funded cancer putting the century goal of WP at risk.
-
-
www.theguardian.com www.theguardian.com
-
Kara-Murza’s grasp of history underpins his certainty that Putin’s regime will collapse – quickly and without warning. “That’s how things happen in Russia. Both the Romanov empire in the early 20th century, and the Soviet regime at the end of the 20th century collapsed in three days. That’s not a metaphor, it was literally three days in both cases.” He believes passionately that the best chance of a free and democratic Russia and peace in Europe rests on Russia’s defeat in Ukraine.
Kara-Murza's take on Russia is that collapse will be swift, much like twice before, 1917 and 1991.
-
-
www.rachelwu.com www.rachelwu.comLWtL V64
-
they risk experiencing delays in learning or learning something irrelevant, wasting time and energy
Again linear and productivity/effectiveness overtones. 'Learning something irrelevant' as 'wasting time and energy'? Ugh. Curiosity and interestingness/surprisal can be directed with intention without being goal-oriented, which seems to be the premise here.
-
Learning what to learn entails understanding what is relevant versus irrelevant
#openvraag I wonder if Wu puts relevance in the eye of the learner or not. Vgl Feynman's [[Twaalf favoriete vraagstukken 20201006163045]] vs 'society's' relevance.
-
Once a learner figures out what to learn, then the remaining task is to learn the information, which can still be a challenge depending on the complexity of the information
This is a highly linear sketch, figure out what to learn, gather information, done. In complexity figuring out what to learn does not then give you a clear path to the 'right' information, as it doesn't exist in that form. You iterate your way forward based on pattern recog. Fractals of figuring out what to learn repeatedly along the way
-
http://www.rachelwu.com/Wu_2019.pdf
proposes ...adaptation is relevant for all age groups because the environment is dynamic, suggesting that learning what to learn is a problem relevant across the lifespan
reviews new research demonstrating the importance and ways of learning what to learn across the lifespan, from objects to real-world skills. Published 2018/2019.
-
-
github.com github.com
-
I don't think anyone has reliable information about post-2021 language usage by humans. The open Web (via OSCAR) was one of wordfreq's data sources. Now the Web at large is full of slop generated by large language models, written by no one to communicate nothing. Including this slop in the data skews the word frequencies. Sure, there was spam in the wordfreq data sources, but it was manageable and often identifiable. Large language models generate text that masquerades as real language with intention behind it, even though there is none, and their output crops up everywhere.
Robyn Speer will no longer update wordfreq. States that n:: there is no reliable post-2021 language usage data! Wordfreq was using open web sources, but these are getting polluted by #algogens output.
-
The field I know as "natural language processing" is hard to find these days. It's all being devoured by generative AI. Other techniques still exist but generative AI sucks up all the air in the room and gets all the money. It's rare to see NLP research that doesn't have a dependency on closed data controlled by OpenAI and Google
Robyn Speer says that in their view natural language processing as a field has been taken over by #algogens. Most NLP research now depends on closed data from the #algogens providers.
-
Reddit also stopped providing public data archives, and now they sell their archives at a price that only OpenAI will pay.
Reddit was another key data source for wordfreq, but they too no longer provide public archives, and sell them at prices only the likes of the #algogens companies will pay.
-
Twitter is gone anyway, its public APIs have shut down
Twitter was a key resource for wordfreq for colloquial use of words. No longer, as the API was shut down, and the population of X is skewed towards hatemongering in a way that makes it lose utility as a data source.
-
As one example, Philip Shapira reports that ChatGPT (OpenAI's popular brand of generative language model circa 2024) is obsessed with the word "delve" in a way that people never have been, and caused its overall frequency to increase by an order of magnitude.
Example of how #algogens slop pollutes corpus data: ChatGPT uses the word 'delve' a lot, an order of magnitude above human usage. #openvraag Is this to do with the 'need' for #algogens to sound more human by switching words around (dial down the randomness, and it will give the same stuff every time, but will stand out immediately as computer generated too)?
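For reference, a small sketch of how one could check this kind of thing with the wordfreq package itself (the numbers depend on the frozen data it ships with, and the comparison words are my own choice, so treat the output as illustrative):

```python
# Compare how common 'delve' is against some everyday words, using wordfreq.
from wordfreq import word_frequency, zipf_frequency

for word in ["delve", "look", "explore"]:
    freq = word_frequency(word, "en")   # fraction of running English text that is this word
    zipf = zipf_frequency(word, "en")   # log scale, roughly 1 (rare) to 7 (very common)
    print(f"{word:10s} frequency={freq:.8f} zipf={zipf:.2f}")
```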
-
-
www.404media.co www.404media.co
-
paywalled article.
Wordfreq is shutting down because LLM output on the web is polluting its data to the point of uselessness. It would track longitudinally the change in use of words across a variety of languages. Vgl human centipede epistemology in [[Talk The Expanding Dark Forest and Generative AI]] by [[Maggie Appleton]]
-
-
-
This creates two representations of the same file: one the drawing presented visually, one the text content outside the visual. Zsolt calls it the ‘flip side’ of a drawing, being a note accompanying the drawing. I see it more like two different views on the same thing. I have a hotkey (cmd arrow down) enabled to flip a note between both views. Putting both views next to each other, and working in both at the same time, allows me a seamless mode of working, switching between visual material and text writing
Obsidian is a viewer, each tab essentially a diff one. Putting the two representations each in their own tab makes it possible to work both in the text and visual version of the same file. A combination of the 'flip' that Zsolt's plugin allows plus the tabs in Obsidian.
-
nn: visual and text seamlessly side by side makes the visual easier to adopt.
-
-
www.myrasecurity.com www.myrasecurity.com
-
https://web.archive.org/web/20240919071804/https://www.myrasecurity.com/
Myra CDN, based in Germany. CDNs need to temp decrypt https traffic, potentially creating a gap in GDPR (and DSA?) compliance. Unless the CDN provider can be shown to be part of the compliance chain. Thus select one that ensures this / is based inside the EU.
-
-
techpolicy.press techpolicy.press
-
each of these instances will need to comply with a set of minimum obligations for intermediary and hosting services, including having a single point of contact and legal representative, providing clear terms and conditions, publishing bi-annual transparency reports, having a notice and action mechanism and communicating information about removals or restrictions to both notice and content providers
Revisiting this after 2yrs, now as board member of mastodon.nl, need to read relevant DSA sections again, and think about if/how this applies. The list here as such is almost completely covered by default, except for the reports, which will come as we're heading towards ANBI status anyway.
-[ ] read the DSA through a mastodon.nl lens, regarding small platforms #activityclub #30mins
-
-
matrix.org matrix.org
-
it feels we’re creeping ever closer to that goal of providing the missing communication layer for the open Web. The European Union’s Digital Markets Act (DMA) is a huge step in that direction - regulation that mandates that if the large centralised messaging providers are to operate in the EU, they must interoperate. We’ve been busy working away to make this a reality, including participating in the IETF for the first time as part of the MIMI working group - demonstrating concretely how (for instance) Android Messages could natively speak Matrix in order to interoperate with other services, while preserving end-to-end encryption.
Matrix seeing DMA as supportive towards their goal of open web's communication layer. Actively demo'ng Android interoperability while preserving E2EE, and participating in IETF / MIMI ( https://datatracker.ietf.org/group/mimi/about/ )
-
-
andymatuschak.org andymatuschak.org
-
but I think much of it is me slowly—still—becoming less reactive to the discomfort of sitting in stillness and confusion.
The ability to do deep work is related to creating fewer distractions in response to the discomfort during it. To keep at it, while a more gratifying small task is just a click away.
-
how to cultivate deep, stable concentration in the face of complex, ill-structured creative problems?” I now have several years of data on self-reported focus and energy levels, and it’s comforting to see that this does get easier with practice.
It is easy to pretend interruption is the work. Matuschak describes focused work as a trained thing he improved upon.
-
When the insight arrived, I didn’t notice the connection to the trail I’d laid on the preceding pages. My experience was of making no progress, and then, finally, making some. In hindsight, I can see that I had been making plenty of progress over those weeks; I just couldn’t tell at the time. I suspect this is pretty common in my work. So, “I feel like I’m not making progress” is probably not a good local heuristic for guiding my work. Alternately, the lesson might be that I need to become more sensitive to the many subtler flavors of progress in this kind of work
This rings true. The friction, the struggle is the work, at least when it comes to my knowledge work. Interesting is that when the jump happens I tend to phrase it as an escape, a way of fleeing forward. When I got stuck in a major research project in 2020, the key insight that unlocked it was a gasp of desperation more than a bolt of lightning. Colleagues immediately told me that was the key, but to me it felt like using a cheat code. Now in hindsight, I think it was the best possible outcome, but that original sense of escape remains.
-
But I’ve also noticed that when I focus my work on particular people in particular contexts, that more immediate emotional connection sometimes overpowers the day-to-day frustration that comes with being lost in the woods. For several long stretches this year, I found the work really gratifying, both in the moment, and retrospectively over the long term.
Matuschak found that focusing his work on specific people in a specific context, entering into a deeper emotional connection, provides meaning both in the now and over the long term. This certainly applies to my work #hazp08, and perhaps all my projects that stand out in hindsight either have that, or are singular efforts where the doing held such a link to myself.
-
By contrast, when I’m doing work that I find gratifying and meaningful over the long term, the day-to-day experience is usually frustrating and unpleasant. The work is gratifying because it’s deep and personal and unique. Unfortunately, in my projects, those same attributes also mean that progress tends to be inconsistent and hard to discern; it’s rarely clear what to do next; there’s rarely anyone I can ask for help; I usually feel incapable.
I find the concepts behind my work meaningful, and enjoyable, but usually not the work itself. The most enjoyable work usually is disconnected from anything else.
-
Throughout my career, I’ve struggled with a paradox in the feeling of my work. When I’ve found my work quite gratifying in the moment, day-to-day, I’ve found it hollow and unsatisfying retrospectively, over the long term. For example, when I was working at Apple, there was so much energy; I was surrounded by brilliant people; I felt very competent, it was clear what to do next; it was easy to see my progress each day. That all felt great. But then, looking back on my work at the end of each year, I felt deeply dissatisfied: I wasn’t making a personal creative contribution. If someone else had done the projects I’d done, the results would have been different, but not in a way that mattered. The work wasn’t reflective of ideas or values that mattered to me. I felt numbed, creatively and intellectually.
[[Andy Matischak]] on the value and quality of his work. Over the long haul, he found his work (at Apple) meaningless, even if it felt good at the time. The statement that if someone else had done the work the results would have been similar chimes with me. My work may be seen by others as meaningful in the moment, but I only see that it doesn't matter in the long run. A million others for any of us. M writes he felt competent; I never had any answer to the question what I'm good at. I just get total internal silence in response, and always have.
-
-
www.matthewsiu.com www.matthewsiu.com
-
Answers are often hiding in our discursive notebooks, buried over time in reams of the mundane
vgl [[Sturgeons Law most is crap 20190328205135]]
-
One key motivation for Latticework was how wonderful it feels to stumble upon a past moment of shining clarity, to point and revel. We want to be able to carry those moments with us, to see them all at once when we’re lost, and to use them as landmarks as we navigate our messy notebooks. We’ve used Latticework to do this in small ways so far, and we’re excited to see how our upcoming projects might feel different with its extra affordances.
This paragraph reads like making commonplacing navigable in a new way. Also turns 'snippets' into potential entry points without them being separate notes, and into pivots like tags. Note the clear spatial overtones (landmarks, being lost, navigate, ways, stumble upon, point).
-
We had a strong personal motivation for this project: we often find ourselves stuck in our own creative work. Latticework’s links might make you think of citations and primary sources—tools for finding the truth in a rigorous research process. But our work on Latticework was mostly driven by the problems of getting emotionally stuck, of feeling disconnected from our framing of the project or our work on it.
Again the important distinction, here in the context of which itch Latticework scratches, between 'evidence' and 'kindling' perspectives. The latter is an emotional thing, where knowledge is not an external thing, but an internal network of meaning.
-
Our test users were largely quite enthusiastic, but our sessions with them usually lasted less than an hour. Future work should be informed by extended use in demanding situations.
I recognise this. We spent an hour in Feb, which was fun, and useful because it was a real effort on actual notes and for an actual purpose (for me a workshop design). Then afterwards I didn't use it much due to tech hurdles, so I didn't get to experience ongoing value. Reinstalled it now because of this article (which [[Maarten den Braber]] pointed me to).
-
Adjustable snippet ranges. After working with a snippet link, some test users found that they wanted to shift its endpoints, to include more context or to tighten its focus. Latticework doesn’t currently allow this, but one could create an interaction which modified its current snippet links accordingly
Adjustable snippet ranges, letting your emergent insight impact your original highlighting/annotation, sounds like a very interesting idea. Not because you're pinpointing the info at the source more accurately, but because the emergent purpose of your sensemaking reflects back on your source material. It shifts around the exact point where your surprisal originates.
-
Giving a cluster a name can impose formality prematurely, adding friction to the process.
Naming clusters can be incorporated into sensemaking efforts though, when not used as result but as intermediate step. As in [[2 step archetype extraction 20121130152904]]
-
we rarely know the shape of our categories in advance. Often we’re just reacting: “this seems important”; “this is related to that”; “this makes me think of…”; and so on.
exactly. Tags are emergent structure, and are not per se to describe the information stated nor to be used as a taxonomy. Vgl [[%On Tagging 20200818120917]] as associative emergence, as search/find history, as pivots in an exploratory path etc.
-
Text as a medium for sensemaking. In QDA tools, the “working document” where you make sense of your excerpts might be a spreadsheet, or a database query, or a whiteboard. By contrast, Latticework emphasizes a textual canvas, where freeform notes and snippets can mix arbitrarily. That mainly comes from a difference in the role of the snippets: we view them less as “evidence” or “data points”, and more as “kindling” which might be consumed and discarded on the way to insight. In the latter setting, when even the problem being solved is undefined, the only way forward is often to write in circles, until some sense starts to emerge. This writing may weave chaotically between new observations and snippets from old documents. Some QDA tools, like Dovetail, include freeform text editors, but their affordances emphasize communication to stakeholders, rather than sensemaking.
This is a good way to make the distinction with qualitative research tools, incl. those where the narrators of qualitative bits do their own signification which then serves as filters. Tagging like those two types serves a different purpose, spotting patterns 'out there' rather than provoking thinking 'in here'. Both useful and not unrelated, but different activities. The 'evidence' vs 'kindling' metaphors make sense to me. Diff points of application.
-
Discontinuity with source material. Most digital canvases, like Muse and Kinopio, aren’t designed for the workflow we’ve been discussing. They’re not tightly integrated with a reading environment. If you just paste plaintext snippets into them, the resulting cards aren’t linked to the source document
This is where I derive value from Excalidraw as an Obsidian plugin: text and image are joined in the same note. And it can have hyperlinks to other notes and drawings, as well as embeds.
The discontinuity between the visual and the textual has been a main issue for me for decades.
-
For now, even on the Apple Vision Pro, display resolutions are currently too low to display text at the physical size of a sticky note
this is problematic even if you ignore the obvious hurdle of having to wear overheavy ski goggles.
-
Latticework’s portal-based marginalia allow commentary to be created and viewed from either “side” of a snippet link, interchangeably. Snippet links can be serialized to standard URLs in standard Markdown plaintext. (Modern systems only support links to blocks, not arbitrary text ranges. For block links, Obsidian uses non-standard anchors; others use proprietary database formats.) Snippet links can be quickly created (either from source or at the destination) using key commands. Transcluded snippets can be collapsed to increase density, and re-expanded when needed. Pane-based navigation allows users to preview and visit links while maintaining a consistent view of the linking pane.
5 reasons L's bidirectionality is different to pkm tools / Xanadu. The first two are most key imo: one, links can be initiated from both sides and result in the same thing (e.g. diff from [[Webmention 20200926203019]]), and two, the links are standard URLs (in standard Markdown), whereas Roam / Notion abstract them away in a db, moving outside their role as viewer, while Obsidian maintains its viewer role but adds things to the notes that aren't obviously interpretable outside of it.
I'd add that the linked snippets are a different unit of linking from what Obs et al support. Much closer to the granularity where the knowledge work is done. Surprisal I rarely find in a paragraph, mostly in part of a sentence. Questions latch onto a single word sometimes.
-
Snippet links are a kind of hypertext, and embedded snippet links are a kind of transclusion. Originally proposed by Ted Nelson as part of his Xanadu system, transclusions present part of one document within another, while maintaining bi-directional links for navigation and orientation
Bidirectionality here explicitly tied to Ted Nelson's Xanadu. Calls block transclusion in pkm tools transclusion 'primitives', which sounds like the right characterisation. Says Latticework differs from both.
-
Latticework’s design evolved through many iterations, driven in large part by user interview and observation sessions. Through our personal networks and the Obsidian forum, we recruited experienced Obsidian users who needed to distill some insight from a large collection of unstructured notes. We’d like to discuss some observations from those sessions, and from our own use of the system.
I did #2024/02/01 https://www.zylstra.org/blog/2024/02/matthew-and-andy-watched-me-test-the-obsidian-reference-plugin/ where my own similar observations are captured:
- shifting from highlighting to linking as emergence occurs
- structure is earned over time
- keeping the copy commands straight was hard (I kept forgetting too)
- preview and multiple panes is what I did too, but it seemed to clash with the plugin then
- selection of snippets is not blocks, but phrases inside paragraphs and sentences, not whole blocks (and this is why I hardly use block links inside my notes)
- after paraphrasing, collapsing snippets keeps overview possible, or lets you collect examples and treat the list as tasks, collapsing when done
-
We’ve been careful to implement Latticework’s features in the same spirit. Snippet links are stored as ordinary links using standard W3C Selector URL fragments to specify an arbitrary text range
Using Latticework does not break the 'Obsidian is only a viewer' principle, cf [[3 Distributed Eigenschappen 20180703150724]]. It adds markdown-style links according to the W3C Selector URL standard. Nice, because it maintains readable plain text files.
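My rough mental model of what such a stored snippet link could look like in plain Markdown, using the W3C Selector fragment syntax. The note name, quoted text and exact serialisation here are made up for illustration; Latticework's actual output may differ:

```md
A snippet link in the working note, pointing at an arbitrary text range in the source:
[kindling, not evidence](Source%20Note.md#selector(type=TextQuoteSelector,exact=kindling%20which%20might%20be%20consumed))

Obsidian's own block link, for contrast, can only target a whole block via a non-standard anchor:
[[Source Note#^a1b2c3]]
```

Because the link stays an ordinary URL fragment in plain Markdown, the note remains readable and portable outside Obsidian, which is the 'viewer only' point.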
-
Alongside disorientation, working memory overload is one of the biggest problems when distilling these large unstructured documents. We believe that’s why people in these situations so often try to collect everything important into once place: that way, everything can be viewed at once, and it’s possible to notice connections and themes without relying on working memory. Unfortunately, as snippets accumulate, the working document itself can become quite long—leaving you stuck scrolling around, trying to remember where everything is.
The processing document can get as unwieldy as the source material for which it is a solution. Latticework lets you collapse stuff therefore.
-
Latticework uses a similar pane-aware interaction
This pane awareness is what seemed to clash with some other plugin I run, at least it did in Feb. I notice their code repo still warns about clashes with other plugins and suggests running it in a separate vault with no other plugins.
-
While you’re gathering these snippets, you may also want to capture observations about them. Each workflow has a natural way to handle this. If you’re reading a source document with a highlighter, you can write comments in the margins. If you’re copying snippets into a working document, you can type observations alongside them. As with highlighting and copying, Latticework makes these operations interchangeable.
adding small observations to either the foraging or sensemaking side of things is reflected in the other. Another bi-directionality. Nifty, also because this is exactly what happened to me when I tried out an early version with Matt/Andy watching. Working with material leads to new thoughts/observations which I threw in for later follow-up/expansion. It allows me to capture my conversation with a text both as annotation at source, and as refinement in the working doc.
-
You’ll get the same result no matter which direction you go—a highlight in the source document and a snippet link in the working document. Conceptually, highlighting doesn’t actually modify the source document. Highlights are a dynamic style applied to all the snippets linked in your working document. So if you delete a snippet link, the corresponding highlight will disappear, too
Bi-directionality is a key feature in Latticework, which is great. At the same time, on the source end it is also ephemeral. If you remove a linked snippet from the sensemaking document, the highlight, which is just a styling element, gets removed from the source. The source is not modified to produce the highlight. (Any permanent link to the source should be made consciously, which is right.) Bidirectionality, beyond mere linking, seems to me a key affordance in #pkm, something that does not currently exist in my annotation / reading / processing flows. #openvraag: where in my workflows would bidirectional trace leaving be useful?
-
Latticework’s main goal, then, is to enable fluid movement between these foraging and sensemaking stances. By extension, that means fluid movement between acting on source documents (which emphasize foraging) and on your working document (which emphasizes sensemaking)
Latticework sees foraging as tied to source, and sensemaking as tied to working document, and aims to make the movement between the two fluid, so you can shift focus between the two docs and thus the two activities. It leaves traces of your work in both (vgl [[Hoe emergence tot stand komt 20040513173612]] for the role of such longer traces to more easily stumble across in emergence.)
-
this process isn’t linear. It’s often convenient to do a bit of preliminary sensemaking in the midst of foraging; conversely, observations you uncover during sensemaking will often lead to another round of foraging, and so on, in a loop.
Making sense of material is not a linear process of ever more refinement, as e.g. Tiago Forte suggests with [[Progressive summarising 20200922080651]]. Siu/Matischak embrace the non-linear, recognising you go from 'foraging' (their term, great, K-garden style) to annotating, rearranging, noting an idea, back to foraging, back to rearranging etc. This is a key thing imo.
-
Latticework is built to support the workflow we described in the introduction
The video demo also shows adding comments to a snippet either in the source or in the sensemaking document. I wonder if they put that in the released plugin (they say it doesn't do everything they did in the research project).
-
when you’re trying to make sense of a confusing situation, you need to get everything into one place, where you can see, rearrange, and elaborate the pieces into a new whole.
Intended effect of the Latticework plugin (great name btw): it allows you to fetch snippets in one note and paste them unaltered into another, with a link added in both the original and the copy. In the new note you can rearrange, paraphrase etc., purposefully add a link to source material, and otherwise do away with the snippet over time. Allows a better overview of what comes from where, preventing getting lost in the source material, which often happens.
As annotations already flow into my notes this helps reinforce their use.
-
This is the 'final' result of [[Matthew Siu]] and [[Andy Matischak]]'s research into an Obsidian plugin for making sense of several sources in one place, emerging an outline. I tested an earlier beta on #2024/02/1 [[Andy Matthew Obsidian plugin]] https://www.zylstra.org/blog/2024/02/matthew-and-andy-watched-me-test-the-obsidian-reference-plugin/ I stopped using it after a few weeks due to clashes with other plugins I could not pin down. At first glance this is a good description of the process and intended purpose. Re-installed this version of the plugin.
Tags
- netwerkleren
- navigation
- discontinuity
- marginalia
- notemaking
- markdown
- visual_vs_text
- impostorsyndrom
- xanadu
- snippets
- infostrats
- conceptmaps
- pkm
- latticework
- vr
- obsidian
- annotation
- ar
- clustering
- emergence
- knowledge
- transclusion
- plugins
- tagging
- workflow
- granularity
- sensemaking
- bidirectionality
- longer_traces
- information_granularity
-
-
-
iBestuur brings together what the cabinet plans (K-plannen) say about digitalisation/data.
-
-
www.biblonia.com www.biblonia.com
-
In an age where "corporate" evokes images of towering glass buildings and faceless multinational conglomerates, it's easy to forget that the roots of the word lie in something far more tangible and human: the body. In the medieval period, the idea of a corporation wasn't about shareholder value or quarterly profits; it was about flesh and blood, a community bound together as a single "body"—a corpus.
Via [[Lee Bryant]]
corporation from corpus. The medieval roots of the corporation were people brought together in a single purpose/economic entity: guilds, cities. Based on Roman law roots, where a corpus could have legal personhood status. Overtones of collective identity, governance. The pointer suggests a difference with how we see corporations, as does the first paragraph here, but the piece itself mostly sees parallels actually. Note that Roman/medieval corpora were about property and (royal) privileges. That is a difference e.g. with the US, where corporates seek both to be a legal person (wrt politics/finance) and to keep distance from the accountability a person would have (pollution, externalising negative impacts). I treat a legal entity also as a trade: it bestows certain protections and privileges on me as entrepreneur, but also certain conditions and obligations (public transparency, financial reporting etc.)
A contrast with ME corpus is seeing [[Corporations as Slow AI 20180201210258]] (anonymous processes, mindlessly wandering to a financial goal)
-