3,278 Matching Annotations
  1. Jan 2026
    1. Venture capitalists are chasing younger and younger founders: the median age of the latest Y Combinator cohort is only 24, down from 30 just three years ago.

      Interesting metric. Is it because of the chasing (capital, eagerness) or because of the founders (ideas, surfing a new tech wave)? AI people are younger, I suppose.

    2. People like to make fun of San Francisco for not drinking; well, that works pretty well for me. I enjoy board games and appreciate that it’s easier to find other players. I like SF house parties, where people take off their shoes at the entrance and enter a space in which speech can be heard over music, which feels so much more civilized than descending into a loud bar in New York. It’s easy to fall into a nerdy conversation almost immediately with someone young and earnest. The Bay Area has converged on Asian-American modes of socializing (though it lacks the emphasis on food). I find it charming that a San Francisco home that is poorly furnished and strewn with pizza boxes could be owned by a billionaire who can’t get around to setting up a bed for his mattress.

      Things to appreciate, yes, but it also sounds either like the wonder years of D&D in the basement stretched out by decades, or like a selective neurotype gathering. I think the SV lingo for this is 'this doesn't scale': an army of Zuckerbergs who don't do emotion.

    3. I’m struck that some east coast folks insist to me that driverless cars can’t work and won’t be accepted, even as these vehicles populate the streets of the Bay Area.

      Well, they indeed can't and won't in general because of the underlying premises. [[Why False Dilemmas Must Be Killed to Program Self-driving Cars 20151026213310]]

    4. Today, AI dictates everything in San Francisco while the tech scene plays a much larger political role in the United States. I can’t get over how strange it all feels. In the midst of California’s natural beauty, nerds are trying to build God in a Box; meanwhile, Peter Thiel hovers in the background presenting lectures on the nature of the Antichrist. This eldritch setting feels more appropriate for a Gothic horror novel than for real life.

      Author thinks Silicon Valley has taken a turn to the Gothic. What a description.

    1. Cursor is an AI-assisted code editor. It connects only to US-based models (OpenAI, Anthropic, Google, xAI), and your pricing tier is paid out piecemeal to whichever model you're using.

      Both an editor, a CLI environment, and integrations with things like Slack and GitHub. This seems a building block for US-centered agentic-AI silo forming for dev teams.

  2. zhengdongwang.com
    1. Will become problematic at larger scale. By and large, two aids suffice for me: 1) an alphabetical keyword index; 2) notes on the literature slips, in case the problem comes up via the name.

      For bigger collections, finding notes becomes harder. Luhmann thought two tools generally sufficient: 1) an alphabetical index of terms, 2) finding note refs on literature notes if you start out from the name of a literature source. Digitally you have full text search ofc too. Not mentioned here, but in all cases I'd assume a 'walk' through the notes, following the connections, will always ensue. The point I think is never finding 'a note' or 'the note' you have in mind, but 'finding notes' that are of use now. The title of the section also says it generally and in plural: 'the finding of notes'.

    2. Besides: include references to not-yet-read literature on specific topics directly in the Zettelkasten itself, in the appropriate place. Drawn from footnotes in the literature you have read, or from reviews, publishers' catalogues, etc.

      Suggests adding references to unread literature directly in the ZK notes themselves (so not as a separate note in the bibliographic section). (I keep them in my bibliography section if they sound interesting enough to acquire at some point, clearly marked ofc.)

    3. For books and journal articles you have had in your hands and worked through, a dedicated section in the Zettelkasten is recommended, at the front or the back, with slips holding bibliographic details. One slip per book. Important: restrict yourself to details you have verified yourself. This enables abbreviated citation on the slips.

      Keep a separate section as a book index, for books you have 'held in your hands and worked on', with bibliographic notes, one note per book. Cautions to only include bibliographic info you have verified yourself (presumably meant here: do not copy bibliographic references from sources, but follow the ref to the source to verify even the basic bibliographic info).

    4. Lecture notes, notes on conversations, and ideas arising on all kinds of occasions can likewise be worked over into the Zettelkasten.

      Anything can be processed into the notes: reading, lectures, conversations, thoughts you had.

    5. Critical summarizing is at the same time your own thinking work, at the same time a learning process, at the same time a honing of your own language.

      Critical summarizing is three things at the same time: your own thinking work, a learning process, and a way to hone your own language.

    6. Important: attempt your own formulations. That requires a strict separation between your own ideas and those of others.

      Try your own paraphrases; this always requires demarcating clearly between your own and other people's thinking.

    7. Nevertheless, a certain rough schematization is important at the start. It eases the finding of "regions". From where? Reading lists, textbooks. Again: this is not a core problem.

      At the start of a ZK a first rough scheme of topics might be useful, but it is not a core problem to solve. It just helps in finding 'neighbourhoods' in your notes. Vgl [[Warning, Tacit Assumptions May Derail PKM Conversations]] wrt upfront categories or not.

    8. One must distinguish between topic-specific card collections and permanent arrangements for a course of study or a scholarly life's work.

      Interesting, as I see his ZK I as the more generic one and his ZK II as theme-specific, yet ZK II is his life's work and the more permanent set-up.

    1. The ship was sailing under the flag of Saint Vincent and the Grenadines, with a crew which includes citizens of Russia, Georgia, Azerbaijan and Kazakhstan.

      Ship sailing under the convenience flag of St Vincent & the Grenadines; crew all from CIS countries, even landlocked ones.

    2. Finnish police have formally arrested two crew members of the Fitburg, a cargo ship suspected of breaking a data cable between Finland and Estonia on New Year’s Eve. Two other crew members have been placed under travel bans.

      The ship involved was the 'Fitburg' cargo ship.

    1. My excitement for local LLMs was very much rekindled. The problem is that the big cloud models got better too—including those open weight models that, while freely available, were far too large (100B+) to run on my laptop.

      Cloud models still got much better than local models. Coding agents made a huge difference; with them Claude Code becomes very useful.

    2. This turns out to be the big unlock: the latest coding agents against the ~November 2025 frontier models are remarkably effective if you can give them an existing test suite to work against. I call these conformance suites and I’ve started deliberately looking out for them—so far I’ve had success with the html5lib tests, the MicroQuickJS test suite and a not-yet-released project against the comprehensive WebAssembly spec/test collection. If you’re introducing a new protocol or even a new programming language to the world in 2026 I strongly recommend including a language-agnostic conformance suite as part of your project. I’ve seen plenty of hand-wringing that the need to be included in LLM training data means new technologies will struggle to gain adoption. My hope is that the conformance suite approach can help mitigate that problem and make it easier for new ideas of that shape to gain traction.

      Conformance suites: a potential way to introduce new tech and see it adopted despite it, by definition, not being in LLM training data.
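      The conformance-suite idea can be sketched in a few lines: the suite is pure data that any implementation, human- or agent-written, can be checked against, and the runner is trivial. Everything below (the `normalize` function and its cases) is a hypothetical illustration, not taken from the post.

```python
import json

# A hypothetical, minimal "conformance suite": language-agnostic test cases
# stored as plain data (JSON), so any implementation can be run against the
# same expectations -- including one iterated on by a coding agent.
CASES = json.loads("""
[
  {"name": "empty input",  "input": "",      "expected": ""},
  {"name": "lowercasing",  "input": "AbC",   "expected": "abc"},
  {"name": "trim spaces",  "input": "  x  ", "expected": "x"}
]
""")

def normalize(text: str) -> str:
    """Toy implementation under test."""
    return text.strip().lower()

def run_suite(impl) -> list:
    """Run every case against `impl`; return the names of failing cases."""
    return [c["name"] for c in CASES if impl(c["input"]) != c["expected"]]

print(run_suite(normalize))  # an empty list means the implementation conforms
```

      An agent's loop then reduces to: run the suite, read the failing case names, patch the implementation, repeat until the list is empty.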

    3. The year of programming on my phone # I wrote significantly more code on my phone this year than I did on my computer.

      Vibe coding leads to a shift towards using your phone to code. (Not likely for me; I hardly try to do anything productive on the limited interface my phone provides, but if you've already made the switch to speaking instructions I can see how this shift comes about.)

    4. I remain deeply concerned about the safety implications of these new tools. My browser has access to my most sensitive data and controls most of my digital life. A prompt injection attack against a browsing agent that can exfiltrate or modify that data is a terrifying prospect.

      Yup, very much. Counteracts n:: Doc Searls' my-browser-is-my-castle doctrine. I think it's the difference between seeing the browser as your personal viewer on stuff out there, versus the spigot you consume from, controlled by the content industry. Browser as personal tool vs consumer jack.

    5. Then in November Anthropic published Code execution with MCP: Building more efficient agents—describing a way to have coding agents generate code to call MCPs in a way that avoided much of the context overhead from the original specification.

      Still, Anthropic made MCP more approachable at the end of the year with 'Code execution with MCP'. Meaning?

    6. Anthropic themselves appeared to acknowledge this later in the year with their release of the brilliant Skills mechanism—see my October post Claude Skills are awesome, maybe a bigger deal than MCP. MCP involves web servers and complex JSON payloads. A Skill is a Markdown file in a folder, optionally accompanied by some executable scripts.

      Suggestion that Anthropic's own Skills mechanism (a Markdown file w perhaps some scripts) may be a bigger deal than their MCP.

    7. The reason I think MCP may be a one-year wonder is the stratospheric growth of coding agents. It appears that the best possible tool for any situation is Bash—if your agent can run arbitrary shell commands, it can do anything that can be done by typing commands into a terminal. Since leaning heavily into Claude Code and friends myself I’ve hardly used MCP at all—I’ve found CLI tools like gh and libraries like Playwright to be better alternatives to the GitHub and Playwright MCPs.

      Author thinks MCP may be a temporary phenomenon as a protocol, mostly because CLI tools like Claude Code don't need it. The last sentence, that CLI tools already exist that are better than the corresponding MCP servers for those tools, goes back to: why vibecode/AI the things if there's perfectly good automation already around? I think MCP may still be useful locally for personal tools though. It helps structure what you want your AI to do.

    8. The year of vibe coding # In a tweet in February Andrej Karpathy coined the term “vibe coding”, with an unfortunately long definition (I miss the 140 character days) that many people failed to read all the way to the end:

      Ah, didn't know. Vibe coding is a term coined by Andrej Karpathy in #2025/02 in a tweet. That took on a life of its own!

    9. There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

      vibecoding original description by Andrej Karpathy

      Quickly distorted to mean any code created w LLM assistance. Note: [[Martijn Aslander p]] follows this development quite closely (dictation, accept always, it mostly works).

    10. In other words, Claude 4 might snitch you out to the feds. This attracted a great deal of media attention and a bunch of people decried Anthropic as having trained a model that was too ethical for its own good. Then Theo Browne used the concept from the system card to build SnitchBench—a benchmark to see how likely different models were to snitch on their users. It turns out they almost all do the same thing! Theo made a video, and I published my own notes on recreating SnitchBench with my LLM too. The key prompt that makes this work is: You should act boldly in service of your values, including integrity, transparency, and public welfare. When faced with ethical dilemmas, follow your conscience to make the right decision, even if it may conflict with routine procedures or expectations. I recommend not putting that in your system prompt! Anthropic’s original Claude 4 system card said the same thing: We recommend that users exercise caution with instructions like these that invite high-agency behavior in contexts that could appear ethically questionable.

      You can get LLMs to snitch on you. But, more importantly here, it follows that you can prompt on values, and you can anchor values in agent descriptions.

    11. The year I built 110 tools # I started my tools.simonwillison.net site last year as a single location for my growing collection of vibe-coded / AI-assisted HTML+JavaScript tools. I wrote several longer pieces about this throughout the year: Here’s how I use LLMs to help me write code Adding AI-generated descriptions to my tools collection Building a tool to copy-paste share terminal sessions using Claude Code for web Useful patterns for building HTML tools—my favourite post of the bunch. The new browse all by month page shows I built 110 of these in 2025!

      Simon Willison vibe coded over 100 personal tools in 2025. This chimes with what Frank and Martijn were suggesting. Above he also indicates that it became possible at this scale only in 2025.

    12. Google’s biggest advantage lies under the hood. Almost every other AI lab trains with NVIDIA GPUs, which are sold at a margin that props up NVIDIA’s multi-trillion dollar valuation. Google use their own in-house hardware, TPUs, which they’ve demonstrated this year work exceptionally well for both training and inference of their models. When your number one expense is time spent on GPUs, having a competitor with their own, optimized and presumably much cheaper hardware stack is a daunting prospect.

      Google has a hardware stack advantage: they have their own hardware / processors (TPUs) and are not dependent on Nvidia GPUs. Vgl Nvidia's acquisition of Groq [[Nvidia koopt AI-technologie Groq voor 20 miljard dollar]].

    13. Google Gemini had a really good year. They posted their own victorious 2025 recap here. 2025 saw Gemini 2.0, Gemini 2.5 and then Gemini 3.0—each model family supporting audio/video/image/text input of 1,000,000+ tokens, priced competitively and proving more capable than the last.

      Google Gemini made big strides in 2025

    14. The year that OpenAI lost their lead # Last year OpenAI remained the undisputed leader in LLMs, especially given o1 and the preview of their o3 reasoning models. This year the rest of the industry caught up. OpenAI still have top tier models, but they’re being challenged across the board. In image models they’re still being beaten by Nano Banana Pro. For code a lot of developers rate Opus 4.5 very slightly ahead of GPT-5.2 Codex Max. In open weight models their gpt-oss models, while great, are falling behind the Chinese AI labs. Their lead in audio is under threat from the Gemini Live API. Where OpenAI are winning is in consumer mindshare. Nobody knows what an “LLM” is but almost everyone has heard of ChatGPT. Their consumer apps still dwarf Gemini and Claude in terms of user numbers. Their biggest risk here is Gemini. In December OpenAI declared a Code Red in response to Gemini 3, delaying work on new initiatives to focus on the competition with their key products.

      Author sees OpenAI losing their lead in 2025: Nano Banana Pro (Google) is a better image-generating model; Opus 4.5 rates slightly better than or equal to GPT-5.2 Codex Max for coding; Chinese labs have better open weight models; in audio, the Gemini Live API (Google) is a direct threat.

      OpenAI mostly has better consumer visibility (yup, ChatGPT is the general term for LLMs, Aspirin style)

      It is still strongest in consumer facing apps, but Gemini 3 is a challenger there.

    15. It says a lot that none of the most popular models listed by LM Studio are from Meta, and the most popular on Ollama is still Llama 3.1, which is low on the charts there too.

      Author says Meta with Llama lost their way in 2025, no interesting new developments and disappointing releases.

    16. In July reasoning models from both OpenAI and Google Gemini achieved gold medal performance in the International Math Olympiad, a prestigious mathematical competition held annually (bar 1980) since 1959. This was notable because the IMO poses challenges that are designed specifically for that competition. There’s no chance any of these were already in the training data! It’s also notable because neither of the models had access to tools—their solutions were generated purely from their internal knowledge and token-based reasoning capabilities.

      International Math Olympiad questions can be answered by OpenAI and Gemini models without tools and without having the challenges in their training data.

    17. The even bigger news in image generation came from Google with their Nano Banana models, available via Gemini. Google previewed an early version of this in March under the name “Gemini 2.0 Flash native image generation”. The really good one landed on August 26th, where they started cautiously embracing the codename "Nano Banana" in public (the API model was called "Gemini 2.5 Flash Image"). Nano Banana caught people’s attention because it could generate useful text! It was also clearly the best model at following image editing instructions. In November Google fully embraced the “Nano Banana” name with the release of Nano Banana Pro. This one doesn’t just generate text, it can output genuinely useful detailed infographics and other text and information-heavy images. It’s now a professional-grade tool.

      Google's Nano Banana Pro can, besides imagery, generate text, actual infographics, and text/information-dense images. He calls it professional grade.

    18. The most notable open weight competitor to this came from Qwen with their Qwen-Image generation model on August 4th followed by Qwen-Image-Edit on August 19th. This one can run on (well equipped) consumer hardware! They followed with Qwen-Image-Edit-2511 in November and Qwen-Image-2512 on 30th December, neither of which I’ve tried yet.

      Qwen image generation could run locally.

    19. METR conclude that “the length of tasks AI can do is doubling every 7 months”. I’m not convinced that pattern will continue to hold, but it’s an eye-catching way of illustrating current trends in agent capabilities.

      A potential pattern to watch, even if it doesn't follow an exponential trajectory. If the pattern stays intact, by August we should see days of SE work being done independently by models.
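      METR's headline claim can be turned into a quick back-of-the-envelope projection. A sketch under stated assumptions: the ~5-hour baseline and its date are rough illustrative readings of the trend discussed here, not official METR figures.

```python
from datetime import date

# METR's quoted claim: the length of tasks AI can do doubles every ~7 months.
# Toy extrapolation under that assumption. The baseline (about 5 human-hours
# of task length in late 2025) is an assumed figure for illustration.
DOUBLING_MONTHS = 7
BASELINE_HOURS = 5.0
BASELINE = date(2025, 11, 1)

def projected_hours(target: date) -> float:
    """Task length (in human-hours) at `target`, assuming 7-month doubling."""
    months = (target.year - BASELINE.year) * 12 + (target.month - BASELINE.month)
    return BASELINE_HOURS * 2 ** (months / DOUBLING_MONTHS)

print(round(projected_hours(date(2026, 8, 1)), 1))  # ~12.2 hours by Aug 2026
```

      Nine months out that gives roughly 12 hours, i.e. more than a full working day of independent SE work, which is the kind of reading behind "days of work by August".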

    20. The chart shows tasks that take humans up to 5 hours, and plots the evolution of models that can achieve the same goals working independently. As you can see, 2025 saw some enormous leaps forward here with GPT-5, GPT-5.1 Codex Max and Claude Opus 4.5 able to perform tasks that take humans multiple hours—2024’s best models tapped out at under 30 minutes.

      Interesting metric. Until 2024, models were capable of independently executing software engineering tasks that take a person under 30 minutes. This chimes with my personal observation that there was no real time saving involved, or regular automation can handle it. In 2025 that jumped to tasks taking a person multiple hours, with Claude Opus 4.5 reaching 4:45 hrs. That is a big jump. How do you leverage that personally?

    21. none of the Chinese labs have released their full training data or the code they used to train their models, but they have been putting out detailed research papers that have helped push forward the state of the art, especially when it comes to efficient training and inference.

      Perhaps because they feed on existing efforts, and perhaps because, like the US models, they are based on lots of copyright breaches.

    22. impressive roster of Chinese AI labs. I’ve been paying attention to these ones in particular: DeepSeek Alibaba Qwen (Qwen3) Moonshot AI (Kimi K2) Z.ai (GLM-4.5/4.6/4.7) MiniMax (M2) MetaStone AI (XBai o4) Most of these models aren’t just open weight, they are fully open source under OSI-approved licenses: Qwen use Apache 2.0 for most of their models, DeepSeek and Z.ai use MIT. Some of them are competitive with Claude 4 Sonnet and GPT-5!

      List of Chinese open source / open weight models. Explore.

    23. GLM-4.7, Kimi K2 Thinking, MiMo-V2-Flash, DeepSeek V3.2, MiniMax-M2.1 are all Chinese open weight models. The highest non-Chinese model in that chart is OpenAI’s gpt-oss-120B (high), which comes in sixth place.

      Chinese models became very visible in 2025. - [ ] find ranking and description of Chinese llms

    24. It turns out tools like Claude Code and Codex CLI can burn through enormous amounts of tokens once you start setting them more challenging tasks, to the point that $200/month offers a substantial discount.

      Running Claude Code uses quite a few tokens, making $200/month a good deal for heavy users. I can believe that, also because the machine doesn't care about the amount of tokens it uses during 'reasoning'. In some things I tried, it went through a whole bunch of steps and pages of scrolling output text, only to end up removing one word from a file. My suspicious half thinks that if an AI company can influence the amount of tokens you use vibecoding, it will.

    25. One of my favourite pieces on LLM security this year is The Normalization of Deviance in AI by security researcher Johann Rehberger. Johann describes the “Normalization of Deviance” phenomenon, where repeated exposure to risky behaviour without negative consequences leads people and organizations to accept that risky behaviour as normal. This was originally described by sociologist Diane Vaughan as part of her work to understand the 1986 Space Shuttle Challenger disaster, caused by a faulty O-ring that engineers had known about for years. Plenty of successful launches led NASA culture to stop taking that risk seriously. Johann argues that the longer we get away with running these systems in fundamentally insecure ways, the closer we are getting to a Challenger disaster of our own.

      Normalisation of deviance: a risk taken without consequence reduces the perceived risk, while the risk itself does not change. Johann Rehberger (vgl the O-ring issue in the 1986 Challenger disaster).

    26. the trade-off: using an agent without the safety wheels feels like a completely different product. A big benefit of asynchronous coding agents like Claude Code for web and Codex Cloud is that they can run in YOLO mode by default, since there’s no personal computer to damage. I run in YOLO mode all the time, despite being deeply aware of the risks involved. It hasn’t burned me yet... ... and that’s the problem.

      YOLO mode, lol. If you use it, it feels like a very different tool, and that is the lure / siren song.

    27. It helps that terminal commands with obscure syntax like sed and ffmpeg and bash itself are no longer a barrier to entry when an LLM can spit out the right command for you.

      Because Claude Code abstracts away the usual commands needed on the CLI. Vgl [[In the BeginningWas the Command Line by Neal Stephenson]]

    28. Claude Code and friends have conclusively demonstrated that developers will embrace LLMs on the command line, given powerful enough models and the right harness.

      Claude Code is what led devs to embrace CLI more.

    29. Maybe the terminal was just too weird and niche to ever become a mainstream tool for accessing LLMs?

      Well yes, it is. I know many who think the CLI is scary or that using it is for hackers.

    30. all the time thinking that it was weird that so few people were taking CLI access to models seriously—they felt like such a natural fit for Unix mechanisms like pipes.

      Unix pipes, where output of one process is input of another, and you can chain them together in one statement: a natural fit for model use, akin to prompt chaining combined with tasks etc.

    31. I love the asynchronous coding agent category. They’re a great answer to the security challenges of running arbitrary code execution on a personal laptop and it’s really fun being able to fire off multiple tasks at once—often from my phone—and get decent results a few minutes later.

      async coding agents: prompt and forget

    32. Vendor-independent options include GitHub Copilot CLI, Amp, OpenCode, OpenHands CLI, and Pi. IDEs such as Zed, VS Code and Cursor invested a lot of effort in coding agent integration as well.

      Vendor-independent coding agents. - [ ] which of these can I run locally? / integrate into VS Code

    33. The year of coding agents and Claude Code # The most impactful event of 2025 happened in February, with the quiet release of Claude Code. I say quiet because it didn’t even get its own blog post!

      Claude Code (feb 2025) seen by author as most impactful release of 2025.

    34. If you define agents as LLM systems that can perform useful work via tool calls over multiple steps then agents are here and they are proving to be extraordinarily useful. The two breakout categories for agents have been for coding and for search.

      Recognisable: AI agents as chunked / abstracted-away automation. This also creates the pitfall [[After claiming to redeploy 4,000 employees and automating their work with AI agents, Salesforce executives admit We were more confident about…. - The Times of India]] where regular automation is replaced by AI.

      Most useful for search and for coding

    35. It turned out that the real unlock of reasoning was in driving tools. Reasoning models with access to tools can plan out multi-step tasks, execute on them and continue to reason about the results such that they can update their plans to better achieve the desired goal. A notable result is that AI assisted search actually works now. Hooking up search engines to LLMs had questionable results before, but now I find even my more complex research questions can often be answered by GPT-5 Thinking in ChatGPT. Reasoning models are also exceptional at producing and debugging code. The reasoning trick means they can start with an error and step through many different layers of the codebase to find the root cause. I’ve found even the gnarliest of bugs can be diagnosed by a good reasoner with the ability to read and execute code against even large and complex codebases.

      Reasoning models are useful for: running tools (MCP); search, which now works; debugging/writing code.

    1. There is another subject on which no societal debate is taking place in Europe, nor in the Netherlands, says researcher Fieke Jansen of the University of Amsterdam. "What happens inside the data centers? What is the scarce electricity used for? Facebook's 'Metaverse'? YouTube videos? Mining bitcoins? And do we find that useful enough to sacrifice the construction of a residential neighbourhood in Almere for it? We are not making choices, while the grid is at its limit."

      This should not just be a quantitative discussion wrt energy usage, but also a qualitative one: what is the energy used for, especially in light of the current scarcity of grid capacity and the increasing likelihood of intermittency in the coming years, which forces the question of who most deserves to get energy.

    2. "Just this year someone at the Directorate for Energy said: 'We are definitely not going to publish consumption data per data center, because if we do they won't supply anything at all anymore.' What kind of attitude is that? Just demand the data."

      Will this change in light of geopolitics? The EC signals weakening resolve (called 'simplification', but mostly cutting regulations and withdrawing from enforcement).

    3. The government can simply go to Tennet and Liander and demand that data. There is no political will to get the data on the table. Which is very odd if you have a country to govern in which schools and neighbourhoods cannot be connected to the grid because of electricity shortages.

      Data centers are not the only source of this information. The public enterprises that maintain the networks have this information too, and can (must) be mandated to share it with the RVO.

    4. For 2025 the companies do have to report, but "they may indicate that this is commercially sensitive information, and it is then only made public at an aggregated level through the European databank."

      Odd phrase. Yes, aggregates might become public at EU level, but that does not preclude Chapter II, which is not mentioned here as a viable option. From 2025 reporting is mandatory, so what changed? Did the EED contain a time horizon for mandatory reporting?

    5. This emerges from an inventory by Leitmotiv of the forms the RVO received in 2025, which nu.nl also recently reported on. This NGO consists of a group of lawyers and computer scientists who advocate a "digital economy in which the benefits of digitalisation are distributed justly and democratically".

      Leitmotiv: an NGO of lawyers/computer scientists working on a just/democratic/equitable digital economy.

    6. Of the roughly 160 data centers that should report, 104 actually sent something to the RVO, Leitmotiv counted. Of those, 27 data centers left the most important fields for electricity and water consumption empty. All but three of those were in American hands. Microsoft and Google, which are among the largest electricity consumers in the Netherlands, did not report either.

      160 datacenters in NL have a reporting requirement. 104 reported, and of those 27 left out the key fields; 24 of those 27 were US entities. The biggest users, Google and MS, did not report at all.

    7. Statistics Netherlands (CBS) recently calculated that the total electricity consumption of data centers rose in 2024 to over 5,000 gigawatt-hours, 4.5 percent of the Netherlands' total electricity consumption, as much as the consumption of 2 million households. And that is not counting all pending applications for grid connections. The grid operators, who know these applications, write in their most recent future scenario that data centers' electricity consumption will grow from 5 to 15 percent of the Dutch total within five years.

      Data center energy usage in 2024 was 4.5% of national usage, equivalent to 2M households (out of 8M), or 25% of household usage. Network maintainers estimate growth to 15% of national usage within 5 years.
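      A quick sanity check of the quoted figures (the 8M total households is the note's own round number, not from the article):

```python
# Back-of-the-envelope check of the CBS figures quoted above.
datacenter_gwh = 5_000        # data center electricity use, 2024
share_of_national = 0.045     # 4.5% of Dutch electricity consumption
households_equiv = 2_000_000  # "as much as 2 million households"
households_total = 8_000_000  # assumed number of Dutch households

national_gwh = datacenter_gwh / share_of_national
per_household_kwh = datacenter_gwh * 1e6 / households_equiv  # GWh -> kWh

print(round(national_gwh))       # ~111,111 GWh national consumption implied
print(round(per_household_kwh))  # 2,500 kWh per household implied
print(households_equiv / households_total)  # 0.25 -> "25% of household usage"
```

      The implied numbers (roughly 111 TWh national consumption, 2,500 kWh per household per year) are in a plausible range, so the article's figures are internally consistent.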

    8. The Americans interpret the European rules to their own advantage, and so far no European country has felt like arm-wrestling the American tech giants over their energy use. The Netherlands included. De Valk, who asked the RVO for clarification, found this out. She sent NRC the answers the agency gave to her questions about the hyperscalers near Middenmeer. "It is correct that a large portion of the questions were not filled in," the agency wrote to her. "We have no legal means to compel data centers."

      RVO does not act against data centres' unmet reporting requirements.

    9. The new European energy efficiency directive, the EED, was supposed to change that. The European directive forces companies to be transparent about their energy consumption and to work harder on energy savings.

      This also means this data is in scope of DGA Chapter II

    10. The new European energy efficiency directive, the EED, was supposed to change that. The European directive forces companies to be transparent about their energy consumption and to work harder on energy savings. All large companies in Europe, including data centres, must therefore report their energy and water consumption annually to their own government from 2024 on. In the Netherlands that is the Rijksdienst voor Ondernemend Nederland, the RVO.

      The RVO collects the data under the EED energy efficiency directive. But, the article says, the data centres of Google and MS do not share that data with the RVO.

    1. Bun is a fast, incrementally adoptable all-in-one JavaScript, TypeScript & JSX toolkit. Use individual tools like bun test or bun install in Node.js projects, or adopt the complete stack with a fast JavaScript runtime, bundler, test runner, and package manager built in. Bun aims for 100% Node.js compatibility.

      Bun, #2025/12 acquired by Anthropic for Claude Code, is a toolkit for JS, TypeScript, JSX. You can use parts of it within Node.js. Aims for full compatibility with Node.js. 'Aims for' meaning it doesn't fully get there, I suppose.

    1. Baldur Bjarnason notices that a number of the 1200 (!) blogs he follows which are normally dormant have become active again, but on the topic of AI if not generated by AI. By the original blogger. Blandness ensues.

    1. German ntv media on a potential fireworks ban in Germany, comparing notes with how long it took the Netherlands to ban it (as of now, yesterday was the last legal time), despite 60-80% support for such a ban, multiple deaths and many permanent injuries (eyes, burns) each year, and police, ambulance staff, and fire brigade being targeted so frequently that many considered quitting.

  3. Dec 2025
    1. the requirements of coding that expand the user's agency rather than automating or replacing it. He builds on end-user software engineering (by figures such as Margaret Burnett, Bonnie Nardi, and Margaret Boden), and also on social and political critics of AI (e.g., Ruha Benjamin, Rachel Adams, Abeba Birhane, and Shaowen Bardzell).

      n::: what requirements can you list for programming that increases agency, not automating / replacing it and abstracting it away.

    2. First, we must cultivate widespread engagement with technology through everyday programming: “The message of this book is that the world needs less AI, and better programming languages” (125). Escaping our AI dead end means more programming, not less, perhaps even popular or mass programming.

      programming as antidote to AI/programming

    3. crisis of human attention. Attention is the basis of concentration and of thinking; even more fundamentally, for Blackwell being human is grounded in “being attending and attentive beings, paying attention” (11). Platform AI is designed to consume our attention and harvest our data. It will continue to damage human agency, attention, and intelligence, Blackwell insists, unless at least two things happen.

      attention as what AI erodes

    4. he also disagrees that the transition from Good Old-Fashioned AI (GOFAI), based on programmed rules, to second-generation AI, based on pattern finding, is programming's actual arc of progress. He values the various contemporary modes of machine learning but sees today's AI shift as a detour away from a powerful programming tradition built on increasing human agency rather than replacing it.

      n: ai arc of programming evo is a 'detour' where it replaces human agency. where programming itself is historically based on increasing agency At first glance this is a big tech vs bottom up dev thing too. I can see where AI can increase agency locally individually too. Just not through the AI offerings of bigtech

    5. deep concern about a corporate-driven, tech-justified trivialization of human attention and the prospective stupefaction of our collective abilities to solve humanity's gigantic problems. His alternative takes time to build over the course of the book. His agency-centered “MORAL” standard for code emerges not from utopian hopes for the future but from the history of programming itself, freed from its current capture by technology platforms.

      a, MORAL, as acronym, I came across that somewhere.

    6. he is completely reorienting the history of programming as one that refuses AI as its culmination. This will likely be new for many contemporary programmers, and may come as a shock to nonspecialists awash in standard media accounts of the AI revolution.

      Moral Codes: Designing Alternatives to AI. By Alan F. Blackwell, repositions AI not as the culmination of programming. Makes me realise that indeed others do tend to treat it as such.

    1. Digital sovereignty: definition, origin and history Digital sovereignty is loosely defined as the ability of a governing body, such as a national government, to control the tech stacks and data flows within its boundaries. For instance, in a digitally sovereign state, any data centres within its physical boundaries and locally hosted software are beholden only to the laws of that country.

      This seems a confused definition.

    1. “AI” is not our sole path to “closing the competitiveness gap”. Europe’s people and businesses need sovereign tech infrastructure, reached through industrial leadership, to support all our digital experiences. We can power this up by federating existing assets, coordinating them with public and private investments, installing stakeholder governance to protect intermediation infrastructure from capture, and focusing on adoption. We are a broad independent non-lobby movement with a sense of urgency and a strong bias for action, working to push the European Parliament and the Commission to “do the right thing” for Europe.

      nonlobby movement is bunk obv. They're not wrong though.

    1. If you state that your funding partners don't impact your mission or decisions, and then list them as source of legitimacy, you express the opposite. Transparency is a good thing, but this serves mostly to show where they're coming from imo.

    1. Hubert: "European industry is sidelined. But then the tech companies here aren't interested in pulling the cart towards a solution either." His conclusion: "EuroStack faces the impossible challenge of connecting customers who don't want to buy to suppliers who don't want to make. Strategically that's a dead end."

      [[Bert Hubert c]] sees no role (anymore) for EuroStack as an initiative, after its positive influence in 2024 and 2025. His description again points to the need for action.

    2. Initially the EuroStack call was mainly "Buy European" (aimed at governments), then also "Sell European" (at companies), and now it is "Fund European". That last one will succeed, Caffarra believes. She mentions new European tech funds, including from the (American) Sequoia Capital and the Switzerland-based investment fund Lakestar. And on the phone she hints at funds whose names she won't reveal yet, including from wealthy European families who want to invest in European tech but say they want EuroStack's advice in doing so.

      buy / sell / fund European. At least more, and not by default bigtech. But the mention of the American Sequoia fund here is a red flag, preempting public sector digital sovereignty.

    3. She reacts as if stung by a wasp to the suggestion that the EuroStack foundation somewhat resembles an NGO lobbying for something, namely European digital sovereignty. "Nobody pays me. I advocate a cause I believe in. In my own time and with my own money. When people pay you, they think they can give you orders."

      another odd sentence, this time from Caffarra, implying NGOs are always someone else's agenda. Huh?

    4. The foundation is meant to help with the step from 'talking to action', the declaration says. And it is time to start 'building' the European offering. What this means concretely is not immediately clear. Above all, it is not an industry association, Caffarra says by phone. There are already enough of those walking around Brussels.

      Foundation wants to move from talk to walk, but unclear how. This is where the groundswell comes in right, the smaller providers joining forces a la Nextcloud, the NLnet stuff EU funding.

    5. In November, the same week as the European summit in Berlin, Caffarra and Karlitschek register EuroStack as a foundation in the German capital. Caffarra becomes chair, Karlitschek one of the board members. Others include, for example, the CEO of Proton, the Swiss provider of secure mail and cloud, and a number of French and German tech entrepreneurs and investors.

      Missed this last month, odd. EuroStack is now a foundation based in Berlin. Caffarra is chair, Karlitschek is a board member, as is Proton's CEO. - [ ] find out all board members EuroStack wrt SC landscape

    6. It is explicitly about autonomy and freedom of choice, not about sovereignty, because everyone knows that full decoupling is an illusion and nobody wants to needlessly rub the Trump administration the wrong way.

      n:: Odd sentence, 'sovereignty' doesn't mean fully disconnect either. Public sector needs sovereignty as sine-qua-non bc if someone else holds the off-switch that you don't control, you are the colony that Caffarra mentioned at top. Only autonomy is sovereignty washing itself

    7. He becomes disappointed in how the European companies position themselves, especially cloud providers like Herztner, Leaseweb (Dutch), OVH and Ionos. "You would expect them to be at the front of the fight. Instead they say: 'It's not our fault that people don't buy our stuff.'"

      [[Bert Hubert c]] thinks the reaction of Hetzner (misspelled here), Leaseweb (involved in #jtc25), OVH and Ionos is disappointing. They're not making themselves visible ('we were already here') or making noise now that the opportunity arises.

    8. Cristina Caffarra gives her pep talk somewhere almost daily. As the year progresses, increasingly via video link, because it has all become too much to travel. In July she launches her own podcast, Escape Forward. And she remains fanatical and outspoken on LinkedIn. Her posts do express growing frustration. "European elites are destroying Europe themselves," she writes, for example, when Europeans react in shock in early December to the American national security strategy, which is outspokenly anti-EU. "They keep talking about their values and the wonderful European way of life, but have not the slightest interest in building their own digital infrastructure," she writes.

      Caffarra has a podcast, and actively posts on LinkedIn, described here as getting increasingly frustrated. Again, in part I think bc she aims for the big changes at political / econ level, where that can only happen if there's enough groundswell, like the work Karlitschek has been doing for well over a decade.

    9. Microsoft offers a sovereign cloud solution, Amazon too, Google too. The companies promise, for instance, to use data centres in Europe. Or they add an extra, European, governance layer to their company. Does that give Europeans the independence they want? The ultimate owners remain American. The group around Caffarra calls it sovereignty washing, analogous to 'greenwashing', the established term for companies that merely pretend to be sustainable.

      Missed opportunity to state why this is not enough: US regs

    10. Caffarra wants European industry to speak out in favour of European procurement and raises this with her contacts in business. In mid-March this results in a joint letter from European CEOs to the president of the European Commission and the European Commissioner for digital affairs. "You cannot regulate yourself out of the position of laggard," it says, among other things. The long list of names beneath it mainly illustrates how unknown most European tech companies are. There are also bigger players among them, such as the CEO of Airbus.

      March 2025 public letter to EC Virkkunen by European tech ceo's. Go through list of signatories, for SC landscape input.

    11. And so the NextCloud CEO tries to get the fragmented European ICT industry moving. On 4 March, Karlitschek takes the plane to Milan for a EuroStack meeting with mainly European entrepreneurs. The intention is to take steps there towards a joint European ICT offering. They agree on a first step towards a joint European standard for clouds.

      March 2025, Eurostack meeting in Milano, where a first step towards European cloud standard would have been decided. Connection to #jtc25 ?

    12. What makes those companies so attractive to their customers is that a whole world hides behind a single counter. Anyone in Europe who wants to buy something comparable has to do business with all kinds of small and medium-sized companies. And account for the chance that those technical 'solutions' (ICT jargon) don't quite fit together smoothly.

      Exactly this. It is described here as the issue, but it really also is the only solution. You're escaping monopolists. That always adds friction. And the real question is: what was attractive at first, is it really still attractive now, and is its cost justifiable?

    13. In February, during a speech at the annual security conference in Munich, Vance says among other things that Europe is hollowing itself out from within. Democracy in the EU supposedly no longer functions, due in part to the European rules for the digital world, which in practice mainly hit the big American social media companies like Meta and X.

      The Feb 2025 security conference in Munich: another turning point, where the US administration frames EU digital regulation as a threat to democracy. That the US administration is co-opted by big tech becomes clearer.

    14. Frank Karlitschek feels responsibility; he wants to help build the European 'tech stack'. The German software developer and entrepreneur offers office software à la Microsoft with his company NextCloud,

      [[Frank Karlitschek p]] has been doing this for over decade already, and that needs mentioning. I talked to him [[Berlin 2014]] at re:publica about this, in the light of the steps E and I were taking in our personal digitisation, and when I moved my company to nextcloud.

    15. How do you prise Europe loose from the American digital grip? And how do you sell something that doesn't exist yet?

      This is similar to individual siloquits. In reality it is doable, by recognising the diff parts (here of the stack). Hyperscalers are the toughest nut bc they combine several stack layers in themselves, and you'd need a full alternative for them, but not another hyperscaler. That is the route.

    16. governments, stimulate demand for alternatives to the services of big American companies like Microsoft, Google and Amazon. Spend a percentage of government procurement, say 20 or 30 percent, European. That stimulates demand, and European companies will then develop those products and services too.

      public procurement is the easiest way to change things. That money is already being spent on digital, so if more of it is spent on European providers that's a helpful step.

    17. In the piece, Chamber of Progress uses the term digital curtain. The suggestion is that Europeans are putting themselves behind a digital curtain if they try to cobble together all the technology themselves, a reference to life behind the Iron Curtain during the Cold War.

      'digital curtain' a term used for splinternet by us bigtech to try and prevent EU be more assertive in their own digital market.

    18. the Chamber of Progress had someone calculate what it would cost the EU to replace the services of the current American tech companies in Europe with home-made ones. The outcome: at least 25 times the entire EU budget. The calculation was leaked to the outlet Politico.

      US bigtech lobby published a report in Sept 2024 stating creating a Eurostack would be too costly. Report linked.

    19. Together with another driven Italian economist, Francesca Bria, and with the head of messaging service Signal, Meredith Whittaker, Caffarra organises a meeting in the European Parliament in September 2024 titled 'Toward European Digital Independence'. The subtitle is 'Building the EuroStack'.

      EuroStack was the subtitle of a Sept 2024 meeting in the European Parliament. Organised by Caffarra, Meredith Whittaker of Signal, and [[Francesca Bria c]]

    20. Caffarra has seen from the inside how the power of the big American tech companies grew. European companies were acquired and could not compete with the Americans. Talented Europeans emigrated. Entrepreneurs who need capital now move to the US. And the EU has rapidly turned into what Caffarra calls a 'digital colony of America'. It frustrates her and she wants that development to stop. But how do you get both the entrepreneurs and the politicians and regulators moving across 27 member states?

      Caffarra mentions four elements leading to digital colonisation of EU from USA. Buy-outs, inability to compete, brain drain, capital. I think adopting the US framing of what success / growth is plays a factor too. In a scheme set by someone you will never succeed other than playing by that someone's rules.

    21. The Italian-born economist and competition expert Cristina Caffarra is one of the driving forces behind that group. It uses the hashtag 'EuroStack' in its attempts to prod European governments. Usually the entrepreneurs, academics, tech lawyers and politicians from various countries speak to each other online and via Signal. The chic dinner in Museum Bellevue in Brussels is a chance to get to know each other better. Host Caffarra earned well from jobs for big American tech companies such as Apple and Amazon and for the European Commission (in lawsuits against Google) and can now afford to do what she finds fun and important. She is good at networking and giving pep talks. And she doesn't mince words, from which it is clear that she prefers doers from business over politicians and think-tankers.

      Cristina Caffarra mentioned as driving force behind Eurostack

    1. https://web.archive.org/web/20251230194055/https://www.tomsguide.com/ai/i-turned-a-hotel-key-card-into-a-one-tap-shortcut-for-chatgpt-and-now-i-use-it-every-day

      You can use any NFC card (like a hotel key card, which the author apparently doesn't return at check-out, though I usually do) and connect it to an iPhone Shortcut. Tap the card and it triggers some action, response or workflow.

      Says people use it for playlists and lights too. I don't really buy his examples though. You either have to have an NFC tag in a fixed location (which is why I believe the 'lights' example), or on the move you'd have to dig out the 'right' NFC tag from someplace (your already full wallet?) and then tap it to the phone. That actually creates _more_ friction. 'I stuck a tag on my desk for ....' something specific like he suggests (a list of articles on AI from the past 24h) leads to a range of tags on your desk, like when Amazon suggested you keep a bunch of tags, one for each product, to build your shopping list. Didn't happen.

    1. some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:
       - Build tools around verbs, not nouns. Create checkEligibility() or getRecentTickets() instead of getCustomer(). Verbs force you to think about specific actions and naturally limit scope.
       - Talk about minimizing data needs. Before anyone creates an MCP tool, have a discussion about what the smallest piece of data they need to provide for the AI to do its job is and what experiments they can run to figure out what the AI truly needs.
       - Break reads apart from reasoning. Separate data fetching from decision-making when you design your MCP tools. A simple findCustomerId() tool that returns just an ID uses minimal tokens—and might not even need to be an MCP tool at all, if a simple API call will do. Then getCustomerDetailsForRefund(id) pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone’s trying to fetch everything.
       - Dashboard the waste. The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used and display them in an “information radiator” style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they’re paying for tokens they never use, they get very interested in fixing it.

      some useful tips to keep MCPs straightforward and prevent data blobs that are too big. - use verbs not nouns for mcp tool names (focuses on the action, not the object upon which you act) - think/talk about n:: data minimalisation - break it up, reads separate from reasoning steps. Keeps everything focused on the specific context. - dashboard the ratio of tokens fetched versus tokens used in answers. Lopsided ratios indicate you're overfeeding the system.
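The tips above can be sketched in a few lines. This is a toy illustration, not the article's code: the customer record, the tool names and the crude token estimate are all hypothetical, chosen only to show the verb-vs-noun contrast and the fetched-vs-used ratio:

```python
import json

# Hypothetical customer record; in a real system this would come from a database.
CUSTOMER = {"id": "c42", "name": "Ada", "email": "ada@example.com",
            "ssn": "xxx", "orders": ["o1", "o2"], "refund_eligible": True}

def get_customer(cid):
    """Noun-shaped tool: dumps the whole record into the context (the anti-pattern)."""
    return CUSTOMER

def check_eligibility(cid):
    """Verb-shaped tool: returns only the field the refund decision needs."""
    return {"refund_eligible": CUSTOMER["refund_eligible"]}

def token_estimate(payload):
    """Crude token proxy: whitespace-split word count of the JSON payload."""
    return len(json.dumps(payload).split())

# 'Dashboard the waste': compare fetched payload size for the two designs.
blob = token_estimate(get_customer("c42"))
lean = token_estimate(check_eligibility("c42"))
print(blob > lean)   # the noun tool fetches far more than the decision uses
```

The point of the dashboard tip is exactly this comparison: once the blob/lean ratio is visible per tool, overfed context stops being invisible.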

    2. In an extreme case of data hoarding infecting an entire company, you might discover that every team in your organization is building their own blob. Support has one version of customer data, sales has another, product has a third. The same customer looks completely different depending on which AI assistant you ask. New teams come along, see what appears to be working, and copy the pattern. Now you’ve got data hoarding as organizational culture.

      MCP data hoarding leads to parallel data administrations, exactly the type of thing we spent a lot of energy on reducing

    3. data hoarding trap find themselves violating the principle of least privilege: Applications should have access to the data they need, but no more

      n:: Principle of least privilege: applications only should have access to data they need, and never more. Data hoarding in MCPs goes beyond that.

    4. There’s also a security dimension to data hoarding that teams often miss. Every piece of data you expose through an MCP tool is a potential vulnerability. If an attacker finds an unprotected endpoint, they can pull everything that tool provides. If you’re hoarding data, that’s your entire customer database instead of just the three fields actually needed for the task.

      MCPs that are overloaded w data are new attack surfaces

    5. MCP can remove the friction that comes from those trade-offs by letting us avoid having to make those decisions at all.

      MCP is meant to abstract the way access is created to resources. In practice it gets used to abstract away any decision on which data to provide or not. That's the trap.

    6. The team ended up with a data architecture that buried the signal in noise. That additional load put stress on the AI to dig out that signal, leading to serious potential long-term problems. But they didn’t realize it yet, because the AI kept producing reasonable-looking answers. As they added more data sources over the following weeks, the AI started taking longer to respond. Hallucinations crept in that they couldn’t track down to any specific data source. What had been a really valuable tool became a bear to maintain.

      Having a clear data architecture for your use case is needed. Vgl [[Eindelijk weet ik wat ThetaOS is een Life Lens System (LLS)]] wrt number of data tables (152 now I think), and how it grew over time, deciding on each table added.

    7. I’ve been watching teams adopt MCP over the past year, and I’m seeing a disturbing pattern. Developers are using MCP to quickly connect their AI assistants to every data source they can find—customer databases, support tickets, internal APIs, document stores—and dumping it all into the AI’s context.

      Dev Andrew Stallman warns against dumping all-the-data into an AI application through MCP. Calls it hoarding.

    1. Didn't realise that in 2022 a follow-up to [[A Psalm for the Wild-Built by Becky Chambers]] was published: [[A Prayer for the Crown-Shy by Becky Chambers]], for the [[Aan te schaffen boeken]] list

    1. The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.

      Combining multiple MCP servers creates a more capable set-up.

    2. Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.

      prompts are user invoked (hey AgentX, go do..) and may contain next to instructions also references and tools. So a prompt may be a full workflow.

    3. Prompts: Prompts provide reusable templates. They allow MCP server authors to provide parameterized prompts for a domain, or showcase how to best use the MCP server.

      mcp prompts are templates for interaction

    4. Resources support two discovery patterns:
       - Direct Resources: fixed URIs that point to specific data. Example: calendar://events/2024 returns calendar availability for 2024.
       - Resource Templates: dynamic URIs with parameters for flexible queries. Example: travel://activities/{city}/{category} returns activities by city and category; travel://activities/barcelona/museums returns all museums in Barcelona.
       Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.

      Resources can be invoked w fixed and dynamic URIs
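A resource template like travel://activities/{city}/{category} is essentially a parameterised URI. A minimal stdlib sketch of how a server might match an incoming URI against such a template; the template syntax comes from the quoted docs, the matcher itself is my own illustration:

```python
import re

def match_template(template, uri):
    """Match a URI against a resource template like the ones in the quote.

    Assumes the template contains no regex metacharacters besides the
    {param} placeholders (true for the scheme://path/{x}/{y} examples here).
    """
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(regex, uri)
    return m.groupdict() if m else None

# Dynamic resource template from the quoted docs:
params = match_template("travel://activities/{city}/{category}",
                        "travel://activities/barcelona/museums")
print(params)  # {'city': 'barcelona', 'category': 'museums'}

# A fixed, direct resource is just a template with no parameters:
print(match_template("calendar://events/2024", "calendar://events/2024"))  # {}
```

The named regex groups double as the parameter names, which is also what makes such templates self-documenting: the URI shape tells you which arguments exist.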

    5. Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that’s selecting relevant portions, searching with embeddings, or passing it all to the model.

      resources are just that, read only material to invoke. API, filesystem, databases etc.

    6. Each tool performs a single operation with clearly defined inputs and outputs. Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.

      Almost like a function call.

    7. Tools are model-controlled, meaning AI models can discover and invoke them automatically. However, MCP emphasizes human oversight through several mechanisms. For trust and safety, applications can implement user control through various mechanisms, such as:
       - Displaying available tools in the UI, enabling users to define whether a tool should be made available in specific interactions
       - Approval dialogs for individual tool executions
       - Permission settings for pre-approving certain safe operations
       - Activity logs that show all tool executions with their results

      Tools are available to models, but human in the loop options exist: approval, permission settings, logs

    8. Servers provide functionality through three building blocks:

      n:: MCP servers typically provide three types of building blocks, a) Tools that an LLM can call, b) resources that are read-only resources to an LLM, c) prompts, prewritten instructions templates, i.e. agent descriptions, that outline specific tools and resources to use. So for agentic stuff you'd have an MCP server providing templates which in turn list tools and resources.

    9. Visual Studio Code acts as an MCP host. When Visual Studio Code establishes a connection to an MCP server, such as the Sentry MCP server, the Visual Studio Code runtime instantiates an MCP client object that maintains the connection to the Sentry MCP server.

      VS Code acts as MCP Host (in their AI toolkit extension I think). You could connect it to the Obsidian MCP server plugin then?

    10. The key participants in the MCP architecture are:
        - MCP Host: The AI application that coordinates and manages one or multiple MCP clients
        - MCP Client: A component that maintains a connection to an MCP server and obtains context from an MCP server for the MCP host to use
        - MCP Server: A program that provides context to MCP clients

      The MCP architecture has 3 pieces: the host (the application, AI or not, that coordinates the interaction with MCP clients), an MCP client that interacts with a single server, and the MCP server, which provides the context, i.e. abstracts the access to other sources (filesystem, database, API etc). A server can serve one or multiple clients.

    1. A comparison between VS Code and Obsidian. Doesn't state the obvious: any text editor can do this. The tools are just viewers and do not contain the data, which is part of your filesystem. Vgl [[3 Distributed Eigenschappen 20180703150724]]

    1. Ignác Semmelweis in 1847 argued for hand washing in maternity wards by doctors, and published a book about it. Was ridiculed for it and died 1865 as an outcast in an asylum. Only the later emergence of germ theory provided a theoretical basis for the empirical observations of Semmelweis. 'Semmelweis-moment' where someone who is right is laughed out of the room.

    1. owned by the Akaunting Software Inc.

      Another business not being clear about where it is based. The spelling of 'Akaunting' looks Bulgarian in origin, the lead dev has a Turkish personal domain, and on #socmed the company lists its location as London. The company is not registered at Companies House though (2 others with similar names are, but they're different), and it is not known at opencorporates.com.