Bookkeeping FOSS. Not immediately clear whether self-hosting is possible.
- Dec 2025
-
-
-
www.consilium.europa.eu
-
Member states holding the presidency work together closely in groups of three, called 'trios'. This system was introduced by the Lisbon Treaty in 2009. The trio sets long-term goals and prepares a common agenda determining the topics and major issues that will be addressed by the Council over an 18-month period. On the basis of this programme, each of the three countries prepares its own more detailed six-month programme.
Three member states always work together over their 18-month period. This provides continuity for long-term goals. The arrangement was introduced by the Lisbon Treaty (2009).
-
The rotating presidency of the Council of the EU (i.e. the MS)
The page holds the basic documents of the current presidency (focus areas, etc.).
-
-
www.consilium.europa.eu
-
Official schedule for the Council of the EU presidency to 2030:
2026: Cyprus (H1), Ireland (H2, trio IRL/LT/GR)
2027: Lithuania (H1, trio IRL/LT/GR), Greece (H2, trio IRL/LT/GR)
2028: Italy (H1, trio I/LV/L), Latvia (H2, trio I/LV/L)
2029: Luxembourg (H1, trio I/LV/L), Netherlands (H2, trio NL/SK/M)
2030: Slovakia (H1, trio NL/SK/M), Malta (H2, trio NL/SK/M)
NL in H2 2029 (and a trio looking 1 yr further ahead)
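The schedule above can be captured as a simple lookup table. A minimal sketch; the dict layout and helper function are my own illustration, not an official data source:

```python
# Council of the EU rotating presidency, per (year, half-year),
# transcribed from the schedule noted above.
PRESIDENCIES = {
    (2026, 1): "Cyprus",     (2026, 2): "Ireland",
    (2027, 1): "Lithuania",  (2027, 2): "Greece",
    (2028, 1): "Italy",      (2028, 2): "Latvia",
    (2029, 1): "Luxembourg", (2029, 2): "Netherlands",
    (2030, 1): "Slovakia",   (2030, 2): "Malta",
}

def presidency(year: int, month: int) -> str:
    """Return the member state holding the presidency in a given month."""
    half = 1 if month <= 6 else 2  # H1 = Jan-Jun, H2 = Jul-Dec
    return PRESIDENCIES[(year, half)]
```

For example, `presidency(2029, 10)` returns "Netherlands", matching the note above that NL holds the presidency in H2 2029.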
-
-
blog.johanneslink.net
-
For us personally, this means that we no longer use generative AI – neither for private nor professional purposes.
The authors avoid the use of generative AI, but realise that this is difficult for most people to do, and as such is a privileged, tech-capable position.
-
To Gen or Not To Gen: The Ethical Use of Generative AI

This blog entry started out as a translation of an article that my colleague Jakob and I wrote for a German magazine. After that we added more material and enriched it with additional references and sources. We aim to give an overview of many, but not all, aspects that we learned about GenAI and that we consider relevant for an informed ethical opinion. As for the depth of information, we are just scratching the surface; hopefully, the many references can lead you to dive in deeper wherever you want. Since we are both software developers, our views are biased and distorted. Keep in mind also that any writing about a “hot” topic like this is nothing but a snapshot of what we think we know today. By the time you read it, the authors’ knowledge and opinions will already have changed. Last update: December 8, 2025.

Abstract

ChatGPT, Gemini, Copilot: the number of generative AI (GenAI) applications and models is growing every day. In the field of software development in particular, code generation, coding assistants and vibe coding are on everyone’s lips. Like any technology, GenAI has two sides.
The great promises are offset by numerous disadvantages: immense energy consumption, mountains of electronic waste, the proliferation of misinformation on the internet and the dubious handling of intellectual property are just a few of the many negative aspects. Ethically responsible behaviour requires us to look at all the advantages, disadvantages and collateral damage of a technology before we use it or recommend its use to others. In this article, we examine both sides and eventually arrive at our personal and naturally subjective answer to whether and how GenAI can be used in an ethical manner.

About us

Johannes Link

… has been programming for over 40 years, 30 of them professionally. Since the end of the last century, extreme programming and other human-centred software development approaches have been at the heart of his work. The meaningful and ethical conduct of his private and professional life has been his driving force for years. He has been involved with GenAI since the early days of OpenAI’s GPT language models. More about Johannes can be found at https://johanneslink.net.

Jakob Schnell

… studied mathematics and computer science and has been working as a software developer for 5 years. He works as a lecturer and course director in university and non-university settings. As a youth leader, he also comes into regular contact with the lives of children and young people. In all these environments, he observes the growing use of GenAI and its impact on people.

Introduction

Ethics, what does that even mean?

Ethical behaviour sounds like the title of a boring university seminar. However, if you look at the Wikipedia article on the term [1], you will find that ‘how individuals behave when confronted with ethical dilemmas’ is at the heart of the definition. So it’s about us as humans taking responsibility and weighing up whether and how we do or don’t do certain things based on our values.
We have to consider ethical questions in our work because all the technologies we use and promote have an impact on us and on others. Therefore, they are neither neutral nor without alternative. It is about weighing up the advantages and potential against the damage and risks; and that applies to everyone, not just us personally, because often those who benefit from a development are different from those who suffer the consequences. As individuals and as a society, we have the right to decide whether and how we want to use technologies. Ideally, this should be in a way that benefits us all; but under no circumstances should it be in a way that benefits a small group and harms the majority. The crux of the matter is that ethical behaviour does not come for free. Ethics is neither efficient nor does it enhance your economic profit. That means that by acting according to your values you will, at some point, have to give something up. If you’re not willing to do that, you don’t have values, just opinions.

Clarification of terms

When we write ‘generative AI’ (GenAI), we are referring to a very specific subset of the many techniques and approaches that fall under the term ‘artificial intelligence’. Strictly speaking, these are a variety of very different approaches, ranging from symbolic logic, over automated planning, up to the broad field of machine learning (ML). Nowadays most effort, hype and money goes into deep learning (DL): a subfield of ML that uses multi-layered artificial neural networks to discover statistical correlations (aka patterns) in very large amounts of training data in order to reproduce those patterns later. Large language models (LLMs) and related methods for generating images, videos and speech now make it possible to apply this idea to completely unstructured data. While traditional ML methods often managed with a few dozen parameters, these models now work with several trillion (10^12) parameters.
In order for this to produce the desired results, both the amount of training data and the training duration must be increased by several orders of magnitude. This brings us to the definition of what we mean by ‘GenAI’ in this article: hyperscaled models that can only be developed, trained and deployed by a handful of companies in the world. These are primarily the GenAI services provided by OpenAI, Anthropic, Google and Microsoft, or services based on them. We also focus primarily on language models; the generation of images, videos, speech and music plays only a minor role in this article. Our focus on hyperscaled services does not mean that other ML methods are free of ethical problems; however, we are dealing with a completely different order of magnitude of damage and risk here. For example, there do exist variations of GenAI that use the same or similar techniques, but on a much smaller scale and in restricted domains (e.g. AlphaFold [2]). These approaches tend to bring more value with fewer downsides.

Basics

GenAI models are designed to interpolate and extrapolate [3], i.e. to fill in the gaps between training data and speculate beyond the limits of the training data. Together with the stochastic nature of the training data, this results in some interesting properties:

- GenAI models ‘invent’ answers; with LLMs, we like to refer to this as ‘hallucinations’.
- GenAI models do not know what is true or false, good or bad, efficient or effective, only what is statistically probable or improbable in relation to training data, context and query (aka prompt).
- GenAI models cannot explain their output; they have no capability of introspection. What is sold as introspection is just more output, with the previous output re-injected.
- GenAI models do not learn from you; they only draw from their training material. The learning experience is faked by re-injecting prior input into a conversation’s context [4].
- The context, i.e. the set of input parameters provided, is decisive for the accuracy of the generated result, but can also steer the model in the wrong direction.
- Increasing the context window makes a query much more computation-intensive, likely in a quadratic way. Therefore, the promised increase of the ‘maximum context window’ of many models is mostly fake [5].
- The reliability of LLMs cannot be fundamentally increased by even greater scaling [6].

Can LLMs think?

Proponents of the language-of-thought hypothesis [7] believe it is possible for purely language-based models to acquire the capabilities of the human brain: reasoning, modelling, abstraction and much more. Some enthusiasts even claim that today’s models have already acquired these capabilities. However, recent studies [8][9] show that today’s models are neither capable of genuine reasoning nor do they build internal models of the world. Moreover, “…according to current neuroscience, human thinking is largely independent of human language” [10], and there is fundamental scientific doubt that achieving human cognition through computation is achievable in practice, let alone by scaling up the training of deep networks [11]. An example of a lack of understanding of the world is the prompt ‘Give me a random number between 0 and 50’. The typical GenAI response to this is ‘27’, and it occurs significantly more often than true randomness would allow. (If you don’t believe it, just try it out!) This is because 27 is the most likely answer given the GenAI training data, and not because the model understands what ‘random’ means. ‘Chain of thought (CoT)’ approaches and ‘reasoning models’ attempt to improve reasoning by breaking down a prompt, the query to the model, into individual (logical) steps and then delegating these individual steps back to the LLM. This allows some well-known reasoning benchmarks to be met, but it also multiplies the necessary computational effort by a factor between 30 and 700 [12].
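The claim above that context length drives compute roughly quadratically can be illustrated with a back-of-the-envelope calculation: self-attention compares every token with every other token, so the number of pairs grows with the square of the context length. The token counts below are purely illustrative:

```python
# Back-of-the-envelope: self-attention compares every token pair,
# so attention compute grows with the square of the context length.
def attention_pairs(context_tokens: int) -> int:
    return context_tokens ** 2

small = attention_pairs(8_000)    # e.g. an 8k-token context
large = attention_pairs(128_000)  # e.g. a 128k-token context

# A 16x longer context means 256x more token pairs to attend over.
print(large / small)  # 256.0
```

This is why advertised jumps in "maximum context window" are far more expensive to actually use than the headline number suggests.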
In addition, multistep reasoning lets individual errors compound into large errors. And yet, CoT models do not seem to possess any real reasoning abilities [13][14] and improve the overall accuracy of LLMs only marginally [15]. The following thought experiment from [16] underscores the lack of real “thinking” capabilities: LLMs have simultaneous access to significantly more knowledge than any human. Together with the postulated ability of LLMs to think logically and draw conclusions, new insights should just fall from the sky. But they don’t. Getting new insights from LLMs would require these insights to be already encoded in the existing training material, and to be decoded and extracted by purely statistical means.

What LLMs are good at

Undoubtedly, LLMs represent a major qualitative advance when it comes to extracting information from texts, generating texts in natural and artificial languages, and machine translation. But even here, the error rate, and above all the type of error (‘hallucinations’), is so high that autonomous, unsupervised use in serious applications must be considered highly negligent.

GenAI as a knowledge source

As we have pointed out above, LLMs cannot differentiate between true and false, regardless of the training material. An LLM does not answer the question “What is XYZ?” but the question “What would an answer to the question ‘What is XYZ?’ look like?”. Nevertheless, many people claim that the answers that ChatGPT and the like provide for the typical what-how-when-who queries are good enough and often better than what a “normal” web search would have given us. Arguably, this is the most prevalent use case for “AI” bots today. The problem is that most of the time we will never learn about the inaccuracies, omissions, distortions and biases that the answer contained, unless we re-check everything, which defeats the whole purpose of speeding up knowledge retrieval.
The less we already know, the better the “AI’s” answer looks to us, but the less equipped we are to spot the problems. A recent study by the BBC and 22 public service media organisations shows that 45% of all “AI” assistants’ answers to questions about news and current affairs contain significant errors [17]. Moreover, LLMs are easy prey for manipulation, either by the organisation providing the service or by third parties. A recent study claims that even multi-billion-parameter models can be “poisoned” by injecting just a few corrupted documents [18]. So, if anything is at stake, all output from LLMs must be carefully validated. Doing that, however, would contradict the whole point of using “AI” to speed up knowledge acquisition.

GenAI in software development

The creation and modification of computer programs is considered a prime domain for the use of LLMs. This is partly because programming languages have less linguistic variance and ambiguity than natural languages. Moreover, there are many methods for automatically checking generated source code, such as compiling, static code analysis and automated testing. This simplifies the validation of generated code and thereby gives an additional feeling of trust. Nevertheless, individual reports on the success of coding assistants such as Copilot, Cursor, etc. vary greatly. They range from ‘completely replacing me as a developer’ to ‘significantly hindering my work’. Some argue that coding agents considerably reduce the time they have to invest in “boilerplate” work, like writing tests, creating data transfer objects or connecting their domain code to external libraries. Others counter by pointing out that delegating these drudgeries to GenAI makes you miss opportunities to get rid of them, e.g. by introducing a new abstraction or automating parts of your pipeline, and to learn about the intricacies and failure modes of the external library.
Unlike old-school code generation or code libraries, prompting a coding agent is not “just another layer of abstraction”. It misses out on several crucial aspects of a useful abstraction:

- Its output is not deterministic. You cannot rely on an agent producing the same code the next time you feed it the same prompt.
- The agent does not hide the implementation details, nor does it allow you to reliably change those details if the previous implementation turns out to be inadequate.
- Code that is output by an LLM, even if it is generated “for free”, has to be read and maintained each time you touch the related logic or feature.
- The agent does not tell you whether the amount of detail you give in your prompt is sufficient for figuring out an adequate implementation. On the contrary, the LLM will always fill the specification holes with statistically derived assumptions.

Sadly, serious studies on the actual benefits of GenAI in software development are rare. The randomised trial by METR [19] provides an initial indication, measuring a decline in development speed for experienced developers. An informal study by ThoughtWorks estimates the potential productivity gain from using GenAI in software development at around 5-15% [20]. If “AI coding” were increasing programmers’ productivity by any big number, we would see a measurable growth of new software in app stores and OSS repositories. But we don’t; the numbers are flat at best [21][22]. But even if we assume a productivity increase in coding through GenAI, there are still two points that further diminish this postulated efficiency gain. Firstly, the results of the generation must still be cross-checked by human developers; however, it is well known that humans are poor checkers and lose both attention and enjoyment in the process. Secondly, software development is only to a small extent about writing and changing code.
The most important part is discovering solutions and learning about the use of these solutions in their context. Peter Naur calls this ‘programming as theory building’ [23]. Even the perfect coding assistant can therefore only take over the coding part of software development. For the essential rest, we still need humans. If we also consider the finding that using AI can relatively quickly lead to a loss of problem-solving skills [24], or that these skills are not acquired at all, then the overall benefit of using GenAI in professional software development is more than questionable. As long as programming, and every technicality that comes with it, is not fully replaced by some kind of AI, we will still need expert developers who can program, maintain and debug code down to the finest level of detail. Where, we wonder, will those senior developers come from when companies replace their junior staff with coding agents?

Actual vs. promised benefits

If you read testimonials about uses of GenAI that people perceive as successful, you will mostly encounter scenarios in which ‘AI’ helps to make tasks that are perceived as boring, unnecessarily time-consuming or actually pointless faster or more pleasant. So it’s mainly about personal convenience and perceived efficiency. Entertainment also plays a major role: the poem for Grandma’s birthday, the funny song for the company anniversary or the humorous image for the presentation are quickly and supposedly inexpensively generated by ‘AI’. However, the promises made by the dominant GenAI companies are quite different: solving the climate crisis, providing the best medical advice for everyone, revolutionising science, ‘democratising’ education and much more.
GPT-5, for example, is touted by Sam Altman, CEO of OpenAI, as follows: ‘With GPT-5, it’s now like talking to an expert — a legitimate PhD-level expert in any area you need […] they can help you with whatever your goals are.’ [25] However, to date, there is still no actual use case that provides a real qualitative benefit for humanity or at least for larger groups. The question ‘What significant problem (for us as a society) does GenAI solve?’ remains unanswered. On the contrary: while machine learning and deep learning methods certainly have useful applications, the most profitable area of application for ‘AI’ at present is the discovery and development of new oil and gas fields [26].

Harmful aspects of GenAI

Regardless of how one assesses the benefits of this technology, we must also consider the downsides, because only then can we ultimately make an informed and fair assessment. In fact, the range of negative effects of hyperscaled generative AI that can already be observed is vast. Added to this are numerous risks that have the potential to cause great social harm. Let’s take a look at what we consider to be the biggest threats.

GenAI is an ecological disaster

Power

The data centres required for training and operating large generative models [27] far exceed today’s dimensions in terms of both number and size. The projected data centre energy demand in the USA is predicted to grow from 4.4% of total electricity in 2023 to 22% in 2028 [28]. In addition, the typical data centre electricity mix is more CO2-intensive than the average mix. Estimates point to a rise of ~11 percent in coal-generated electricity in the US, as well as a tripling of the associated greenhouse gas emissions worldwide by 2030, compared to a scenario without GenAI technology [29]. Just recently, Sam Altman of OpenAI blogged some numbers about the energy and water usage of ChatGPT for “the average query” [30].
On the one hand, an average is rather meaningless when a distribution is heavily asymmetric; the numbers for queries with large contexts or “chain of reasoning” computations would be orders of magnitude higher. Thus, the potential efficiency gains from more economical language models are more than offset by the proliferation of use, e.g. through CoT approaches and ‘agent systems’. On the other hand, big tech’s disclosure of energy consumption (e.g. by Google [31]) is intentionally selective. Ketan Joshi goes into quite some detail about why experts think that the AI industry is hiding the full picture [32]. Since building new power plants, even coal- or gas-fuelled ones, takes a lot of time, data centre companies are even reviving old jet engines to power their new hyperscalers [33]. Be aware that those engines are not only much noisier than other power plants but also pump out nitrous oxide, one of the main chemicals responsible for acid rain [34].

Water

Another problem is the immensely high water consumption of these data centres [35]. After all, cooling requires clean water of drinking quality in order not to contaminate or clog the cooling pipes and pumps. Already today, new data centre locations are competing with human consumption of drinking water. According to Bloomberg News, about two-thirds of the data centres built or developed in 2022 are located in areas that are already under “water stress” [36]. In the US alone, “AI servers […] could generate an annual water footprint ranging from 731 to 1,125 million m3” [37]. It’s not only an American problem, though. In other areas of the world, water-thirsty data centres also compete with the drinking water supply for humans [38].

Electronic waste

Another ecological problem is being noticeably exacerbated by ‘AI’: the amount of electronic waste (e-waste) that we ship mainly to “Third World” countries and which is responsible for soil contamination there.
Efficient training and querying of very large neural networks requires very large quantities of specialised chips (GPUs). These chips often have to be replaced and disposed of within two years. The typical data centre might not last longer than 3 to 5 years before large parts of it have to be rebuilt [39]. In summary, GenAI is at least an accelerator of the ecological catastrophe that threatens the earth. And it is the argument for Google, Amazon and Microsoft to completely abandon their zero-CO2 targets [40] and replace them with investments of several hundred billion dollars in new data centres.

GenAI threatens education and science

People often try to use GenAI in areas where they feel overloaded and overwhelmed: training, studying, nursing, psychotherapeutic care, etc. The fields of application for ‘AI’ are therefore a good indication of socially neglected and underfunded areas. The fact that LLMs are very good at conveying the impression of genuine knowledge and competence makes their use particularly attractive in these areas. A teacher under the simultaneous pressure of lesson preparation, corrections and covering for sick colleagues turns to ChatGPT to quickly create an exercise sheet. A student under pressure to get good grades has their English essay corrected by ‘AI’. The researcher under pressure to publish will ‘save’ research time by reading the AI-generated summary of relevant papers, even if it is completely wrong in terms of content [41]. Tech companies like OpenAI and Microsoft play on this situation by offering their ‘AI’ for free or for little money to students and universities. The goal is obvious: students who get hooked on outsourcing some of their “tedious” tasks to a service will continue to use, and eventually pay for, this service after graduation.
What falls by the wayside are problem-solving skills, engagement with complex sources, and the generation of knowledge through understanding and supplementing existing knowledge. Some even argue that AI is destroying critical education and learning itself [42]: students aren’t just learning less; their brains are learning not to learn. The training cycle of schools and universities is fast. Teachers are already reporting that pupils and students have acquired noticeably less competence in recent years, and have instead become dependent on unreliable ‘tools’ [43]. The real problem with using GenAI to do assignments is not cheating, but that students “are not just undermining their ability to learn, but to someday lead.” [44]

GenAI is destroying the free internet

The fight against bots on the internet is almost as old as the internet itself, and has been quite successful so far. Multi-factor authentication, reCAPTCHA, honeypots and browser fingerprinting are just a few of the tools that help protect against automated abuse. However, GenAI takes this problem to a new level, in two ways. To make ‘the internet’ usable as the main source for training LLMs, AI companies use so-called ‘crawlers’. These essentially behave like DDoS attackers: they send tens of thousands of requests at once, from several hundred IPs, in a very short time. robots.txt files are ignored; instead, the source IP and user agent are obscured [45]. These practices have massive disadvantages for providers of genuine content:

- Costs for additional bandwidth.
- Lost advertising revenue, as search engines now offer LLM-generated summaries instead of links to the sources. This threatens the existence of the remaining independent journalism in particular [46].
- Misuse of their own content for AI-supported competition.
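The excerpt notes that AI crawlers ignore robots.txt. For contrast, a well-behaved crawler consults it before every fetch; a minimal sketch using Python's standard library, with made-up rules and example URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for an example site.
rules = """\
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)  # parse the rules directly; no network access needed

# A polite crawler calls can_fetch() before every request.
print(rp.can_fetch("MyBot", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyBot", "https://example.com/private/data"))  # False
```

The crawlers described above skip exactly this check, which is why site operators cannot opt out of being used as training material.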
If the place where knowledge is generated is separated from the place where it is consumed, and if this makes the act of generation even more opaque than before, the motivation to continue generating knowledge also declines. For projects such as Wikipedia, this means fewer donors and fewer contributors. Open communities often have no option but to shut themselves off. Another aspect is the flooding of the internet with generated content that cannot be automatically distinguished from non-generated content. This content overwhelms the maintainers of open source software and of portals such as Wikipedia [47]. If this content is then also entered by humans, often in the belief that they are doing good, it is no longer possible to take action against the method. In the long run, this means that less and less authentic training material will lead to increasingly poor results from the models. Last but not least, autonomously acting agents make the already dire state of internet security much worse [48]. Think of handing all your personal data and credentials to a robot that distributes and uses that data across the web, wherever and whenever it deems necessary for reaching some goal. Such an agent:

- is controlled by LLMs, which are vulnerable to all kinds of prompt injection attacks [49];
- is controlled by and reporting to companies that do not have your best interests in mind;
- has no awareness or knowledge of the implications of its actions;
- is acting on your behalf and thereby making you accountable.

GenAI is a danger to democracy

The manipulation of public opinion through social media precedes the arrival of LLMs. However, this technology gives the manipulators much more leverage. By flooding the web with fake news, fake videos and fake everything, undemocratic (or just criminal) parties make it harder and harder for serious media and journalism to get the attention of the public.
People no longer have a common factual basis, which is necessary for all social negotiation. If you don’t agree on at least some basic facts, arguing about policies and measures to take is pointless. Without such negotiation democracy dies; in many parts of the world it already is dying.

GenAI versus human creativity

Art and creativity are also threatened by generative AI. The impact on artists’ incomes of logos, images and illustrations now being easily and quickly created by AI prompts is obvious. A similar effect can be observed in other areas. Studies show that poems written by LLMs are indistinguishable from those written by humans and that generative AI products are often rated more highly [50]. This can be explained by a trend towards the middle and the average, which can also be observed in the music and film scenes: due to its basic function, GenAI cannot create anything fundamentally new, but replicates familiar patterns, which is precisely why it is so well received by the public. Ironically, ‘AI’ draws its ‘creativity’ from the content of those it seeks to replace. Much of this content was used as training material against the will of the rights holders. Whether this constitutes copyright infringement has not yet been decided; morally, the situation seems clear. The creative community is the first to be seriously threatened in its livelihood by GenAI [51]. It is no coincidence that a big part of GenAI efforts is targeted at “democratising art”. This framing is completely upside down. Art has been one of the most democratic activities for a very long time. Everybody can do it; but not everybody wants to put in the effort, the practice time and the soul. Real art is not about the product but about the process, which requires real humans. Generating art without the friction is about getting rid of the humans in the loop, while still making money.
Digital colonialism

The huge amount of data required by hyperscaled AI approaches makes it impossible to completely curate the learning content. And yet, one would like to avoid the reproduction of racist, inhumane and criminal content. Attempts are being made to get the problem under control by subsequently adapting the models to human preferences and local laws through additional ‘reinforcement learning from human feedback’ (RLHF) [52]. The cheap labour for this very costly process is found in the Global South. There, people in poorly paid jobs are exposed to hours of hate speech, child abuse, domestic violence and other horrific scenarios in order to filter them out of the training material of large AI companies [53]. Many emerge from these activities traumatised. However, it is not only people who are exploited in the less developed regions of the world, but also nature: the poisoning of the soil with chemicals during the extraction of raw materials for digital chips, as well as the contamination caused by our electronic waste and its improper disposal, are collateral damage that we willingly accept and whose long-term consequences are currently extremely difficult to assess. Here, too, the “developed” world profits, whereas the negative consequences are outsourced to the former colonies and other poor regions of the world.

Political aspects

As software developers, we would like to ‘leave politics out of it’ and instead focus entirely on the cool tech. However, this is impossible when the advocates of this technology pursue strong political and ideological goals. In the case of GenAI, we can clearly see that the US corporations behind it (OpenAI, Google, Meta, Microsoft, etc.) have no problem with the current authoritarian – some say fascist – US government [54].
In concrete terms, this means, among other things, that the models are explicitly manipulated to be less liberal, or simply not to generate any output that could upset the CEO or the president [55]. Even more serious is the fact that many of the leading minds behind these corporations and their financiers adhere to beliefs that can be broadly described as digital fascism. These include Peter Thiel, Marc Andreessen, Alex Karp, JD Vance, Elon Musk and many others on “The Authoritarian Stack” [56]. Their ideologies, disguised as rational theories, are called longtermism and effective altruism. What they have in common is that they consider democracy and the state to be obsolete models, compassion to be ‘woke’, and the current problems of humanity to be insignificant, as our future supposedly lies in the colonisation of space and the merging of humans with artificial superintelligence [57]. Do we want to give people who adhere to these ideologies (even) more power, money and influence by using and paying for their products? Do we want to feed their computer systems with our data? Do we really want to expose ourselves and our children to the answers from chatbots which they have manipulated? Not quite as abstruse, but similarly misanthropic, is the imminent displacement of many jobs by AI, as postulated by the same corporations in order to put pressure on employees. Demanding a large salary? Insisting on your legal rights? Complaining about too much workload? Doubting the company’s goals? Then we’ll just replace you with cheap and uncomplaining AI! Whichever way you look at it, AI and GenAI are already being used politically. If we go along without resistance, we are endorsing this approach and supporting it with our time, our attention and our money.

Conclusion

Ideally, we would like to quantify our assessment by adding up the advantages, adding up the disadvantages and finally checking whether the balance is positive or negative.
Unfortunately, in our specific case, neither the benefits nor the harm are easily quantifiable; we must therefore consult our social and personal values. Discussions about GenAI usually revolve purely around its benefits. Often, the capabilities of all 'AI' technologies (e.g. protein folding with AlphaFold 2) are lumped together, even though they have little in common with hyperscaling GenAI. However, if we consider the consequences and do not ignore the problems this technology entails – i.e. if we consider both sides in terms of ethics – the assessment changes. Convenience, speed and entertainment are then weighed against numerous damages and risks to the environment, the state and humanity. In this sense, the ethical use and further expansion of GenAI in its current form is not possible.

Can there be ethical GenAI?

If the use of GenAI is not ethical today, what would have to change, which negative effects of GenAI would have to disappear or at least be greatly reduced, in order to tip the balance between benefits and harms in the other direction?
The models would have to be trained exclusively with publicly known content whose original creators consent to its use in training AI models.
The environmental damage would have to be reduced to such an extent that it does not further fuel the climate crisis.
Society would have to get full access to the training and operation of the models in order to rule out manipulation by third parties and restrict their use to beneficial purposes. This would require democratic processes, good regulation and oversight through judges and courts.
The misuse and harming of others, e.g. through copyright theft or digital colonialism, would have to be prevented.
Is such a change conceivable? Perhaps. Is it likely, given the interest groups and political aspects involved? Probably not.
All these factors are achievable, I think, or will be soon: smaller models, better-sourced data sets, niche models, etc. But not with the current actors, as mentioned at the end.
-
Johannes Link and Jakob Schnell made this overview of ethical considerations around generative AI.
-
-
-
Irish gov will use their presidency (2nd half of 2026 I think; the coming half year is Cyprus, no?) to look into ID-verified social media in the EU. On Mastodon this was posted with the question how it relates to the Fediverse. Presumably this will be based on the Digital Services Act, DSA (and GDPR). My current assessment is that the DSA hardly applies to the fediverse, especially not if there's plenty of federation vs centralisation on a handful of instances. The DSA legislates platforms, not social features, and those features are possible without platforms.
-
-
cdn.ceps.eu
-
CEPS report on Fediverse wrt DSA, from early 2025. I mistook this one for the IVIR one from 2024 IVIR DSA en Fediverse 2024 in Zotero, which I have remarks about. I have seen this doc, but not sure if I actually took it in.
Was pointed out to me wrt ID requirements for social media, and whether they would apply to the fediverse (my first approximation is no, certainly if you federate much more aggressively; yes if there are very large instances).
Relevant wrt DSA too, size and revenue type.
-
-
pierce.dev
-
https://web.archive.org/web/20251229121559/https://pierce.dev/notes/go-ahead-self-host-postgres
Post explaining that self-hosting a db server is not all that difficult or much hassle.
By Pierce Freeman, USA SF based ML researcher & systems engineer
-
I'm not advocating that everyone should self-host everything. But the pendulum has swung too far toward managed services. There's a large sweet spot where self-hosting makes perfect sense, and more teams should seriously consider it. Start small. If you're paying more than $200/month for RDS, spin up a test server and migrate a non-critical database. You might be surprised by how straightforward it is. The future of infrastructure is almost certainly more hybrid than it's been recently: managed services where they add genuine value, self-hosted where they're just expensive abstractions. Postgres often falls into the latter category.
Footnotes:
1. They're either just hosting a vanilla postgres instance that's tied to the deployed hardware config, or doing something opaque with edge deploys and sharding. In the latter case they near guarantee your DB will stay highly available but costs can quickly spiral out of control.
2. Maybe up to billions at this point.
3. Even on otherwise absolutely snail speed hardware.
4. This was Jeff Bezos's favorite phrase during the early AWS days, and it stuck.
5. Similar options include OVH, Hetzner dedicated instances, or even bare metal from providers like Equinix.
6. AWS RDS & S3 has had several major outages over the years. The most memorable was the 2017 US-East-1 outage that took down half the internet.
Cloud hosting can quickly become an expensive abstraction layer. I also think there's an entire generation of coders/engineers who treat silo'd cloud hosting as a given, without considering other options and their benefits. There's a large window for self-hosting, into which Postgres almost always falls.
-
Write-Ahead Logging is critical for durability and performance:
WAL config also needs attention in postgres selfhosting
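The post flags WAL config as needing attention without giving values here; as a hedged sketch, a self-hosted postgresql.conf often tunes parameters along these lines (the values are illustrative assumptions, not the author's settings):

```ini
# postgresql.conf -- illustrative WAL settings for a self-hosted instance
wal_level = replica              # enough for physical replication and PITR backups
max_wal_size = 4GB               # let checkpoints spread out under write bursts
checkpoint_timeout = 15min       # fewer, larger checkpoints reduce write amplification
checkpoint_completion_target = 0.9
wal_compression = on             # smaller WAL volume at a slight CPU cost
```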
-
Storage Tuning: NVMe SSDs make having content on disk less harmful than conventional spinning hard drives, so you'll want to pay attention to the disk type that you're hosted on:
storage tuning is a selfhosting postgres concern too
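For SSD-backed hosts, the usual knobs are the planner's cost estimates and IO concurrency; an illustrative postgresql.conf fragment (values are common SSD defaults, an assumption on my part):

```ini
# postgresql.conf -- illustrative storage settings for NVMe SSDs
random_page_cost = 1.1           # near seq_page_cost: random reads are cheap on NVMe
effective_io_concurrency = 200   # SSDs handle many concurrent read requests
```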
-
Making fresh connections in postgres has pretty expensive overhead, so you almost always want to put a load balancer in front of it. I'm using pgbouncer on all my projects by default - even when load might not call for it. Python asyncio applications just work better with a centralized connection pooler.
Postgres concurrent connections are something you want to stay on top of; connection pooling / load balancing is needed.
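A minimal sketch of what "pgbouncer in front of Postgres" can look like (database name, paths and pool sizes here are illustrative assumptions, not from the post):

```ini
; pgbouncer.ini -- minimal transaction-pooling sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction   ; reuse server connections between transactions
max_client_conn = 1000    ; clients may far exceed actual postgres connections
default_pool_size = 20    ; server connections kept per database/user pair
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```

The application then connects to port 6432 instead of 5432; pgbouncer multiplexes those clients onto a small, warm pool of real Postgres connections.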
-
Memory Configuration: This is where most people mess up. Pulling the standard postgres docker image won't cut it. You have to configure memory bounds with static limits that correspond to hardware. I've automated some of these configurations. But whether you do it manually or use some auto-config, tweaking these params is a must.
Self-hosting Postgres requires setting static memory limits matched to the hardware.
-
When self-hosting doesn't make sense I'd argue self-hosting is the right choice for basically everyone, with the few exceptions at both ends of the extreme: If you're just starting out in software & want to get something working quickly with vibe coding, it's easier to treat Postgres as just another remote API that you can call from your single deployed app If you're a really big company and are reaching the scale where you need trained database engineers to just work on your stack, you might get economies of scale by just outsourcing that work to a cloud company that has guaranteed talent in that area. The second full freight salaries come into play, outsourcing looks a bit cheaper. Regulated workloads (PCI-DSS, FedRAMP, HIPAA, etc.) sometimes require a managed platform with signed BAAs or explicit compliance attestations.
Sees use for silo'd Postgres hosting at the extremes of the spectrum: when you start without knowledge and are vibecoding, so you can treat the database as just another API; when you are a megacorp (outsourcing looks cheaper quickly if you otherwise have to pay multiple FTE salaries); and/or when you have to prove regulatory compliance.
-
The real operational complexity
Good overview of tasks wrt self-hosting (or rather a dedicated server in a non-silo datacenter): weekly, monthly, quarterly. Amounts to half a day per quarter at most. You do need to time risky updates etc. better if you run stuff others depend upon, so that you can do incident response.
-
I helped prove this to myself when I migrated off RDS. I took a pg_dump of my RDS instance, restored it to a self-hosted server with identical specs, and ran my application's test suite. Performance was identical. In some cases, it was actually better because I could tune parameters that RDS locks down.
This reads like what I did wrt Gmail in 2014 and Amazon ebooks in 2025. [[Leaving a walled garden or silo 20160820203833]]: list the bits the silo does and originally improved for you, then rebuild outside the silo based on the same components. Not difficult, but needs a bit of focus.
-
But the actual database engine? It's the same Postgres running the same SQL queries with the same performance characteristics.
Underneath the wrappers it is all the same.
-
The value proposition is operational: they handle the monitoring, alerting, backup verification, and incident response. It's also a production ready configuration at minute zero of your first deployment.
The value proposition is twofold: 1. operational handling (monitoring, backups, all that); 2. a production-ready config out of the box (which feels like fast progress, but also locks you in, ofc).
-
For the most part managed database services aren't running some magical proprietary technology. They're just running the same open-source Postgres you can download with some operational tooling wrapped around it. Take AWS RDS. Under the hood, it's: Standard Postgres compiled with some AWS-specific monitoring hooks A custom backup system using EBS snapshots Automated configuration management via Chef/Puppet/Ansible Load balancers and connection pooling (PgBouncer) Monitoring integration with CloudWatch Automated failover scripting
AWS RDS is not much else than open-source Postgres with operational tooling wrapped around it, tooling that is itself not complex.
-
Fast forward to 2025 and I hope the pendulum might be swinging back. RDS pricing has grown considerably more aggressive. A db.r6g.xlarge instance (4 vCPUs, 32GB RAM) now costs $328/month before you add storage, backups, or multi-AZ deployment. For that price, you could rent a dedicated server with 32 cores and 256GB of RAM.
Amazon database servers have become much more expensive since 2015. You could run a dedicated full server for similar money.
-
The real shift happened around 2015 when cloud adoption accelerated. Companies started to view any infrastructure management as "undifferentiated heavy lifting"4. Running your own database became associated with legacy thinking. A new orthodoxy emerged of focusing on your application logic and letting AWS handle the infrastructure.
This post places the shift to hyperscaler dependency in 2015, when (I presume: software) companies began to view any involvement in digital infrastructure management as hassle.
-
-
www.linkedin.com
-
* "Digital platforms are used for hybrid campaigns."
* "EU can't compete with US tech ON THEIR TERMS."
* "Post-reality US is what happens when tech is unregulated."
* "Ireland is a Trojan Horse for Big Tech."
* "The Digital Omnibus is sabotage."
Quotes from the [[Defend Democracy o]] event co-hosted with the DK EU presidency. Each conveys an aspect of where work is needed. On each I see one could define [[Handelen 20040327155224]] as [[SC landscape van EU Dataspace]] interventions and broader.
the last one pertains to the AI / GDPR omnibus, not the data one, I think.
-
-
-
Web2.0 seems to have been a mistake and is being rolled back entirely. The whole concept of having your own web property that other people can write to (leave comments and other things) has gone away. That means also there’s no need for a dynamic website with database anymore.
The second follows from the first, but the first is not necessarily true. The commenting thing is bigger than ever, but co-opted in the silos to create ad revenue. I have more interaction on my blog again than I had in years, due to the non-silo'd social platforms. I wonder: interaction is only ever needed on newish stuff, and I like the speed of static content. So what if I had a mix: the latest x posts dynamic, the rest served statically on the same URLs as before? Cf. [[Providing Blog Posts in Plain Text – Interdependent Thoughts 20251229121518]]
-
Reading back all those old posts and weeknotes I have here is super nice and reminds me:Keeping a record of things is really valuable. Just write and trust that it will come in handy at some point.I used to do so many things in a given week. Compared to what I’m doing now, my life was insanely eventful.I was consistently (too) early on a lot of things. For instance: I read myself complaining about restaurants and food in Amsterdam, something which is mostly solved now.
Like myself Alper is his own most frequent reader of his blog. Mentions realising how much he did in a week earlier in his life. Same for me, things that now might be a big thing in a week, were Tuesday afternoon 15yrs ago. It's not just age I suspect, but also an overall attenuation that Covid brought?
-
PHP/MySQL is losing adoption
states the AMP stack is losing adoption. Any numbers to find?
-
https://web.archive.org/web/20251229105350/https://alper.nl/blog/18730/
[[Alper Çuğun p]] moving away from WordPress to Hugo / markdown.
-
-
netzpolitik.org
-
Max Schrems interviewed (2025/12) by netzpolitik.org about the GDPR and its reform, and the position of the GDPR in Germany. Mentions the digital omnibus and the upcoming digital fitness test of the GDPR by the EC.
-
-
doc.anytype.io
-
Our backup nodes are located in Switzerland, and we use AWS (Amazon Web Services).
Anytype uses Swiss data centers for backup nodes, and uses AWS to get stuff there. Everything is encrypted so in that sense not problematic, but still it means Amazon holds the off-switch.
-
-
doc.anytype.io
-
Media files are not directly downloaded in overall syncing to save bandwidth. Instead, when that file is requested, it is streamed to your device from the backup node or your devices on the network. For example, if you have a 4K Video, it will be streamed from the backup node or P2P devices to your device. So when you open an object with an image, it downloads. When you press play on video & audio, it begins to download. After that, this file will be stored in the application cache.
Media files may not be locally available, and require an internet connection to be streamed/downloaded on demand. They are generally excluded from syncing to save bandwidth. Doesn't this also mean that media files aren't backed up, in the sense that people will treat sync as back-ups?
-
-
doc.anytype.io
-
Pricing for Self-Hosters: Self-hosters can manage the limits of Viewers/Editors they invite to their Spaces themselves. Those who would like to purchase a name in the Anytype naming system or access priority support, can purchase a membership at the same price as other beta testers.
The free tier is tied to Anytype hosting the syncing stuff. Self-hosters do not have limitations. You do not have a name in the Anytype naming system (needed for IPFS, they have a 'private' IPFS network set-up). This is your lock-in right there. Why would you opt-in to that?
-
-
doc.anytype.io
-
Frontmatter (the --- metadata at the top) → becomes object properties.
Obsidian frontmatter will be changed into properties. This also means that any other data markers, not as frontmatter, will be ignored and treated as regular text.
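A hypothetical note illustrating the distinction: only the YAML between the --- fences becomes object properties; inline markers elsewhere stay plain text:

```markdown
---
title: Reading notes
tags: [pkm, books]
status: draft
---
Body text here. The YAML above becomes object properties.
An inline Dataview-style marker like status:: draft stays plain text.
```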
-
Step 1: Export from ObsidianClean up your notesConvert embeds like ![[Note Title]] into simple links [[Note Title]]Remove or convert plugin-specific syntax, such as Dataview queries, Templater fields, etc.
One can import an Obsidian folder into Anytype, but you will lose some information: embeds don't convert, nor does the scripting language. I think this is externalising friction. I get that it doesn't convert as functionality, but the converter should be able to recognise these elements and ignore or adapt them. Most people won't even know how to find all their embeds, for example.
-
-
-
Pick the plan that works for you now, add more teammates and additional storage anytime you need. Switch or cancel anytime.
Anytype has paid tiers (4-20/month depending on volume of remote storage and number of collaborative channels). The free tier has 100MB remote storage and 10 collab channels. You can use it fully locally though, with local storage.
-
-
anytype.io
-
Furthermore, the choice of incorporating as an Association rather than an LLC or other legal structure distinguishes ANY as a company managed in accordance with commercial practices, but which exists for the greater good. We consider ourselves a remote team, but we maintain one hub in Berlin (where our team is employed by our local GmbH, a subsidiary of the Swiss Association).
ANY is a Swiss association, as that provides more IP protection and signals a community affinity. All work remote, but they have an office in Berlin. The staff in Berlin are employed by a German GmbH, a subsidiary. So based in Berlin, not Switzerland.
-
Our applications, on the other hand, are distributed under a source available license.
Anytype is 'source available'
-
-
-
supported by
VC-funded, with all that that introduces.
-
-
anytype.io
-
Single objects, infinite possibilities Visualise connections using graph & database views
Provides several views, as everything is an object, the graphs can make any combi/selection. Graph, table, kanban, gallery. That is however just two basic views (graph and table), unlike e.g. Tinderbox (treemap, network graph, outline, timeline, ecology, landscape)
-
Templates
Everything in Anytype is an object; cf. Mediamatic's 'things' in Thing CMS.
You can template objects, something you can also do in a markdown / Obsidian setting. - [ ] check whether I want/can use more object-oriented templates than I do now (book, project). #pkm #30mins
-
Databases
Databases seem to be 'areas' in the gtd sense, and/or projects within them?
-
Nobody is mediating the connection between your devices
Syncing and cross device availability is through IPFS (mentioned in documentation elsewhere).
-
Nobody can see what’s in your vault, except for you
Locally and on-device encrypted. Does this mean you can't access the same material through something other than Anytype? Iow it's not a viewer but a gatekeeper? Runs afoul of the [[3 Distributed Eigenschappen 20180703150724]] req.
-
Anytype is an offline first note taking / database tool for both personal and group use.
-
discuss, organize, remember
Anytype has a collaborative mode.
-
-
en.wikipedia.org
-
LCP is a DRM standard (.lcpl files), a JSON/XML extension and ISO standard. https://readium.org/lcp-specs/
Came across it in a Norwegian e-book store. This page suggests it is in use elsewhere too. Seems to be a format used for lending books too.
Elsewhere I find that Readium 1.0 can be de-DRM'd easily, but 2.5 is more secure against that.
-
-
www.epubor.com
-
A: The Calibre DeDRM plugin only removes the Readium 1.0 DRM. As for Readium 2.5, it employs tougher DRM and cannot be handled with the Calibre DeDRM plugin.
The Calibre DeDRM plugin can only work with Readium 1 lcpl; the current version 2.5 needs a different path.
-
The DeDRM plugin i
Calibre DeDRM plugin: an older version is needed to remove LCP DRM, and you still need the passphrase. For reading in Calibre there is no need to de-DRM, but it is needed if you want to move a book to a device that does not support LCP.
-
you can download the .lcpl within the Calibre directly.
Calibre works directly with .lcpl LCP drm
-
All you need to open (or decrypt) LCP eBooks is the account passphrase given to you by the eBook provider - the very same passphrase you'd have to enter into your eBook reader device (once) to read LCP-encrypted books. Sometimes, the passphrase can be the account password.
.lcpl LCP drmd books have a passphrase. Typically this will be the one for your account from the platform where you bought the book
-
-
www.conferencesthatwork.com
-
https://web.archive.org/web/20251228132740/https://www.conferencesthatwork.com/index.php/event-design/2017/01/stories-have-dark-side/ A good story is not per definition a true or helpful story. Dark stories can evoke dark passions in audiences. A call to event organisers to not just platform stories because they're compelling, but to determine whether you want to provide a megaphone. Duh. Does lead me to think, e.g. as during epsiplatform and IndieWeb meetups: can you put some of that down upfront (rather than 'feel' it) as a charter, internally or externally (if it doesn't turn the charter into the story)? Platforming or not is choosing to pay attention, and [[Aandacht is een morele keuze 20201217074345]]
-
-
www.wrecka.ge
-
https://web.archive.org/web/20251228125455/https://www.wrecka.ge/landslide-a-ghost-story/
Essay by Erin Kissane on 'collective derangement' as a societal problem: how knowledge is more a collective thing than a personal one (and we're eroding the collective part), and how an algorithmic info-diet feels like being informed but isn't.
-
-
-
https://web.archive.org/web/20251228115234/https://www.downes.ca/post/78645
[[Stephen Downes p]] on an article by Erin Kissane on collective and individual knowledge. Sees a parallel with [[Ludwig Wittgenstein h]]'s 'riverbed propositions' analogy: "It might be imagined that some propositions, of the form of empirical propositions, were hardened and functioned as channels for such empirical propositions as were not hardened but fluid; and that this relation altered with time, in that fluid propositions hardened, and hard ones became fluid. // The mythology [i.e. the undoubted propositions] may change back into a state of flux, the river-bed of thoughts may shift. But I distinguish between the movement of the waters on the river-bed and the shift of the bed itself; though there is not a sharp division of the one from the other." (On Certainty §96 and 97), which distinguishes between the flow of the river within fixed boundaries and the shifting of the riverbed itself over time. Cf. [[Waarheid en kennis kent historische periodes 20250914161603]] wrt epistemic periods (Foucault)
-
-
text.tchncs.de
-
three insights
Three insights/results from doing it for a month: 1) small patterns emerge, after about two weeks. Repeated observations bring them to the fore, and they are no longer done away with as incidents. The weekly review was key in this. 2) more aware of positive moments (a pattern in itself imo), again the weekly review was key. Cf. #microsucces 3) reflection changed his practice. The small feedback loop was doing something slightly different the next session, based on the action formulated the previous time. The review served to see the impact of micro-interventions. The reflection gave him agency as a professional.
-
Additionally, every Friday I took half an hour for a weekly review. I leafed through the entries, marked patterns and condensed the most important episodes once more. This review especially helped me not only to react to individual lessons, but also to recognise long-term developments, e.g. recurring problems with group work or progress in the discussion culture.
Used a weekly review of 30mins to see patterns across individual entries, and recurring issues in teaching (or rather recurring observations, not the same thing)
-
The time required was manageable. As a rule I was done in about 10 minutes. The decisive factor was routine: the journal had the same place in my workflow as tidying up the classroom.
Cf. [[Gewoonte maak het makkelijk 20201008140324]] regarding template and time spent. He anchored journaling in the same frame as tidying the classroom afterwards.
-
I kept my journal in a simple A5 notebook. Each double page belonged to one lesson. Left: the events in bullet points. Right: episode and analysis. The routine looked like this: Directly after the lesson: five minutes for the events, noting down the most important observations. Selecting an episode: describing a small scene in such a way that I will still picture it clearly in three months. Analysis: reflection on causes and consequences, combined with a concrete next step.
Did it by hand on paper: an A5 notebook, two pages for each teaching session. Left for events, right for a single episodic event and reflection on it, ending with a tangible next step.
-
Simple structure: Events – Episode – Analysis. When starting my challenge I deliberately chose a lean form. In Richards & Farrell I came across a model that immediately made sense to me: Events – Episode – Analysis. Events: short, bullet-point notes on the most important happenings of the lesson. Episode: a small scene that I write out narratively: perhaps a good discussion, perhaps a technical problem. Analysis: my thoughts on it: Why was this episode important? What led to it? And what do I resolve to do in the next unit?
Example of a simple format that guides you through the reflection. Cf. prompting questions in sensemaking, or for blog posts.
-
a working instrument. It creates a space to pause after teaching, to note down central events, capture small episodes and think about what went well – and what didn't.
A learning journal is presented here as a work tool, a knowledge tool, like an [[AAR after action reviews 20030913131201]].
-
Donald Schön coined the term 'reflection-on-action': we learn by thinking, after the action, about why we did something and which alternatives would have been possible [2].
Reflection-on-action, also on other alternatives that would have been possible. Cf. [[Action Research is vraag-reflectief leven 20031215142900]]
The ref is to D. A. Schön, The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books, 1983. - [ ] find book [[The Reflective Practitioner by Donald A. Schön]] 1983. #pkm
-
Educational research and practice have recommended it for years as a tool to reflect on and further develop one's own teaching. Jack Richards and Thomas Farrell define it simply as "a teacher's written record of classroom events and of the teacher's own thoughts and reflections about teaching" [1].
Learning journals are seen as useful both in research/theory and in practice. A basic definition by Jack Richards and Thomas Farrell: a teacher's record of classroom events, and connected thoughts and reflection on teaching. The ref is to J. C. Richards and T. S. C. Farrell, Professional Development for Language Teachers: Strategies for Teacher Learning. Cambridge: Cambridge University Press, 2005, ch. 5.
Would this be useful for mentoring, or even change mgmt consultancy?
-
For 30 days I wanted to keep a handwritten teaching journal after every lesson. No big project: just a notebook and the firm intention to write for ten to fifteen minutes after each lesson. At first I was sceptical whether this additional effort would pay off. Today I know: it was one of the most productive routines I have adopted as a lecturer.
As a teacher he kept a journal in a note book to write about a lesson he taught immediately afterwards.
-
Reflection by Michael Gisiger on using a learning journal for 30 days.
-
-
groq.com
-
the Language Processing Unit (LPU), a new category of processor. Groq created and built the LPU from the ground up to meet the unique needs of AI. LPUs run Large Language Models (LLMs) and other leading models at substantially faster speeds and, on an architectural level, up to 10x more efficiently from an energy perspective compared to GPUs.
Groq describes its core tech as a new category of processor that runs LLMs at faster speeds and, 'on an architectural level', saves an order of magnitude in energy compared to GPUs. That deliberate phrasing suggests there's a trade-off somewhere else.
Also contrast with FPGAs and their use for on-device ai applications like in [[Vydar wil het Europese bolwerk worden voor navigatie zonder GPS]]
-
-
-
[[Martijn Aslander p]] on his lifelens system and some more practical insights.
-
-
-
Nvidia buys Groq (language processing units, faster than the GPUs that are Nvidia's thing). Preventing the bubble from popping by blowing into the bubble? Is the acquisition of Groq partly an admission that GPUs are not a solid footing anymore?
-
-
rote-hilfe.de
-
Long-existing German NGOs get their bank accounts closed by German banks, based on the Trump administration's irrational sanctions wrt 'antifa', because there's an 'automatic' propagation of such things into the EU. However, that propagation is based on a shared notion of what to sanction, which no longer applies, meaning the outcome is unexplainable in a European context. If one end of a negotiated relationship goes off the deep end, you need to realign your processes, which didn't happen here. You can't treat the US as a rational actor currently.
-
-
timesofindia.indiatimes.com
-
The company's stock has declined approximately 34% from its December 2024 peak,
ouch, a third gone in 1 year.
-
the raw intelligence that LLMs provide
meaning?
-
Benioff had recently told Business Insider that he's drafting the company's annual strategic document with data foundations—not AI models—as the top priority, explicitly citing concerns about "hallucinations" without proper data context.
The annual strategic document now puts data foundations in focus, not AI models. Well, duh. How did they even get to the notion that you can AI-all-the-things? It implies an uncritical belief in the promises of vendors, or magical thinking. How do you get to be CEO if you fall for that? Vibe-leading, iow: the wizard behind the curtain.
-
Phil Mui described as AI "drift" in an October blog post. When users ask irrelevant questions, AI agents lose focus on their primary objectives. For instance, a chatbot designed to guide form completion may become distracted when customers ask unrelated questions.
Ha, you can distract chatbots, as we've seen from the start. This is the classic train-ticket sales automation hang-up in a new guise: 'To which destination would you like a ticket?' 'It's not for me but for my mom', and then 'unknown railway station: for my mom'. And they didn't even expect that to happen? It's an attack surface!
-
Home security company Vivint, which uses Agentforce to handle customer support for 2.5 million customers, experienced these reliability problems firsthand. Despite providing clear instructions to send satisfaction surveys after each customer interaction, The Information reported that Agentforce sometimes failed to send surveys for unexplained reasons. Vivint worked with Salesforce to implement "deterministic triggers" to ensure consistent survey delivery.
wtf? Why ever use AI to send out a survey, something you probably already had fully automated beforehand. 'deterministic triggers' is a euphemism for regular scripted automation like 'clicking done on a ticket triggers an e-mail for feedback', which we've had for decades.
-
Chief Technology Officer of Agentforce, pointed out that when given more than eight instructions, the models begin omitting directives—a serious flaw for precision-dependent business tasks.
Whut? AI-so-human! Cf. the 8-bit shift register metaphor. [[Korte termijngeheugen 7 dingen 30 secs 20250630104247]] Is there a chunking-style work-around? Where does this originate: token limit, bite sizes?
-
The company is now emphasizing that Agentforce can help "eliminate the inherent randomness of large models," marking a significant departure from the AI-first messaging that dominated the industry just months ago.
Meaning? Probabilistic isn't random, and isn't perfect. Dial down the temperature on models and what do you get?
-
admission comes after Salesforce reportedly reduced its support staff from 9,000 to 5,000 employees
Salesforce upon roll-out of ai-agents dumped half their staff at support. ouch.
-
All of us were more confident about large language models a year ago," Parulekar stated, revealing the company's strategic shift away from generative AI toward more predictable "deterministic" automation in its flagship product, Agentforce.
Salesforce moving back from fully embracing llms, towards regular automation. I think this is symptomatic in diy enthusiasm too: there is likely an existing 'regular' automation that helps more.
-
How does this not impact brand reputation and revenue of Salesforce?
Tags
Annotators
URL
-
-
docs.rocket.chat
-
Nextcloud integration into rocket.chat. Looked at this before, and decided not to use it for ourselves. Don't remember why though, something with how it assumed you'd interact with Nextcloud I think.
-
-
davidorban.com
-
would take seriously the fact that intelligence is now being scaled and distributed through organizations long before it is unified or fully understood
There's no other way; understanding comes from using it and having stuff go wrong. The scandals around algorithms are important in this. Scale and distribution are different beasts. Distribution does not need scale (though a network effect helps) in order to work. The need for scale in digital is an outcome of the financing structure and chosen business model, and is essentially the power grab. #openvraag how do you put more focus on distribution as a counterforce against actors' hunger for scale?
-
examine power as an emergent consequence of deployment and incentives, not intent.
Intent definitely is there too, though. Much of this is entrenchment, and much of it is a power grab (esp. US tech at the moment), to get from capital/tech concentration to co-opting governance structures.
AI is a tech that by design does not lower a participation threshold; it positions itself as bigger-than-us, like nuclear reactors: not just anyone can run with it. That only after three years we see a budding DIY / individual-agency angle shows as much. It was designed only to create and entrench power (or transform it into another form); other digital techs originate as a challenge to power, this one is clearly the opposite. The companies involved fight against things that push towards smaller-than-us AI tech, like local offline-first, e.g. the DMA/DSA.
-
Such a work would treat alignment as institutional design rather than a property of models alone.
yes. never look at something 'alone'
-
Empirical grounding. In 2015, scaling laws, emergent capabilities, and deployment‑driven feedback loops were speculative. Today, they are measurable. That shift changes the nature of responsibility, governance, and urgency in ways that were difficult to justify rigorously at the time.
States that, in contrast to a decade ago, we can now measure scaling, emergent capabilities, and feedback loops. Interesting. - [ ] #30mins #ai-ethics work this out in more detail. What would you measure, what could that look like? How does that compare with various assessment mechanisms?
-
Political economy and power. The book largely brackets capital concentration, platform dynamics, and geopolitical competition. Today, these are central to any serious discussion of AI, not because the technology changed direction, but because it scaled fast enough to collide with real institutions and entrenched interests.
Geopolitics, whether in the shape of capital, tech or politics, has become key; he overlooked this in 2015/18.
-
Alignment as an operational problem. The book assumes that sufficiently advanced intelligences would recognize the value of cooperation, pluralism, and shared goals. A decade of observing misaligned incentives in human institutions amplified by algorithmic systems makes it clear that this assumption requires far more rigorous treatment. Alignment is not a philosophical preference. It is an engineering, economic, and institutional problem.
The book did not address alignment; it assumed it would sort itself out (in contrast to [[AI begincondities en evolutie 20190715140742]] on how starting conditions might influence that). David recognises how algorithms are also used to widen differences.
-
somethingnew.davidorban.com
CC BY version of book available at https://somethingnew.davidorban.com
-
what it feels like to live through an intelligence transition that does not arrive as a single rupture, but as a rolling transformation, unevenly distributed across institutions, regions, and social strata.
More detailed formulation of Gibson's 'the future is already here, just unevenly distributed'. Add sectors/domains. There's more here to tease out wrt my change-management work. - [ ] #30mins #ai-ethics fill in with more concrete examples of how this quote takes shape
-
As a result, the debate shifted. The central question is no longer “Can we build this?” but “What does this do to power, incentives, legitimacy, and trust?”
David posits questions that are all on the application side: what is the impact of using AI. There are also questions on the design side: how do we shape the tools wrt those concepts. Cf. [[AI begincondities en evolutie 20190715140742]], e.g. different outcomes if you start from military AI params or civil aviation (much stricter), in ref to [[Novacene by James Lovelock]]
-
The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale. Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization. Technology does not merely support humanity. It shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks, but it would reconfigure social coordination, knowledge production, and agency itself. That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.
The premise of the book is that scale matters wrt AI (SU vibes). AI is to be understood at the societal level, not from an individual perspective, as tech and society mutually shape each other (basic WWTS premise). Given certain thresholds it would impact coordination, knowledge and agency.
-
[[David Orban p]] wrote a 132p book on AI in 2015, [[Something New by David Orban]]. Now he is releasing it under a CC BY license, after acquiring the rights back, he says (from whom? It was independently published; I think it would have been SU).
-
-
somethingnew.davidorban.com
-
[[Something New by David Orban]] in html and in chapters
-
-
somethingnew.davidorban.com
-
[[Something New by David Orban]] PDF
-
-
www.huffpost.com
-
This article is an example from USA State Dept Legal Office about what [[Matt Gurney We will never fucking trust you again]] mentions wrt loyalists coming and likely staying for a long time, eroding the institution and its credibility
-
-
arxiv.org
-
"Sell It Before You Make It: Revolutionizing E-Commerce with Personalized AI-Generated Items"
Paper on generated items for sale at Alibaba, not produced until it actually sells
-
-
www.readtheline.ca
-
America’s former role is gone. And I think that Americans themselves are having the hardest time of all coming to terms with what that might actually mean in the long run.
USians will have a hard time coming to terms with this. Cf. the Bush years, when tourists claimed to be Canadian, the ugly American in the White House, etc. With Bush the Lesser there was a return to normal (because institutions were kept in place); with Trump that road is cut off.
-
The officer then said that even a swift return of America to its former role won’t matter. Because “we will never fucking trust you again.”The Americans at the table seemed somewhat startled by the heat of that pronouncement. I agreed with it entirely. So, it seemed to me, did most of the non-Americans. This wasn’t the only such moment at the forum this year, but it was, to me, the most interesting. And it was still being talked about the next day. “Thank God,” one allied official said to me. “Someone had to tell them.”
Whatever happens in the USA in the coming 3 yrs: "We will never trust you again". This has very deep reaching impacts.
-
But before I could worry about it too much, a senior military officer from a major (non-American) allied nation drove a stake right through the heart of the matter.America has blown 80 years of accumulated goodwill and trust among its allies, our American moderator was told. A rock-steady assumption of allied defence and security planning for literally generations has been that America would act in its own interests, sure, but that those interests would be rational, and would still generally value the institutions that America itself worked so hard to build after the Second World War. America’s recent actions have destroyed the ability of any ally to continue to have faith in America to act even within its own strategic self-interest, let alone that of any ally.
Eight decades of soft power squandered, rationality gone and institutionalised governance dismantled. In short, the US cannot even be assumed to act within its own self-interest.
-
And the damage to America’s soft power — the shutting down of aid programs and things like Voice of America — can’t be undone rapidly no matter who wins the midterms. U.S. troops that are pulled out of bases where the U.S. no longer sees a strategic reason for their presence aren’t likely to come back.And, this is the critical part, wouldn’t necessarily be welcomed even if they did.
Undoing many decisions will be impossible.
-
The damage is already done. I know firsthand that a great many Americans who really do believe in the post-1945 global order, and of America’s prior role in the world and the value of that role to America and Americans, are still inside the U.S. government. But I also know that many of them are retiring, or seeking early retirement, or switching to consulting gigs. They can’t stomach what U.S. foreign policy is becoming, and they won’t be a part of it.Good for them. But every single person who departs is being replaced by someone who is totally fine with the new U.S. foreign policy. And sometimes is actually quite enthusiastic about it. That will accelerate the process that’s already underway. And those new people are going to have long careers, shaping things both in public and behind the scenes.
Author calls bs on back-to-normal hopes. Many officials are leaving and are being replaced by younger ones who buy into the new US policy and will shape it for decades.
-
The session, over dinner, was a small group. It was about America’s moral leadership in the world. Our moderator was a now-former American official. She was pretty frank and clear-eyed about how America’s allies currently view the country’s place in the world, but also expressed some hope that after the midterms next year or maybe the next presidential election, things would start to get more back to normal. We were assured that a lot of people in America are still with us. Some of the other Americans present nodded their heads.
Pre-Trump officials in the US think there's a road back to where the US was before
-
Amazon or any other U.S.-based company will then make the decision that best promotes and protects long-term shareholder value. And that decision will be, in every case, to submit and comply. Everyone in the room knew that. America is different, now. It’s inescapable.
Kleptocratic Gleichschaltung.
-
There were two fascinating things about that exchange (it starts around the 17-minute mark of that video). The first was the question itself; it alone was a signal of how much things have changed. The second interesting thing is that Zapolsky’s answer was, with respect, bullshit. I can see why he’s a legal officer! He gave an answer that was legally correct — the only way that the U.S. government can officially bar Amazon from providing cloud services for a foreign military, for example, would be by sanctions or some comparable legislation.
The evasive answer is bs bc it isn't how it would go in reality
-
What would happen is that someone senior at Amazon, maybe Jeff Bezos himself, would get a call from some golf partner or drinking buddy in the administration, and the message will be simple: “Stop, or you won’t get contracts. We’ll arrange some hearings into your operations. Your little spaceflight company will find itself under way more levels of regulatory review than your Musk-owned competitor. This is what the boss wants. Make it happen.”
mobster governance. Klept
-
Tarabay dropped a humdinger of a question on Zapolsky. Here’s the quote (slightly cleaned up for clarity): “We’re in an age where there’s a government that puts pressure on companies [and] people for [Trump’s] own gain. You have been so steadfast in your support for Ukraine. What will Amazon do if your government says ‘Stop’?”Zapolsky replied that the company has contracts with foreign governments and NATO allies and said that Amazon would only change those relationships if it was legally forced to do so via something like a sanction.
Amazon, when asked, said it would change those relationships only if legally forced to, e.g. via sanctions. Cf. the parallel with the same bland avoidance the Dutch State Secretary gave wrt the Cloud Act.
-
but it raised a much deeper point — America has “walked away” from its allies. And the leader of the CODEL took no issue with that characterization.
In response, the US senator did not disagree.
-
Kucher said this to Shaheen: “We’ve talked about allyship. What should the allies, who uphold democratic values, in the reality that the United States has walked away from them … what should the allies do?”
Question by a Canadian senator to a US senator at the Halifax security forum. Premise: the USA has walked away from democracy and its allies.
-
-
www.ncsc.nl
-
2022 report by the National Cyber Security Centre (Dutch Ministry of Justice and Security)
-
-
wetten.overheid.nl
-
ARBIT, Algemene Rijksvoorwaarden bij IT-overeenkomsten (the Dutch state's general terms and conditions for IT contracts)
-
-
en.wikipedia.org
-
I will not confirm or deny that that is happening, but there is nothing in 12333 to prevent that from happening.
Canary statement
-
Hypothetically, under 12333 the NSA could target a single foreigner abroad. And hypothetically if, while targeting that single person, they happened to collect every single Gmail and every single Facebook message on the company servers not just from the one person who is the target, but from everyone—then the NSA could keep and use the data from those three billion other people. That’s called 'incidental collection.'
Example of how EO 12333 'can' be used: take all big-tech data as 'incidental' data around a legal foreign-intelligence target.
-
Executive Order 12333 has been regarded by the American intelligence community as a fundamental document authorizing the expansion of data collection activities.[9] The document has been employed by the National Security Agency as legal authorization for its collection of unencrypted information flowing through the data centers of internet communications giants Google and Yahoo!.[9]
US intelligence sees EO 12333 as the primary ground for its data-collection activities, such as collecting any unencrypted data that flows through big-tech data centers.
-
Part 2.3 permits collection, retention and dissemination of the following types of information along with several others. (c) Information obtained in the course of lawful foreign intelligence, counterintelligence, international narcotics or international terrorism investigation ... (i) Incidentally obtained information that may indicate involvement in activities that may violate federal, state, local or foreign laws[1]
EO 12333, in part 2.3, permits collection, retention and sharing of any data obtained during lawful intelligence or international law-enforcement investigations, and of any other incidentally obtained data that may indicate a violation of law.
-
Executive Order 12333, 1981 (Reagan's 1st year). Extends US Intelligence powers.
-
-
berthub.eu
-
comes to fall under American legislation such as the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), the Foreign Intelligence Surveillance Act (FISA)5 and Executive Order 12333?
- [ ] Besides the Cloud Act, also explicitly store FISA sect. 702 and EO 12333. #digitalsovereignty #geonovumtb
-
If the Algemene Rijksvoorwaarden bij IT-overeenkomsten 2022 (ARBIT-2022)6 have been declared applicable to the agreement, there is a ground for dissolution in the event of a far-reaching change of control over the counterparty's/contractor's company (which can be the case with mergers and acquisitions). The state advocate is currently investigating to what extent this applies. If the acquisition hampers or even makes impossible the fulfilment of the data-processing agreement and compliance with the GDPR, this can constitute a ground to dissolve the service agreement or the processing agreement respectively. This moreover leaves intact the possibility of terminating an agreement in a concrete case on the basis of the Algemene Rijksvoorwaarden.
Three paths to terminate w.r.t. Solvinity:
based on the IT terms, in case of a change of control
based on the GDPR, if compliance and the processing agreement become impossible (e.g. through foreign interference)
based on the General Government Terms regarding dissolution (a buy-out presumably?)
And what about third-country rules in EU legislation and procurement? Those seem relevant here too.
-
Strengthening digital autonomy therefore requires a European approach.
Yes.
-
The agreements between the State and Solvinity offer footholds to at least require from Solvinity that technical and organisational measures be taken to guarantee that the data it has access to is processed in a way that complies with the rules applicable in the EU, such as those of the General Data Protection Regulation. Which measures those will be is the subject of the talks between the State and Solvinity.
This is again a non-answer, 'everything will be made GDPR-compliant'. The point here is not a failure to comply with European rules, but that US players must comply with US law, also in Europe.
-
The three legal instruments mentioned make it possible, at least in theory, for authorities in the US, under the conditions stated in this legislation, to gain access to the data held by a company in the US, even when that data sits with a subsidiary and on servers outside the US. If Solvinity is acquired by a US company, this brings Solvinity within the scope of this legislation. The consequence, at least in theory, can be that US authorities in a given case gain access to the data processed by Solvinity on behalf of the State.
The State Secretary here finally admits that the US 'in theory' has access to everything that an ultimately American company holds in data. Cf. [[Een goed gesprek over digitale soevereiniteit in de gemeente]]. Use this for the repeat session in February at the municipality of Amersfoort.
-
-
onlinelibrary.wiley.com
-
An aspect of the human use of information that has generally been overlooked in the automation of information services is the human tendency to locate information spatially. Computer-based systems do not necessarily assign any unique role to spatial tags, and so a feature of considerable importance for the organization of the user's memory seems to have been largely overlooked. The spatial dimension of human memory is discussed, and some suggestions are offered for exploiting it more effectively in the context of information retrieval services.
This 1968 paper(!) posits the importance of spatial memory in information use / design.
https://doi.org/10.1002/asi.5090190315
Spatial memory, George Miller, psychology and information; in Zotero.
-
-
www.yusufarslan.net
-
https://www.yusufarslan.net/sites/yusufarslan.net/files/upload/content/Miller1968.pdf
Response time in man-computer conversational transactions; in Zotero.
What do we experience as instantaneous, and when are we distracted?
-
-
singularityhub.com
-
The economics are far from certain though, and competition will be fierce. Even if NASA is able to spur a private orbital economy, there may not be enough business to support multiple private space stations.
from 'new era' to 'far from certain economics'. Any Chinese plans additionally?
-
All these projects hope to have NASA as an anchor tenant. But they are also heavily reliant on the idea that there are a broad range of potential customers also willing to pay for orbital office space.
The projects depend on public money, NASA as a tenant. So one out of these 4
-
In addition, Blue Origin, founded by Jeff Bezos, is working with Sierra Space and Boeing to build Orbital Reef,
Orbital Reef is another project by Blue Origin (Bezos), and Boeing ao.
-
Meanwhile, Voyager Space and Airbus are designing a space station called Starlab, which recently moved into “full-scale development” ahead of an expected 2028 launch. The station can host four astronauts, features an external robotic arm, and is designed to launch in one go aboard SpaceX’s forthcoming Starship rocket.
Voyager Space and Airbus are jointly designing Starlab, to be launched in 2028, but it depends on SpaceX's Starship rocket, which is not yet operational.
-
Axiom Space, one of the companies vying for this funding, plans to piggyback on the ISS to build its space station. The company will first launch a power and heating module and connect it to the ISS. The module will be able to operate independently starting in 2028. They’ll then gradually add habitat and research modules alongside airlocks to create a full-fledged private space station.
Axiom Space also has LEO plans. Wants to use ISS as starting platform and add modules to work independently from ISS
-
The agency has paid out about $415 million in the program's first phase to help companies flesh out their designs. But next year, NASA plans to select one or more companies for Phase 2 contracts worth between $1 billion and $1.5 billion and set to run from 2026 to 2031.
NASA's LEO program spent USD 415M in phase 1 (designs), and will fund USD 1-1.5B over 2026-2031 to operationalise some of them.
-
Development of Vast’s second station relies on funding from NASA’s Commercial Low Earth Orbit Destinations program,
Haven-2 needs NASA public funding, from NASA LEO program
-
Haven-2, a larger modular station that Vast hopes could succeed the ISS.
Haven-2 is meant as potential ISS replacement
-
May 2026, when California-based startup Vast plans to launch its Haven-1 space station
Haven-1 by Vast is a PoC launch, to be launched by SpaceX Falcon 9
-
ISS is nearing the end of its planned lifespan and NASA’s been clear that it doesn’t intend to replace the space station.
ISS is planned until 2028/2030. NASA wants to replace it with a market-actor project (but does need a permanent presence in LEO) and then focus on Mars and Moon projects.
-
The ISS was humanity's only permanent outpost in space for nearly a quarter of a century, until China’s Tiangong station was permanently crewed in 2022.
This sentence ignores Mir (1986-2001): the ISS was the sole permanent outpost for roughly 20 years (2001-2022), not 'nearly a quarter century'.
China has had Tiangong permanently crewed since 2022.
-
Page describes some planned market-actor launches of space stations,
in overhyped terms (speaking of a new 'era' while nothing has launched yet).
-
-
www.nasa.gov
-
ISS was approved in 1984, first parts launched in 1998, operational in 2000.
-
-
www.dailykos.com
-
US democratic community site notices the Trump admin steps against DSA enforcement, and describes it well.
-
-
www.deepl.com
-
Been using DeepL for some time. For longer texts I'd need an account at EUR 90-300/yr. Better to set it up with a local model? Or the EC's translation service? An own environment might let me work more seamlessly in multiple languages.
-
-
lifehacker.com
-
You just need an APK file to install it on Android; no need for a developer account as such (only for Play Store distribution).
-
-
developer.android.com
-
https://developer.android.com/get-started/codelabs
has a bunch of tutorials for specific types of functionality in android apps
-
-
developer.android.com
-
https://developer.android.com/courses has training courses for Android Apps in Kotlin, also one for beginners. The website tracks your Google account.
-
-
-
Jaguar made its last fossil-fuel car. The last production line for fossil-fuel-powered cars, for the F-Pace SUV, has been shut down; only EVs from now on. Meanwhile the German car industry pushed for and got a five-year extension from the EC to keep producing obsolete fossil-fuel cars.
-
-
en.wikipedia.org
-
Main entry point. As in C, C++, C#, Java, and Go, the entry point to a Kotlin program is a function named "main", which may be passed an array containing any command-line arguments. This is optional since Kotlin 1.3.[26] Perl, PHP, and Unix shell–style string interpolation is supported. Type inference is also supported.
Kotlin's entry point is a function named main, like in C++ and Java. Since Kotlin 1.3 its command-line arguments parameter is optional.
-
The name is derived from Kotlin Island, a Russian island in the Gulf of Finland, near Saint Petersburg. Andrey Breslav, Kotlin's former lead designer, mentioned that the team decided to name it after an island, in imitation of the Java programming language which shares a name with the Indonesian island of Java
Kotlin is named after Kotlin Island, a Russian island in the Gulf of Finland, a nod to Java (Andrey Breslav, Kotlin's originator, is Russian).
-
On 7 May 2019, Google announced that the Kotlin programming language had become its preferred language for Android app developers.[7] Since the release of Android Studio 3.0 in October 2017, Kotlin has been included as an alternative to the standard Java compiler.
Kotlin has been Google's preferred programming language for Android apps since mid-2019, and has been included in Android Studio since version 3.0 (October 2017).
-
Kotlin programming language, object oriented, java interoperability. Integrated in Android Studio
-
-
developer.android.com
-
Prerequisites Basic Kotlin knowledge
Kotlin https://kotlinlang.org/
-
Tutorial for Android Studio, to create a first simple app
-
-
developer.android.com
-
basic configuration of Android Studio
-
Android Studio is the official IDE for Android app dev. There is a MacOS version.
#openvraag is this usable to code up a personal app for mobile?
-
-
www.theguardian.com
-
https://web.archive.org/web/20251226113306/https://www.theguardian.com/commentisfree/2025/dec/26/ai-dark-ages-enlightenment Opinion piece asking if AI is taking on the similar (feudal) role of priests, kings and lords to outsource our decisions to. Leaving the enlightenment behind, and the romanticist invention of the self.
-
-
www.theguardian.com
-
Russian gov now cracking down on 'probiv' market for hacked/leaked data. It was of use to themselves, but now also used by Ukraine to strike inside Russia.
-
-
tryvoiceink.com
-
VoiceInk, installed it.
-
-
tryvoiceink.com
-
Once you have multiple enhancement prompts, you can switch between them on the fly without opening the main app window. This is done using keyboard shortcuts when the Mini Recorder is active. For a detailed guide, see Quickly Switching Enhancement Prompts. By configuring an AI provider, you unlock the full potential of VoiceInk's enhancement features, allowing you to transform your speech into perfectly formatted and context-aware text.
you can have various enhancement prompts (wrt styling, type of output etc), and you can switch through keyboard shortcuts.
-
In the Enhancement settings, you can also enable Clipboard Context and Context Awareness. These features provide the AI with additional information from your clipboard or screen to produce more accurate and relevant results.
Has context (both window and clipboard)
-
Select a Model: Once connected, you can choose from any of the models you have pulled in Ollama.
Connecting to Ollama lets me choose the model for text enhancement.
-
Supported Providers Ollama (Free & Local): Run powerful open-source models locally on your machine. This is a great option for privacy and offline use.
Ah, enhancement can be done locally too, by connecting to Ollama.
-
-
tryvoiceink.com
-
Enhancement ModelsTransform your transcriptions with AI-powered enhancement and correction
Enhancement models (to clean up transcripts); mentions things I have locally, but suggests it uses third-party external services for it.
-
Here are the best transcription models you can use with VoiceInk:
Parakeet model: the real-time offline model with the best accuracy
Whisper large v3 turbo: fast and accurate Whisper model from OpenAI
VoiceInk recommends Parakeet (Nvidia) and Whisper large v3 turbo as local models. I wonder what happens if I connect it to European models (like the Swiss Apertus).
-
-
voiceink.app
-
Made with ❤️ by Pax
Prakash Joshi Pax is Nepali (says their X account). Not sure where their business is based.
-
Open Source & Privacy-FirstThe source code of VoiceInk is available on GitHub
VoiceInk is open source, and on Github
-
VoiceInk works only on Apple Silicon Macs and requires macOS 14.0 or later. The local models require Apple Silicon's Neural Engine for fast, local AI processing. While we offer cloud models as an alternative, the local version is designed specifically for Apple Silicon. We recommend having at least 8GB of RAM for optimal performance.
Requires Apple Silicon, and its 'Neural Engine'; what's that? You can connect to the cloud as an alternative, though.
-
The Cloud Enhancement feature is entirely optional - if enabled, only the transcribed text (not your voice) is processed by third-party providers for improved accuracy.
VoiceInk does have a cloud based component, opt-in, for processing already transcribed text by third parties.
-
SoloWorks on 1 macOS device
One-time USD 25 for single-device usage.
-
Personal DictionaryTrain AI with your custom words, phrases, and replacements for faster, more personalized responses
allows own dictionary
-
VoiceInk says it works with local models. Not sure yet if I can set the model.
-
-
usefulai.com
-
VoiceInk is a macOS dictation app that uses local AI models to convert speech to text with complete privacy since all processing happens offline on your device.
VoiceInk is said to be fully local for dictation
-
Offline Mode: Works without internet connection
Only some Dutch is handled locally; English, French and German are all done in the cloud. So Apple Dictation is not suited for my work.
-
MacWhisper is a Mac-exclusive transcription app that runs OpenAI's Whisper AI model directly on your computer for speech-to-text conversion.
MacWhisper works locally, only on Mac. Uses OpenAI's model though. Really fully local?
-
The local processing makes SuperWhisper stand out for privacy-conscious users who want dictation without cloud dependency
SuperWhisper a mac only fully local tool
-
Apple Dictation is the built-in speech-to-text feature that comes with all Mac and iOS devices.
Can work offline / on-device, not in all languages though.
-
-
wisprflow.ai
-
Flow uses a combination of open-source models (i.e. LLAMA 3.1) and proprietary LLM providers (such as OpenAI) to provide its services. Wispr has agreements with all third party generative AI providers to ensure no data is stored or used for model training (zero data retention).
Wispr Flow uses both open (Llama) and closed LLMs, a.o. OpenAI. Server-side though.
-
-
wisprflow.ai
-
Privacy built-inWith Privacy Mode enabled, zero dictation data is stored on our servers. To enable it, go to Settings → Data & Privacy → Privacy Mode.
One can opt out of dictation data being stored on their servers; the default is storing it, though. I suppose this means any dictation is handled online, server-side and not locally? Otherwise they'd have said that, no?
-
-
wisprflow.ai
-
Backed by the best: We are fortunate to work with some of the best investors in the industry. Our backers include the top venture firms and some of the world's most exceptional founders and product builders.
Wispr Flow is based in California, USA, and VC-funded, among others by individuals from OpenAI, Dropbox and Coinbase.
-
-
world.hey.com
-
We are entering a time when the ability to create software is no longer a specialized skill. It is becoming a basic form of digital literacy, like writing a document or making a spreadsheet. Not everyone will do it. But everyone could, if they wanted to.
Yes, agency does increase where people realise this works locally.
-
Apple faces choices. They could try to restrict sideloading further, but that means fighting against a tide of users who simply want to run software they or their friends created.
History suggests restriction is the likely path. Most people will accept phones, iPads and computers as they are. As with the IndieWeb there is a population of people going against that current, but not a tide. Unless e.g. interoperability requirements from the DSA and DMA force the issue. 'Sideloading' is simply installing software.
-
The implications are significant. Apple's control over iOS software distribution has always rested on two pillars. First, the App Store as the only legitimate channel. Second, the high barrier to creating software in the first place. The second pillar just collapsed. And without it, the first pillar looks different. The App Store is not going away. It will remain the home of professional, polished applications. But it will no longer be the only place where iPhone software lives.
Two moats: the App Store for legitimacy, and the hurdle of creating software. The second one has become shallower and narrower now.
-
You need three things. A Mac with Xcode, which is free to download. A $99 per year Apple Developer account. And an AI tool that can write code based on your descriptions.
Three elements for making his iPhone apps: Xcode (which I use), an Apple Developer account (USD 99/yr), and AI support in coding (he uses Claude Code; cf. [[Mijn vibe coding set-up 20251220143401]]).
-
This way I made a lot of existing apps that I happily paid for absolutely obsolete. The stuff that I created was simply doing more of what I wished for, building on the ideas of all the apps I have seen before. A next iteration, but just for me.
Making personal tools makes generic ones obsolete. Yet the generic ones do serve as a starting point for inspiration and design choices. Personal iterations on top of what went before.
-
So far I have built six different apps this way in the past few weeks. A personal transit tracker. A task manager tailored to my exact workflow. Small tools that solve specific problems in my life.
[[Martijn Aslander]] has made several personal tools for his iphone.
-
a voice app called Wispr Flow to talk to my computer in Dutch
[[Martijn Aslander]] uses Wispr Flow to talk to his laptop.
-
Much of it feels like a monopoly hiding behind rhetoric about security and user experience.
Yup, also in reference to the DMA and DSA.
-
-
wisprflow.ai
-
Pricing: free tier of 2k words/week on laptop; individual/team tier €144/yr.
-
-
wisprflow.ai
-
Wispr Flow is available on Mac.
-
-
www.ctol.digital
-
The US administration sanctioned people involved in shaping the DSA. The wilful misreading of the DMA, DSA, AIR and GDPR in the US and by big tech is a clear confirmation of their necessity for the European market.
The article seems too narrow in looking at the dynamics. Tech platforms are not the context; the single market and its rules and access are, including outside digital. This means every other party dealing with platforms has a very different set of considerations when choosing platforms of any size. Loss of market access is not about the tech, but about whether there are others willing to do business with you.
-
The base case isn't resolution. It's controlled escalation with higher compliance spend, modest margin drag, and forced substitution as the biggest platforms build moats from regulatory complexity itself.
The US admin escalation is likely a spasm, and if not, a cause of bifurcation rather than a response to it. The base case is shoulder shrugs anywhere outside the USA. The endpoint is no market access for non-compliant platforms of any size. That does not mean a ban or tech blockade, but the absence of any possibility to interact with the EU market as corporates, including e.g. the ad market. The law of two feet is the largest fine here: not just the platforms need to be compliant, their business partners too, and for those partners walking away is the cheaper compliance path.
-
The wild card remains a behind-the-scenes "Digital Bretton Woods"—standardized frameworks for transparency, due process, and appeals that let both sides claim victory while lowering uncertainty.
Not a wild card (the wild card is the zero-sum behaviour of the US admin), but an aimed-for outcome. Standardisation, transparency and interoperability are key digital policy aims. Note that this is exactly what big tech is clamouring against at the moment.
-
Second-order effects create opportunities: vendors selling compliance plumbing (audit trails, policy ops, transparency systems) gain; European "sovereignty stack" providers (cloud, identity, data governance) benefit if retaliation shifts to procurement preferences over fines.
This is not second-order, but a primary policy aim for the EU digital single market.
-
The splinternet thesis gets its Western chapter. Not US-China separation, but US-EU divergence inside allied markets—subtler, but more margin-destructive. Big Tech that can operationally bifurcate wins near-term. X-style political defiance loses because EU enforcers smartly choose process violations over content disputes.
Regulatory differences are of all times, and a splinternet, which implies hard (tech) breaks, it is not. Additionally, the DSA is unifying for Europe, part of the digital single market. It only looks like divergence to incumbents outside the EU.
-
That raises fixed costs and favors scale—paradoxically advantaging the largest platforms that can afford regional bifurcation while crushing subscale competitors.
Not really: it does not favour scale, since compliance is progressive. The next sentence says as much. One platform's bifurcation is the same as having two smaller ones, which have lower compliance costs and thus won't be crushed. This is already the case even: global platforms already cater to different regulatory regimes (and morally questionable ones at that). The underlying faulty assumption is that global platforms for everyone and everything are the desired outcome at all. SV thinking and funding is the root cause. Other paths exist, just not in their world. Zebras, not unicorns. [[Zebra bedrijven zijn beter 20190907063530]] Cf. the physical economy, e.g. the German industrial base actually consists of many medium-sized orgs that are market leaders in some niche, not the car manufacturers usually mentioned as such.
-
Expect geo-fenced product design: "EU mode" platforms with different algorithmic defaults, transparency flows, and researcher access versus "US mode."
Yes, likely in the short term. The thing is: once people globally see the outcomes of those different modes, which will they prefer? Cf. the GDPR.
-
U.S. willingness to treat compliance as hostile action changes the calculation.
Yes, it means the USA cannot be treated as an equal or rational counterpart. n:: The EU moving to adversarial interoperability, in a sense? n:: A Brussels counter-effect?
-
That increases risk premium on firms whose EU revenues depend on algorithmic distribution: social platforms, digital advertising, app stores, marketplace ranking.
In contradiction to the entrenching claim above. Of these, adtech is the key thing, plus algorithms aimed at engagement (i.e. rage).
-
The irony both sides miss: this conflict could entrench the very platforms Trump claims to defend and Europe claims to regulate. Compliance burden becomes incumbent moat.
Not following. By definition the strictest rules apply to the largest platforms, so no moat. n:: The compliance burden is progressive, like taxes are / should be.
-
The DSA doesn't mandate content removal based on viewpoint; it requires transparency in algorithmic curation, researcher access to platform data, and accountability for enforcement decisions. What the Trump administration calls "censorship," Europe frames as democratic governance of the digital public square—the same principle that makes "what is illegal offline illegal online."
Yes, such paragraphs need to be up front.
-
a sharper conflict emerges: this is about who owns the distribution layer of democracy—who sets the rules for how speech gets amplified, throttled, demonetized, and made discoverable on platforms where most political discourse now occurs.
Yes, sort of. 'Distribution layer of democracy' is an interesting phrase. Amplification, throttling, monetisation (a Freudian misspelling there?), and discoverability are all important. But the key thing: the platforms in question are not platforms in the strict sense; they actively shape the information on them. So liability protections for platforms should not apply, or government can also set the boundaries of such shaping.
-
The EU had just levied a €120 million fine on X for DSA violations—the first major enforcement action under rules requiring platforms to moderate illegal content
Many other fines have been levied (all without making any dent in the behaviour fined, though); the €120M one for Twitter/X was the first under the DSA's illegal-content rules (which don't specify what content is illegal, but require mechanisms for moderating it).
-
Secretary of State Marco Rubio framed it as combating a "global censorship-industrial complex" targeting American platforms and speech.
The choice of words is wrong on many different levels: 'global', 'censorship' and 'industrial complex' would each need a long rebuttal. One would need more populist labels for the actual character of the platforms (not just big tech) running afoul of the DSA.
-
visa restrictions on Thierry Breton, architect of the EU's Digital Services Act, alongside four anti-disinformation advocates: Imran Ahmed of the Center for Countering Digital Hate, Clare Melford of the Global Disinformation Index, and Anna-Lena von Hodenberg and Josephine Ballon of HateAid
Sanctioned are Thierry Breton (European Commissioner in the previous period) and people from the Center for Countering Digital Hate, the Global Disinformation Index, and HateAid. Such organisations play a role in researching the inner workings of platforms.
-
-
www.niemanlab.org
-
The cost goes beyond simple inefficiency and becomes a mountain of invisible labor, usually absorbed by the most junior person in the room or whoever has the misfortune of being labeled as “good with computers.” It becomes a drag on every collaboration, the friction in every workflow, the meetings that take an extra ten minutes while someone (who is often paid twice the average salary of the other people in the meeting) figures out why they can’t access the shared folder the rest of us have been using for months. It’s the quiet erosion of patience and goodwill among people who are constantly expected to know and fix things that shouldn’t need fixing in the first place.
The cost of lacking skills does not stay with the individual knowledge worker; it gets externalised to others who have to fix things, or multiplied across groups waiting on you to get something working. The incompetence spreads out.
-
Imagine a carpenter who couldn’t figure out how to adjust their table saw, or a surgeon who shrugged and said something like, “I’m just not a scalpel person.” We would never accept that. But in the field of knowledge work, “I’m just not a tech person” has become a permanent identity instead of a temporary gap to be filled.
I'm just not a scalpel person! Ha!
-
I’m talking about the basics: keyboard shortcuts that save hours per week. Understanding the difference between “reply” and “reply all.” Knowing how to search your own inbox or switch between work and personal accounts. Reading the words on your screen or an error message before throwing your hands up and declaring “something’s broken.” Learning how to unmute yourself or share your screen after years of being forced to do all our meetings on Zoom.
Basic skills are often still lacking. Keyboard shortcuts first, or even knowing how to interact with interfaces through the keyboard rather than the mouse.
-
The number of professionals in journalism, media, communications, and academia who still don’t understand how to use the very tools they depend on for their livelihood is, frankly, staggering
Knowledge workers are the largest group of people who don't know their own tools. Cf. [[Kenniswerk is ambacht 20040924200250]]
-
I assumed we would all finally reach a baseline level of digital fluency. I could not have been more wrong.
baseline digital fluency not reached
-