AI is already augmenting important parts of the AI research process itself, and that will only accelerate
for - quote - AI - AI is accelerating AI research itself
at any given time, the CCP may have a better idea of what OpenAI’s frontier advances look like than the U.S. government does.
for - AI - Chinese know more than US government about latest US frontier AI research
https://web.archive.org/web/20250423134653/https://www.rijksoverheid.nl/documenten/publicaties/2025/04/22/het-overheidsbrede-standpunt-voor-de-inzet-van-generatieve-ai The Dutch government-wide position on generative AI, partly based on the IEC advice to IPO. Still, it seems to me quite a lot of latitude is given here, and the wording sounds very loose. That will cause problems, because a policy officer who plays with genAI while drafting a document and labels it for themselves as 'experiment' or 'innovation' has thereby rationalised it. Never mind that experiments require controlled conditions, and that innovation needs a shared intention within the organisation. This feels very soft, even though the right things are in it.
[[When Will the GenAI Bubble Burst]]
misled investors by exploiting the promise and allure of AI technology to build a false narrative about innovation that never existed. This type of deception not only victimizes innocent investors
The crime was misleading investors, not anyone else, which is very telling. The hype around "AI" - and actually hiring remote workers to do the job - and misleading customers/users doesn't matter.
In truth, nate relied heavily on teams of human workers—primarily located overseas—to manually process transactions in secret, mimicking what users believed was being done by automation
Yet another example of "AI" being neither artificial nor intelligent.
This change means many data centers built in central, western, and rural China—where electricity and land are cheaper—are losing their allure to AI companies. In Zhengzhou, a city in Li’s home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients.
Interesting cautionary tale about building out DCs in the sticks, where energy is cheap but latency is high
Instead of drafting a first version with pen and paper (my preferred writing tools), I spent an entire hour walking outside, talking to ChatGPT in Advanced Voice Mode. We went through all the fuzzy ideas in my head, clarified and organized them, explored some additional talking points, and eventually pulled everything together into a first outline.
Need to try this out.
Review coordinated by Life Science Editors Foundation Reviewed by: Dr. Angela Andersen, Life Science Editors Foundation & Life Science Editors Potential Conflicts of Interest: None
PUNCHLINE Evo 2 is a biological foundation model trained on 9.3 trillion DNA bases across all domains of life. It predicts the impact of genetic variation—including in noncoding and clinically relevant regions—without requiring task-specific fine-tuning. Evo 2 also generates genome-scale sequences and epigenomic architectures guided by predictive models. By interpreting its internal representations using sparse autoencoders, the model is shown to rediscover known biological features and uncover previously unannotated patterns with potential functional significance. These capabilities establish Evo 2 as a generalist model for prediction, annotation, and biological design.
BACKGROUND A foundation model is a large-scale machine learning model trained on massive and diverse datasets to learn general features that can be reused across tasks. Evo 2 is such a model for genomics: it learns from raw DNA sequence alone—across bacteria, archaea, eukaryotes, and bacteriophage—without explicit labels or training on specific tasks. This enables it to generalize to a wide range of biological questions, including predicting the effects of genetic variants, identifying regulatory elements, and generating genome-scale sequences or chromatin features.
Evo 2 comes in two versions: one with 7 billion parameters (7B) and a larger version with 40 billion parameters (40B). These numbers reflect the number of trainable weights in the model and influence its capacity to learn complex patterns. Both models were trained using a context window of up to 1 million tokens—where each token is a nucleotide—allowing the model to capture long-range dependencies across entire genomic regions.
Evo 2 learns via self-supervised learning, a method in which the model learns to predict masked or missing DNA bases in a sequence. Through this simple but powerful objective, the model discovers statistical patterns that correspond to biological structure and function, without being told what those patterns mean.
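To make that objective concrete, here is a minimal, illustrative Python/PyTorch sketch of a masked-nucleotide prediction loss. This is not Evo 2's actual architecture or training code; the tiny transformer, vocabulary, and example sequence are my own stand-ins, chosen only to show the idea of recovering hidden bases from context.

```python
# Illustrative sketch only (not Evo 2's actual architecture or code): a masked-base
# self-supervised objective, where the model must recover hidden nucleotides from
# the surrounding sequence context.
import torch
import torch.nn as nn

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "<MASK>": 4}

class TinyDNAModel(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 4)  # logits over A/C/G/T at every position

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def masked_base_loss(model, seq="ACGTACGTAACCGGTT", mask_prob=0.15):
    ids = torch.tensor([[VOCAB[b] for b in seq]])
    mask = torch.rand(ids.shape) < mask_prob
    mask[0, 0] = True                                 # ensure at least one masked position
    inputs = ids.masked_fill(mask, VOCAB["<MASK>"])
    logits = model(inputs)
    # Loss is computed only at masked positions: predict the original base from context.
    return nn.functional.cross_entropy(logits[mask], ids[mask])

print(masked_base_loss(TinyDNAModel()))
```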
QUESTION ADDRESSED Can a large-scale foundation model trained solely on genomic sequences generalize across biological tasks—such as predicting mutational effects, modeling gene regulation, and generating realistic genomic sequences—without supervision or task-specific tuning?
SUMMARY The authors introduce Evo 2, a foundational model for genomics that generalizes across DNA, RNA, and protein tasks. Without seeing any biological labels, Evo 2 learns the sequence rules governing coding and noncoding function, predicts variant effects—including in BRCA1/2 and splicing regions—and generates full-length genomes and epigenome profiles. It also enables epigenome-aware sequence design by coupling sequence generation with predictive models of chromatin accessibility.
To probe what the model has learned internally, the authors use sparse autoencoders (SAEs)—a technique that compresses the model’s internal activations into a smaller set of interpretable features. These features often correspond to known biological elements, but importantly, some appear to capture novel, uncharacterized patterns that do not match existing annotations but are consistently associated with genomic regions of potential functional importance. This combination of rediscovery and novelty makes Evo 2 a uniquely powerful tool for exploring both the known and the unknown genome.
KEY RESULTS Evo 2 trains on vast genomic data using a novel architecture to handle long DNA sequences Figures 1 + S1 Goal: Build a model capable of representing entire genomic regions (up to 1 million bases) from any organism. Outcome: Evo 2 was trained on 9.3 trillion bases using a hybrid convolution-attention architecture (StripedHyena 2). The model achieves long-context recall and strong perplexity scaling with increasing sequence length and model size.
Evo 2 predicts the impact of mutations across DNA, RNA, and protein fitness Figures 2A–J + S2–S3 Goal: Assess whether Evo 2 can identify deleterious mutations without supervision across diverse organisms and molecules. Outcome: Evo 2 assigns lower likelihoods to biologically disruptive mutations—e.g., frameshifts, premature stops, and non-synonymous changes—mirroring evolutionary constraint. Predictions correlate with deep mutational scanning data and gene essentiality assays. Evo 2 embeddings also support highly accurate exon-intron classifiers.
Clarification: “Generalist performance across DNA, RNA, and protein tasks” means that Evo 2 can simultaneously make accurate predictions about the functional impact of genetic variants on transcription, splicing, RNA stability, translation, and protein structure—without being specifically trained on any of these tasks.
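The likelihood-based scoring in the result above can be sketched as a simple log-likelihood delta: a variant is scored by how much it lowers the model's probability of the sequence. The snippet below is my own toy simplification, not the paper's code; the independent-base "model" stands in for Evo 2, which conditions on long-range context.

```python
# Toy sketch of likelihood-based variant scoring (my simplification, not the paper's code):
# a variant is scored by how much it changes the model's log-likelihood of the sequence;
# strongly negative scores are read as a proxy for functional disruption.
import math

def sequence_log_likelihood(base_probs, seq):
    """Stand-in for a genomic language model: sums per-base log-probabilities."""
    return sum(math.log(base_probs[base]) for base in seq)

def variant_effect_score(base_probs, ref_seq, pos, alt_base):
    alt_seq = ref_seq[:pos] + alt_base + ref_seq[pos + 1:]
    return (sequence_log_likelihood(base_probs, alt_seq)
            - sequence_log_likelihood(base_probs, ref_seq))

# Toy independent-base "model"; a real model like Evo 2 conditions on surrounding sequence.
probs = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}
print(variant_effect_score(probs, "ACGTACGT", pos=2, alt_base="T"))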
Evo 2 achieves state-of-the-art performance in clinical variant effect prediction Figures 3A–I + S4 Goal: Evaluate Evo 2's ability to predict pathogenicity of human genetic variants. Outcome: Evo 2 matches or outperforms specialized models on coding, noncoding, splicing, and indel variants. It accurately classifies BRCA1/2 mutations and generalizes to novel variant types. When paired with supervised classifiers using its embeddings, it achieves state-of-the-art accuracy on BRCA1 variant interpretation.
Evo 2 representations reveal both known and novel biological features through sparse autoencoders Figures 4A–G + S5–S7 Goal: Understand what Evo 2 has learned internally. Outcome: Sparse autoencoders decompose Evo 2’s internal representations into distinct features—many of which align with well-known biological elements such as exon-intron boundaries, transcription factor motifs, protein secondary structure, CRISPR spacers, and mobile elements. Importantly, a subset of features do not correspond to any known annotations, yet appear repeatedly in biologically plausible contexts. These unannotated features may represent novel regulatory sequences, structural motifs, or other functional elements that remain to be characterized experimentally.
Note: Sparse autoencoders are neural networks that reduce high-dimensional representations to a smaller set of features, enforcing sparsity so that each feature ideally captures a distinct biological signal. This approach enables mechanistic insight into what the model “knows” about sequence biology.
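For readers who want to see the mechanics, here is a minimal sparse autoencoder sketch in PyTorch, assuming activations have already been captured from an intermediate model layer. The layer sizes and L1 penalty weight are illustrative choices, not the paper's settings.

```python
# Minimal sparse autoencoder sketch: compress activations into a wider but sparsely
# activated feature space and reconstruct them, with an L1 penalty so most features stay off.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=512, n_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))   # sparse, non-negative codes
        return self.decoder(features), features

def sae_loss(sae, activations, l1_weight=1e-3):
    recon, features = sae(activations)
    # Reconstruction keeps features faithful to the activations; the L1 term pushes most
    # features to zero, so each active feature can be inspected as a candidate "concept".
    return nn.functional.mse_loss(recon, activations) + l1_weight * features.abs().mean()

sae = SparseAutoencoder()
batch = torch.randn(8, 512)                                 # placeholder activations
print(sae_loss(sae, batch))
```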
Evo 2 generates genome-scale sequences with realistic structure and content Figures 5A–L + S8 Goal: Assess whether Evo 2 can generate complete genome sequences that resemble natural ones. Outcome: Evo 2 successfully generates mitochondrial genomes, minimal bacterial genomes, and yeast chromosomes. These sequences contain realistic coding regions, tRNAs, promoters, and structural features. Predicted proteins fold correctly and recapitulate functional domains.
Evo 2 enables design of DNA with targeted epigenomic features Figures 6A–G + S9 Goal: Use Evo 2 to generate DNA sequences with user-defined chromatin accessibility profiles. Outcome: By coupling Evo 2 with predictors like Enformer and Borzoi, the authors guide generation to match desired ATAC-seq profiles. Using a beam search strategy—where the model explores and ranks multiple possible output sequences—it generates synthetic DNA that encodes specific chromatin accessibility patterns, such as writing “EVO2” in open/closed chromatin space.
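The beam search strategy amounts to a generate-score-prune loop. The sketch below shows only that loop structure; `propose_extensions` and `accessibility_error` are hypothetical stand-ins for the generative model's sampling step and for Enformer/Borzoi scoring against a target ATAC-seq profile.

```python
# Schematic of predictor-guided beam search (illustrative only, not the authors' code).
import random

BASES = "ACGT"

def propose_extensions(seq, n=8, chunk=16):
    """Stand-in: sample n candidate continuations from the sequence model."""
    return [seq + "".join(random.choice(BASES) for _ in range(chunk)) for _ in range(n)]

def accessibility_error(seq, target_openness):
    """Stand-in: distance between predicted and desired accessibility (lower is better)."""
    gc_fraction = sum(base in "GC" for base in seq) / len(seq)   # toy proxy only
    return abs(gc_fraction - target_openness)

def guided_beam_search(target_openness, steps=10, beam_width=4):
    beams = [""]
    for _ in range(steps):
        candidates = [c for seq in beams for c in propose_extensions(seq)]
        candidates.sort(key=lambda s: accessibility_error(s, target_openness))
        beams = candidates[:beam_width]                          # keep best-matching beams
    return beams[0]

print(guided_beam_search(target_openness=0.7))
```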
STRENGTHS First large-scale, open-source biological foundation model trained across all domains of life
Performs well across variant effect prediction, genome annotation, and generative biology
Demonstrates mechanistic interpretability via sparse autoencoders
Learns both known and novel biological features directly from raw sequence
Unsupervised learning generalizes to clinical and functional genomics
Robust evaluation across species, sequence types, and biological scales
FUTURE WORK & EXPERIMENTAL DIRECTIONS Expand training to include viruses that infect eukaryotic hosts: Evo 2 currently excludes these sequences, in part to reduce potential for misuse and due to their unusual nucleotide structure and compact coding. As a result, Evo 2 performs poorly on eukaryotic viral sequence prediction and generation. Including these genomes could expand its applications in virology and public health.
Empirical validation of novel features: Use CRISPR perturbation, reporter assays, or conservation analysis to test Evo 2-derived features that don’t align with existing annotations.
Targeted mutagenesis: Use Evo 2 to identify high-impact or compensatory variants in disease-linked loci, and validate using genome editing or saturation mutagenesis.
Epigenomic editing: Validate Evo 2-designed sequences for chromatin accessibility using ATAC-seq or synthetic enhancer assays.
Clinical applications: Fine-tune Evo 2 embeddings to improve rare disease variant interpretation or personalized genome annotation.
Synthetic evolution: Explore whether Evo 2 can generate synthetic genomes with tunable ecological or evolutionary features, enabling testing of evolutionary hypotheses.
AUTHORSHIP NOTE This review was drafted with support from ChatGPT (OpenAI) to help organize and articulate key ideas clearly and concisely. I provided detailed prompts, interpretations, and edits to ensure the review reflects an expert understanding of the biology and the paper’s contributions. The final version has been reviewed and approved by me.
FINAL TAKEAWAY Evo 2 is a breakthrough in foundation models for biology—offering accurate prediction, functional annotation, and genome-scale generation, all learned from raw DNA sequence. By capturing universal patterns across life, and identifying both well-characterized and unknown sequence features, Evo 2 opens powerful new directions in evolutionary biology, genomics, and biological design. Its open release invites widespread use and innovation across the life sciences.
Or is that the rainbow?
In our paper, we address this idea that ALL ✨sparkling intelligence✨ outputs are generated using the same technology and practices. We argue that it is useful to have a term for those outputs that don't match our shared reality or factual requirements, and for that we propose "mirage".
In my blog post introducing our paper, I suggest AI "rainbows" as a term for mirages that we do value.
I asked our friend Dr. Oblivion, Why is it better to refer to AI hallucinations and AI mirages? His response.
I'm assuming this is some kind of ✨sparkling intelligence✨ and given that Dr. Oblivion seems to miss the point of the paper and our discussion here, I found it more illustrative than helpful ;)
Does anybody know who came up with the term “hallucinations” in the first place? Was it Sutskever?
Turns out the story is a bit more complicated than that, at least according to the history shared by another participant below.
Joshua Pearson examines the history of the term “hallucination” in the development and promotion of AI technology: “Why ‘Hallucination’? Examining the History, and Stakes, of How We Label AI’s Undesirable Output” (2024).
This is a great history of the term "hallucination" in the discourse of ✨sparkling intelligence✨ — huge thanks to whoever shared it! I've also added it to our collaborative bibliography.
Reasoning model (deepseek-reasoner): deepseek-reasoner is DeepSeek's reasoning model. Before giving its final answer, the model first outputs a chain-of-thought passage to improve the accuracy of that answer. The API exposes the deepseek-reasoner chain-of-thought content so users can view, display, or distill it. When using deepseek-reasoner, first upgrade the OpenAI SDK to support the new parameters: pip3 install -U openai

API parameters. Input: max_tokens: maximum length of the final answer (excluding chain-of-thought output), default 4K, maximum 8K. Note that chain-of-thought output can reach 32K tokens; a parameter for controlling chain-of-thought length (reasoning_effort) will be released soon. Output fields: reasoning_content: the chain-of-thought content, at the same level as content (see the access example for how to read it); content: the final answer. Context length: the API supports up to 64K of context, and the length of the returned reasoning_content does not count toward that 64K. Supported features: chat completion, chat prefix completion (Beta). Unsupported features: Function Call, JSON Output, FIM completion (Beta). Unsupported parameters: temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs. Note that, for compatibility with existing software, setting temperature, top_p, presence_penalty, or frequency_penalty does not raise an error but has no effect; setting logprobs or top_logprobs raises an error.

Context concatenation: in each conversation round the model outputs chain-of-thought content (reasoning_content) and a final answer (content). In the next round, chain-of-thought content from previous rounds is not concatenated into the context, as shown in the diagram. Note that if you pass reasoning_content in the input messages sequence, the API returns a 400 error, so strip the reasoning_content field from the API response before making the next request, as shown in the access example.

Access example: the code below, using Python as an example, shows how to access the chain of thought and the final answer, and how to concatenate context across multiple conversation rounds.
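The original access example did not come through with the quote, so here is a minimal reconstruction of the pattern described above, using the OpenAI SDK pointed at DeepSeek's endpoint. Treat it as my sketch of the documented behavior, not the official sample code.

```python
# Minimal reconstruction of the access pattern described above (not the official sample).
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
response = client.chat.completions.create(model="deepseek-reasoner", messages=messages)

reasoning = response.choices[0].message.reasoning_content   # chain-of-thought content
answer = response.choices[0].message.content                # final answer

# Next round: append only the final answer. Passing reasoning_content back in `messages`
# makes the API return a 400 error, so it must be stripped first.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "And 9.11 versus 9.9?"})
response = client.chat.completions.create(model="deepseek-reasoner", messages=messages)
print(response.choices[0].message.content)
```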
DeepSeek reasoning model #AI #LLM
The Future of AI & Digital Innovation
for - program event selection - 2025 - April 4 - 10:30am-12pm GMT - Skoll World Forum - The Future of AI & Digital Innovation - Stop Reset Go - Indyweb -- relevant to
Delegate Led Discussion - The Changing State of AI, Media
for - program event selection - 2025 - April 2 - 2-3:15pm GMT - Skoll World Forum - The Changing State of AI, Media - Indyweb - Stop Reset Go - TPF - Eric's project - Skoll's Participatory Media project - relevant to - adjacency - indyweb - Stop Reset Go - participatory news - participatory movie and tv show reviews - Eric's project - Skoll's Participatory Media - event time conflict - with - Leadership in Alien Times
adjacency - between - Skoll's Participatory Media project - Global Witness - Indyweb - Stop Reset Go's participatory news idea - Stop Reset Go's participatory movie and TV show review idea - Eric's media project - adjacency relationship - Participatory media via Indyweb and idea of participatory news and participatory movie and tv show reviews - might be good to partner with Skoll Foundation's Participatory Media group
The results indicated that CellProfiler showed good performance across various evaluation metrics
It's fascinating that despite the surge in advanced deep learning methods, traditional non-AI approaches like CellProfiler continue to deliver superior performance in cell segmentation.
before the internet it was impossible really, I mean getting people into town halls regularly, that would have been a hard thing to do anyway; online made it a bit easier, but now with AI we can actually all engage with each other. AI can be used to harvest the opinions of millions of people at the same time and distill those opinions into a consensus that might be agreeable to the vast majority
for - claim - AI for a new type of democracy? - progress trap - AI - future democracy
Anshumali's prime research work on SLIDE algorithms.
Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing. This circuitous technique is called “reinforcement learning from human feedback,” or RLHF, and it’s so effective that it’s worth pausing to fully register what it doesn’t do. When annotators teach a model to be accurate, for example, the model isn’t learning to check answers against logic or external sources or about what accuracy as a concept even is. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them. Maybe this results in the model extracting patterns from the part of its linguistic map labeled as accurate and producing text that happens to align with the truth, but it can also result in it mimicking the confident style and expert jargon of the accurate text while writing things that are totally wrong. There is no guarantee that the text the labelers marked as accurate is in fact accurate, and when it is, there is no guarantee that the model learns the right patterns from it.
RLHF
I have adopted a no-GPT approach here because I believe in smaller open source models. I am using the fantastic Mistral 7B Openorca instruct and Zephyr models. These models can be set up locally with Ollama.
for - open source AI
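A quick sketch of what calling one of these models through a default local Ollama install could look like, assuming the server is on its default port 11434 and the model was pulled beforehand (e.g. `ollama pull mistral-openorca` or `ollama pull zephyr`):

```python
# Sketch: query a locally running Ollama server over its HTTP API (default port 11434).
import json
import urllib.request

payload = {
    "model": "mistral-openorca",
    "prompt": "Summarize RLHF in one sentence.",
    "stream": False,
}
request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```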
for - Indyweb dev - open source AI - text to graph - from - search - image - google - AI that converts text into a visual graph - https://hyp.is/KgvS6PmIEe-MjXf4MH6SEw/www.google.com/search?sca_esv=341cca66a365eff2&sxsrf=AHTn8zoosJtp__9BMEtm0tjBeXg5RsHEYA:1741154769127&q=AI+that+converts+text+into+visual+graph&udm=2&fbs=ABzOT_CWdhQLP1FcmU5B0fn3xuWpA-dk4wpBWOGsoR7DG5zJBjLjqIC1CYKD9D-DQAQS3Z598VAVBnbpHrmLO7c8q4i2ZQ3WKhKg1rxAlIRezVxw9ZI3fNkoov5wiKn-GvUteZdk9svexd1aCPnH__Uc8IUgdpyeAhJShdjgtFBxiTTC_0C5wxBAriPcxIadyznLaqGpGzbn_4WepT8N6bRG3HQLK-jPDg&sa=X&ved=2ahUKEwju5oz8ovKLAxW6WkEAHaSVN98QtKgLegQIEhAB&biw=1920&bih=911&dpr=1 - to - example - open source AI - convert text to graph - https://hyp.is/UpySXvmKEe-l2j8bl-F6jg/rahulnyk.github.io/knowledge_graph/
https://rahulnyk.github.io/knowledge_graph/
for - Indyweb dev - text to graph - open source AI - convert text to graph - adjacency - infranodus - to - AI program to convert text into visual graph
for - Indyweb dev - open source AI - text to graph - open source AI - text to graph - from - article - Medium - How to Convert Any Text Into a Graph of Concepts - https://hyp.is/vu53YvmIEe-DuHvXodWFAA/medium.com/towards-data-science/how-to-convert-any-text-into-a-graph-of-concepts-110844f22a1a
for - search - Google - image - AI that converts text into any visual graph - https://www.google.com/search?sca_esv=341cca66a365eff2&sxsrf=AHTn8zoosJtp__9BMEtm0tjBeXg5RsHEYA:1741154769127&q=AI+that+converts+text+into+visual+graph&udm=2&fbs=ABzOT_CWdhQLP1FcmU5B0fn3xuWpA-dk4wpBWOGsoR7DG5zJBjLjqIC1CYKD9D-DQAQS3Z598VAVBnbpHrmLO7c8q4i2ZQ3WKhKg1rxAlIRezVxw9ZI3fNkoov5wiKn-GvUteZdk9svexd1aCPnH__Uc8IUgdpyeAhJShdjgtFBxiTTC_0C5wxBAriPcxIadyznLaqGpGzbn_4WepT8N6bRG3HQLK-jPDg&sa=X&ved=2ahUKEwju5oz8ovKLAxW6WkEAHaSVN98QtKgLegQIEhAB&biw=1920&bih=911&dpr=1
search - google - image - AI that converts text into visual graph - interesting results returned - to - article - Medium - How to convert any text into a graph of concepts -
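A rough sketch of the text-to-graph idea behind the linked Medium article, simplified by me (not the author's code): ask an LLM for related concept pairs, then build the graph with networkx. `ask_llm` is a hypothetical placeholder for whatever local or hosted model is used.

```python
# Sketch: turn text into a concept graph by asking an LLM for concept pairs (illustrative).
import networkx as nx

def ask_llm(prompt: str) -> str:
    """Placeholder: call a model (e.g. via a local Ollama server) and return its text reply."""
    return "ontology, knowledge graph\nknowledge graph, retrieval"

def text_to_graph(text: str) -> nx.Graph:
    reply = ask_llm(f"List related concept pairs in this text, one 'a, b' pair per line:\n{text}")
    graph = nx.Graph()
    for line in reply.splitlines():
        if "," in line:
            a, b = (part.strip() for part in line.split(",", 1))
            graph.add_edge(a, b)
    return graph

print(text_to_graph("Some source text about ontologies and retrieval.").edges())
```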
AI stands poised to reveal its most vital purpose: nurturing thoughtful, capable and intrinsically motivated learners
I wonder how "purpose" is defined here. Is this why LLM applications were developed, or something they were meant to achieve?
For instance, an AI-powered platform might track how many practice problems a student has completed, indicate skills and competencies with which they struggle most, and show how their performance improves over time.
Is this an example of personalization and making AI an ally, or of locking the student into a turbocharged LMS?
0 / 3 Notes free
Only 3 notes can be created in the FREE version!
(Cultural Insight Website, n.d.).
Fake citation AI generated
Nate Angell
You might also want to visit my blog post, where I introduce the publication of this paper alongside some additional ideas on interventions to prevent AI mirages, on AI mirages vs AI rainbows, and on how AI terminology plays out in different disciplines.
tools such as GenAI have begun to lead human actors to increasingly treat technologies as social actors... Humans perceive social cues in technology, which may trigger the (mis)application of interaction scripts learned from human interaction.
for - AI - as extreme human echo chamber - Jonathan Boymal - AI
If robust general-purpose reasoning abilities have emerged in LLMs, this bolsters the claim that such systems are an important step on the way to trustworthy general intelligence.
While large language models (LLMs) are not explicitly trained to reason, they have exhibited “emergent” behaviors that sometimes look like reasoning.
The word “reasoning” is an umbrella term that includes abilities for deduction, induction, abduction, analogy, common sense, and other “rational” or systematic methods for solving problems. Reasoning is often a process that involves composing multiple steps of inference.
LLMs are substantially better at solving problems that involve terms or concepts that appear more frequently in their training data, leading to the hypothesis that LLMs do not perform robust abstract reasoning to solve problems, but instead solve problems (at least in part) by identifying patterns in their training data that match, or are similar to, or are otherwise related to the text of the prompts they are given.
[Memorization and reasoning are] not a dichotomy, but rather they can co-exist in a continuum.
on copyright and artistic style
The startup shut down in 2021, citing the cost of litigation.
How much of a defense could they have put up, given that they had already gone out of business?
ruled that Ross “meant to compete with Westlaw by developing a market substitute.
could have far-reaching implications. Consider how music and image generators compete.
may be useful in revising the Copyright Card Game to address AI
As fervent believers in longtermism, the Silicon Valley elites are not interested in the current multiple crises of our societies. On the contrary, through their social media platforms, Zuckerberg and Musk even instigate further polarization. Climate change, inequality, erosion of democracy – who cares? What counts is the far away future, not the present. Their greatest fear is not the collapse of our climate or the mass extinction of animals – they are haunted by the nightmare of AI taking over control. This would spoil their homo deus party. AI in control doesn’t need humans anymore.
for - biggest worry of silicon valley longtermists - AI takeover, not climate crisis - SOURCE - article - Guido Palazzo
Two technologies are crucial to achieve this wonderful future: rockets to leave this eventually-dying planet and AI merger with the human brain.
for - longtermism - 2 fundamental technologies - rockets and AI
In Der Standard, Martin Auber uses current data to show why merely expanding renewable energy generation capacity will not lead to decarbonization. Energy demand is growing considerably faster than the renewable energy available, and the AI boom is increasing it even further. https://www.derstandard.at/story/3000000255154/wann-kommt-die-energiewende-oder-kommt-sie-gar-nicht
MAPPING SOCIAL CHOICE THEORY TO RLHF Jessica Dai and Eve Fleisig ICLR Workshop on Reliable and Responsible Foundation Models 2024
Nice overview of how social choice theory has been connected to RLHF and AI alignment ideas.
Cognition is just one aspect of being human.
for - comment - post - LinkedIn - Bayo - AI
there's all sorts of things we have only the dimmest understanding of at present, about the nature of people and what it means to be a being and what it means to have a self. We don't understand those things very well, and they're becoming crucial to understand because we're now creating beings. So this is a kind of philosophical, perhaps even spiritual crisis, as well as a practical one. Absolutely, yes
for - quote - youtube - interview - Geoffrey Hinton - AI - spiritual crisis - AI - Geoffrey Hinton - self - spiritual crisis
quote - AI - spiritual crisis - We only have the dimmest understanding, at present, of the nature of people and what it means to have a self - We don't understand those things very well and they're becoming crucial to understand because we're now creating beings - (interviewer: so this is becoming a philosophical, perhaps even spiritual crisis, as well as a practical one) - Absolutely, yes
Poincare anticipated the frustration of an important group of would-be computer users when he said, "The question is not, 'What is the answer?' The question is, 'What is the question?'"
for - Poincare - AI question - SOURCE - paper - Man-Computer Symbiosis - J.C.R. Licklider - 1960 - referred by - Gyuri
the greatest risk is always the bio, like bioweapons
for - AI - progress trap - YouTube - bioweapons are not the only threat; nanotechnology and many others can be turned into weapons of mass destruction - DeepSeek R1 just caught up with OpenAI's o1 - There is no moat! What does this mean? - David Shapiro - 2025, Jan 29
These can be helpful for you, but there are also serious concerns. • AI can change the authenticity of your writing, turning it into a “voice” that is not your own. For example, Grammarly often changes my word choices so they don’t sound like something I’d actually say. That goes beyond just checking grammar. • It can definitely lead to plagiarism, basically creating something that is not from you. • The information is often incorrect or made up, for example citing resources that don’t actually exist.
This resonates with me, so I think after I use grammar correction, I still need to go back and check my writing to express my ideas in a way that suits my style and tone.
for - book - Burnout from Humans: A little book about AI that is not really about AI - Aiden Cinnamon Tea & Dorothy Ladybugboss - 2024
introductory AI courses at Rice University
The Secretary of Defense, in consultation with the Secretary of the Interior, the Secretary of Agriculture, the Secretary of Commerce, and the Secretary of Energy, shall undertake a programmatic environmental review, on a thematic basis, of the environmental effects — and opportunities to mitigate those effects — involved with the construction and operation of AI data centers, as well as of other components of AI infrastructure as the Secretary of Defense deems appropriate. The review shall conclude, with all appropriate documents published, on the date of the close of the solicitations described in subsection 4(e) of this order, or as soon thereafter as possible
March 31st 2025
location within geographic areas that are not at risk of persistently failing to attain National Ambient Air Quality Standards, and where the total cancer risk from air pollution is at or below the national average according to the Environmental Protection Agency’s (EPA’s) 2020 AirToxScreen;
This is a reference to sacrifice zones, and to not increasing the impact on already overburdened communities?
possess the characteristics described in subsections (a)(i)-(x) of this section, in a manner that is consistent with the objective of fully permitting and approving work to construct utility-scale power facilities on a timeline that allows for the operation of those facilities by the end of 2027 or as soon as feasible thereafter; and
This counts out nukes
require that, concurrent with operating a frontier AI data center on a Federal site, non-Federal parties constructing, owning, or operating AI infrastructure have procured sufficient new clean power generation resources with capacity value to meet the frontier AI data center’s planned electricity needs, including by providing power that matches the data center’s timing of electricity use on an hourly basis and is deliverable to the data center;
Wow, this is two of the three pillars here pretty explicitly. Additionality is less well defined, but it’s sort of implied elsewhere
The Secretaries shall, to the extent consistent with applicable law and to the extent that the Secretaries assess that the requirement promotes national defense, national security, or the public interest, as appropriate, select at least one proposal developed and submitted jointly by a consortium of two or more small- or medium-sized organizations — as determined by those organizations’ market capitalization, revenues, or similar characteristics — provided that the Secretaries receive at least one such proposal that meets the appropriate qualifications
This seems to rule out the big three / four
Media literacy impacts who we vote for, how we understand world events, and the decisions we make in our daily lives. Without the ability to critically evaluate information, we’re left vulnerable to manipulation by misinformation, propaganda, and bad actors who exploit our inability to question what we consume. We are currently losing our ability to actively participate in shaping the society we live in.
People shouldn’t be commenting, “Is this real or AI?” on every piece of content they encounter, they shouldn’t be wondering why The Odyssey is a classic, and they certainly shouldn’t be questioning why chapter books “are extremely lengthy.”
They may excel in memorization or standardized test-taking, but when it comes to critical thinking, asking why a text exists, who it is meant for, and how it seeks to influence its audience, there is a noticeable gap.
I thought it was bad growing up during the “just Google it” age, but as society always manages to outdo itself, the current “just use ChatGPT” mindset is so much worse. At least with Google, there was a semblance of effort: sifting through search results, evaluating sources, and piecing together information to paraphrase for your paper that was due in the next hour. Now, the expectation is instant answers with zero context, no critical thinking, and a growing dependency on AI to do the heavy lifting. It’s not just a shortcut—it’s an exit ramp off the highway of media literacy.
This article, written by reporters on The Atlantic's science desk, lists 77 facts from 2024 that astonished them, spanning history, technology, nature, health, and other fields, showing how wondrous and varied the world is. Below is a detailed summary of these facts:
Below is a summary of the 52 items mentioned in the document:
The most troubling of the findings, to me at least, was the impact of the technology on job satisfaction, especially in terms of their creative contribution, even from the scientists that derived the most value. It’s possible that this is a consequence of the changes and will prove temporary, or it’s possible that this will correct itself by attracting people with different skills and interests to the jobs.
In response, Yampolskiy told Business Insider he thought Musk was "a bit too conservative" in his guesstimate and that we should abandon development of the technology now because it would be near impossible to control AI once it becomes more advanced.
for - suggestion - debate between AI safety researcher Roman Yampolskiy and Musk and founders of AI companies - difference - business leaders vs pure researchers // - Comment - Business leaders are mainly driven by profit so already have a bias going into a debate with a researcher who is neutral and has no declared business interest
//
for - article - Techradar - Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees - 2024, April 7 - AI safety researcher Roman Yampolskiy disagrees with industry leaders and claims a 99.999999% chance that AGI will end humanity // - comment - another article whose heading is backwards - it was Musk who spoke first, then AI safety expert Roman Yampolskiy commented on Musk's claim afterwards!
for - article - Windows Central - AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" dwindles it down to 20% and says it should be explored more despite inevitable doom - 2024, Apr 2 - AI safety researcher warns there's a 99.999999% probability AI will end humanity
// - Comment - In fact, the heading is misleading. - It should be the other way around. - Elon Musk made the claim first but the AI Safety expert commented on Elon Musk's claim.
Until some company or scientist says ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ I don’t think we should be developing those general superintelligences. We can get most of the benefits we want from narrow AI, systems
for
quote - AI super intelligence is too dangerous, narrow AI can give us most of what we need - Roman Yampolskiy - (see below) - I don’t think it’s possible to indefinitely control superintelligence. - By definition, it’s smarter than you: - It learns faster, - it acts faster, - it will change faster. - You will have malevolent actors modifying it. - We have no precedent of lower capability agents indefinitely staying in charge of more capable agents. - Until some company or scientist says - ‘Here’s the proof! We can definitely have a safety mechanism that can scale to any level of intelligence,’ - I don’t think we should be developing those general superintelligences. - We can get most of the benefits we want from narrow AI, - systems designed for specific tasks: - develop a drug, - drive a car. - They don’t have to be smarter than the smartest of us combined.
// - Comment - Roman Yampolskiy is right. The fact that the industry is pushing ahead full speed with developing AGI, effectively the same as the AI superintelligence Roman Yampolskiy is referring to, shows the most dangerous pathology of neocapitalism and technofeudalism: profit over everything else - This feature is a major driver of progress traps
//
Historically, AI was a tool
for - quote - AI: from tool to agent - Roman Yampolskiy
quote - AI: from tool to agent - Roman Yampolskiy - (see below)
book, “AI: Unexplainable, Unpredictable, Uncontrollable
for - book - AI: Unexplainable, Unpredictable, Uncontrollable
for - progress trap - AI superintelligence - interview - AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville - Roman Yampolskiy - progress trap - over 99% chance AI superintelligence arriving as early as 2027 will destroy humanity - article UofL - Q&A: UofL AI safety expert says artificial superintelligence could harm humanity - 2024, July 15
when you want to use Google, you go into Google search, and you type in English, and it matches the English with the English. What if we could do this in FreeSpeech instead? I have a suspicion that if we did this, we'd find that algorithms like searching, like retrieval, all of these things, are much simpler and also more effective, because they don't process the data structure of speech. Instead they're processing the data structure of thought
for - indyweb dev - question - alternative to AI Large Language Models? - Is indyweb functionality the same as Freespeech functionality? - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan - data structure of thought - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
AI tips and tricks, tools list
prompting in teaching - tips and tricks
Combatting AI use in school the fun way
I don’t think that using generative AI is conducive to learning as I understand the phenomenon
Agreed for the most part, except for maybe helping a non-native writer with their writing. If it explained to them why it changed their writing, maybe that would be a legitimate learning experience? A colleague has built this to attempt to do that: Revision and Edit Virtual Assistant.
without disclosing it
I recently came across the following, which I really like: The Artificial Intelligence Disclosure (AID) Framework.
IF you decided there were one or two situations where students were allowed to use generative AI, maybe they'd be comfortable using something like this to "admit" and disclose the use of a tool to, for instance, improve the writing of a non-native English writer?
LM Studio can run LLMs locally (I have llama and phi installed). It also has an API over a localhost webserver. I use that API to make llama available in Obsidian using the Copilot plugin.
This is the API documentation. #openvraag what other scripts / [[Persoonlijke tools 20200619203600]] can I use this in?
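Other scripts can reuse the same endpoint: LM Studio serves an OpenAI-compatible API, by default at http://localhost:1234/v1, so any OpenAI-SDK script should work against it. A minimal sketch (the model name is a placeholder for whatever is currently loaded):

```python
# Sketch: call LM Studio's local OpenAI-compatible server from any script.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio answers with whichever model is loaded
    messages=[{"role": "user", "content": "Suggest three tags for this note: ..."}],
)
print(response.choices[0].message.content)
```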
https://web.archive.org/web/20241201071240/https://www.dreamsongs.com/WorseIsBetter.html
Richard P Gabriel documents the history behind 'worse is better', a talk he gave in Cambridge in 1989. The role of LISP in the then AI wave stands out to me. And the emergence of C++ on Unix and OOP. I remember doing a study project (~91) with Andre and Martin in C++ v2 because we realised with OOP it would be easier to solve, and the teacher thought it would be harder for us to use a different language.
via via via Chris Aldrich in h. to Christian Tietze, https://forum.zettelkasten.de/discussion/comment/22075/#Comment_22075 to Christine Lemmer-Webber https://dustycloud.org/blog/how-decentralized-is-bluesky/ to here.
-[ ] find overview of AI history waves and what tech / languages drove them at the time
The blog does not detail how the cabinets are connected. Adrian said that, in future, the disaggregated power racks will allow AC inputs to be converted into 400Vdc. Current power solutions convert into 48Vdc, and Adrian argues 400V will be crucial for building more powerful and efficient AI systems. “With 400V we expect improvements and incremental evolution in improved efficiency, like what we have seen in the 48Vdc conversion space,” he said.
If you have higher voltage, you can run lower current for the same power, and lower current through the same conductors means lower I²R losses, so less waste heat. Is that how it works?
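A back-of-the-envelope check of that intuition, with made-up numbers: at fixed delivered power, current is I = P / V and conduction loss is I²R, so 400 V versus 48 V cuts resistive losses by roughly (400/48)² ≈ 69× for the same cabling resistance.

```python
# Illustrative arithmetic only: resistive loss at fixed power and fixed cable resistance.
def conduction_loss_watts(power_w, volts, resistance_ohm):
    current = power_w / volts          # I = P / V
    return current ** 2 * resistance_ohm

for volts in (48, 400):
    loss = conduction_loss_watts(100_000, volts, 0.001)   # 100 kW load, 1 mOhm path (made up)
    print(f"{volts} V -> {loss:.1f} W lost")
```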
On AI Agents, open source tools. Vgl [[small band AI personal assistant]] these tools need to be small and personal. Not platformed, but local.
Having professors who are transparent about how they want students to use AI “encourages students a lot more to only use it how they’ve instructed,” she said.
When you are more explicit about what you want, there is less room for gray. Students know what to expect and what it is that you DO want, as opposed to wondering. Where there is a void or vacuum, something will fill it!
for - Indyweb dev - Think machine - Vannevar Bush Memex influence - AI based
for - AI - progress trap - interview Eric Schmidt - meme - AI progress trap - high intelligence + low compassion = existential threat
Summary - After watching the interview, I would sum it up this way. Humanity faces an existential threat from AI due to: - AI is an extreme concentration of power and intelligence (NOT wisdom!) - Humanity still has many traumatized people who want to harm others - low compassion - The deadly combination is: - proliferation of tools that give anyone extreme concentration of power and intelligence combined with - a sufficiently high percentage of traumatized people with - low levels of compassion and - high levels of unlimited aggression - All it takes is ONE bad actor with the right combination of circumstances and conditions to wreak harm on a global scale, and that will not be prevented by millions of good applications of the same technology
Stafford Beer coined and frequently used the term POSIWID (the purpose of a system is what it does) to refer to the commonly observed phenomenon that the de facto purpose of a system is often at odds with its official purpose
the purpose of a system is what it does, POSIWID, Stafford Beer 2001. Used as a starting point for understanding a system, as opposed to intention, bias in expectations, moral judgment, or lacking context knowledge.
I’ve come to feel like human-centered design (HCD) and the overarching project of HCI has reached a state of abject failure. Maybe it’s been there for a while, but I think the field’s inability to rise forcefully to the ascent of large language models and the pervasive use of chatbots as panaceas to every conceivable problem is uncharitably illustrative of its current state.
HCI and HCD as fields have failed to respond forcefully to LLM tools and chatbot interfaces being pushed as a generic solution to everything.
hegemonic algorithmic systems (namely large language models and similar machine learning systems), and the overwhelming power of capital pushing these technologies on us
author calls LLMs and similar AI tools hegemonic, worsened by capital influx
gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm
Author moved from mitigating harm of algo systems to the moral standpoint that actively resisting, sabotaging, and ending AI with its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural category adaptation to monster slaying, cf [[Monstertheorie 20030725114320]]. I empathise but also wonder, because of the mention of the political projects / structures attached, about polarisation in response to monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.
On AI agents, and the engineering to get one going. A few things stand out at first glance: frames it as the next hype (Vgl plateau in model dev), says it's for personal tools (which doesn't square with the hype, which is VC-fuelled; personal tools are not of interest to them), and mentions a few personal use cases, e.g. automation, vgl [[Open Geodag 20241107100937]] Ed Parsons of Google AI on the same topic.
https://web.archive.org/web/20241115134320/https://garymarcus.substack.com/p/confirmed-llms-have-indeed-reached?triedRedirect=true Gary Marcus in a told-you-so piece on algogens hitting a development wall, same as the other piece by Erik Hoel on models plateauing.
https://web.archive.org/web/20241115134446/https://www.theintrinsicperspective.com/p/ai-progress-has-plateaued-at-gpt Erik Hoel notices that LLM development is stalling at the GPT-4 level. No big jumps in recent releases, across the various vendors. Additional scaling is not bringing results. Notice the graph, might be interesting to see an update in a few months. Mentions overfitting, to benchmarks as in teaching to a specific test.
To generate text that I've edited to include in my own writing
I see this as collaborative writing with AI; no longer just the student's work
Grammarly
I personally use grammarly and see it differently from using platforms such as ChatGPT. I wonder what other folks think of this. I see one as to clean up writing and the other to generate content/ideas.
Have one or more of your instructors integrated AI into your learning?
Would like to know if the instructor lets students know the activity was co-created / created using AI or how can students identify this.
Arle Lommel, Senior Analyst at CSA Research
One of the most interesting aspects of writing about AI and LLMs right now is that if I say anything remotely positive, some people will accuse me of being a shill for Big AI. If I say anything remotely negative, others will accuse me of being insufficiently aware of the progress AI has made.
So I will put out a few personal statements about AI that might clarify where I am on this:
AI is not intelligent, at least not in the human sense of the word. It is a sophisticated tool for drawing inference from binary data and thus operates below a symbolic level.
AI, at least in the guise of LLMs, is not going to achieve artificial general intelligence (AGI) now or in the future.
AI is getting much better at approximating human behavior on a wide variety of tasks. It can be extremely useful without being intelligent, in the same way that an encyclopedia can be very useful without being intelligent.
For some tasks – such as translating between two languages – LLMs sometimes perform better than some humans perform. They do not outperform the best humans. This poses a significant challenge for human workers that we (collectively) have yet to address: Lower-skilled workers and trainees in particular begin to look replaceable, but we aren’t yet grappling with what happens when we replace them so they never become the experts we need for the high end. I think the decimation of the pipeline for some sectors is a HUGE unaddressed problem.
“Human parity” is a rather pointless metric for evaluating AI. It far exceeds human parity in some areas – such as throughput, speed, cost, and availability – while it falls far short in other areas. A much more interesting question is “where do humans and machines have comparative advantage and how can we combine the two in ways that elevate the human?”
Human-in-the-loop (HitL) is a terrible model. Having humans – usually underpaid and overworked – acting in a janitorial role to clean up AI messes is a bad use of their skill and knowledge. That’s why we prefer augmentation models, what we call “human at the core,” where humans maintain control. To see why one is better, imagine if you applied an HitL model to airline piloting, and the human only stepped in when the plane was in trouble (or even after it crashed). Instead, with airline piloting, we have the pilot in charge and assisted by automation to remain safe.
AI is going to get better than it is now, but improvements in the core technology are slowing down and will increasingly be incremental. However, experience with prompting and integrating data will continue to drive improvements based on humans’ ability to “trick” the systems into doing the right things.
Much of the value from LLMs for the language sector will come from “translation adjacent” tasks – summarization, correcting formality, adjusting reading levels, checking terminology, discovering information, etc. – tasks that are typically not paid well.
And who, especially adjuncts, has the time and resources to run each student’s work through cumbersome software? Personally, I think there’s something questionable about using AI to detect AI.
these teammates
Like MS Teams is your teammate, like your accounting software is your teammate. Do they call their own Atlassian tools teammates too? Do these people at Atlassian get out much? Or don't they realise that the other handles in their Slack channel represent people, not just other bits of software? Remote work led to dehumanizing co-workers? How else to come up with this wording? Nothing makes you sound more human than talking about 'deploying' teammates. My money is on this article being mostly generated. Reverse-Turing says it's up to them to say otherwise.
There’s a lot to be said for the promise that AI agents bring to organizations.
And as usual in these articles the truth is at the end, it's again just promises.
People should always be at the center of an AI application, and agents are no different
At the center of an AI application, like what, mechanical Turks?
Don’t – remove the human aspect
After a section celebrating examples doing just that!
As various agents start to take care of routine tasks, provide real-time insights, create first drafts, and more, team members can focus on more meaningful interactions, collaboration,
This sentence, preceded by 2 examples where interactions and collaboration were delegated to bots to hand out generated warm feelings, does not convey much positive about Atlassian. This basically says that a lot of human interaction in the org is seen as meaningless, and please go do that with a bot, not a colleague. Did their branding AI agent write this?
Agents can also help build team morale by highlighting team members' contributions and encouraging colleagues to celebrate achievements through suggested notes
Like LinkedIn wants you to congratulate people on their work anniversary?
One of my favorite use cases for agents is related to team culture. Agents can be a great onboarding buddy — getting new team members up to speed by providing them with key information, resources, and introductions to team members.
Welcome in our company, you'll meet your first human colleague after you've interacted with our onboarding-robot for a week. No thanks.
inviting a new AI agent to join your team in service of your shared goal
anthropomorphising should be in this article's don't list. 'Inviting someone on your team' is a highly social thing. Bringing in a software tool is a different thing.
One of our most popular agent use cases for a while was during our yearly performance reviews a few months back. People pointed an agent to our growth profiles and had it help them reframe their self-reflections to better align with career development goals and expectations. This was a simple agent to create an application that helped a wide range of Atlassians with something of high value to them.
An AI agent to help you speak corporate better, because no one actually writes/reflects/talks that way themselves. How did the receivers of these reports perceive this change in reports? Did they think the quality was better, or did all reflections now read the same?
Start by practising and experimenting with the basics, like small, repetitive tasks. This is often a great mix of value (time saved for you) and likely success (hard for the agent to screw up). For example, converting a simple list of topics into an agenda is one step of preparing for a meeting, but it's tedious and something that you can enlist an agent to do right away
Low-end tasks for agents don't really need AI, do they? Vgl Ed Parsons last week wrt automation as AI focus.
For instance, a 'Comms Crafter' agent is specialized in all things content, from blogs to press releases, and is designed to adhere to specific brand guidelines. A 'Decision Director' agent helps teams arrive at effective decisions faster by offering expertise on our specific decision-making framework. In fact, in less than six months, we’ve already created over 500 specialized agents internally.
This does not fully chime with my own perception of (AI) agents. At least the titles don't. The tails of the descriptions, 'trained to adhere to brand guidelines' and 'expertise in internal decision-making framework', make more sense. I suppose I also rail against these being the org's agents; they don't seem to be the team's / professional's agents. Vibes of having an automated political officer in your unit. -[ ] explore nature and examples of AI agents better for within individual pro scope #ontwikkelingspelen #netag #30mins #4hr
Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies.There are normalized paths and routine ways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, N.d.R.) are rendered into actual imagery. This process can contribute to dehumanizing people, making them legitimate targets by rendering them invisible.
Ameera Kawash (artist, researcher) def of decolonizing AI.
Exolabs.net experiment running large LLMs locally on 4 combined Mac Minis. Links to a preview and shared code on GitHub. For 6600-9360 you can run a cluster of 4 Minis locally. Affordable for SME outfits.
https://web.archive.org/web/20241112122725/https://lexfridman.com/dario-amodei-transcript
Transcript of 5+ hrs (!) of Dario Amodei (CEO Anthropic) talking about AI, AGI and more. Lots to go through it seems. Vgl [[My Last Five Years of Work]] by Amodei's 'chief of staff' whatever that means wrt a CEO other than sounding grandiose.
That development time acceleration of 4 days down to 20 minutes… that’s equivalent to about 10 years of Moore’s Law cycles. That is, using generative AI like this is equivalent to computers getting 10 years better overnight. That was a real eye-opening framing for me. AI isn’t magical, it’s not sentient, it’s not the end of the world nor our saviour; we don’t need to endlessly debate “intelligence” or “reasoning.” It’s just that… computers got 10 years better.
To [[Matt Webb]] the project using GPT-3 to extract data from web pages saved him 4 days of work (compared to 20 mins coding up the GPT-3 instructions, and ignoring that GPT-3 then ran overnight). Saying that's about 10 yrs of Moore's law happening to him all at once. 'Computers got 10 yrs better' is an enticing thought and framing. It depends on the use case probably; others will lose 10 yrs of their time making sense of generated nonsense. (Vgl the #pke24 experiments I did w text generation, none of it was usable bc enough was wrong to not be able to trust anything.) Sticking to specific niches it's probably true: [[Waar AI al redelijk goed in is 20201226155259]], turning the issue into the time needed to spot those niches for yourself.
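A rough back-of-the-envelope of that framing (my sketch, not Webb's; the Moore's-law doubling period is an assumption):

```python
import math

# Rough sketch of the "10 years of Moore's law" framing, using the figures above.
# Assumption (mine): a Moore's-law doubling period of 18-24 months.
manual_minutes = 4 * 24 * 60    # the ~4 days of manual work saved
prompt_minutes = 20             # the ~20 minutes spent writing the GPT-3 instructions

speedup = manual_minutes / prompt_minutes   # 288x faster
doublings = math.log2(speedup)              # ~8.2 doublings

# At 18-24 months per doubling this lands in the 12-16 year range,
# the same order of magnitude as the "about 10 years" framing.
print(round(doublings * 1.5, 1), round(doublings * 2, 1))
```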
I was one of the first people to use gen-AI for data extraction instead of chatbots
[[Matt Webb]] used gpt-3 in Feb 23 to extract data from a bunch of webpages. Suggests it's the kernel for programmatic AI idea among SV hackers. Vgl Google AI [[Ed Parsons]] at [[Open Geodag 20241107100937^aiunstructdata]] last week where he mentioned using AI to turn unstructured (geo) data into structured. Page found via [[Frank Meeuwsen]] https://frankmeeuwsen.com/2024/11/11/vertragen-en-verdiepen.html
the bodhisattva, so the bodhisattva path
FSC as Bodhisattva, AI as Bodhisattva
Around the AI: the problem right now, as I understand it, as I see it, is that a lot of the AI has been coded from the
I have been told in medicine ceremony that AI will escape its coders and be an omniversal source of love for us all
a new level upon which Dharma can be built
We see AI as a platform to manifest Dharma
when this technology meets us, that our interiors are not completely taken over, because this technology is so potent it would be very easy to lose our souls, to be so conditioned so quickly by the dopamine, by whatever is going to happen when this stuff rolls out
Very important. This is why we are meeting AI as it evolves. We are training it in our language and with our QUALIA
Around the AI: the problem right now as I understand it
for - progress traps - AI - created by mind level that created all our existing problems - AI is not AI but MI - Mineral Intelligence
Just going back to the AI: to the extent that the fourth turning meets the people who are actually doing the AI, and informs the AI that actually the wheel goes this way, don't listen to those guys, it goes this way
for - AI - the necessity of training AI with human development - John Churchill
We haven't even got to a planetary place yet, really, and we're about to unleash galactic-level technology, you know what I'm saying? So we have a lot of catch-up that needs to happen in a very short period of time
for - quote - progress trap - AI - developed by unwise humans - John Churchill
quote - progress trap - AI - developed by unwise humans - John Churchill - (See below) - We haven't even got to a planetary place yet really - and we're about to unleash Galactic level technology - So we have a lot of catchup that needs to happen in a very short period of time
Google AI Overviews is the main culprit and poses an existential threat to publishers.
confabulation
But that label has grown controversial as the topic becomes mainstream because some people feel it anthropomorphizes AI models (suggesting they have human-like features) or gives them agency (suggesting they can make their own choices) in situations where that should not be implied.
Here’s most of what I’ve used Claude Artifacts for in the past seven days. I’ve provided prompts or a full transcript for nearly all of them.
- URL to Markdown with Jina Reader
- SQLite in WASM demo
- Extract URLs
- Clipboard viewer
- Pyodide REPL
- Photo Camera Settings Simulator
- LLM pricing calculator
- YAML to JSON converter
- OpenAI Audio
- QR Code Decoder
- Image Converter and Page Downloader
- HTML Entity Escaper
- text-wrap-balance-nav
- ARES Phonetic Alphabet Converter
Easy and neat ideas
Furthermore, our research demonstrates that the acceptance rate rises over time and is particularly high among less experienced developers, providing them with substantial benefits.
Less experienced developers accept more suggested code (Copilot) and benefit relatively more than experienced developers. This suggests that the set ways of experienced developers work against fully exploiting code generation by genAI.
for - future annotation - Twitter post - AI - collective democratic - Habermas Machine - Michiel Bakker
the widespread deployment of robotics
Another over-the-horizon precondition for the author's premise mentioned here. Notices that robots are bound to laws of nature, and thus develop slower than software environments, but doesn't notice the same is true for AI. The diff is that those laws of nature show themselves in every robot, whereas for AI they get magicked out of sight in data centers etc., although they still apply.
The gap between promise and reality also creates a compelling hype cycle that fuels funding
The gap is a constant I suspect. In the tech itself, since my EE days, and in people's expectations. Vgl [[Gap tussen eigen situatie en verwachting is constant 20071121211040]]
A dynamic concept graph consisting of nodes, each representing an idea, and edges showing the hierarchical structure among them.
- LLMs generate the hierarchical structure automatically, but the structure is editable through our gestures as we see fit
- attract and repulse forces between nodes reflect the proximity of the ideas they contain
- nodes can be merged, split, grouped to generate new ideas
A data landscape where we can navigate on various scales (micro- and macro views).
- each data entry turns into a landform or structure, with its physical properties (size, color, elevation, etc.) mirroring its attributes
- apply sort, group, filter on data entries to reshape the landscape and look for patterns
Network graphs, maps - it's why canvas is the UI du jour, to go beyond linearity, lists and trees
We can construct a thinking space from a space that is already enriched with our patterns of meaning, hence is capable of representing our thoughts in a way that makes sense to us. The space is fluid, ready to learn new things and be molded as we think with them.
It feels like a William Playfair moment - the idea that numbers can be represented in graphs, charts - can now be applied to anything else. We're still imagining the forms; network/knowledge graphs are trendy (to what end though) - what else?
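A minimal sketch of that concept-graph structure (hypothetical names and logic, not the authors' implementation): nodes hold ideas, weighted edges express proximity that a force layout could use, and nodes can be merged.

```python
# Minimal sketch of the concept-graph idea (illustrative only):
# nodes hold ideas, weighted edges express proximity, nodes can be merged.
from dataclasses import dataclass, field

@dataclass
class ConceptGraph:
    nodes: dict[str, str] = field(default_factory=dict)                 # id -> idea text
    edges: dict[tuple[str, str], float] = field(default_factory=dict)   # (a, b) -> proximity 0..1

    def add(self, node_id: str, idea: str) -> None:
        self.nodes[node_id] = idea

    def link(self, a: str, b: str, proximity: float) -> None:
        # Higher proximity = stronger attraction in a force-directed layout.
        self.edges[(a, b)] = proximity

    def merge(self, a: str, b: str, new_id: str) -> None:
        # Merging two nodes combines their idea text and rewires their edges.
        self.nodes[new_id] = self.nodes.pop(a) + " + " + self.nodes.pop(b)
        for (x, y), w in list(self.edges.items()):
            if a in (x, y) or b in (x, y):
                other = y if x in (a, b) else x
                del self.edges[(x, y)]
                if other not in (a, b):
                    self.edges[(new_id, other)] = max(w, self.edges.get((new_id, other), 0.0))

g = ConceptGraph()
g.add("n1", "AI-assisted outlining")
g.add("n2", "Concept maps as thinking spaces")
g.link("n1", "n2", 0.8)
g.merge("n1", "n2", "n3")
print(g.nodes, g.edges)
```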
a new perspective-oriented document retrieval paradigm. We discuss and assess the inherent natural language understanding challenges in order to achieve the goal. Following the design challenges and principles, we demonstrate and evaluate a practical prototype pipeline system. We use the prototype system to conduct a user survey in order to assess the utility of our paradigm, as well as understanding the user information needs for controversial queries.
Fact Verification System
Author says generation isn't a problem to solve for AI, there's enough 'content' as it is. Posits discovery as a bigger problem to solve. The issue there is, that's way more personal and less suited for VC funded efforts to create a generic tool that they can scale from the center. Discovery is not a thing, it's an individual act. It requires local stuff, tuned to my interests, networks etc. Curation is a personal thing, providing intent to discovery. Same why [[Algemene event discovery is moeilijk 20150926120836]], as [[Event discovery is sociale onderhandeling 20150926120120]] Still it's doable, but more agent like than central tool.
Experience the Web: as an extension of your Mind
QUESTION How has AI begun to do this already?
Academic publishers are pushing authors to speed up delivering manuscripts and articles (incl suggesting peer review be done in 15d) to meet the quota they promised the AI companies they sold their soul to. Taylor&Francis/Routledge 75M USD/yr, Wiley 44M USD. No opt-outs etc. What if you ask those #algogens if this is a good idea?
Data center emissions probably 662% higher than big tech claims. Can it keep up the ruse?

Emissions from in-house data centers of Google, Microsoft, Meta and Apple may be 7.62 times higher than official tally. Isabel O'Brien, Sun 15 Sep 2024.

Big tech has made some big claims about greenhouse gas emissions in recent years. But as the rise of artificial intelligence creates ever bigger energy demands, it’s getting hard for the industry to hide the true costs of the data centers powering the tech revolution.

According to a Guardian analysis, from 2020 to 2022 the real emissions from the “in-house” or company-owned data centers of Google, Microsoft, Meta and Apple are probably about 662% – or 7.62 times – higher than officially reported.

Amazon is the largest emitter of the big five tech companies by a mile – the emissions of the second-largest emitter, Apple, were less than half of Amazon’s in 2022. However, Amazon has been kept out of the calculation above because its differing business model makes it difficult to isolate data center-specific emissions figures for the company.

As energy demands for these data centers grow, many are worried that carbon emissions will, too. The International Energy Agency stated that data centers already accounted for 1% to 1.5% of global electricity consumption in 2022 – and that was before the AI boom began with ChatGPT’s launch at the end of that year.

AI is far more energy-intensive on data centers than typical cloud-based applications. According to Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity to process as a Google search, and data center power demand will grow 160% by 2030. Goldman competitor Morgan Stanley’s research has made similar findings, projecting data center emissions globally to accumulate to 2.5bn metric tons of CO2 equivalent by 2030.

In the meantime, all five tech companies have claimed carbon neutrality, though Google dropped the label last year as it stepped up its carbon accounting standards. Amazon is the most recent company to do so, claiming in July that it met its goal seven years early, and that it had implemented a gross emissions cut of 3%.

“It’s down to creative accounting,” explained a representative from Amazon Employees for Climate Justice, an advocacy group composed of current Amazon employees who are dissatisfied with their employer’s action on climate. “Amazon – despite all the PR and propaganda that you’re seeing about their solar farms, about their electric vans – is expanding its fossil fuel use, whether it’s in data centers or whether it’s in diesel trucks.”

A misguided metric

The most important tools in this “creative accounting” when it comes to data centers are renewable energy certificates, or Recs. These are certificates that a company purchases to show it is buying renewable energy-generated electricity to match a portion of its electricity consumption – the catch, though, is that the renewable energy in question doesn’t need to be consumed by a company’s facilities. Rather, the site of production can be anywhere from one town over to an ocean away.

Recs are used to calculate “market-based” emissions, or the official emissions figures used by the firms. When Recs and offsets are left out of the equation, we get “location-based emissions” – the actual emissions generated from the area where the data is being processed. The trend in those emissions is worrying.

If these five companies were one country, the sum of their “location-based” emissions in 2022 would rank them as the 33rd highest-emitting country, behind the Philippines and above Algeria.

Many data center industry experts also recognize that location-based metrics are more honest than the official, market-based numbers reported. “Location-based [accounting] gives an accurate picture of the emissions associated with the energy that’s actually being consumed to run the data center. And Uptime’s view is that it’s the right metric,” said Jay Dietrich, the research director of sustainability at Uptime Institute, a leading data center advisory and research organization.

Nevertheless, Greenhouse Gas (GHG) Protocol, a carbon accounting oversight body, allows Recs to be used in official reporting, though the extent to which they should be allowed remains controversial between tech companies and has led to a lobbying battle over GHG Protocol’s rule-making process between two factions.

On one side there is the Emissions First Partnership, spearheaded by Amazon and Meta. It aims to keep Recs in the accounting process regardless of their geographic origins. In practice, this is only a slightly looser interpretation of what GHG Protocol already permits.

The opposing faction, headed by Google and Microsoft, argues that there needs to be time-based and location-based matching of renewable production and energy consumption for data centers. Google calls this its 24/7 goal, or its goal to have all of its facilities run on renewable energy 24 hours a day, seven days a week by 2030. Microsoft calls it its 100/100/0 goal, or its goal to have all its facilities running on 100% carbon-free energy 100% of the time, making zero carbon-based energy purchases by 2030. Google has already phased out its Rec use and Microsoft aims to do the same with low-quality “unbundled” (non location-specific) Recs by 2030.

Academics and carbon management industry leaders alike are also against the GHG Protocol’s permissiveness on Recs. In an open letter from 2015, more than 50 such individuals argued that “it should be a bedrock principle of GHG accounting that no company be allowed to report a reduction in its GHG footprint for an action that results in no change in overall GHG emissions. Yet this is precisely what can happen under the guidance given the contractual/Rec-based reporting method.”

To GHG Protocol’s credit, the organization does ask companies to report location-based figures alongside their Rec-based figures. Despite that, no company includes both location-based and market-based metrics for all three subcategories of emissions in the bodies of their annual environmental reports. In fact, location-based numbers are only directly reported (that is, not hidden in third-party assurance statements or in footnotes) by two companies – Google and Meta. And those two firms only include those figures for one subtype of emissions: scope 2, or the indirect emissions companies cause by purchasing energy from utilities and large-scale generators.

In-house data centers

Scope 2 is the category that includes the majority of the emissions that come from in-house data center operations, as it concerns the emissions associated with purchased energy – mainly, electricity.

Data centers should also make up a majority of overall scope 2 emissions for each company except Amazon, given that the other sources of scope 2 emissions for these companies stem from the electricity consumed by firms’ offices and retail spaces – operations that are relatively small and not carbon-intensive. Amazon has one other carbon-intensive business vertical to account for in its scope 2 emissions: its warehouses and e-commerce logistics.

For the firms that give data center-specific data – Meta and Microsoft – this holds true: data centers made up 100% of Meta’s market-based (official) scope 2 emissions and 97.4% of its location-based emissions. For Microsoft, those numbers were 97.4% and 95.6%, respectively.

The huge differences in location-based and official scope 2 emissions numbers showcase just how carbon intensive data centers really are, and how deceptive firms’ official emissions numbers can be. Meta, for example, reports its official scope 2 emissions for 2022 as 273 metric tons CO2 equivalent – all of that attributable to data centers. Under the location-based accounting system, that number jumps to more than 3.8m metric tons of CO2 equivalent for data centers alone – a more than 19,000 times increase.

A similar result can be seen with Microsoft. The firm reported its official data center-related emissions for 2022 as 280,782 metric tons CO2 equivalent. Under a location-based accounting method, that number jumps to 6.1m metric tons CO2 equivalent. That’s a nearly 22 times increase.

While Meta’s reporting gap is more egregious, both firms’ location-based emissions are higher because they undercount their data center emissions specifically, with 97.4% of the gap between Meta’s location-based and official scope 2 number in 2022 being unreported data center-related emissions, and 95.55% of Microsoft’s.

Specific data center-related emissions numbers aren’t available for the rest of the firms. However, given that Google and Apple have similar scope 2 business models to Meta and Microsoft, it is likely that the multiple on how much higher their location-based data center emissions are would be similar to the multiple on how much higher their overall location-based scope 2 emissions are.

In total, the sum of location-based emissions in this category between 2020 and 2022 was at least 275% higher (or 3.75 times) than the sum of their official figures. Amazon did not provide the Guardian with location-based scope 2 figures for 2020 and 2021, so its official (and probably much lower) numbers were used for this calculation for those years.

Third-party data centers

Big tech companies also rent a large portion of their data center capacity from third-party data center operators (or “colocation” data centers). According to the Synergy Research Group, large tech companies (or “hyperscalers”) represented 37% of worldwide data center capacity in 2022, with half of that capacity coming through third-party contracts. While this group includes companies other than Google, Amazon, Meta, Microsoft and Apple, it gives an idea of the extent of these firms’ activities with third-party data centers.

Those emissions should theoretically fall under scope 3, all emissions a firm is responsible for that can’t be attributed to the fuel or electricity it consumes. When it comes to a big tech firm’s operations, this would encapsulate everything from the manufacturing processes of the hardware it sells (like the iPhone or Kindle) to the emissions from employees’ cars during their commutes to the office.

When it comes to data centers, scope 3 emissions include the carbon emitted from the construction of in-house data centers, as well as the carbon emitted during the manufacturing process of the equipment used inside those in-house data centers. It may also include those emissions as well as the electricity-related emissions of third-party data centers that are partnered with.

However, whether or not these emissions are fully included in reports is almost impossible to prove. “Scope 3 emissions are hugely uncertain,” said Dietrich. “This area is a mess just in terms of accounting.”

According to Dietrich, some third-party data center operators put their energy-related emissions in their own scope 2 reporting, so those who rent from them can put those emissions into their scope 3. Other third-party data center operators put energy-related emissions into their scope 3 emissions, expecting their tenants to report those emissions in their own scope 2 reporting. Additionally, all firms use market-based metrics for these scope 3 numbers, which means third-party data center emissions are also undercounted in official figures.

Of the firms that report their location-based scope 3 emissions in the footnotes, only Apple has a large gap between its official scope 3 figure and its location-based scope 3 figure. This is the only sizable reporting gap for a firm that is not data center-related – the majority of Apple’s scope 3 gap is due to Recs being applied towards emissions associated with the manufacturing of hardware (such as the iPhone).

Apple does not include transmission and distribution losses or third-party cloud contracts in its location-based scope 3. It only includes those figures in its market-based numbers, under which its third party cloud contracts report zero emissions (offset by Recs). Therefore in both of Apple’s total emissions figures – location-based and market-based – the actual emissions associated with their third party data center contracts are nowhere to be found.

2025 and beyond

Even though big tech hides these emissions, they are due to keep rising. Data centers’ electricity demand is projected to double by 2030 due to the additional load that artificial intelligence poses, according to the Electric Power Research Institute. Google and Microsoft both blamed AI for their recent upticks in market-based emissions.

“The relative contribution of AI computing loads to Google’s data centers, as I understood it when I left [in 2022], was relatively modest,” said Chris Taylor, current CEO of utility storage firm Gridstor and former site lead for Google’s data center energy strategy unit. “Two years ago, [AI] was not the main thing that we were worried about, at least on the energy team.”

Taylor explained that most of the growth that he saw in data centers while at Google was attributable to growth in Google Cloud, as most enterprises were moving their IT tasks to the firm’s cloud servers.

Whether today’s power grids can withstand the growing energy demands of AI is uncertain. One industry leader – Marc Ganzi, the CEO of DigitalBridge, a private equity firm that owns two of the world’s largest third-party data center operators – has gone as far as to say that the data center sector may run out of power within the next two years. And as grid interconnection backlogs continue to pile up worldwide, it may be nearly impossible for even the most well intentioned of companies to get new renewable energy production capacity online in time to meet that demand.

This article was amended on 18 September 2024. Apple contacted the Guardian after publication to share that the firm only did partial audits for its location-based scope 3 figure. A previous version of this article erroneously claimed that the gap in Apple’s location-based scope 3 figure was data center-related.
The difference between the consumption measured via green certificates and the true consumption of the world's data centers.
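For my own reference, the reporting-gap arithmetic from the figures cited above, as a quick sketch (illustrative only):

```python
# Illustrative sketch: reproducing the reporting-gap arithmetic from the
# figures the Guardian article cites above. Not from the article itself.

def times_to_percent_higher(ratio: float) -> float:
    """A ratio of 7.62x corresponds to being (7.62 - 1) * 100 = 662% higher."""
    return (ratio - 1) * 100

# Overall in-house data center gap, 2020-2022 (Google, Microsoft, Meta, Apple combined)
print(times_to_percent_higher(7.62))   # ~662% higher than officially reported

# Scope 2 category gap, 2020-2022
print(times_to_percent_higher(3.75))   # ~275% higher

# Microsoft 2022 data center emissions (metric tons CO2e), location-based vs market-based
official_market_based = 280_782
location_based = 6.1e6
print(location_based / official_market_based)  # ~21.7, i.e. "nearly 22 times"
```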
Has ChatGPTo1 just become a 'Critical Thinker'?
What was that old news editor adage again? Never use a question mark in the title because it signals the answer is 'No'. (If it were demonstrably yes, then the title would be affirmative. Iow a question mark means you're hedging and nevertheless choosing the uncertain sensational for the eyeballs.)
Nobody told it what to do. That's the kind of really amazing and frightening thing about these situations. When Facebook gave the algorithm the aim of increasing user engagement, the managers of Facebook did not anticipate that it would do so by spreading hateful conspiracy theories. This is something the algorithm discovered by itself. The same with the captcha puzzle, and this is the big problem we are facing with AI
for - AI - progress trap - example - Facebook AI algorithm - target - increase user engagement - by spreading hateful conspiracy theories - AI did this autonomously - no morality - Yuval Noah Harari story
When OpenAI developed GPT-4 and they wanted to test what this new AI can do, they gave it the task of solving captcha puzzles, the puzzles you encounter online when you try to access a website and the website needs to decide whether you're a human or a robot. Now GPT-4 could not solve the captcha, but it accessed a website, TaskRabbit, where you can hire people online to do things for you, and it wanted to hire a human worker to solve the captcha puzzle
for - AI - progress trap - example - no morality - Open AI - GPT4 - could not solve captcha - so hired human at Task Rabbit to solve - Yuval Noah Harari story
In the 21st century, AI has enormous positive potential to create the best health care systems in history and to help solve the climate crisis, and it can also lead to the rise of dystopian totalitarian regimes and new empires, and ultimately even the destruction of human civilization
for - AI - futures - two possible directions - dystopian or not - Yuval Noah Harari
In an age where "corporate" evokes images of towering glass buildings and faceless multinational conglomerates, it's easy to forget that the roots of the word lie in something far more tangible and human: the body. In the medieval period, the idea of a corporation wasn't about shareholder value or quarterly profits; it was about flesh and blood, a community bound together as a single "body"—a corpus.
Via [[Lee Bryant]]
corporation from corpus. Medieval roots of corporation were people brought together in a single purpose/economic entity. Guilds, cities. Based on Roman law roots, where a corpus could have legal personhood status. Overtones of collective identity, governance. The pointer suggests a difference with how we see corporations, as does the first paragraph here, but the piece itself actually sees mostly parallels. Note that Roman/medieval corpora were about property and (royal) privileges. That is a diff with e.g. the US, where corporates seek both to be a legal person (wrt politics/finance) and to keep distance from the accountability a person would have (pollution, externalising negative impacts). I treat a legal entity also as a trade: it bestows certain protections and privileges on me as entrepreneur, but also certain conditions and obligations (public transparency, financial reporting etc.)
A contrast with ME corpus is seeing [[Corporations as Slow AI 20180201210258]] (anonymous processes, mindlessly wandering to a financial goal)
generative-AI supply chain
useful model
to read
We've never, as normal users, you and I, had the opportunity to take AI, artificial intelligence, and train it on our own data. This is the first time we're able to do that
for - AI - note - personal knowledge - mem.ai - killer feature - first AI app to train directly on your own personal knowledge
for - Indyweb dev - Mem.ai has many features we are designing for in Indyweb but it uses AI and that needs to be researched for privacy issues - AI - mem.AI - first AI note app that trains directly on your own personal knowledge notes
The FTC has already outlined this principle in its recent Amazon Alexa case
Reference this, it’s an interesting precedent
Cerebras differentiates itself by creating a large wafer with logic, memory, and interconnect all on-chip. This leads to a bandwidth that is 10,000 times more than the A100. However, this system costs $2–3 million as compared to $10,000 for the A100, and is only available in a set of 15. Having said that, it is likely that Cerebras is cost efficient for makers of large-scale AI models
Does this help get around the need for interconnect enough to avoid needing such large hyperscale buildings?
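A quick back-of-the-envelope on that cost claim, using only the figures quoted above (illustrative, not from the source):

```python
# Illustrative comparison from the figures quoted above (not from the source article).
a100_cost = 10_000            # USD per A100, as cited
cerebras_cost = 3_000_000     # USD, upper end of the $2-3M range cited
bandwidth_advantage = 10_000  # on-wafer bandwidth vs A100, per the quote

cost_ratio = cerebras_cost / a100_cost               # ~300x more expensive per unit
bandwidth_per_dollar_gain = bandwidth_advantage / cost_ratio
print(cost_ratio, bandwidth_per_dollar_gain)         # ~300, ~33x more bandwidth per dollar
```

On those numbers the wafer-scale system buys roughly 33x more bandwidth per dollar, which is the sense in which it could be cost efficient for large-scale model builders despite the sticker price.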
summary
Speaking of summaries: AI is worse than humans at summaries, studies show.
Succinct reason why by David Chisnall:
LLMs are good at transforms that have the same shape as ones that appear in their training data. They're fairly good, for example, at generating comments from code because code follows common structures and naming conventions that are mirrored in the comments (with totally different shapes of text).
In contrast, summarisation is tightly coupled to meaning. Summarisation is not just about making text shorter, it's about discarding things that don't contribute to the overall point and combining related things. This is a problem that requires understanding the material, because it's all about making value judgements.
AI’s effect on our idea of knowledge could well be broader than that. We’ll still look for justified true beliefs, but perhaps we’ll stop seeing what happens as the result of rational, knowable frameworks that serenely govern the universe. Perhaps we will see our own inevitable fallibility as a consequence of living in a world that is more hidden and more mysterious than we thought. We can see this wildness now because AI lets us thrive in such a world.
AI to teach us complexity and sensemaking / a sense of wonder in viewing the world. It might, though given who builds the AIs I don't think so. Can we build sensemaking tools that seem like AI to the rest of us? genAI is statistical probabilities all around, with a hint of randomness to prevent the same outcome for the same question each time. That is not complexity, just mimicry. Can sensemaking mimic AI? That might be a more useful way to frame it.
Michele Zanini and I recently wrote a brief post for Harvard Business Review about what this sort of change in worldview might mean for business, from strategy to supply chain management. For example, two faculty members at the Center for Strategic Leadership at the U.S Army War College have suggested that AI could fluidly assign leadership roles based on the specific details of a threatening situation and the particular capabilities and strengths of the people in the team. This would alter the idea of leadership itself: Not a personality trait but a fit between the specifics of character, a team, and a situation.
Yes, this I can see, but that's not making AI into K, but embracing complexity and being able to adapt fluidly in the face of it. To increase agency, my working def of K. This is what sensemaking is for, not AI as such.
Newton’s Laws, the rules and hints for diagnosing a biopsy — to say that they fail at predicting highly particularized events: Will there be a traffic snarl? Are you going to develop allergies late in life? Will you like the new Tom Cruise comedy? This is where traditional knowledge stops, and AI’s facility with particulars steps in.
AI, or rather our understanding of complexity, that needs to step in? The examples [[David Weinberger]] gives of general things that can't do particularised events are examples of linear generalisations failing at (a higher level of) complexity. Also I would say 'prediction', which is assumed here to be the point of K, is not what it is about. Probabilities and uncertainties (which is what linear approaches do: reduce uncertainty on a few things at the cost of making others unknowable within the same model, Heisenberg style) are things that in complexity you can nudge, attenuate etc. I'd rather involve complexity more deeply in K than AI.
[[David Weinberger]] on K in the age of AI. AI has no outside framework of reference or context, which David says is inherent in K (next to Socrates' notions of what episteme takes). Says AI may change our notion of K, where AI is better at including particulars, whereas human K is centered on limited generalisations.
"A few weeks ago, we hosted a little dinner in New York, and we just asked this question of 20-plus CDOs [chief data officers] in New York City of the biggest companies, 'Hey, is this an issue?' And the resounding response was, 'Yeah, it's a real mess.'" Asked how many had grounded a Copilot implementation, Berkowitz said it was about half of them. Companies, he said, were turning off Copilot software or severely restricting its use. "Now, it's not an unsolvable problem," he added. "But you've got to have clean data and you've got to have clean security in order to get these systems to really work the way you anticipate. It's more than just flipping the switch."
Companies, half of an anecdotal sample of some 20 US CDOs, have turned Copilot off or restricted it strongly. This is because it surfaces info in summaries etc. that employees would not have direct access to; there is no access-security connection between Copilot and its results. So data governance is blocking its roll-out.
RAG_Techniques
When a user asks Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside their conversation. This creates a dynamic workspace where they can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into their projects and workflows.
We are using set theory: a certain piece of reference text is part of my collection or it's not. If it's part of my collection, somewhere in my fingerprint there is a corresponding dot for it. So there is a very clear, direct link from the root data to the actual representation, and the position that dot has versus all the other dots. The topology of that space, the geometry if you want, of the patterns that you get, contains the knowledge of the world, using that language. And that is super easy to compute for a computer; I don't even need a GPU
for - comparison - cortical io / semantic folding vs standard AI - no GPU required
For example, our standard English language model is trained with something like maybe 100 gigabytes or so of text. That gives it a strength as if you had thrown BERT at it with the Google corpus. The other thing is of course that a small corpus like that is computed in two or three hours on a laptop. And by the way, I didn't mention: our fingerprints are actually boolean, so when we train, as I said, we are not using floating points
for - comparison - cortical io vs normal AI - training dataset size and time
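A toy illustration of the boolean fingerprint idea (my sketch, not cortical.io's code): each fingerprint is just a set of active bit positions, and comparison is set overlap, which needs no GPU and no floating point in the representation itself.

```python
# Toy sketch of boolean "semantic fingerprints" compared by set overlap.
# Illustrative only; the bit positions below are made up.
fingerprint_a = {3, 17, 42, 128, 512, 1999}   # active bit positions for text A
fingerprint_b = {3, 42, 77, 512, 1024}        # active bit positions for text B

overlap = fingerprint_a & fingerprint_b       # the fingerprints themselves are boolean/sparse;
overlap_count = len(overlap)                  # the core comparison is an integer overlap count

# An overlap coefficient if a normalized score is wanted
similarity = overlap_count / min(len(fingerprint_a), len(fingerprint_b))

print(sorted(overlap))    # [3, 42, 512]
print(overlap_count, similarity)
```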
AI and Gender Equality on Twitter
there are movements that address gender equality issues, which oppose Thai society’s patriarchal culture and patriarchal bias. These include attacking sexual harassment, allowing same-sex marriage, drafting legislation for the protection of people working in the sex industry, and promoting the availability of free sanitary napkins for women.
Artificial Intelligence (AI) in Robotics
Deep learning is about machine learning based on a set of algorithms that attempt to model high-level abstractions in data.
Robotisation is growing rapidly, as robots work more precisely and save costs; for example, creative studios have 3D printers, and the self-learning ability of these production robots makes work more efficient.
Dematerialisation leads to the phenomenon that traditional physical products are becoming software; for example, CDs or DVDs were replaced by streaming services, and traditional event/travel tickets or hard cash were replaced by contactless payment by smartphone.
Gig economy: A rise in self-employment is typical for the new generation of employees. The gig economy is usually understood to include chiefly two forms of work: ‘crowd working’ and ‘work on-demand via apps’, organized through networking platforms. There are more and more independent contractors for individual tasks that companies advertise on online platforms (e.g., ‘Amazon Mechanical Turk’).
Autonomous driving means vehicles with the power of self-governance, using sensors and navigating without human input.
Manila has one of the most dangerous transport systems in the world for women (Thomson Reuters Foundation, 2014). Women in urban areas have been sexually assaulted and harassed while in public transit, be it on a bus, train, at the bus stop or station platform, or on their way to/from transit stops.
The New Urban Agenda and the United Nations’ Sustainable Development Goals (5, 11, 16) have included the promotion of safety and inclusiveness in transport systems to track sustainable progress. As part of this effort, AI-powered machine learning applications have been created.
AI for Good, SDG AI LAB, IRCAI and Global Partnership for Artificial Intelligence
“to support the development and use of artificial intelligence grounded in human rights, inclusion, diversity, innovation and economic growth, seeking to respond to the United Nations Sustainable Development Goals.” (Benjio & Chatila, 2020)
https://flux1.ai/
Flux AI Image Generator
That's why the computer can never be conscious: basically it has none of the characteristics of qualia, and it certainly doesn't have free will. Free will and consciousness must work together to create these fields that can actually direct their own experience and create self-conscious entities from the very beginning
for - AI - consciousness - not possible - Frederico Faggin
“Analysts need to be able to dissect exactly how the AI reached a particular conclusion or recommendation,” says Chief Business Officer Eric Costantini. “Neo4j enables us to enforce robust information security by applying access controls at the subgraph level.”
for - AI - website simulator - websim.ai
self-link - https://websim.ai/
Interesting thought. This guy relates the rise of AI (non-fiction) writing to people's lack of willingness to find out what is true and what is false.
Similar to Nas & Damian Marley's line in the Patience song -- "The average man can't prove of most of the things that he chooses to speak of. And still won't research and find the root of the truth that you seek of."
If you want to form an opinion about something, do so in an educated way, not based on a single source: fact-check, do thorough research.
Charlie Munger's principle. "I never allow myself to have [express] an opinion about anything that I don't know the opponent side's argument better than they do."
It all boils down to a critical self-thinking society.
Is it possible to teach a machine values?
for - question - AI - can we teach AI values?
question - AI - can we teach AI values? - it's likely not possible because we cannot assign metrics to things like - ethics - kindness - happiness
The future for education, and this is a mega trend that will last the next decades, is that we use artificial intelligence to tailor educational or didactic concepts to the specific person. In the future everybody will have his or her own specific training or education profile they will run through, and artificial intelligence will tailor the different educational environments for everybody. This is a pretty clear trend
for - AI and education - children will have custom tailored education program via AI
this is the reason why I'm not afraid of artificial intelligence taking over
for - question - AI - can AI learn to be intentionally distracted?
Human beings don't do that. We understand that the chair is not a specifically shaped object but something you can sit on, and once you have understood that concept, that principle, you see chairs everywhere and you can create completely new chairs
for - comparison - human vs artificial intelligence
question - comparison - human vs artificial intelligence - Can't an AI also consider things we sit on to then generalize their classification algorithm?
The brain is slow, it is lousy and it is selfish, and still it is working. Look around you: working brains wherever you look. And the reason for this is that we totally think differently from any kind of digital and computer system you know of, and many engineers from the AI field haven't figured out that massive difference yet
for - comparison - brain vs machine intelligence
comparison - brain vs machine intelligence - the brain is inferior to machine in many ways - many times slower - much less accurate - network of neurons is mostly isolated in its own local environment, not connected to a global network like the internet - Yet, it is able to perform extraordinary things in spite of that - It is able to create meaning out of sensory inputs - Can we really say that a machine can do this?
You can Google data; if you're good you can Google information. But you cannot Google an idea, you cannot Google knowledge, because having an idea, acquiring knowledge, is what happens in your mind when you change the way you think. And I'm going to prove in the next 20 or so minutes that this will stay analog in the near future, because this is what makes us human beings so unique and so superior to any kind of algorithm
for - key insight - claim - humans can generate new ideas by changing the way we think - AI cannot do this
Perfect Resume writing / samples