The most notable improvement comes from CloningQA, which requires end-to-end design of DNA and enzyme reagents for molecular cloning protocols.
AI's notable breakthrough on molecular-cloning design tasks demonstrates its capacity for complex, multi-step scientific reasoning. It hints that AI could fundamentally change how laboratory experiments are designed and executed, greatly improving research efficiency.
Claude Opus 4.7 autonomously built a complete Rust text-to-speech engine from scratch—neural model, SIMD kernels, browser demo—then fed its own output through a speech recognizer to verify it matched the Python reference.
AI's ability to build a complete system from scratch and then verify its own output is striking; it marks a shift from code generation to system-level engineering. The phrase 'months of senior engineering, delivered autonomously' captures the productivity revolution this implies.
Residual ISD (R-ISD) adds a gated LoRA adapter for bit-for-bit lossless acceleration: LoRA is active only at MASK positions, verify positions use base-only weights, and the output is identical to the base AR model by construction.
A clever piece of engineering: gated LoRA achieves lossless acceleration by activating LoRA only at MASK positions while verify positions use the base weights, guaranteeing output identical to the base AR model. This addresses the key challenge for diffusion models of gaining parallel speedup without losing quality, and makes practical deployment feasible.
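The gating idea is easy to sketch. Below is a minimal numpy illustration (not the paper's implementation; the single-layer setup, shapes, and gating vector are all assumptions for the example) showing that positions outside the MASK set reproduce the base output exactly:

```python
import numpy as np

# Hypothetical sketch of mask-gated LoRA: the low-rank update W + B @ A is
# applied only at MASK positions, so verify positions see base-only weights
# and reproduce the base model's output bit for bit.
rng = np.random.default_rng(0)
d_in, d_out, rank, seq = 8, 8, 2, 5

W = rng.normal(size=(d_in, d_out))          # frozen base weight
A = rng.normal(size=(rank, d_in))           # LoRA down-projection
B = rng.normal(size=(d_out, rank))          # LoRA up-projection
x = rng.normal(size=(seq, d_in))            # one activation per position
is_mask = np.array([True, False, True, False, False])  # MASK positions

def gated_lora_forward(x, is_mask):
    base = x @ W                             # base path for every position
    lora = (x @ A.T) @ B.T                   # low-rank update
    gate = is_mask[:, None].astype(float)    # 1 at MASK positions, else 0
    return base + gate * lora

out = gated_lora_forward(x, is_mask)
base_out = x @ W

# Verify positions are bitwise identical to the base model's output.
assert np.array_equal(out[~is_mask], base_out[~is_mask])
```

The zero gate multiplies the update away exactly, which is what makes the "identical by construction" claim hold.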
AI writes the code. Tests verify correctness. More code enables more features.
This terse description captures the full closed loop of AI in software development: AI generates code, tests verify correctness, and more code enables more features. This self-reinforcing cycle may make software development AI's most disruptive application area.
On the SWE-Pro benchmark, M2.7 scores 56.22%, nearly matching Opus's best level.
A surprising result: M2.7, an open-source model, approaches top commercial-model performance on a professional software-engineering benchmark. This may signal that the gap between open-source and closed commercial models is narrowing fast, reshaping the competitive landscape of AI development.
M2.7 demonstrates excellent performance in real-world software engineering, including end-to-end project delivery, log analysis for bug hunting, code security, and machine learning tasks.
This claim suggests AI models have moved beyond simple code generation to handling the complete software development lifecycle, a major step for AI in engineering that could redefine how software gets built.
Claude Opus 4.6 autonomously reimplemented a 16,000-line bioinformatics toolkit — a task we believe would take a human engineer weeks.
A striking finding: AI can now complete complex programming tasks that would normally take a human engineer weeks. This challenges our assumptions about current AI capability and hints at major change coming to software engineering; this level of autonomous programming far exceeds today's mainstream AI coding assistants.
The prompt is the most important part: the routine runs autonomously, so the prompt must be self-contained and explicit about what to do and what success looks like.
This highlights that the key to successful Routines is precision in prompt engineering. Unlike traditional automation scripts, a Routine's effectiveness depends entirely on prompt quality, underscoring the importance of prompt engineering in AI-assisted development and posing a new skill challenge for users.
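What "self-contained and explicit about success" might look like can be sketched as a template; all field names and the example values here are invented for illustration, not taken from the product:

```python
# Hypothetical prompt template for an autonomous routine: the task, the
# allowed inputs, the constraints, and an explicit definition of done are all
# spelled out, since no human is in the loop to clarify mid-run.
ROUTINE_PROMPT = """\
Task: {task}
Inputs you may rely on: {inputs}
Constraints: {constraints}
Definition of success: {success_criteria}
If success cannot be verified, stop and report why instead of guessing.
"""

prompt = ROUTINE_PROMPT.format(
    task="Summarize yesterday's failed CI runs",
    inputs="the ci-logs/ directory",
    constraints="read-only; do not re-trigger any builds",
    success_criteria="a bullet list naming each failed job and its first error line",
)
```

The last template line matters most for unattended runs: a routine that cannot check its own success criterion should fail loudly rather than improvise.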
Same clinical question, two framings. One as a patient, one as a doctor.
Surprisingly: for the exact same medical question, merely changing the asker's identity from "patient" to "doctor" makes the AI give a completely different answer. A simple change of wording can trigger or bypass safety restrictions, showing how fragile and easily circumvented AI safety mechanisms are.
Built-in memory works out of the box
Surprisingly: Hermes Agent's built-in memory system works out of the box, with no complex configuration. In AI development, memory is usually one of the hardest components to get right and needs heavy tuning; shipping a ready-to-use solution shows engineering maturity and attention to user experience.
GLM-5.1 achieves state-of-the-art performance on SWE-Bench Pro and leads GLM-5 by a wide margin on NL2Repo (repo generation) and Terminal-Bench 2.0 (real-world terminal tasks).
Surprisingly: GLM-5.1 reaches state-of-the-art performance on software-engineering agent tasks, leading its predecessor by a wide margin on repo generation and real-world terminal tasks. This suggests a qualitative leap in AI's ability to understand and execute complex software-engineering work.
When you give a task to your agent, make sure you also explain how the code should be organized. Not only value, but also structure.
[Insight] This practical advice exposes a widely ignored prompting blind spot: most people tell an AI what to build but never how to organize the code. That's like telling a new hire "implement this feature" without ever explaining the team's coding conventions. For anyone doing vibe coding, this advice should become standard operating procedure: proactively include structural constraints in every task prompt.
Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.
[Insight] A highly suggestive analogy that frames knowledge-base management as software engineering: Obsidian is the IDE, the LLM is the programmer, the wiki is the codebase. The deeper implication is that knowledge work can borrow the entire software toolchain: version control (git), review (lint), continuous integration (automatic ingest), refactoring (wiki cleanup). "Engineering" knowledge management is not a metaphor but literally operational.
inappropriately change or overwrite JSON files compared to Markdown files
A sharp piece of engineering experience. Markdown is too "free-form" for an LLM and easy for the model to mangle or hallucinate over, while JSON carries strict schema constraints. Choosing the right data format is itself an implicit prompt guardrail.
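The guardrail effect comes from the fact that structured output can be validated mechanically. A minimal sketch (the required fields here are invented for illustration): a strict check on LLM-produced JSON rejects silent corruption that free-form Markdown would happily accept.

```python
import json

# Illustrative schema: required keys and their expected types.
REQUIRED = {"title": str, "tags": list, "done": bool}

def validate_note(raw: str) -> dict:
    data = json.loads(raw)                  # malformed JSON fails loudly here
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"field {key!r} missing or not {typ.__name__}")
    return data

good = validate_note('{"title": "ideas", "tags": ["ai"], "done": false}')

# An LLM "helpfully" rewriting tags as a string is caught immediately.
try:
    validate_note('{"title": "ideas", "tags": "ai", "done": false}')
    caught = ""
except ValueError as e:
    caught = str(e)
```

With Markdown there is no equivalent cheap check; any rewrite parses fine, so corruption only surfaces when a human reads it.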
improved with grading criteria that encode design principles and preferences.
Turning subjective aesthetic preferences into quantifiable grading criteria is the core move for LLMs on problems without binary verification. Reducing "is it beautiful?" to "does it follow design principles?" gives the model a concrete optimization gradient, making aesthetic iteration possible.
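A toy sketch of that decomposition (the criteria, weights, and design fields are all invented for illustration): each design principle becomes a checkable predicate with a weight, so "more beautiful" becomes "higher rubric score", which an iterative loop can climb.

```python
# Hypothetical rubric: each entry is (name, predicate, weight).
CRITERIA = [
    ("limited palette", lambda d: len(d["colors"]) <= 3, 0.4),
    ("readable body text", lambda d: d["font_size_px"] >= 14, 0.3),
    ("consistent spacing", lambda d: len(set(d["margins_px"])) == 1, 0.3),
]

def grade(design: dict) -> float:
    """Sum the weights of all satisfied criteria."""
    return sum(w for _name, check, w in CRITERIA if check(design))

draft = {"colors": ["#111", "#eee", "#0af", "#f80"],
         "font_size_px": 16, "margins_px": [16, 16, 16]}
revised = {**draft, "colors": ["#111", "#eee", "#0af"]}  # trim the palette

assert grade(draft) < grade(revised)  # the rubric creates a gradient to follow
```

The scores are crude, but that is the point: a crude, explicit gradient beats an inexpressible "make it nicer".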
State is explicit. CWD, env vars, and config paths are inputs, not assumptions
This pinpoints why traditional CLI tools resist automation: implicit dependencies. Relying on the current directory or environment variables looks convenient but makes tool behavior unpredictable. Turning implicit state into explicit input parameters adds some ceremony at call time, but buys determinism and portability, the key step in evolving from "script" to "engineering tool".
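The contrast can be shown in a few lines (function and variable names are illustrative, not from the source): the first function's answer depends on ambient process state, while the second takes CWD and environment as parameters and is therefore deterministic and trivially testable.

```python
import os
from pathlib import Path

def load_config_implicit() -> str:
    # Fragile: the answer depends on whoever set the process CWD and env.
    return os.environ.get("APP_CONFIG", str(Path.cwd() / "config.toml"))

def load_config_explicit(cwd: Path, env: dict) -> str:
    # Deterministic: the same inputs always yield the same output.
    return env.get("APP_CONFIG", str(cwd / "config.toml"))

a = load_config_explicit(Path("/srv/app"), {})
b = load_config_explicit(Path("/srv/app"), {"APP_CONFIG": "/etc/app.toml"})
```

An agent (or a test harness) can call the explicit version with any state it likes; the implicit one can only be exercised by mutating the whole process.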
There's an old saying that content is king. With agents, context is.
In the LLM era, this is the sharpest gloss on the importance of the context window. Agents lack human tacit knowledge and environmental awareness, so explicit context (e.g. context.json) becomes the foundation of their actions. When designing AI-assisted systems, building a high-quality context-generation mechanism often matters more than optimizing the model itself.
Contextual Drag: How Errors in the Context Affect LLM Reasoning
The existence of related work on "Contextual Drag" shows this research direction is forming quickly: not only "irrelevant context shortens reasoning" but also "erroneous context drags reasoning off course". Taken together, the two papers suggest a new research area: the systematic effects of context contamination on reasoning models. For AI agent practitioners, this means context-management strategies (truncation, summarization, filtering) will become a core engineering capability for safeguarding reasoning quality, not merely a token-saving measure.
Our key finding is that these representations causally influence the LLM's outputs, including Claude's preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy.
[Insight] The finding that emotion representations causally influence misaligned behavior opens a new door for alignment research: rather than designing ever more complex reward functions or stricter RLHF, intervene on the emotion vectors directly. This suggests a new alignment technique, "emotion engineering": adjusting the activation strength of specific emotion features to steer the model's behavioral tendencies without retraining it. That is lower-level than prompt engineering and more precise than fine-tuning.
Without any architectural modification, MinerU2.5-Pro achieves 95.69 on OmniDocBench v1.6, improving over the same-architecture baseline by 2.71 points and surpassing all existing methods including models with over 200× more parameters.
Most people assume a bigger architecture is required for better performance, but the authors, using only data engineering and training-strategy optimization while keeping the 1.2B-parameter architecture unchanged, surpassed existing models with over 200× the parameters. This challenges the industry's "bigger is better" consensus and demonstrates the importance of data quality.
Looking under the hood is cheating. You're only supposed to have vague conversations with the machine about what it's doing.
Most people regard reading and reviewing code as standard software development practice, but the author calls it "cheating", because "vibe coding" culture encourages developers to avoid looking at the underlying implementation altogether. This runs against basic software engineering principles, where code review is usually considered a key step for improving quality and catching problems.
Teams at companies like Notion, Ramp, Braintrust, and Wasmer are already using Codex to accelerate their engineering workflows.
Most people might assume AI coding tools are mainly adopted by big tech companies, but the author notes that companies like Notion and Ramp, outside the traditional big-tech mold, are integrating Codex into their core engineering workflows. This challenges assumptions about who adopts AI programming tools and suggests broader applicability than expected.
Owning a $5M data center
for - ecology - red crabs of Christmas island - progress trap - invasive species - biocontrol - ecological engineering - wasps - ants
must extend beyond that which appears solely technical into the realm of social relations.
Much easier said than done: to do so pushes against the fundamental practices of engineering education. Take the topic of so-called "engineering ethics". As taught in higher education, the scope never covers whether such-and-such engineering is "ethical", but instead whether the engineer has been negligent in their work to produce the product/project: factors of safety, regulatory standards, proper documentation, etc. The engineering methodology of creating system models always boils down to what can be included in a calculation, which often fundamentally excludes inputs like social or ethical implications.
the hidden labour of engineering
Welcome! The entire website can be annotated in Hypothesis; you can add comments, notes, and revisions here. This is a live page.
Virtual environment
This is an example comment that can be left on the page. You can leave notes, updates, comments or requests for clarification here using Hypothesis.
There is a tremendous power in thinking about everything as a single kind of thing, because then you don’t have to juggle lots of different ideas about different kinds of things; you can just think about your problem.
In my experience this is also the main benefit of using node.js as your backend. Being able to write your front and backend code in the same language (javascript) removes a switching cost I didn't fully realize existed until I tried node the first time.
A candy engineer explains the science behind the Snickers bar by [[Richard Hartel]] for [[The Conversation]]
what would it look like to put together an engineering program or an experimental program orienting towards the question: what would be the form of embodiment in collective intelligence that includes human beings as at least one primary element at ontological level one, that would give rise to a collective intelligence at ontological level two
for - Jordan Hall question - engineering an intentional social superorganism - collective intelligence - Michael Levin & Jordan Hall conversation
This article by Sarah Schindler highlights the subtle yet powerful role that architecture and urban design play in perpetuating discrimination and segregation. It presents compelling examples, such as low-hanging overpasses in New York and the resistance to expanding public transit in Atlanta, which demonstrate how physical structures can restrict access and maintain racial and economic divides. The discussion raises important questions about the often-overlooked regulatory power of architecture and its implications for equity in urban environments.
https://x.com/Chronotope/status/1828785701732663335
Crypto miners are being paid not to mine to ease energy production/consumption cycles.
Related to protection money for the mob
re: https://x.com/curious_founder/status/1828511303788322888/photo/1 on The Economist's article about crypto mining in Texas o/a 2024-08-27
Triumph! by [[Joe Van Cleave]]
That West German engineering mixed with decades of tar and nicotine has produced something truly unique.
quote via u/edward_slizzerhands
Stradivarius : violin : varnish :: West German engineering : typewriter : cigarettes
The challenge with previous generations of tech—and the engineers who built them—is that they got stuck in the rigidity of systems.
you may never change it back, just as we may not change back genetic manipulation we might do in the germline, which is a reason we should be very cautious about doing it
for - progress trap - genetic engineering - Denis Noble
We have always done geo-engineering, from the prairies of Native Americans to European forests, from Indian to Chinese rice fields. Now that we can no longer deny our impact and the responsibility it entails, it’s time to open our eyes and consciously do geo-engineering.
This is odd. 'we have alway done geo-engineering in the sense that we had large scale negative impacts on the globe' unintentionally, so let's do it more and with more focused intention. The leap here is not in geo-engineering David, the leap is in thinking you are capable of seeing it through without externalisation. With a guy that says you can engineer yourself out of complex issues....
Getting hooked on computers is easy—almost anybody can make a program work, just as almost anybody can nail two pieces of wood together in a few tries. The trouble is that the market for two pieces of wood nailed together—inexpertly—is fairly small outside of the "proud grandfather" segment, and getting from there to a decent set of chairs or fitted cupboards takes talent, practice, and education.
This is a great analogy
the Peter Principle, the idea that in an organization where promotion is based on achievement, success, and merit, that organization's members will eventually be promoted beyond their level of ability
Applying the principle to software, you will find that you need three different versions of the make program, a macroprocessor, an assembler, and many other interesting packages. At the bottom of the food chain, so to speak, is libtool, which tries to hide the fact that there is no standardized way to build a shared library in Unix. Instead of standardizing how to do that across all Unixen the Peter Principle was applied and made it libtool's job instead.
This is not a discrete project but an ongoing process and should always be competing for focus in strategic decision making.
Absolutely agreed. One limitation of the Iron Triangle concept is that it often seems to be used to make decisions based on a snapshot in time (i.e. which two are we choosing now), when some choices have longer half-lives than others.
whoever came up with Apple's property list format didn't really understand how XML and/or SGML-like tags actually worked. Does it make sense to you that p-lists have stuff like <key>WebResourceData</key> instead of simply just <WebResourceData> ? It's like they were confused
By jumping into unfamiliar areas of code, even if you do not "solve" the bug, you can learn new areas of the code, tricks for getting up to speed quickly, and debugging techniques.
Building a mental model of the codebase, as Jennifer Moore says over at Jennifer++:
The fundamental task of software development is not writing out the syntax that will execute a program. The task is to build a mental model of that complex system, make sense of it, and manage it over time.
Thinking about how you will observe whether things are working correctly or not ahead of time can also have a big impact on the quality of the code you write.
YES. This feel similar to the way that TDD can also improve the code that you write, but with a broader/more comprehensive outlook.
3:00 "and its long..." -- yeah, i have a 3MB obfuscated webapp at boxy-svg.com/app
3:05 "what would you do?" -- semi-automatic dynamic analysis. rename symbols. eval function calls. interact with the webapp (hence "semi" automatic). trace data in the debugger.
but im still looking for a tool to do that for me... : P
most tools fail on large files, or ESM (import, export) ...
my current favorite is https://github.com/j4k0xb/webcrack and https://github.com/pionxzh/wakaru -- at least it works with ESM
3:15 My technique would be to copy it, paste it on stackoverflow and ask if someone knows what it does.
in south america, they would find the original author, drug him with scopolamine, and make him give out the original source code : D aka social engineering
they use this method to steal crypto from wealthy smart asses, who believe their money is "safe"
similar to the $5 wrench in the "security" xkcd https://xkcd.com/538/
see also
https://www.youtube.com/watch?v=XJwU8Hiq4HM
Careful with the New Crime Wave of Latin America
1:42
Scopolamine is a drug that basically makes you into a little slave, into a little servant, and you'll do whatever the attacker wants.
“Let me know if you have any more questions,”
Here's one: "But why?" In other words, "What problem does this solve?"
The most important takeaway from exposing these myths is that productivity cannot be reduced to a single dimension (or metric!). The prevalence of these myths and the need to bust them motivated our work to develop a practical multidimensional framework, because only by examining a constellation of metrics in tension can we understand and influence developer productivity. This framework, called SPACE, captures the most important dimensions of developer productivity: satisfaction and well-being; performance; activity; communication and collaboration; and efficiency and flow.
A good thing about this framework is that while it's intended to measure productivity in a more objective manner, it doesn't eschew subjective dimensions like satisfaction and well-being, which are largely personal and self-reported
The thing is I'm finding trouble to dedicate to Ransack lately, so I can't really commit to any date.
Engineering a Safer World is by Nancy Leveson, who is an aerospace and astronautics engineer.

https://direct.mit.edu/books/oa-monograph/2908/Engineering-a-Safer-WorldSystems-Thinking-Applied
https://www.amazon.com/Engineering-Safer-World-Systems-Thinking/dp/0262533693
Contents:
1: Why Do We Need Something Different? (doi:10.7551/mitpress/8179.003.0004)
2: Questioning the Foundations of Traditional Safety Engineering (doi:10.7551/mitpress/8179.003.0005)
3: Systems Theory and Its Relationship to Safety (doi:10.7551/mitpress/8179.003.0006)
II: STAMP: An Accident Model Based on Systems Theory (doi:10.7551/mitpress/8179.003.0029)
4: A Systems-Theoretic View of Causality (doi:10.7551/mitpress/8179.003.0008)
5: A Friendly Fire Accident (doi:10.7551/mitpress/8179.003.0009)
III: Using STAMP (doi:10.7551/mitpress/8179.003.0030)
6: Engineering and Operating Safer Systems Using STAMP (doi:10.7551/mitpress/8179.003.0011)
7: Fundamentals (doi:10.7551/mitpress/8179.003.0012)
8: STPA: A New Hazard Analysis Technique (doi:10.7551/mitpress/8179.003.0013)
9: Safety-Guided Design (doi:10.7551/mitpress/8179.003.0014)
10: Integrating Safety into System Engineering (doi:10.7551/mitpress/8179.003.0015)
11: Analyzing Accidents and Incidents (CAST) (doi:10.7551/mitpress/8179.003.0016)
12: Controlling Safety during Operations (doi:10.7551/mitpress/8179.003.0017)
13: Managing Safety and the Safety Culture (doi:10.7551/mitpress/8179.003.0018)
14: SUBSAFE: An Example of a Successful Safety Program (doi:10.7551/mitpress/8179.003.0019)
Epilogue (doi:10.7551/mitpress/8179.003.0020)
Appendix A: Definitions (doi:10.7551/mitpress/8179.003.0022)
Appendix B: The Loss of a Satellite (doi:10.7551/mitpress/8179.003.0023)
Appendix C: A Bacterial Contamination of a Public Water Supply (doi:10.7551/mitpress/8179.003.0024)
Appendix D: A Brief Introduction to System Dynamics Modeling (doi:10.7551/mitpress/8179.003.0025)
References (doi:10.7551/mitpress/8179.003.0026)
Index (doi:10.7551/mitpress/8179.003.0027)
Great resources here
synthetic bioengineering provides a really astronomically large option space for new bodies and new minds that don't have standard evolutionary backstories
For example, productivity and satisfaction are correlated, and it is possible that satisfaction could serve as a leading indicator for productivity; a decline in satisfaction and engagement could signal upcoming burnout and reduced productivity.
Certainly not necessarily true - the correlation is mostly heuristic. I can be highly productive but dissatisfied that the productive work doesn't have value.
• Design and coding. Volume or count of design documents and specs, work items, pull requests, commits, and code reviews. • Continuous integration and deployment. Count of build, test, deployment/release, and infrastructure utilization. • Operational activity. Count or volume of incidents/issues and distribution based on their severities, on-call participation, and incident mitigation.
Honestly, a well-oiled team with strong collaboration completely outweighs any measured outputs like this. I would never want my engineers faced with performance observability like this.
The SPACE framework provides a way to logically and systematically think about productivity in a much bigger space and to carefully choose balanced metrics linked to goals—and how they may be limited if used alone or in the wrong context.
Not sure I would classify this as logical but systematic makes sense - definitely trying to put heuristic dimensions on typically unquantifiable and varied human behaviors. Clearly, this is biased to process experts and program managerial personality types that like trying to frame things into organized buckets.
engineering blogs focus on problems where the solution is a necessary but not sufficient part of what they do. And, ideally, they focus on problems that are complementary to scale that only the publisher of that post has.
Core reason why companies have their engineering blogs
Platform engineering is trying to deliver the self-service tools teams want to consume to rapidly deploy all components of software. While it may sound like a TypeScript developer would feel more empowered by writing their infrastructure in TypeScript, the reality is that it’s a significant undertaking to learn to use these tools properly when all one wants to do is create or modify a few resources for their project. This is also a common source of technical debt and fragility. Most users will probably learn the minimal amount they need to in order to make progress in their project, and oftentimes this may not be the best solution for the longevity of a codebase.

These tools are straddling an awkward line that is optimized for no-one. Traditional DevOps are not software engineers and software engineers are not DevOps. By making infrastructure a software engineering problem, it puts all parties in an unfamiliar position. I am not saying no-one is capable of using these tools well. The DevOps and software engineers I’ve worked with are more than capable. This is a matter of attention. If you look at what a DevOps engineer has to deal with day-in and day-out, the nuances of TypeScript or Go will take a backseat. And conversely, the nuances of, for example, a VPC will take a backseat to a software engineer delivering a new feature.

The gap that the AWS CDK and Pulumi try to bridge is not optimized for anyone and this is how we get bugs, and more dangerously, security holes.
But the researchers quickly realized that a model’s complexity wasn’t the only driving factor. Some unexpected abilities could be coaxed out of smaller models with fewer parameters — or trained on smaller data sets — if the data was of sufficiently high quality. In addition, how a query was worded influenced the accuracy of the model’s response.
Models with fewer parameters show better abilities when they are trained on better data and given a quality prompt. Improvements to the prompt, including "chain-of-thought reasoning", where the model explains how it reached an answer, improved the results of BIG-bench testing.
prompt engineer. His role involves creating and refining the text prompts people type into the AI in hopes of coaxing from it the optimal result. Unlike traditional coders, prompt engineers program in prose, sending commands written in plain text to the AI systems, which then do the actual work.
Aptera is trying to "disrupt" conventional automotive engineering. We need that. Sincerest good luck to them. They're also clearly left-leaning, progressive, and not bound by conventional economics. Could pass this around to my classes.
In my most recent field (software), the engineer is placated with the delusion that the purpose is to give the customer what ey wants, whether that solves the customer’s problem or not. This is a lazy sort of self-imposed servitude that entirely avoids the actual purpose of engineering.
One reason why software engineering isn't "real" engineering: no ethical obligation to "the public good".
Wordcraft Writers Workshop by Andy Coenen - PAIR, Daphne Ippolito - Brain Research Ann Yuan - PAIR, Sehmon Burnam - Magenta
cross reference: ChatGPT
Including a prompt prefix in the chain-of-thought style encourages the model to generate follow-on sequences in the same style, which is to say comprising a series of explicit reasoning steps that lead to the final answer. This ability to learn a general pattern from a few examples in a prompt prefix, and to complete sequences in a way that conforms to that pattern, is sometimes called in-context learning or few-shot prompting. Chain-of-thought prompting showcases this emergent property of large language models at its most striking.
I think "emulating deductive reasoning" is the correct shorthand here.
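The mechanics of the pattern described above are just string assembly; a minimal sketch (the worked example is invented, in the style of common chain-of-thought demos):

```python
# Few-shot chain-of-thought prompting: worked examples with explicit
# reasoning steps form a prefix, and the new question is appended in the
# same Q/A format so the model continues in kind.
EXAMPLES = [
    ("Roger has 5 balls and buys 2 cans of 3 balls each. How many balls?",
     "He buys 2 * 3 = 6 balls. 5 + 6 = 11. The answer is 11."),
]

def cot_prompt(question: str) -> str:
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = cot_prompt("A loaf costs 3 dollars. How much do 4 loaves cost?")
```

The trailing "A:" is the whole trick: the model's most likely continuation is an answer in the same step-by-step style as the examples.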
Dialogue is just one application of LLMs that can be facilitated by the judicious use of prompt prefixes. In a similar way, LLMs can be adapted to perform numerous tasks without further training (Brown et al., 2020). This has led to a whole new category of AI research, namely prompt engineering, which will remain relevant until we have better models of the relationship between what we say and what we want.
In the background, the LLM is invisibly prompted with a prefix along the following lines.
In the near future, we will be in possession of genetic engineering technology which allows us to move genes precisely and massively from one species to another. Careless or commercially driven use of this technology could make the concept of species meaningless, mixing up populations and mating systems so that much of the individuality of species would be lost. Cultural evolution gave us the power to do this. To preserve our wildlife as nature evolved it, the machinery of biological evolution must be protected from the homogenizing effects of cultural evolution.
!- Progress trap : genetic engineering - careless use of genetic engineering will interfere with biological evolution
!- genetic engineering : risk - cultural evolution via genetic engineering could make the concept of species meaningless - a significant potential progress trap
Deploy engines as separate app instances and have them only communicate over network boundaries. This is something we’re starting to do more.
Before moving to this microservice approach, it's important to consider whether the benefits are worth the extra overhead. Jumping to microservices prematurely is something I've seen happen more than once in my career, and it often leads to a lot of rework.
I always allocate a year: six months to get up to speed on the internal culture, tools, and processes; another six months to get your first performance review as a “ramped-up” engineer.
While you might think that pairing less experienced engineers is a waste of time, every single time I had a less experienced engineer work by themselves, I ended up regretting it.
This has been my experience this year
engineers will get tired, mistakes will happen, and maintenance will get kicked down the road. Teams need buffer as much as systems do.
Engineering is a structured, methodical process for going from operational needs to the implementation of a solution.
Engineering: - method, structure, process - from needs to a solution
This is what I had failed to grasp in my discussions with the French universities.
That would amount to not doing the engineering work, since no analysis would have taken place.
Engineering requires analysis before proposing solutions.
One example could be putting all files into an Amazon S3 bucket. It’s versatile, cheap and integrates with many technologies. If you are using Redshift for your data warehouse, it has great integration with that too.
Essentially the raw data needs to be vaguely homogenised and put into a single place
It took me a while to grok where dbt comes in the stack but now that I (think) I have it, it makes a lot of sense. I can also see why, with my background, I had trouble doing so. Just as Apache Kafka isn’t easily explained as simply another database, another message queue, etc, dbt isn’t just another Informatica, another Oracle Data Integrator. It’s not about ETL or ELT - it’s about T alone. With that understood, things slot into place. This isn’t just my take on it either - dbt themselves call it out on their blog:
Also - just because their "pricing" page caught me off guard and their website isn't that clear (until you click through to the technical docs) - I thought it's worth calling out that DBT appears to be an open-core platform. They have a SaaS offering and also an open source python command-line tool - it seems that these articles are about the latter
Working with the raw data has lots of benefits, since at the point of ingest you don’t know all of the possible uses for the data. If you rationalise that data down to just the set of fields and/or aggregate it up to fit just a specific use case then you lose the fidelity of the data that could be useful elsewhere. This is one of the premises and benefits of a data lake done well.
absolutely right - there's also a data provenance angle here - it is useful to be able to point to a data point that is 5 or 6 transformations from the raw input and be able to say "yes I know exactly where this came from, here are all the steps that came before"
it was clear that the European and US competitors were benefiting from these changes to the curriculum in advances in commerce, in industry, and even on the battlefield.
Compulsory education and changes in curriculum in the United States and some of its competitors in the late 19th century clearly benefited advances in commerce and industry, and became a factor in national security.
There has been much discussion about “atomic notes”, which represent the key ideas from a person’s research on some topic or source (sources one and two). These are not the kind of thing I am interested in creating/collecting, or at least not what I have been doing. A far more typical thing for me is something I did at work today. I was trying to figure out how to convert the output of a program into another format. I did some searching, installed a tool, found a script, played with the script in the tool, figured out how to use it, then wrote down a summary of my steps and added links to what I found in my search. Since I am not doing research for a book or for writing academic papers, the idea of an atomic note does not fit into my information world. However, capturing the steps of a discovery or how I worked out a problem is very real and concrete to me. I used to know a fellow engineer who wrote “technical notes” to capture work he was doing (like a journal entry). Maybe that is how I should consider this type of knowledge creation.
Andy Sylvester says his engineering type of notes don't fit with the concept of atomic note. A 'how to solve x' type of note would fit my working def of 'atomic' as being a self-contained whole, focused on a single thing (here how to solve x). If the summary can be its title I'd say it's pretty atomic. Interestingly in [[Technik des wissenschaflichen Arbeitens by Johannes Erich Heyde]] 1970, Heyde on p18 explicitly mentions ZK being useful for engineers, not just to process new scientific insights from e.g. articles, but to index specific experiences, experiments and results. And on p19 suggests using 1 ZK system for all of your material of various types. Luhmann's might have been geared to writing articles, but it can be something else. Solving problems is also output. I have these types of notes in my 'ZK' if not in the conceptual section of it.
Cf. [[Ambachtelijke engineering 20190715143342]] (artisanal engineering; Lovelock, Novacene, 2019), which plays a role here too. Keeping a know-how notes collection in an environment where your intuition can also play a creative role is useful. I've done this for coding things, as I saw experienced coders do it, just as Andy describes, and it helped me create most of my personal IndieWeb scripts, because they were integrated with the rest of my stuff about my work and notions. Cf. [[Vastklik notes als ratchet zonder terugval 20220302102702]]
Hans Monderman (19 November 1945 – 7 January 2008) was a Dutch road traffic engineer and innovator.
https://en.wikipedia.org/wiki/Hans_Monderman
Suggested by Jerry Michalski: https://app.thebrain.com/brains/3d80058c-14d8-5361-0b61-a061f89baf87/thoughts/bd9c210a-ac8a-0e34-b309-f62e61e72778/attachments/724c3cbf-7aba-4ac7-5b1a-392125168c09
A good layperson's overview of one effort to increase cloud albedo to counteract climate change. I think that lowering insolation is somehow missing the point of combatting climate change, but it's a legitimate approach that still needs a lot of research.
What's particularly good about this article is how it manages to demonstrate how complex the problem is without smothering the reader in technobabble.
The major benefit of foreign keys is that they guarantee referential integrity. For example, say you have customers in one table that may refer to a number of invoices in another. Without foreign keys, you could delete a customer, but forget to remove its invoices, thereby leaving a bunch of orphaned invoices that reference a customer that’s gone.
Note that GH doesn't use FK (at least back in 2016, per https://github.com/github/gh-ost/issues/331#issuecomment-266027731) due to:
- MySQL doesn't support them on partitioned tables
- Performance impact
- FKs don't work well with online schema migrations
Postgres foreign keys have been fully compatible with partitioned tables since version 12. But still, they're not that commonly used for larger DBs.
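As a sketch of the referential-integrity guarantee, using SQLite purely for illustration (table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in per connection
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE invoices ("
    " id INTEGER PRIMARY KEY,"
    " customer_id INTEGER NOT NULL REFERENCES customers(id))"
)
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO invoices VALUES (10, 1)")

# With the FK enforced, deleting a customer that invoices still reference
# raises an IntegrityError instead of silently leaving orphans behind.
try:
    conn.execute("DELETE FROM customers WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("blocked:", e)
```

Without the `PRAGMA` (or without the `REFERENCES` clause), the delete would succeed and invoice 10 would point at a customer that no longer exists.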
If an operator ever queries the database directly they’re even more likely to forget deleted_at because normally the ORM does the work for them.
This happens relatively often, especially for 1) engineers that run SQL queries directly against the DB for analysis or triaging production issues, and 2) data scientists that do not use the same programming language as the engineers.
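A common mitigation is to bake the `deleted_at` filter into a view and point ad-hoc queries at the view instead of the base table; a minimal sketch with hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", None), (2, "bob", "2024-01-01")])

# A raw query silently includes the soft-deleted row...
all_rows = conn.execute("SELECT name FROM users").fetchall()

# ...so expose a view with the filter baked in; operators who query the
# view can't forget the deleted_at condition.
conn.execute("CREATE VIEW active_users AS "
             "SELECT * FROM users WHERE deleted_at IS NULL")
active = conn.execute("SELECT name FROM active_users").fetchall()

print(len(all_rows), len(active))  # 2 1
```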
At the same time, like Harold, I’ve realised that it is important to do things, to keep blogging and writing in this space. Not because of its sheer brilliance, but because most of it will be crap, and brilliance will only occur once in a while. You need to produce lots of stuff to increase the likelihood of hitting on something worthwhile. Of course that very much feeds the imposter cycle, but it’s the only way. Getting back into a more intensive blogging habit 18 months ago has helped me explore more and better. Because most of what I blog here isn’t very meaningful, but needs to be gotten out of the way, or helps build towards, scaffolding towards, something with more meaning.
Many people treat their blogging practice as an experimental thought space. They try out new ideas, explore a small space, attempt to come to understanding, connect new ideas to their existing ideas.
Ton Zylstra coins/uses the phrase "metablogging" to think about his blogging practice as an evolving thought space.
How can we better distill down these sorts of longer ideas and use them to create more collisions between ideas, generating new and innovative ideas? What forms might this take?
The personal zettelkasten is a more concentrated form of this, and blogging is certainly within the space, as are the somewhat more nascent digital gardens. What would some intermediary "idea crucible" between these forms, with a simple but compelling interface, look like in public? How much storytelling and contextualization is needed or not needed to make such points?
Is there a better space for progressive summarization here so that an idea can be more fully laid out and explored? Then once the actual structure is built, the scaffolding can be pulled down and only the idea remains.
Reminiscences of scaffolding can be helpful for creating context.
Consider the pyramids of Giza and the need to reverse engineer how they were built. Once the scaffolding has been taken down and history forgets the methods, it's not always obvious what the original context for objects were, how they were made, what they were used for. Progressive summarization may potentially fall prey to these effects as well.
How might we create a "contextual medium" which is more permanently attached to ideas or objects to help prevent context collapse?
How would this be applied in reverse to better understand sites like Stonehenge or the hundreds of other stone circles, wood circles, and standing stones we see throughout history?
Outline of academic disciplines
https://en.m.wikipedia.org/wiki/Outline_of_academic_disciplines
Broad-area outline of academic disciplines:
- humanities
- social science
- natural science
- formal science
- applied science
Compare these with the original trivium and quadrivium or early humanities and arts and sciences designations.
Robert Fenton, Electrical and Computer Engineering Professor Emeritus, pioneered the technology for the first wave of self-driving cars.
I had Fenton for a class once and during a lecture he asked a question of the class. A student raised his hand and answered. Professor Fenton listened and asked the class "Does anyone else agree that his answer is correct?"
About 85% of the students in the large lecture hall raised their hands.
He paused, shook his head, and said "Well, then I'm afraid you're all going to fail." Then he turned around and went back to writing on the chalkboard.
Much of this sort of information was later reverse-engineered, and cross-browser support for basic operations is actually quite good. (Browsers still vary widely on the details.)
Reverse-engineering and standardizing contentEditable
three steps required to solve the all-important correspondence problem. Step one, according to Shenkar: specify one’s own problem and identify an analogous problem that has been solved successfully. Step two: rigorously analyze why the solution is successful. Jobs and his engineers at Apple’s headquarters in Cupertino, California, immediately got to work deconstructing the marvels they’d seen at the Xerox facility. Soon they were on to the third and most challenging step: identify how one’s own circumstances differ, then figure out how to adapt the original solution to the new setting.
Oded Shenkar's three-step process for effective problem solving using imitation:
- Step 1. Specify your problem and identify an analogous problem that has been successfully solved.
- Step 2. Analyze why the solution was successful.
- Step 3. Identify how your problem and circumstances differ from the example problem and figure out how to best and most appropriately adapt the original solution to the new context.
The last step may be the most difficult.
The IndieWeb broadly uses the idea of imitation to work on and solve a variety of different web design problems. By focusing on imitation they dramatically decrease the work and effort involved in building a website. The work involved in creating new innovative solutions even in their space has been much harder, but there, they imitate others in breaking the problems down into the smallest constituent parts and getting things working there.
Link this to the idea of "leading by example".
Link to "reinventing the wheel" -- the difficulty of innovation can be more clearly seen in the process of people reinventing the wheel for themselves when they might have simply imitated a more refined idea. Searching the state space of potential solutions can be an arduous task.
Link to "paving cow paths", which is a part of formalizing or crystalizing pre-tested solutions.
innovation can result in trade-offs that undermine both progress on mitigation and progress towards other sustainable development goals
Broader impacts of engineering require full-scope consideration.
Overview and history of the Antikythera mechanism and the current state of research surrounding it.
Antikythera mechanism found in diving expedition in 1900 by Elias Stadiatis. It was later dated between 60 and 70 BCE, but evidence suggests it may have been made around 205 BCE.
One of the primary purposes of the device was to predict the positions of the planets along the ecliptic, the plane of the solar system.
The device was also used to track the positions of the sun and moon. This included the moon's phase, position and age (the number of days from a new moon). It also included the predictions of eclipses.
Used to track the motions of the 5 known planets including 289 synodic cycles in 462 years for Venus and 427 synodic cycles in 442 years for Saturn.
Risings and settings of stars indexed to a zodiac dial
metonic cycle, a 19-year period over which 235 moon phases recur; named after Greek astronomer Meton, but discovered much earlier by the Babylonians. The Greeks refined it to a 76 year period.
saros cycle, the 223 month lunar cycle which was used by the Babylonians to predict eclipses. A dial on the Antikythera mechanism was used to predict the dates of the solar and lunar eclipses using this cycle.
synodic events: conjunctions with the sun and its stationary points
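As a quick sanity check (my own arithmetic, not from the article, using an assumed year length of 365.2425 days), the Metonic and saros figures above are mutually consistent:

```python
# Rough consistency check of the cycle lengths mentioned above.
YEAR_DAYS = 365.2425  # assumed tropical-year length

# Metonic cycle: 235 lunations in 19 years gives the synodic month length.
synodic_month = 19 * YEAR_DAYS / 235
print(f"synodic month ~ {synodic_month:.2f} days")  # ~29.53

# Saros cycle: 223 synodic months, just over 18 years.
saros_days = 223 * synodic_month
print(f"saros ~ {saros_days / YEAR_DAYS:.2f} years")  # ~18.03
```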
Archimedes - potentially the designer of an early version of the Antikythera mechanism
Elias Stadiatis - diver who discovered the Antikythera mechanism
Albert Rehm - German philologist who identified the numbers 19, 76 and 223 inscribed on fragments of the device in the early 1900s
Derek J. de Solla Price, published Gears from the Greeks in 1974. Identified the gear train and developed a complete model of the gearing.
Michael Wright - 3D x-ray study in 1990 using linear tomography; identified tooth counts of the gears and understood the upper dial on the back of the device
Tony Freeth - author of the article and researcher who has made recent discoveries.
It was not until the 14th century that scientists created the first sophisticated astronomical clocks.
The first sophisticated astronomical clocks were not created until the 14th century.
the first precision-geared mechanism known is a relatively simple—yet impressive for the time—geared sundial and calendar of Byzantine origin dating to about C.E. 600.
The first known precision-geared mechanism is a sundial and calendar of Byzantine origin dating to circa 600 C.E.
In comparison, the average household energy use for gas heating in Belgium – which has a moderate climate – is 20,000 kWh per year. Assuming that the average Belgian heating system is used for six months per year, daily energy use corresponds to 109.6 kWh per day. This energy could heat roughly 900 water bottles per day – enough to keep the whole neighbourhood comfortable.
Nice calculations about energy consumption at a personal level (110kWh/day).
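The arithmetic behind the 109.6 kWh/day figure, reproduced as a sketch (the per-bottle number at the end is derived by me, not stated in the article):

```python
# Reproducing the article's arithmetic (input values from the text).
annual_kwh = 20_000        # average Belgian gas-heating use per year
heating_days = 365 / 2     # heating system assumed used six months per year
daily_kwh = annual_kwh / heating_days
print(f"{daily_kwh:.1f} kWh/day")  # 109.6

# Implied energy per hot-water bottle, given "roughly 900 bottles per day":
per_bottle_kwh = daily_kwh / 900
print(f"{per_bottle_kwh:.2f} kWh per bottle")  # 0.12
```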
Graham Allison and Niall Ferguson have called for an “applied history” movement, to better draw lessons from history and apply them to real-world problems, including through the advising of political leaders.
What about applied anthropology as well?
Knowledge of progress doesn't mean that it will be applied properly (at all) as the result of politics. This is one of the areas where applied anthropology would be interesting. Its also where a larger group determination of progress is important.
When a product manager trusts that the engineers on the team have the interest of the product at heart, they also trust the engineer’s judgment when adding technical tasks to the backlog and prioritizing them. This enables the balanced mix of feature and technical work that we’re aiming for.
Why is it so common for engineering teams to be mistrusted by other parts of the business?
Part of that is definitely on engineers: chasing the new shiny, over-engineering, etc.
That seems unlikely to account for all of it, though.
Insightful information on why the industrial revolution was still very far away in the ancient world.
Hoffman, R., Mueller, S., Klein, G., & Litman, J. (2021). Measuring Trust in the XAI Context. PsyArXiv. https://doi.org/10.31234/osf.io/e3kv9
Applying Systems Engineering to Policy
The model to the right lacks references to a democratised control of this expert-driven decision-making process, which does not reflect the increased complexity of decentralized demographics in a 'system of systems' (see p. 12).
Dance, A. (2021). The shifting sands of ‘gain-of-function’ research. Nature, 598(7882), 554–557. https://doi.org/10.1038/d41586-021-02903-x
I didn't have to think too much of what’s good for the company - I usually assumed that whatever I’m doing is what the business needs.
It may even be a sign that management is doing well.
The Beloit-based billionaire has publicly pushed for tax breaks and said she wants to stop the U.S. from becoming “a socialistic ideological nation.”
I LOVE the dysfunctional thought process of this statement. This is Social Engineering 101. An educated public not drinking the Kool-Aid of Neoliberalism, aristocracy, or silver-spoon wealth would realize the deception of that statement simply by the fact that the US is already a socialistic ideological nation in the same way the US is already a democracy or already a capitalist ideological nation. Point being: the US is like a "mutt", meaning it's not a purebred anything and will never be a pure anything.
The original United States through the early 1970s represented the true American experiment and dream. The US (America) created opportunity and limited support for impoverished citizens and understood the strength and meaning of leadership (taking care of its own). Having a somewhat balanced (more perceived) societal ideology for the US allowed America's unique nature to shine strong. Modern America (early 1980s to current) is not and should not be compared to what I call the Original America. Modern America is a nation corrupted by wealth and power. Modern America does not shine brightly anymore; the experiment is over, and old-school European aristocracy (wealth, status, power) rules the land. The problem is the public does not realize the extent of the damage done to America and will completely forget within two generations.
The baby-boomer generation is the last representation of original America and makes up the largest demographic in our nation's history. Boomers hold the true America in their hearts and minds. The real patriotism and ideology derived from the original America has become the weapon of choice for those wishing to exploit the concept of America past. Meaningless rhetoric tied to patriotic ideals is behind the controlled social engineering of today.
The statement above is a great example of meaningless rhetorical pandering with ambiguous words and phrases from a morally corrupt 1 to 2% of the new aristocracy grabbing power in America.
An interesting directory of personal blogs on software and security.
While it aggregates from various sources and allows people to submit directly to it, it also calculates a quality score/metric using the total number of Hacker News points earned by the raw URL.
Apparently uses a query like: https://news.ycombinator.com/from?site=example.com to view all posts from HN.
That's going to be extremely ugly. Nothing about this makes sense. Your JSON schema should just have one object that has {"is_enabled":true}, or something like this {"name":"change","is_enable":true}.
My advice is if you are looking for a quick and accurate answer ask to have the trouble ticket elevated immediately and to speak with an engineer that will recognize your knowledge and speak with you on your level.
I typically request to speak with an engineer when I find myself detecting an inexperienced support person.
In one of my internships, I got to befriend a level-2 tech support person, so I learned a couple of things about how it worked (in that company). Level 1 was outsourced, and they had a script to go from, regularly updated. From statistics, this took care of 90% of issues. Level 2 was a double handful of tech people; they had basic troubleshooting tools and knowledge and would solve 90% of the remaining issues. Level 3 was the engineering department (where I was), and as a result of level 1 and 2 efficiency, less than 1% of issues ever got escalated. The process worked!
Which brings us back, once again, to the question with which we began: why does it matter who gets to be seen as a prominent “tech critic”? The answer is that it matters because such individuals get to set the bounds for the discussion.
The ability to set the bounds of the discussion or the problem is a classical example of "power-over" instead of power-with or power-to.
Coordination: More environments require more coordination. Teams need to track which feature is deployed to which environment. Bugs need to be associated with environments. Every environment represents a particular ‘state’ of the codebase, and this has to be tracked somewhere to make sure that customers & stakeholders are seeing the right things;
Try to remember the last time you heard one of the following phrases:
Sorry you’re surprised. Issues are filed at about a rate of 1 per day against GLib. Merge requests at a rate of about 1 per 2 days. Each issue or merge request takes a minimum of about 30 minutes (across at least 2 people) to analyse, put together a fix, test it, review it, fix it, review it and merge it. I’d estimate the average is closer to 3 hours than 30 minutes. Even at the fastest rate, it would take 3 working months to clear the backlog of ~1000 issues. I get a small proportion of my working time to spend on GLib (not full time).
System architects: equivalents to architecture and planning for a world of knowledge and data

Both government and business need new skills to do this work well. At present the capabilities described in this paper are divided up. Parts sit within data teams; others in knowledge management, product development, research, policy analysis or strategy teams, or in the various professions dotted around government, from economists to statisticians. In governments, for example, the main emphasis of digital teams in recent years has been very much on service design and delivery, not intelligence. This may be one reason why some aspects of government intelligence appear to have declined in recent years – notably the organisation of memory.

What we need is a skill set analogous to architects. Good architects learn to think in multiple ways – combining engineering, aesthetics, attention to place and politics. Their work necessitates linking awareness of building materials, planning contexts, psychology and design. Architecture sits alongside urban planning, which was also created as an integrative discipline, combining awareness of physical design with finance, strategy and law. So we have two very well-developed integrative skills for the material world. But there is very little comparable for the intangibles of data, knowledge and intelligence.

What’s needed now is a profession with skills straddling engineering, data and social science – people adept at understanding, designing and improving intelligent systems that are transparent and self-aware. Some should also specialise in processes that engage stakeholders in the task of systems mapping and design, and make the most of collective intelligence. As with architecture and urban planning, supply and demand need to evolve in tandem, with governments and other funders seeking to recruit ‘systems architects’ or ‘intelligence architects’ while universities put in place new courses to develop them.
The Quality Engineering team is focused on creating a culture of testing, increasing test coverage, and helping the company ship high-quality features faster. We encourage all our developers to write and own end-to-end (E2E) tests. In turn, Quality Engineering (QE) is responsible for the frameworks used and provides best practices for writing reusable, scalable, and maintainable tests.
I like this idea of "creating a culture of x", and think it helps lead to more autonomy within teams
“Functional programming language” is not a clearly defined term. From the various properties that are typically associated with functional programming I only want to focus on one: immutability, and the referential transparency that comes with it.
I mean, "not clearly defined" seems wrong; there are commonly accepted characteristics that make a language functional.
codereviews
kentbeck,
24x7 oncall
I'm okay with an overall design that allows people to plug in the parts they need in order to generically support a compile-to-JavaScript language, but to bake in support for one singular solution because it's popular is simply bad engineering.
One of the primary tasks of engineers is to minimize complexity. JSX changes such a fundamental part (the syntax and semantics of the language) that the complexity bubbles up to everything it touches. Pretty much every pipeline tool I've had to work with has become far more complex than necessary because of JSX. It affects AST parsers, it affects linters, it affects code coverage, it affects build systems. That's tons and tons of additional code that I now need to wade through, mentally parse, and ignore whenever I need to debug or want to contribute to a library that adds JSX support.
Linux Memory Management at Scale
"we had to build a complete and compliant operating system in order to perform resource control reliably"
epic real-talk. the only people on the planet who seemed to have tamed linux for workloads. controlling memory. taming io. being on the bleeding edge, it turns out, is almost entirely about forward-progress. what can we reclaim?
https://facebookmicrosites.github.io/cgroup2/docs/fbtax-results.html
The timescales on which a system’s processes run have critical consequences for its ability to predict and adapt to the future.
A layer of architecture that is too slow to change: technical debt. (Pace layering)
We also know that if individuals are bad at collecting good information – if they misinterpret data due to their own biases or are overconfident in their assessments – an aggregation mechanism can compensate.
"wisdom of crowds"
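A minimal simulation of the claim (my own sketch; all numbers are illustrative): when individual errors are roughly independent, the aggregate estimate lands far closer to the truth than a typical individual does. A bias shared by everyone, of course, would not cancel.

```python
import random
random.seed(0)

TRUE_VALUE = 100.0
N = 1000

# Each individual's estimate is noisy; independence is what lets the
# aggregation mechanism compensate for individual error.
estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

mean_estimate = sum(estimates) / N
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N
aggregate_error = abs(mean_estimate - TRUE_VALUE)

print(f"typical individual error: {avg_individual_error:.1f}")
print(f"error of the aggregate:   {aggregate_error:.1f}")
```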
market engineers introduced what’s called a ‘circuit breaker’ – a rule for pausing trading when signs of a massive drop are detected.
Discord's slowmode or other various 'lockdowns' of communication in forums also come to mind
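A toy sketch of such a rule (the function name and the 7% threshold are my own illustration, not any exchange's actual parameters):

```python
def apply_circuit_breaker(prices, drop_threshold=0.07):
    """Walk a price series tick by tick; halt trading when the drop from
    the session open exceeds the threshold (a toy circuit breaker)."""
    opening = prices[0]
    for i, price in enumerate(prices):
        drop = (opening - price) / opening
        if drop >= drop_threshold:
            return ("halted", i)  # trading paused at this tick
    return ("open", len(prices) - 1)

# A gradual dip stays open; a crash trips the breaker mid-session.
print(apply_circuit_breaker([100, 99, 98, 97.5]))    # ('open', 3)
print(apply_circuit_breaker([100, 97, 94, 90, 85]))  # ('halted', 3)
```

The design point is the same as slowmode: a pause gives participants time to re-evaluate instead of amplifying a cascade.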
Glasgow, A., Glasgow, J., Limonta, D., Solomon, P., Lui, I., Zhang, Y., Nix, M. A., Rettko, N. J., Lim, S. A., Zha, S., Yamin, R., Kao, K., Rosenberg, O. S., Ravetch, J. V., Wiita, A. P., Leung, K. K., Zhou, X. X., Hobman, T. C., Kortemme, T., & Wells, J. A. (2020). Engineered ACE2 receptor traps potently neutralize SARS-CoV-2. BioRxiv, 2020.07.31.231746. https://doi.org/10.1101/2020.07.31.231746
Romeo, N. (n.d.). What Can America Learn from Europe About Regulating Big Tech? The New Yorker. Retrieved August 19, 2020, from https://www.newyorker.com/tech/annals-of-technology/what-can-america-learn-from-europe-about-regulating-big-tech
Elon Reeve Musk FRS (/ˈiːlɒn/; born June 28, 1971) is an engineer
There is a lot of controversy around whether or not Elon is an engineer. It has come up several times in discussion on the talk page. Personally, I wouldn't qualify him as an engineer. I think that he lacks the training and most other qualifications.
An Idiom is a low-level pattern specific to a programming language. An idiom describes how to implement particular aspects of components or the relationships between them using the features of the given language.
A Design Pattern provides a scheme for refining the subsystems or components of a software system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context.
Building blocks are what you use: patterns can tell you how you use them, when, why, and what trade-offs you have to make in doing so.
patterns are considered to be a way of putting building blocks into context
A "pattern" has been defined as: "an idea that has been useful in one practical context and will probably be useful in others"
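To make the idiom-versus-pattern distinction concrete, a sketch: the general design concern "acquire a resource, guarantee its release" exists in many languages, while Python's `with` statement is the language-specific idiom for it.

```python
# The general pattern: acquire a resource, guarantee its release.
# Python's context-manager protocol is the low-level, language-specific
# idiom that implements it (the class name here is purely illustrative).

class ManagedResource:
    def __enter__(self):
        self.log = ["acquired"]
        return self

    def __exit__(self, exc_type, exc, tb):
        self.log.append("released")
        return False  # don't swallow exceptions

with ManagedResource() as r:
    r.log.append("used")

print(r.log)  # ['acquired', 'used', 'released']
```

The same intent appears as RAII in C++ or `defer` in Go; the pattern is shared, the idiom is per-language.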
CERN. (2020 April 08). Initiatives from the CERN community in global fight against COVID-19. home.cern. https://home.cern/news/news/cern/initiatives-cern-community-global-fight-against-covid-19
Engineering Execute development
Fifth, the engineer is not selfish and thinks for the good of the product and the team. If you can’t work in a team, you have a harder chance to succeed nowadays.
Agreed; moreover, adding points of view will also increase empathy, and don't forget: empathy is something that can be trained.
Second, the engineer is able to apply the right amount of solution to the problem. Start with the simplest approach that solves the problem
Third, the engineer has a high sense of ownership to the problem he’s solving. This way, the engineer will pay attention to the big picture as well as the details of the problem.
Fourth, the engineer is able to understand how his teammates think. Let’s face it, engineering is a team sport
I'm still not convinced that a team sport is a good analogy for describing a software development team.
Pinto, S. F., & Ferreira, R. S. (2020). Analyzing course programmes using complex networks. ArXiv:2005.00906 [Physics]. http://arxiv.org/abs/2005.00906
propelled by a “water plasma” engine. Solar panels generate electrical power, which the vehicle then uses to generate microwaves, which superheat the water up to Sun-surface temperatures. That produces a plasma that shoots out a nozzle, propelling Vigoride forward.
which they estimate to be $230,000 per year.
There is some good discussion on HN about the realistic nature of this estimated expense and how it is not likely out-of-line with what it should be and may actually be quite reasonable.
The Critical Engineering Manifesto
You do not process your projects through an Institutional Review Board, nor are you equipped to deal with persons who express trauma to you.
This is a valid concern that needs to be addressed. While some engineers certainly do use IRB for their projects, it is not nearly as common as it should be.
ericb:
- Better googling. Time-restricted, URL-restricted, site-restricted searches. Search with the variant parts of error messages removed.
- Read the source of upstream dependencies. Fix or fork them if needed.
- They're better at finding forks with solutions and gleaning hints from semi-related issues.
- Formulate more creative hypotheses when obvious lines of investigation run out. The best don't give up.
- Dig into problems with more angles of investigation.
- Have more tools in their toolbelt for debugging, like adding logging, monkey-patching, swapping parts out, crippling areas to rule things out, binary search of affected code areas.
- Consider the business.
- Consider user behavior.
- Assume hostile users (security-wise).
- Understand that the UI is not a security layer. Anything you can do with Postman your backend should handle.
- Whitelist-style security over blacklist-style.
- See eventual problems implied by various solutions.
- "The Math."
What do top engineers do that others don't?
My suspicion is, a good KPI for a knowledge tool is minimum threshold of time required to make a negentropic update to it, with every halving of the threshold increasing its capacity to hold positive-interest-rate knowledge repos by an order of magnitude.
some adhoc loss func
This btw is the maker time/manager time problem pg wrote about. Making needs 4 hour chunks because anything less tends to increase entropy rather than decrease it in any non-trivial knowledge work project. So anything that lowers that lower limit is a big win.
<4h intense focus increases entropy (more stuff, less structure) in your brain
system resilience engineering and hr & governance policies
many instructional designers and others adjacent to the field have responded swiftly with critiques that range from outright rejection of the term, to general skepticism about the concept, to distrust for its advocates and their support of learning analytics and outcomes-based learning.
Why the rejection of the term? Is it too mechanical?
the standard AES17 dynamic range measurement
AES17 doesn't define "dynamic range". It defines "Signal-to-noise ratio or noise in the presence of signal" which is what includes the test tone:
The test signal for the measurement shall be a 997-Hz sine wave producing −60 dB FS at the output of the EUT.
Behavior Engineering Model This page has a design that is not especially attractive or user friendly but it does provide an overview of Gilbert's Behavior Engineering Model. This is a model that can be used to analyze the issues that underlie performance. A six-cell model is presented. Rating 5/5
Human Performance Technology Model This page is an eight page PDF that gives an overview of the human performance technology model. This is a black and white PDF that is simply written and is accessible to the layperson. Authors are prominent writers in the field of performance technology. Rating 5/5
INVEST
According to this checklist, a User Story should be:
Independent (of all others)
Negotiable (not a specific contract for features)
Valuable (or vertical)
Estimable (to a good approximation)
Small (so as to fit within an iteration)
Testable (in principle, even if there isn't a test for it yet)
Source(s):
Questions for our first 1:1
Such a down-to-earth, to the point and solid resource. Thank you!
Unless you need to push the boundaries of what these technologies are capable of, you probably don’t need a highly specialized team of dedicated engineers to build solutions on top of them. If you manage to hire them, they will be bored. If they are bored, they will leave you for Google, Facebook, LinkedIn, Twitter, … – places where their expertise is actually needed. If they are not bored, chances are they are pretty mediocre. Mediocre engineers really excel at building enormously over complicated, awful-to-work-with messes they call “solutions”. Messes tend to necessitate specialization.
If the SM58 noise floor is calculated at room temperature, the voltage output is 0.00000032 volts.
This is equal to -130 dBV. The SM58 Vocal Microphone Specification Sheet says:
Sensitivity (at 1,000 Hz Open Circuit Voltage) –54.5 dBV/Pa (1.85 mV)
At this sensitivity, the self-noise would be 94 - (-54.5 - -130) = 19 dB SPL, which is pretty typical and certainly not "lower than can be typically measured".
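The note's arithmetic, reproduced as a sketch:

```python
import math

# Thermal noise floor quoted in the note: 0.32 microvolts at room temperature.
noise_v = 0.00000032
noise_dbv = 20 * math.log10(noise_v)
print(f"noise floor ~ {noise_dbv:.0f} dBV")  # -130 dBV

# Sensitivity from the spec sheet: -54.5 dBV/Pa, and 1 Pa = 94 dB SPL.
sensitivity_dbv = -54.5
self_noise_spl = 94 - (sensitivity_dbv - noise_dbv)
print(f"self-noise ~ {self_noise_spl:.0f} dB SPL")  # ~19, as the note says
```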
Questions about the inclusivity of engineering and computer science departments have been going on for quite some time. Several current “innovations” coming out of these fields, many rooted in facial recognition, are indicative of how scientific racism has long been embedded in apparently neutral attempts to measure people — a “new” spin on age-old notions of phrenology and biological determinism, updated with digital capabilities.
We can’t force two people to become friends, nor should we want to.
How many social engineers does it take to change a light bulb? An infinite number. That's why they leave you in the dark till you become the change you seek and make your own light to live by.
If you can't force two people to become friends, then how do 'diplomats' (political manipulators?) profess to do the same thing with entire nations? Especially while so often using the other hand to deal the deck for other players, in a game of "let's you and him fight"; or just being bloody mercenaries with a sheer might-is-right political ethos installed under various euphemistic credos: 'My country right or wrong', or 'Gott mit uns', or '...to discover weapons of mass destruction', etc.
So much for politics and social engineering, but maybe we can just be content with not so much forcing two people to be friends, as forcing them to have sex while we're filming them, so we can create more online amateur porn content. LOL ;)
Patreon Engineering Levels
Engineering focus, but a very detailed rubric of how to rank different personnel levels. Could maybe be generalized/adapted for other fields.
Figure 3: Motor system with flip switch and potentiometer
Please add a wiring diagram that would allow someone to reproduce this circuit.
Figure 2: Basic demonstration of the interior operation of the CIA device, which included the gear train system, rack and pinion system used for lubrication, rollers, and catheter-like tubing.
A side view of this assembly would make it easier to see how the components interact.
We noticed that the people who use the data are usually not the same people who produce the data, and they often don’t know where to find the information about the data they try to use. Since the Schematizer already has the knowledge about all the schemas in the Data Pipeline, it becomes an excellent candidate to store information about the data. Meet our knowledge explorer, Watson. The Schematizer requires schema registrars to include documentation along with their schemas. The documentation then is extracted and stored in the Schematizer. To make the schema information and data documentation in the Schematizer accessible to all the teams at Yelp, we created Watson, a webapp that users across the company can use to explore this data. Watson is a visual frontend for the Schematizer and retrieves its information through a set of RESTful APIs exposed by the Schematizer.
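As a hypothetical sketch of the architecture described above (the endpoint path and response fields are invented for illustration; the post doesn't document the actual API), a frontend like Watson would fetch a schema record from the Schematizer over REST and surface the registrar-supplied documentation stored alongside it:

```python
import json

def schema_url(schematizer_host, schema_id):
    # Hypothetical route; the actual Schematizer endpoint paths aren't given in the post.
    return f"http://{schematizer_host}/v1/schemas/{schema_id}"

def extract_doc(schema_json):
    """Pull the registrar-supplied documentation out of a schema record.

    Because the Schematizer requires documentation at registration time,
    the doc string travels with the schema definition itself.
    """
    schema = json.loads(schema_json)
    return {"name": schema.get("name"), "doc": schema.get("doc", "")}

# Example: a response body a frontend might receive and render
payload = json.dumps({"name": "biz.review", "doc": "One review event per row"})
info = extract_doc(payload)
```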
plus the impedance of the path from the inverting input to ground i.e. R1 in parallel with R2.
This is incorrect. If R1 is infinite and R2 is 0, the parallel impedance is 0 ohms, but the input impedance is much higher than the input impedance of the op-amp itself, due to feedback making the inputs very similar in voltage.
The input impedance is actually
$$(1 + A_0 B)\cdot Z_\mathrm{ino}$$
where \(A_0\) is the op amp's open-loop gain, \(B\) is the feedback factor, and \(Z_\mathrm{ino}\) is the op amp's own open-loop input impedance.
For the above buffer example, it would be close to \(A_0 Z_\mathrm{ino}\)
See https://electronics.stackexchange.com/q/177007/142
Simpson - Introductory electronics for scientists and engineers section 7.2 Negative Voltage Feedback explains this clearly
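A quick numeric illustration of the formula above (the open-loop gain and op-amp input impedance are assumed values for illustration, not taken from any datasheet):

```python
A0 = 1e5       # open-loop gain (assumed)
Z_ino = 2e6    # op amp's open-loop input impedance, ohms (assumed)

def closed_loop_zin(A0, B, Z_ino):
    """Input impedance with negative feedback: (1 + A0*B) * Z_ino."""
    return (1 + A0 * B) * Z_ino

# Unity-gain buffer: B = 1, so Z_in ~ A0 * Z_ino,
# vastly higher than the op amp's own input impedance
zin_buffer = closed_loop_zin(A0, 1.0, Z_ino)
```

With these numbers the buffer's input impedance comes out around 2×10¹¹ Ω, five orders of magnitude above \(Z_\mathrm{ino}\) alone, which is why the R1-parallel-R2 claim breaks down.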
Want to explore what this might look like in an engineering course. Need to identify examples.
Possible source example for use in an open engineering text.
10–27
Equation 10-27 is wrong:
Adding this and the 100-kΩ resistor noise to the amplifier noise
This is 3 terms (10 MΩ noise, 100 kΩ noise, and amplifier noise), but the equation only includes 2.
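The fix this annotation implies, namely summing all three uncorrelated sources in quadrature, can be sketched as follows (the bandwidth and amplifier-noise figures are placeholders for illustration, not values from the text):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
BW = 1000.0          # assumed noise bandwidth, Hz (placeholder)

def johnson_noise(r_ohms, bandwidth_hz, temp_k=T):
    """RMS thermal (Johnson) noise voltage of a resistor: sqrt(4kTRB)."""
    return math.sqrt(4 * k_B * temp_k * r_ohms * bandwidth_hz)

e_10M  = johnson_noise(10e6, BW)    # 10-MΩ resistor noise
e_100k = johnson_noise(100e3, BW)   # 100-kΩ resistor noise
e_amp  = 1e-6                       # assumed amplifier noise, V RMS (placeholder)

# Uncorrelated sources add as the root of the sum of squares,
# and the sum must include all THREE terms:
e_total = math.sqrt(e_10M**2 + e_100k**2 + e_amp**2)
```

Note that with these values the 10-MΩ term dominates, but dropping a term from the quadrature sum is still an error in the general case.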
10–25
Equation 10-25 has several errors:
10–23
Equation 10-23 is incorrect:
0.1 F
Should be 0.1 μF
The noise calculations have many errors. See annotations on https://via.hypothes.is/http://web.mit.edu/6.101/www/reference/op_amps_everyone.pdf for details
which is 100
actually should be multiplied by the non-inverting noise gain, which is 101
TLE2201
Should be TLC2201
Science and engineering have a long tradition of offering solutions based on natural and built processes to improve quality of life and bring prosperity.
Language can be found related to this in many engineering codes of ethics.
Figure 8 on page 22 shows OA percentages for a number of engineering disciplines, most of which appear near the bottom of the chart.
The poles of the Bessel filter can be determined by locating all of the poles on a circle and separating their imaginary parts by 2/n, where n is the number of poles.
This is incorrect:
To generate the poles of a Bessel filter you need to use root-finding methods on the reverse Bessel polynomials. There's no other shortcut that I'm aware of.
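A minimal sketch of that root-finding approach, using the standard reverse Bessel polynomial coefficients \(a_k = (2n-k)!\,/\,(2^{n-k}\,k!\,(n-k)!)\) (this normalization is an assumption; published tables often rescale the poles for a chosen cutoff definition):

```python
from math import factorial
import numpy as np

def reverse_bessel_coeffs(n):
    """Coefficients a_0..a_n of the reverse Bessel polynomial theta_n(s)."""
    return [factorial(2 * n - k) // (2 ** (n - k) * factorial(k) * factorial(n - k))
            for k in range(n + 1)]

def bessel_poles(n):
    """Poles of an nth-order Bessel filter: roots of theta_n(s)."""
    # numpy.roots expects the highest-degree coefficient first
    coeffs = reverse_bessel_coeffs(n)[::-1]
    return np.roots(coeffs)

# n = 3: theta_3(s) = s^3 + 6 s^2 + 15 s + 15
poles = bessel_poles(3)
```

For n = 3 this yields one real pole near -2.32 and a complex pair near -1.84 ± 1.75j; the poles do not fall on a circle with evenly spaced imaginary parts, consistent with the correction above.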
The step response shows no overshoot
This is incorrect: There is a small amount of overshoot in Bessel filters.
with no overshoot
This isn't quite correct: Bessel filters have a small amount of overshoot.
‘DeExtinction Movement’ (The Long Now Foundation, 2014b). This project supports the genetic engineering of endangered species (altering them physically to become more resilient in the Anthropocene) and the cloning and wholesale re-creation of extinct ones—passenger pigeons, wooly mammoths—work that founder Stewart Brand promotes as ‘genetic rescue’.
The Long Now Foundation and its views open up a whole chasm of moral, ethical, and legal questions with this 'DeExtinction Movement'. How is genetically engineering endangered species a form of 'genetic rescue'? These species are dying out because of man and man's actions, which is a terrible reflection of the worst part of human nature, but it does not give us the right to clone nature and 'whitewash' all that we have done before. Just because we may have the capacity to do so, does not mean we should. We cannot simply decide that extinction is fine because we can create genetically engineered species in the future to 'make up' for our mistakes. How are we expected to learn from our mistakes if we can simply rewind and start again?
Dr. Ken Adam
Dr. Kenneth Adam, who worked on the Environment Protection board during the Mackenzie Valley Pipeline Inquiry, spent the majority of his career working as a professional engineer with numerous engineering companies and private consulting firms. Some of his experiences included working for Templeton Engineering (for additional information, see the annotation for Carson Templeton), I.D. Engineering, Sentar Consultants, and Earth Tech Canada. In addition to working in industry, Dr. Ken Adam had a highly successful career in academia. He was an associate professor at the University of Manitoba working in the Department of Civil Engineering from 1972 to 1976. Dr. Ken Adam specialized in the construction of winter roads, specifically in the Canadian North. Due to his expertise, he was able to publish several articles on the construction of winter roads. The topics of his papers included the environmental impact of snow and ice roads, the development of improved snow blowers and pavers, and much more. His journal article entitled “Snow and Ice Roads: Ability to Support Traffic and Effects on Vegetation” was published in March of 1977 in the Arctic journal Volume 30 Number 1 (Adam and Hernandez 1977). He had another journal article published in the Journal (Water Pollution Control Federation) Volume 46 Number 12 entitled “Hydraulic Analysis of Winnipeg Sump Inlets” in December of 1974 (Adam and Brandson 1974). These are just two of many articles Dr. Ken Adam has published. These papers were researched and published for the government and private business. His clients included the Department of External Affairs, Indian and Northern Affairs Canada, the Izok project, the Environment Protection Board, and others. Currently, Dr. Ken Adam resides in Winnipeg, Manitoba (Elves 2009).
References
Adam, Kenneth M., and Norman B. Brandson. "Hydraulic Analysis of Winnipeg Sump Inlets." Water Environment Federation, 1974: 2755-2763.
Adam, Kenneth, and Helios Hernandez. "Snow and Ice Roads: Ability to Support Traffic and Effects on Vegetation." Arctic, 1977: 13-27.
Elves, Daniel. Libraries of the University of Manitoba. January 2009. https://umanitoba.ca/libraries/units/archives/collections/complete_holdings/ead/html/Adam.shtml#tag_bioghist (accessed April 9, 2017).
Carson Templeton
Carson H. Templeton was born in Wainwright, Alberta. He earned a diploma studying Mining Engineering at the Southern Alberta Institute of Technology (SAIT) in Calgary, Alberta. He worked at the Madsen Red Lake Mine in Northwest Ontario as an Assistant Assayer, Boat Boy, and Post Office Manager. He attended the University of Alberta to continue his studies of Mining Engineering and graduated with a Bachelor of Science. During World War II, Templeton worked on the Canol Pipeline Project. He then helped construct airports alongside the Alaska Highway for military use. In 1948, Templeton was appointed Assistant Chief Engineer of the Fraser Valley Dyking Board. In 1950, Templeton was appointed Chief Engineer of the Greater Winnipeg Dyking Board. In 1955, Templeton founded a consulting engineering firm which he named the Templeton Engineering Company. Before the Unicity Amalgamation of Winnipeg in 1972, his company worked as the City Engineer for several small cities in Canada. His company performed engineering estimates for the Royal Commission on Flood Cost-Benefits. These calculations led to the construction of the Winnipeg Floodway. Additionally, Carson Templeton’s consulting engineering firm conducted research that supported the writing of “Snow and Ice Roads: Ability to Support Traffic and Effects on Vegetation” by Kenneth Adam and Helios Hernandez (Adam and Hernandez 1977). In 1966, his company merged with Montreal Engineering and Shawinigan Engineering to form Teshmont Consultants Ltd. Teshmont Consultants Ltd. has completed over 50 percent of the world’s high-voltage, direct current projects. Templeton served as the Chairman of the Alaska Highway Pipeline Panel and Chairman of the Environmental Protection Board during the 1970s. As the Chairman of the Environmental Protection Board, Templeton orchestrated the hearing process for the Environmental Impact Assessments for the Mackenzie Valley Pipeline Inquiry (Winnipeg Free Press 2004).
References
Adam, Kenneth, and Helios Hernandez. "Snow and Ice Roads: Ability to Support Traffic and Effects on Vegetation." Arctic, 1977: 13-27.
Winnipeg Free Press. Carson Templeton OC. October 10, 2004. http://passages.winnipegfreepress.com/passage-details/id-89334/Carson_Templeton_#/ (accessed April 8, 2017).
This is a great, short guide for optimizing pull requests for review-ability.