209 Matching Annotations
  1. Aug 2025
    1. https://bytes.vadeai.com/how-one-clojure-function-destroyed-agent-framework-completely/

      How Clojure's iteration Function Disrupts Agent Frameworks

      Agent Framework Pitfalls

      Traditional agent frameworks like CrewAI introduce significant complexity through configuration files, rigid agent definitions, and orchestration mechanics. They still require developers to manage state, orchestrate tasks, handle errors, and track resources, yet abstract away critical decisions, making debugging and customization difficult[1].

      The Simplicity of iteration

      Clojure 1.11 introduced the iteration function, which models any sequential, stateful process — including agentic workflows — far more simply than the framework approach. Its key parameters:

      • step: Does the work (e.g., LLM call, tool execution)
      • initk: Starting state (prompt, initial data)
      • vf: Extracts the meaningful result from each step
      • kf: Determines the next state for the following iteration
      • somef: Decides if the workflow continues or stops

      This aligns perfectly with agentic workflows:

      • step: agent action
      • initk: initial task/state
      • vf: extract agent output
      • kf: update agent context/state
      • somef: goal/termination checker[1]
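
      The same driver loop can be sketched outside Clojure. Below is a minimal Python analogue of `iteration` (hand-rolled here; Clojure's real version additionally returns a lazy, reducible sequence), driving a toy three-step "agent" whose state is just a counter:

```python
def iteration(step, *, initk, somef, vf, kf):
    """Minimal analogue of Clojure's `iteration`: drive a stateful
    process from a continuation token until somef says stop."""
    k = initk
    while True:
        ret = step(k)       # do the work (e.g. an LLM call)
        if not somef(ret):  # termination check
            return
        yield vf(ret)       # extract the meaningful value
        k = kf(ret)         # derive the state for the next round

# Toy "agent": the state is a counter; stop after three steps.
def step(k):
    return {"state": k, "output": f"step-{k}"} if k < 3 else None

results = list(iteration(step,
                         initk=0,
                         somef=lambda r: r is not None,
                         vf=lambda r: r["output"],
                         kf=lambda r: r["state"] + 1))
# results == ["step-0", "step-1", "step-2"]
```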

      Real World Example

      A basic agent workflow with iteration in Clojure:

      ```clojure
      (defn simple-agent-workflow [initial-prompt max-iterations]
        (let [llm-instance (create-llm-instance)
              step-fn (fn [{:keys [iteration prompt response]}]
                        (when (< iteration max-iterations)
                          (let [messages     [(create-message :user prompt)]
                                new-response (generate llm-instance messages)
                                next-prompt  (extract-next-task new-response)]
                            {:iteration  iteration
                             :prompt     prompt
                             :response   new-response
                             :next-token {:iteration (inc iteration)
                                          :prompt    next-prompt
                                          :response  new-response}})))]
          (iteration step-fn
                     :somef (fn [res] (some? res))
                     :vf    identity
                     :kf    :next-token
                     :initk {:iteration 0 :prompt initial-prompt :response {}})))
      ```

      Production variants in Vade AI simply expand this pattern for live API streaming, logging, and complex branching, without introducing unnecessary abstraction or opaque state[1].

      Benefits Over Frameworks

      • Complete control: Every workflow step and state transition is transparent and customizable.
      • Easy debugging: Print or inspect state at any moment; no special debugging tools needed.
      • Flexible termination: Workflow can halt based on any custom logic, not just predefined callbacks.
      • Resource efficiency: No framework overhead, predictable and low memory footprint.
      • Streaming and real-time: Can process operations incrementally as LLM responses stream in, with immediate visibility for the user[1].
      • Composability: Integrates natively with the rest of Clojure — no framework lock-in.

      When to Use This Approach

      The iteration pattern is ideal when you need:

      • Custom agent behaviors
      • Transparent workflows
      • Performance optimization
      • Complex branching or termination logic
      • Deep integration with Clojure systems

      Especially powerful for research and analysis, planning systems, validation pipelines, and unique business logic that standard frameworks struggle to express[1].

      Key Takeaways

      Frameworks often create more complexity than they solve. By embracing Clojure’s iteration, you implement agentic workflows with less code, greater clarity, and full control. This enables adaptive, resource-aware, and highly debuggable systems — proven at scale inside Vade AI[1].

      Citations: [1] How One Clojure Function Destroyed Agent Framework Completely https://bytes.vadeai.com/how-one-clojure-function-destroyed-agent-framework-completely/

      • Purpose of the Smalltalk project

        "The purpose of the Smalltalk project is to provide computer support for the creative spirit in everyone."

      • Emphasis on enabling creativity via computing hardware and software.

      • Focus of research areas

        "We have chosen to concentrate on two principle areas of research: a language of description (programming language) ... and a language of interaction (user interface)..."

      • Programming language as an interface between mental models and hardware.

      • User interface as a communication bridge between human and computer systems.

      • Development process

        "Our work has followed a two- to four-year cycle that can be seen to parallel the scientific method."

      • Iterative cycles aligned with hypothesis-experiment-validation.

      • Two levels of communication

        "Explicit communication includes the information that is transmitted in a given message. Implicit communication includes the relevant assumptions common to the two beings."

      • Distinction between explicit message content and shared implicit context.

      • System complexity and dependencies

        "If there are N components in a system, then there are roughly N-squared potential dependencies between them."

      • Larger systems quadratically increase the chances of unwanted interactions.

      • Design goal to minimize interdependence

        "...a computer language should be designed to minimize the possibilities of such interdependence."

      • Isolation of components reduces complexity-related risks.

      • Message-sending metaphor for modularity

        "The message-sending metaphor provides modularity by decoupling the intent of a message ... from the method used by the recipient to carry out the intent."

      • Separates communication intent from implementation detail.

      • Protection of structural information

        "All access to the internal state of an object is through this same message interface."

      • Encapsulation enforced through message-based access.
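
      The message-interface idea in these two annotations can be sketched in Python (a hypothetical `Account` object; in Smalltalk the message send is the language's primitive, not a library convention as here):

```python
class Account:
    """Encapsulation via a message interface: internal state is
    reached only by sending messages, never by direct access."""

    def __init__(self):
        self._balance = 0  # internal state, hidden behind messages

    def send(self, selector, *args):
        # The message-sending metaphor: the sender states intent
        # (a selector); the receiver chooses the method that
        # carries it out, so the implementation stays swappable.
        return getattr(self, "_" + selector)(*args)

    def _deposit(self, amount):
        self._balance += amount
        return self._balance

    def _balance_of(self):
        return self._balance

acct = Account()
acct.send("deposit", 100)
acct.send("balance_of")  # 100
```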

      • Reducing system complexity via grouping

        "The complexity of a system can often be reduced by grouping similar components."

      • Logical grouping organizes structure and reduces mental overhead.

      • Smalltalk’s class mechanism

        "A class describes other objects — their internal state, the message protocol they recognize, and the internal methods for responding to those messages."

      • Classes define state, protocol, and behavior for their instances.

      • Instances and meta-classes

        "Even classes themselves fit into this framework; they are just instances of class Class..."

      • Meta-level reflection: classes are themselves objects.

      • Nature of a user interface

        "A user interface is simply a language in which most of the communication is visual."

      • UI framed as a predominantly visual language.

      • Role of esthetics in UI

        "Because visual presentation overlaps heavily with established human culture, esthetics plays a very important role..."

      • Design aesthetics influence user understanding and comfort.

      • Flexibility in UI design

        "Since all capability of a computer system is ultimately delivered through the user interface, flexibility is also essential here."

      • UI adaptability is critical for capability delivery.

      • Object-oriented principle enabling UI flexibility

        "An enabling condition for adequate flexibility of a user interface can be stated as an object-oriented principle."

      • OOP structures support adaptable interface design.

    1. Summary of the Discussion on SwiftUI: Understanding Identity, Lifetime, and Dependencies

      1. Introduction to SwiftUI and Its Declarative Nature

         SwiftUI operates as a declarative UI framework where you describe UI states, and the framework manages their actualization. "That means that you describe what you want for your app at a high level, and SwiftUI decides exactly how to make it happen."

      2. Understanding Identity in SwiftUI

         SwiftUI views have identity to distinguish elements as the same or different across updates, critical for UI transitions and state management. "Identity is how SwiftUI recognizes elements as the same or distinct across multiple updates of your app."

      3. Concept of View Identity Using Practical Examples

         Demonstrated using the "Good Dog, Bad Dog" app example, explaining how identity influences view transitions and behavior. "That distinction actually matters a great deal because it changes how our interface transitions from one state to another."

      4. Explicit vs. Structural Identity

         Discussed two types of identity:

         • Explicit identity is assigned using identifiers like tags. "Explicit identity is powerful and flexible, but does require that someone, somewhere keeps track of all those names."
         • Structural identity is derived from the view's type and position in the hierarchy. "SwiftUI uses the structure of your view hierarchy to generate implicit identities for your views so you don't have to."

      5. Role of Lifetime in SwiftUI

         Explained how SwiftUI manages the life cycle of views and data by associating views' identity over time. "Lifetime is how SwiftUI tracks the existence of views and data over time."

      6. Impact of Dependencies on UI Updates

         Dependencies are inputs like state variables or environmental settings that trigger UI updates when they change. "Dependencies are how SwiftUI understands when your interface needs to be updated and why."

      7. How SwiftUI Manages State and Identity

         Discussed how State and StateObject help preserve state across the lifetime of views tied to their identity. "State and StateObject are the persistent storage associated with your view's identity."

      8. Advanced Use of Identity with SwiftUI's ForEach

         ForEach leverages identifiers for efficient updates and animations, showing how identity can impact performance and correctness. "Choosing a good identifier is your opportunity to control the lifetime of your views and data."

      9. Best Practices for Using Identity

         Emphasized the importance of stable and unique identifiers to improve performance and prevent state loss. "An identifier that isn't stable can result in a shorter view lifetime."

      10. Troubleshooting and Optimization Techniques

         Discussed common pitfalls with AnyView and alternatives using view builders to optimize SwiftUI’s understanding and performance. "Having too many AnyViews will often make code harder to read and understand."

      Key Takeaways

      • Identity, lifetime, and dependencies are core concepts that determine how SwiftUI manages and updates the UI.
      • Effective management of these properties can significantly enhance the performance and predictability of SwiftUI applications.
      • Developers are encouraged to use stable and unique identifiers and understand the implications of explicit and structural identities on their code.

      This summary focuses on the critical aspects discussed in the tech talk, ensuring a comprehensive understanding of the primary themes and practical implications for SwiftUI developers.
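
      The point about stable identifiers can be illustrated outside SwiftUI with a keyed diff: per-item state survives only while its identifier does (a sketch of the general idea, not SwiftUI's actual algorithm):

```python
def diff(old_ids, new_ids):
    # Keyed diffing: an item whose identifier appears in both
    # generations keeps its identity (and any attached state);
    # removed ids lose it; new ids start fresh.
    old, new = set(old_ids), set(new_ids)
    return {"kept": old & new, "removed": old - new, "inserted": new - old}

# Stable ids: a row whose id survives an update keeps its
# identity, so attached state (scroll position, @State) survives.
stable = diff(["a", "b", "c"], ["a", "b", "c"])

# Unstable ids (e.g. array indices): deleting the first element
# shifts every index; id 2 disappears and its state is torn down
# even though the row conceptually still exists.
unstable = diff([0, 1, 2], [0, 1])   # removed == {2}
```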

    1. Eric Normand
      • Introduction

      • Eric Normand introduces himself and the purpose of the talk: "The title of this talk is building composable abstractions...to develop a process to do that and also I'd like to start a discussion about how we can do that better."

      • Importance of Abstractions

      • Abstractions are critical for creating complex applications from small, simple problems. "A lot of people are able to solve small problems like Fibonacci...when they finally want to create an app...they don't know how to take the tools that they've learned and turn them into software."

      • Map of the Talk

      • The talk covers the importance of abstractions, the process of developing them, an example, and concluding thoughts. "Here's sort of the map of the talk: why focus on abstractions, the process, an example abstraction, and concluding thoughts."

      • Why Focus on Abstractions?

      • Refactoring introduces the distinction between the behavior of the code and its implementation. "In the general industry we now have this idea that there's a difference between the behavior of the code and the actual implementation."

      • Example of Newtonian mechanics replacing Aristotelian physics illustrates that some systems can't be refactored but need to be redesigned from scratch. "You can't refactor Aristotle into Newton."

      • Objectives of the Abstraction Process

      • The process should produce good, Newtonian-style abstractions, be iterative, accessible to all, and foster collaboration. "It has to consistently produce good abstractions...an iterative process...anyone can do it...fosters collaboration."

      • Example of Vector Graphics System

      • Normand uses a simple vector graphics system as an example to demonstrate the process of building abstractions. "This is the example we're going to develop: a vector graphics system."

      • Step 1: Physical Metaphor

      • Choose a metaphor to capture important information. "The idea behind this is to choose a metaphor that will capture the important information in your program."

      • Shapes and construction paper is chosen as the metaphor. "Shapes and construction paper...I cut out shapes like rectangles and ellipses...and then I can move them around."

      • Step 2: Meaning Construction

      • Convert physical intuition into precise mathematical language, focusing on the interface. "We're going to be focusing on the interface right now...precise mathematical language."

      • Definitions in Clojure for different components like color, shape, and transformations. "We're defining two types here: cutout and shape...defining a function that takes a cutout and returns a shape."
      • Importance of preserving shape and color, overlay order, and rotation and translation independence. "Preservation of shape...preservation of color...overlay order...rotation and translation independence."
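
      The "precise mathematical language" step can be sketched as data plus pure functions. A Python sketch (illustrative names, not the talk's actual Clojure definitions) in which shape and color are preserved and rotation/translation act on independent fields, so the two operations commute:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cutout:
    shape: str           # e.g. "rect", "ellipse"
    color: str
    x: float = 0.0
    y: float = 0.0
    angle: float = 0.0   # rotation about the cutout's own center

def translate(c, dx, dy):
    # preserves shape, color, and angle; moves position only
    return Cutout(c.shape, c.color, c.x + dx, c.y + dy, c.angle)

def rotate(c, da):
    # preserves shape, color, and position; changes angle only
    return Cutout(c.shape, c.color, c.x, c.y, c.angle + da)

# Rotation and translation independence: order doesn't matter.
c = Cutout("rect", "red")
assert rotate(translate(c, 5, 0), 90) == translate(rotate(c, 90), 5, 0)
```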

      • Step 3: Implementation

      • Implement the system based on the constructed meaning, ensuring it can be refactored to different requirements like SVG output. "Implementation...we already know what to do...refactor from quill to SVG."

      • Summary of Process

      • Use a physical metaphor, define the parts and their relationships in mathematical language, and refactor for implementation details. "Use a physical metaphor...define the parts and their relationships...refactor to get all the meta properties."

      • Corollaries for the Process

      • Know your domain, constructs, and refactoring techniques. "Know your domain...know your constructs...know your refactoring."

      • Conclusion

      • Encourages further learning and provides resources. "Please go to my site...download the slides...sign up for my newsletter."

      • ClojureScript has excelled in standard UI patterns but now aims to harness modern browser APIs for high-performance use cases.

      "The majority of Clojurescript application development and community discussions seems to be focused on improving standard UI implementation patterns and the general workflow of how we can build web applications better, easier and faster." (Medium)

      • The workshop’s goal was to probe ClojureScript’s internals, identify bottlenecks, and introduce technologies like WebGL, WebRTC, WebWorkers, and Emscripten.

      "So for this workshop I chose to look more below the surface of Clojurescript, analyze problem areas, examine possible optimization strategies and above all introduce people to a number of modern web technologies (WebGL, WebRTC, WebWorkers, Emscripten), techniques & tools offering possible routes to use the language in a sound and elegant way to work with these features." (Medium)

      • A six-stage series of Conway’s Game of Life implementations, from naive to optimized, cut frame time from 10,840 ms to 16.5 ms (~650× faster) on a 1024×1024 grid.

      "Six implementations of Conway’s Game of Life — from naive (but idiomatic & slow) to optimized Clojurescript using typed arrays and direct pixel manipulations (10,840 ms / frame vs 16.5 ms / frame = ~650× faster for a 1024×1024 grid)." (Medium)

      • A compile-time macro version of get-in eliminated temporary vector allocations and reduce calls, boosting lookup speed from 205.18 ns to 43.61 ns (~5× faster).

      "Benchmarking this example with criterium under Clojure (which has somewhat different/faster protocol dispatch than in Clojurescript), the macro version results in 43.61ns vs 205.18ns for the default get-in (~5× faster)." (Medium)

      • Switching from nested vectors to a flat 1D vector enabled nth-based indexing (~6× speed-up), before typed arrays and loop-based pixel updates removed millions of function calls for the full ~650× gain.

      "The more obvious improvement to speed up the simulation was using a flat 1D vector to encode the grid and calculate cell indices for the 2D coordinates ... gain a ~6× speed up ... Since all our data ... are stored in typed arrays ... and altogether gained a ~650× speedup compared to the original." (Medium)

      • Adopting transduce for neighbor counting proved ~15–20% slower than map & reduce, highlighting that idiomatic functions can sometimes underperform.

      "One of the intermediate steps ... was using transduce instead of map & reduce to compute the number of alive neighbor cells, however this ended up actually being ~15–20% slower in this case." (Medium)

      • Effective WebGL programming demands deep knowledge of geometry, linear algebra, the OpenGL state machine, GPU pipelines, and GLSL, making it daunting for newcomers.

      "To anyone interested in directly utilizing the GPU in the browser, WebGL is a huge & fascinating topic, but it can also be very daunting for newcomers to graphics programming, since efficient use of it requires a multitude of prerequisite knowledge and terminology about 2D/3D geometry, linear algebra, spatial thinking in multiple spaces (coordinate systems), low-level data organization, the OpenGL state machine ... GPU processing pipelines, knowledge of the GLSL shading language, color theory etc." (Medium)

      • The thi.ng/geom library employs Clojure maps for semi-declarative OpenGL/WebGL buffer and shader specifications, while preserving explicit control over the GL state machine.

      "The thi.ng/geom library takes a semi-declarative approach to working with OpenGL/WebGL in that it’s extensively using Clojure maps to define various geometry and shader specifications, which are then compiled into the required data buffers & GLSL programs ... but at no point is it hiding the underlying layer, giving advanced users full control over the GL state machine." (Medium)

      • Shadergraph addresses GLSL code reuse by offering transitive dependency resolution, a library of common functions, compile-time minification, and metadata extraction for tooling.

      "To address this in Clojurescript from early on, we can use the thi.ng/shadergraph library, which provides us with: a transitive dependency resolution mechanism for GLSL code ... a growing library of pure, commonly used GLSL functions (lighting, color conversion, matrix math, rotations, effects etc.) ... and a basic compile-time shader minifier ... Clojure meta data extraction of the defined GLSL functions ..." (Medium)

      • A hands-on WebRTC demo showed how to stream a camera feed into Shadertoy-style WebGL shaders for real-time video FX processing.

      "I prepared a small example combining a WebRTC camera stream with Shadertoy-like WebGL image processing using a bunch of effect options." (Medium)

      • True parallelism in the browser comes from WebWorkers—unlike core.async’s simulated concurrency—and relies on isolated modules, message passing, and transferable ArrayBuffers for efficient data sharing.

      "However, the currently only way to obtain real extra compute resources of a multi-core CPU in JavaScript is to use WebWorkers ... WebWorker code needs to be loaded from a separate source file and can only communicate with the main process via message passing. By default, the data passed ... is copied, but some types (e.g. ArrayBuffers) can also be transferred ..." (Medium)

      • Emscripten’s LLVM-based compiler targets asm.js (and soon WebAssembly), enabling C/C++ modules to outperform idiomatic ClojureScript for math-heavy and mutable-data tasks.

      "Emscripten ... a LLVM-based transpiler for C and C++ to asm.js ... the resulting asm.js code almost always performs noticeably faster than the Clojurescript version ... With WebAssembly on the horizon, it’s maybe a good time to invest some time into some “upskilling” ..." (Medium)

      • The workshop’s capstone was a C-based 3D particle system demo, using Emscripten’s JavaScript ArrayBuffer heap and typed arrays to pack 36-byte particle structs tightly and avoid copying overhead.

      "For the final exercise ... we implemented a simple 3D particle system in C, compiled it with Emscripten and learned how to integrate it into a Clojurescript WebGL demo ... The Emscripten runtime emulates the C heap as a single, large JS ArrayBuffer ... Each particle only takes up 36 bytes ... all particles in this array are tightly packed ..." (Medium)

  2. Jul 2025
    1. "one of the best critiques of modern AI design comes from a 1992 talk by the researcher Mark Weiser where he ranted against “copilot” as a metaphor for AI."

      • Weiser’s critique of the “copilot” metaphor for AI is foundational to this argument.

      "He gave this example: how should a computer help you fly a plane and avoid collisions?... The agentic option is a 'copilot' — a virtual human who you talk with to get help flying the plane. If you’re about to run into another plane it might yell at you 'collision, go right and down!'"

      • The “copilot” model is described as an interactive assistant giving explicit instructions or alerts.

      "design the cockpit so that the human pilot is naturally aware of their surroundings. In his words: 'You’ll no more run into another airplane than you would try to walk through a wall.'"

      • Weiser advocates for UIs that naturally augment human situational awareness, eliminating the need for explicit assistant intervention.

      "Weiser’s goal was an 'invisible computer'—not an assistant that grabs your attention, but a computer that fades into the background and becomes 'an extension of [your] body.'"

      • The ultimate aim: seamless, ambient support that integrates with human perception.

      "the Head-Up Display (HUD), which overlays flight info like the horizon and altitude on a transparent display directly in the pilot’s field of view."

      • The HUD is presented as a practical realization of Weiser’s concept: information is ambiently present, not actively disruptive.

      "A HUD feels completely different from a copilot! You don’t talk to it. It’s literally part invisible—you just become naturally aware of more things, as if you had magic eyes."

      • HUDs differ fundamentally from copilots by passively enhancing awareness instead of communicating via dialogue.

      "spellcheck isn’t designed as a 'virtual collaborator' talking to you about your spelling. It just instantly adds red squigglies when you misspell something! You now have a new sense you didn’t have before. It’s a HUD."

      • Spellcheck is given as a familiar analogy for AI-as-HUD: subtle, always-on augmentation of cognition.

      "use AI to build a custom debugger UI which visualizes the behavior of my program!... With the debugger, I have a HUD! I have new senses, I can see how my program runs."

      • Custom visual UIs for debugging exemplify HUD-like designs in AI-driven tools, enabling deeper understanding rather than focusing on transactional automation.

      "Both the spellchecker and custom debuggers show that automation / 'virtual assistant' isn’t the only possible UI. We can instead use tech to build better HUDs that enhance our human senses."

      • Non-agentic, perception-extending UI paradigms are positioned as powerful, sometimes preferable alternatives to assistant-like AI.

      "I don’t believe HUDs are universally better than copilots!... anyone serious about designing for AI should consider non-copilot form factors that more directly extend the human mind."

      • While not dismissing assistants, the author stresses the importance of exploring HUD-like, sense-extending UI for ambitious AI design.

      "routine predictable work might make sense to delegate to a virtual copilot / assistant. But when you’re shooting for extraordinary outcomes, perhaps the best bet is to equip human experts with new superpowers."

      • The conclusion is a nuanced tradeoff: assistants excel in predictable routines, but empowering human expertise requires HUD-style augmentation.
      • Core Concept: The paper introduces Universalis, an AI-first, program-synthesis framework and language designed to be read by human knowledge workers but generated by AI models (LLMs). > "this paper outlines the high-level design of an AI-first, program-synthesis framework built around a new programming language, Universalis, designed for knowledge workers to read, optimized for our neural computer (Automind) to execute, and ready to be analyzed and manipulated by an accompanying set of tools."

      • Design Philosophy: Universalis prioritizes readability for domain experts over writability for professional developers, making code intuitive and easier for AI to generate accurately. > "Unlike traditional programming languages, which prioritize syntax and structure optimized for writing by professional developers, Universalis is designed with the philosophy that code should be read by domain experts and written by machines."

      • Structure and Syntax: The language syntax is analogous to "literate Excel spreadsheet formulas," embedding logical predicates inside [...] hedges surrounded by natural language explanations. > "Think of Universalis clauses as some kind of literate Excel spreadsheet formulas such as [@D is (@S-@B)] over named tables, or relations, enclosed in hedges surrounded by natural language explanation, where cells in the table correspond to variables such as @B, @S, and @D..."

      • AI Safety through Contracts: It natively embeds pre- and post-conditions (contracts) into the language, providing a formal and extensible method to ensure the logical correctness and safety of AI-generated computations. > "By embedding pre- and post-conditions directly into the language semantics, Universalis provides a pragmatic and extensible method for implementing AI safety."
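
      The contract idea can be sketched as a Python decorator (a generic pre/post-condition check, not Universalis's actual semantics):

```python
def contract(pre=None, post=None):
    # Decorator sketch of pre/post-conditions: validate inputs
    # before running the body, and the result afterwards, so a
    # generated computation cannot silently violate its spec.
    def wrap(f):
        def checked(*args):
            if pre is not None:
                assert pre(*args), f"precondition failed for {f.__name__}"
            result = f(*args)
            if post is not None:
                assert post(result), f"postcondition failed for {f.__name__}"
            return result
        return checked
    return wrap

@contract(pre=lambda s, b: s >= 0 and b >= 0,
          post=lambda d: isinstance(d, (int, float)))
def margin(sell, buy):
    return sell - buy

margin(120, 100)   # passes both checks, returns 20
```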

      • Readable Conditionals: Conditional logic is structured as simple checklists, making decision-making processes clear and intuitive for human readers while still being executable by the AI interpreter, Automind. > "By structuring conditionals as checklists and explaining each branch in natural language, Universalis ensures that the logic is clear and intuitive for the human reader, while the Universalis interpreter Automind can still recognize them as control-flow decision points..."

      • Loopless Bulk Processing: The framework handles operations on data collections via implicit broadcasting, similar to NumPy or modern Excel, removing the need for explicit loops that can be a distraction for non-programmers. > "In Universalis, this is handled by implicitly broadcasting operations on single elements to collections, similar to how NumPy or Pandas operate in Python or how dynamic array formulas and spilled array behavior in Excel allow for loopless programming."
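
      The loopless lifting can be sketched in plain Python (NumPy does this natively; variable names echo the paper's `@S`/`@B`/`@D` example):

```python
def broadcast(f, x, y):
    # Apply scalar f element-wise, lifting scalars against lists,
    # so "column" operations need no explicit loop in user code.
    if not isinstance(x, list) and not isinstance(y, list):
        return f(x, y)
    xs = x if isinstance(x, list) else [x] * len(y)
    ys = y if isinstance(y, list) else [y] * len(xs)
    return [broadcast(f, a, b) for a, b in zip(xs, ys)]

# "@D is (@S - @B)" applied over whole columns at once:
sell = [120, 95, 143]
buy  = [100, 90, 150]
diff = broadcast(lambda s, b: s - b, sell, buy)   # [20, 5, -7]
```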

      • Accessible Data Queries: It features query comprehensions that use a structured, natural-language style for complex data operations like filtering and aggregation, making them more approachable than traditional query languages like SQL. > "By focusing on a structured natural-language approach for comprehending queries, Universalis ensures that even those with minimal experience in programming can perform advanced data manipulations."

      • Simplified Data Extraction: The language includes powerful pattern-matching capabilities, allowing users to easily extract specific information from complex, nested data structures like JSON without writing complicated parsing code. > "This is where Universalis excels with its pattern-matching capability. Users can simply specify the patterns they want to match within the JSON structure..."

      • Intentional Representation: Rather than generating final, concrete syntax, the Automind system produces an abstract, intentional representation of the user's intent, which can then be rendered into different formats. > "Since Universalis programs are trying to capture the high-level intent of the user as well, Automind does not generate the concrete syntax seen in the examples so far but instead creates an abstract, intentional representation of the Universalis code..."

      • Minimalist Language Design: The language is intentionally kept minimal, focusing on core features like sequential composition, implicit looping, and nested dataframe queries to maintain readability and compatibility with formal verification tools. > "For Universalis, we intentionally keep the language minimal, focusing on sequential composition, implicit looping by lifting singleton operations over collections, and fully nested dataframe queries."

    1. The Utopian Vision of Smalltalk

      Smalltalk as a vision, not just a language:

      “Smalltalk is a vision of the world as it should be, not necessarily as it is but the way it's supposed to be.”

      A future imagined where Smalltalk has won:

      “As we all know, Smalltalk has won and a new age, a Utopia is upon us.”


      Language and System Design Philosophy

      Smalltalk wasn’t originally a full language, but a live object system:

      “Smalltalk was never a programming language; it was a programming system… there is no way to express a class as a linguistic construct.”

      Early Smalltalk used a clever hack for class definition:

      “Yes a class just happens to be this object that knows how to make other objects… but it is a hack and it's important to recognize that.”

      Textual syntax was later added to enable integration with tools like source control:

      “We came up with an actual textual syntax for Smalltalk… interacting with source control systems became really easy.”


      Resolving Dialect & Library Fragmentation

      Overcoming dialect fragmentation:

      “We realized all this and came together to define a Common Language.”

      Library standardization was harder but necessary:

      “The situation with the libraries was a bit of a mess… but we realized that the benefits of having a standard… outweighed [vendor-specific advantages].”


      Reflection and Mirrors

      Original reflection API mixed base and meta levels:

      “These class objects… are playing two roles… that architecture was replaced with mirrors.”

      Mirrors cleanly separate base-level and meta-level:

      “We clearly separated the base level and the meta level… that has all kinds of advantages for deployment, distribution, security.”


      Mobile and UI Integration

      Early design enabled seamless remote mobile dev:

      “We were already running images that could manipulate other images… fantastic development experiences almost immediately.”

      Binding to native UI was essential for adoption:

      “We realized early on that we should… bind to Native stuff… they run natively on all the platforms.”

      Tool evolution toward navigation-based UIs:

      “We wouldn’t dream of using the 40-year-old design of a browser… we evolved our tools.”


      Foreign Function Interfaces and “Alien” Objects

      Unified model for system integration:

      “These things outside of Smalltalk… they're second class objects… but they were objects.”

      Alien objects replaced clunky primitive syntax:

      “We didn’t need these hacky things like the Primitive syntax… everything was an object.”


      Modularity and Deployment

      Object ecosystems are difficult to transfer without structure:

      “Fish and objects live in an environment… if you want the fish to be transferred and survive you have to bring its friends.”

      Deployment as serialized object graphs:

      “We realized that an application… was also an object… we just had to serialize it.”

      Avoided “extraneous concepts like packages”:

      “We used the concepts of classes and objects to solve our distribution and modularity problems.”


      Optional Typing and Pluggable Type Systems

      Types as useful but not mandatory:

      “Types can be useful at times… we certainly don’t want them telling us how to live.”

      Static analyses used for optimization:

      “Extra information helps… we could have multiple analyses that didn’t conflict.”


      Web and Live Environments

      Ahead of the curve with web-based IDEs:

      “We built complete programming environments with all their features running in the browser long before anyone else.”

      True liveness beyond class browsers:

      “We never want to look at Dead code… environments evolved to show Exemplar values.”

      Integration of ML into the environment:

      “We started to incorporate [ML] in our live programming environments.”


      Performance and JIT

      Ahead of AOT compilation trend:

      “We had systems that would… have a database of compiled methods… ready to apply them instantly on Startup.”

      Addressed Apple’s no-JIT constraint early:

      “We already had techniques for that… just deploy that and turn off the jit.”


      Security through Object Capabilities

      Reinvention of capability-based security:

      “The capability you want is an object… the only damage you can do is through a message send.”

      Enforced encapsulation and access control:

      “We needed an access control model… public, protected, and private messages.”
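The capability idea can be sketched in a few lines (a hypothetical Python illustration, not any particular Smalltalk's security model): holding an object reference is the authority, and attenuation wraps a resource in an object that forwards fewer messages.

```python
class File:
    """The resource; a direct reference is full authority over it."""
    def __init__(self, text):
        self._text = text
    def read(self):
        return self._text
    def write(self, text):
        self._text = text

class ReadOnly:
    """Attenuated capability: forwards only the `read` message."""
    def __init__(self, file):
        self._file = file
    def read(self):
        return self._file.read()

f = File("secret")
cap = ReadOnly(f)             # grant authority by handing over a reference
print(cap.read())             # secret
print(hasattr(cap, "write"))  # False: the only damage possible is via `read`
```

Because `cap` simply has no `write` method, access control reduces to ordinary encapsulation: no reference, no authority.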


      Education and Longevity

      Revolutionary ideas in education:

      “Smalltalk had these ideals of Education from day one.”

      Avoided student drop-off by staying dominant:

      “We short circuited [students leaving] because of all our previous successes.”

      Hypertext-based live documentation:

      “We could put live widgets embedded in text long before the worldwide web.”


      Satirical Punchline

      Entire talk is satirical utopia:

      “This is the world as it should be… imagine if the world wasn’t like the way I described it…”

      Ironic jab at real-world language use:

      “Think if one of the world’s largest… sites ran on PHP… we all know Facebook runs on Smalltalk.”

      • Core Concept: Missionary is a Clojure library for supervised data flow programming, providing a universal language for handling asynchronous event sources in both frontend and backend applications.

        "the missionary is a closure library for supervised data flow programming so this is a common language to manipulate asynchronous event sources um that you can use from anywhere in the stack"

      • Problem Addressed: The library tackles the difficulty of resource management in concurrent programming, where resources (like circuits, UI components, or threads) have a life cycle that must be explicitly managed.

        "this is in my "pinion the main reason why concurrency is hard a resource is an object with a Time dimension because it has a Time Dimension it also has a life cycle"

      • Limitations of Garbage Collection: Traditional garbage collection is insufficient for this problem due to bidirectional references between producers and consumers, leaving the producer unaware if a resource is still needed.

        "garbage collection doesn't help the reason why it doesn't fa is because um the reference to the resource is B directional there is it is referenced by the consumer and it is also referenced by the producer"

      • Proposed Solution: The solution is explicit structure, which enables automatic, demand-driven resource management by binding the resource's allocation lifespan to the period when its data is actually required.

        "what we want to achieve is to bind um the time span where the resource is allocated to the time span when the data produced by this resource is actually needed so it's demand WR resource maintenance"

      • DAG Supervision for Shared Resources: For complex scenarios with shared resources, missionary moves from a simple tree supervision model to a Directed Acyclic Graph (DAG) supervision model, where a resource can have multiple supervisors.

        "so now the supervision uh three is not a three anymore it's a dag has a shared resource here"

      • Core Supervision Policy: The policy for managing shared resources is "allocate on first use and dispose on last," elegantly summarized by the real-world analogy of "the last one turns off the lights."

        "the last one turns of the lights uh so this is a dck for the real world"

      • Improving on Functional Effect Systems: Missionary was conceived to improve upon functional effect systems like Rx by properly implementing continuous time, which requires synchronicity semantics to avoid event-ordering issues.

        "the IDE was uh continuous time is a good idea...the problem is um well at this time the popular FX system in Java was RX and RX doesn't properly Implement continuous time uh so the reason why it doesn't properly Implement continuous time is because it has no synchronicity semantics"

      • Glitch-Free Propagation: A key innovation is a bidirectional flow protocol and a propagation algorithm that uses a priority queue to traverse the dependency graph, ensuring atomic updates and preventing inconsistent intermediate states, known as "glitches."

        "to solve the glitch problem...the string protocol doesn't support that and so therefore the idea is can we have our and El two"

      • Bidirectional Flow Protocol: The protocol allows a producer to signal that its state is invalidated before a new value is computed, enabling a two-phase process of invalidating dependent nodes and then recomputing them in the correct order.

        "bidirectional IPC is what we need to implement uh back pressure and solve the git problem that I talk just before"

      • Successful Validation: The model has been battle-tested at scale through its use in Electric Clojure for serious business applications, proving its robustness and success in managing complex, dynamic, and massively concurrent UIs.

        "that's a success we've been validating that model at scale with electric closure we we've been working on that for more than two years now uh we've been using electric closure for serous business application and this algorithm just well we we experience really no problem with that"

      • Core Thesis: Functional programming is an excellent fit for creating reactive or "situated programs" that must continuously interact with their environment, contrary to some earlier views.

        "we're going to use functional programming to make situated programs um i'm going to show you that rich is wrong about it functional programming is actually a very good fit for situated programs and we're going to see why."

      • Functional Effect Systems: The fundamental principle is to describe computations as values rather than executing them immediately. An effect is a description of an action to be performed.

        "an effect is a description of something to be done that's the essence of functional programming instead of doing we describe."

      • Core Operators: The system is built on operators like pure for turning any value into an effect and bind for composing effects sequentially.

        "pure takes an arbitrary value turns that into an effect... blind is about sequential composition so we want to do something and then do something else."

      • Concurrency and Supervision Trees: Concurrent operations are managed within a process tree. This structure is critical for handling failures and managing resources. When one parallel process fails, its siblings must be canceled to prevent wasted resources.

        "what we get is not a list of process it's a tree of processes and that's that's very important... when it happens you want the error to be processed by another process and this process is the supervisor."

      • Structured Concurrency: Functional programming naturally enforces a well-defined supervision hierarchy, where combinators also define the supervision strategy, preventing orphaned processes.

        "in functional programming it's impossible to do that so you get structural concurrency by default that is you are forced to build your program in such a way that the supervision trees is properly structured."

      • Streams vs. Signals: The talk distinguishes between two types of long-lived effects:

      • Event Streams: A discrete series of events where each event is critical and must be processed. They require backpressure to prevent data loss. > "an event stream is not defined when events don't happen, it's discrete time... losing an event is really bad... to represent streams as effects the effect representation must implement backpressure."

      • Signals: A continuous value representing the state of an identity (e.g., mouse position). They are always defined, only the latest value matters, and they benefit from lazy sampling. > "the signals represent the state of an identity... at any point in time you can take a snapshot... only the latest value matters... if you want to represent signals as effects you want lazy sampling."
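The stream/signal distinction can be sketched with two toy Python classes (invented names; Missionary's real representations are flows with backpressure): a stream must retain every event for its consumer, while a signal keeps only the latest value and does work only when sampled.

```python
class Signal:
    """Continuous value: always defined, only the latest value matters."""
    def __init__(self, initial):
        self._value = initial
    def set(self, value):
        self._value = value   # overwrites; intermediate values need not be kept
    def sample(self):
        return self._value    # lazy: work happens only when someone looks

class Stream:
    """Discrete events: every single event must reach the consumer."""
    def __init__(self):
        self._events = []
    def emit(self, event):
        self._events.append(event)   # dropping one would be a bug
    def drain(self):
        events, self._events = self._events, []
        return events

mouse = Signal((0, 0))
clicks = Stream()
for i in range(3):
    mouse.set((i, i))        # signal: later values supersede earlier ones
    clicks.emit(i)           # stream: all three must be processed
print(mouse.sample())        # (2, 2)
events = clicks.drain()
print(events)                # [0, 1, 2]
```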

      • Missionary Library: A Clojure library that implements these concepts, providing operators for two effect types: tasks (for single values) and flows (for multiple values).

        "this is mission array it's a closure library that works enclosure and closure script and it's a collection of purely functional operators that work on effects and there's two kinds of effects there is tasks and flows."

      • Language Extensions: Missionary avoids "callback hell" through language extensions that provide a functional version of async/await, allowing for more readable, sequential-looking code that is internally transformed into callbacks.

        "the solution is to extend the language so we we extend closure with another operator that's the idea of async await but now it works in functional effects."

      • Awaiting Flows: A powerful and unique feature is the ability to "await" a flow (a stream of multiple values). This reruns a computation for each new value while automatically canceling the previous computation if it's still running.

        "if we get a new value of the state and the previous value is still being computed we want to interrupt this this previous computation and start the new one you we are only interested in the latest value so we want to discard the previous computation."

      • Key Differentiators: Missionary stands out due to its powerful language extensions (an expressive alternative to monads) and its first-class support for both discrete-time (streams) and continuous-time (signals) effects.

        "what makes it unique is language extensions uh it's alternative to monad it's as much as powerful but it's much more expressive... discrete versus continuous time we need both."

      • Computational Challenges:

        • We lack effective computational methods: "I think we actually haven't the foggiest idea how to compute very well."
        • Importance of fast, efficient processes: "it took only 100 milliseconds... we don't understand how to do."
      • Genomic Complexity:

        • Human genome's complexity and flexibility: "with a small change to it you make a cow instead of a person."
        • High-level language for biological processes is unknown: "what I'm interested in is the high-level language that's necessary to do things like that."
      • Programming Evolution:

        • Legacy of programming assumptions based on scarcity: "all of our sort of intuitions from being programmers have come from a time of assuming a kind of scarcity."
        • Current abundance of resources shifts the focus: "memory is free, computing is free."
      • Security and Correctness:

        • Traditional concerns of correctness and security are secondary: "people worry about correctness... is it the real problem? maybe... most things don't have to work."
        • Evolution and adaptability of code are crucial: "we spend all our time modifying existing code."
      • Programming Constraints:

        • Early decisions in programming constrain future changes: "we make decisions early in some process that spread all over our system."
        • Need for flexibility in modifying systems: "organize systems so that the consequences of decisions we make are not expensive to change."
      • Generic Operators and Extensions:

        • Dynamically extensible operations: "dynamically extend things while my program is running."
        • Symbolic algebra as an extension of arithmetic: "expand this arithmetic on functions... it's a classical thing people can do in algebra."
      • Propagators and Parallelism:

        • Concept of propagators for parallel computation: "propagators are independent little stateful machines."
        • Parallelism and monotonic information merging: "we don't actually put values in these cells we put information about a value in a cell."
      • Truth Maintenance Systems (TMS):

        • Maintaining and improving data consistency: "truth maintenance systems... maintain the best estimates of what's going on."
        • Dependency-directed backtracking for efficient problem-solving: "automatically find for me the sub-world-views that are consistent."
      • Historical and Educational Insights:

        • Historical evolution of computation: "when I started computing in 1961... the total amount of memory is probably about 10 kilobytes."
        • Educational gaps between theory and practical engineering: "what we taught the students wasn't at all what the students actually were expected to learn."
      • Vision for the Future:

        • Future computing systems must be inherently parallel, redundant, and flexible: "future... computers are so cheap and so easy to make... they can talk to each other and do useful things."
        • Importance of evolving current computational thinking: "we have to throw away our current ways of thinking if we ever expect to solve these problems."
      • Summary and Call to Action:

        • Main challenge is evolvability, not correctness: "problem facing us as computer engineers is not correctness it's evolvability."
        • Proposals include extensible operations and new architectural paradigms: "extensible generic operations... a more radical proposal is maybe there are freedoms that we can unlock by throwing away our idea of architecture."
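The propagator-cell idea above ("information about a value") can be sketched with interval information, where merging partial information only ever narrows the cell (a toy Python illustration, not Sussman's actual propagator system):

```python
class Contradiction(Exception):
    pass

class Cell:
    """Holds information *about* a value, here an interval bound."""
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")
        self.watchers = []            # propagators to rerun on refinement

    def add_info(self, lo, hi):
        # merge monotonically: the interval can only shrink
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if new_lo > new_hi:
            raise Contradiction((new_lo, new_hi))
        if (new_lo, new_hi) != (self.lo, self.hi):
            self.lo, self.hi = new_lo, new_hi
            for watcher in self.watchers:
                watcher()             # independent little stateful machines

c = Cell()
c.add_info(0, 10)     # one source of partial information
c.add_info(4, 20)     # another source; merging only narrows
print((c.lo, c.hi))   # (4, 10)
```

Because merges are monotonic and order-independent, many propagators can contribute to the same cell in parallel without coordination, which is the point of the design.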
    1. Bet on context engineering for rapid iteration.

      “Manus would bet on context engineering. This allows us to ship improvements in hours instead of weeks, and kept our product orthogonal to the underlying models: If model progress is the rising tide, we want Manus to be the boat, not the pillar stuck to the seabed.”

      Design around the KV-cache to optimize latency and cost.

      “If I had to choose just one metric, I'd argue that the KV-cache hit rate is the single most important metric for a production-stage AI agent.”

      Mask tool availability rather than removing tools mid-iteration.

      “Rather than removing tools, it masks the token logits during decoding to prevent (or enforce) the selection of certain actions based on the current context.”
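A sketch of what logit masking looks like in principle (the tool names and the function below are hypothetical, not Manus internals): the tool definitions stay in the context, preserving the KV-cache prefix, and the decoder simply forbids the tokens that would begin a disallowed tool call.

```python
import math

def mask_logits(logits, vocab, allowed_prefixes):
    """Assign -inf to every token that cannot start an allowed tool name."""
    return [
        logit if any(token.startswith(p) for p in allowed_prefixes) else -math.inf
        for token, logit in zip(vocab, logits)
    ]

vocab  = ["browser_open", "shell_exec", "file_read"]   # hypothetical tool tokens
logits = [2.0, 3.5, 1.0]
# current state: only browser_* and file_* tools may be selected
masked = mask_logits(logits, vocab, ["browser_", "file_"])
print(masked)   # [2.0, -inf, 1.0]: shell_exec can no longer be decoded
```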

      Use the file system as scalable, persistent context.

      “That's why we treat the file system as the ultimate context in Manus: unlimited in size, persistent by nature, and directly operable by the agent itself.”

      Manipulate attention through recitation of objectives.

      “By constantly rewriting the todo list, Manus is reciting its objectives into the end of the context. This pushes the global plan into the model's recent attention span, avoiding ‘lost-in-the-middle’ issues.”

      Keep errors in the context to enable adaptive behavior.

      “One of the most effective ways to improve agent behavior is deceptively simple: leave the wrong turns in the context. When the model sees a failed action—and the resulting observation or stack trace—it implicitly updates its internal beliefs.”

      Avoid over-few-shotting by injecting controlled diversity.

      “The fix is to increase diversity. Manus introduces small amounts of structured variation in actions and observations—different serialization templates, alternate phrasing, minor noise in order or formatting.”

      Context engineering defines agent performance and scalability.

      “Context engineering is still an emerging science—but for agent systems, it's already essential. … Engineer them well.”

      • Abstractions are based on assumptions

      • We assume some properties which allows us to neglect them.

      • The more we assume the simpler our abstraction.
      • The less we assume, the more we need to address, and the more complex our abstraction becomes.
      • Summary of Tech Talk on Software Abstractions and Model Assumptions

      • Overview of Book on Clojure: The speaker discusses a book written to serve as a comprehensive second book on Clojure, focusing on choosing the appropriate parts of the language based on different programming situations.

      • "the goal of this book was to be the best second book that you could read about closure where you already know what you can do with the language but you don't know why you would use one part of the language versus another to solve a particular problem."

      • Importance of Names in Programming: Emphasizes the significance of names in software, noting the lack of deep discussion in education and practice, leading to debates often decided by authority rather than merit.

      • "the first chapter that I wrote was about names and for all that we pay a lot of lip service to names being one of the two hard problems in software we don't really spend a lot of time confronting them head-on."

      • Challenges in Defining "Abstractions": Discusses the difficulty in defining and writing about abstractions in software, leading to extensive research without writing progress.

      • "things were going well until I tried to do the same for abstractions right what makes an abstraction good what makes it bad if it's bad how do we make it better."

      • Different Interpretations of "Abstraction": Highlights the two distinct concepts the term "abstraction" covers in computer science, demonstrated by church numerals and cons cells in LISP.

      • "we use the word abstraction to mean two fairly distinct concepts and I think this is demonstrated by two ideas that are very much at the heart of lists which are church numerals and cons cells."

      • Timelessness vs. Practicality in Abstractions: Examines how some abstractions remain relevant due to their theoretical utility, while others fall out of favor due to practical inefficiencies in changing environments.

      • "the difference between these things is not what they are but how we judge them one of them is timeless right it is judged against a standard which does not change but the other one is judged against lis very rapidly changing sort of standard."

      • Role of Environment in Software Abstractions: Argues that understanding the environment and its assumptions is crucial in defining and evaluating abstractions.

      • "when we define an abstraction specifically a software abstraction we have to have three parts we have to have the model which is the thing that we implement we have to have the interface which is a means of interacting with that model and we need to have the environment which is everything else."

      • Adaptability and Principle in Software Systems: Proposes that effective software systems combine principled and adaptable components, ensuring robustness and flexibility.

      • "if we have a sort of an adaptable abstraction that contains as little principal pieces we can see that change comes from the outside and as change comes in from the outside these principled fragile pieces are protected."

      • Complex Adaptive Systems and Software: Suggests modeling software systems on complex adaptive systems to manage change effectively through principled and adaptable components.

      • "these sorts of systems right where you have sort of an adaptable whole and principled subcomponents exist at every level of the world this is empirically a very successful strategy."

      • Conclusion and Encouragement to Engage with Concepts: Concludes by encouraging the audience to read the book and engage in discussions about the concepts, highlighting the evolving understanding of software abstractions.

      • "if this was interesting to you I really encourage you to read my book I think that it's much more clearly articulated there than it has been on this stage but if you do read it right I really encourage you to talk to me about it."
      • Introduction to Queues:
      • "Queues really are pretty simple, right? You have a producer that enqueues messages, you have a consumer that dequeues messages."
      • "Something that is maybe non-obvious is that in addition to the act of enqueuing being a side effect... we get back to just confirmation that it was added."
      • "Queues are a way of dealing with actions... we of course are in the business of making computers do things."

      • Core.async Channels and Queue Mechanisms:

      • "A core.async channel has not one but three queues: a buffer that holds the messages, a puts queue where producers that are trying to add to a full buffer will wait, and a takes queue where consumers that are trying to take messages from an empty buffer will wait."
      • "The blocking queue under the covers looks a great deal like core.async... it has a buffer that holds the messages... it’s just a place for threads to park themselves."

      • Ubiquity and Importance of Understanding Queues:

      • "Queues are ubiquitous; they’re everywhere and it sort of behooves us on some level to understand them."
      • "The reason that queues are so ubiquitous is because they separate what we want to have happen from when it happens."

      • Closed vs. Open Systems:

      • "A key distinction in queuing theory is between closed and open systems."
      • "A closed system is where as we produce something, we must wait for the consumer to complete it before producing something else."
      • "Many of the systems that we build are open systems where there is no coordination... requests are processed as they come in, often before we’re ready."

      • Simulation of Queue Behaviors:

      • "Tasks are arriving according to this exponential distribution... the complexity of the tasks is not very well modeled by an exponential distribution."
      • "As the system becomes unstable... it just sort of takes off like a rocket... what happens when a queue grows out of control."

      • Dealing with Queue Overload:

      • "Unbounded queues are fundamentally broken because it puts the correctness of your system in somebody else’s hands."
      • "We cannot have our strategy for dealing with too much data be to hold onto it and hope for the best."

      • Strategies for Managing Excess Data:

      • "When too much data shows up, we have only three strategies: drop the data on the floor, reject the data, or pause the data."
      • "Dropping the data is valid for problems where newer data makes older data obsolete."
      • "Rejecting the data... is what you see when you go to an overloaded website and it returns a 503."
      • "Pausing the data or exerting back pressure... is often the most neutral choice we can make."

      • Benefits of Buffers and Back Pressure:

      • "Buffers allow us to make sure that our throughput is more stable and going at a higher rate."
      • "Back pressure enacting and retracting is not free... buffers help with the overall throughput at the expense of latency."

      • Best Practices for Queue Management:

      • "Always plan for too much data... using back pressure wherever possible."
      • "Buffers are useful for constant throughput but should be used where absolutely necessary."
      • "Avoid composing buffered queues wherever possible as each additional buffer magnifies the results of the next."

      • Application Example: Durable Queue and S3 Journal:

      • "I end up writing two libraries, one of which is called durable queue, a disk-backed queue, and another called S3 Journal, built on top of the durable queue."
      • "The system looks like this: there are two stages, the HTTP server concerned with persisting data and another loop uploading it to S3."

      • Ensuring System Stability:

      • "Measure what is the health of our system... quantify how much we might have lost when a machine goes down."
      • "It’s really crucial when building these systems to understand what it means to complete something."

      • Monitoring and Instrumentation:

      • "We have metrics now... gives us a little bit more visibility in the system."
      • "Be picky about your tools... prefer something which actually tells you how fast it is."

      • Final Recommendations:

      • "Unbounded queues are bounded by memory or some fixed shared resource."
      • "Account for what happens when your system receives too much data."
      • "Use back pressure to defer the choice to someone who understands the application better."
      • "Demand metrics everywhere to ensure system robustness."
    1. This summary outlines the core arguments of the paper "Biology as Reactivity," which posits that biological systems, particularly living cells, can be understood as the ultimate reactive systems. * Core Thesis: The principles and tools used for analyzing reactive systems in computer science can be applied to biological research to create virtual experimentation environments.

      "Concepts, languages, and tools for the description and analysis of reactive systems can help in the process of biological discovery, ultimately by providing biologists with virtual experimentation environments."

      • The Cell as a Reactive System: A living cell is presented as a quintessential reactive system, where its behavior is a function of not just the inputs, but also their timing, order, and location.

        "A living cell, we claim, is not only reactive in nature, but is the ultimate example of a reactive system, and so are collections thereof."

      • Adaptivity as a Form of Reactivity: The adaptive nature of biological systems to both internal and external stimuli is a key aspect of their reactivity.

        "Biological systems are also highly adaptive, to both internal and external changes; they use signals coming in from receptors and sensors, as well as emergent properties to fine-tune their functioning. This adaptivity is another facet of the reactivity of such systems."

      • Key Concepts in Biological Reactivity: The understanding of biological systems as reactive necessitates a grasp of fundamental concepts like parallelism, interaction, and causality.

        "Hand-in-hand with the central notion of reactivity go the discrete event-based execution and simulation of dynamical systems, which requires a fundamental understanding of parallelism, interaction, and causality; the design of complex systems from building blocks, requiring means for composition and encapsulation"

      • Signal Propagation and distributed control: The paper gives the example of muscle contraction to illustrate how signals are propagated in a distributed manner.

        "But notice that when a muscle contracts, the nerve reaches only some of the muscle cells, and responsibility for the remainder of the signal's effect must be taken over by other mechanisms."

      • A Novel Programming Paradigm: Behavioral programming is a new way to create reactive systems by focusing on how the system should behave, which aligns with the initial requirements.

        "We describe an implementation-independent programming paradigm, behavioral programming, which allows programmers to build executable reactive systems from specifications of behavior that are aligned with the requirements."

      • Modular and Flexible Development: It allows for the addition and modification of existing behaviors through software modules, which simplifies dealing with incomplete or conflicting requirements.

        "Behavioral programming simplifies the task of dealing with under-specification and conflicting requirements by enabling the addition of software modules that can not only add to but also modify existing behaviors."

      • Scenario-Based Programming: Behavioral programming originated from the scenario-based language of live sequence charts (LSC), which allows for creating executable specifications of reactive systems.

        "The work on behavioral programming began with scenario-based programming, a way to create executable specifications of reactive systems, introduced through the language of live sequence charts (LSC) and its Play-Engine implementation."

      • Core Programming Idioms: The paradigm uses specific idioms to express what must, may, or must not happen, similar to modal verbs in natural language.

        "Reminiscent of modal verbs in a natural language (such as shall, can or mustn't), they state not only what must be done (and how) as in standard programming, but also what may be done, and, more uniquely to behavioral programming, what is forbidden and therefore must not be done."

      • Event-Driven Approach: To begin development, a common set of events relevant to the system's scenarios is determined.

        "In behavioral programming, all one has to do in order to start developing and experimenting with scenarios that will later constitute the final system, is to determine the common set of events that are relevant to these scenarios."

      • Conflict Resolution and Scalability: The paradigm includes composition operators to manage and resolve conflicting behaviors, analysis tools like model checkers, and architectures for large-scale applications.

        "We deal with these issues by using composition operators that allow both adding and forbidding behaviors, analysis tools such as model checkers, and architectures for large-scale applications."

      • Language Independence: The principles of behavioral programming can be implemented in various programming languages and environments.

        "Behavioral programming principles can be readily implemented as part of different languages and programming approaches, with possible variations of idioms."

  3. Jun 2025
    1. To my understanding, the talk started with the following premise:

      Global state is bad, DB is global state => DB is bad (use Rama instead)

      I'd argue that: every observable state is global state

      • Global state refers to a state that is outside of some context but still affects it.
        • Affects it means that the state is observable in some way from outside the state's context.
      • When we talk about the problems of state, we effectively mean observable state and the issues it causes outside its context.
      • By acknowledging that it has effects on contexts outside its own, we are acknowledging that it is global state (because it affects some context from outside).

      So every observable state should be considered global state to some extent (relative to some contexts). The way to solve this problem would be to eliminate state. Can that be done? I don't know (I don't believe it can).

      But more practically: did Rama eliminate state? No, it didn't. If you append an event to an event log, you have changed the state of that event log and potentially of every view derived from it.

      So:

      global state is bad, db is global state, rama is global state => everything is bad

  4. May 2025
    1. https://youtube.com/watch?v=TAQ7yBLRZ3U&feature=shared

      A detailed summary and key insights from the YouTube talk “Use.GPU - Declarative/Reactive 3D Graphics by Steven Wittens #LambdaConf2024”:


      Overview

      Steven Wittens introduces Use.GPU, a TypeScript library for driving WebGPU with a declarative and reactive programming model. The talk explores the motivation, design, and technical underpinnings of Use.GPU, emphasizing productivity, maintainability, and the bridging of web and graphics paradigms.


      Key Topics Covered

      1. The Problem with Traditional 3D Graphics Development

      • High Complexity & Maintenance Cost: Building custom 3D graphics (e.g., configurators, data visualizations, CAD apps) is often slow, expensive, and results in code that’s hard for teams to maintain.
      • Specialization Barrier: The field is so specialized that many companies avoid using advanced GPU graphics due to the expertise required.

      2. The Permutation Problem

      • Example: A 3D house configurator requires manually assembling assets and coding every possible combination of options, leading to exponential complexity.
      • Customization Pain: Existing visualization libraries (like Deck.gl) are hard to deeply customize without forking and maintaining complex codebases.

      3. The Web vs. Graphics Divide

      • Graphics World: Driven by games/CAD, large teams, offline delivery, monolithic codebases, and focus on rendering performance.
      • Web World: Driven by SaaS, small teams, continuous delivery, focus on compatibility, composition, and reuse.
      • Different Priorities: These differences make it hard to bring GPU graphics into mainstream web development.

      4. Live: A React-like Runtime

      • What is Live? A React-inspired, incremental, and reactive runtime that allows for declarative UI and graphics code.
      • Key Features:
      • Incremental updates: Only re-executes code in response to changes.
      • Implicit, one-way data flow.
      • Declarative side effects: Auto-mounting and disposal.
      • Enables features like undo/redo and multiplayer state management.
      • Unique Twist: Live allows data to flow back from child to parent components—something not possible in React—which is crucial for certain graphics/data workflows.
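
The "only re-executes code in response to changes" idea can be illustrated with a toy memoizing runtime. This is a Python stand-in for the general technique, not the actual TypeScript Live API (all names below are invented):

```python
# Toy incremental runtime: a "component" function is re-executed only when
# its props change; unchanged components return their cached result.

class Runtime:
    def __init__(self):
        self.cache = {}  # component name -> (last_props, last_result)
        self.runs = []   # records which components actually executed

    def render(self, name, fn, props):
        cached = self.cache.get(name)
        if cached and cached[0] == props:
            return cached[1]          # inputs unchanged: skip re-execution
        self.runs.append(name)
        result = fn(self, props)
        self.cache[name] = (props, result)
        return result

def label(rt, props):
    return f"<label>{props['text']}</label>"

def panel(rt, props):
    # the parent embeds the child; the child re-runs only if ITS props changed
    return "<panel>" + rt.render("label", label, {"text": props["title"]}) + "</panel>"

rt = Runtime()
rt.render("panel", panel, {"title": "Hello", "width": 100})
rt.render("panel", panel, {"title": "Hello", "width": 200})  # width changed, title didn't
print(rt.runs)  # → ['panel', 'label', 'panel']
```

On the second render only `panel` re-executes; `label`'s inputs are unchanged, so the runtime reuses its cached output, which is the essence of incremental, reactive re-rendering.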

      5. Use.GPU: Declarative WebGPU

      • Goal: Make GPU graphics as easy to use and maintain as modern web UIs.
      • Approach: Use familiar JSX-like syntax and React-style components to describe 3D scenes and behaviors.
      • Incremental Rendering: The system is designed as if rendering one frame, and only reruns necessary parts for interactivity/animation.
      • Bridging the Gap: By combining Live’s reactive model with WebGPU, Use.GPU makes advanced graphics accessible to web developers.

      6. Technical Insights

      • Immediate vs. Retained Mode:
      • Immediate mode (e.g., Canvas): Easy but doesn’t scale for complex interactivity.
      • Retained mode (e.g., GPU): More efficient but much harder to program and maintain.
      • GPU as a Pure Function Applicator: The challenge is efficiently feeding unique data to millions of parallel shader invocations, with memory bandwidth as a key constraint.
      • Use.GPU’s Innovation: Abstracts away much of the boilerplate and complexity, letting developers focus on high-level structure and reactivity.

      Why This Matters

      • Productivity: Use.GPU aims to democratize GPU programming for web developers, reducing the need for deep graphics expertise.
      • Maintainability: Declarative, reactive patterns make complex interactive graphics more maintainable and composable.
      • New Possibilities: Opens the door for more sophisticated, interactive, and visually rich web applications.


      TL;DR

      Use.GPU is a new TypeScript/WebGPU library that brings React-style declarative, reactive programming to 3D graphics in the browser. Built on the “Live” runtime, it enables maintainable, high-performance graphics apps with familiar web development patterns—potentially revolutionizing how interactive graphics are built on the web.



    1. https://stopa.io/post/296

      A summary and key insights from the article "A Graph-Based Firebase" by Stepan Parunashvili (Stopa).


      Summary: "A Graph-Based Firebase"

      The Problem

      Modern, delightful apps like Figma, Notion, and Linear share three standout features:

      • Optimistic updates (instant UI feedback)
      • Multiplayer collaboration
      • Offline mode

      Building these features is hard. Most frameworks and databases don’t make it easy, especially when you want all three together.

      Existing Solutions & Their Limits

      • Firebase: Great at optimistic updates, offline mode, and reactivity. But it falls short for complex data relations and permissions.
      • Supabase & Hasura: Support relations (thanks to SQL/Postgres), but lack robust local/offline abstractions and have limited permission systems for complex apps.
      • Custom Architectures: Apps like Figma/Notion/Linear end up building their own stack: normalized in-memory stores, custom permission layers, manual syncing, and more.

      The Core Insight

      All these apps end up re-inventing a kind of database, but one that works both locally (in the browser) and remotely (on the backend), with:

      • Graph/relational queries
      • Expressive permissions
      • Built-in reactivity, offline, and optimistic updates

      The Proposed Solution: Instant

      Stopa and his co-founder built Instant: a graph-based, Firebase-like platform with:

      • Triple store backend (instead of SQL), enabling flexible relations and recursive queries.
      • Datalog-inspired query language (instead of SQL or GraphQL), for nested, relational data.
      • Unified local/remote architecture: the same query/mutation language runs in the browser and on the server.
      • Expressive, Facebook-like permissions: functions for allow/deny rules, not just boolean expressions or row-level policies.
      • Lightweight and easy to start: no heavy schemas or large client libraries.
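
To make the triple-store-plus-Datalog idea concrete, here is a toy sketch. Nothing below reflects Instant's actual API; it only demonstrates the general technique of joining triple patterns with variables:

```python
# Toy triple store: facts are (entity, attribute, value) triples, and a
# Datalog-style query is a conjunction of patterns joined left to right.

triples = [
    ("user/1", "name", "Ava"),
    ("user/1", "owns", "doc/1"),
    ("doc/1", "title", "Roadmap"),
    ("doc/1", "shared-with", "user/2"),
    ("user/2", "name", "Ben"),
]

def is_var(x):
    return isinstance(x, str) and x.startswith("?")

def match(pattern, triple, env):
    """Unify one pattern against one triple, extending the bindings env."""
    env = dict(env)
    for p, t in zip(pattern, triple):
        if is_var(p):
            if p in env and env[p] != t:
                return None
            env[p] = t
        elif p != t:
            return None
    return env

def query(patterns):
    """Conjunctive query: each pattern narrows/extends the set of bindings."""
    envs = [{}]
    for pattern in patterns:
        envs = [e2 for e in envs for t in triples
                if (e2 := match(pattern, t, e)) is not None]
    return envs

# "names of everyone a document owned by Ava is shared with" —
# deeply nested in SQL, but just a flat list of patterns here
results = query([
    ("?u", "name", "Ava"),
    ("?u", "owns", "?d"),
    ("?d", "shared-with", "?v"),
    ("?v", "name", "?n"),
])
print([e["?n"] for e in results])  # → ['Ben']
```

Because every fact is the same shape, recursive and nested queries compose naturally, which is the article's argument for triples over flat SQL rows.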

      Why Not SQL?

      • SQL is heavyweight (1700+ page spec), not optimized for nested/recursive queries common in UIs.
      • Frontend needs nested data, but SQL returns flat rows, requiring GROUP BY, JSON hacks, or complex post-processing.
      • Implementing full SQL reactivity and sync is overkill for most app needs.

      The Vision

      A new abstraction, something like a graph database in the browser, could make it dramatically easier to build fast, collaborative, offline-capable apps. If the local and backend database speak the same language, you get:

      • Real-time sync
      • Optimistic updates
      • Offline support
      • Powerful, maintainable permissions
      • Simple, nested queries for UI


      Key Takeaways

      • Modern app UX is a database problem: The hardest parts of delightful apps (speed, collaboration, offline) are really about how data is managed and synchronized.
      • Existing tools force trade-offs: You can have relations or offline support, but not both. You can have permissions or reactivity, but not both.
      • A new kind of database abstraction is needed: One that combines the best of Firebase (local-first, reactivity) with the best of SQL/Graph (relations, permissions).
      • Triple store + Datalog is promising: It enables flexible, nested queries and is easier to make reactive and local-first than SQL.

      If You’re a Developer…

      • If you’ve struggled to build fast, collaborative, offline-friendly apps, you’re not alone.
      • Watch for tools like Instant (or similar local-first, graph-based databases).
      • The future of app development may look a lot more like building with a distributed, reactive, permissioned graph database, on both client and server.




  5. Apr 2025
      • Definition and Nature of Design:

        • Design involves preparing executable plans and structuring form.

          "Prepare the plans for a work to be executed... plan the form and the structure of that thing."

        • Effective design separates elements to enable later composition.

          "Designing is fundamentally about taking things apart... in such a way that they can be put back together."

      • Misconceptions About Design:

        • Good design isn't mere documentation generated post-implementation.

          "Can we just write our program and then generate some documentation?... No, that's not a plan."

        • Bad past experiences with monolithic designs don't invalidate design itself.

          "Those were plans but those are not good plans... doesn't mean planning is bad."

      • Key Elements of Good Design:

        • Identify underlying problems rather than stated needs or symptoms.

          "Decompose those wants and needs into problems."

        • Differentiate known from unknown requirements, causes from symptoms.

          "Take apart causes and root causes from symptoms."

        • Address unstated, implicit requirements (e.g., maintainability, performance).

          "Unstated requirements... problems nobody wants in the future."

        • Separate concerns clearly: time/order, places/participants, information/mechanisms.

          "Separating apart when things happen... places and participants... information our systems manipulate."

      • Iterative and Decompositional Nature of Design:

        • Design involves continuous decomposition and recomposition, supporting iterative changes.

          "Good design process is iterative... breaking things down until they're nearly atomic."

      • Why Good Design Matters:

        • Design aids understanding, coordination, extension, reuse, and efficient testing.

          "Design helps you understand a system... coordinate... extension... reuse... design-driven testing."

        • Design leads to greater efficiency by simplifying iterative changes.

          "Easier to iterate a design than... an implementation."

      • Composition and Performance Analogies (Bartók and Coltrane):

        • Composers set constraints deliberately, solving self-created problems through design.

          "First thing composers do... they create their own problems to solve."

        • Bartók (composer) specifies details precisely; Coltrane (performer) dynamically applies studied compositional knowledge.

          "Bartók completely specified every note... Coltrane had melody and changes... providing constraints for the performer."

      • Improvisation vs. Planning in Software:

        • Improvisation isn’t random; it relies heavily on preparation and deep understanding.

          "Improvisation is not spontaneous emoting... application of knowledge and vocabulary."

        • Good developers, like good improvisers, combine studied preparation and dynamic execution.

          "Coltrane was working out this composition dynamically... resources behind it were things he had prepared."

      • Harmony as Critical Design Skill:

        • Harmony is about congruity and simultaneous fitting together.

          "Harmonic sensibility is of critical design skill... systems that fit together."

        • Bartók and Coltrane exemplify mastery and innovation in harmonic principles.

          "Masters of harmony... retained focus on what fits together."

      • Programming Languages as Instruments:

        • Programming languages/tools should resemble instruments: simple, focused, and designed for skilled users.

          "Languages and libraries like instruments... minimal yet sufficient."

        • Instruments are intentionally limited, enabling deep expertise through practice.

          "Piano can’t play in-between notes... saxophone one note at a time... instruments are minimal yet sufficient."

      • Problems of Excessive Complexity and Choice:

        • Too many options or excessive configurability leads to poor design, overwhelming users.

          "Having a lot of choices... opposite of enabling us to accomplish things... constraint is a driver of creativity."

        • Simplicity and constraint enhance usability and creativity.

          "Every module is a good idea... adding them together you end up with something you can't play."

      • Tools Should Empower Skilled Users, Not Beginners:

        • Software tools/instruments should not overly cater to beginners, sacrificing depth.

          "Instruments are made for people who can play them... beginners can't play yet."

        • Effort and practice are necessary for mastery; attempts to eliminate all effort weaken tool quality.

          "Effort is not a bad thing... neither learning nor teaching is effort-free."

      • Design as Decision-Making:

        • Effective design clearly communicates deliberate, beneficial decisions.

          "Design is about making decisions... conveying those decisions to the next person."

        • Failure to decide clearly (excess configurability) is a design failure.

          "If you make everything configurable you're failing to design."

      • Summary Recommendations:

        • Embrace decomposition and iterative refinement in design.
        • Develop harmonic sensibility for building coherent, maintainable systems.
        • Create simple, constrained, and purposeful software tools/instruments.
        • Prioritize deliberate decision-making to convey clarity and utility.
      • Final Takeaways:

        • "Design like Bartók" by applying meticulous compositional care across scales.
        • "Code like Coltrane" by dynamically applying studied design knowledge.
        • Seek simplicity and harmony, avoiding unnecessary complexity and choice.

          "Take things apart... design like Bartók... code like Coltrane... pursue harmony."

    1. Overview & Motivation

      Repeatedly failed to write a post, realizing it should be a talk:

      “It turns out that I wasn’t really writing a post; I was actually preparing a talk.”

      Central Topic: React Server Components and distributed computations between two machines using React concepts.

      “It’s about everyone’s favorite topic, React Server Components.”


      Act 1: Recipes (Imperative) vs. Blueprints (Declarative)

      Tags vs. Function Calls:

      Visual and structural differences:

      “< and > are hard and spiky and ( and ) are soft and round.”

      Similarities:

      Both reference named operations (functions or tags) and accept arguments.

      Both allow nesting.

      “Clearly, function calls and tags are very similar...they let us elaborate by nesting further.”

      Differences:

      Tags (declarative):

      Often nouns; represent timeless structures (blueprints).

      Convenient for deep nesting, clearly marking structure.

      Time-independent, passive descriptions.

      “Tags tend to be nouns rather than verbs... nouns are easier to decompose.”

      Function calls (imperative):

      Often verbs; represent sequential actions (recipes).

      Execution order critical.

      “A recipe prescribes a sequence of steps to be performed in order.”


      Remote Procedure Calls (RPC) and Async/Await

      Problem: Calling Functions Across Computers

      RPC concept introduced: Functions across network boundaries.

      async/await: Simplifies asynchronous calls but still has limitations (coupling, losing direct references).

      “An async function...may pause execution...async and await propagate upwards.”

      Import RPC idea: Extends importing to remote function calls while maintaining references and type-checking.

      “Let’s invent a special syntax...import rpc because what we’ve described here has been known for decades as RPC.”


      Potential Calls (Tags as Deferred RPCs)

      "Potential function calls": Represented by tags; calls that might happen in the future.

      “It’s a blueprint of a function call.”

      Nested tags: Express dependencies naturally.

      “Dependencies between potential calls...should be expressed by embedding these calls inside each other.”


      Splitting Computation in Time and Space

      Computation split in time: Returning partial functions that capture necessary data (closures).

      Computation split across space (client-server): Splitting execution between two computers, handling data passing explicitly.

      “It’s an interesting shape—a program returning the rest of itself...closure over the network.”
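
The "program returning the rest of itself" shape is, in miniature, just a closure: the early phase does its part of the work and returns a function that closes over the result. A minimal Python sketch (names invented; no RSC machinery involved):

```python
# Splitting a computation in time: early_phase runs first and captures what it
# computed; the returned closure finishes the work later with the rest of the
# inputs — conceptually, the boundary between server and client.

def early_phase(user_id):
    # pretend this part ran on the server: look the user up
    user = {"id": user_id, "name": "Ava"}   # stand-in for a DB fetch

    def late_phase(greeting):
        # runs later (conceptually: on the client), closing over `user`
        return f"{greeting}, {user['name']}!"

    return late_phase

finish = early_phase(42)   # the split point: everything before is already done
print(finish("Welcome"))   # → Welcome, Ava!
```

The "closure over the network" idea is this same shape, except the captured data must be serialized and shipped across the boundary instead of living in one process.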


      Two Types of Operations: Components vs. Primitives

      Components (Capitalized): "Brains" of a program; flexible, timeless, and declarative, embedding tags without introspection.

      “Components are truly timeless...they accept tags as arguments.”

      Primitives (lowercase): "Muscles"; introspect arguments, execution order sensitive, imperative, execute last.

      “Primitives introspect arguments...they must know all their arguments.”

      Execution Phases:

      1. Interpret (thinking): Processes Components freely without strict order.

      2. Perform (doing): Executes Primitives strictly inside-out.

      “First, you need to think...then you need to do.”
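
The two phases can be sketched as a toy evaluator over tags-as-data. All names here are invented for illustration; this is not React's implementation, only the interpret-then-perform idea:

```python
# A tag is plain data: (operation, args). Capitalized Components are pure and
# expand to more tags during "interpret"; lowercase primitives execute strictly
# inside-out during "perform".

def Greeting(name):  # Component: timeless, returns a blueprint (tag tree)
    return ("concat", [("upper", [name]), ("text", ["!"])])

COMPONENTS = {"Greeting": Greeting}

def interpret(tag):
    """Phase 1 (thinking): expand Components until only primitives remain."""
    op, args = tag
    if op in COMPONENTS:
        return interpret(COMPONENTS[op](*args))
    return (op, [interpret(a) if isinstance(a, tuple) else a for a in args])

PRIMITIVES = {
    "text": lambda s: s,
    "upper": lambda s: s.upper(),
    "concat": lambda *parts: "".join(parts),
}

def perform(tag):
    """Phase 2 (doing): run primitives inside-out, since each must know
    all of its arguments before it can execute."""
    op, args = tag
    vals = [perform(a) if isinstance(a, tuple) else a for a in args]
    return PRIMITIVES[op](*vals)

print(perform(interpret(("Greeting", ["ada"]))))  # → ADA!
```

Note that `interpret` is free to expand Components in any order (they only embed their arguments), while `perform` has a forced inside-out order, which mirrors the Components-vs-Primitives distinction above.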


      Act 2: Reflections and Dialog

      Meta-dialog: Reflection on the writing process itself; writer and reader dialogue, acknowledging uncertainty and experimental nature of content.

      “The Writer: I have a rough idea, but truthfully, I’m pretty much winging it.”


      Core Conceptual Innovations

      Tags as code/data pairs: Potential function calls represented explicitly as data (tags), allowing deferred execution across contexts.

      Program as distributed computation: A single conceptual function spanning multiple runtime environments (Early and Late worlds).

      Timelessness and Flexibility: Components allow arbitrary computation ordering; Primitives enforce execution order.


      Key Quotes & Ideas:

      Blueprints vs. Recipes:

      “A blueprint describes what nouns a thing is made of...a recipe prescribes a sequence of steps to be performed.”

      RPC and Potential Calls:

      “A tag is like a function call but passive, inert, open to interpretation.”

      Components and Primitives Separation:

      “Components are the ‘brains’...Primitives are the ‘muscles’.”

      Importance of Introspection vs. Embedding:

      “If a function only embeds an argument without introspection, you can delay computing it.”


      Conclusion (Conceptual Breakthroughs)

      Distributed React Model: Redefining client-server interaction as React component structures.

      Future implications: Suggests moving common primitives into lower-level implementations to optimize distributed computation.

      “If many programs used the same Primitives...move their implementation to Rust or C++.”


    1. Talk overview: Discusses integrating Resource Description Framework (RDF) with language models (LMs), aiming to leverage RDF’s logical precision and LM’s linguistic flexibility.

      "We'll go over what rdf is, what language models are, and why they go well together."

      Controversial statements: Presents two conflicting viewpoints:

      LMs are valuable technological advances with profound philosophical implications.

      "Language models are pretty neat and I like them... We never imagined we'd be able to compute over unstructured text like this."

      AI/LMs have negative societal impacts, including environmental harm, disinformation, and devaluing art and labor.

      "Language models suck or rather AI sucks... It's bad for the environment, art, labor... we're drowning in slop and spam."

      Personal perspective and approach: Advocates mindful, responsible tech use; avoid superficial social media debates; focus on enhancing tech to achieve net positive societal impact.

      "Be mindful of the impact... don't talk about this stuff on social media... how good does it have to be before it becomes net good?"

      Introduction to RDF: RDF, developed post-internet by symbolic AI enthusiasts (W3C), is designed for precise, abstract knowledge representation via "triples" (subject-predicate-object).

      "RDF is an attempt to tackle some of the hardest problems of knowledge representation and reasoning."

      Critiques of RDF: Historically misunderstood due to complex formats (RDF XML), overly complex object-oriented libraries, and failed Semantic Web hype.

      "RDF XML... was a verbose complex format... The Semantic Web was simply overhyped."

      Strengths of RDF: Despite critiques, RDF remains robust and is heavily adopted in scientific, governmental, and enterprise environments for precise information modeling.

      "RDF is being used productively in science, heavy industry, government."

      Core components of RDF:

      Resources: Unique identifiers (IRIs) representing any real or abstract entities.

      "Anything that can be the subject of language can be the subject of rdf."

      Triples: Fundamental units of RDF, expressing precise semantic relationships clearly.

      "Subject predicate object... one of the most granular ways of representing information."

      Importance of context in RDF: RDF IRIs inherently carry context, resolving linguistic ambiguity through precise identifiers.

      "The goal of IRIs in RDF is that they carry their context with them."

      RDF as knowledge representation: RDF is more than data storage; it’s structured for logical entailment, inference, and reasoning capabilities, akin to symbolic AI.

      "RDF... is about representing knowledge... it's designed to make it possible to talk about all the things I know."

      Introduction to Language Models (LMs): Originating with Transformers (Attention Is All You Need, 2017), LMs revolutionized natural language processing by capturing grammar, syntax, semantics, pragmatics, and reasoning patterns from vast datasets.

      "Attention is all you need... unlocks everything language models can do today."

      Defining LMs: LMs are pure mathematical functions trained on language, predicting next tokens based on vast statistical measurements (latent semantic space).

      "Language model... is a pure function that predicts the next token."

      Current state of LM programming: Due to the complexity and opaque outputs of LMs, effective usage relies on controlling inputs through prompt engineering, information retrieval, query generation, tool use, and agent design.

      "All AI programming... is altering the inputs... Prompt engineering... Tool use... Agents."

      RDF and LM integration: RDF uniquely bridges structured data (logic) with natural language, enabling precise interactions (querying, reasoning) between LMs and structured data via RDF's semantic precision and natural linguistic alignment.

      "We should be putting rdf data in our prompts... RDF is really surprisingly good at going back and forth between natural language and rdf."
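
A minimal sketch of "putting RDF data in our prompts": triples verbalized into English sentences. The data and labels below are toys invented for illustration; real RDF uses full IRIs and a library such as rdflib:

```python
# Each subject-predicate-object triple becomes one English sentence, which is
# what makes RDF unusually easy to move back and forth to natural language.

triples = [
    ("ex:Ada", "ex:worksAt", "ex:Acme"),
    ("ex:Ada", "rdf:type", "ex:Engineer"),
]

# human-readable labels for each identifier (in real RDF: rdfs:label)
LABELS = {
    "ex:Ada": "Ada",
    "ex:worksAt": "works at",
    "ex:Acme": "Acme Corp",
    "rdf:type": "is",
    "ex:Engineer": "an engineer",
}

def verbalize(triple):
    """Render one triple as an English sentence for an LM prompt."""
    return " ".join(LABELS[t] for t in triple) + "."

prompt_facts = "\n".join(verbalize(t) for t in triples)
print(prompt_facts)
```

Because each identifier carries its context (here via the label table, in real RDF via the IRI itself), the resulting sentences are unambiguous in a way raw text rarely is.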

      Advantages of combining RDF and LMs:

      Simplifies querying structured data through natural language, enabling reasoning and inference beyond simple SQL queries.

      "If my RDF implementation supports reasoning... all that complexity is abstracted out of the model."

      Allows soft inference using LMs to approximate implicit knowledge, overcoming limitations of strictly rule-based AI systems.

      "Soft inference... we now have hard inference... and soft inference... you can do reasoning you couldn’t do with either alone."

      Neurosymbolic AI: Integrating RDF (symbolic reasoning) with LMs (neural networks) represents neurosymbolic AI, combining the strengths of precise logic with flexible language modeling.

      "This is the central insight behind what's called neurosymbolic AI."

      Practical vision and conclusion: Advocates using AI/RDF to automate tedious "data chores," proposing a constructive, pragmatic use-case-driven vision of AI tech.

      "A lot of the programming I do is the dishes and laundry of data... trying to build it... we're going to bring back the Semantic Web with AI."

  6. Mar 2025
    1. Figma's blog post, "How Figma's Multiplayer Technology Works," provides an in-depth look at the architecture and mechanisms enabling real-time collaborative editing in Figma. Below is a structured summary highlighting key concepts and direct quotes from the article:

      • Client-Server Architecture: Figma employs a client-server model where web clients communicate with servers via WebSockets. Each document has a dedicated server process managing real-time collaboration.

      "We use a client/server architecture where Figma clients are web pages that talk with a cluster of servers over WebSockets."

      • Custom Multiplayer Solution: Instead of using traditional Operational Transforms (OT), Figma developed a bespoke system tailored to design tools, prioritizing simplicity and performance.

      "We decided to develop our own solution... we didn’t want to use operational transforms... As a startup we value the ability to ship features quickly, and OTs were unnecessarily complex for our problem space."

      • Document Structure: Each Figma document is organized as a tree, akin to the HTML DOM, with a root object encompassing pages and nested objects.

      "Every Figma document is a tree of objects, similar to the HTML DOM. There is a single root object that represents the entire document."

      • Conflict Resolution via CRDTs: Figma's system draws inspiration from Conflict-Free Replicated Data Types (CRDTs), specifically implementing structures like grow-only sets and last-writer-wins registers to manage collaborative edits.

      "Figma’s data structure isn't a single CRDT. Instead it's inspired by multiple separate CRDTs and uses them in combination to create the final data structure that represents a Figma document."

      • Property Synchronization: The servers track the latest values for each property on objects, ensuring that non-conflicting edits merge seamlessly, while concurrent edits on the same property resolve using a last-writer-wins approach.

      "Figma’s multiplayer servers keep track of the latest value that any client has sent for a given property on a given object."
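
The property-level last-writer-wins idea can be sketched as follows. This is a toy: Figma's server resolves conflicts by arrival order at the server, and a simple counter stands in for that here:

```python
# Per-(object, property) last-writer-wins register: edits to different
# properties never conflict; edits to the same property resolve to whichever
# arrived at the server last.

class MultiplayerDoc:
    def __init__(self):
        self.props = {}  # (object_id, property) -> (arrival_seq, value)
        self.seq = 0     # stand-in for server-side arrival order

    def apply(self, object_id, prop, value):
        """Each arriving edit simply overwrites: last writer wins."""
        self.seq += 1
        self.props[(object_id, prop)] = (self.seq, value)

    def get(self, object_id, prop):
        return self.props[(object_id, prop)][1]

doc = MultiplayerDoc()
doc.apply("rect1", "fill", "red")    # client A
doc.apply("rect1", "x", 10)          # client B: different property, no conflict
doc.apply("rect1", "fill", "blue")   # client B: same property, later arrival wins
print(doc.get("rect1", "fill"), doc.get("rect1", "x"))  # → blue 10
```

Because conflicts are resolved per property rather than per object, two users editing different attributes of the same shape merge cleanly, which is the behavior the quote above describes.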

      • Object Creation and Deletion: Object IDs are generated to be unique across clients, facilitating offline operations. Deleted objects' data is stored in the undo buffer of the client that performed the deletion.

      "This system relies on clients being able to generate new object IDs that are guaranteed to be unique."

      • Tree Structure Management: Parent-child relationships are maintained by storing parent links as properties on child objects. Measures are in place to prevent cycles and ensure a valid tree structure.

      "The approach we settled on was to represent the parent-child relationship by storing a link to the parent as a property on the child."

      • Fractional Indexing for Ordering: Children's order within a parent is determined using fractional indexing, assigning positions as fractions between 0 and 1, allowing efficient reordering.

      "Figma uses a technique called 'fractional indexing' to do this."
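
A minimal sketch of fractional indexing (plain floats for clarity; a real implementation such as Figma's uses arbitrary-precision representations to avoid running out of bits after repeated midpoint splits):

```python
# Each child's position is a fraction strictly between its neighbors, so
# moving an object only rewrites that one object's index — no renumbering.

def position_between(before, after):
    """A new index strictly between two existing ones (0 and 1 are the fences)."""
    return (before + after) / 2

children = [("a", 0.25), ("b", 0.5), ("c", 0.75)]

# move "c" between "a" and "b": only c's index changes
new_index = position_between(0.25, 0.5)  # 0.375
children = sorted(
    [(name, new_index if name == "c" else pos) for name, pos in children],
    key=lambda kv: kv[1],
)
print([name for name, _ in children])  # → ['a', 'c', 'b']
```

Since reordering touches a single object, concurrent moves by different clients commute in most cases, which is what makes the scheme attractive for multiplayer editing.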

      This architecture ensures that Figma delivers a responsive and reliable real-time collaborative design experience.

  7. Feb 2025
    1. Introduction and Motivation: Term rewriting with Meander offers an intuitive introduction aimed at everyday software engineers.

      > "Meander is heavily inspired by the capabilities of term rewriting languages. But sadly, there aren't many introductions to term rewriting aimed at everyday software engineers."

      Basic Concept of Term Rewriting: It transforms data by applying rules that match a left-hand-side pattern to produce a right-hand-side output.

      > "The goal of Term Rewriting is to take some bit of data and rewrite it into some other bit of data. We accomplish this by writing rules that tell us for a given piece of data what we should turn it into."

      Simple Rewrite Rule Example: A basic rule maps a specific input (e.g., :x) to a designated output (:y).

      > "Here is the most simple rewrite rule imaginable. If we are given :x we turn it into :y."

      Combining Multiple Rewrite Rules: Multiple rules can be defined to handle various inputs simultaneously.

      > "Here we've extended our rewrite to have multiple rules."

      Utilizing Variables in Patterns: Variables (prefixed with ?) match any value and allow that matched value to be reused in the output.

      > "Here we added the variable ?x to our left-hand-side. Variables start with a ? and match any value."
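
A Python stand-in for the rules-with-variables idea (Meander itself is a Clojure library; this toy only mirrors the mechanics of matching `?variables` on the left-hand side and substituting them on the right):

```python
# Term rewriting toy: a rule is (lhs_pattern, rhs_template); variables are
# strings starting with "?", and a matched variable's value is reused in
# the output.

def is_var(p):
    return isinstance(p, str) and p.startswith("?")

def match(pattern, data, env=None):
    """Match pattern against data, binding ?variables; None on failure."""
    env = dict(env or {})
    if is_var(pattern):
        if pattern in env and env[pattern] != data:
            return None
        env[pattern] = data
        return env
    if isinstance(pattern, list) and isinstance(data, list):
        if len(pattern) != len(data):
            return None
        for p, d in zip(pattern, data):
            env = match(p, d, env)
            if env is None:
                return None
        return env
    return env if pattern == data else None

def substitute(template, env):
    if is_var(template):
        return env[template]
    if isinstance(template, list):
        return [substitute(t, env) for t in template]
    return template

def rewrite(rules, term):
    """Apply the first rule whose left-hand side matches; None if none do."""
    for lhs, rhs in rules:
        env = match(lhs, term)
        if env is not None:
            return substitute(rhs, env)
    return None

# "extract the first element" for vectors of size one and two
rules = [
    (["first", ["?x"]], "?x"),
    (["first", ["?x", "?y"]], "?x"),
]
print(rewrite(rules, ["first", ["a", "b"]]))  # → a
```

A term that matches no rule yields `None`, which is exactly the failure case the attempt strategy below is designed to absorb.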

      Pattern Matching on Data Structures: Rules can be crafted to operate on vectors, enabling extraction of specific elements such as the first element.

      > "Here we can see some really simple rules that work on vectors of various sizes. We can use this to extract the first element from each."

      Rewriting Strategies for Control: Strategies allow precise control over how and when rewrite rules are applied during computation.

      > "Strategies let us control how our terms are rewritten."

      The Attempt Strategy: The attempt strategy tries to apply a rewrite and, if it fails, returns the original value to handle cases where no match is found.

      > "We can fix that by using the attempt strategy. It will try to rewrite and if it fails, just return our value."

      Iterative Application with (until =) Strategy: The (until =) strategy repeatedly applies rules until the expression stops changing.

      > "What we really want to say is to continue applying our rewrite rules until nothing changes. We can do that by using the (until =) strategy."
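
The attempt and (until =) strategies are just higher-order functions over a rewriting step. A hedged sketch with a toy Peano-flavored rule (names follow the article; Meander's real strategies are Clojure combinators):

```python
# Strategies as plain higher-order functions: `attempt` absorbs rule failure,
# `until_fixpoint` re-applies a strategy until the term stops changing.

def attempt(rule):
    """Try a rewrite; if it doesn't apply (returns None), keep the value."""
    def strategy(term):
        result = rule(term)
        return term if result is None else result
    return strategy

def until_fixpoint(strategy):
    """The (until =) strategy: apply until nothing changes."""
    def run(term):
        while True:
            new = strategy(term)
            if new == term:
                return term
            term = new
    return run

# toy rule, one step of Peano-style addition: (add n m) → (add n-1 m+1)
def add_step(term):
    if isinstance(term, tuple) and term[0] == "add" and term[1] > 0:
        return ("add", term[1] - 1, term[2] + 1)
    return None

evaluate = until_fixpoint(attempt(add_step))
print(evaluate(("add", 3, 4)))  # → ('add', 0, 7)
```

Each intermediate term — `("add", 2, 5)`, `("add", 1, 6)`, and so on — is ordinary data, which is the "execution as data" property the article highlights: a trace strategy would simply print each one.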

      Traversal Approaches – Bottom-Up vs. Top-Down: Different strategies, such as bottom-up and top-down, affect the order and frequency of rule application in nested expressions.

      > "If we look at the top-down approach, we can see that the top-down strategy actually gets called three times... Our bottom-up strategy however is only called twice."

      Tracing the Rewriting Process: The trace strategy provides visibility into each step of the rewriting process for debugging and analysis.

      > "We can inspect our strategies at any point by using the trace strategy."

      General Computation via Term Rewriting: Term rewriting can express any computable function, exemplified by implementing Fibonacci with Peano numbers.

      > "Term Rewriting is a general programming technique. Using it we can compute absolutely anything that is computable."

      Code and Execution as Data: It enables code to be treated as data, allowing for introspection of intermediate execution steps.

      > "Not only can our 'code' be data more than it can in lisp, but we can actually have our execution as data."

      Support for Partial Programs: The paradigm facilitates working with incomplete programs by allowing unimplemented parts to be represented without immediate failure.

      > "Term Rewriting also gives us an easy basis for talking about partial programs."

      A New Programming Paradigm: Term rewriting represents a uniform, pattern-based approach to programming that challenges traditional distinctions between code and data.

      > "Term Rewriting represents a distinct way of programming. It offers us a uniform way of dealing with data."
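The strategy combinators summarized above (attempt, (until =), bottom-up) are small enough to prototype directly. A minimal Python sketch with hypothetical names — the talk itself uses a Clojure rewriting library, not this code:

```python
# A "rule" is a function returning a rewritten term, or None on no match.

def attempt(rule):
    """Try the rule; on failure, return the term unchanged."""
    def strategy(term):
        result = rule(term)
        return term if result is None else result
    return strategy

def until_fixed_point(strategy):
    """Keep applying the strategy until nothing changes."""
    def run(term):
        while True:
            new = strategy(term)
            if new == term:
                return term
            term = new
    return run

def bottom_up(strategy):
    """Rewrite children first, then the node itself."""
    def run(term):
        if isinstance(term, list):
            term = [run(child) for child in term]
        return strategy(term)
    return run

# Example rule: x + 0 -> x, with terms encoded as ["+", x, 0].
def plus_zero(term):
    if isinstance(term, list) and len(term) == 3 \
            and term[0] == "+" and term[2] == 0:
        return term[1]
    return None

simplify = until_fixed_point(bottom_up(attempt(plus_zero)))
print(simplify(["+", ["+", "x", 0], 0]))  # prints: x
```

A `trace` strategy in this encoding would simply wrap another strategy and print the term before delegating to it.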

    1. When recording a new permanent note, always think about linking that note to existing ideas and concepts. To do so, ask yourself questions like: How does this idea fit your existing knowledge? Does it correct, challenge, support, upgrade, or contradict what you already noted? How can you use this idea to explain Y, and what does it mean in the context of Z?

      What kinds of connections are there?

    2. Write exactly one note for each idea and write as if you were writing for someone else. Use full sentences, disclose your sources, make references, and try to be as precise, clear, and brief as possible.

      Permanent notes

    3. How to create permanent notes
      • After annotating a mass of content around a topic (a problem to be solved?)
      • I review all the notes and simply group them (using drag & drop? Or canvassing) as I go through them
      • Each note goes into a group of similar notes
      • These groups might just be the key takes from my research and the outline of my permanent note
      • Problems to be solved could be the anchors of my structure
      • At any given moment I'm thinking or consuming content in the context of some problem
      • Problem could be phrased as goals, challenges or even products to be designed
        • It might be important to choose which way to phrase it, or even use aliasing to find it from different states of mind
    4. Level 1: Fleeting Notes
      • Spontaneous thoughts that occur.
      • Only need quick capture
      • I'd also add a quick linking/tagging mechanism so it will at least be at the right area
    5. Level 2: Literature Notes
      • Bullet point summary in my own words of external content
      • I need to read more on how to take notes
      • Annotating seems like an effective approach
      • Collecting material feels more useful than it usually is
        • When I'm collecting references around a topic of interest I feel productive: it is systematic, efficient, easy to do, and the product is tangible.
        • However it leaves the meaningful work of actually digging into those references for later.
        • That "later" often doesn't come, and when it does:
          • Digging deeply into references isn't cheap, so I usually don't dig into most of the references I collected, only a selected few.
          • This can be looked at as BFS vs DFS strategy.
          • For articles, collection means less than actually reading and writing about the article.
          • For tools, nothing beats actually using the tool.
        • Perhaps my knowledge accumulation strategy should emphasize the shortest path to value.
      • Comparison of web annotation systems
        • [[Memex]]
          • Desktop
            • UI looks good
            • Live annotations on the site
            • Multiple notes on page
            • AI integration
            • Annotations for videos
          • Mobile
            • Annotations through the app
              • The app seems to keep the integrity of the original content (no content loss)
            • Buggy text input widget
          • Syncing
            • Require Readwise
              • Not bi-directional!
              • Unmodifiable log-based format
        • [[Hypothesis]]
          • Desktop
          • Mobile
            • Bookmarklet is a good option to bypass the need for extension
            • Proxy is also an option
            • Annotations on the original site (no content loss)
            • No annota
          • Syncing
            • Through Readwise
              • Nice format
              • Full control over frontmatter
            • Obsidian plugin
              • No control over frontmatter
        • [[Readwise]]
          • Desktop
            • UI looks old and unmaintained
            • Live annotations on the site
            • Multiple annotations on a site can only be achieved with highlights
              • Only a single note per page without highlighting
          • Mobile
            • Annotations through the app
              • Some content is lost when importing to the app
            • Text-to-speech
          • Syncing
            • Through Readwise
              • Not bi-directional!
              • Cleaner format than Memex
    1. My notes are publicly accessible, and I integrate them into public conversation
      • I too think knowledge should be open for feedback
      • Feedback improves quality
      • This also increases exposure which creates opportunities for collaboration
    2. Titles are very important indexes in my system. Zettels normally have numeric identifiers; modern adherents may give their notes titles, but apparently not with the same approach of “creating APIs.” Evergreen note titles are like APIs
      • This is true for my KMS as well
      • A system of naming and aliasing is very important for discoverability
      • It also prevents duplication
    3. More generally, my approach uses a broader Taxonomy of note types which describes a hierarchy and methodology for capturing ideas very early and incrementally developing clusters of notes into increasingly higher-level representations.
      • I'm looking for a Taxonomy of notes and relations
      • Andy's Taxonomy can provide some ideas / insight for my Taxonomy
      • Purpose & Motivation

        We show that for a broad class of CRUD (Create, Read, Update, Delete) applications, this gap can be bridged.

        • This work addresses the difficulty many web authors face in creating interactive, data-driven applications due to limited programming and data-schema knowledge.
      • Core Contribution

        Mavo extends the declarative syntax of HTML to describe Web applications that manage, store and transform data.

        • The system introduces a novel approach where authors define schemas implicitly by marking up HTML elements, effectively eliminating the need for complex server-side CMS solutions or JavaScript frameworks.
      • Related Work Context

        These three systems introduced powerful ideas: extending HTML to mark editable data in arbitrary web pages, spreadsheet-like light computation, a hierarchical data model, and independence from back-end functionality.

        • Mavo differentiates itself from prior efforts (e.g., Dido, Quilt, Gneiss) by combining lightweight computation, purely client-side data management, and seamless integration with arbitrary HTML layouts.
      • HTML-Based Declarative Syntax

        We chose to use HTML elements, attributes, and classes instead of new syntax for Mavo functionality because our target authors are already familiar with HTML syntax.

        • Mavo leverages standard HTML attributes—like property, data-multiple, and data-store—to mark elements for data binding, persistence, and editing.
      • Editing & Storage

        Mavo can store data locally in the page (data-store="#elementid"), in the browser’s local storage (data-store="local"), in an uploaded / downloaded file (data-store="file") or on one of the supported persistent storage services.

        • Once marked, any element or text becomes directly editable on the rendered page; changes are saved back to the chosen store, enabling WYSIWYG content management within the browser.
      • Objects & Collections

        Properties that contain other properties become grouping elements (objects in programming terminology); this permits a user to define multi-level schemas.

        • By adding data-multiple, users enable collection-like behavior, allowing repeated sections or items to be dynamically added or removed.
      • Lightweight Computation & Expressions

        Expressions are delimited by square brackets ([]) by default and can be placed anywhere inside the Mavo instance, including in HTML attributes.

        • This spreadsheet-like capability supports referencing other properties, performing aggregations (e.g., sum(), average(), count()), and conditional logic (iff()).
      • Implementation Details

        Mavo is implemented as a JavaScript library that integrates into a web page to simulate native support for our syntax.

        • On page load, Mavo constructs an internal tree representation of the HTML and expressions, updating all references reactively whenever data changes.
      • User Studies: Structured Tasks

        We found that the majority of users were easily able to mark up the editable portions of their mockups to create applications with complex hierarchical schemas.

        • Participants successfully transformed static HTML pages (for a ‘Decisions’ app and a ‘Foodie log’) into dynamic CRUD applications, achieving a 100% success rate on the basic CRUD tasks.
      • Common Difficulties

        Some participants frequently copied and pasted expressions when they needed the same calculation in different places.

        • Most challenges arose around conditionals (iff()) and more advanced expressions (e.g., filtered counts), underscoring that more complex logic remains harder for novices.
      • User Studies: Freestyle Tasks

        We asked them to use Mavo to make their mockup fully functional in any way they chose.

        • Users brought their own HTML for an ‘Address Book’ concept. All were able to implement multi-level data (e.g., multiple phone numbers) with minimal changes.
      • Direct Manipulation Design

        Instead of crafting a data model and then deciding how to template and edit it, a Mavo author’s manipulation of the visual layout of an application automatically implies the data model.

        • This makes building and editing data-backed pages more natural for users who think first in terms of presentation.
      • Intended Audience & Scalability

        Mavo is aimed at a broad population of users. There is no hard limit to what it can do, since its expressions also accept arbitrary JavaScript.

        • The system is best suited to ‘small data’ applications such as personal information management or single-author web publishing, but can also facilitate multi-user scenarios with suitable back-end access controls.
      • Semantic Web Alignment

        Mavo syntax for naming elements is based on a simplified version of RDFa… as a result, at runtime any Mavo instance becomes valid RDFa that can be consumed by any program that needs it.

        • Authors who add property attributes gain structured, machine-readable data without explicitly learning RDFa or JSON.
      • Planned Improvements

        A more direct way to declaratively express these operations [sorting, searching, filtering] is needed… sorting, searching and filtering were recurring themes.

        • Future work will focus on refining conditional syntax, improving data migrations when HTML structures change, and incorporating advanced multi-user access models.
      • Conclusion

        We show that HTML authors can quickly learn to use Mavo attributes to transform static mockups to CRUD applications, and, to a large extent, use Mavo expressions to perform dynamic calculations and data transformations.

        • The researchers envision a future where editing and storing structured data via standard HTML becomes ubiquitous, lowering barriers for both novice and skilled web authors.
      • Object algebras solve the expression problem in OO languages using simple generics.

        "This paper presents a new solution to the expression problem that works in OO languages with simple generics (including Java or C#)."

      • They avoid the need for advanced typing features such as F-bounded quantification, wildcards, or variance annotations.

        "Object algebras use simple, intuitive generic types that work in languages such as Java or C#. They do not need the most advanced and difficult features of generics available in those languages, e.g. F-bounded quantification, wildcards or variance annotations."

      • They improve on the Visitor pattern by eliminating accept methods and preserving encapsulation.

        "Object algebras also have much in common with the traditional forms of the Visitor pattern, but without many of its drawbacks: they are extensible, remove the need for accept methods, and do not compromise encapsulation."

      • They support retroactive interface implementations, allowing new operations to be added without modifying existing code.

        "By using this simple pattern we can provide retroactive implementations of interfaces to existing code."

      • They enable extension in two dimensions: adding new data variants and new operations.

        "There are two ways in which we may want to extend our expressions: adding new variants; or adding new operations."

      • They directly implement functional internal visitors, reflecting Church encodings.

        "Object algebras provide a direct implementation of (functional) internal visitors since constructive algebraic signatures correspond exactly to internal visitor interfaces."

      • Multi-sorted object algebras support multiple, potentially mutually recursive types and operations as a family.

        "In larger programs, it is often the case that we need multiple (potentially mutually) recursive types and operations evolving as a family."

      • Modular combinators, such as union and combine, allow independent extensibility and parallel execution of operations.

        "Sometimes it is useful to compose multiple operations together in such a way that they are executed in parallel to the same input."

      • A real-world case study demonstrates their application in a remote batch invocation system integrating RPC, web services, and SQL translation.

        "We have used this technique in implementing a new client model for invoking remote procedure calls (RPC), web services, and database clients (SQL)."

      • In conclusion, object algebras offer a lightweight, factory-oriented programming style that scales well while minimizing conceptual overhead.

        "This paper presents a new solution to the expression problem based on object algebras. This solution is interesting because it is extremely lightweight in terms of required language features; has a low conceptual overhead for programmers; and it scales well with respect to other challenges related to the expression problem."
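The pattern is compact enough to sketch. A hedged Python rendering — the paper targets Java/C# generics, and the names below are illustrative, not the paper's:

```python
# Object-algebra sketch: an "algebra" supplies one factory method per
# data variant; an expression is a function over any such algebra.

class EvalAlg:
    """Interpretation 1: evaluate to a number."""
    def lit(self, n): return n
    def add(self, a, b): return a + b

class PrintAlg:
    """Interpretation 2: pretty-print. A new operation, no old code touched."""
    def lit(self, n): return str(n)
    def add(self, a, b): return f"({a} + {b})"

def expr(alg):
    # 1 + (2 + 3), built against whatever algebra is supplied.
    return alg.add(alg.lit(1), alg.add(alg.lit(2), alg.lit(3)))

print(expr(EvalAlg()))   # prints: 6
print(expr(PrintAlg()))  # prints: (1 + (2 + 3))

# Extension in the other dimension: a new variant, also without
# modifying existing code.
class EvalMulAlg(EvalAlg):
    def mul(self, a, b): return a * b
```

This mirrors the two extension dimensions named in the notes: new operations are new algebra classes, new variants are new factory methods.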

      • Denotational Model Overview:

        This paper presents a denotational model of inheritance. The model is based on an intuitive motivation of inheritance as a mechanism for deriving modified versions of recursive definitions.

      • Inheritance as Differential Programming:

        Inheritance is a mechanism for differential, or incremental, programming.

      • Handling Self-Reference:

        in order for a derivation to have the same conceptual effect as direct modification, self-reference in the original definition must be changed to refer to the modified definition.

      • Fixed Point Semantics Foundation:

        Theorem 1 (Fixed Point) If D is a cpo and f ∈ D → D is continuous, then there is a least x ∈ D such that x = f(x).

      • Modeling Inheritance via Generators:

        Inheritance is modeled as an operation on generators that yields a new generator.

      • Wrapper Mechanism for Modifications:

        The modifications, however, are also defined in terms of the original methods (via super). In addition, the modifications refer to the resulting structure (via self).

      • Equivalence of Semantics:

        Theorem 2 send = behave.

      • Operational Method Lookup Semantics:

        When a message is sent, the methods in the receiver’s class are searched for one with a matching selector. If none is found, the methods in that class’s superclass are searched next.

      • Intuitive Advantages and Broader Implications:

        The primary advantage of the denotational semantics is the intuitive explanation it provides. It suggests that inheritance may be useful for other kinds of recursive structures, like types and functions, in addition to classes.

      • Application to Compiler Generation:

        Our denotational semantics of inheritance can be used as a basis for semantics-directed compiler generation for object-oriented languages, as shown by Khoo and Sundaresh (1991).
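The generator/wrapper story above can be made concrete. A hedged Python encoding (assumed names; the paper works in a cpo-based denotational setting, not in Python): a generator maps `self` to a record of methods, inheritance composes a wrapper with a parent generator, and a fixed point ties `self` to the modified definition.

```python
def fix(gen):
    """Fixed-point analogue: build an object whose methods see the
    finished object as `self`."""
    class Obj: pass
    obj = Obj()
    for name, fn in gen(obj).items():
        setattr(obj, name, fn)
    return obj

def point_gen(self):
    return {"x": lambda: 1,
            "double_x": lambda: 2 * self.x()}

def inherit(wrapper, parent_gen):
    # Self-reference is redirected to the *modified* definition,
    # while `super` still sees the parent's original methods.
    def gen(self):
        sup = parent_gen(self)
        return {**sup, **wrapper(self, sup)}
    return gen

def shifted(self, sup):
    return {"x": lambda: sup["x"]() + 10}

child = fix(inherit(shifted, point_gen))
print(child.x())         # prints: 11
print(child.double_x())  # prints: 22 (inherited method sees modified x)
```

The second print is the key point: `double_x`, defined only in the parent, calls through `self` and therefore observes the overridden `x`, exactly the "conceptual effect of direct modification" described above.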

    1. Denotational Design as a Real Process

      Denotational design is the process that has been elaborately developed by Conal Elliott.

      Core Principle of Stepping Back from Implementation

      We don't want to jump in and say, ‘An image is an array of pixels.’ That’s too soon yet that’s where most of us start.

      Abstract Definition of an Image

      An image is just a function from a pixel location, so an X, Y coordinate to color, where X, Y are in the real number space.

      Emphasis on Algebraic Properties and Category Theory

      He uses algebraic properties and category theory. I think algebraic properties are a very good indicator that you are, ‘on to something’ in the design.

      Incremental, Iterative Refinement

      You have to go back and revise and you make an attempt in a certain direction, and you learn something, and you bring that back to the beginning.

      Four Steps of the Denotational Design Process

      These are the four steps that I see... This first one is to...like a Zenning out and forgetting all implementation assumptions...Then you explore...Then you align with category theory concepts...Then the final thing is actually implementing it.

      Challenges with Haskell’s Type System

      Haskell has no type for real numbers. Most languages don’t...Another thing is, when you’re talking about say, the Monad laws or the Functor laws...there’s no way to do that equality comparison.

      Similar Difficulties in Clojure

      I do think it's a little harder than in Haskell, but I also think that most of the design part is happening in your head.

      The Essence of Denotational Design

      It’s about going back to first principles, building things up, understanding how things compose, and following a different gradient from what most people use when they design.
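The "image is a function" denotation can be made concrete in a few lines. A hedged Python sketch (illustrative names; Elliott's own work is in Haskell): no pixel arrays appear, and composition is just function composition.

```python
# An image is a function from real (x, y) coordinates to a color.

def circle(x, y):
    return "black" if x * x + y * y <= 1.0 else "white"

def translate(image, dx, dy):
    """Shift an image by moving the sample point the other way."""
    return lambda x, y: image(x - dx, y - dy)

def over(top, bottom):
    """Layer two images; stays entirely in the abstract model."""
    return lambda x, y: top(x, y) if top(x, y) != "white" else bottom(x, y)

shifted = translate(circle, 2.0, 0.0)
scene = over(shifted, circle)
print(circle(0.0, 0.0))   # prints: black
print(shifted(0.0, 0.0))  # prints: white
print(scene(0.0, 0.0))    # prints: black
```

Pixels would only enter at the very end, as a sampling step over this function, which is exactly the "don't say an image is an array of pixels too soon" point made above.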

    1. Introduction to Signals & Reactivity

      "Signals are values that change over time...The key to a reactive system is that it knows when you set the value; it looks for the set on the specific property, the specific function...and it reads with a function."

      • The speaker emphasizes that signals conceptually hold a current value rather than providing a continuous stream of intermediate states.
      • Signals update synchronously, ensuring that any consumer remains consistent with the current snapshot of data.

      Immutable vs. Mutable Structures

      "We used to learn the framework and then learn the fundamentals kind of thing... in 2014, JavaScript had already taken off and I admittedly didn’t know very much."

      • The conversation highlights the contrast between immutable data (where changes create new references) and mutable data (where changes happen in place).
      • Immutable updates trigger re-diffing, whereas mutable updates allow granular changes at the specific location.

      Nested Signals & Efficiency

      "If we just go in and change Jack’s name to Janet by just setting the name, we only need to re-run the internal effect…it’s only the nearest effect that runs."

      • By nesting signals inside signals, individual property changes can avoid re-running the entire component or the entire data structure.
      • This nested approach demonstrates how specific effects update only the parts that need to change, improving performance.

      Store Proxies as a “Best of Both Worlds”

      "React basically tells you that this is where you end up—when you know that you can cheat it a little bit, you just get past it."

      • Using proxies allows for deep mutation with minimal overhead, merging the developer simplicity of an immutable interface with the fine-grained updates of mutable change.
      • The speaker points out that many frameworks lack built-in “derived mutable” structures and rely on bridging solutions such as user-space code or specialized stores.

      Map, Filter & Reduce with Signals

      "We can also avoid allocations here by only storing the final results... but only if you care about final results."

      • Mapping over large datasets illustrates the trade-offs between immutable approaches (always re-map) and mutable approaches (apply partial updates or diffs).
      • The speaker notes that “filter” and “reduce” often involve more complete re-runs and may need specialized logic or custom diffs rather than a one-size-fits-all operator.

      Convergent Nodes & Reactive Graphs

      "Because signals are a value conceptually, not a stream—...the goal of effects isn’t to serve as a log... it’s a way of doing external synchronization with the current state."

      • Computed values (memos) serve as convergence points in the reactive graph. Multiple sources merge into derived data that updates automatically.
      • Fine-grained systems need to track only minimal dependencies, but the conversation repeatedly underscores that different transformations (map, filter, reduce) pose unique challenges.

      Async & Suspense Insights

      "Run once doesn’t work—there’s no way to avoid scheduling. We need to always throw or force undefined, so we need suspense."

      • Lazy asynchronous signals can lead to “waterfalls,” where the second request only starts after the first completes.
      • Suspending or temporarily showing old data can prevent blank states but risks double fetching and inconsistent chaining unless carefully guarded.

      Early Returns vs. Control Flow Components

      "Early returns... push decisions upwards where further positions are pushed down, so it might not impact React, but it doesn’t lead to a way forward."

      • The speaker critiques patterns that rely on returning early in components, arguing they duplicate layout logic and sometimes break optimal reactivity.
      • The recommendation is to align with data flow and push condition checks closer to where data is rendered instead of scattering them at multiple return statements.

      Syntax Debates & Framework Convergence

      "Syntax in JS frameworks is overrated... we’re at a point now where the looks are so superficial that you can be looking at complete opposites and it looks identical."

      • Despite the prevalent notion that rune-style syntax in various frameworks resembles React, the underlying reactivity mechanics differ significantly.
      • Discussion highlights that every major framework—React, Vue, Svelte, Solid—converges on signals or reactivity, yet each approach’s details (mutable vs. immutable, compiler vs. runtime) vary widely.

      Conclusion: Future of Signals & Reactive Systems

      "People are starting to wonder if we should just have one super framework now that we mostly agree... but each framework’s identity is in how it approaches these details."

      • The speaker underscores ongoing exploration into data diffing, nesting, and push-pull mechanisms to improve performance while simplifying the developer experience.
      • Signals and granular reactivity are core to bridging a user-friendly interface with minimal overhead, a goal each framework pursues in its own unique, evolving way.
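The read-tracking and targeted re-running discussed throughout this talk can be sketched in a few dozen lines. A minimal Python sketch with assumed names — this is no framework's actual API, just the core mechanic: reads inside an effect register dependencies, and a write re-runs only the effects that read that particular signal.

```python
_current_effect = None  # the effect currently executing, if any

class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        # Reading inside an effect subscribes that effect.
        if _current_effect is not None:
            self._subscribers.add(_current_effect)
        return self._value

    def set(self, value):
        self._value = value
        # Only effects that read *this* signal re-run.
        for effect in list(self._subscribers):
            effect()

def create_effect(fn):
    global _current_effect
    def run():
        global _current_effect
        _current_effect = run
        try:
            fn()
        finally:
            _current_effect = None
    run()  # run once eagerly to collect dependencies
    return run

log = []
name = Signal("Jack")
create_effect(lambda: log.append(name.get()))
name.set("Janet")  # re-runs just the one subscribed effect
print(log)         # prints: ['Jack', 'Janet']
```

Nesting a `Signal` inside another signal's value gives exactly the "only the nearest effect runs" behavior from the Jack/Janet example: changing the inner name signal notifies only effects that read the inner signal.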
    1. Summary of "Wrong Way! Choosing a Direction for Datomic Ref Types" by Francis Avila

      Problem Statement

      • Datomic reference attributes require a direction (e.g., :vehicle/passengers vs. :passenger/vehicle).
      • Key question: Which direction should be chosen for optimal performance and usability?

      Why Prefer :passenger/vehicle (Lower-Cardinality Forward Direction)?

      1. Keeps collections in map-projections smaller

      • "Lower-cardinality attributes values are generally easier to deal with because entity-walking with d/entity or d/touch won’t occasionally give you unexpectedly large sets."
      • Key Issue: High-cardinality attributes can create large collections, making entity exploration cumbersome.
      • Protection Mechanism: "d/pull protects you from this because it will only pull 1000 items by default, but this is easy to forget!"

      2. Keeps EAVT index smaller per entity

      • "A high-cardinality relationship between two entities often implies some kind of containment relationship."
      • Key Benefit: If the container entity is "rich" (i.e., has many attributes), using :passenger/vehicle keeps the EAVT index for the container smaller.
      • Why it matters: Large EAVT indexes decrease index selectivity, making entity lookup less efficient.
      • Caveat: This mainly affects d/entity and d/pull, not d/q or d/pull-many, which rely more on AEVT.

      3. Improves EAVT history readability

      • "Keeping the EAVT smaller per E also makes the history of an entity more human-legible when using d/history database reads over the EAVT index."
      • Problem with :vehicle/passengers: "It will clutter the history of the container entity (the vehicle)."
      • Advantage of :passenger/vehicle: Changes in passenger-vehicle relationships are recorded on the passenger’s history, not the vehicle’s.
      • Takeaway: For audit logs and admin views, the passenger's history is usually more useful than the vehicle's.

      4. Supports cardinality-one constraints with last-write-wins semantics

      • "Very often, there is also an 'only in one container' constraint between a container and contained entity."
      • Key Advantage: Using :passenger/vehicle as a cardinality-one attribute naturally enforces the rule that a passenger can only be in one vehicle at a time.
      • Contrast with :vehicle/passengers: This direction cannot enforce uniqueness without additional constraints (e.g., transaction functions).
      • Takeaway: If the relationship is inherently one-to-one, :passenger/vehicle provides better integrity guarantees.

      Why Prefer :vehicle/passengers (High-Cardinality Forward Direction)?

      1. Better schema legibility

      • ":vehicle/passengers makes it clear when grouping by keyword namespace that vehicles are expected to reference many passengers."
      • Issue with :passenger/vehicle: It does not make the vehicle-passenger relationship explicit.
      • Challenge: No built-in way to highlight reverse ref relationships in Datomic schema.
      • Possible Fixes:
        • Use entity specs with :doc metadata.
        • Create custom ref-range annotations.

      2. More useful d/index-pull queries

      • "d/index-pull provides extremely efficient, lazy, and offset-able pulls over the third slot in an AVET or AEVT index span."
      • Problem: "d/index-pull cannot scan VAET."
      • Why it matters: If you need to retrieve all passengers for a vehicle, d/index-pull won’t work efficiently for :passenger/vehicle.
      • Workaround: Adding :db/index true to :passenger/vehicle, but this adds extra datoms to the index.

      3. Reduces index segment churn

      • "If the number of containers is significantly smaller than the number of contain-able entities, and the containment relationship churns frequently, the lower-cardinality-forward attribute is going to invalidate more segments during indexing."
      • Example: If passengers frequently switch vehicles, using :passenger/vehicle means frequent updates across many EAVT and AEVT segments, causing more index fragmentation.
      • Alternative: ":vehicle/passengers results in fewer index segments being invalidated."
      • Takeaway: For high-churn relationships, :vehicle/passengers may be more efficient.

      Summary Takeaway

      • :container/contained (e.g., :vehicle/passengers) is intuitive, but :contained/container (e.g., :passenger/vehicle) is usually better due to:
        • Smaller map-projections.
        • More efficient entity walking.
        • Better historical auditing.
        • Natural enforcement of cardinality constraints.
      • But :vehicle/passengers is better when:
        • Schema readability is critical.
        • d/index-pull is needed for efficient queries.
        • The relationship has frequent changes and involves many contained entities.
    1. Introduction to the Stream and Purpose

      “I'm uh pretty excited about this one because I'm going to get to finally show off some of the stuff I've been working on for months.”

      • Highlights the speaker’s enthusiasm to demonstrate months of development progress.
      • Establishes that the stream will cover new reactive features and mechanisms.

      Parallel and Nested Async Fetching

      “What if we want to do nested fetching where each component fetches data, but we don’t want to cause waterfalls? … We’ve basically solved waterfalls because promises do not throw out too early.”

      • Emphasizes that asynchronous tasks in Solid can now run in parallel.
      • Shows how nested components fetch data without blocking each other.

      createAsync as a Core Signal Primitive

      “createAsync… if we look at the signature here, it expects a computation… and then returns an accessor of a number… it will just give you the resolved async value without returning undefined.”

      • Introduces createAsync as an “async signal” that never yields undefined.
      • Allows a direct, non-nullable way to fetch and use async data in the component.

      Local vs. Global Suspense Boundaries

      “We don’t have to throw away our render tree just because we have something async… it only throws exactly where we read it, and that means everything else is fine.”

      • Suspense is granular: only the part that reads an unresolved value suspends.
      • Other parts of the UI remain interactive rather than unmounting the entire tree.

      Self-Healing Error Boundaries

      “We basically collect the nodes that fail, and then, if they become unfailed or get disposed, the boundary can remove itself—self-heal.”

      • Explains that failed async or errors get tracked locally by boundaries.
      • Once the failure resolves or is disposed, the error boundary automatically resets.

      Avoiding Unpredictable Tearing

      “You basically never want your async data to just flicker in or out. We can choose to throw or to keep ‘stale’ data. Suspense can opt into that.”

      • Details the importance of consistent state during asynchronous updates.
      • Introduces a mechanism (isStale or latest) to avoid jarring UI replacements.

      Splitting createEffect for Predictability

      “If we just let you read signals in the same function where we do side effects, we get unpredictable re-runs… so we split it into two halves.”

      • Shows how Solid 2.0 separates the tracking (pure) side from the side-effect (impure) side.
      • Ensures that data retrieval and side-effects remain consistent, avoiding “zalgo” outcomes.

      Mutable Reactivity and Store Projections

      “I realized a store approach is a general solution… the idea is you have a single source signal and can ‘project’ it out to many places… only the fields that change update.”

      • Describes a new technique called “projections” to handle large data sets efficiently.
      • Allows per-field reactivity, so only the row or property that changes triggers updates.

      Granular Handling of Async and Errors

      “Error boundaries and suspense handle each failing effect locally. The rest of the system doesn’t even know something failed.”

      • Illustrates that errors remain localized, preventing a full unmount.
      • Reflects the fine-grained reactivity approach, making error handling more targeted.

      Impact on Ecosystem Comparisons

      “React can’t do this because… they don’t have the semantics to pull from signals. It’s not the same model.”

      • States the fundamental difference from React’s component rendering.
      • Emphasizes that granular updates and specialized async signals differ sharply from React’s design.

      Future Plans: SSR, Hydration, Transitions

      “We still need transitions. I haven’t implemented them yet, but they’re part of the equation. … Also looking at SSR so we can skip hydration IDs.”

      • Points to upcoming work for Solid 2.0: concurrency transitions, improved server rendering, and more efficient hydration.
      • Aims to unify the new runtime mechanisms with advanced features like streaming.

      Concluding Observations

      “We’ve basically… proven we can handle async, error boundaries, and concurrency all purely at the reactive level. This changes everything.”

      • Summarizes the significance of these new developments in Solid’s reactivity engine.
      • Stresses that purely runtime-based solutions enable advanced use-cases without a compiler-centric approach.
    1. Summary of "Genuinely Functional User Interfaces" by Antony Courtney & Conal Elliott

      Abstract & Motivation

      • Fruit is a GUI library for Haskell based on a formal model.
      • The model defines signals (continuous time-varying values) and signal transformers (pure functions mapping signals to signals).
      • GUI components are composed as signal transformers.
      • The aim is to develop a denotational model for GUIs to enable reasoning about properties, abstraction levels, and new interaction paradigms.

      Key Contributions

      • Introduces AFRP (Arrow-based Functional Reactive Programming), an extension of FRP (Functional Reactive Programming).
      • Uses signals (functions from time to values) and signal transformers (functions from signals to signals) to model GUIs.
      • Defines GUI components compositionally, rather than as a hierarchy of widgets.
      • Provides novel transformations, including continuous spatial scaling (zooming) and multiple views.

      1. Introduction

      • Previous Haskell GUI libraries (e.g., TkGofer, Fudgets, FranTk) have imperative or semi-functional designs.
      • High-level programming means focusing on the conceptual model rather than implementation details.
      • Fruit aims for a purely functional approach, mapping Haskell types and functions directly to their conceptual counterparts.

      "What is an abstract conceptual model of a graphical user interface?"

      "All previous GUI libraries for Haskell define the conceptual model of a GUI only informally, or defer to some external system (e.g., X Windows, Tk, Gtk)."


      2. AFRP Programming Model

      • Based on signals (time-dependent values) and signal transformers (functions on signals).

      Signals

      "A signal is a function from time to a value: Signal α = Time → α"

      • Example: The mouse’s (x,y) position is a Signal Point.

      Signal Transformers

      "A signal transformer is a function from Signal to Signal: ST α β = Signal α → Signal β"

      • Example: A transformer that outputs a signal of the mouse position.

      Abstract Types

      • Signals are conceptually continuous but implemented using discrete sampling.
      • First-class signal transformers (but signals are not first-class) ensure modularity and prevent space-time leaks.

      Arrows in AFRP

      • Arrows (introduced by Hughes) allow composing functions structurally.
      • AFRP uses arr, >>>, first, and loop operators for structuring signal transformers.
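For intuition, the arrow combinators can be rendered in JavaScript, treating a signal as a function from time to a value and a transformer as a function on signals. This is a conceptual sketch of AFRP's model, not the Haskell library:

```javascript
// A signal is Time -> value; a signal transformer maps signals to signals.
// These helpers mimic arr, >>> and first (illustrative only).
const arr = (f) => (signal) => (t) => f(signal(t)); // lift a pure function
const compose = (st1, st2) => (signal) => st2(st1(signal)); // the >>> operator
const first = (st) => (pairSignal) => (t) => {
  const [a, b] = pairSignal(t);
  return [st(() => a)(t), b]; // transform the first component, pass the second through
};

// Example: a synthetic "mouse position" signal and a transformer over it.
const mousePos = (t) => ({ x: t * 2, y: t * 3 });
const scaledX = compose(arr((p) => p.x), arr((x) => x * 10));
```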

      Discrete Events

      • Events modeled as Maybe α signals:

      "If the value at time t is Nothing, the event did not occur. If it is Just v, the event occurred with value v."


      3. The Fruit GUI Model

      GUI Definition

      "A GUI is a signal transformer of type GUI a b = ST (GUIInput, a) (Picture, b)"

      • GUIInput: Captures mouse and keyboard state.
      • Picture: Defines the GUI’s visual representation.
      • a → b: Represents auxiliary semantic input/output.

      GUI Components

      • Mouse Position Transformer

        ```haskell
        mouseST = arr snd >>> arr (fmap mpos) >>> stepper G.origin2
        ```

      • A GUI that draws a ball following the mouse

        ```haskell
        ballGUI = first (mouseST >>> arr (move ballPic))
        ```

      • Running a GUI

        ```haskell
        runGUI :: GUI a b -> IO ()
        ```

      4. Composing Applications

      Example: Paddleball Game

      • Uses recursive signal transformers for position and velocity.
      • Handles collisions with the when combinator:

        ```haskell
        when :: ST Bool (Maybe ())
        ```
      • Enables restart functionality with an event-based state machine.

      Restart Button

      "To restart the game, we need a button that produces an event when pressed and resets the game state."

      • Uses stepAccum and switch for event handling.

      Dynamically Enabling Buttons

      • Uses an event-driven signal for enabling/disabling:

        ```haskell
        allowRestart <- stepper False -< gameDone
        ```

      5. Exploring Modularity

      Transforming GUIs

      "A GUI's visual output is a Picture signal, which can be transformed point-wise."

      • Enables zooming and spatial transformation:

        ```haskell
        transformGUI :: G.Transform -> GUI b c -> GUI b c
        ```

      Multiple Views

      "Many applications need multiple views on the same underlying dataset."

      • Passive views: Observe state but don’t interact.
      • Active views: Interact and sync across views.
      • Implemented using focus-aware GUI multiplexing.

      6. Dynamic GUIs

      • Adding GUI components dynamically (e.g., labels appearing when a button is pressed).
      • Uses accumST for incremental updates:

        ```haskell
        accumST :: (ST b c -> d -> ST b c) -> ST b c -> ST (b,Maybe d) c
        ```

      • Example: Dynamically growing label list

        ```haskell
        countLabels = proc (inpS, es) -> do
          lblNumE  <- countE -< es
          (picS,_) <- accumST addLabel (mkLabel 0) -< ((inpS,()), lblNumE)
        ```

      7. Related Work

      • Fudgets: Similar compositional approach but uses asynchronous stream processing.
      • FranTk: Uses FRP but has an imperative widget-creation model.
      • Mux (Pike, 1989): A CSP-style GUI system similar to Fruit’s multiplexing model.

      8. Implementation & Future Work

      • Prototype implemented using Haskell and Java2D.
      • Plans to:
      • Replace stream-based FRP with efficient data-driven FRP.
      • Add a complete widget set.
      • Support efficient dynamic collections.

      Key Takeaways

      1. Denotational Model
        • GUI components are pure signal transformers.
        • Enables formal reasoning and abstraction-level comparisons.
      2. Modularity & Composability
        • GUIs are first-class values, supporting higher-order manipulation.
        • Transformations (e.g., zooming) are naturally expressible.
      3. Functional Advantages
        • No space-time leaks (unlike earlier FRP models).
        • Multiple active views and zooming come "for free."
      4. Real-world Implications
        • Can integrate modern HCI paradigms (e.g., zoomable UIs) seamlessly.
        • Has potential beyond traditional GUI frameworks.

      Final Thoughts

      Fruit presents a genuinely functional approach to GUIs, offering modularity, composability, and abstraction benefits over imperative toolkits. Its denotational foundation provides a rigorous semantic basis, paving the way for more principled and flexible UI development in functional programming.

    1. Summary of the Talk: Building More Powerful User Interfaces in the Browser

      Introduction & Motivation

      • The speaker reflects on a year’s work in adopting modern web technologies (AMD, Backbone, HTML5, CSS3, new browser APIs) but realizes that the fundamental power given to users has not improved significantly from 15 years ago.

      "I looked at what we had built at the end of the year and I said you know I think I could have built this 15 years ago when I started writing JavaScript."

      • Traditional web applications function like simple forms: users provide inputs, and the application computes outputs. The speaker seeks a way to eliminate this rigid distinction and create more interactive and dynamic UI models.

      "You have to decide in advance which things you think are input... and which things you think are output."

      Example: Federal Budget Visualization

      • A web-based visualization of the US 2013 federal budget illustrates the limitations of traditional input-output models.

      "What if we could actually take this and change this and not have a distinction between input and output?"

      • The speaker explores how users could interact more dynamically by locking certain variables (e.g., keeping the deficit fixed) and observing how other variables (e.g., taxes) adjust automatically.

      Concept of Constraint Programming

      • Constraint programming allows defining relationships between variables instead of prescribing explicit procedures for computation.

      "Constraint programming is about writing our programs in terms of relations instead of procedures."

      • Example: Instead of coding tax rates explicitly, define relationships between tax brackets and let the system adjust them dynamically.

      "We make all of our variables both input and output."

      Cassowary Solver for UI Constraints

      • Cassowary.js (CJS), a JavaScript port of the Cassowary constraint solver, enables constraint solving for UI applications; the same algorithm underlies iOS Auto Layout.

      "Cassowary is a fantastic library for doing constraint programming in JavaScript."

      • It ensures relationships like total spending = defense + non-defense spending remain consistent, even when one variable is modified dynamically.

      "The solver will automatically for us make sure that this relationship between the three variables always holds."

      • Supports priority constraints to handle over-constrained systems (e.g., some constraints are "required," others are "nice to have").

      "If you have problems that are over constrained where there's no complete solution, Cassowary will find you the best solution you can find."

      Practical Implementation

      • Basic interaction model:
      • Mark a variable as being edited (beginEdit).
      • Suggest new values (suggestValue).
      • End editing (endEdit).
      • Add constraints (stayConstraint) to keep certain variables fixed.

      "The reason this is called Suggest and not set... is that this value might not lead to an actual solution."

      • Using constraints simplifies complex relationships, such as progressive taxation and revenue calculations.

      "Several very natural things fell out of writing the relationships that would have been a real pain to code by hand."
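The edit cycle above can be illustrated with a toy, hard-coded "solver" that keeps one linear relation consistent while a new value is suggested. This is not the real Cassowary.js API (which handles arbitrary linear constraints with priorities); it only shows the shape of the idea:

```javascript
// Toy illustration of suggest + stay: keep total = defense + nonDefense
// consistent while one variable is edited. (Hypothetical names, not Cassowary.js.)
function makeBudget(defense, nonDefense) {
  const state = { defense, nonDefense, total: defense + nonDefense };
  return {
    values: () => ({ ...state }),
    // Suggest a new total while defense "stays" fixed; the remaining free
    // variable (nonDefense) is re-derived so the relation keeps holding.
    suggestTotal(total) {
      state.total = total;
      state.nonDefense = total - state.defense;
    },
  };
}
```

With a real solver, which variable gets re-derived is decided by the constraint priorities rather than hard-coded as here.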

      Limitations of Cassowary

      • Only supports linear equations and numeric variables (e.g., it cannot handle quadratic constraints).

      "Cassowary can only solve problems where the variables are numbers and it can only solve where the relationships between the numbers are linear expressions."

      • Proposes improvements:
      • Nonlinear constraint solvers (for geometric problems, e.g., keeping a point on a circle).
      • MiniKanren & Core.logic for relational programming on non-numeric problems.

      MiniKanren & Core.logic

      • A relational programming model that generalizes constraints beyond numbers to trees, lists, colors, and abstract structures.

      "MiniKanren provides relational programming just like we've been doing with Cassowary but over non-numerical problem domains."

      • Example: Sudoku Solver in Core.logic solves the problem declaratively by defining constraints rather than writing procedural code.

      "This looks like a statement of the problem of Sudoku, and yet this will run really fast and give us the answers."

      • Benchmarks: A JavaScript Core.logic Sudoku solver runs 100x faster than Peter Norvig’s optimized Python version.

      "This version which is not handwritten just happens to use a constraint solver works on average about a hundred times faster than Peter Norvig's Python code in JavaScript in the browser."
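The relational core behind miniKanren-style systems can be hinted at with a minimal unification sketch. This is illustrative JavaScript only, omitting the goals, streams, and constraint machinery that real implementations such as Core.logic provide:

```javascript
// Minimal unification: logic variables, a substitution map, and structural matching.
const lvar = (name) => ({ lvar: name });
const isLvar = (x) => x != null && x.lvar !== undefined;

// Follow a variable through the substitution until a value or a fresh variable.
function walk(term, subst) {
  while (isLvar(term) && subst.has(term.lvar)) term = subst.get(term.lvar);
  return term;
}

// Unify two terms, extending the substitution; null signals failure.
function unify(a, b, subst = new Map()) {
  a = walk(a, subst);
  b = walk(b, subst);
  if (isLvar(a)) return new Map(subst).set(a.lvar, b);
  if (isLvar(b)) return new Map(subst).set(b.lvar, a);
  if (Array.isArray(a) && Array.isArray(b) && a.length === b.length) {
    for (let i = 0; i < a.length; i++) {
      subst = unify(a[i], b[i], subst);
      if (subst === null) return null;
    }
    return subst;
  }
  return a === b ? subst : null;
}
```

Because unification is symmetric, any position in a term can act as "input" or "output" — the same property the talk exploits for numeric variables with Cassowary.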

      Future Directions: Cooperating Solvers

      • Alan Kay’s Viewpoints Research Institute explores "cooperating solvers," where different constraint-solving techniques (dataflow, logical constraints) work together.

      "How do they cooperate? There's some really interesting PDFs and example code showing how to do that."

      • Potential applications:
      • More powerful layout engines.
      • Interactive problem-solving beyond algebraic constraints.

      Bonus: Bret Victor’s Scrubbing Calculator

      • Inspired by Bret Victor’s UI work, the speaker demonstrates an interactive "scrubbing calculator" implemented in Cassowary.

      "This is an example of the kind of program that would be hard to write by hand unless you really want to write your own computer algebra system."

      • Allows solving for any variable in an equation dynamically just by adjusting values interactively.

      "If we had a real computer algebra system in JavaScript wouldn't that be great?"

      Conclusion

      • Advocates for constraint programming as a means to build more powerful, flexible UI applications.
      • Suggests integrating constraint-based approaches in mainstream web development to create more intelligent, adaptive, and user-driven applications.

      "If we want to build more powerful applications, we’ve got to give our users more leverage."

      • Calls for open-source contributions to Cassowary and similar projects.

      "C.JS is a fantastic one... let's go out and build more powerful user interfaces."

    1. Summary of DevTools FM Podcast with Juan Capa on Membrane.io

      Introduction and Background

      • Juan Capa is the creator of Membrane.io, a still-in-development platform for simplifying API automation and internal tooling.

      "Juan is the creator of membrane.io, a still-in-development platform for simplifying API automation and internal tooling."

      • He has a background in game development, having spent over a decade working on console, mobile, and web games.

      "I have a background in game development. I spent a little bit more than 10 years working in game development."

      • Worked at Vercel on the CDN team after being hired through Twitter, then briefly returned to Zynga before joining Mighty under a program that allowed him to work part-time on Membrane.

      "I saw a tweet by Guillermo Rauch ... He hired me to work for Vercel ... I spent two years there as the lead in the CDN team."

      "Then I guess my last thing I did was join Mighty ... working on my startup but also working three days for them."

      • Now focusing on Membrane full-time and looking to onboard users soon.

      "So yeah, now I'm on Membrane 100%, and hoping that I can show it to the world and onboard some users in the coming week or two."

      Membrane: Concept and Vision

      • Membrane was inspired by game engines, where every entity is programmable and data is universally accessible.

      "In game development, you’re dealing with this Engine with this universe, and this universe is completely programmable."

      • Aimed at simplifying API automation and small-scale applications, particularly for personal automation.

      "It’s a place to write programs to build personal automation ... optimized for personal automation programs."

      • Membrane provides an abstraction over APIs, allowing users to interact with data and automate workflows through a graph-based system.

      "The key to Membrane is this whole concept of a graph that is the main thing that programs use to manipulate the world."

      • Designed to be highly accessible by integrating with Visual Studio Code and leveraging JavaScript/TypeScript.

      "The entire thing is built inside of Visual Studio Code ... The most used IDE is Visual Studio Code and the most used language is JavaScript."

      Durability & Orthogonal Persistence

      • Membrane implements "orthogonal persistence," ensuring program state is always durable.

      "I decided to start building what is sometimes called orthogonal persistence, which is this concept of a durable program."

      • Every Membrane program is an SQLite database, meaning all messages, state, and execution history are stored persistently.

      "Every Membrane program is actually just one SQLite database."

      • Programs execute with an event-sourcing model, where all inputs and outputs are first logged in SQLite before execution.

      "Every message that it receives, it first goes in the database and then it's processed."
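The log-then-process flow can be sketched as follows, with an in-memory array standing in for the SQLite table. Function names here are hypothetical, not Membrane's API:

```javascript
// Event sourcing sketch: every inbound message is durably recorded before any
// handler runs, so state can always be rebuilt by replaying the log.
function makeProgram(handler) {
  const log = []; // stands in for the SQLite messages table
  let state = {};
  return {
    receive(message) {
      log.push(message);               // 1. persist first
      state = handler(state, message); // 2. then process
      return state;
    },
    // Debugging/time travel: rebuild state by replaying the full log.
    replay: () => log.reduce(handler, {}),
  };
}
```

The same log is what makes "if it's not in the logs, it didn't happen" literally true: processing only ever sees messages that were already persisted.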

      • Uses Linux’s soft dirty pages for memory tracking, making it highly efficient in persisting only changed memory states.

      "I use quickjs ... and there’s a constant in the Linux kernel called Soft Dirty Pages ... only serialize the pages that actually change."

      • Future improvements include optimizing serialization using WebAssembly’s linear memory model.

      "I’m saving more data than I should, so there’s even more optimizations I can do."

      Observability & Debugging

      • Membrane prioritizes perfect observability, logging every event to enable full program introspection and debugging.

      "If it’s not in the logs, it didn’t happen."

      • Allows time-travel debugging, replaying past states and executions.

      "You can go back to when that message was received and then run the code that was available back then."

      • Aims to support snapshot-based time travel for enhanced debugging.

      "The first version I’m gonna have of that type of time travel is going to be with a snapshot that is taken every hour."

      Membrane’s Graph Model

      • Membrane’s "graph" serves as a type-safe, unified interface for APIs.

      "Everything is a node, which you can think of as an object or a scalar (string, number, JSON type)."

      • Drivers enable API connectivity, converting external APIs into Membrane’s schema and providing a consistent interface.

      "The GitHub driver has a schema ... basically it mirrors the GitHub API as a Membrane schema."

      • Pagination is abstracted away, making API traversal seamless.

      "With Membrane, you have this object that’s a one-page, and a page has a reference to the next page."
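The pagination-as-graph idea can be sketched with page objects that carry a reference to the next page, so traversal is just link-following. Shapes and names here are hypothetical, not Membrane's actual API:

```javascript
// Each page holds its items and a function resolving the next page (or null).
function makePage(items, next) {
  return { items, next };
}

// Traversal never mentions cursors or offsets: it just follows references.
function allItems(page) {
  const out = [];
  while (page) {
    out.push(...page.items);
    page = page.next ? page.next() : null;
  }
  return out;
}
```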

      • Users can mount different programs' graphs into their own, dynamically expanding their automation environment.

      "Your graph is basically the combination of all the graphs of all your programs."

      Chrome Extension & API Interfacing

      • Membrane includes a Chrome extension that recognizes API entities on webpages.

      "What it does is it asks Membrane, ‘Hey, do any of the programs under Juan’s account recognize anything on this page?’"

      • Future improvements will allow automatic driver installation when encountering unrecognized APIs.

      "Eventually, I can just offer you the option to install that driver with a click from the Chrome extension."

      • Currently requires users to provide their own API keys, but OAuth-based authentication is planned.

      "Right now, you have to bring your own keys."

      Cron & Automation Features

      • Membrane features built-in cron-like timers, which are stored in SQLite and visualized in the UI.

      "The SQLite database has a table called timers, and that table holds all scheduled actions."

      • Users can visually track when timers will execute and manually trigger actions for testing.

      "From Visual Studio Code, you can just hover on each timer and see how long until it fires."

      • Logs every timer execution, ensuring full transparency in automation workflows.

      "If it’s not in the logs, it didn’t happen."

      Potential for Expansion & Future Vision

      • Membrane’s approach is inspired by game development tooling, where objects and behaviors are always inspectable.

      "In game engines, you’re dealing with objects where you can see all their properties and control them."

      • Aims to provide a seamless developer experience, where APIs become interactable entities without custom adapters.

      "If you wanted to automate something with Twitter, you shouldn’t have to pre-install a driver."

      • Exploring self-hosting and open-source models to improve privacy and decentralization.

      "Self-hosting membrane is going to be a thing ... I think I want to make it open-source."

      • Could enable mobile implementations, particularly for interacting with on-device automation.

      "You could just access your Membrane graph from your phone."

      • Possibility of auto-generating API drivers from HAR files or OpenAPI specs.

      "There are ways to generate API specs from network traffic ... from that API spec, you can generate the driver."

      Conclusion

      • Membrane is a powerful tool aimed at making personal automation and API interaction seamless, leveraging game engine principles for maximum programmability.
      • It provides persistent execution, deep observability, and a graph-based API abstraction layer that simplifies working with external services.
      • With a focus on usability, it integrates tightly with VS Code and JavaScript while also offering innovative features like event sourcing, time travel debugging, and drag-and-drop API connections.
      • The future of Membrane includes open-source possibilities, mobile integrations, and potentially eliminating the need for manually defining API adapters.
      • It represents a new paradigm in developer tooling, where programs are durable, transparent, and universally programmable.
  8. Jan 2025
    1. Introduction and Purpose

      “I would like to tell you why fulcro is awesome and why it's much easier to learn than you might believe so we will look at what fulcro is and what it can do for you and why is it interesting...”

      • Emphasizes that the talk aims to introduce Fulcro, explain its ease of learning, and highlight its benefits.

      Speaker Background

      “So first of all who is… I've been doing back-end development since 2006 and front-end development since 2014 on and off…”

      • Establishes the speaker’s credibility with extensive development experience.

      “...I built learning materials for Fulcro beginners and I pair program with and mentor my private clients on their first Fulcro project...”

      • Demonstrates the speaker’s active role in teaching Fulcro to newcomers.

      Motivation for Fulcro

      “When I create web applications I want to be productive and I want to have fun… I don't want to have to manually track whether the data started loading or finished or failed…”

      • Highlights the desire to reduce boilerplate and tedious manual tasks.

      “I don't want to write tons of boilerplate and especially not to do that and again and again for every new type data in my application…”

      • Stresses that Fulcro removes repetitive coding patterns, enhancing developer efficiency.

      Choosing a Full-Stack Framework

      “Now there are simpler Frameworks… or you can pick a full stack framework that has all the parts you need…”

      • Explains how Fulcro’s integrated approach can be preferable to patching together multiple libraries.

      “...malleable web framework designed for sustainable development of real world full stack web applications...”

      • Defines Fulcro as a flexible system that supports complex, long-lived applications.

      Key Fulcro Capabilities

      “It can render data in the UI and it uses React, so it wraps React for that…”

      • Confirms that Fulcro uses React under the hood for rendering.

      “It can manage state… it keeps the state for you at some place… re-render the UI so it reflects that state…”

      • Describes automatic state management and reactive re-rendering.

      “It makes it easy to load data from the backend… you have full control...”

      • Emphasizes the fine-grained control over data fetching.

      “Fulcro also caches the data for you automatically and it does so in normalized form…”

      • Highlights how normalized data storage simplifies updates across the UI.

      “Fulcro has excellent developer experience for multiple reasons… the biggest is locality and navigability…”

      • Points out how Fulcro keeps relevant code together, making it easier to navigate and maintain.

      Core Principles

      1. Graph API / EQL (EDN Query Language)

      “...we use graph API instead of rest API which means that we have just a single endpoint and it's the front end which asks the back end for what data it wants by sending over a query…”

      • Simplifies data retrieval by letting the client specify exactly what it needs.

      2. UI as Pure Function of State

      “UI is pure function of state… components only ever get the data they need from their parent…”

      • Removes side effects from the rendering flow.

      3. Locality

      “...to understand the UI component I shouldn't be forced to jump over four different files… so in Fulcro a component doesn’t have only a body but also a configuration map…”

      • Co-locates component queries, rendering, and logic in one place.

      4. Normalized Client-Side State

      “...it stores that data normalized in a simple tabular form where entities contain other entities replaced with references…”

      • Ensures any update in one place is reflected throughout the UI.
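The normalization idea can be sketched in JavaScript (Fulcro itself does this in ClojureScript, driven by component queries and idents; the function and parameter names here are illustrative):

```javascript
// Store a tree of entities in normalized, tabular form: each entity lives in a
// table keyed by id, and nested entities are replaced with [table, id] references.
// `nested` maps a field name to the table its child entities belong in.
function normalize(db, table, entity, nested = {}) {
  const row = { ...entity };
  for (const [field, childTable] of Object.entries(nested)) {
    row[field] = row[field].map((child) => {
      normalize(db, childTable, child);
      return [childTable, child.id]; // replace the entity with an ident-like ref
    });
  }
  db[table] = { ...(db[table] || {}), [entity.id]: row };
  return db;
}
```

Because each entity is stored exactly once, updating `db.todo[7]` is automatically reflected everywhere a `["todo", 7]` reference appears.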

      Architecture Overview

      “...it's a full stack web framework so it has the front end and back end part… front end is Fulcro proper… the back end is Fulcro’s library Pathom…”

      • Describes the division between the Fulcro client and the Pathom-based server.

      “On the front end… we have client DB… we have a transactional subsystem… to the back end we have Pathom… as kind of adapter between the tree of data the UI wants and whatever data sources there are.”

      • Clarifies how Fulcro’s client and server components communicate via EQL queries and mutations.

      UI Rendering Process

      “...UI is a tree of components and for each component we have a query… these queries are composed up so that the root component’s query is the query for the whole page.”

      • Outlines how each component declares its data needs, culminating in a single root query.

      “...Fulcro takes this query, combines it with the client DB, and forms a tree of data that matches the query shape, then hands it off to the root to render.”

      • Demonstrates the round-trip from query to final rendered UI.

      Component Example

      “Here we can see how a Fulcro component looks in code… The most important part here is the query…”

      • Provides a code snippet showing query co-location with the component.

      “...the component also includes the queries of its child components so the parent can pass down just the needed data.”

      • Reinforces that data flows naturally down the component tree.

      Learning Fulcro

      “People have this assumption or belief that Fulcro is hard to learn but it's not…”

      • Dispels the notion of steep difficulty.

      “There are simpler frameworks that do just one thing… but you need to handle a number of tasks and to work across both front end and back end…”

      • Explains why novices might find full-stack solutions initially overwhelming.

      “You need to rewire your brain… if you come in expecting that things just work the way you expect you will be running into walls…”

      • Advises a mindset shift for those accustomed to different paradigms.

      Recommended Beginner Resources

      “...the Fulcro Developer's Guide… it describes everything in great detail but it can be overwhelming…”

      • Mentions the official documentation’s comprehensive nature.

      “...start with the do it yourself Fulcro Workshop… play with the concepts in practice and see how they work...”

      • Suggests hands-on learning as the best first step.

      “...there's this minimalist Fulcro tutorial… tries to teach you the absolute minimum amount of things you need to know…”

      • Recommends a focused tutorial that avoids overload.

      Simplicity Through Principles

      “Fulcro doesn't do any magic… its operation is straightforward and very much possible to understand…”

      • Emphasizes that Fulcro’s complexity is principled, not opaque.

      “...UI is pure function of data, standard input of data is the graph API, standard output of side effects is the transaction subsystem, and data is data, meaning queries and mutations are just data.”

      • Summarizes how Fulcro simplifies data handling, state management, and side effects uniformly.

      Demo Highlights

      “So let's have a demo… a simple Fulcro application showing todo list…”

      • Introduces a working demonstration of a to-do list in Fulcro.

      “...every side effect goes through transaction subsystem so I should see data here and I do, I see that they are loading them…”

      • Illustrates how Fulcro logs and displays all transactions for debugging.

      “I can also see the response… the data mirrors the query… if I ask for something that doesn't exist I get back empty data…”

      • Demonstrates the transparency of EQL-based queries and responses.

      Conclusion and Key Takeaways

      “Takeaways… that full stack frameworks are really useful and especially that Fulcro is really worth looking into and learning is not hard if you are a little smart about it…”

      • Concludes that Fulcro offers an approachable path to building maintainable full-stack ClojureScript applications.

      “Here are some awesome resources especially the Fulcro Community guide where you find the workshop and tutorial…”

      • Reiterates the availability of community-driven materials to support new learners.
    1. Summary of the Tech Talk on Software Development Leverage

      Speaker's Background & Context

      • The speaker has experience with nine startups, with four successes (defined as acquired or still operational).

        "I've been involved in nine startups, four successes so far, success defined as either bought by somebody else or still exists."

      • Core interests include minimal degradation over time, maximum architectural clarity, and minimal boilerplate.

        "I want to build systems that have a minimal amount of that, maximum architecture clarity... I want a small number of Core Concepts and I also want minimal boilerplate."

      • Prefers Clojure and ClojureScript due to Lisp features, a REPL, macros, full-stack capabilities, and immutable data.

        "The main things are that it's a Lisp, I've got a REPL, I've got macros, I've got full stack language immutable data and literals."

      Concept of Software Development Leverage

      • Defines leverage in software as maximizing efficiency while minimizing incidental complexity.

        "What’s the minimal amount of code I can write to build these things?"

      • Software generally consists of forms and reports, and optimizing these elements reduces complexity.

        "A lot of what we write are forms or reports essentially."

      • Critiques past attempts at UI and form abstraction (e.g., Informix 4GL, Visual Basic, Rails, Java Enterprise) as insufficient or overly complex.

        "Every kind of library on the planet trying to do the same sort of thing."

      • Identifies challenges in leverage: short levers, fragile systems, opposing mindsets, and complex structures.

        "You can have too short of a lever, the object that we're trying to move could be too big for the lever, or my strength... I could have a crowd of people who are just philosophically opposed to levers."

      Key Approaches to Leverage

      • Minimal Incidental Complexity: Reducing unnecessary complexity that accumulates over time.

        "We love minimal incidental complexity... other communities don’t even think about that."

      • Functional & Immutable Data Models: Advocates for a pure functional approach to state management and UI rendering.

        "The state of the world is some immutable thing, initialized somehow, then I walk from step to step running some pure function."

      • Generalized Pure Functions: Aiming for functional purity while acknowledging that some dynamism is needed.

        "To me, you're starting by breaking the ideal. You're saying, 'I’m not really going to use pure functions for that.'"

      • Component-Based Rendering: Prefers data-driven UI, minimizing reliance on React’s event-based state management.

        "A pure function, a render of some sort of transform of the world."

      Core Abstractions for Software Leverage

      1. Entity-Attribute-Value (EAV) Model: A flexible, normalized data structure for representing application state.

         "The first one is just the power of entity attribute value."

      2. Idents (Universal Entity Identifiers): Unique tuples ([type id]) for referencing entities.

         "The kind allows you to prevent collisions... useful semantic information."

      3. Graph Queries: Uses EDN-like queries to efficiently pull and update data.

         "Attach logic to graph queries that say when you get the result of this query, here's how you normalize it."

      4. Full-Stack Datified Mutations: CQRS-like abstractions over side effects and state transitions.

         "CQRS kind of idea... I’m going to make an abstract thing that says what I want to do."
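
      The interplay of idents and normalization described above can be sketched outside Fulcro. The following is a minimal Python illustration (all names hypothetical; this is not Fulcro's API): entities live in tables keyed by [type id]-style ident tuples, references are stored as idents, and merging new data is a single update at one ident.

```python
# Sketch of ident-keyed, normalized EAV-style state (hypothetical names,
# not Fulcro's actual API). References between entities are stored as
# ident tuples (type, id) rather than nested copies.

state = {
    "person": {
        1: {"person/id": 1, "person/name": "Ada",
            "person/address": ("address", 10)},   # a reference, stored as an ident
    },
    "address": {
        10: {"address/id": 10, "address/city": "London"},
    },
}

def merge_entity(state, ident, attrs):
    """Merging new data is a dict update at one ident; every UI
    location that references this ident sees the change."""
    kind, eid = ident
    state.setdefault(kind, {}).setdefault(eid, {}).update(attrs)

def denormalize(state, ident, query):
    """Walk a graph query (a list of attributes) from an ident,
    recursively following ident-valued attributes."""
    kind, eid = ident
    entity = state[kind][eid]
    result = {}
    for attr in query:
        value = entity.get(attr)
        if isinstance(value, tuple):  # a reference: recurse into the target
            target_kind, target_id = value
            value = denormalize(state, value,
                                list(state[target_kind][target_id].keys()))
        result[attr] = value
    return result

# Updating the address in one place...
merge_entity(state, ("address", 10), {"address/city": "Cambridge"})
# ...is visible through any query that reaches it via the ident.
view = denormalize(state, ("person", 1), ["person/name", "person/address"])
```

Because the state is normalized, there is exactly one copy of each entity, so merges never have to hunt down duplicated data.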

      Emergent Benefits of This Approach

      • Normalized State Representation: Enables automatic merging of data, reducing complexity in state updates.

        "This gives me on my world, my immutable World in that diagram of kind of our idealized application."

      • Minimizing UI Boilerplate: Using annotated queries and data-driven components reduces manual UI code.

        "A UI location-specific way to annotate my UI... initial state is just a mirror of that."

      • Abstracting Side Effects: Remote calls and transactions become well-structured, reducing ad-hoc state management.

        "Transact things... processing system talks to remotes for side effects, talks to the database for local changes, and triggers renders."

      State Machines for Process Control

      • Advocates state machines for handling application logic, avoiding scattered imperative code.

        "Very often, process is just peppered around everywhere... having a state machine that abstracts over this is powerful."

      • Uses state charts (Harel state machines) for complex workflows like authentication.

        "State charts are way better when your state machine gets large."
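
      The "process peppered around everywhere" point can be made concrete with a tiny table-driven machine. This is an illustrative Python sketch (invented states and events for a login flow, not Fulcro's UI state machine API): all transitions live in one data structure instead of being scattered through event handlers.

```python
# Minimal table-driven state machine (hypothetical states/events, not
# Fulcro's actual API). The whole process is one inspectable table.

TRANSITIONS = {
    ("idle",     "SUBMIT"):  "checking",
    ("checking", "SUCCESS"): "logged-in",
    ("checking", "FAILURE"): "error",
    ("error",    "SUBMIT"):  "checking",
}

def step(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# A failed login followed by a successful retry:
s = "idle"
for ev in ["SUBMIT", "FAILURE", "SUBMIT", "SUCCESS"]:
    s = step(s, ev)
# s is now "logged-in"
```

State charts extend this idea with hierarchy and parallel regions, which is why they scale better once the flat transition table grows large.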

      Fulcro & RAD (Rapid Application Development)

      • Fulcro: A ClojureScript-based framework built on these principles.

        "How do I simplify F? How do I get these core pieces generic enough to reuse?"

      • RAD: Built to automate UI and backend generation, minimizing redundant work.

        "I really wanted to minimize the boilerplate right... tired of handwriting schema."

      • Plugins for Databases, Forms, Reports, and APIs: Reduces custom implementation for common application patterns.

        "Datomic support gives me my network API and integration with Datomic in 1900 lines of code."

      Key Takeaways

      • Graph-based, normalized application state leads to better leverage and scalability.
      • Functional purity where possible, and controlled side effects when necessary.
      • Automatic UI and backend generation through metadata and introspection.
      • Composable, small-core abstractions allow flexibility without unnecessary complexity.

      "A very small number of Core Concepts... it's pluggable, you can escape from everything... it's just an annotated data model."

      This approach significantly reduces the long-term maintenance cost of applications by emphasizing reusability, composition, and functional principles.

    1. Summary of the Talk on the Future of CMS, No-Code, Low-Code, and AI-Generated Applications

      Evolution of CMS and No-Code Tools

      Traditional CMS:

      "Back in the day it was like WordPress... the original web where we would write code and then we would just push it up to servers."

      CMS emerged to allow non-developers to contribute to web content without coding.

      No-Code Tools:

      "No code is like your drag and drop GUI... Webflow or whatever."

      Introduced drag-and-drop interfaces for broader accessibility, with pros and cons in usability.

      AI in No-Code:

      "Fast forward even further we've got AI coding right... now a person can just make an app."

      AI models like Claude 3.5 enable app generation with minimal developer intervention.

      Current No-Code/Low-Code AI Tools Landscape

      Key Tools in the Market:

      "Let's create the definitive list here... Cursor, Bolt, Lovable."

      Cursor is for developers; Bolt and Lovable cater to non-developers with different strengths.

      Strengths and Weaknesses:

      "Bolt is a great boilerplate generator... Lovable is great if you want ShadCN styling."

      Developers prefer Bolt for flexibility; Lovable is preferred for pre-styled design systems.

      Challenges with AI-Generated Code

      Integration Issues:

      "It's not your existing code base... you need to use your components, design system, and backend logic."

      AI-generated code often exists in isolation, making integration difficult for enterprise use.

      Code Quality Concerns:

      "Engineers are not going to want a pull request by a non-engineer."

      Quality control and maintainability remain significant barriers.

      Customization and Precision:

      "Webflow is hard to use... but it gives you 100% precision control."

      While AI provides convenience, fine-grained control is still preferred by professionals.

      Future of AI-Driven Development

      Combining AI with Structured CMS-like Workflows:

      "Ideally, we have something like a headless CMS where we can make updates over API."

      Future solutions should enable AI updates via APIs while maintaining design consistency.

      Ideal Workflow Vision:

      "In an ideal world, we can be editing with prompts and visually."

      The goal is a hybrid model with AI-driven automation and manual precision controls.

      AI-Based Iteration and Optimization:

      "AI should listen to your customers... iterate really fast."

      Faster feedback loops and continuous optimization through AI experimentation.

      Technical Approaches to Solving Challenges

      Meta's React Blocks:

      "What React blocks let developers do is a backend dev in Python can code up a React UI."

      An approach that allows dynamic UI changes without shipping new native app versions.

      Mitosis Framework:

      "Mitosis is a project that explores transpilation and visual manipulation."

      Enables converting JSX into structured JSON for flexible rendering and AI-based updates.
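
      The appeal of a JSON intermediate form can be sketched with a toy example. The node shape below is invented for illustration (it is not Mitosis's actual format): once UI is plain data, a visual editor, an API call, or an AI tool can rewrite the tree, and a renderer turns it into any target.

```python
# Hypothetical sketch of UI-as-structured-JSON (node shape invented,
# not Mitosis's actual format). The tree is plain data, so tools can
# manipulate it; rendering is a separate, mechanical step.

node = {
    "tag": "div",
    "props": {"class": "card"},
    "children": [
        {"tag": "h1", "props": {}, "children": ["Hello"]},
    ],
}

def render_html(n):
    """Render a JSON UI node (or a text leaf) to an HTML string."""
    if isinstance(n, str):
        return n
    attrs = "".join(f' {k}="{v}"' for k, v in n["props"].items())
    inner = "".join(render_html(c) for c in n["children"])
    return f"<{n['tag']}{attrs}>{inner}</{n['tag']}>"

# render_html(node) == '<div class="card"><h1>Hello</h1></div>'
```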

      Code-Driven Visual Editing:

      "SwiftUI allows updating code with visual previews and vice versa."

      Bidirectional code editing is a possible future solution but is still complex.

      Current Limitations and Considerations

      Performance and Feasibility Issues:

      "When I had Google bots crawling my AI-generated site, I got a $4,000/day Anthropic bill."

      Generating content in real-time is currently too expensive at scale.

      Security and Compliance Risks:

      "Dynamic code delivery is ripe with security challenges."

      Any AI-driven solutions must consider performance, security, and governance.

      Key Use Cases and Applications

      Prototyping vs. Production:

      "Phenomenal prototyping tools, but moving to production is challenging."

      AI tools excel in concept validation but require extensive refinement for production.

      Personalization Opportunities:

      "The AI could automatically scale things up or down based on performance."

      Future possibilities include hyper-personalized user experiences.

      Conclusion and Outlook

      Near-Term Expectations:

      "Webflow and Framer will likely add more AI features over time."

      Existing players are expected to incorporate AI capabilities gradually.

      Long-Term Potential:

      "AI tools will eventually iterate and personalize dynamically based on user input."

      The convergence of AI, CMS, and design systems may redefine how software is built.

      This summary captures the essence of the speaker's discussion, highlighting key concepts, industry trends, challenges, and possible future developments in the AI-powered CMS and no-code/low-code space.

    1. Summary of JavaScript Frameworks - Heading into 2025 by Ryan Carniato

      General Reflections on 2024

      The pursuit of simplicity hasn’t simplified web development:

      "The quest for simplicity hasn't resulted in making web development simpler."

      Despite advancements, the overall complexity of web development persists.

      Economic pressures have led to a cautious approach:

      "Global economy tightening budgets and keeping solutions on the safe path."

      The industry has recognized that complex problems have no silver bullet solutions.

      A call for reflection and reassessment:

      "It is a sobering thought but it gives me hope in 2025 that we can take some time and re-evaluate."

      Returning to fundamentals has shown the need for a better balance.

      The Server-First Movement

      Resurgence of server-first frameworks:

      "Making things 'server-first' has been the narrative over the last 5 years in the front end."

      New meta-frameworks (SvelteKit, Astro, Remix, SolidStart, Qwik) and upgrades (Next, Nuxt).

      Tension between SPA and MPA models:

      "SPA-influenced isomorphic approaches up against MPA-influenced split-execution approaches."

      New technologies such as Out-of-Order Streaming and Server Islands emerged.

      Complexity in feature integration:

      "When you assemble all these features, things are not so simple anymore."

      It remains uncertain whether these advancements are truly solving core issues.

      Return to traditional server approaches:

      "Not wanting to wade through this mess, has led the conversation back to more traditional server approaches."

      Server-side rendering (SSR) is being reconsidered, despite its lack of novelty.

      Compilation as a Core Component

      The role of compilation in modern development:

      "Compilation is an ever-present aspect of JavaScript development."

      It addresses shortcomings, improves performance, and introduces new capabilities.

      React and Svelte’s contrasting compiler approaches:

      "On one side we have the React Compiler... on the other, you have Svelte 5 Runes."

      React optimizes re-renders, while Svelte adopts a fine-grained signals model.

      Increased complexity due to compiler-driven optimization:

      "Both choices come at the expense of increased complexity in tooling."

      The trade-offs between simplicity and capability continue to evolve.

      AI and Development Tools

      AI's growing role in developer tooling:

      "AI's impact on JavaScript frameworks themselves is still minimal."

      While AI aids prototyping, it has yet to revolutionize core development.

      AI-assisted performance optimization:

      "MillionJS developer Aiden Bai got our attention again with React Scan."

      Tools now analyze apps for re-render issues, though challenges remain.

      Shift towards integrated AI support:

      "Supporting tools rise up to meet that."

      Projects like VoidZero suggest the future of integrated AI development workflows.

      Emerging Trends for 2025

      Pendulum swinging back to SPA-friendly approaches:

      "We already have started seeing some of the swing back of the pendulum."

      Frameworks like SvelteKit and SolidStart now offer SPA modes.

      Local-first and sync engine technology gaining traction:

      "How that is to manifest itself is still left to be seen."

      Sync-first approaches may play a larger role in future web applications.

      Slow and steady adoption of stable frameworks:

      "Both Vue and Angular are frameworks I'd have my eye on this next year."

      Reliable, well-supported frameworks remain attractive in uncertain times.

      Challenges in adopting signals-based architectures:

      "Developers are starting to understand the depths of tradeoffs present."

      While signals improve performance, they come with complexity and learning curves.

      Final Reflections

      No major technological leaps anticipated in 2025:

      "I'm not predicting some big technology leap in the next 12 months."

      The focus is on refinement and thoughtful development.

      Complexity remains a persistent challenge:

      "We live in a world full of complexity and that doesn't appear to be changing."

      Balancing complexity and usability is an ongoing struggle.

      Encouragement to tackle interesting problems:

      "Between you and me, this is the type of environment I thrive in."

      Developers are encouraged to embrace problem-solving in a complex ecosystem.

      This summary captures the key insights from the article while preserving clarity and depth, offering a comprehensive understanding of the current state and future outlook of JavaScript frameworks.

    1. Below is a concise overview of the key concepts in the article “How Real-Time Materialized Views Work with ksqlDB, Animated.” It explains:

      1. What Real-Time Materialized Views Are

         • A real-time materialized view is a continuously updated “pre-aggregated” or “read-optimized” result of incoming streaming data.
         • Instead of recalculating the entire view on demand (as in many traditional databases), stream processing incrementally updates the view with each new event (the “delta”).

      2. How ksqlDB Maintains These Views

         • Continuous Queries: When you write a SQL-like query in ksqlDB (e.g., CREATE TABLE ... SELECT ... FROM readings GROUP BY ... EMIT CHANGES;), ksqlDB creates a persistent query that runs forever, reading new events from Kafka topics and updating the view.
         • Incremental Updates + Changelog: As ksqlDB updates the materialized view in its local state store (RocksDB), it also emits a new record to a changelog topic in Kafka that captures the change.
           • This changelog topic is essentially the “audit log” of every update.
           • The local RocksDB store is fast but treated as transient; changelog topics in Kafka provide durability and fault tolerance.

      3. Push vs. Pull Queries

         • Pull Queries ask for the current state of the materialized view at the moment you run the query (e.g., SELECT * FROM avg_readings WHERE sensor=...;).
         • Push Queries subscribe to changes as they happen (e.g., SELECT * FROM avg_readings EMIT CHANGES;). You get a continuous stream of updates whenever a new change arrives.

      4. RocksDB as the Local Store

         • Each partition of the input stream(s) to a ksqlDB query is associated with its own local RocksDB instance.
         • RocksDB stores the current state needed for aggregations, joins, etc.
         • Because data is partitioned, all rows with the same key end up on the same partition (and thus the same RocksDB instance).

      5. Automatic Repartitioning

         • If your grouping key is not the same as the original Kafka key, ksqlDB must shuffle data so that rows with the same group key end up on the same partition.
         • This shuffle is automatically handled by creating a *-repartition topic.
         • If your original keys are already aligned with the grouping columns, ksqlDB skips this shuffle to save I/O.

      6. Fault Tolerance via Changelogs

         • If a ksqlDB node dies, a new node can rebuild the materialized view by replaying the changelog from Kafka.
         • Changelog topics use log compaction, which removes older updates to each key, keeping only the latest.
         • This keeps replay time manageable (rather than applying every single historical update).

      7. Latest-by-Offset Aggregations

         • Besides sum, min, max, or average, ksqlDB also supports “latest by offset” to store just the most recent value for each key, effectively creating a “recency cache.”
         • Example:

           ```sql
           CREATE TABLE latest_readings AS
             SELECT sensor,
                    LATEST_BY_OFFSET(area) AS area,
                    LATEST_BY_OFFSET(reading) AS last
             FROM readings
             GROUP BY sensor
             EMIT CHANGES;
           ```

         • This ensures the table always reflects the last known value for each key (based on Kafka offset).
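
      The incremental-update ("delta") idea can be sketched in a few lines of Python. This is an illustration of the technique, not ksqlDB's implementation: an average view keeps (sum, count) per key and applies each event as a delta, so serving the average never rescans history.

```python
# Illustrative sketch of incremental view maintenance (not ksqlDB
# internals). The "view" keeps (sum, count) per sensor; each new event
# is applied as a delta instead of recomputing over all past events.

view = {}  # sensor -> (running_sum, count)

def apply_event(view, sensor, reading):
    """Apply one event's delta and return the updated (key, average)."""
    s, c = view.get(sensor, (0.0, 0))
    view[sensor] = (s + reading, c + 1)
    # A real system would also emit this updated row to a changelog.
    return sensor, view[sensor][0] / view[sensor][1]

apply_event(view, "a", 10.0)   # running average for "a": 10.0
apply_event(view, "a", 20.0)   # running average for "a": 15.0
apply_event(view, "b", 5.0)    # running average for "b": 5.0
```

A pull query against this view is just a dictionary lookup, which is why pre-aggregated views answer so quickly.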

      Why This Matters

      • Fast Queries: Because the materialized view is already “pre-aggregated,” queries against it are extremely fast—no need to scan or recalculate everything from scratch.
      • Real-Time Updates: The view is updated continuously as new data arrives, so you always have a near-real-time representation of what is happening.
      • Scalable & Fault-Tolerant: Using Kafka’s partitions and log compaction for changelogs, ksqlDB scales horizontally (across multiple nodes) and recovers state quickly when nodes fail.
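
      The compaction-and-replay mechanism behind the fault-tolerance claim can be sketched as follows (an illustration of the technique, not Kafka's implementation): compaction keeps only the latest record per key, so replaying the compacted log reproduces the same final table state with far fewer records.

```python
# Illustrative sketch of changelog compaction and state rebuild (not
# Kafka's implementation). Each record is (key, value); later records
# supersede earlier ones for the same key.

changelog = [("a", 1), ("b", 7), ("a", 2), ("a", 3), ("b", 8)]

def compact(log):
    """Keep only the last value written for each key."""
    latest = {}
    for key, value in log:
        latest[key] = value
    return list(latest.items())

def rebuild(log):
    """A restarted node replays the log into a fresh local store."""
    store = {}
    for key, value in log:
        store[key] = value
    return store

# Replaying the compacted log yields the same state as replaying every
# historical update, which is what keeps recovery time manageable.
assert rebuild(compact(changelog)) == rebuild(changelog)
```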

      Further Resources

      • Try It Out
      • The ksqlDB quickstart is a straightforward way to experiment locally.
      • Once it’s running, you can execute the code examples in the article to see real-time materialized views in action.
      • Next Steps
      • Deep dive into ksqlDB’s fault tolerance and scaling model (i.e., how queries distribute across clusters).
      • Explore additional stream processing patterns such as windowed aggregations for time-based summaries.
      • Learn how joins work between tables and streams in ksqlDB (similar incremental update logic, but with different partitioning considerations).

      In essence, real-time materialized views in ksqlDB let you maintain continually up-to-date “snapshots” of streaming data. By storing incremental results in a local state store and capturing updates in a Kafka changelog, ksqlDB can serve extremely fast queries, recover quickly from failures, and scale out for large data volumes.

    1. Introduction and Context

      • The speaker acknowledges the significance of the event, as it coincides with the 25th anniversary of Smalltalk's creation.

        "This conference on this day is pretty much in the epicenter of the 25th anniversary of Smalltalk."

      • The talk is not intended to be historical but rather to provide insights into the evolution and current relevance of object-oriented programming (OOP).

        "I don't want to I'm not going to give a historical talk because I finally discharged those obligations."

      Early Computing Insights

      • Early programs in the 60s were mechanical and small, akin to interlocking gears, with a heavy influence from mathematics.

        "Programs were quite small and they had a lot in common with their mathematical antecedents."

      • Scaling challenges are illustrated using the analogy of dog houses vs. cathedrals, where naïve scaling leads to collapse.

        "When you blow something up by a factor of 100, it gets about a factor of 100 weaker."

      Object-Oriented Programming (OOP) and Abstraction

      • OOP was conceived to manage growing complexity, emphasizing architecture over raw materials.

        "Part of the message of OOP was that as complexity starts becoming more important, architecture is always going to dominate material."

      • The Air Force’s early approach to data abstraction in 1961 is highlighted as a precursor to OOP concepts.

        "Somebody came up with the following idea: on the third part of the record on this tape, we'll put all of the records of this particular type."

      Limitations of Current Systems

      • Criticism of modern systems, particularly HTML, which reintroduces archaic dependencies on browsers.

        "HTML on the internet has gone back to the Dark Ages because it presupposes that there should be a browser that should understand its formats."

      • Object-oriented thinking should align more with biological models rather than mechanical systems to achieve scalability and maintainability.

        "Computers should realize their destiny by learning from biology."

      Encapsulation and Polymorphism

      • Encapsulation is key to building scalable systems; the internal state of an object should not affect the overall system.

        "You must must must not allow the interior of any one of these things to be a factor in the computation of the whole."

      • Polymorphism enables objects to act as flexible service providers, facilitating interoperability across systems.

        "Polymorphism allows us to define classes of these services and interact in a general way."

      Challenges with Modern Programming Approaches

      • C++ and Java fail to capture the true essence of OOP, focusing too much on old paradigms rather than leveraging new metaphors.

        "The most pernicious thing about languages like C++ and Java is that they think they're helping by looking like the old thing."

      • The lack of metaprogramming in Java is seen as a significant hindrance to its evolution and adaptability.

        "When I looked at Java I thought, my goodness, how could they possibly hope to survive without a meta system?"

      Lessons from Biological Systems

      • Biological systems, such as bacteria and cells, provide a model for growth and adaptability that software should aspire to replicate.

        "Cells not only scale by factors of a hundred but by factors of a trillion."

      • The success of the internet is attributed to its ability to replace every component over time without needing to stop.

        "The internet has expanded by a factor of 100 million and has never had to stop."

      Future of Object-Oriented Systems

      • A move towards universal interoperability through unique identifiers such as URLs and IPs for every object is necessary.

        "Every object should have a URL because what the heck are they if they aren't these things?"

      • Future systems should be designed with the ability to dynamically evolve without requiring downtime.

        "Growing systems without shutting them down is crucial for future success."

      Critique of Programming Culture

      • The programming community tends to take single perspectives too seriously, leading to dogmatism rather than evolution.

        "Taking single points of view and committing to them like religions is the biggest challenge in our field."

      • Smalltalk’s success was not in its syntax but in its ability to continuously evolve and support innovation.

        "The thing I’m most proud of is that Smalltalk has been so good at getting rid of previous versions of itself."

      Conclusion

      • The speaker encourages a mindset of continuous improvement and pushing current systems to achieve their full potential.

        "Just play it grand, and always play your systems more grand than they seem to be right now."


    1. Conal Elliott's work on notational design and his influential papers - Conal earned his PhD at Carnegie Mellon University in the '90s under Frank Pfenning, working on higher-order unification. - Conal has devoted his life to thinking about and refining graphical computation and the tools behind it, and has published influential papers on functional programming and notational design.

      Living in a forest setting with deep connection to nature. - Conal lives on 20 acres next to his family's 60 acres and has a deep emotional connection to the place because of his parents' presence. - He sees a connection between nature and technology, highlighting the non-sequential nature of computation and neurology.

      We are in a pre-scientific age of thinking about computation. - Humans have created thinking organisms that think systematically, leading to computation. - We are in an awkward phase of thinking about computation in a clumsy and pre-scientific way.

      Humans are driven by curiosity to understand the universe. - We have a limited ability to perceive the universe due to our evolutionary constraints. - Through the advancement of science and technology, we have developed tools like telescopes, microscopes, high-speed cameras, and time-lapse to enhance our perception.

      Elegance and wonder in computer science - Elegance is the deepest value in computer science, inspiring a sense of play and wonder. - Computer science is in a deeply inelegant phase, but there is potential for Elegance and Beauty in the field.

      Elegance as a guiding value in theoretical physics - Elegance guided Einstein in developing the special and general theory of relativity. - Modern civilization is built on general relativity and quantum physics; GPS system corrects for relativity.

      Elegance and simplicity in formalizing concepts in computer science - Elegance and simplicity in formalizing concepts are related - People often mistake familiarity for simplicity in programming

      Academia today lacks time for critical thinking - Focused on churning out papers and credentials - Issue of education accessibility affecting teaching quality

      Semantics is crucial in programming - Meanings are called semantics - The relationship between a program and its meaning is important

      Dana Scott answered the crucial question of the mathematical meaning of the lambda calculus in 1970. - The lambda calculus was originally intended for encoding higher-order logic and quantifiers, not for programming. - Peter Landin realized the potential of the lambda calculus for programming and introduced the concept of executing it on a machine.

      Languages convey meanings, computation looks at meanings. - Languages and programming languages serve the same purpose: to convey meanings. - Computation and technological tools help us observe and understand meanings in various forms, from stars and quasars to microorganisms and atoms.

      Euclid revolutionized geometry with his conceptual approach - Introduced a new way of thinking about geometry with axioms and postulates - Plato's influence on the idea of mathematical space and its relation to the physical world

      Mathematics describes real truth and possibly taps into platonic truth. - Platonist perspective considers mathematics as a way to describe truth beyond us. - Success of mathematical fantasy or story inspires acting as if tapping into platonic truth.

      Ancient beliefs about movement of stars and planets - Stars and planets thought to move in circular paths due to perfection/God concept - Some stars behaved differently, known as 'The Wanderers' or planets

      Kepler discovered planetary laws - Planets move in an ellipse, not a circle - Kepler's explanation lacked why planets move in an ellipse

      Scientific theories evolve with enhanced observations - Newton's theory successful until discrepancies discovered in the 20th century - Einstein's theory validated through observations of planet Mercury during solar eclipse

      Scientific exploration is an unending journey - Science aims to understand what we don't know - In academia, the system often fails to reward wonder and not knowing

      Denotational semantics helps distinguish beauty and elegance from complexity - Beauty or elegance in theory is described precisely in terms of mathematics - Fortran, led by John Backus, introduced expressions, advancing from Von Neumann style sequential programming

      Functional programming emphasizes expressions over statements - Fortran blends statements and expressions but still leans towards statements - Functional programming eliminates everything except expressions

      Hardware limitations led to sequential model prototyping - John Von Neumann's experiment from 1947 is still relevant in 2022 - John Backus discussed fundamental problems in computing in his Turing Award lecture

      Von Neumann bottleneck affects computer performance. - Physical bottleneck slows computers due to high heat generation. - Mental bottleneck limits brain capacity and mental efficiency.

      Breaking out of the Von Neumann bottleneck - The Von Neumann style of programming forces us to think small and is fundamentally sequential and mechanistic. - The lecture emphasizes the importance of thinking in larger, powerful notions and focusing on functions rather than words.

      Functions as building blocks for knowledge - Functions built from other functions allow for scalability and creation of complex systems - Importance of denotational semantics in designing new languages rather than just explaining existing ones

      Backus emphasized fixing defects and learning from mistakes. - Using denotational semantics reveals detailed defects in existing languages. - Advancement in computer science involves replacing outdated concepts like go-to with structured and functional programming.

      The cost of focusing on education and progress is losing the ability to make significant advances in science. - The speaker expresses disappointment with the impact of Academia on progress and science. - The speaker remains dedicated to truth and beauty, advocating for the importance of denotational semantics in making aesthetic distinctions.

      Ideas are expressions of beauty or ugliness which give deep insights across fields. - Denotational semantics serves as a reliable guide to beauty and elegance in ideas. - Beauty and elegance are valuable guides for understanding the universe and computation.

      Passion for mathematics and computer graphics - Attended undergrad in math at UC Santa Cruz with a small group of math students in a nurturing environment - Transitioned to grad school at Carnegie Mellon for computer science and pursued computer graphics due to love for geometry and math

      Had to change plans at Carnegie Mellon - Arrived at CMU to study computer graphics, but found out the people I wanted to study with had left - Discovered a group focusing on reasoning about programs, which became the focus of my PhD work

      Transition to computer graphics and involvement in group projects - Worked with notable advisors like Dana Scott, John Reynolds, and Frank Pfenning - Focused on exploring the next advancements in programming interfaces and data structures at Sun Microsystems

      Introduction to denotational semantics in understanding language meanings - Studied denotational semantics under Stephen Brookes and Dana Scott in grad school, leading to a revelation on language meanings - Believes language meanings should be independent of specific machines and analyzed compositionally for better understanding

      Graphics programs are sequential commands organizing video memory for visual output. - Graphics programs are different from traditional software due to their focus on organizing instructions for video memory. - Alternative design paradigms focus on conveying meanings and inventing tools to help users view desired content through a computer.

      Designing a language library for geometry and colors - Creating a composable vocabulary of geometry and colors, similar to modern linguistic frameworks - Developing a rich system of types for three-dimensional geometry and adding a time component to the design

      Rendering graphics offscreen to build up incrementally for a correct answer. - Rendering offscreen allows showing previous true things before replacing them incrementally. - Temporal discreteness in computer graphics breaks compositionality and introduces fundamental bugs.

      Compositional models with approximations lose accuracy when composed - Compositional models incorporating approximations result in gross inaccuracies upon composition - Functional reactive programming involves composing before approximating for accurate results

      Outline fonts are resolution independent - Outline fonts are continuous and do not have pixels when zoomed in - Switching from bitmap graphics to outline fonts improves efficiency and clarity

      Transition from discrete to continuous programming in space and time. - Examples of continuous programming in space like fonts, 2D and 3D geometry, vector graphics. - Applying continuous programming principles to time requires a fundamental shift in implementing and describing things that vary with time within the Von Neumann model.

      John Reynolds introduced the idea of using functions from the reals instead of sequences for solving time interpolation problems - This approach helped in resolving issues with interpolations and time manipulation - Continuous time modeling was found to be more effective than discrete modeling for things that vary with time
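
      The shift described above, modeling a time-varying value as a plain function from real-valued time, can be sketched in a few lines. This is an illustrative sketch only; names like `lift2` and `integral_like` are invented here and belong to no particular FRP library:

```python
def constant(v):
    # A behavior that ignores time.
    return lambda t: v

def lift2(f, a, b):
    # Compose two behaviors pointwise; no sampling happens here.
    return lambda t: f(a(t), b(t))

def integral_like(a, t, steps=10_000):
    # Discretize only at the very end, when a number is finally needed.
    dt = t / steps
    return sum(a(i * dt) for i in range(steps)) * dt

time = lambda t: t  # the identity behavior: "time itself"

position = lift2(lambda x, y: x + y, constant(1.0), time)
print(position(2.0))                       # -> 3.0
print(round(integral_like(time, 1.0), 3))  # approximately 0.5
```

Because behaviors stay symbolic functions until sampled, composition happens in continuous time and discretization is deferred to the last step: compose first, approximate last.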

      Functional programming requires a shift from loops to lazy lists - Functional programming involves describing the mathematical model behind the data manipulation - The common assumption that input and output data must have the same nature is wrong in functional programming

      Functional reactive programming is about understanding concepts in the simplest, most elegant compositional terms. - It emphasizes denotational semantics, where types have a mathematical model. - It focuses on fully explaining operations in terms of the model, independent of implementation.

      Programming expresses ideas with clear understanding before implementation - Category Theory is appreciated for its precise and elegant tools in mathematics - Functional Reactive Programming lacks denotational and compositional principles, leading to fundamental misunderstandings in programming

      Algebraic patterns like monoids and distributivity are powerful for organizing reasoning - There are different types of monoids like addition and multiplication each with their own properties - Multiplication distributes over addition and zero plays a special role in this interaction
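
      The monoid and distributivity patterns mentioned here can be made concrete with the two familiar monoids on the integers. A small sketch (the `mconcat` helper is our own name, echoing Haskell's):

```python
def mconcat(op, identity, xs):
    # Fold a list with a monoid's operation, starting from its identity.
    out = identity
    for x in xs:
        out = op(out, x)
    return out

add = lambda x, y: x + y   # the (+, 0) monoid
mul = lambda x, y: x * y   # the (*, 1) monoid

print(mconcat(add, 0, [1, 2, 3]))   # -> 6
print(mconcat(mul, 1, [2, 3, 4]))   # -> 24

a, b, c = 3, 4, 5
assert (a + b) + c == a + (b + c)      # associativity in both monoids
assert (a * b) * c == a * (b * c)
assert a * (b + c) == a * b + a * c    # multiplication distributes over +
assert a * 0 == 0                      # zero annihilates under *
```

The same `mconcat` works for any monoid (strings, lists, max with negative infinity), which is exactly the reuse the annotation is pointing at.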

      Algebra and category theory provide reusability and reasoning in mathematics and programming. - Algebra allows for reasoning that is parameterized and applicable to different mathematical scenarios. - Category theory generalizes various algebraic concepts and is important for correctness in programming.

      The complexity of Python programs and limited cognitive abilities can lead to a lack of understanding. - Options include quitting the profession or divorcing what you've seen from what you do. - Another option is switching to a language with simple semantics, such as purely functional or denotative languages.

      Denotative programming allows for proving program correctness. - Denotative programming enables answering questions about the multiple meanings of programs. - Functional programs can have meanings within a cartesian closed category.

      Tropical semi-rings relate to timing analysis of parallel computations. - Understanding operations of plus and max in relation to semi-rings. - Realization of dot products and matrix multiplication pattern in timing analysis.
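
      A small sketch of the observation, assuming the usual max-plus ("tropical") reading in which sequential latencies add and parallel joins take a max. The gate names and latency numbers are made up for illustration:

```python
def tropical_dot(row, col):
    # "Dot product" in the max-plus semiring: (+ -> max), (* -> +).
    # max over i of (row[i] + col[i]) is worst-case path delay.
    return max(r + c for r, c in zip(row, col))

# Latencies (ns, invented) from an input to two internal gates,
# and from those gates to one output:
to_gates = [2.0, 5.0]    # input -> gate A, input -> gate B
to_output = [4.0, 1.0]   # gate A -> out,  gate B -> out

# Worst-case input-to-output delay: max(2 + 4, 5 + 1) = 6
print(tropical_dot(to_gates, to_output))  # -> 6.0
```

Chaining such "dot products" over layers is matrix multiplication in the tropical semiring, which is why timing analysis composes like linear algebra.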

      Timing analysis can be described compositionally using the language of categories - Realized that parallel and sequential composition are the fundamental building blocks of functional computation - The typed lambda calculus has more than one model, and the mathematical values it describes can have different interpretations

      Realizing the connection between Haskell and lambda calculus led to successful compilation to hardware. - Haskell translates to a small core lambda calculus - Interpreting lambda calculus in cartesian closed categories enabled successful compilation to hardware

      Exploring unconventional categories for computation - Discovering powerful ideas by compiling categories since 1980 - Seeking beauty in solutions to drive innovation and never settling for unsatisfactory answers

      Geometry and the introduction to proof changed my life - The systematic way of exploring what is true and growing knowledge in geometry was a life-changing concept for me. - Discovering computers at Lawrence Hall of Science through the Star Trek Club in high school eventually led me to computation.

      Introduction to programming through games on teletypes - Experiencing games and printing out results on rolls of paper as souvenirs - Discovering source code hidden in the printed paper, initiating an interest in programming

      Started college with no computer science department, emphasized logic and enjoyed math contests - Computer science classes offered in math department or College of Engineering - Discovered talent and passion for math despite discouragement from elementary school teacher

      The origin of computer science in universities and its impact on its development - Initial classes were labeled as Computer Science or logic, sparking a debate on department placement. - Placement in engineering rather than mathematics influenced the practical nature of computer science education.

      Transition from imperative to functional programming - Discovered Haskell as a better alternative to imperative programming languages - Applied Haskell in programming for 25+ years and mentorship in hardware design for machine learning

      Realizing the power of category theory in simplifying automatic differentiation - Changed vocabulary to be more symmetric with respect to composition - Describing automatic differentiation in the language of categories simplifies and generalizes it
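
      One way to see the compositionality claim is forward-mode automatic differentiation with dual numbers, where the chain and product rules fall out of composing operations. This is a generic sketch of the idea, not the categorical formulation from Elliott's paper:

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, value, deriv):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        # Product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def derivative(f, x):
    # Seed the input with derivative 1 and read the derivative back out.
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + x) at x = 3 is 2*3 + 1 = 7
f = lambda x: x * x + x
print(derivative(f, 3.0))  # -> 7.0
```

Nothing here inspects `f`'s source: differentiation happens purely by composing differentiable pieces, which is the structural point the categorical account generalizes.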

      Denotational design is key for software implementation - Haskell alone was not effective for teaching denotational design - Inner guidance essential for understanding and using Haskell effectively

      Struggling with teaching denotations and homomorphisms in programming - Encountered issues with students not understanding correct implementations - Wanted compiler to indicate errors instead of personally correcting

      Understanding the question is more important than answering it correctly - Operational thinking is about biases in answering problems and questions - The most important thing is to understand the question in the most beautiful way

      Realization about teaching and learning process - Programmers differ in their attitude towards being told they're wrong - Importance of being open to feedback for growth in programming

      Automation has benefits but limited scalability - SMT automation has advantages in problem-solving but faces scaling limitations - Despite advancements, SMT technology cannot achieve unlimited scalability

      Agda is the most tasteful tool for working with dependent types. - Agda offers beauty, consistency, simplicity, and tremendous power. - Agda contributes to an incredibly beautiful story about the equivalence of computation, logic, and the foundations of mathematics.

      Exploring if all of mathematics can be built on logic - David Hilbert's attempt to formalize logic in the early stages - Can natural numbers be understood via logic as a foundation?

      Natural numbers are a profound and important concept - Natural numbers are a product of human construction on top of other systems - Peano numbers are a significant concept in mathematics
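
      The Peano construction mentioned here, natural numbers built from zero and a successor operation, is short enough to sketch directly. An illustrative encoding using nested tuples (the representation is ours):

```python
ZERO = ("Z",)

def succ(n):
    # The successor of n, written S(n) in Peano's axioms.
    return ("S", n)

def add(m, n):
    # add(Z, n) = n ;  add(S m, n) = S (add m n)
    if m == ZERO:
        return n
    return ("S", add(m[1], n))

def to_int(n):
    # Count the successors to recover an ordinary integer.
    count = 0
    while n != ZERO:
        count, n = count + 1, n[1]
    return count

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # -> 5
```

Everything about the naturals, addition included, is defined on top of just zero and successor, which is the "construction on top of other systems" the annotation describes.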

      Constructive logic allows expression of proofs as either A or B - In constructive logic, every proof of A or B can be expressed as a proof of A or a proof of B - Brouwer's logic allows for this expression without the law of excluded middle, leading to simple answers for negation, implication, truth, and falsehood in terms of types.
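
      The disjunction property can be made concrete: under the propositions-as-types reading, a constructive proof of "A or B" is literally a tagged value whose tag you can inspect, which is exactly what the law of excluded middle does not provide. A sketch (the `left`/`right`/`case` names are ours, mirroring sum types):

```python
def left(a):  return ("left", a)    # a proof of A, tagged
def right(b): return ("right", b)   # a proof of B, tagged

def case(proof, if_a, if_b):
    # Eliminating a disjunction: branch on which disjunct was proved.
    tag, payload = proof
    return if_a(payload) if tag == "left" else if_b(payload)

# A constructive proof of "3 is even or 3 is odd" must say which:
proof = right(3)  # here: evidence for the "odd" branch
print(case(proof, lambda _: "even", lambda _: "odd"))  # -> odd
```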

      De Bruijn pioneered logic computable through computers - Exploration of dependent typing and realization of logic and types - Mechanization of information and manipulation, leading to modern programming languages

      The power of math and knowledge in programming - Manipulating from the bones is a powerful and beautiful concept - Embracing sequential stateful notion of computation limits insights and learning

      Written language enabled deep reflection and improvement of ideas. - Written language allowed ideas to be examined and improved over time. - Written language initiated a feedback loop for continuous enhancement of concepts.

      Continuous improvement through iterative optimization - Iteratively refining program logic and expressions for efficiency and clarity. - Enhanced abstraction and reusability through denotational design and parameterization.

      The debate on using formal proofs in industry - Industry perspective often argues against formal proofs due to perceived time constraints and impracticality. - Decision to use formal proofs depends on the objectives and the value placed on accuracy and thoroughness.

      Aiming for 100% correctness is the only way to actually reach 95%. - Errors compound, leading to significant deviations in calculations. - Approximations and merely probable correctness can lead to overall incorrectness in complex projects.
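
      The compounding claim is just arithmetic: if each of n steps is independently correct with probability p, the whole pipeline is correct with probability p to the power n. With illustrative numbers:

```python
per_step = 0.99   # each step is "99% correct" (invented figure)
steps = 100
overall = per_step ** steps
print(round(overall, 3))  # -> 0.366: barely a third, nowhere near 99%
```

So tolerating "only 1%" error per component is incompatible with a 95% target for any system of real size, which is the force of the annotation's claim.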

      Inspired by deep conversation - The conversation has been engaging and has touched on major topics of interest. - The speaker hopes to discuss denotational design and its application in software design.

      Create space for contemplation in the age of instant information - Encourage meditation and reflection on content - Announcement about a dedicated email for audience feedback and inquiries

      • Definition and Promise of Reactivity

        “Reactivity is the future of JS frameworks! Reactivity allows you to write lazy variables that are efficiently cached and updated, making it easier to write clean and fast code.”

        • The article frames reactivity as a foundational approach for modern JavaScript frameworks, emphasizing its power to optimize code by caching and selectively recalculating only what changes.
      • Introduction to Reactively

        “I’ve been working on a new fine grained reactivity library called Reactively inspired by my work on the SolidJS team.”

        • Reactively is presented as a fine-grained, “tiny (<1 kb)” library that focuses on lazy variables, caching, and incremental recalculation, aiming to be “the fastest reactive library in its category.”
      • Core Functionality Example

        “Here’s an example of using Reactively for a lazy variable:”

        • The provided code snippet (with reactive(10) and a deferred fetch) illustrates how Reactively defers computations and retrieves data only when needed, ensuring on-demand execution for performance benefits.
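
The lazy-variable idea can be sketched in a few lines: computation is deferred until the value is actually read, then cached until a source changes. This is a toy model of the concept, not Reactively's implementation or API:

```python
class Reactive:
    """Toy lazy reactive variable: compute on read, cache until invalidated."""
    def __init__(self, fn_or_value):
        if callable(fn_or_value):
            self._fn, self._value, self._stale = fn_or_value, None, True
        else:
            self._fn, self._value, self._stale = None, fn_or_value, False
        self.observers = []

    @property
    def value(self):
        if self._stale:
            self._value = self._fn()   # deferred until actually read
            self._stale = False
        return self._value

    def set(self, v):
        self._value = v
        for obs in self.observers:     # invalidate, but don't recompute yet
            obs._stale = True

a = Reactive(10)
b = Reactive(lambda: a.value * 2)
a.observers.append(b)  # a real library records this edge automatically

print(b.value)  # -> 20  computed now, then cached
a.set(5)
print(b.value)  # -> 10  recomputed only because a changed
```
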
      • Dependency Graph Awareness

        “Reactive libraries work by maintaining a graph of dependencies between reactive elements.”

        • By automatically detecting and organizing which reactive elements depend on which sources, libraries like Reactively minimize developer effort and maximize performance by only re-executing relevant nodes in the graph.
      • Wide Use Across Frameworks

        “Reactivity libraries are at the heart of modern web component frameworks like Solid, Qwik, Vue, and Svelte.”

        • Fine-grained reactivity is not limited to standalone libraries; it also underpins the state management logic of multiple popular front-end ecosystems.
      • Primary Goals of Reactive Libraries

        “The goal of a reactive library is to run reactive functions when their sources have changed.”

        • Two fundamental objectives emerge—efficiency (avoiding unnecessary computations) and glitch-free updates (ensuring all dependencies are in sync before rendering to the user).
      • Lazy vs. Eager Evaluation

        “Reactive libraries can be divided into two categories: lazy and eager.”

        • Eager libraries update as soon as a source changes, while lazy libraries defer recalculation until the value is explicitly requested. Each approach has implications for performance optimizations and complexity handling.
      • The Diamond Problem & Equality Check Problem

        “Evaluating D twice is inefficient and may cause a user visible glitch.”

        • Eager libraries often face the “diamond problem” (risk of double-updating downstream nodes), while lazy libraries must handle the “equality check problem” (e.g., re-checking parents unnecessarily if the value hasn’t changed).
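
The diamond problem is easy to reproduce: A feeds B and C, which both feed D. A naive eager strategy that pushes to D as soon as each parent changes runs D twice, and the first run mixes a fresh input with a stale one. A toy illustration (all names and numbers invented):

```python
d_runs = []

def run_d(b_val, c_val):
    # D combines its two parents; record every evaluation.
    d_runs.append(b_val + c_val)

def naive_eager_update(a_val):
    c_stale = 1 * 10          # C still reflects the old A (A was 1)
    b_val = a_val + 1         # B reacts first and eagerly pushes to D:
    run_d(b_val, c_stale)     #   glitch: new B combined with stale C
    c_val = a_val * 10        # now C reacts and pushes to D again:
    run_d(b_val, c_val)       #   finally consistent

naive_eager_update(2)
print(d_runs)  # -> [13, 23]  D ran twice; the first value is a glitch
```

Topologically ordered or counted update schemes (like MobX's two-pass algorithm below) exist precisely to run D once, after both parents are settled.
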
      • MobX Algorithm

        “MobX uses a two pass algorithm, with both passes proceeding from A down through its observers.”

        • MobX solves the diamond problem by a “count” mechanism across two phases, ensuring every dependent node is updated exactly once and in the correct order, also tracking whether a parent’s value has changed for equality checks.
      • Preact Signals Approach

        “Preact checks whether the parents of any signal need to be updated before updating that signal. It does this by storing a version number on each node and on each edge…”

        • Preact employs version numbers and a two-phase “down” and “up” traversal. This allows quick detection of stale nodes without re-walking the entire graph when no real changes occur.
      • Reactively’s Graph Coloring Mechanism

        “Instead of version numbers, Reactively uses only graph coloring.”

        • Reactively marks changed nodes as red (dirty) and their children as green (check). An “up” phase (updateIfNecessary()) recalculates only if a node or any of its ancestors is red, then cleans the graph state once values are confirmed.
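
A simplified sketch of the coloring scheme described here: changed sources are marked red (dirty), their transitive children green (check), and an on-demand "up" phase recomputes only when some ancestor really was red. Real Reactively also compares old and new values to cut off propagation; this toy version skips that:

```python
class Node:
    """Toy reactive node: 'clean', 'green' (check), or 'red' (dirty)."""
    def __init__(self, fn=None, value=None):
        self.fn, self.value = fn, value
        self.parents, self.children = [], []
        self.color = "clean"
        self.recomputes = 0

    def set(self, v):
        # "Down" phase: the changed source goes red, descendants green.
        self.value, self.color = v, "red"
        stack = list(self.children)
        while stack:
            n = stack.pop()
            if n.color == "clean":
                n.color = "green"
                stack.extend(n.children)

    def update_if_necessary(self):
        # "Up" phase, run on demand; returns True if this node changed.
        if self.color == "clean":
            return False
        if self.color == "green":
            # List comprehension so every parent refreshes (no short-circuit).
            parent_changed = any([p.update_if_necessary()
                                  for p in self.parents])
            self.color = "red" if parent_changed else "clean"
        if self.color == "red":
            if self.fn:
                self.value = self.fn()
                self.recomputes += 1
            self.color = "clean"
            return True
        return False

a = Node(value=1)
b = Node(fn=lambda: a.value + 1)
b.parents, a.children = [a], [b]

a.set(5)
b.update_if_necessary()
print(b.value, b.recomputes)  # -> 6 1  recomputed exactly once
b.update_if_necessary()       # nothing dirty: no extra recompute
```
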
      • Benchmarks and Observations

        “Reactively is the fastest (who would’ve guessed 😉).”

        • Although performance differences are often negligible for typical app use, experiments show Reactively performing strongly under heavy load, with Solid excelling in wide graphs, and Preact Signals being both swift and memory-efficient.
      • Memory Management & Data Structures

        “In a future blog post we'll look at the data structures and optimizations used in each framework…”

        • Beyond the core update algorithms, internal implementation details—like how each library manages and structures reactive nodes—play a crucial role in overall speed and memory usage.
      • Overall Conclusion

        “Most important is that the framework not run any user code unnecessarily.”

        • Across modern fine-grained reactive libraries, the central guiding principle is to avoid superfluous work. The focus remains on providing glitch-free, lazy or eager updates while preserving efficiency and correctness.
      • Motivation & Purpose of the Talk

        “This talk is called I see what you mean, what a tiny language can teach us about gigantic systems… which sort of formed the rock on which I built all of my thesis work.”

        • Alvaro introduces a small, “tiny” language (Dedalus) that explores how to effectively build and reason about distributed systems by focusing on semantics rather than purely operational details.
      • Importance of Abstraction & Its Pitfalls

        “Abstraction is a thing… arguably the best tool that we have in computer science… but sometimes it’s harmful.”

        • While abstraction helps manage complexity, it can also hide essential details about distributed behavior and lead to design failures (e.g., RPC “leaking” distributed complexity).
      • Division Between “Infrastructure Programmers” and “Users”

        “We tend to think of abstractions as these fixed boundaries, you know these walls… we put the geniuses, the infrastructure programmers, the 10x engineers below the wall… who goes above the wall? well, the despised users.”

        • Alvaro criticizes the mindset that library writers and library users are separate classes of people; instead, we all alternate between these roles.
      • Spark for a Declarative Approach

        “…if you kind of squint your eyes the work that I was doing in those two modes [C code vs. SQL queries]… it wasn’t really that different.”

        • Observing that data wrangling in both imperative and declarative styles shares core similarities prompted an interest in “could you write distributed systems using a logic/query language?”
      • Model-Theoretic Semantics & Queries

        “…model theoretic semantics say that no no the meaning of a program is precisely the structures that make the statements in the program true… data becomes a Common Language…”

        • A logic-based or query-based approach allows mapping “programs” to “outcomes” directly through data, making correctness and debugging potentially clearer than in purely imperative styles.
      • Datalog & Concurrency

        “Datalog is interesting because we see that there's this rich intimate connection between talking about what you don't know and having to commit to particular orders to get deterministic results.”

        • Datalog provides a unifying lens for data, but the addition of recursion, negation, and timing must be carefully managed to keep semantics deterministic in distributed settings.
      • Introducing Dedalus (pronounced ‘Day-Duh-Luss’)

        “So the idea is we want to take that clock and reify it, make the clock a piece of every unit of knowledge that we have… time is just a device that was invented to keep everything from happening at once.”

        • Dedalus extends data log with explicit time and asynchronous rules so programmers can represent mutable state, concurrency, and message ordering in a precise logical framework.
      • Three Rule Types in Dedalus

        “…we say you know every record has a time stamp… deductive rules say the conclusion has the same time stamp as the premise… inductive rules say the conclusion has one higher time stamp… asynchronous rules say hey look there's this infinite domain of time, we randomly pick from it…”

        • Dedalus’s key contribution is capturing “now,” “next,” and “eventually” semantics, reflecting real-world distributed behaviors (e.g., immediate local inference vs. future state vs. network delays).
      • State as “Induction in Time”

        “…unlike in databases, which having no time had only state, in Dedalus there is no state… state is what you get when you say when you know something, then you know it at the next time, and by induction you keep knowing it.”

        • Dedalus reframes state changes as an inductive process on discrete time steps, allowing logic-based reasoning about mutation.
      • Confluence & Determinism

        “If we take away that pesky negation… or with very carefully controlled negation… monotonic… we know that negation free or monotonic more broadly Dedalus programs are confluent… they're deterministic without coordination.”

        • By restricting programs to monotonic logic (no negative conditions or well-controlled negation), a system can behave deterministically despite asynchronous execution and failures.
      • Significance for Distributed Systems

        “…there’s this Rich intimate connection between… the meaning of programs, the uniqueness of a model… and this really valuable systems property of deterministic outcomes…”

        • Dedalus reveals how purely logical constructs (stable models, minimal models) can correspond directly to reliable, deterministic distributed protocols in practice.
      • Legacy & Extensions

        “…on top of Bloom we built Blazes… that allow programmers… exactly why they aren’t if they aren’t [deterministic]… lineage driven fault injection… we can prove that our programs are fault tolerant…”

        • Dedalus’s ideas led to subsequent systems like Bloom, Blazes, and lineage-driven fault injection that leverage logic-based reasoning to auto-generate or verify coordination strategies.
      • Closing Thoughts & Academic Invitation

        “We don’t do a good enough job respecting our users… If any of you are interested in spending the next five or six years screwing around inventing languages Building Systems with them… I’m looking for PhD students.”

        • Alvaro emphasizes user-focused abstractions, fluid design, and invites new students to further this research in language-driven system development.
    1. Overview of a keynote on a tiny programming language and its impact on systems. - Peter Alvaro discusses the development of the programming language Dedalus at UC Berkeley and its significance. - The talk includes a mix of personal insights and technical elements about language design and semantics.

      The significance of abstraction in software design and its implications. - Abstraction helps in managing complexity by hiding unnecessary details and exposing only essential functionalities. - While abstraction fosters innovation, it can lead to complacency if perceived as fixed boundaries.

      Speaker discusses personal journey from novice to professional coder. - Became a coder in the Bay Area around 2000, working at a company called Ask Jeeves. - Balanced roles in infrastructure coding with C and data reporting using SQL, feeling skilled in both.

      Explores the use of SQL for general-purpose distributed systems. - Data wrangling in SQL provides clearer rules for data manipulation compared to traditional programming. - SQL fosters a more intuitive connection between programming logic and outcome semantics, enhancing debugging and correctness.

      Understanding complexities in reliable messaging over faulty networks. - Alice and Bob's communication can face message loss and reordering, complicating reliability. - Calculating correct outcomes during message reception involves considering numerous execution scenarios.

      Understanding outcomes requires examining the properties of functions f and g. - If function f is commutative, certain variables become irrelevant in calculating outcomes. - The streaming model allows queries to operate dynamically with data, contrasting with static query models.

      Decisions on web server design should leverage adaptable optimization strategies. - Choosing data structures, algorithms, and caching strategies is crucial for efficient web server performance. - Most query languages, like SQL, combine basic relational operations but lack recursion and iteration, necessitating enhancements for complex queries.

      Datalog semantics remain intact despite looping structures. - The Datalog model asserts that a program's meaning is its minimal model, avoiding unsupported data. - Programs in Datalog function as implications, where true right-hand premises guarantee true left-hand conclusions.
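
      The "minimal model" semantics can be demonstrated with naive bottom-up evaluation: apply the program's rules until nothing new is derivable. A sketch for transitive closure, the standard textbook example (the rules are `path(X,Y) :- edge(X,Y)` and `path(X,Z) :- edge(X,Y), path(Y,Z)`):

```python
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def minimal_model(edges):
    path = set(edges)                    # first rule: every edge is a path
    while True:
        # Second rule: extend paths through edges.
        derived = {(x, z)
                   for (x, y) in edges
                   for (y2, z) in path if y == y2}
        if derived <= path:              # nothing new: least fixpoint reached
            return path
        path |= derived

print(sorted(minimal_model(edges)))
# includes ("a", "d") even though no single edge connects them
```

The fixpoint contains exactly the supported facts and nothing more, which is the minimal model; order of rule application never changes the answer.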

      Datalog enables order-independent explanations and supports 'what if' scenarios in data analysis. - By capturing lineage, Datalog reveals why specific data is included in outputs, facilitating clear reasoning. - It allows for exploring the impact of data deletion on derived data, highlighting redundancy and support in decision-making.

      Self-reference and negation complicate program semantics, leading to non-deterministic outcomes. - Programs with self-reference can yield multiple meanings based on the order of data and rule execution. - Non-deterministic behavior arises when different replicas process the same input in varying sequences.

      Understanding state and non-determinism in distributed systems is critical. - Protocols like Paxos and two-phase commit face challenges due to unreliable communication and dynamic state. - Capturing mutable state and non-deterministic message handling is essential for improving the design of distributed systems.

      Discussing the relationship between assignment statements and the concept of time. - Assignment statements in programming segment time into past and future, with implications for how values are understood. - The conventional and database models of time, with emphasis on their linearity and how events are recorded.

      Knowledge is ephemeral and relies on context and timing. - Knowledge only holds meaning at specific moments and must be shared in real-time to be relevant. - Computation involves aligning data and events temporally to derive new knowledge.

      Implementing asynchronous rules in Datalog-like languages simplifies time management. - Every record in Dedalus has a timestamp, allowing for deductive and inductive reasoning based on time. - The powerful constructs in Dedalus enable expressing complex temporal relationships like atomicity, mutual exclusion, and sequencing.

      Understanding non-determinism and time in distributed systems programming. - In distributed systems, messages from agents race to the counter, leading to uncertain outcomes about the read values. - The challenge lies in developing a model theoretic semantics for programs with non-deterministic behavior influenced by time.

      Understanding program behavior and outcomes through equivalence classes. - Behavior tracking reveals that outcomes often change only by their timestamps, requiring classification. - Establishing unique models for programs can simplify distributed systems, ensuring deterministic executions.

      Abstraction in programming simplifies complexity while ensuring data consistency. - Hiding various representations of state helps in maintaining clarity, allowing focus on data as the core unit of computation. - Utilizing concepts from logic programming enhances understanding and correctness of distributed systems through deterministic outcomes.

      Development of programming languages and systems for fault tolerance and ease of use. - The Bloom language simplifies programming syntax while preserving semantics, providing insights into program determinism. - Lineage-driven fault injection enhances program reliability by proving outcomes through independent redundant proofs.

      Audience applauds in appreciation of the performance. - The applause signifies the audience's approval and enjoyment of the event. - It serves as a positive reinforcement for the performers' efforts.

    1. Summary of Tech Talk: Atomic Data and the Semantic Web

      Introduction

      • Speaker: Software developer and entrepreneur with experience in linked data and the semantic web.

      "My name is [Hube], I'm a software developer and entrepreneur and I did a lot with linked data and the semantic web."

      • Context: Focus on atomic data as an iteration on the semantic web and RDF concepts.

      "Today I'm going to talk about the semantic web and atomic data which is an iteration on that."

      Challenges with the Current Web and Semantic Web

      • Centralization: Web has shifted from a decentralized network to a centralized model controlled by few actors.

      "It has become way more centralized...and the nodes between individuals have kind of gone missing over time."

      • Non-interoperable APIs: Many systems are incompatible due to proprietary APIs.

      "Even when developers create new types of applications, they often end up creating their totally own APIs."

      • Initial Design Gaps: The web's machine readability and write capabilities were underdeveloped.

      "The web is not machine-readable...The very first web browser actually had full read-write capabilities."

      RDF and the Vision for a Semantic Web

      • Concept: RDF uses triples (subject, predicate, object) to make data machine-readable and interoperable.

      "The idea of RDF is that it really increases the quality of data on the web."

      Challenges with RDF:

      • Lack of native arrays or sequential data, making JSON more practical.

      "RDF doesn't have any form of native arrays or sequential data."

      • Subject-property uniqueness issues lead to complexity.

      "A subject can have many triples with the same predicate...makes it very hard to store data in maps."

      • Blank nodes and named graphs introduce high complexity.

      "Blank nodes are this really complex thing...and there's also named graphs in RDF which provide just another layer of complexity."

      Atomic Data: An Alternative Approach

      • Vision: Combines RDF's interoperability, JSON's simplicity, and TypeScript's type safety.

      "Atomic data is basically where all these three things are combined."

      Key Features:

      • Strict subset of RDF and JSON with tight coupling between schema and validation.

      "Classes describe which properties are required and which ones are optional."

      • Transactions standardize changes for versioning, history, and cryptographic verification.

      "Every change is a transaction...every transaction is actually a resource."

      • Fully resolvable URLs ensure semantic clarity and reusability.

      "Every resource is actually resolvable...you can use a local ID for resources not hosted globally."

      • Extensibility for dynamic and interactive web applications (e.g., real-time collaboration, file handling).

      "Static data is relatively easy but as we all know data changes over time."
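The "classes describe which properties are required and which ones are optional" idea can be sketched in a few lines. This is an illustrative sketch only, not Atomic Data's actual implementation; the property URLs and the `validate` helper are invented for the example.

```python
# Illustrative sketch (not Atomic Data's implementation): a "class" lists
# required and optional properties, and validation checks a resource
# against it before the data is accepted.

person_class = {
    "required": {"https://example.com/props/name"},
    "optional": {"https://example.com/props/birthdate"},
}

def validate(resource: dict, cls: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for prop in cls["required"]:
        if prop not in resource:
            errors.append(f"missing required property: {prop}")
    allowed = cls["required"] | cls["optional"]
    for prop in resource:
        if prop not in allowed:
            errors.append(f"unknown property: {prop}")
    return errors

ok = validate({"https://example.com/props/name": "Ada"}, person_class)
bad = validate({}, person_class)
```

Coupling the schema to validation this way is what gives the tight feedback loop the talk contrasts with RDF's looser, open-world modeling.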

      Practical Implementations

      • Atomic Server: Graph database in Rust for speed and robustness.

      "A graph database...written in Rust and it's really fast."

      Features:

      • Full text search, authentication, and live synchronization.

      "Embedded full text search index, queries, sorting, filtering, pagination, authorization."

      • Front-end: Built with React for ease of use and collaboration.

      "A graphical user interface that basically allows you to view all the data and add all data."

      Community and Tools

      • Libraries: JavaScript, TypeScript, and Rust libraries for integration.

      "Atomic Lib Library which is a JavaScript TypeScript npm type of library."

      • Open Source and Documentation: MIT-licensed projects with comprehensive guides.

      "Everything shown is all MIT license...a very, very big documentation book."

      • Community Engagement: Discussions on GitHub, Discord, and plans for W3C submission.

      "There's a W3C community group for atomic data...most of the activity is on GitHub and Discord."

      Conclusion and Vision

      • Future of Atomic Data: Bridging the gaps of RDF with pragmatic, developer-friendly solutions.

      "I think we should always use URLs that resolve."

      • Call to Action: Developers encouraged to explore, contribute, and adopt atomic data.

      "Thank you for listening, does anybody have a question?"

      This summary encapsulates the core challenges and solutions proposed for evolving the web's interoperability through atomic data while highlighting the technical nuances and community-driven development efforts.

  9. Dec 2024
      • Context and Core Problem

        “Harvard writes, you know additionally llms struggle to provide contextually relevant answers … specifically llms have difficulty combining scientific factual … with tacit … non-codified knowledge…”

        • The text emphasizes how Large Language Models (LLMs) struggle in medical contexts due to the dual challenge of incorporating both codified (structured) and non-codified (tacit) knowledge.
        • Harvard recognizes that inaccurate or incomplete retrieval poses a major risk in medical applications.
      • Inadequacy of Current Medical LLM Performance

        “…if you look here at llama 3B we are here with the basic below 50%, intermediate we are below 40% and the expert is below 30% … with GPT-4 Turbo … about at 50% so for a medical system this is not acceptable…”

        • The text compares LLMs on basic, intermediate, and expert-level medical questions, showing subpar performance (30%-50% accuracy).
        • Such performance is “horrible” for medical standards where high precision is necessary.
      • Limitation of Retrieval-Augmented Generation (RAG)

        “…Harvard discovers that also those RAG methods can provide multisource knowledge … but they are really vulnerable to potential errors… incomplete and incorrect information … leading now here in Harvard Medical School inaccurate retrieval…”

        • RAG systems can still fail, especially if the underlying data repositories are incomplete or if there is no robust verification mechanism after retrieving facts.
      • Proposed Knowledge Graph-Based Agent

        “…they developed a new methodology … knowledge graph-based agent … designed to address the complexity of knowledge intensive medical queries…”

        • Harvard’s new solution leverages knowledge graphs (KGs) to systematically integrate multi-source, structured medical data.
        • It aims to combine LLM-generated insight with the rigor of codified knowledge in KGs.
      • Four-Phase Approach

        1. Generate

          “…At first we are at a generate level … to prompt the llm to follow different procedures for generating relevant triplets…”

          • The LLM identifies key medical entities and relationships from the question (“H, R, T” format).
          • It creates structured triplets (head entity – relationship – tail entity) to capture potential facts.
        2. Review

          “…it aims to assess the correctness of the generated triplets … by grounding here the correctness of the triplets in a medical Knowledge Graph…”

          • The system validates these newly generated triplets against the KG.
          • Entities are mapped to UMLS codes to ensure standardized medical terms.
        3. Revise

          “…the llm continues to propose and refine the triplets until the triplets are validated again by the knowledge graph…”

          • If the triplets are incorrect or incomplete, the LLM iteratively refines them.
          • Mismatched or missing
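The generate, review, revise loop described above can be sketched as plain functions. This is a hypothetical illustration, not Harvard's implementation: `kg` stands in for the medical knowledge graph, and the proposed triplets stand in for LLM output.

```python
# Hypothetical sketch of the generate -> review -> revise loop.
# The knowledge graph and the "LLM-proposed" triplets are toy stand-ins.

kg = {("aspirin", "treats", "headache"), ("aspirin", "is_a", "nsaid")}

def review(triplets, kg):
    """Split proposed (head, relation, tail) triplets into validated
    and rejected sets by grounding them in the knowledge graph."""
    validated = [t for t in triplets if t in kg]
    rejected = [t for t in triplets if t not in kg]
    return validated, rejected

def revise(rejected, kg):
    """Toy revision step: replace a rejected triplet with a KG fact
    about the same head entity, if one exists."""
    revised = []
    for head, _, _ in rejected:
        for fact in kg:
            if fact[0] == head:
                revised.append(fact)
                break
    return revised

# "Generate" phase output (would come from the LLM in the real system):
proposed = [("aspirin", "treats", "headache"),
            ("aspirin", "treats", "diabetes")]  # second one is wrong
validated, rejected = review(proposed, kg)
validated += revise(rejected, kg)
```

The point of the loop is that every triplet that survives is grounded in the graph, so the final answer is assembled from verified facts rather than raw model output.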
    1. Speaker’s Mixed Feelings on Language Models

      Key Quote: “The second controversial statement is language models suck or rather AI sucks and specifically the way our culture has been using it.”

      The speaker acknowledges that while language models are groundbreaking (“language models are pretty neat”), there are serious ethical, social, and environmental concerns, which creates a personal and professional dilemma.

      Motivation: Making AI Good

      Key Quote: “How can we make the tech actually good if it comes with all these trade-offs...so let’s make it as good as it can possibly be but how?”

      The talk’s central goal is to explore how to refine language model technology to maximize its societal benefits and minimize harms.

      RDF Overview and History

      Key Quote: “rdf is an attempt to tackle some of the hardest problems of knowledge representation and reasoning...from the same group of people that put together all the internet specifications the w3c.”

      RDF (Resource Description Framework) emerged post-internet boom, aiming to provide a universal system for knowledge representation, rooted in symbolic AI traditions and overseen by the W3C.

      Why RDF Fell Out of Favor

      Key Quote: “One is rdf XML which is one of the initial formats...this is a verbose complex format it’s just honestly not great.”

      Early technical choices and heavy enterprise solutions contributed to RDF’s reputation as being cumbersome and outdated, even though “under the hood” it remains robust and conceptually sound.

      RDF’s Elegant Core: Resources and Triples

      Key Quote: “It’s the resource description framework so let’s talk about resources first...a resource is anything in the world that you can talk about.”

      RDF structures knowledge as “triples” (subject, predicate, object) linked by unique identifiers (IRIs), enabling precise, context-rich data representation.
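The triple model is simple enough to show directly. A minimal sketch, assuming invented example IRIs (these are not real vocabularies):

```python
# Each fact is a (subject, predicate, object) tuple, and subjects and
# predicates are IRIs, so any two datasets agree on what a fact names.

EX = "https://example.com/"

triples = {
    (EX + "luke", EX + "childOf", EX + "anakin"),
    (EX + "luke", EX + "name", "Luke Skywalker"),
}

def objects(triples, subject, predicate):
    """All objects asserted for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}
```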

      Federation and Union of Data

      Key Quote: “An rdf data set is a set in the closure sense or the mathematical sense...we can also safely union sets.”

      By standardizing each piece of data (triples + IRIs), RDF allows combining multiple datasets (federation) without losing context or creating duplication conflicts.
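Because a dataset is a mathematical set of globally identified triples, federation really is just set union, as a small sketch shows (IRIs invented for the example):

```python
# Two sources asserting overlapping facts: identical triples deduplicate
# automatically under union, and nothing needs renaming or reconciling.

source_a = {
    ("https://example.com/luke", "https://example.com/childOf",
     "https://example.com/anakin"),
}
source_b = {
    ("https://example.com/luke", "https://example.com/childOf",
     "https://example.com/anakin"),  # same fact, stated independently
    ("https://example.com/leia", "https://example.com/childOf",
     "https://example.com/anakin"),
}

federated = source_a | source_b  # set union; the duplicate collapses
```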

      Inference, Logic, and Schemas

      Key Quote: “This is all about entailment...given a set of triples I can derive other triples from them conceptually.”

      RDF includes logical rules for automatically deriving new facts (entailment) and validating data, reflecting decades of research in symbolic AI and formal logic.
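A toy entailment rule in the spirit of RDFS subproperty reasoning illustrates "given a set of triples I can derive other triples". The property names are invented; real RDFS uses `rdfs:subPropertyOf` and IRIs.

```python
# If childOf is declared a subproperty of descendantOf, every childOf
# triple entails a corresponding descendantOf triple.

triples = {
    ("luke", "childOf", "anakin"),
    ("anakin", "childOf", "shmi"),
}

def entail_subproperty(triples, sub, sup):
    """Derive (s, sup, o) for every (s, sub, o) and return the closure."""
    derived = set(triples)
    derived |= {(s, sup, o) for s, p, o in triples if p == sub}
    return derived

closure = entail_subproperty(triples, "childOf", "descendantOf")
```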

      Language Models: Context and Probabilistic Reasoning

      Key Quote: “All a model is is a pure function that predicts the next token...with some constants in it.”

      Modern language models leverage the Transformer architecture for predicting tokens and exhibit capabilities in semantics, grammar, and even “fact patterns,” though they remain probabilistic approximations.

      Challenges of Using Language Models

      Key Quote: “All of AI programming...is putting the right stuff into the model to try to get it to get out the stuff that you want.”

      Because re-training an LLM is expensive, practitioners focus on techniques like prompt engineering, retrieval-augmented generation, query generation, and tool use to shape the model’s output.

      Core Problem: Integrating Data and Language

      Key Quote: “We write programs that work between them a lot...But how do I get my data to meet my language?”

      The speaker emphasizes the need for a mechanism that unifies formal data representation and natural language capabilities, highlighting RDF as the bridge.

      RDF as the Bridge Between LLMs and Data

      Key Quote: “We should be putting rdf data in our prompts and when we are asking to get kind of more structured data out of models we should be asking for it in rdf format.”

      RDF’s subject-predicate-object structure aligns naturally with the grammar captured by language models, enabling more precise input/output handling and reducing ambiguity.

      RDF Queries for Tool Use

      Key Quote: “If my rdf implementation supports reasoning...the language model is asking a different question who is Luke a descendant of.”

      By combining RDF’s inference with LLM queries, complex or open-ended questions (“who am I?” or genealogical lookups) can be answered reliably, without forcing the language model to handle all logic internally.
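The "who is Luke a descendant of" lookup is exactly the kind of question a reasoner answers with a transitive closure, so the language model only has to issue the query. A minimal sketch with toy data (names illustrative):

```python
# Transitive-closure reasoning over childOf facts: the reasoner, not the
# language model, walks the chain of parents.

child_of = {("luke", "anakin"), ("anakin", "shmi")}

def descendants_of(person, child_of):
    """All ancestors reachable via childOf, computed iteratively."""
    found, frontier = set(), {person}
    while frontier:
        step = {parent for child, parent in child_of if child in frontier}
        frontier = step - found
        found |= step
    return found
```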

      Combining Symbolic and Neural Approaches

      Key Quote: “Neurosymbolic AI is...any research that is trying to combine the abstract fuzzy neural network with hard concrete logical symbols.”

      RDF can serve as the symbolic layer, and the language model as the neural layer—together addressing knowledge gaps that purely logical or purely neural methods struggle with alone.

      Conclusion and Practical Use Cases

      Key Quote: “I think it’s a tool to actually automate a lot of the dishes and the laundry of working with data...particularly on the data side.”

      The speaker envisions AI not to replace creative work or coding but to handle the “tedious chores” of data management, with RDF acting as a structured, logic-friendly foundation to harness LLMs effectively.

    1. Introduction & Problem Statement

      Quoted sentence: “The problem is that datomic users can’t reason about transactor performance and our objective today is that everyone should be able to leave being able to answer two questions: where is all the time going and what can I do about it?”

      Summary: The speaker highlights a key challenge: Datomic users struggle to understand transactor performance. The goal of this presentation is to enable them to identify performance bottlenecks and find actionable solutions.

      Datomic Architecture Overview

      Quoted sentence: “We then have another component called the transactor and the transactor is an appliance which processes transactions safely one at a time… sometimes given enough load transactions will queue.”

      Summary: Datomic consists of peer processes co-located with user applications for fast reads and a single-threaded transactor responsible for safely processing write transactions, which can become a bottleneck under heavy load.

      Queuing Theory Basics

      Quoted sentence: “As the utilization of any queuing system increases, the residence time trends toward infinity.”

      Summary: The talk introduces queuing theory to explain how increased transaction arrival rates and high resource utilization cause wait times to grow dramatically. Key metrics—service time, utilization, throughput, and response time—are all crucial to analyzing performance.
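The claim that residence time "trends toward infinity" follows from the standard single-server queue approximation R = S / (1 - U), where S is service time and U utilization. A quick numeric sketch (the service-time value is invented for illustration):

```python
# M/M/1-style approximation: residence time blows up as utilization -> 1.

def residence_time(service_time: float, utilization: float) -> float:
    """Approximate residence time for a single-server queue."""
    assert 0 <= utilization < 1
    return service_time / (1 - utilization)

S = 0.010                        # 10 ms service time
low = residence_time(S, 0.50)    # 20 ms
high = residence_time(S, 0.95)   # 200 ms: 10x the latency for ~2x the load
```

This is why the talk's strategies all reduce to lowering S or U: small reductions in service time buy disproportionate headroom in latency.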

      Impact of High Utilization on Latency

      Quoted sentence: “We can determine the max utilization in order to achieve a response time that we could tolerate.”

      Summary: By controlling or reducing service time (S) in Datomic’s transactor, one can keep overall utilization manageable and thus keep transaction latencies within acceptable limits.

      Working Set Model Explanation

      Quoted sentence: “The working set model is effectively the smallest collection of information that must be present in main memory to ensure efficient execution of your program.”

      Summary: The speaker references a classic 1968 MIT paper to illustrate that performance is fundamentally affected by how much necessary data (segments) are in memory. Excessive page traffic (moving segments from external storage to memory) slows down transaction processing.

      Performance Benchmark: 10-Billion-Datum Database

      Quoted sentence: “We’re going to…flood this system with 5,000 TPS and it’s going to be completely saturated… so this is just a way for us to ensure that we’re absolutely saturating this system.”

      Summary: A large-scale test environment demonstrates how Datomic transactor performance responds under heavy load, revealing that as data size grows, the transactor becomes increasingly IO-bound.

      Identifying IO Bottlenecks

      Quoted sentence: “If we wanted to know what can we do about it we would look at the TX stats chart and we can see…most of the time is spent resolving transaction identities.”

      Summary: Transaction resolution and de-duplication dominate overall processing time once the database is large. Reducing disk or network IO through smarter caching and smaller working sets is key to boosting throughput.

      Caching Hierarchy in Datomic

      Quoted sentence: “We have…a val cache which is an NVMe SSD…light years faster than a network hop especially across availability zones.”

      Summary: Datomic’s multi-tier caching structure (object cache, in-flight lookup, val cache, external storage) significantly impacts latency, showing that local NVMe-based caching outperforms remote storage solutions.

      Strategies to Improve Transactor Performance

      Quoted sentence: “If you wanted to improve transactor performance you could fiddle with some knobs… deploy faster Hardware… or semantic improvements in your application.”

      Summary: Tuning existing settings is a short-term fix. More powerful hardware (e.g., NVMe val cache) or re-thinking application design (e.g., minimal uniqueness checks, fewer data collisions, smaller transactions) often yields far bigger gains.

      Semantic Optimizations & Sequential Identifiers

      Quoted sentence: “Why do sequential identifiers actually work this way? … if you’re using a squuid… you only need to check against the random part of competing squuid identifiers within a sliding time window of one second.”

      Summary: The talk stresses that using sequential or time-ordered IDs (like squuids) can drastically reduce random lookups and maintain a small, hot working set in memory, leading to significant transaction throughput improvements.
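The squuid idea can be sketched in a few lines: put the current epoch second in the ID's high bits so that IDs created close together in time sort together. This mimics the shape of Datomic's squuid but is not its exact implementation.

```python
# Sketch of a squuid-style, time-ordered UUID: the top 32 bits are the
# current epoch second, the remaining 96 bits are random. Inserts made in
# the same second therefore land in the same "hot" index segment, and
# uniqueness checks only race within that one-second window.

import time
import uuid

def squuid() -> uuid.UUID:
    """UUID whose top 32 bits are the current epoch second."""
    base = uuid.uuid4()
    secs = int(time.time())
    return uuid.UUID(int=(secs << 96) | (base.int & ((1 << 96) - 1)))

a, b = squuid(), squuid()
# a was created no later than b, so its time prefix is <= b's.
```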

      Platform-Level Improvements

      Quoted sentence: “An interesting consequence of this… is that as the utilization increases we end up getting more time waiting in the queue which then gives us more time to prefetch the IO which then reduces the utilization.”

      Summary: Recent Datomic enhancements, such as “datomic hints” and advanced prefetching, exploit queue wait time to fetch needed segments preemptively, alleviating IO stalls in the critical apply phase and thereby driving down utilization and latency.

      Datomic Hints & Segment Prefetch

      Quoted sentence: “We can begin to prefetch some of the reads… purely for side effects to pre-populate the cache with the appropriate data… it can reduce the IO inside the apply thread.”

      Summary: A notable new feature allows peers to send “hints” about which segments the transaction will touch, letting the transactor load them while transactions are queued. This cuts down service time dramatically.
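An illustrative sketch of the hint mechanism described above (not Datomic's actual API): while the transaction waits in the queue, the segments it will touch are fetched into a cache on another thread, so the single-threaded apply phase finds them already resident.

```python
# Toy model of segment-prefetch hints: `storage` stands in for remote
# storage, `cache` for the transactor's local cache.

import threading

cache = {}
storage = {"seg-1": b"index data", "seg-2": b"more index data"}

def prefetch(hints):
    """Populate the cache purely for side effects, off the apply thread."""
    for seg in hints:
        cache.setdefault(seg, storage[seg])

def apply_tx(tx):
    """Apply phase: with hints honored, every segment is a cache hit."""
    return all(seg in cache for seg in tx["hints"])

tx = {"hints": ["seg-1", "seg-2"]}
worker = threading.Thread(target=prefetch, args=(tx["hints"],))
worker.start()
worker.join()            # prefetch completes while the tx is queued
no_io_stall = apply_tx(tx)
```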

      Conclusion

      Quoted sentence: “With that I think we’ve run through our agenda… so thank you.”

      Summary: The talk ends by reiterating how queueing theory, the working set model, semantic changes, and new Datomic features like segment prefetch collectively empower engineers to tackle transactor bottlenecks at scale.

  10. Nov 2024
    1. State management is a complex topic in Flutter with various approaches to choose from.

      "State management is a complex topic."

      Provider: A commonly used state management solution in Flutter.

      "Provider package"

      Riverpod offers compile safety and testing without depending on the Flutter SDK, similar to Provider.

      "Riverpod works in a similar fashion to Provider. It offers compile safety and testing without depending on the Flutter SDK."

      setState is the low-level approach for widget-specific, ephemeral state.

      "The low-level approach to use for widget-specific, ephemeral state."

      ValueNotifier & InheritedNotifier use Flutter's built-in tools to update state and notify the UI of changes.

      "An approach using only Flutter provided tooling to update state and notify the UI of changes."
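The ValueNotifier pattern is language-agnostic; a minimal sketch in Python (not Dart) shows the mechanism: a value holder notifies registered listeners whenever its value changes, which is how the UI learns it must rebuild.

```python
# Minimal observer-style value holder, mirroring the shape of Flutter's
# ValueNotifier: setting .value to a *different* value fires listeners.

class ValueNotifier:
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:
            self._value = new
            for fn in self._listeners:  # notify the UI layer
                fn()

counter = ValueNotifier(0)
events = []
counter.add_listener(lambda: events.append(counter.value))
counter.value = 1
counter.value = 1  # unchanged value: no notification
counter.value = 2
```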

      InheritedWidget & InheritedModel facilitate communication between ancestors and children in the widget tree and are the foundation for other state management solutions.

      "The low-level approach used to communicate between ancestors and children in the widget tree. This is what provider and many other approaches use under the hood."

      June is a lightweight and modern library focusing on a pattern similar to Flutter's built-in state management.

      "A lightweight and modern state management library that focuses on providing a pattern similar to Flutter's built-in state management."

      Redux is a state container approach familiar to web developers, suitable for managing application state.

      "A state container approach familiar to many web developers."

      Fish Redux is an assembled Flutter application framework based on Redux, suitable for medium and large applications.

      "Fish Redux is an assembled flutter application framework based on Redux state management. It is suitable for building medium and large applications."

      BLoC / Rx comprise a family of stream/observable-based patterns for state management.

      "A family of stream/observable based patterns."

      GetIt is a service locator approach that doesn't require a BuildContext for state management.

      "A service locator based state management approach that doesn't need a BuildContext."

      MobX is a popular library based on observables and reactions for state management.

      "A popular library based on observables and reactions."

      Dart Board is a modular feature management framework designed to encapsulate and isolate features in Flutter applications.

      "A modular feature management framework for Flutter. Dart Board is designed to help encapsulate and isolate features, including examples/frameworks, small kernel, and many ready-to-use decoupled features such as debugging, logging, auth, redux, locator, particle system and more."

      Flutter Commands uses the Command Pattern and ValueNotifiers for reactive state management.

      "Reactive state management that uses the Command Pattern and is based on ValueNotifiers."

      Binder is a state management package using InheritedWidget at its core, promoting separation of concerns.

      "A state management package that uses InheritedWidget at its core. Inspired in part by recoil. This package promotes the separation of concerns."

      GetX is a simplified and powerful reactive state management solution.

      "A simplified reactive state management solution."

      states_rebuilder combines state management with dependency injection and an integrated router.

      "An approach that combines state management with a dependency injection solution and an integrated router."

      Triple Pattern (Segmented State Pattern) uses Streams or ValueNotifier, focusing on three core values: Error, Loading, and State.

      "Triple is a pattern for state management that uses Streams or ValueNotifier. This mechanism (nicknamed triple because the stream always uses three values: Error, Loading, and State), is based on the Segmented State pattern."

      solidart is a simple yet powerful state management solution inspired by SolidJS.

      "A simple but powerful state management solution inspired by SolidJS."

      flutter_reactive_value offers a minimalistic solution, allowing newcomers to add reactivity without complexity.

      "The flutter_reactive_value library might offer the least complex solution for state management in Flutter. It might help Flutter newcomers add reactivity to their UI, without the complexity of the mechanisms described before."

      Elementary provides a straightforward way to build Flutter applications with MVVM, enhancing productivity and testability.

      "Elementary is a simple and reliable way to build applications with MVVM in Flutter. It offers a pure Flutter experience with clear code separation by responsibilities, efficient rebuilds, easy testability, and enhancing team productivity."

      Developers are encouraged to review these options to select an approach that best fits their use case.

      "If you feel that some of your questions haven't been answered, or that the approach described on these pages is not viable for your use cases, you are probably right."

    1. Summary of Ravi Chugh's Talk on "Programming with Direct Manipulation":

      Motivation to make programming languages more interactive, human-friendly, and accessible:

      Quote: "This talk is about research efforts to make programming languages and tools more interactive, more human friendly, and hopefully more accessible and useful to a wide variety of people."

      Tension between programming and direct manipulation interfaces:

      Quote: "On one hand, we want and need the full expressive power of our fancy general purpose programming languages that are equipped for abstract symbolic reasoning; at the same time, we also want and need tangible interactive user interfaces for understanding and manipulating the concrete things we are making."

      Desire for systems that blend programming languages with direct manipulation UIs:

      Quote: "So naturally, what we would like are systems that blend programming languages and concrete direct manipulation user interfaces, allowing us to smoothly move back and forth between these different modes as needed."

      Introduction of the concept "Programming with Direct Manipulation":

      Quote: "I'll refer to these goals as programming with direct manipulation—that is, in addition to unrestricted text-based editing of source code in whatever our favorite language happens to be, we would like the ability to inspect and interact with and change the output, and have the system help suggest changes to the code based on these interactions."

      Historical context of interactive programming systems:

      Quote: "Similar visions for interactive programming systems to augment human creativity and intelligence can be traced all the way back to the 1960s, from the constraint-oriented Sketchpad system by Ivan Sutherland to the work on graphical user interfaces and interactive computing by Doug Engelbart, Alan Kay, and many, many others."

      Recent interest and efforts in the intersection of PL and HCI:

      Quote: "In the past decade or so, there's been renewed interest in these challenges which lie at the intersection of PL and HCI."

      Introduction of Sketch-n-Sketch prototype system:

      Quote: "In my group, we've been exploring a few ideas in a prototype system called Sketch-n-Sketch—for sketching partial programs in the program synthesis sense, and sketching or drawing objects in the GUI editor sense."

      Three main ideas explored in Sketch-n-Sketch:

      Programming by demonstration in a pure lambda calculus:

      Quote: "The first idea is to explore programming by demonstration techniques for building programs in a pure lambda calculus, rather than in a lower-level imperative language as in most PBD work."

      Streamlined structure editing of abstract syntax trees in a text editor:

      Quote: "The second idea is to explore how structure editing of an abstract syntax tree might be streamlined into an ordinary existing text editor, as opposed to being a completely separate editing paradigm."

      Incorporating bidirectional programming techniques:

      Quote: "The third idea explores how to incorporate bidirectional programming techniques so that relatively small changes to the output can be mapped back to changes in the program."

      Demo Part 1: Programming by Demonstration—Every interaction is codified as a program change:

      Quote: "The key takeaway from this first part of the demo is that every direct manipulation interaction in the output pane is codified as a change to the program in the left pane."

      Demo Part 2: Structure Editing—Combining text and structure edits with GUI overlays:

      Quote: "The key takeaway from the following demo is that, in addition to regular text edits on the concrete syntax of the program, the left pane also supports certain program transformations by hovering, selecting, and clicking on the abstract syntax tree."

      Demo Part 3: Bidirectional Programming—Mapping output changes back to the program:

      Quote: "In the third and final part of the demo, we'll talk about the bidirectional programming features in Sketch-n-Sketch that support such changes. In contrast to the previous examples, where Sketch-n-Sketch was configured for SVG programming, the following example will show a program that generates a simple HTML page."

      Exploration of programming by demonstration techniques in functional programming languages:

      Quote: "In contrast, our goal in Sketch-n-Sketch so far has not been to build the ultimate visual graphics editor, but rather to explore whether GUI interactions can be represented as ordinary text-based programs, as a way to bridge rather than replace a full-featured programming language."

      Discussion on structure editing and the use of GUI elements to manipulate ASTs:

      Quote: "There are many aspects to consider, both in the user interface side as well as the semantics of the transformations."

      Integration of text and visual editing in programming environments:

      Quote: "All of these ideas help make progress toward the goal of integrating text and visual editing, but these user interfaces really only make edits at the leaves of the AST."

      Challenges in scaling up structure editing and program transformations:

      Quote: "It remains an open question how to scale up such edit languages to describe larger program transformations and refactorings in a way that preserves static and dynamic information across compiles and allows the editor to be extended with new and custom transformations."

      Importance of bidirectional programming in mapping output changes to code:

      Quote: "I think there's potential to develop this kind of bidirectional programming for a lot of practical settings."

      Potential application domains: Data science, web development, graphics design:

      Quote: "I think it's easy to imagine a workflow where programmers, designers, and end users with a variety of technical backgrounds and different permission levels can work together to suggest and commit changes in this kind of a bidirectional system."

      Advances in live programming interfaces and integrating program synthesis:

      Quote: "It's been great to see all these efforts to make synthesis techniques more usable, and we'll need a lot more of this work going forward."

      Discussion on the role of spreadsheets as live programming interfaces:

      Quote: "So although spreadsheets have always lacked many of the bread-and-butter features that we would expect in any real programming system, spreadsheets have proven extraordinarily flexible and useful, and especially with some of these recent language extensions, spreadsheets provide a lot of really compelling opportunities both for PL and user interface design."

      Bridging the gap between designers and developers in collaborative projects:

      Quote: "Here's one setting in which I'm personally interested in trying to bridge these gaps between the designer and developer, regardless of whether they are multiple different people or just an individual user."

      Conclusion emphasizing recurring themes and future challenges:

      Quote: "So that was a whirlwind tour of a bunch of ideas spanning PL and HCI that factor into this pursuit of more interactive programming systems that support direct manipulation."

      Summary of recurring themes:

      Quote: "One is, can we design every graphical user interface to be backed by text-based programs in a general programming language?"

      Encouraging collaboration and future work in PL plus HCI:

      Quote: "So if you're interested, if you're sympathetic to the cause, there are certainly missions out there for you."

      Acknowledgments and appreciation:

      Quote: "There are a lot of people I want to thank for encouraging me in this work."

    1. The speaker aims to distill years of thinking about Functional Reactive Programming (FRP) into a concise talk.

      Quote: "So this talk is really distilling those years into like a 40-minute thing so you don't have to go through the same thing that I did."

      The main goals are to understand what FRP is, categorize different variations, and evaluate them effectively.

      Quote: "Our goals for today are to understand what is FRP, how do we categorize different things that sort of fall into that umbrella, and then how do we evaluate those different things in a nice way."

      First-order FRP, as implemented in Elm, involves signals connected to inputs from the world, representing values that change over time.

      Quote: "The key part of a signal graph is that it has inputs from the world... it's a mouse position that's changing over time."

      Signals in Elm are infinite and cannot be created or destroyed; they model continuous inputs like mouse position or keyboard presses.

      Quote: "Another property is signals are infinite... there's no such thing as deleting a signal... the inputs to your program are fixed."

      Signal graphs in Elm are static, meaning their structure is known at startup and does not change over time, which provides several benefits.

      Quote: "Another property is signal graphs are static; there's a known structure for your application from startup all the way into the future."

      Elm's FRP model is synchronous by default, ensuring events are processed in the order they occur, which is crucial for consistent user interactions.

      Quote: "Finally, it's synchronous by default... when you type hello... you want those letters to show up in that order."

      Transformations on signals are performed using functions like lift, allowing for pure functional manipulation of time-varying values.

      Quote: "We have a function that goes from A's to B's... and we go make a signal of B."

      Stateful computations are handled using foldp (fold from the past), which accumulates state over time based on incoming events.

      Quote: "In Elm, this is called foldp... we give it a starting state... and a way to update that state."
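
      The foldp idea can be sketched outside Elm. A minimal Python analogue (all names hypothetical, not Elm's actual API): an update function is folded over a stream of events, threading the accumulated state from the past through each step:

      ```python
      def foldp(update, initial_state, events):
          """'Fold from the past': thread state through successive events,
          yielding the accumulated state after each one."""
          state = initial_state
          for event in events:
              state = update(event, state)
              yield state

      # Counting clicks: every event increments the accumulated count.
      counts = list(foldp(lambda _event, n: n + 1, 0, ["click", "click", "click"]))
      # counts == [1, 2, 3]
      ```

      The starting state and the update function together determine the signal's value at every point in time, which is what makes replay-based tooling like the time-travel debugger possible.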

      Signals can be merged and combined using functions like merge and lift2, enabling complex signal graphs built from simpler components.

      Quote: "Finally, we have a way to merge signals together... we can apply a function that puts them together."

      The static nature of signal graphs in Elm enables efficient execution, as everything is event-driven and stateful nodes only need to look back at their previous state.

      Quote: "Okay, what do we get when we make these design choices... the first one is efficiency... everything's event-driven."

      Elm's architecture promotes modularity and separation of concerns, dividing programs into models, updates, and views in a pure functional style.

      Quote: "So when we look at the structure of this application, it breaks up nicely into four parts... we first have a model... we have update... we have a view."

      Hot swapping allows code changes to propagate in real-time without restarting the application, facilitated by the static signal graph.

      Quote: "So we're able to change the behavior of the program while the program is running and sort of see those changes propagate automatically."

      The time-travel debugger in Elm leverages the static nature of signal graphs to record and replay events, aiding in debugging and development.

      Quote: "We can pause this program and sort of go back in time to wherever we want to go... you can start to get really cool insights about what's going on in your program at particular points in time."

      Higher-order FRP introduces dynamic signal graphs that can be reconfigured at runtime, but this comes with significant trade-offs and complexities.

      Quote: "What happens if signal graphs could be reconfigured? What is higher-order FRP? Surely higher is better... this is a very surprisingly hard question."

      Introducing join in higher-order FRP allows for signals of signals, enabling dynamic switching between signals but leading to challenges like infinite lookback and memory growth.

      Quote: "We have a signal of signals... creating a new signal may need infinite lookback... memory growth is linear with time."

      To mitigate these issues, higher-order FRP requires restricting join with advanced type systems, adding complexity to the API or language.

      Quote: "The solution isn't that this is a bad idea; it's that we only should switch to signals that have safe amounts of history... How do we restrict the definition of join with fancier types?"

      Asynchronous dataflow systems like Reactive Extensions abandon the infinite and synchronous nature of signals, allowing signals to end and defaulting to asynchrony.

      Quote: "Again, we can look at our core design and see which things we cross off... we cross off that signals are infinite... we get rid of... that it's synchronous by default."

      In asynchronous dataflow, switching signals creates entirely new signals, and concepts like hot and cold signals determine if signals produce values when not observed.

      Quote: "Whenever you create one of these signals, you're creating a totally new one... If it's hot, it's going to keep producing things, and if it's cold, it's going to stop."

      Arrowized FRP uses automata (state machines) that can be composed and switched, but these automata are not connected directly to the world, limiting their scope.

      Quote: "We cross off that signal graphs are connected to the world... these nodes aren't connected to the world... when we take it out of the graph... it doesn't keep running."
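
      The automata of arrowized FRP can be sketched as pure step functions from (state, input) to (new state, output). A hedged Python sketch (names hypothetical) of composing two such automata so one feeds the other:

      ```python
      # An automaton is a function: (state, input) -> (new_state, output).
      def counter(n, _inp):
          """Stateful automaton: its output is the running input count."""
          return n + 1, n + 1

      def doubler(s, x):
          """Stateless automaton: doubles whatever comes in."""
          return s, 2 * x

      def compose(a1, a2):
          """Pipe a1's output into a2, pairing up their two states."""
          def stepped(states, inp):
              s1, s2 = states
              s1, out1 = a1(s1, inp)
              s2, out2 = a2(s2, out1)
              return (s1, s2), out2
          return stepped

      def run(automaton, state, inputs):
          """Step an automaton over a list of inputs, collecting outputs."""
          outputs = []
          for inp in inputs:
              state, out = automaton(state, inp)
              outputs.append(out)
          return outputs

      # run(compose(counter, doubler), (0, None), ["e1", "e2", "e3"]) == [2, 4, 6]
      ```

      Note that `run` must be called explicitly: the automaton is not connected to the world, which is exactly the trade-off the quote describes.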

      The choice between different FRP approaches involves trade-offs; no one method is universally better, and the decision depends on application needs.

      Quote: "This isn't a competition of like which one's better but rather different points in a design space that are complementary."

      Elm integrates aspects of these different FRP models, emphasizing a static signal graph for predictability and tooling while allowing asynchrony where necessary.

      Quote: "Elm has a thing called automaton which is the same concept... you can have all these nice guarantees when you want but still integrate with a system that has something more complex going on."

      Evaluating FRP systems requires considering factors like synchronicity, ability to handle asynchrony, support for dynamic graphs, and the complexity of code produced.

      Quote: "When you want to evaluate this, the question isn't how can I get the fanciest words on my library; it's what properties do I need for my application."

      Questions to ask when choosing an FRP system include: Is it synchronous by default? Can it handle asynchrony? Can I talk about inputs? Can I reconfigure my graph?

      Quote: "The questions you want to be asking are: Is it synchronous by default? Does it allow asynchrony? Can I talk about inputs? Can I reconfigure my graph?"

      Debugging capabilities and code complexity are important considerations; the system should make debugging straightforward and keep the code maintainable.

      Quote: "What is debugging like... does the code come out nice... you're paying a complexity cost for that, and do you need that in your application?"

      The speaker encourages learning Elm to understand the principles of FRP, as it provides a foundation that can be applied to other FRP systems.

      Quote: "If you learn Elm, then it's easy to go to these other ones, and you learn these principles which you can use there."

      In conclusion, the talk provides insights into the trade-offs of different FRP designs, emphasizing that the best choice depends on specific application requirements.

      Quote: "Hopefully, this has given you some insight... it's not about which one is better but what properties you need for your application."

    1. The article discusses Rama, a dataflow language and platform built on Clojure that leverages continuation-passing style (CPS) to generalize functions and enable powerful programming paradigms, especially for parallel and asynchronous code in distributed systems.

      Key Concepts:

      1. Rama Operations (deframaop):

      Functions that can emit zero, one, or multiple values.

      Use :> to emit values to an implicit continuation.

      Variables are prefixed with * (e.g., *v).

      2. Continuation-Passing Style (CPS):

      Functions receive an extra argument—the continuation—to which they pass results.

      Rama hides the continuation, simplifying the syntax compared to explicit CPS in Clojure.

      3. Emitting Multiple Times:

      Operations can emit values multiple times or not at all.

      Useful for operations like filtering or generating sequences without materializing collections.
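
      Outside Rama, the emit-to-a-continuation idea can be sketched in plain Python (all names hypothetical): each operation takes an explicit `emit` continuation and may call it zero, one, or many times per input:

      ```python
      def identity_op(v, emit):
          """Like (deframaop identity-rama [v] (:> v)): emit exactly once."""
          emit(v)

      def keep_evens(v, emit):
          """Emit zero or one time: a filter with no collection materialized."""
          if v % 2 == 0:
              emit(v)

      def duplicate(v, emit):
          """Emit more than once per input."""
          emit(v)
          emit(v)

      def run_op(op, inputs):
          """Drive an operation over inputs, collecting everything it emits."""
          collected = []
          for v in inputs:
              op(v, collected.append)
          return collected
      ```

      For example, `run_op(keep_evens, [1, 2, 3, 4])` yields `[2, 4]` and `run_op(duplicate, [1, 2])` yields `[1, 1, 2, 2]`. Rama's contribution is hiding this continuation plumbing behind ordinary-looking dataflow syntax.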

      4. Anonymous Operations:

      Defined with <<ramaop, allowing for closures that capture lexical scope.

      Can be passed around as first-class citizens, similar to functions.

      5. Asynchronous Emission and Partitioners:

      Rama operations can emit asynchronously, enabling distributed computing.

      Partitioners like |hash, |all, and |global relocate computation across different threads or nodes in a cluster.

      Example: |hash sends computation to a task determined by hashing a key.

      6. Multiple Output Streams:

      Operations can emit to different output streams (e.g., :>, :a>, :b>).

      Allows branching control flow based on different emitted streams.

      Handled using constructs like <<branch and inline hooks :>>.
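
      Multiple output streams reduce to passing several continuations instead of one. A hedged Python sketch (all names hypothetical), where `emit` plays the role of :> and `emit_a` the role of :a>:

      ```python
      def classify(v, emit, emit_a):
          """Route small values to the default stream (:>) and large ones
          to a second stream (:a>); each stream is its own continuation."""
          if v < 10:
              emit(v)
          else:
              emit_a(v)

      default, stream_a = [], []
      for v in [3, 42, 7, 99]:
          classify(v, default.append, stream_a.append)
      # default == [3, 7]; stream_a == [42, 99]
      ```

      Branching constructs then amount to attaching different downstream code to different continuations.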

      7. Unification:

      Merges separate computation branches using unify>.

      Ensures shared code executes after different conditional paths.

      8. Loops (loop<-):

      Support for iterative processes that can emit multiple times.

      Can be combined with partitioners for distributed loops across nodes.

      9. Optimizations:

      Rama distinguishes between deframaop and deframafn.

      deframafn must emit exactly once and synchronously, allowing stack-efficient invocation.

      The compiler optimizes code to prevent stack overflows and ensure efficiency comparable to idiomatic Clojure code.

      10. Uniformity and Composition:

      Operations, conditionals, loops, and even partitioners are all treated uniformly.

      This uniformity simplifies the language and enhances code composability.

      Applications:

      Distributed Systems: Rama's ability to emit asynchronously and control execution location makes it ideal for distributed computing tasks.

      Backend Development: Expresses computation and storage needs for backends at any scale.

      Parallel and Asynchronous Code: Simplifies writing complex parallel operations without the usual boilerplate.

      Example Highlights:

      Identity Function in Rama:

      (deframaop identity-rama [v] (:> v))

      Emitting Multiple Times:

      (deframaop emit-many-times [] (:> 1) (:> 3) (:> 2) (:> 5))

      Asynchronous Partitioning:

      (|hash from-user-id) (user-current-funds $$funds from-user-id :> *funds)

      Multiple Output Streams and Branching:

      (deframaop emit-multiple-streams [] (:a> 1) (:> 2) (:> 3) (:a> 4) (:b> 5 6))

      Conclusion:

      Rama extends the capabilities of functional programming by generalizing the concept of functions through CPS and dataflow principles. It provides:

      A powerful framework for writing efficient, parallel, and distributed applications.

      A unified approach to computation that simplifies the handling of asynchronous and multi-emission operations.

      Enhanced composability and expressiveness in code, reducing boilerplate and improving readability.

      Note: While the article delves deeply into the technical aspects of Rama and its implementation details, the overarching theme is the exploration of how CPS and dataflow paradigms can revolutionize the way we write and reason about code in distributed systems.

    1. Conal Elliott introduces 'Denotational Design' as his central paradigm for software and library design.

      Quote: "I call it denotational design."

      He emphasizes that the primary job of a software designer is to build precise abstractions, focusing on 'what' rather than 'how'.

      Quote: "So I want to start out by talking about what I see as the main job of a software designer, which is to build abstractions."

      He references Edsger Dijkstra's perspective on abstraction to highlight the need for precision in software design.

      Quote: "This is a quote I like very much from a man I respect very much, Edsger Dijkstra, and he said the purpose of abstraction is not to be vague... it's to create a whole new semantic level in which one can be absolutely precise."

      He identifies a common issue in software development: the focus on precision about implementation ('how') rather than specification ('what').

      Quote: "So I'm going to say something that may be a little jarring, which is that the state of the... commonly practiced state of the art in software is something that is precise only about how, not about what."

      He stresses the importance of making specifications precise to avoid self-deception in software development.

      Quote: "So the reason I harp onto precision is because it's so easy to fool ourselves and precision is what keeps us away from doing that."

      He cites Bertrand Russell's observation on the inherent vagueness of concepts until made precise.

      Quote: "Everything is vague to a degree you do not realize until you've tried to make it precise."

      He discusses the inadequacy of the term 'functional programming' and introduces 'denotational programming' as a better-defined alternative, referencing Peter Landin's work.

      Quote: "Peter Landin suggested the term denotative... having three properties... every expression denotes something... that something depends only on the denotations of the sub-expressions."

      He defines 'Denotational Design' as a methodology that provides precise, simple, and compelling specifications, and helps avoid abstraction leaks.

      Quote: "I call it denotational design... It gives us precise, simple, and compelling specifications... you do not have abstraction leaks."

      He outlines three goals in software projects: building precise, elegant, and reusable abstractions; creating fast, correct, and maintainable implementations; and producing simple, clear, and accurate documentation.

      Quote: "So I suggest there are three goals... I want my abstractions to be precise, elegant, and reusable... My implementation, I'd like it to be fast... correct... maintainable... and the documentation should also be simple and... accurate."

      He demonstrates Denotational Design through an example of designing a library for image synthesis and manipulation, engaging the audience in defining what an image is.

      Quote: "So an example I want to talk about is image synthesis and manipulation... What is an image?"

      He considers various definitions of an image, including arrays of pixels, functions over space, and collections of shapes, before settling on a mathematical model.

      Quote: "My answer is: it's an assignment of colors to 2D locations... there's a simple precise way to say that which is the function from location to colors."

      He applies the denotational approach to define the meanings of types and operations in his image library, emphasizing the importance of compositionality.

      Quote: "So now I'm giving a denotation... So the meaning of over top bot is... mu of top and mu of bot... Note the compositionality of mu."
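
      The model just described can be sketched directly: an image is any function from a 2-D location to a color, and the meaning of `over` combines the component meanings pointwise. A minimal Python sketch (the function names and the RGBA encoding are my assumptions, not the talk's):

      ```python
      # Sketch: the denotation of an image is a function from 2-D locations
      # to colors; colors here are (r, g, b, alpha) tuples.

      def constant(color):
          """Image that is the same color everywhere."""
          return lambda x, y: color

      def disc(radius, color):
          """A filled disc centered at the origin; transparent elsewhere."""
          return lambda x, y: color if x * x + y * y <= radius * radius else (0.0, 0.0, 0.0, 0.0)

      def over(top, bot):
          """mu(over(top, bot)) is built only from mu(top) and mu(bot):
          the top image's alpha blends it over the bottom one, pointwise."""
          def img(x, y):
              tr, tg, tb, ta = top(x, y)
              br, bg, bb, ba = bot(x, y)
              blend = lambda t, b: t + (1 - ta) * b
              return (blend(tr, br), blend(tg, bg), blend(tb, bb), ta + (1 - ta) * ba)
          return img
      ```

      Sampling the composite at a point consults only the two component functions at that same point, which is the compositionality of mu in miniature.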

      He improves the API by generalizing operations and types, introducing type parameters to increase flexibility and simplicity.

      Quote: "So let's generalize... instead of saying an image which is a single type, let's say an image of a... we'll make it be parameterized by its output."

      He introduces standard abstractions like Monoid, Functor, and Applicative, showing how his image type and operations fit into these abstractions, leveraging their laws and properties.

      Quote: "Now we can also look at a couple of other interfaces: monad and comonad."

      He explains the 'Semantic Type Class Morphism' principle, stating that the instance's meaning follows the meaning's instance, ensuring that standard abstractions' laws hold for his types.

      Quote: "This leads to this principle that I call the semantic type class morphism principle... The instance's meaning follows the meaning's instance."

      He demonstrates that by following this principle, his implementations are necessarily correct and free of abstraction leaks, as they preserve the laws of the standard abstractions.

      Quote: "These proofs always go through... There's nothing about imagery except the homomorphism property that makes these laws go through."

      He illustrates the principle with examples from his image library, such as showing that images form a Monoid and Functor due to their underlying semantics.

      Quote: "So images... Well, image has the right kind... Well, yes it is... Here's this operation we called lift one."

      He discusses how this approach allows for reusable and compositional reasoning, similar to how algebra uses abstract interfaces and laws.

      Quote: "So when I say laws hold, you should say what are you even talking about... So in order for a law to be satisfied... we have to say what equality means."

      He provides further examples of applying Denotational Design to other types, such as streams and linear transformations, showing the broad applicability of the approach.

      Quote: "Another example is... so we just follow these all through and they all work... linear transformations."

      He concludes by summarizing the benefits of Denotational Design, including precise specifications, correct implementations, and the elimination of abstraction leaks, and invites further discussion.

      Quote: "I think it's a good place to stop... I'm happy to take any questions... I'd love to hear from you."

    1. Introduction of a more elegant specification of Functional Reactive Programming (FRP) to demonstrate a general technique for designing software libraries.

      "I want to talk to you about this more elegant version for two reasons... because it's an example of a general technique for specifying, for Designing and specifying software libraries."

      FRP is based on two key ideas: basing the API on a precise denotational specification and using continuous time.

      "Function reactive programming... really is based on two key ideas... to base the entire development of the API... on a precise specification in the form that I call denotational... and then the second principle is... it is continuous time."

      The central data type in FRP is the Behavior, representing a time-varying value.

      "The central data type in FRP is something I call the behavior, which you can think of as a Time varying value."

      The meaning of a Behavior is a function from time to some type alpha, with time being continuous (the real numbers).

      "It's critically important that time is reals and not something discrete like integers."
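
      In this model a behavior really is just a function from real-valued time. A tiny Python sketch of the denotation (the names are mine, with Python floats standing in for the reals):

      ```python
      # Sketch: the meaning of a Behavior of a is a function Time -> a,
      # with Time continuous, so a behavior can be sampled at any instant.

      def time_behavior():
          """The identity behavior: its value at each instant t is t itself."""
          return lambda t: t

      def constant_b(v):
          """A behavior that never changes."""
          return lambda t: v

      def fmap(f, b):
          """Functor map: apply f to the behavior's value at every instant."""
          return lambda t: f(b(t))
      ```

      Because a behavior can be sampled at any real t, nothing in the model commits to a frame rate or tick size.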

      From the definition of Behavior, most of FRP can be derived.

      "I mentioned that from this one choice here most of FRP can follow."

      Many operations in the original API resemble general operations like Functor and Applicative.

      "Many of the operations that were in the original API look like more General operations."

      By leveraging standard abstractions like Functor, Applicative, and Monoid, we can simplify the API and get specifications for free.

      "This information which we didn't invent... we get this information for free."

      The meaning function is a homomorphism for each type class, ensuring that the laws hold automatically.

      "Our specification is that mu is a homomorphism... and then we get laws for free."

      Homomorphisms preserve structure and are key to this methodology.

      "Homomorphism is something that's come up over and over in algebra... it means that it's kind of structure preserving."

      Examples of monoid homomorphisms include length of lists and logarithms of products.

      "For example... the length of an append is the sum of the lengths... the log of a product is the sum of the logs."
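
      Both examples are ordinary monoid homomorphisms and can be checked mechanically. A small Python sketch (standard mathematics; the helper names are mine):

      ```python
      import math

      # A monoid homomorphism h satisfies h(x <> y) == h(x) <> h(y), where
      # <> is each monoid's combining operation.

      def length_is_hom(xs, ys):
          # len maps (lists, ++, []) to (naturals, +, 0)
          return len(xs + ys) == len(xs) + len(ys)

      def log_is_hom(a, b):
          # log maps (positive reals, *, 1) to (reals, +, 0)
          return math.isclose(math.log(a * b), math.log(a) + math.log(b))
      ```

      Both homomorphisms also take the identity to the identity: `len([]) == 0` and `math.log(1.0) == 0.0`.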

      This approach avoids the need to invent new definitions and relies on established mathematical structures.

      "I didn't have to invent it and that's very important... if mathematics... is telling me what it has to be, then I know that's going to be a solid design."

      The methodology guarantees that there will be no abstraction leaks if applied successfully.

      "This methodology guarantees you will not have abstraction leaks if we apply it successfully."

      Conclusion: FRP is about having a precise simple denotation and using continuous time, with semantic homomorphisms used as a specification.

      "As I mention FRP is really about two things having a precise simple denotation... and then in particular a continuous time as the germ idea."

      Applying semantic homomorphisms leads to elegant, predictable, well-founded, and simple designs.

      "It steers me toward an elegant predictable well-founded simple precise design."

      The laws of standard type classes hold for free due to the homomorphism properties.

      "The laws must hold... we get laws for free."

      The approach suggests that the semantics invents the API rather than the other way around.

      "The semantics invents the API or it suggests one for me... it tells me exactly what the API has to mean so I don't get to choose that which is a good thing."

  11. Oct 2024
    1. Keynote Summary: Enhancing Programming Languages with Direct Manipulation Interfaces

      Introduction

      • Motivation: Enhance programming languages and tools to be more interactive, human-friendly, and accessible.
        • "This talk is about research efforts to make programming languages and tools more interactive, more human friendly, and hopefully more accessible and useful to a wide variety of people."

      Current State of Programming Interfaces

      • Limitations: Traditional programming interfaces are text-based and lack the interactivity of other computer applications.

        • "The actual interfaces we use to build and understand our programs essentially look like this: we have a big text box where we punch in our source code."
      • Ideal Scenario: Blend the expressive power of programming languages with tangible, interactive user interfaces.

        • "We also want and need tangible interactive user interfaces for understanding and manipulating the concrete things we are making."

      Programming with Direct Manipulation

      • Concept: Systems that allow seamless transitions between text-based programming and direct manipulation of code and output.
        • "We would like the ability to inspect and interact with and change the output and have the system help suggest changes to the code based on these interactions."

      Historical Context

      • Legacy: The vision for interactive programming has roots in the 1960s with works like Sutherland's Sketchpad and Engelbart's interactive computing.
        • "Similar visions for interactive programming systems to augment human creativity and intelligence can be traced all the way back to the 1960s."

      Techniques and Trends

      • Renewed Interest: Recent conferences and workshops indicate a growing interest in integrating programming languages (PL) with human-computer interaction (HCI).
        • "In the past decade or so there's been renewed interest in these challenges which lie at the intersection of PL and HCI."

      Sketch and Sketch System

      • Prototype: An exploration into combining direct manipulation and programming by demonstration in a lambda calculus environment.

        • "In my group we've been exploring a few ideas in a prototype system called sketch and sketch."
      • Key Features:

        • Programming by Demonstration: Building programs through direct manipulation and demonstration in a pure lambda calculus.
        • "Every direct manipulation interaction in the output pane is codified as a change to the program in the left pane."

        • Structure Editing: Integrating abstract syntax tree (AST) manipulation within a text editor.

        • "The left pane also supports certain program transformations by hovering, selecting, and clicking on the abstract syntax tree."

        • Bi-directional Programming: Allowing changes to program output to propagate back to the source code.

        • "In sketch and sketch, bi-directional programming features are used to reconcile changes to output values."

      Broader Research Themes

      • Programming by Demonstration: Techniques for representing GUI interactions as ordinary text-based programs.

        • "Our goal in sketch and sketch so far has not been to build the ultimate visual graphics editor but rather to explore whether GUI interactions can be represented as ordinary text-based programs."
      • Structure Editing: Enhancing the usability of programming environments by supporting structure-based program transformations.

        • "Structure editing of an abstract syntax tree might be streamlined into an ordinary existing text editor."
      • Bi-directional Programming: Extending traditional lenses to mainstream languages and exploring backward evaluation for program expressions.

        • "Several recent proposals employ some kind of backward evaluation that allows all program expressions to be run in reverse."

      Live Programming and Synthesis

      • Immediate Feedback: Providing users with real-time feedback about the dynamic behavior of their programs.

        • "Live programming interfaces can be useful in educational contexts."
      • Integration with Synthesis: Developing interfaces where program synthesis techniques help fill in code holes based on user interaction.

        • "There have been other promising ideas for integrating program synthesis techniques into a live programming loop."

      Application Domains

      • Spreadsheets: Enhancing spreadsheets with more structured data types and reactive programming features.

        • "The spreadsheet interface is now probably one of the most promising approaches to live programming in a general setting."
      • Graphics Editors: Combining programming and direct manipulation in tools for data visualization and creative expression.

        • "There are a variety of different kinds of programming and GUI systems for making charts."

      Future Directions

      • General Framework: Developing a general framework for programming with traces and demonstration.

        • "Can we design every graphical user interface to be backed by text-based programs in a general programming language?"
      • Improving Usability: Studying programmer behavior to design better hybrid text and structure interactions.

        • "If we study more about how programmers go about their work, we can design better hybrid text plus structure interactions to help."

      Conclusion

      • Call to Action: Encouragement for more research and collaboration to achieve the goal of more interactive and user-friendly programming systems.
        • "There are certainly missions out there for you."

      This summary encapsulates the critical aspects and insights of Ravi Chugh's keynote on making programming languages more interactive and user-friendly through the integration of direct manipulation interfaces.

    1. Investigating the theoretical foundation of Lisp - Delving into the history of mathematical logic and its influence on Lisp - Understanding the origin story and confusion around Lisp, particularly with regards to the Lambda calculus

      Grounding of mathematics in logic - Russell and Whitehead sought to ground mathematics in logic in order to avoid paradoxes - Hilbert's formalist approach focused on consistency and the syntactic forms of mathematics

      Hilbert's program gave rise to proof theory, and Gödel's work addressed its questions about a reliable foundation of mathematical knowledge. - Gödel's construction of a self-referential statement within Principia Mathematica was the plot twist. - The arithmetization of syntax, assigning unique numbers to syntactic elements, played a key role in Gödel's proof.

      Rózsa Péter advocated for the study of recursive functions as a separate field and made significant contributions to recursion theory. - She presented a paper on this topic at the International Congress of Mathematicians in Zurich in 1932 and coined the term 'primitive recursive function'. - Despite being forbidden to teach under the fascist laws, she resumed her career after World War II and contributed to recursion theory, computer science, and mathematics pedagogy.

      Primitive recursive functions are built from basic functions and operations - Basic functions include constant zero, successor, and projection functions - Operations of composition and recursion can be used to build new computable functions
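
      The building blocks above are easy to make concrete. A Python sketch (my encoding; the bounded loop plays the role of the recursion schema):

      ```python
      # Basic functions of primitive recursion: constant zero, successor,
      # and projections, plus the recursion operator itself.

      def zero(*args):
          return 0

      def succ(n):
          return n + 1

      def proj(i):
          """Projection: pick the i-th argument."""
          return lambda *args: args[i]

      def prim_rec(base, step):
          """f(0, xs) = base(xs); f(n + 1, xs) = step(n, f(n, xs), xs)."""
          def f(n, *xs):
              acc = base(*xs)
              for k in range(n):
                  acc = step(k, acc, *xs)
              return acc
          return f

      # Addition built only from these pieces: add(n, m) applies succ n times to m.
      add = prim_rec(proj(0), lambda k, acc, *xs: succ(acc))
      ```

      Every function built this way is total: the loop bound is fixed by the first argument before the loop runs.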

      Understanding recursion theory and computable functions - Recap of how negation, conjunction, and disjunction can be modeled with addition and multiplication - Exploration of recursive functions, including non-primitive recursive functions like Ackermann's

      Classification of recursive functions - Primitive recursive functions are defined on all inputs, while partial recursive functions are not - The addition of the minimization operator creates unbounded search space, distinguishing partial recursion from primitive recursion
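
      The minimization operator itself fits in a few lines. A Python sketch (my naming; the search genuinely may not terminate, which is exactly what makes the resulting functions partial rather than primitive recursive):

      ```python
      def mu(p):
          """Minimization: least n with p(n) == 0; loops forever if none exists."""
          n = 0
          while p(n) != 0:
              n += 1
          return n

      # Integer square root via unbounded search: least n with (n + 1)^2 > x.
      def isqrt(x):
          return mu(lambda n: 0 if (n + 1) * (n + 1) > x else 1)
      ```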

      Turing machines formalized computability and were influential in the history of computation. - Turing modeled computability on the intuitive, step-by-step calculations of a human computer. - The various models of computation were shown equivalent in 1936, a result reflected in the Church-Turing thesis.

      Lisp was devised as a vehicle for developing a theory of computation. - John McCarthy led the development of Lisp as a programming language and as a formalism in mathematical logic. - Lisp as an idea is found behind every implementation that came after.

      Lisp is a realization of the Lambda calculus - Lisp is seen as a realization of Lambda calculus based on notation - The association between Lisp and Lambda calculus explained through historical context

      An interpreter for Lisp written in Lisp turned it into a programming language. - Lisp's development as a language was prompted by the realization that its universal function could serve as an interpreter for Lisp itself. - It allowed for symbolic computation and the translation of formulas into code.

      Clojure code translating the basic functions of primitive recursive functions. - Explanation of the machinery involved. - See the blog post titled 'Lisp's Grandfather Paradox' for more detailed information.

    1. Functional Reactive Programming mindset and principles - Exploring the mindset from which FRP grew and its key principles - Encouraging designing programming languages differently based on domain independent notion

      FRP principles: precise denotation and continuous time - Having a precise and simple denotation in FRP for elegance and rigor. - Continuous time in FRP leads to naturalness and composability, with deterministic and concurrent models for enhanced reasoning and debugging capabilities.

      Denotational specification is crucial for functional programming mindset. - The denotation must be precise and simple for valid conclusions and correct reasoning. - In library design, clear communication is essential for usage and understanding.

      Functional Reactive Programming (FRP) applies the same concept for time as for space. - FRP asserts the idea of continuous time as opposed to discrete time. - It draws parallels with the evolution of graphics from bitmap to vector graphics and the benefits of resolution independence.

      Avoid discarding information early to enable compositionality - Discretization of information leads to loss of data and hinders compositionality - Functional programming emphasizes not discarding information to enable infinite-sized and finely detailed data structures

      Behavior in FRP is a continuous flow of values, often represented as a function of time. - Behavior in FRP is closely related to mathematical values, specifically as a flow of values over continuous time, represented as a function from real time. - The API and semantics of FRP mainly stem from the foundational concept of behavior as a function of time, which is essential for understanding and defining the framework.

      Haskell uses type classes for overloading behaviors - The Num type class in Haskell is instantiated to overload numeric operations for behaviors - All numeric classes in Haskell are overloaded for behaviors and work in a regular, principled way

      Lifting functions and transformations simplifies behavior definition and reasoning. - Lifted behavior functions sample argument behavior at specific times and apply transformations. - Transformations and numeric operations are straightforward and provide regularity, simplifying design and reasoning.
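
      Lifting can be sketched with operator overloading, loosely mirroring Haskell's Num instance for behaviors (the class and helper names here are my assumptions, not the talk's API):

      ```python
      # A lifted operation samples each argument behavior at the same time
      # and combines the results pointwise.

      class Behavior:
          def __init__(self, f):
              self.at = f  # sample: time -> value

          def __add__(self, other):
              return Behavior(lambda t: self.at(t) + other.at(t))

          def __mul__(self, other):
              return Behavior(lambda t: self.at(t) * other.at(t))

      def lift2(op, b1, b2):
          """General binary lifting: sample both behaviors at t, then apply op."""
          return Behavior(lambda t: op(b1.at(t), b2.at(t)))

      time_b = Behavior(lambda t: t)
      const = lambda v: Behavior(lambda t: v)
      ```

      With this in place, `time_b + const(1)` is itself a behavior, and the numeric laws hold pointwise because they hold for the underlying numbers.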

      Introduction to key operations for manipulating data types A and B - Constraint on maintaining non-decreasing times for leaving temporarily - Understanding the operations of generating type B events from type A and capturing time varying Boolean values

      Events trigger new behaviors, with time stamps. - Each event occurrence results in a new behavior. - Before sampling, consider the time and previous occurrences.

      Introduction to monoid operation in Haskell - Explains the relation between fmap and applicative type class - Details the laws and properties of monoid operation and its benefits

      Leveraging mathematical context for reasoning and learning - Knowledge of monoids, functors, and applicatives helps in reasoning and leveraging work - The ability to get specifications for free by leveraging mathematical context

      Denotational design principles lead to laws in classes - Denotational design methodology ensures laws must always hold in classes without needing to prove them - Origin of FRP: Transition from graphics to automated program transformation and functional programming at Carnegie Mellon

      Functional animation based on streams of pictures - Functional animation model by Kavi Arya based on streams of pictures in the Miranda language - John Reynolds suggested considering functions from the reals to address interpolation issues

      Developed a system using multi-way constraints and off-the-shelf constraint systems - Implemented functions of time in the constraints to specify the relationships between components - Used high-level, imperative approach with asserting and retracting constraints for handling interaction

      Modular programming across network - Efficiently passing constraints and solving them locally - Transition to graphics hardware development at Microsoft research

      Optimizing execution cost based on active parts - Discussing challenges with garbage collection and push-pull pointers - Importance of striving for provably correct solutions in programming

      Paul Hudak's impactful contributions to Haskell and functional reactive programming - Met Paul at Microsoft Research in 1995 or 1996, where he presented his language embedded in Haskell for music composition. - Paul was enthusiastic about and encouraged further work in behavior modeling and Haskell, going on to collaborate and supervise various FRP projects in the following 15 years.

      Mathematical functions on the reals exhibit fascinating pathological behaviors - Functions can be everywhere continuous and nowhere differentiable. - Some mathematical functions lead to non-termination and paradoxical results.

      Exploring different systems and their denotations - Various systems like Bacon and Elm with different approaches are discussed - Discussion on the fundamental attitude towards programs and computers

    1. Overlap between functional programming and formal methods - Functional programming and formal methods have a shared intent of understanding computation. - Functional programming, particularly Haskell, emphasizes equational reasoning and treating programming as mathematics.

      Early computation reflected the original weak and reliable machines - The language directly reflected the machines due to inability to afford abstraction - Von Neumann model of computation involved massively parallel and sequential computing

      Scientific revolution shifted from guessing to knowing through experimentation and refutation - Galileo improved telescope technology and observed moons of Jupiter, disproving Earth as the center of the universe - Newton developed calculus to describe planetary motions, leading to enormous progress in science

      Galileo's approach to knowledge - Galileo's clear and precise investigations contrasted with vague grand discourses based on metaphysical arguments - The importance of patience and humility before the facts in advancing knowledge and civilization

      John Backus's message about imperative programming being a mistake and the contradiction in computing - Backus led the effort to develop Fortran and won the Turing Award for it in 1977 - The imperative programming paradigm lacks nice mathematical properties and hinders progress toward correctness

      Testing can only move towards reliability but cannot guarantee it - Decision procedures are impressive but not scalable for solving NP-hard problems - Focusing on properties for correctness may miss the point, as correctness is the ultimate important property

      Logic and computation combined in the 20th century - Automated computation in the middle of the 20th century - Dependently typed purely functional programming developed in the 21st century

      Compositional reliability is essential for scalability and ambitious goals. - Reliability means working most of the time within a certain precision. - Compositional reliability ensures that combining mostly reliable components does not yield a mostly unreliable result.

      Compositionality is key for practical correctness - Correctness is hard due to need for verification and handling incorrect guesses. - To make correctness practical, compose correct specifications, implementations, and proofs.

      Quality of specifications is crucial for program correctness - Programs must not be considered their own specifications for clarity - Our brains require simple specifications for reliable understanding and correctness

      Mathematical models simplify reasoning process. - Mathematical models are represented as functions of time, space, or infinite trees. - Implementations in physics involve electrical flows through transistors, but models are simpler.

      Operational models are more complex than necessary - Operational semantics fail to provide elegance and compositionality - Operational models are inefficient and provide unrealistic insights

      Working on paper proofs in Haskell can lead to mistakes. - Reliance on oneself or paper reviewers for error detection. - Striving for clarity, beauty, and power in programs due to formal language feedback.

      Introduction to Monadic IO in Haskell - Haskell's monadic IO is sequential and lacks denotational semantics, leading to challenges in concurrency and operational semantics. - The loss of denotational nature in monadic IO results in decreased efficiency and reliability due to the imperative paradigm limitations and lack of support for practical reasoning.

      Haskell's correctness paradigm works best at a small scale. - Large-scale computations may require imperative communication and lack reasoning capabilities. - Haskell's types are useful for preventing some bugs but may miss the heart of the matter in terms of correctness.

      Dependent types provide complete correctness specifications. - Dependent types allow embedding values into types for symmetry and simplicity. - Dependent types offer precise machine-understandable correctness definitions.

      Exploring dependent types for language parsing - The speaker discusses formulating languages for parsing using dependent types. - He explores the benefits and capabilities of doing so as compared to traditional methods.

      Languages as functions from strings to types - Languages can be viewed as functions from strings to propositions or types - Empty language is a function that returns the type or proposition that a string is in the language

      Concatenation in language theory requires proofs for membership determination - Concatenation defines string formation without computation details - Existential types in Agda allow for proof-based language definition
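
      The encoding can be imitated outside Agda by letting a language map each string to membership evidence, or to nothing. A Python sketch (my encoding; Agda's existential becomes a search over all splits):

      ```python
      # A "language" is a function from strings to evidence: None means the
      # string is not in the language; any other value is a proof-like witness.

      def empty_lang(s):
          return None  # no string is a member

      def lit(c):
          """The language containing exactly the one-character string c."""
          return lambda s: () if s == c else None

      def concat(l1, l2):
          """s is in l1 . l2 iff some split s = u + v has evidence for u in l1
          and for v in l2; the returned witness records that split."""
          def lang(s):
              for i in range(len(s) + 1):
                  e1, e2 = l1(s[:i]), l2(s[i:])
                  if e1 is not None and e2 is not None:
                      return (s[:i], e1, s[i:], e2)
              return None
          return lang
      ```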

      Regular expressions recognized as regular languages via derivatives. - Using ν ('nullable') to check whether the empty string is in the language. - Using δ (the derivative) to strip a leading character and then check for nullability.
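
      A Python sketch of the two operations, Brzozowski-style, over a tiny tuple encoding of regular expressions (my encoding, not the talk's Agda):

      ```python
      # Nodes: ("empty",), ("eps",), ("chr", c), ("alt", a, b), ("seq", a, b), ("star", a)
      EMPTY, EPS = ("empty",), ("eps",)

      def nu(r):
          """Nullability: does r accept the empty string?"""
          tag = r[0]
          if tag in ("eps", "star"):
              return True
          if tag in ("empty", "chr"):
              return False
          if tag == "alt":
              return nu(r[1]) or nu(r[2])
          return nu(r[1]) and nu(r[2])  # "seq"

      def delta(r, c):
          """Derivative of r with respect to a leading character c."""
          tag = r[0]
          if tag in ("empty", "eps"):
              return EMPTY
          if tag == "chr":
              return EPS if r[1] == c else EMPTY
          if tag == "alt":
              return ("alt", delta(r[1], c), delta(r[2], c))
          if tag == "seq":
              d = ("seq", delta(r[1], c), r[2])
              return ("alt", d, delta(r[2], c)) if nu(r[1]) else d
          return ("seq", delta(r[1], c), r)  # "star"

      def matches(r, s):
          # Differentiate by each character in turn, then check nullability.
          for c in s:
              r = delta(r, c)
          return nu(r)
      ```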

      Applying denotational design to challenging problems - Discussion on applying denotational design to computer vision, speech recognition, speech synthesis - Contrasting machine learning approach with feed a bunch of data into the model

      Deep learning's reliance on back propagation poses challenges - Back propagation is complex and stateful, hindering efficiency - The sequential nature of back propagation makes it hard to parallelize, impacting performance

      Local optimization is effective in solving problems. - Eliminating unnecessary complexity from imperative and graph-based models. - Improving and simplifying everything to make remaining problems apparent.

      Shifting to new paradigms in software development - Referencing the importance of having creative language designers and logicians for innovation - Mentioning the book 'The Structure of Scientific Revolutions' by Thomas Kuhn and its relevance to paradigm shifts in scientific fields

      Invest in multi-generational advancements beyond short-term survival - Encourage investments in Galilean revolution in Computing for long-term benefits - Promote Hardware design for significant performance efficiency improvement

      Hardware design and elegant specifications require machine check proofs for performance and scalability. - The need for proofs and elegance in hardware design to demonstrate performance and scalability. - Question raised on the complexity of meta-specifications and the trustworthiness of highly expressive types.

      Computing involves counting or measuring sets and types. - It's important not to put arithmetic into the specification, but to prove it from a simpler specification. - The burden of computability is not necessary with dependent types, allowing for simpler specifications.

      Building practical solutions for interfacing with complex specifications - Addressing the challenges of bridging the gap between simple elegance and messy realities in hardware design - Exploring the possibility of creating useful components as a solution, despite potential limitations and difficulties

      Consider transitioning to exact real computing for precise numerical computation. - Lazy evaluation enables correct computing on infinite representations. - Designing efficient hardware for exact real computation is feasible with reprogrammable hardware.

      Define a data type for floating point numbers with error bound and proof. - Floating point computation is non-compositional due to approximation errors. - Adaptive representation allows for defining functions with real semantics using floating point numbers.
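
      One way to make this concrete is to pair each approximate value with an error bound and have arithmetic propagate the bound. A Python sketch (my construction; it ignores the rounding error of the combining operations themselves):

      ```python
      class Approx:
          """An approximate value plus evidence: the true result lies in
          [value - err, value + err]."""

          def __init__(self, value, err):
              self.value, self.err = value, err

          def __add__(self, other):
              # Error bounds add under addition.
              return Approx(self.value + other.value, self.err + other.err)

          def __mul__(self, other):
              # Bound for a product: cross terms plus the product of errors.
              e = (abs(self.value) * other.err
                   + abs(other.value) * self.err
                   + self.err * other.err)
              return Approx(self.value * other.value, e)
      ```

      Because each result carries its own bound, composing operations composes the evidence too, restoring a compositional reading to floating point computation.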

    1. Computability theory and algorithms history - Algorithms date back to Classical Greece and 9th century Persia. - Formal mathematical definition of effective computability proposed in the 20th century.

      Provability and undecidability in mathematics - False statements being provable leads to serious consequences. - True statements that are not provable can be frustrating, especially in mathematical problem-solving.

      Introduction to propositions as types and natural deduction by Gentzen - Gentzen introduced natural deduction, the main form of logic we use today - He also developed the sequent calculus and introduced '∀', an upside-down 'A', for 'for all'

      Understanding implications in proofs - Implications are about assuming and not proving - Different rules for concluding A and B in proofs

      Direct proof simplifies proof process - Substitution may add nodes but simplifies by removing sub-formulas - Consistency ensured by absence of proof of false in logic

      Introduction to Simply Typed Lambda Calculus - Church developed Simply Typed Lambda Calculus to establish a consistent system. - Lambda calculus includes functions, pairs, and the ability to build various data types.

      Propositions in logic correspond to types in a programming language - Proofs in the logic correspond to terms programs in the programming language - Simplification of proofs corresponds to evaluation of programs
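
      The correspondence can be illustrated, loosely, even in Python: a pair is evidence for a conjunction and a function is evidence for an implication (an analogy only, since Python's types prove nothing):

      ```python
      def and_comm(proof):
          """'Proof' of A and B -> B and A: given evidence (a, b), produce (b, a)."""
          a, b = proof
          return (b, a)

      def modus_ponens(f, a):
          """Implication elimination: from evidence for A -> B and for A, get B."""
          return f(a)
      ```

      Running these functions is the third leg of the correspondence: evaluating the program normalizes the proof.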

      Curry Howard is a foundational concept for various fields - It applies to intuitionistic logic and all areas of logic - Logicians and computer scientists independently discovered similar concepts

      Lambda calculus may be a universal programming language for aliens. - Lambda calculus could be easier for aliens to decipher than other programming languages due to its logical foundation. - It is speculated that aliens could potentially understand Lambda calculus as it is rooted in fundamental logic principles.

      The strength of the weak electron force is crucial for the existence of matter and life. - If the electron force was slightly stronger, matter wouldn't form and life wouldn't exist. - Multiple universes could have different electron forces and gravities, but a universe without logic is hard to imagine.

      Exploring correspondence between logic and programming features - Linear Logic corresponds to session Types for concurrency - Category theory considered as the third column linking multiple concepts

      Computer science impacts knowledge discovery - Logicians often make discoveries before computer scientists due to their experience. - Linear logic and concurrency had early correspondence, leading to joint discoveries.

    1. Exploring notational design in computer science - Episode 17 delved into philosophical aspects, while this episode focuses on practical applications. - Addressing the need for operational semantics and elegance in reasoning and design, alongside listener questions.

      Work environment and shift in focus - The department was initially conducive for mathematically oriented work and founded by three mathematicians. - The focus shifted over time and became less favorable for the speaker.

      Elegance is about concise expressibility - Definition of elegance involves simplicity and mathematics we already know - Elegance does not mean familiarity or subjectivity, opposite to what is commonly perceived

      Mathematics values concise expressions and reusability. - Mathematicians value expressing things concisely in the core fields of mathematics. - Abstract algebra is a pragmatic tool derived from the need for reusability in mathematics.

      Capitalism promotes short-term value work - Values of capitalism lead to focusing on immediate results without considering long-term impact. - Precise simplicity is essential for understanding complexity and elegance in theories.

      MIT shifted from teaching Scheme to Python due to industry demand - Python is favored by companies for hiring programmers. - MIT's switch to Python reflects the influence of capitalism over academic values.

      Courage to go beyond easy solutions - Being uncomfortable with proposed solutions but lacking a better one - Importance of speaking up and listening in decision-making processes

      Choosing personal responsibility and honesty in values - Choosing not to prioritize promotions and raises by avoiding rocking the boat - Recognizing the brief window of opportunity to make an impact in the world

      Scientific method emphasizes experimentation and progress towards truth - Experimentation measures theories against reality and guides where to look and what to test for - Capitalism focuses on short-term value; individuals must prioritize work of lasting value by making strategic choices and being skeptical of established norms and technologies.

      Choosing work of lasting value over short-term gain requires sacrifice and dedication - Prioritizing meaningful work over maximizing bonuses and raises can be challenging but rewarding - Agreeing with Dan on valuing elegance in science and using objective criteria for progress

      Complex questions must be formally checked for trustworthiness. - Complicated questions are likely false and can fool most people. - Questions that cannot be formally checked must be approached with simplicity and pragmatism.

      Elegance is a duality of simplicity and cost - Simplicity requires mental effort and is difficult to find - Complexity results in usage cost for the user and creation cost for the creator

      Elegance and easily formalizable values discussed - Dan and the speaker share similar values on elegance - Elegance should be easily formalizable and objective to avoid self-deception

      Occasionally compelling theories may be disproven by measurements - Mistakes in measurements can lead to disagreement with theories - Compelling theories may prompt reevaluation and remeasurement to verify accuracy

      Proof prevents self-delusion and ensures arrival at truth - Proof tells us whether we haven't finished the proof or have truly reached the truth, unlike informal arguments - The cost of proof is the cost of knowing we are right, which outweighs the detraction from performance

      Proof is essential for optimizing performance in software engineering. - Without proof, optimization can lead to self-defeating results and shaky correctness. - Engineers prioritize reliability and efficiency, with energy efficiency being a key aspect in modern software development.

      Proper documentation in proof states reveals and corrects mistakes. - Proof not only confirms correctness but also helps constructively identify and correct mistakes. - Would you rather appear right or become right? Proof is key to becoming right and challenges the idea of already being right.

      Code documentation must include proofs for understanding - The statement of the theorem represents the functionality of the software - The proof ensures the implementation details are correct

      Elegance is key for simplicity and value in theorems and proofs - Simple and precise theorems provide more value to users than complex specifications, leading to practical benefits. - Striving for elegance in proofs reduces unnecessary complexity and effort, ensuring a better outcome.

      Creating high value and cost-effective solutions with efficiency and sustainability - Balancing high value with cost-effectiveness and efficiency to create sustainable solutions. - Emphasizing the importance of simplicity, correctness, and elegance in delivering value to end users.

      Analogy between compositional semantics and homomorphic design. - The essence is the same but in a more specific form. - Computers are analog, made out of nature, and cannot run at nature's own rate.

      Nature's discreteness is impossible to achieve in this universe. - The discreteness of nature at the Planck level is interesting but impractical in software or hardware implementation. - Even hardware built in this universe cannot achieve the required sampling rate set by nature.

      Digital abstraction is about bits and bit patterns. - Digital abstraction involves interpreting data as bits and patterns, not numbers, trees, or graphs. - Using precise and truthful analogies, such as homomorphism in mathematics, is essential for accurate representation.

      Groups and their relation in engineering - Groups like real numbers and symmetry groups serve as a basis in engineering. - Monoids, such as natural numbers and strings, are also used in programming.

      Logarithms provide precise analogies between multiplication and addition - Log of a product is the sum of the logs (log a + log b) - Logarithms allow computation by turning multiplication into addition
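      The homomorphism law the annotation describes, log(a·b) = log(a) + log(b), is easy to check numerically (a quick illustration, not from the episode itself):

```python
import math

# log is a homomorphism from (positive reals, *) to (reals, +):
# log(a * b) == log(a) + log(b), which is why a slide rule can
# multiply two numbers by adding two lengths.
a, b = 123.0, 456.0

product_directly = a * b
product_via_logs = math.exp(math.log(a) + math.log(b))

print(product_directly)   # 56088.0
print(abs(product_directly - product_via_logs) < 1e-6)  # equal up to rounding
```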

      Creating an implementation to add natural numbers using a machine - Describing the need for a dependable and efficient implementation - Resolving the challenge of using a machine that deals with bits, not numbers

      Understanding the three-step process of converting numbers to binary and interpreting output - First step involves converting a number to binary before interpretation - Second step includes interpreting the binary using a mathematical function to get the number output

      Denotational design ensures consistent results from input to output. - Correctness in denotational design means getting the same result from different paths. - Denotational design aims for a beautiful and elegant formulation of implementations and correctness proofs.
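      The encode/operate/interpret recipe and the "same result from different paths" notion of correctness can be sketched in Python for natural-number addition; all function names here are illustrative:

```python
def to_bits(n: int) -> list[int]:
    """Encode a natural number as little-endian bits (the representation)."""
    bits = []
    while n:
        bits.append(n & 1)
        n >>= 1
    return bits or [0]

def meaning(bits: list[int]) -> int:
    """Interpret bits back as the number they denote."""
    return sum(b << i for i, b in enumerate(bits))

def add_bits(xs: list[int], ys: list[int]) -> list[int]:
    """Ripple-carry addition on bit lists: the 'machine-level' operation."""
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        x = xs[i] if i < len(xs) else 0
        y = ys[i] if i < len(ys) else 0
        total = x + y + carry
        out.append(total & 1)
        carry = total >> 1
    if carry:
        out.append(carry)
    return out

# Correctness = the square commutes: interpreting the machine's result
# gives the same answer as adding the numbers themselves.
m, n = 37, 91
assert meaning(add_bits(to_bits(m), to_bits(n))) == m + n
```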

      Implementing homomorphic data representations using different mathematical structures. - Utilizing the same vocabulary for data representations to maintain homomorphism. - Exploring various mathematical structures like monoids, groups, rings, and vector spaces in relation to machine learning and linear algebra.

      Linear algebra deals with vector spaces over a ring with scalars. - Scalars form a ring, which can also be a commutative ring or a field. - Vectors can be added and scaled by scalars, forming the basis of linear algebra.

      Matrices encode linear maps - Using currying, matrices represent functions from vectors to vectors - Matrices describe linear functions, making it easier to prove properties

      Matrix multiplication is associative - Matrix multiplication involves matching up rows and columns, multiplying corresponding elements, and adding them up. - Associativity in matrix multiplication means the order of multiplication doesn't affect the result.
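      Associativity can be checked directly on small matrices (illustrative code, not from the source):

```python
def matmul(A, B):
    """Row-by-column product of matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

# (A B) C == A (B C): the grouping of the multiplications doesn't matter.
assert matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
```

As the next annotation argues, this is no accident: it is inherited from the associativity of function composition.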

      Understanding the purpose of a matrix as denoting a linear transformation - A matrix represents the mapping from a matrix to a linear function - Linear algebra is about functions that are linear, not about matrices

      Matrices are a more efficient representation of linear functions. - Matrix multiplication follows a completely systematic denotational design. - Choosing the right representation, such as matrices, is crucial for success.

      Matrix multiplication must be a perfect analogy to function composition. - Linear Maps and matrices speak the same language. - Composition is the main motivation for matrix multiplication in linear algebra.

      Matrix composition is equivalent to interpreting matrices as linear maps and then composing them. - Correct matrix multiplication is necessarily associative. - Defining equality for matrices and linear functions is crucial for consistency.
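      The "multiplication is composition" claim can be tested directly; `apply` here is a hypothetical name for the meaning function that sends a matrix to the linear map it denotes:

```python
def apply(M, v):
    """The meaning of a matrix: the linear map it denotes."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
v = [1, -1]

# meaning(A x B) == meaning(A) composed with meaning(B): matrix
# multiplication is defined precisely so that this analogy is exact.
assert apply(matmul(A, B), v) == apply(A, apply(B, v))
```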

      Proving algebraic properties is not necessary for understanding concepts. - Algebraic properties are symptoms of superficial understanding. - Properties can be assumed to hold without explicit proof, as they follow from discipline.

      Clear interpretation of linear maps and matrices is crucial for correct implementation. - Linear algebra is not about computers but using them as tools for visualization. - It is essential to understand the analogy between linear maps and matrices for accurate implementation.

      Specification drives towards simplicity, implementation towards efficiency. - Using functions for linear maps is simpler than matrices to avoid errors in interpretation. - Introducing denotation helps reconcile different interpretations of matrices, ensuring correctness.

      Specifications and implementations should be pulled in opposite directions - Specifications should be simple and free of detail, while implementations should be full of clever tricks and optimized for the specific hardware - The flaw in operational semantics is that it tries to put specifications and implementations together, rather than allowing them to be different

      Correctness proofs emphasize simplicity for valuable theorems - Denotation simplifies theorem proof by removing operational complexities - Efficient implementation requires consideration of modern processing elements like GPU, ASIC

      Understanding function composition and associative properties. - Importance of both formal and informal reasoning in relation to function composition. - Distinguishing between operational implementations for better performance and theoretical theorems for proofs.

      Identifying linear operations and their presence in algebra - Linear operations include left and right projection of a pair, and appending zero to the pair - Exploring the existence of these operations within an algebraic structure, such as vector spaces and categories

      Denotational design and linear maps in category theory - Linear maps and building blocks of denotational design - Understanding the mathematical realm and implementing the linear transformations

      Showing type safety through operational semantics - Operational semantics used to demonstrate correctness of a computable language - Type safety proven through strong normalization and proper type relations

      Operational semantics is a means to an end, not the goal. - Using operational semantics to solve problems definable outside of technology. - Creating a language with a type system to implement mathematical concepts in computing.

      Designing programming interfaces vs. languages - I don't design languages, I design programming interfaces and implementations. - Programming languages have two parts: descriptions and gluing things together.

      Choose one host language and embed domain-specific vocabularies to avoid constant language creation. - Peter Landin suggests embedding all vocabularies in a host language, distinguishing between domain-independent and domain-specific components. - This approach leads to pragmatic benefits and helps in avoiding the constant reinvention of programming languages.

      Don't design languages - Existing imperative languages like C, C++, JavaScript are not conducive to adding new features easily. - Functional languages, particularly non-strict function languages, and dependently typed languages are better at hosting other vocabularies.

      Operational and denotational semantics in language designing - Choosing between operational and denotational semantics for libraries - Challenges in separating operational and denotational semantics in some cases

      Understanding denotational design in mathematical manipulation - Differentiating essential hard work from inessential hard work in math proofs - Exploring representations and operations in linear algebra and polynomial manipulation

      Automatic differentiation is about functions that are differentiable - Definition and importance of differentiable functions in automatic differentiation - The connection between automatic differentiation and denotational design

      Discussion on full abstraction and denotational semantics - The conversation revolves around the impressive denotational semantics and its relation to full abstraction. - The discussion also involves a comparison between concrete models and the challenges in achieving full abstraction in sequential and parallel computation.

      Full abstraction is a key concept for equivalence - Observable operational equivalence means matching in all contexts - Many current languages lack full abstraction, which is historically accidental

      Exploring the concept of parallelism in computational functions - It involves understanding PCF (Programming Computable Functions) and the lambda calculus - The implementation challenge arises from deciding the evaluation order when dealing with false arguments

      Differentiating between operational and denotational semantics in defining discourse - Operational semantics should not dictate denotational semantics - We should question and challenge the legitimacy of paradigms defining discourse

      Imperative knowledge crucial for realistic 21st-century program implementation. - Declarative programs in high-level languages still need to meet the machine for practical interest. - Care about implementation involves proof, specification, correctness, and elegance.

      Emphasizing the importance of proof in efficient specifications - Discussion on the necessity of detailed proofs for efficiency - Highlighting the comparison between operational and denotational approaches in linear algebra

      Prevention through higher-level language and rich type systems - Higher-level language with a simple denotational model is expressive and aids in preventing errors - Focus on a small subset of machine behaviors that reflect correct execution of simple programming notions

      Handling errors and exceptions in programming - Errors are things that cannot be captured in the semantic domain, leading to exceptions - A failure of the type system can result in errors, showing a need for a better system like in C or Pascal

      Dependent types provide equivalent reasoning to foundations of mathematics and logic. - Dependent types offer a general solution for proofs and encoding in a self-consistent logical framework. - The entry barrier for learning advanced concepts like dependent types can be intimidating but crucial for ecosystem development.

      Bridging familiarity and elegance in programming paradigms - Discussing the balance between familiarity and elegance in programming paradigms - Emphasizing the importance of making small tweaks to familiar paradigms for easier adoption

      Challenging existing paradigms in computation - Exploring fundamental weaknesses in current computational paradigms - Adapting to the deceleration of Moore's Law and the need for innovation

      Contributing to existing paradigms may bring short-term popularity but leads to dead ends - Choosing to contribute to existing paradigms may result in short-term popularity, kudos, and raises. - However, it also involves expending life energy, a non-renewable and precious resource, into something that is a dead end.

      Learning negotiation and mediation from Carl Rogers - Reflecting the other person's point of view to their satisfaction - Listening deeply and connecting with the other person's perspective

      Facilitating heart-to-heart communication. - The facilitator observes and guides the process. - The dialogue creates a deeper connection and understanding between the individuals involved.

      Belief clings but Faith lets go - Belief is the insistence that the truth is what one would wish it to be - Faith is an unreserved opening of the mind to the truth, without preconceptions

    1. Here are the key points regarding Rama's new "instant PState migration" feature and the context around it:

      Status Quo of Database Migrations

      • SQL Databases: Support schema evolution through Data Definition Language (DDL) and Data Manipulation Language (DML).
      • Support alterations in table structures but may require complex transactions and can incur downtime.
      • Issues include re-implementation of logic in SQL, extended migration times, and potential locking of tables.

      • NoSQL Databases: Limited built-in support for schema migrations; often require custom solutions or third-party tools.

      • Many document databases are "schemaless," making it cumbersome to change existing data structures.
      • Common approaches include handling migrations at read time or using third-party tools like Mongock.
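      The read-time ("lazy") migration pattern for schemaless stores can be sketched as follows; the document shape, version numbers, and field names are hypothetical, not any particular database's API:

```python
# Each stored document carries a schema version; reads upgrade old
# documents on the fly by chaining version-to-version migrations.

def v1_to_v2(doc: dict) -> dict:
    doc = dict(doc)
    doc["name"] = doc.pop("username")   # rename a field
    doc["version"] = 2
    return doc

def v2_to_v3(doc: dict) -> dict:
    doc = dict(doc)
    doc.setdefault("tags", [])          # add a field with a default
    doc["version"] = 3
    return doc

MIGRATIONS = {1: v1_to_v2, 2: v2_to_v3}
CURRENT_VERSION = 3

def read(doc: dict) -> dict:
    """Upgrade a stored document to the current schema as it is read."""
    while doc.get("version", 1) < CURRENT_VERSION:
        doc = MIGRATIONS[doc.get("version", 1)](doc)
    return doc

print(read({"version": 1, "username": "ada"}))
# {'version': 3, 'name': 'ada', 'tags': []}
```

The trade-off, which the comparison above highlights, is that the migration code must be kept around indefinitely and every read pays its cost, whereas Rama's approach rewrites data in the background.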

      • NewSQL Databases: Aim to combine NoSQL's scalability with SQL's transactional integrity.

      • Though effective, they still retain many limitations of traditional SQL migrations.

      Features of Instant PState Migrations in Rama

      • Expressive: Migrations can be performed with arbitrary transformations written in the user’s programming language, offering more flexibility than SQL.

      • Instantaneous: The data migration process is quick; immediately after deployment, all PState reads reflect the migrated data regardless of data volume.

      • Durable and Fault-Tolerant: Automatically handles persistent changes in a consistent manner without manual intervention. Migrated data is rewritten in the background during ongoing operations, maintaining overall application performance.

      Migration Process

      1. Schema Definition: Developers define the new schema for PStates and specify migration functions using existing application logic.
      2. Deployment: Deployments use CLI commands that spin up new workers, allowing for seamless transitions.
      3. Progress Monitoring: Migration status is available in the UI, allowing for real-time tracking of the migration process without interrupting service.

      Advantages of Using Rama

      • Simplifies schema evolution, reducing operational pain compared to traditional databases.
      • Retains all historical data through an event sourcing architecture.
      • Enables easy updates and adjustments in response to changing business needs without downtime.

      Conclusion

      Rama's instant PState migration significantly enhances the responsiveness and flexibility of application development, providing powerful tools for developers to manage schema changes efficiently.

  12. Sep 2024
    1. Conal Elliott's work on denotational design and his influential papers - Conal got his PhD at Carnegie Mellon University in the '90s under Frank Pfenning, working on higher-order unification. - Conal has devoted his life to thinking about and refining graphical computation and the tools behind it, and has published influential papers on various topics related to functional programming and denotational design.

      Living in a forest setting with a deep connection to nature. - Conal lives on 20 acres next to his family's 60 acres and has a deep emotional connection to the place because of his parents' presence. - He sees a connection between nature and technology, highlighting the non-sequential nature of computation and neurology.

      We are in a pre-scientific age of thinking about computation. - Humans have created thinking organisms that think systematically, leading to computation. - We are in an awkward phase of thinking about computation in a clumsy and pre-scientific way.

      Humans are driven by curiosity to understand the universe. - We have a limited ability to perceive the universe due to our evolutionary constraints. - Through the advancement of science and technology, we have developed tools like telescopes, microscopes, high-speed cameras, and time-lapse to enhance our perception.

      Elegance and wonder in computer science - Elegance is the deepest value in computer science, inspiring a sense of play and wonder. - Computer science is in a deeply inelegant phase, but there is potential for Elegance and Beauty in the field.

      Elegance as a guiding value in theoretical physics - Elegance guided Einstein in developing the special and general theory of relativity. - Modern civilization is built on general relativity and quantum physics; GPS system corrects for relativity.

      Elegance and simplicity in formalizing concepts in computer science - Elegance and simplicity in formalizing concepts are related - People often mistake familiarity for simplicity in programming

      Academia today lacks time for critical thinking - Focused on churning out papers and credentials - Issue of education accessibility affecting teaching quality

      Semantics is crucial in programming - Meanings are called semantics - The relationship between a program and its meaning is important

      Dana Scott answered the crucial question of the mathematical meaning of the lambda calculus in 1970. - The lambda calculus was originally intended for encoding higher-order logic and quantifiers, not for programming. - Peter Landin realized the potential of the lambda calculus for programming and introduced the concept of executing the lambda calculus on a machine.

      Languages convey meanings, computation looks at meanings. - Languages and programming languages serve the same purpose: to convey meanings. - Computation and technological tools help us observe and understand meanings in various forms, from stars and quasars to microorganisms and atoms.

      Euclid revolutionized geometry with his conceptual approach - Introduced a new way of thinking about geometry with axioms and postulates - Plato's influence on the idea of mathematical space and its relation to the physical world

      Mathematics describes real truth and possibly taps into platonic truth. - Platonist perspective considers mathematics as a way to describe truth beyond us. - Success of mathematical fantasy or story inspires acting as if tapping into platonic truth.

      Ancient beliefs about movement of stars and planets - Stars and planets thought to move in circular paths due to perfection/God concept - Some stars behaved differently, known as 'The Wanderers' or planets

      Kepler discovered planetary laws - Planets move in an ellipse, not a circle - Kepler's explanation lacked why planets move in an ellipse

      Scientific theories evolve with enhanced observations - Newton's theory successful until discrepancies discovered in the 20th century - Einstein's theory validated through observations of planet Mercury during solar eclipse

      Scientific exploration is an unending journey - Science aims to understand what we don't know - In academia, the system often fails to reward wonder and not knowing

      Denotational semantics helps distinguish beauty and elegance from complexity - Beauty or elegance in theory is described precisely in terms of mathematics - Fortran, led by John Backus, introduced expressions, advancing from Von Neumann style sequential programming

      Functional programming emphasizes expressions over statements - Fortran blends statements and expressions but still leans towards statements - Functional programming eliminates everything except expressions

      Hardware limitations led to sequential model prototyping - John Von Neumann's experiment from 1947 is still relevant in 2022 - John Backus discussed fundamental problems in computing during the war

      Von Neumann bottleneck affects computer performance. - Physical bottleneck slows computers due to high heat generation. - Mental bottleneck limits brain capacity and mental efficiency.

      Breaking out of the Von Neumann bottleneck - The Von Neumann style of programming forces us to think small and is fundamentally sequential and mechanistic. - The lecture emphasizes the importance of thinking in larger, powerful notions and focusing on functions rather than words.

      Functions as building blocks for knowledge - Functions built from other functions allow for scalability and creation of complex systems - Importance of denotational semantics in designing new languages rather than just explaining existing ones

      Backus emphasized fixing defects and learning from mistakes. - Using denotational semantics reveals detailed defects in existing languages. - Advancement in computer science involves replacing outdated concepts like go-to with structured and functional programming.

      The cost of focusing on education and progress is losing the ability to make significant advances in science. - The speaker expresses disappointment with the impact of Academia on progress and science. - The speaker remains dedicated to truth and beauty, advocating for the importance of denotational semantics in making aesthetic distinctions.

      Ideas are expressions of beauty or ugliness which give deep insights across fields. - Denotational semantics serves as a reliable guide to beauty and elegance in ideas. - Beauty and elegance are valuable guides for understanding the universe and computation.

      Passion for mathematics and computer graphics - Attended undergrad in math at UC Santa Barbara with a small group of math students in a nurturing environment - Transitioned to grad school at Carnegie Mellon for computer science and pursued computer graphics due to a love for geometry and math

      Had to change plans at Carnegie Mellon - Arrived at CMU to study computer graphics, but found out the people I wanted to study with had left - Discovered a group focusing on reasoning about programs, which became the focus of my PhD work

      Transition to computer graphics and involvement in group projects - Worked with notable advisors like Dana Scott, John Reynolds, and Frank - Focused on exploring the next advancements in programming interfaces and data structures at Sun Microsystems

      Introduction to denotational semantics in understanding language meanings - Studied denotational semantics under Stephen Brooks and Dana Scott in grad school, leading to a revelation on language meanings - Believes language meanings should be independent of specific machines and analyzed compositionally for better understanding

      Graphics programs are sequential commands organizing video memory for visual output. - Graphics programs are different from traditional software due to their focus on organizing instructions for video memory. - Alternative design paradigms focus on conveying meanings and inventing tools to help users view desired content through a computer.

      Designing a language library for geometry and colors - Creating a composable vocabulary of geometry and colors, similar to modern linguistic frameworks - Developing a rich system of types for three-dimensional geometry and adding a time component to the design

      Rendering graphics offscreen to build up incrementally toward a correct answer. - Rendering offscreen allows showing previously true things before replacing them incrementally. - Temporal discreteness in computer graphics breaks compositionality and introduces fundamental bugs.

      Compositional models with approximations lose accuracy when composed - Compositional models incorporating approximations result in gross inaccuracies upon composition - Functional reactive programming involves composing before approximating for accurate results
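      A toy Python illustration of why composing before approximating matters; the `snap` grid is a stand-in for pixel or sample quantization, and the numbers are made up:

```python
def snap(x: float, step: float = 0.25) -> float:
    """Approximate a continuous value on a discrete grid (think pixels)."""
    return round(x / step) * step

f = lambda x: x / 3     # first stage of a continuous pipeline
g = lambda x: x * 3     # second stage (exactly undoes the first)

x = 0.2
# Composing first, approximating once at the end: error stays within
# half a grid step of the true answer 0.2.
compose_then_approx = snap(g(f(x)))         # 0.25
# Approximating at each stage: the early snap error gets amplified
# by the later stage, and the result drifts to 0.0.
approx_each_stage = snap(g(snap(f(x))))     # 0.0

print(compose_then_approx, approx_each_stage)
```

This is the FRP discipline in miniature: keep the model continuous and compositional, and push sampling to the very edge of the system.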

      Outline fonts are resolution independent - Outline fonts are continuous and do not have pixels when zoomed in - Switching from bitmap graphics to outline fonts improves efficiency and clarity

      Transition from discrete to continuous programming in space and time. - Examples of continuous programming in space like fonts, 2D and 3D geometry, vector graphics. - Applying continuous programming principles to time requires a fundamental shift in implementing and describing things that vary with time within the Von Neumann model.

      John Reynolds introduced the idea of using functions from the reals instead of sequences for solving time interpolation problems - This approach helped in resolving issues with interpolations and time manipulation - Continuous time modeling was found to be more effective than discrete modeling for things that vary with time
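      Reynolds's idea can be sketched in Python by treating a time-varying value as an ordinary function of real-valued time (the names `wiggle` and `delayed` are illustrative, not from the episode):

```python
import math

# A behavior is just a function from (real-valued) time to a value.
# Transformations compose as ordinary functions, and sampling happens
# only at the very end, at whatever rate the output device needs.

def wiggle(t: float) -> float:
    """A time-varying value: oscillates between -1 and 1 once per second."""
    return math.sin(2 * math.pi * t)

def delayed(behavior, dt):
    """Time transformation: the same behavior, shifted later by dt."""
    return lambda t: behavior(t - dt)

shifted = delayed(wiggle, 0.25)

# Sample only at the end, at any resolution we like:
samples = [round(shifted(t / 8), 3) for t in range(4)]
print(samples)  # [-1.0, -0.707, 0.0, 0.707]
```

Because `shifted` is defined for every real time, interpolation and time manipulation fall out for free instead of fighting a fixed sample grid.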

      Functional programming requires a shift from loops to lazy lists - Functional programming involves describing the mathematical model behind the data manipulation - The common reasoning that input and output data should have the same nature is wrong in functional programming
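      The loop-to-lazy-list shift can be illustrated with Python generators (a rough analogy, since the episode's context is Haskell-style lazy lists):

```python
from itertools import islice

# A lazy "list" in Python is a generator: elements are produced on
# demand, so an infinite structure is a perfectly good description
# of an ongoing process.

def naturals():
    n = 0
    while True:
        yield n
        n += 1

def running_sums(xs):
    total = 0
    for x in xs:
        total += x
        yield total

# Describe the whole (infinite) computation, then take what you need:
print(list(islice(running_sums(naturals()), 5)))  # [0, 1, 3, 6, 10]
```

The input is infinite and the output is infinite; only the final consumer decides how much is actually computed, which is the shift in reasoning the annotation describes.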

      Functional reactive programming is about understanding concepts in the simplest, most elegant compositional terms. - It emphasizes denotational semantics, where types have a mathematical model. - It focuses on fully explaining operations in terms of the model, independent of implementation.

      Programming expresses ideas with clear understanding before implementation - Category Theory is appreciated for its precise and elegant tools in mathematics - Functional Reactive Programming lacks denotational and compositional principles, leading to fundamental misunderstandings in programming

      Algebraic patterns like monoids and distributivity are powerful for organizing reasoning - There are different types of monoids like addition and multiplication each with their own properties - Multiplication distributes over addition and zero plays a special role in this interaction
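      These algebraic patterns can be made concrete in a few lines (illustrative only):

```python
from functools import reduce

# A monoid is a set with an associative operation and an identity.
# Sum and product monoids on numbers, concatenation on strings:
monoids = [
    (lambda a, b: a + b, 0,  [1, 2, 3, 4]),        # (+, 0)
    (lambda a, b: a * b, 1,  [1, 2, 3, 4]),        # (*, 1)
    (lambda a, b: a + b, "", ["fo", "ld", "ed"]),  # (concat, "")
]
for op, unit, xs in monoids:
    print(reduce(op, xs, unit))   # 10, 24, 'folded'

# Multiplication distributes over addition, and zero plays its
# special annihilating role in that interaction:
a, b, c = 2, 3, 4
assert a * (b + c) == a * b + a * c
assert a * 0 == 0
```

The same `reduce` works for every monoid, which is exactly the kind of parameterized, reusable reasoning the annotation points at.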

      Algebra and category theory provide reusability and reasoning in mathematics and programming. - Algebra allows for reasoning that is parameterized and applicable to different mathematical scenarios. - Category theory generalizes various algebraic concepts and is important for correctness in programming.

      The complexity of Python programs and limited cognitive abilities can lead to a lack of understanding. - Options include quitting the profession or divorcing what you've seen from what you do. - Another option is switching to a language with simple semantics, such as purely functional or denotative languages.

      Denotative programming allows for proving program correctness. - Denotative programming enables answering questions about the multiple meanings of programs. - Functional programs can have meanings within a cartesian closed category.

      Tropical semi-rings relate to timing analysis of parallel computations. - Understanding operations of plus and max in relation to semi-rings. - Realization of dot products and matrix multiplication pattern in timing analysis.
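The "dot product" pattern can be made concrete. In the max-plus (tropical) semiring, sequential latencies add and parallel alternatives take the max, so composing pipeline stages is matrix multiplication over that semiring. A sketch with made-up latency numbers, not the talk's code:

```python
# Max-plus ("tropical") semiring: semiring-"add" is max, semiring-"multiply" is +.
NEG_INF = float("-inf")  # identity for max, i.e. the semiring zero

def tropical_dot(row, col):
    """Dot product in max-plus: the worst (longest) path through one stage."""
    return max((r + c for r, c in zip(row, col)), default=NEG_INF)

def tropical_matmul(A, B):
    """Matrix product in max-plus; composes per-stage timing analyses."""
    cols = list(zip(*B))
    return [[tropical_dot(row, col) for col in cols] for row in A]

# Two pipeline stages, each a small matrix of path delays:
A = [[1, 3], [0, 2]]
B = [[2, 4], [1, 5]]
print(tropical_matmul(A, B))  # [[4, 8], [3, 7]]
```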

      Timing analysis can be described compositionally using the language of categories - Parallel and sequential composition are the fundamental building blocks of function computation - The typed lambda calculus has more than one model, and the mathematical values it describes can have different interpretations

      Realizing the connection between Haskell and lambda calculus led to successful compilation to hardware. - Haskell translates to a small core lambda calculus - Interpreting lambda calculus in cartesian closed categories enabled successful compilation to hardware

      Exploring unconventional categories for computation - Discovering powerful ideas by compiling categories since 1980 - Seeking beauty in solutions to drive innovation and never settling for unsatisfactory answers

      Geometry and the introduction to proof changed my life - The systematic way of exploring what is true and growing knowledge in geometry was a life-changing concept for me. - Discovering computers at Lawrence Hall of Science through the Star Trek Club in high school eventually led me to computation.

      Introduction to programming through games on teletypes - Experiencing games and printing out results on rolls of paper as souvenirs - Discovering source code hidden in the printed paper, initiating an interest in programming

      Started college with no computer science department, emphasized logic and enjoyed math contests - Computer science classes offered in math department or College of Engineering - Discovered talent and passion for math despite discouragement from elementary school teacher

      The origin of computer science in universities and its impact on its development - Initial classes were labeled as Computer Science or logic, sparking a debate on department placement. - Placement in engineering rather than mathematics influenced the practical nature of computer science education.

      Transition from imperative to functional programming - Discovered Haskell as a better alternative to imperative programming languages - Applied Haskell in programming for 25+ years and mentorship in hardware design for machine learning

      Realizing the power of category theory in simplifying automatic differentiation - Changed vocabulary to be more symmetric with respect to composition - Describing automatic differentiation in the language of categories simplifies and generalizes it

      Denotational design is key for software implementation - Haskell alone was not effective for teaching denotational design - Inner guidance essential for understanding and using it effectively

      Struggling with teaching denotations and homomorphisms in programming - Encountered issues with students not understanding correct implementations - Wanted compiler to indicate errors instead of personally correcting

      Understanding the question is more important than answering it correctly - Operational thinking is about biases in answering problems and questions - The most important thing is to understand the question in the most beautiful way

      Realization about teaching and learning process - Programmers differ in their attitude towards being told they're wrong - Importance of being open to feedback for growth in programming

      Automation has benefits but limited scalability - SMT automation has advantages in problem-solving but faces scaling limitations - Despite advancements, SMT technology cannot achieve unlimited scalability

      Agda is the most tasteful tool for working with dependent types. - Agda offers beauty, consistency, simplicity, and tremendous power. - Agda contributes to an incredibly beautiful story about the equivalence of computation, logic, and the foundations of mathematics.

      Exploring if all of mathematics can be built on logic - David Hilbert's attempt to formalize logic in the early stages - Can natural numbers be understood via logic as a foundation?

      Natural numbers are a profound and important concept - Natural numbers are a product of human construction on top of other systems - Peano numbers are a significant concept in mathematics

      Constructive logic allows expression of proofs as either A or B - In constructive logic, every proof of A or B can be expressed as a proof of A or a proof of B - Brouwer's logic allows for this expression without the law of excluded middle, leading to simple answers for negation, implication, truth, and falsehood in terms of types.
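This reading of disjunction can be sketched as a tagged sum type: the tag is exactly the constructive content, telling you which side holds. A Python sketch with illustrative names:

```python
# Propositions-as-types reading of "A or B": a proof is a tagged value
# carrying either a proof of A or a proof of B.
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")
B = TypeVar("B")

@dataclass
class Left(Generic[A]):
    value: A  # a proof of A

@dataclass
class Right(Generic[B]):
    value: B  # a proof of B

Or = Union[Left[A], Right[B]]

def which(p):
    """Constructively, every proof of A-or-B tells you which side holds."""
    return "A" if isinstance(p, Left) else "B"

print(which(Left(42)))    # A
print(which(Right("q")))  # B
```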

      De Bruijn pioneered making logic computable through computers - Exploration of dependent typing and realization of logic and types - Mechanization of information and manipulation, leading to modern programming languages

      The power of math and knowledge in programming - Manipulating from the bones is a powerful and beautiful concept - Embracing sequential stateful notion of computation limits insights and learning

      Written language enabled deep reflection and improvement of ideas. - Written language allowed ideas to be examined and improved over time. - Written language initiated a feedback loop for continuous enhancement of concepts.

      Continuous improvement through iterative optimization - Iteratively refining program logic and expressions for efficiency and clarity. - Enhanced abstraction and reusability through denotational design and parameterization.

      The debate on using formal proofs in industry - Industry perspective often argues against formal proofs due to perceived time constraints and impracticality. - Decision to use formal proofs depends on the objectives and the value placed on accuracy and thoroughness.

      Achieving 100% correctness is the only way to reach 95%. - Errors compound, leading to significant deviations in calculations. - Approximations and probable correctness can lead to overall incorrectness in complex projects.
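The compounding claim is simple arithmetic: a chain of 100 steps that are each 95% likely to be right is almost surely wrong overall.

```python
# Per-step correctness compounds multiplicatively across a pipeline.
steps = 100
per_step = 0.95
overall = per_step ** steps
print(f"{overall:.6f}")  # about 0.0059 -- under 1% chance the whole is right
```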

      Inspired by deep conversation - The conversation has been engaging and has touched on major topics of interest. - The speaker hopes to discuss denotational design and its application in software design.

      Create space for contemplation in the age of instant information - Encourage meditation and reflection on content - Announcement about a dedicated email for audience feedback and inquiries

  13. Aug 2024
    1. Siting is essential complexity (arguably essence of a distributed system),

      Some distribution is done for performance reasons, and in those cases siting is non-essential complexity

      • Introduction and Audience Engagement:

        • The speaker encourages the audience to ask questions during the talk for better understanding. "If I say something that you don't understand the best possible thing you can do is ask a question that will help me, that will help you, and it will probably help a lot of other people in the audience that have the same question."
      • Gift and Motivation:

        • The speaker presents a gift to a member of the audience as a token of appreciation for attending the early morning session. "As a token of my thanks to all of you I'm going to give a gift to one of you."
      • Programming Language Foundations in Agda:

        • The speaker authored a book titled "Programming Language Foundations in Agda," available both locally and online. "So I wrote a book called programming language foundations in agda and it's...available online."
      • Overview of the Talk:

        • The session covers key ideas in Naturals and induction, similar to a previous talk but with more concrete examples. "I'm going to try to take you very quickly through the key ideas in Naturals and induction."
      • Interactive Coding Session:

        • The speaker performs live coding to demonstrate key concepts, such as defining natural numbers and addition using Agda. "You can Define all the natural numbers all Infinity of them in three lines and then you can take something like addition and take all infinite number of instances of addition and Define it in three lines."
      • Case Analysis and Structural Induction:

        • Definition of addition using structural induction and case analysis on natural numbers is demonstrated. "Zero is a natural number and successor takes a natural number to a natural number."
      • Proof of Associativity:

        • The speaker proves the associativity of addition through induction and congruence. "To prove associativity of addition...by the congruence of successor applied to the associativity of plus."
      • Takeaway Concepts:

        • Key insights include the equivalence of definition by recursion and proof by induction, and the concept of structural induction. "Definition by recursion and proof by induction are the same thing."
      • Audience Questions and Interactive Learning:

        • The speaker addresses audience questions, enhancing the understanding of the presented concepts. "Great question, thank you."
      • Practical Applications and Animations:

        • The application of constructive proofs to achieve animation in programming language semantics is discussed. "Preservation and progress are exactly what you need to get evaluation."
      • Commercial Applications in Cryptocurrency:

        • The use of formal proofs and functional languages in the cryptocurrency industry is highlighted, demonstrating practical, real-world applications. "People are willing to use functional languages to get it right and willing to use proof assistants like Agda and Coq to prove they've gotten it right."
      • Conclusion and Further Learning:

        • The talk concludes with references to further resources for those interested in learning more about the theoretical foundations. "If you want to learn more about this idea...then have a look at this paper propositions as types."
    1. Short Summary for https://www.youtube.com/watch?v=9yplm_dsQHE by [Merlin](https://www.getmerlin.in/mobile-app)

      Encouragement to ask questions and stay engaged - Asking questions is helpful for everyone in the audience - Rewarding engagement with a gift and discussion on programming language foundations

      Follow the 'Getting Started' section in PLFA book for installation instructions. - Pay attention to the instructions at the bottom on setting up required libraries. - Consider volunteering as a TA if you have experience with Agda to help others.

      Natural numbers are represented as types in agda with two constructors, zero and successor. - Zero and successor are the two constructors for representing natural numbers in agda. - Successor takes a natural number and produces another natural number.

      Introduction to defining natural numbers in Agda - Agda treats natural numbers as a data type with zero and successor constructors - Proofs of equality in Agda involve simplifying expressions to check for equality

      Definition of addition in just three lines - Explaining the linear representation for addition using tables - Discussing the efficiency of unary representation for proofs and definitions
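The three-line definition paraphrased above can be sketched outside Agda too. A Python rendering of Peano naturals and of addition by recursion on the first argument (illustrative, not the talk's code):

```python
# Peano-style naturals: a data type with zero and successor constructors,
# mirroring the Agda definition; addition recurses on the first argument.
from dataclasses import dataclass

class Nat:
    pass

@dataclass
class Zero(Nat):
    pass

@dataclass
class Suc(Nat):
    pred: Nat

def add(m: Nat, n: Nat) -> Nat:
    # zero + n = n ; suc m + n = suc (m + n)
    return n if isinstance(m, Zero) else Suc(add(m.pred, n))

def to_int(m: Nat) -> int:
    """Collapse the unary representation back to an ordinary int."""
    return 0 if isinstance(m, Zero) else 1 + to_int(m.pred)

two = Suc(Suc(Zero()))
three = Suc(two)
print(to_int(add(two, three)))  # 5
```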

      Haskell gives you infix notation, which Agda writes as underbar plus underbar - Agda gives you any combination of binary operations with mixfix notation - Agda allows defining prefix and postfix functions for flexibility

      Using variable M for case analysis - M can be zero or the successor of something else - Checking and verifying types using M and N

      Using recursive calls to compute based on natural numbers - M plus n recursive call utilized to compute sum - Explanation and examples demonstrate the computation process

      Understanding the successor function and its application in arithmetic. - The concept of successors, zero and matching are used in rewriting arithmetic expressions. - The historical background and origin of these definitions and concepts from the 1800s.

      Understanding properties of binary operators - Ask about operator's type - associative or commutative - Understanding associative and commutative properties for operators

      Proof by induction involves replacing the whole by a part for smaller scale proof. - Associativity of addition is proven by induction on smaller scales. - Defining operations on successor of M in terms of operations on M helps in the proof.

      Showing the simplification process of a mathematical proof - Explaining the process of writing out proofs with a left-hand side and a right-hand side and simplifying inwards - Detailing the proof that the successor of ((m plus n) plus p) is equal to the successor of (m plus (n plus p))

      Proof by induction and universal quantification - Induction and recursion are the same thing - The proof of a for all indicates the existence of a function to compute the right type of result
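In PLFA the associativity proof is a few lines of Agda using `cong` on the successor case. A rough Lean 4 equivalent of the same inductive argument (an unverified sketch; induction is on `p` because Lean's `Nat.add` recurses on its second argument):

```lean
theorem add_assoc' (m n p : Nat) : (m + n) + p = m + (n + p) := by
  induction p with
  | zero => rfl
  | succ p ih =>
    -- both sides reduce to `succ` of the inductive hypothesis
    rw [Nat.add_succ, Nat.add_succ, Nat.add_succ, ih]
```

The proof term itself is the "function to compute the right type of result" that the summary describes: a proof of a universally quantified statement is a function from the quantified values to evidence.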

      Explanation of recursive function and type checking - Recursive functions ensure type checking by computing proofs for specific types. - Universal quantification and implications are proved using functions, showing pairs of proofs for A and B.

      Challenges in using K and Redex for free evaluations - The functional big-step semantics paper defines the need to use semantics to evaluate programs - The POPLmark challenge requires proving preservation and progress using a proof assistant like Agda

      Realization of the concept of preservation and progress in evaluation - Preservation ensures the type correctness of the step in progress - Progress evaluates the value or takes a step in a constructive manner

      Cryptocurrency firms use proof assistants like Agda and Coq to verify core system correctness. - Functional languages and proof assistants play a crucial role in verifying cryptocurrency systems. - Commercial applications in cryptocurrency benefit from rigorous verification processes.

    1. Certainly! Here’s an example that illustrates the expressiveness of the combinator-style versus the callback-style in animation DSLs.

      Scenario

      Suppose we have two basic animations: - Animation A: A box fades in over 2 seconds. - Animation B: A box moves to the right over 1 second. - Animation C: A box scales up over 1.5 seconds.

      Combinator-Style DSL Example

      Using the combinator-style DSL, we can express a sequence where Animation A and Animation B happen in parallel, followed by Animation C happening sequentially:

      plaintext ( A parallel B ) sequential C

      Interpretation: 1. Animations A and B will run simultaneously: the box fades in while it moves to the right. 2. Once both A and B are complete, Animation C will start, scaling up the box.

      Callback-Style DSL Example

      Using the callback-style DSL, there is no direct way to say "A and B in parallel"; naive callback nesting serializes the animations instead:

      javascript A.onComplete(() => { B.onComplete(() => { C.start(); }).start(); }).start();

      Interpretation: - Animation A is started. - Only after Animation A completes is Animation B started. - Finally, Animation C starts after Animation B is done. - Running A and B truly in parallel would require starting both and manually counting completions before starting C.

      Comparison of Expressiveness

      • Combinator-Style: This approach allows for a direct and simple expression of animations happening in parallel followed by another animation, making it clear how the timeline flows.
      • Callback-Style: This representation, while functional, is more verbose and cumbersome. It requires nesting callbacks, which can become hard to manage, especially as animations get more complex.

      Conclusion

      The combinator-style DSL provides a more elegant and straightforward way to express complex animations involving both parallel and sequential choreography. The callback-style, on the other hand, can lead to complicated nesting, making it less intuitive for capturing the relationships between animations.
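The comparison can be made concrete with a tiny interpreter. A Python sketch (not the article's code) in which the meaning assigned to each expression is its duration: max for parallel composition, sum for sequential composition:

```python
# A minimal combinator-style animation DSL with compositional semantics:
# each node's meaning is computed from the meanings of its parts.
from dataclasses import dataclass

@dataclass
class Basic:
    name: str
    duration: float

@dataclass
class Parallel:
    left: object
    right: object

@dataclass
class Sequential:
    first: object
    second: object

def duration(a) -> float:
    """Denotation: parallel takes the max duration, sequential the sum."""
    if isinstance(a, Basic):
        return a.duration
    if isinstance(a, Parallel):
        return max(duration(a.left), duration(a.right))
    return duration(a.first) + duration(a.second)

A = Basic("fade-in", 2.0)
B = Basic("move-right", 1.0)
C = Basic("scale-up", 1.5)
# ( A parallel B ) sequential C
print(duration(Sequential(Parallel(A, B), C)))  # 3.5
```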

    2. Overview of Animation DSLs

      The article discusses two distinct approaches to defining Domain-Specific Languages (DSLs) for animations: combinator-style and callback-style.

      Combinator-Style DSL

      • Structure: It consists of basic animations and allows for the sequential and parallel composition of animations.
      • Syntax Example (defined in Backus-Naur Form - BNF): <animation> ::= basic | <animation> parallel <animation> | <animation> sequential <animation>
      • Semantics:
        • Each combined expression’s meaning is derived from its sub-expressions.
        • Timelines are visual representations of the animations based on when they occur, either in parallel or sequentially.

      Callback-Style DSL

      • Structure: This style uses onStart and onComplete operators for composing animations.
      • Syntax Example: <animation> ::= basic | <animation> onStart <animation> | <animation> onComplete <animation>
      • Semantics:
        • This structure often leads to complications in visualizing how animations relate to each other in terms of timing and attachment points.

      Comparison of Both Styles

      • Expressiveness:
        • The combinator-style is preferred due to its ability to more directly express the timelines and behaviors expected in animations.
        • The callback-style is viewed as less expressive, particularly in scenarios where the sequencing and duration of animations are not known in advance.

      Conclusion

      The article concludes that the combinator-style DSL provides clearer semantics for programming animations, while the callback-style may lead to confusion and less intuitive defining of animation sequences.

      • Presenter discusses improvements in PowerPoint skills and the development of a new implementation of Idris.

        • "I've been working very hard on my PowerPoint skills recently. This is a kind of a show-and-tell session, so I've been hiding away in my cupboard over the last few months working on a new implementation of Idris."
      • Curious about attendees' experience with dependently typed programming languages.

        • "Has anyone played with any dependently typed programming languages before, just by way of show of hands?"
      • Overview of Idris improvements, focusing on type system advancements.

        • "Firstly, definitely faster type checking...I don't get out much, you know. So I'm gonna show you a bit about the new interactive editing feature."
      • Emphasis on interactive editing and user-friendly type-driven development.

        • "It's all about the interactive editing it's all about the condo that the machine and the program of being in a conversation."
      • Introduction of quantitative type theory and its significance in Idris.

        • "The key new feature of the language though is that it's based on quantitative type theory."
      • Explanation of quantities in types: 0, 1, and many, and their runtime implications.

        • "0 means this is going to be erased at runtime and the type checker has told me that it's going to be erased at runtime, 1 means I can use this exactly once at runtime, and many is just back in the old world where we always used to be."
      • Demonstration of the use of quantities in practical examples like vectors.

        • "The basic idea is the lengths are encoded in the type...so if I'm writing an append function and I say one vector has n things and another vector has m things, then the resulting vector has n plus m things."
      • Exploration of automated programming with interactive editing, including a magic trick analogy.

        • "Wouldn't it be nice if we get abstract away in the acts of programming so if we could just say you know keep case splitting and searching until you find an answer and hilariously that works quite well."
      • Advantages of having more information in types for program synthesis.

        • "The linearity gives more information to the type prototype directed synthesis."
      • Implementation of session types using linear and dependent types.

        • "With linearity and dependent types, you absolutely can do that and I think it's quite a nice way of writing concurrent programs."
      • Encouragement to contribute to Idris development on GitHub.

        • "Please come along, I'd love to have your funky issues. Right, so this is a dependent types talk, so by law I have to show you the vectors; there's a reason I'm going to show you the vectors."
      • Final thoughts on interactive editing, efficiency, and community contribution.

        • "Interactive editing there's a lot more we can do with interactive editing really important that it's responsive so this is I want this to be the default mode in which people interact with the system if that's going to be the case it has to come back with the answer very quickly."
  14. Jul 2024
      • Overview of Research History and Commercial Development:

        • The research group's work extends over 60 years, difficult to condense into a short talk.
        • "Processes of commercial product development" are well-known, but research's purpose is less understood.
      • Importance of Research and Key Innovations:

        • Research is vital for foundational innovations; examples include text on screens, interactive text, pointing devices, copy-paste functions, menus, and scroll bars.
        • Early pioneers like Ivan Sutherland and Doug Engelbart in the 60s, and Xerox PARC's Smalltalk in the 70s, introduced groundbreaking concepts in computing.
      • Challenges in Research and Development:

        • High costs and limited computing power in early decades delayed commercialization of research.
        • Innovations often took decades to reach commercial viability due to Moore's Law and decreasing hardware costs.
      • Examples of Fundamental Research Leading to Industry Transformation:

        • Machine learning, neural networks, and the Internet's development were rooted in research labs.
        • "Neural networks were invented in the 40s by neuroscientists" and later led to modern AI advancements.
      • Impact and Future of Research Funding:

        • Public funding in the 60s enabled long-term ambitious projects; today, such projects lack sufficient funding.
        • The absence of funding today could hinder future innovation and technological progress.
      • Concept of Bootstrapping Research Environments:

        • Bootstrapping research focuses on creating innovative environments to enhance research effectiveness.
        • Doug Engelbart’s lab aimed to invent tools to improve the lab's own productivity, leading to user interface innovations.
      • Research Methods and Dynamic Land:

        • The research group Dynamicland uses space to show context and enable spatial manipulation of ideas.
        • Their work includes creating expansive spatial interfaces beyond traditional screens, using posters and physical objects for programming and interaction.
      • Examples of Dynamicland’s Projects:

        • Real Talk: a system where physical objects are programmed and manipulated by hand, fostering visible and tangible computing environments.
        • Dynamicland as a community space where diverse residents collaboratively create and innovate in a shared environment.
      • Vision for the Future of Computing:

        • Advocates for computing as ubiquitous infrastructure, accessible and modifiable by everyone, akin to reading and writing.
        • Emphasizes creating environments where people can work together interactively and understand complex systems holistically.
      • Final Thoughts:

        • The ultimate goal is for humanity to leverage computation to understand and solve complex problems, with a vision for a future where computing is an integral and accessible part of everyday life for all.

      Relevant quotes: - "Processes of commercial product development" are well-known. - "Neural networks were invented in the 40s by neuroscientists." - "Public funding in the 60s enabled long-term ambitious projects." - "Dynamicland uses space to show context and enable spatial manipulation of ideas." - "The ultimate goal is for humanity to leverage computation to understand and solve complex problems."

    1. Summary of Tech Talk on Functional Reactive Programming (FRP)

      Introduction and Overview

      • Emotional Start: The speaker acknowledges an emotional moment due to Paul’s birthday.
      • Purpose: To share the original and intended principles of Functional Reactive Programming (FRP) which many modern interpretations miss.
      • Main Principles: "Precise and simple denotation" and "continuous time."

      Core Principles of FRP

      • Denotation: "A precise and simple denotation" means a mathematical model that defines the API elements compositionally and recursively.
      • Continuous Time: The foundational idea for naturalness and composability in FRP, contrasting with the discrete time models commonly used today.

      Characteristics of FRP

      • Deterministic and Continuous: The model ensures predictable and simultaneous behavior across a continuum of time.
      • Not Graph-Based: Emphasizes that FRP is not about graphs, streams, or operational notions.
      • Focus on Being, Not Doing: Functional programming is about defining states rather than actions, contrasting with imperative programming.

      Denotation Importance

      • Specification vs. Implementation: A clear separation ensures simple specifications free of implementation artifacts, enhancing correctness and usability.
      • Reasoning and Predictability: Simplicity and precision enable accurate reasoning about the system.

      Continuous Time Benefits

      • Natural Transformations: Continuous time allows for natural temporal transformations, akin to spatial transformations in vector graphics.
      • Resolution Independence: Like scalable vector graphics, continuous time FRP supports flexible and accurate transformations.
      • Integration and Differentiation: Continuous models enable natural descriptions of motion and physical systems, leveraging high-quality solvers.
      • Compositional Approximations: Avoiding early discretization prevents compositional difficulties and information loss.

      FRP API and Semantics

      • Behavior Type: Defined as a function from time to values (e.g., Behavior a is Time -> a).
      • Key Operations:
      • time returns the current time.
      • lift0 creates a constant behavior.
      • lift1, lift2, etc., apply functions over behaviors.
      • timeTransform modifies behaviors based on time transformations.
      • integrate provides continuous integration over behaviors.
      • Events and Reactions: Events are time-value pairs, with operations to merge, map, and sample them. Reactive behaviors are created by switching behaviors at event occurrences.
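Taking the denotation `Behavior a = Time -> a` literally, the operations listed above fall out as ordinary function combinators. A Python sketch under that reading (names mirror the list above; this is not the actual Fran API):

```python
# Behaviors as functions from (continuous) time to values.
import math

def lift0(c):
    """Constant behavior: the same value at every time."""
    return lambda t: c

def lift1(f, b):
    """Apply a one-argument function pointwise over a behavior."""
    return lambda t: f(b(t))

def lift2(f, b1, b2):
    """Apply a two-argument function pointwise over two behaviors."""
    return lambda t: f(b1(t), b2(t))

def time_transform(b, tt):
    """Run a behavior on transformed time (e.g. sped up or delayed)."""
    return lambda t: b(tt(t))

time = lambda t: t  # the behavior that returns the current time

wiggle = lift1(math.sin, time)                         # sin of time
fast_wiggle = time_transform(wiggle, lambda t: 2 * t)  # twice as fast
print(fast_wiggle(math.pi / 4))                        # sin(pi/2) = 1.0
```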

      Event Semantics

      • Event Denotation: A list of time-value pairs with monotonically non-decreasing times.
      • Combining Behaviors and Events: Reactive behaviors switch based on event-driven new behaviors, preserving temporal semantics.

      Technical Content and Practical Aspects

      • Early Implementations: Describes the evolution from rbmh (Reactive Behavior Modeling in Haskell) to direct animation and arrowized FRP.
      • Efficiency Challenges: Balancing precise semantics with efficient implementation, particularly in push-based systems.

      Influence of Paul Hudak

      • Collaboration: Paul Hudak's enthusiasm and collaboration significantly advanced FRP development.
      • Naming Contributions: Paul suggested the names "Fran" and "Functional Reactive Programming."

      Questions and Closing

      • Addressing Common Questions: Clarifies the relevance of continuous abstractions despite the discrete nature of computers.
      • Further Discussions: Indicates further talks on FRP’s elegant denotation and denotational design principles.

      By adhering to these principles, FRP maintains mathematical elegance and practical utility, encouraging a shift in how programming languages and systems are designed and understood.

      • Overview of Graphs in Computation:

        • Graphs have been successful in domains like shader programming and signal processing.
        • Computation in these systems is usually expressed on nodes with edges representing information flow.
        • Traditional models often have a closed-world environment where node and edge types are pre-defined.
      • Introduction to Scoped Propagators (SPs):

        • SPs are a programming model embedded within existing environments and interfaces.
        • They represent computation as mappings between nodes along edges.
        • SPs reduce the need for a closed environment and add behavior and interactivity to otherwise static systems.
      • Definition and Mechanics:

        • A scoped propagator consists of a function taking a source and target node, returning a partial update to the target.
        • Propagation is triggered by specific events within a defined scope.
        • Four event scopes implemented: change (default), click, tick, and geo.
        • Syntax: scope { property1: value1, property2: value2 }.
      • Event Scopes and Syntax:

        • Example: click {x: from.x + 10, rotation: to.rotation + 1} updates target properties when the source is clicked.
      • Demonstration and Practical Uses:

        • SPs enable the creation of toggles and counters by mapping nodes to themselves.
        • Layout management is simplified as arrows move with nodes.
        • Useful for constraint-based layouts and debugging by transforming node properties.
        • Dynamic behaviors can be created using scopes like tick, which utilize time-based transformations.
      • Behavior Encoding and Side Effects:

        • All behavior is encoded in arrow text, allowing for easy reconstruction from static diagrams.
        • Supports arbitrary JavaScript for side effects, enabling creation of utilities or tools within the environment.
      • Cross-System Integration:

        • SPs can cross boundaries of siloed systems without editing source code.
        • Example: mapping a Petri Net to a chart, demonstrating flexibility in creating mappings between unrelated systems.
      • Complex Example:

        • A small game created with SPs includes joystick control, fish movement, shark behavior, toggle switch, death state, and score counter.
        • The game uses nine arrows to propagate behavior between different node types.
      • Comparison to Prior Work:

        • Differences from Propagator Networks: propagation along edges, scope conditions, arbitrary stateful nodes.
        • Previous work like Holograph influenced the use of the term "propagator."
      • Open Questions and Future Work:

        • Unanswered questions include function reuse, modeling side effects, multi-input-multi-output propagation, and applications to other domains.
        • Formalization of the model and examination of real-world usage are pending tasks.
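The core mechanic described above, a function from source and target to a partial update of the target that fires only within its event scope, can be sketched in a few lines (Python with invented names; the real system embeds JavaScript in arrow text):

```python
# A toy scoped propagator: fires a partial update on the target node
# whenever an event matching its scope occurs.
def make_propagator(scope, fn):
    def propagate(event, source, target):
        if event == scope:                      # only fire inside the scope
            target.update(fn(source, target))   # apply the partial update
    return propagate

# Analogue of: click {x: from.x + 10, rotation: to.rotation + 1}
on_click = make_propagator(
    "click",
    lambda frm, to: {"x": frm["x"] + 10, "rotation": to["rotation"] + 1},
)

source = {"x": 5}
target = {"x": 0, "rotation": 0}
on_click("click", source, target)  # fires: updates x and rotation
on_click("tick", source, target)   # ignored: wrong scope
print(target)  # {'x': 15, 'rotation': 1}
```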


    1. Denotational Design with Type Class Morphisms (Extended Version)

      Author: Conal Elliott

      Affiliation: LambdaPix

      Abstract and Introduction

      • Type classes provide varied implementations for standard interfaces: "Type classes provide a mechanism for varied implementations of standard interfaces."
      • Mathematical foundations guide implementers through types and properties: "Many of these interfaces are founded in mathematical tradition and so have regularity not only of types but also of properties (laws) that must hold."
      • Proposing type class morphisms (TCMs): "To give additional guidance to the what, without impinging on the how, this paper proposes a principle of type class morphisms (TCMs)..."
      • Principle of TCMs: "The TCM idea is simply that the instance’s meaning follows the meaning’s instance."
      • Purpose of TCMs: "This principle determines the meaning of each type class instance, and hence defines correctness of implementation."

      Key Concepts and Examples

      • Data types and abstraction: "Data types play a central role in structuring our programs, whether checked statically or dynamically."
      • Finite maps example: "For instance, for a finite map, the interface may include operations for creation, insertion, and query..."
      • Denotational semantics for clarity: "One useful answer is given by denotational semantics. The meaning of a program data type is a type of mathematical object..."
      • Example of Map denotation: "For a language, the meaning function is defined recursively over the abstract syntactic constructors."

      Type Classes and Laws

      • Type classes in Haskell: "Haskell provides a way to organize interfaces via type classes..."
      • Monoid instance example: "For example, a Monoid instance must not only define a ∅ value and a binary (⊕) operation of suitable types..."
      • Instance’s meaning follows the meaning’s instance: "That is, the meaning of each method application is given by application of the same method to the meanings of the arguments."
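
      The morphism idea can be made concrete with the paper's monoid example: a meaning function between two monoids must preserve the identity and the binary operation. The following Go sketch is my own illustration (not from the paper), using list concatenation and integer addition as the two monoids and length as the meaning function:

```go
package main

import "fmt"

// Two monoids: ([]string, concat, empty slice) and (int, +, 0).
// length is a monoid morphism between them:
//   length(nil)          == 0
//   length(concat(a, b)) == length(a) + length(b)

func concat(a, b []string) []string {
	out := make([]string, 0, len(a)+len(b))
	out = append(out, a...)
	return append(out, b...)
}

func length(xs []string) int { return len(xs) }

func main() {
	a := []string{"x", "y"}
	b := []string{"z"}
	fmt.Println(length(concat(a, b)) == length(a)+length(b)) // true
}
```

      The equations are exactly the shape of the paper's "the instance's meaning follows the meaning's instance": applying the operation and then taking the meaning agrees with taking the meanings and then applying the operation.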

      Simplicity in Design

      • Simplicity leads to better design: "An important aspect of software design is simplicity of use. Specifying semantics precisely gives us a precise way to measure and compare simplicity of designs."
      • Generalization through simplicity: "A simple design, however, requires achieving these specific goals without correspondingly specific design features."

      Applications of TCMs

      • Monad and Functor examples: "The Functor interface is... with laws: fmap id ≡ id, fmap (h ◦ g) ≡ fmap h ◦ fmap g."
      • Applicative functor: "The Applicative (applicative functor) type class has more structure than Functor, but less than Monad."
      • Monoid morphism property: "This property of [[·]] has a special name in mathematics: [[·]] is a monoid homomorphism, or more tersely a “monoid morphism”."

      Further Examples and Discussions

      • Functor instance example: "instancesem Functor (TMap k ) where [[fmap f m]] = λk → f ([[m]] k )."
      • Challenges in implementing TCMs: "Again, these failures look like bad news. Must we abandon the TCM principle, or does the failure point us to a new, and possibly better, model for Map?"
      • Numeric overloading using TCMs: "Overloading makes it possible to use familiar and convenient numeric notation for working with a variety of types."
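
      The Functor instance for TMap quoted above can be mimicked outside Haskell to see the morphism property as a checkable equation. This is a hedged Go sketch with invented names (`TMap`, `meaning`, `fmapRep`), standing in for the paper's Haskell; Go's `(int, bool)` pair plays the role of a `Maybe` result:

```go
package main

import "fmt"

// TMap is a toy finite map; its denotation is a partial function from
// key to value, modeled here as func(string) (int, bool).
type TMap map[string]int

func meaning(m TMap) func(string) (int, bool) {
	return func(k string) (int, bool) {
		v, ok := m[k]
		return v, ok
	}
}

// fmapRep maps f over the representation (every stored value).
func fmapRep(f func(int) int, m TMap) TMap {
	out := make(TMap, len(m))
	for k, v := range m {
		out[k] = f(v)
	}
	return out
}

// Morphism property, per key k: mapping over the representation and
// then taking the meaning agrees with mapping f over the meaning.
func main() {
	m := TMap{"a": 1, "b": 2}
	double := func(x int) int { return 2 * x }
	lv, lok := meaning(fmapRep(double, m))("b")
	rv, rok := meaning(m)("b")
	fmt.Println(lok == rok && lv == double(rv)) // true
}
```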

      Advanced Topics

      • Automatic differentiation: "Automatic differentiation (AD) is a precise and efficient method for computing derivatives."
      • Functional reactive programming: "My first exposure to type class morphisms was in giving a denotational semantics to functional reactive programming (FRP) structured into type classes."

      Conclusions and Recommendations

      • Design advice for software developers: "By defining a type’s denotation clearly, library designers can ensure there are no abstraction leaks."
      • Importance of TCMs: "The instance’s meaning follows the meaning’s instance."
      • Benefit of clear denotations: "When a library not only type-checks, but also morphism-checks, it is free of abstraction leak, and so the library’s users can safely treat a program value as being its meaning."

      This summary encapsulates the key ideas and examples from Conal Elliott's technical report on "Denotational Design with Type Class Morphisms," focusing on the principle of TCMs, their application, and the benefits of clear and precise semantic models in software design.

    1. Summary of "There Is No Routine, and Organizations Must Take That into Account Ahead of 2025" by Nirit Cohen (נירית כהן)

      • Ongoing Challenges for Organizations:

        • Organizations have adapted to dealing with mobilized employees, partners of reservists, and various affected groups due to war and unrest in 2024.
        • The norm of unpredictability has become a strength, enhancing management and organizational muscles to handle constant change.
        • "Nothing in 2024 was business as usual, so there is no reason to assume that the work plans we are writing now for 2025 will be business as usual."
      • Adapting to a World of Constant Change:

        • Post-pandemic, the rest of the world resumed normalcy, but the local scenario continued to lack routine, reinforcing the need to incorporate challenges and trends into 2025 work plans.
        • "The world at large has time to deal with the implications of artificial intelligence across every area of work and business, while we are trying to integrate it with everything that is anything but routine."
      • Humanizing the Workforce:

        • Since 2020, the perspective on work has shifted to valuing life and the exchange of time for meaningful work.
        • Employees choose work based on personal values and life priorities, altering the social contract around work.
        • "If there is a struggle between work and life, life will win. Always."
      • Prioritizing Employee Well-Being:

        • Employees need genuine integration of their well-being into business decisions rather than superficial welfare events or budgets.
        • "They want you to replace the absolute prioritization of work's needs with a shared win-win for the organization and its people."
      • Redefining Career Paths:

        • Careers are now seen as climbing a personal climbing wall rather than a traditional corporate ladder.
        • Employees regularly reassess their career steps, whether within the organization or elsewhere, influenced by a broad array of work and income opportunities.
        • "Precisely because they now manage work within the broader context of life, your people do not limit themselves to the corporate ladder but design a personal climbing wall for themselves."
      • Flexible Job Structures:

        • Organizations should deconstruct roles into tasks and projects, aligning them with employees' career paths, and embracing flexible employment models.
        • "If you learn to break roles down into tasks and projects, you will discover that you can match people to work in a way that offers them solutions that do fit their climbing wall."
      • Empowering Managers:

        • Shifting focus from managing work to managing people, ensuring that managers have the flexibility and tools to support employees' personal and professional needs.
        • "If you look closely at your procedures, you will discover that your managers have far more degrees of freedom in managing budgets and work than in managing people."
      • Technological Impact:

        • The democratization of knowledge and capabilities due to technology decentralizes decision-making and authority.
        • "The democratization of knowledge and capabilities further reinforces the fact that knowledge no longer sits at the top of the pyramid."
      • Strategic Questions for 2025 Work Plans:

        • How to assist managers in understanding employees' needs and adapting offerings accordingly.
        • How flexible work looks in location-dependent industries.
        • How to balance organizational and individual needs in a world of reduced work.
        • "Questions for work plans: how do we help managers understand what their people need? And what do you need to change or offer in order to provide it?"
      • Conclusion:

        • The year 2025 will deepen the opportunities and challenges faced in recent years, emphasizing actionable integration of organizational success with employees' full lives.
        • "2025 will deepen within organizations the opportunities and challenges we have already met in recent years."
    1. Tracy Hogg recommended changing diapers during the "E.A.S.Y." routine, specifically during the "A" (Activity) phase. This means after the baby eats (E), you would change the diaper as part of the activity (A) before putting the baby down for sleep (S). This helps to ensure the baby is comfortable and clean before nap time, promoting better sleep quality.

    2. Sure, here are the steps from "Secrets of the Baby Whisperer" by Tracy Hogg:

      1. E.A.S.Y. Routine:

        • E: Eat. The baby eats when they wake up.
        • A: Activity. The baby has some playtime or interaction.
        • S: Sleep. The baby goes down for a nap.
        • Y: Your Time. Parents take time for themselves while the baby sleeps.
      2. Understanding Baby Cues:

        • Observe and interpret the baby's body language and sounds to understand their needs.
        • Learn the difference between hunger cues, tired cues, and other signals.
      3. Respect the Baby:

        • Treat the baby as an individual with unique needs and preferences.
        • Communicate with the baby and explain what you're doing, even if they can't understand the words yet.
      4. Establishing a Routine:

        • Create a predictable and consistent daily routine to provide structure and security for the baby.
        • Adapt the routine as the baby grows and their needs change.
      5. The Pick Up/Put Down Method:

        • Use a gentle sleep training method to help the baby learn to fall asleep independently.
        • Pick up the baby to comfort them when they cry, then put them down once they are calm but still awake.
      6. Teaching Self-Soothing:

        • Encourage the baby to learn how to soothe themselves to sleep without relying on external aids like feeding or rocking.
        • Gradually reduce the amount of intervention as the baby becomes more capable of self-soothing.
      7. Balanced Parenting:

        • Strive for a balance between being responsive to the baby's needs and setting boundaries.
        • Avoid overindulgence or becoming too rigid with routines.

      These steps emphasize a gentle, respectful approach to parenting that fosters independence and understanding between parents and their baby.

    1. How to Celebrate Your Wins at Work Without Coming Across as a Jerk

      By Jessica Chen

      • Introductory Insight
      • One of the best ways to advance your career is to have your efforts recognized, yet many deflect compliments and minimize their contributions.<br /> “Isn’t it ironic that one of the best ways to accelerate our career is to have people see and recognize your efforts, yet for many of us, when that happens, such as when we get complimented or praised by our team, we instantly deflect and minimize the contribution?”

      • Challenges in Self-Promotion

      • Some find it easy to share their thoughts and highlight their work, while others find it challenging due to cultural teachings of modesty and humility.<br /> “For some of us, sharing what’s on our mind and highlighting our work comes easy... But for others, the idea of putting ourselves out there...feels challenging.”
      • It’s not just about being introverted or extroverted but also about cultural upbringing that discourages self-promotion.<br /> “For some of us, talking about ourselves wasn’t what we were taught to do... Instead, we were taught to minimize the spotlight and focus on getting the work done.”

      • Necessity of Celebrating Wins

      • To progress in your career, it’s essential to both do good work and confidently talk about your impact.<br /> “We need to get things done and confidently talk about our impact because when we do, we highlight our genius and keep ourselves top of mind for bigger opportunities at work.”
      • Celebrating wins is not optional but a necessary part of professional growth.<br /> “Celebrating our wins isn’t a nice to do, it’s a must do.”

      • Reframing Misconceptions

      • Misconception 1: Celebrating wins is selfish.
        • Reframe: It’s part of your job to communicate your work and its impact.<br /> “We celebrate our wins because it’s part of the work we do.”
      • Misconception 2: Celebrating wins will annoy others.
        • Reframe: Use tact and emphasize the benefit for the team.<br /> “We can do this by considering our tone of voice and structuring our message so it’s leading with the benefit for the team and how it has helped them.”
      • Misconception 3: Celebrating wins is complex.
        • Reframe: Simple gestures, like forwarding a client’s compliment, can be effective.<br /> “Sometimes the most effective way to highlight our wins is to approach it in the simplest of ways.”

      • Communication Strategies: ABC Checklist

      • A – Articulate the Benefit: Explain how your achievements help others.<br /> “How did your accomplishments help others?”
      • B – Be Open About the Process: Share the steps taken to accomplish the task.<br /> “What steps did you take to accomplish this task?”
      • C – Communicate Using Power Words: Use emotionally impactful words to convey your enthusiasm.<br /> “What emotions did you feel with this win? Use words like excited, happy, proud.”

      • Practical Tips

      • Create a "Yay Folder" in your email to store positive feedback for easy reference.<br /> “Create what I call a ‘Yay Folder’ in your inbox...if you ever need evidence to prove you are doing great work...you now have it stored in one place.”
      • Don't overcomplicate sharing your wins; simple expressions of excitement can be enough.<br /> “We shouldn’t overthink how we celebrate our wins at work.”

      • Final Thoughts

      • Celebrating your achievements helps reinforce the value of your work and builds your professional reputation.<br /> “Celebrating your wins is knowing your work, effort, and impact matter.”
      • Be your own cheerleader to ensure recognition of your accomplishments.<br /> “If you’re not your own best cheerleader, who will be?”

      This summary encapsulates the main ideas and actionable advice provided by Jessica Chen on effectively celebrating your wins at work without coming off as arrogant.

    1. Summary of Eric Normand's Talk: Building Composable Abstractions

      • Introduction

        • Eric Normand introduces himself and the purpose of the talk: "The title of this talk is building composable abstractions...to develop a process to do that and also I'd like to start a discussion about how we can do that better."
      • Importance of Abstractions

        • Abstractions are critical for creating complex applications from small, simple problems. "A lot of people are able to solve small problems like Fibonacci...when they finally want to create an app...they don't know how to take the tools that they've learned and turn them into software."
      • Map of the Talk

        • The talk covers the importance of abstractions, the process of developing them, an example, and concluding thoughts. "Here's sort of the map of the talk: why focus on abstractions, the process, an example abstraction, and concluding thoughts."
      • Why Focus on Abstractions?

        • Refactoring introduces the distinction between the behavior of the code and its implementation. "In the general industry we now have this idea that there's a difference between the behavior of the code and the actual implementation."
        • Example of Newtonian mechanics replacing Aristotelian physics illustrates that some systems can't be refactored but need to be redesigned from scratch. "You can't refactor Aristotle into Newton."
      • Objectives of the Abstraction Process

        • The process should produce good, Newtonian-style abstractions, be iterative, accessible to all, and foster collaboration. "It has to consistently produce good abstractions...an iterative process...anyone can do it...fosters collaboration."
      • Example of Vector Graphics System

        • Normand uses a simple vector graphics system as an example to demonstrate the process of building abstractions. "This is the example we're going to develop: a vector graphics system."
      • Step 1: Physical Metaphor

        • Choose a metaphor to capture important information. "The idea behind this is to choose a metaphor that will capture the important information in your program."
        • Shapes and construction paper is chosen as the metaphor. "Shapes and construction paper...I cut out shapes like rectangles and ellipses...and then I can move them around."
      • Step 2: Meaning Construction

        • Convert physical intuition into precise mathematical language, focusing on the interface. "We're going to be focusing on the interface right now...precise mathematical language."
        • Definitions in Clojure for different components like color, shape, and transformations. "We're defining two types here: cutout and shape...defining a function that takes a cutout and returns a shape."
        • Importance of preserving shape and color, overlay order, and rotation and translation independence. "Preservation of shape...preservation of color...overlay order...rotation and translation independence."
      • Step 3: Implementation

        • Implement the system based on the constructed meaning, ensuring it can be refactored to different requirements like SVG output. "Implementation...we already know what to do...refactor from quill to SVG."
      • Summary of Process

        • Use a physical metaphor, define the parts and their relationships in mathematical language, and refactor for implementation details. "Use a physical metaphor...define the parts and their relationships...refactor to get all the meta properties."
      • Corollaries for the Process

        • Know your domain, constructs, and refactoring techniques. "Know your domain...know your constructs...know your refactoring."
      • Conclusion

        • Encourages further learning and provides resources. "Please go to my site...download the slides...sign up for my newsletter."
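
      Step 2 of Normand's process — turning the physical metaphor into precise mathematical language — can be sketched outside Clojure too. The Go fragment below is my own illustration (names like `Shape` and `translate` are assumptions, not Normand's code): a shape denotes a point-membership predicate, which makes properties such as translation behavior directly checkable:

```go
package main

import "fmt"

// A shape denotes the set of points it contains: a predicate on (x, y).
type Shape func(x, y float64) bool

// rectangle with its lower-left corner at the origin.
func rectangle(w, h float64) Shape {
	return func(x, y float64) bool {
		return 0 <= x && x <= w && 0 <= y && y <= h
	}
}

// translate moves a shape by (dx, dy): shift the query point back.
func translate(s Shape, dx, dy float64) Shape {
	return func(x, y float64) bool { return s(x-dx, y-dy) }
}

func main() {
	r := rectangle(2, 1)
	moved := translate(r, 5, 0)
	// (1, 0.5) is inside r but outside the moved copy;
	// (6, 0.5) is inside the moved copy.
	fmt.Println(r(1, 0.5), moved(1, 0.5), moved(6, 0.5)) // true false true
}
```
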
    1. Summary of Conal Elliott's Talk on Denotational Design

      • Conal Elliott emphasizes the importance of abstraction in software design, focusing on libraries over individual programs: "the main job of a software designer... is to build abstractions."

      • He introduces the concept of "denotational design," which grew out of his work on giving a semantics to functional reactive programming: "trying to formulate a semantics for functional reactive programming... led to what's now, consciously, my central paradigm for doing library design."

      • Elliott outlines three main goals in software projects: building abstractions, implementing those abstractions, and documenting them clearly: "I want my abstractions to be precise, elegant, and reusable."

      • He quotes Edsger Dijkstra to highlight the need for precision in abstractions: "The purpose of abstraction is not to be vague... it's to create a whole new semantic level in which one can be absolutely precise."

      • Elliott suggests a three-pronged approach to software projects: focusing on precision, elegance, and reusability in abstractions; correctness, speed, and maintainability in implementations; and simplicity, clarity, and accuracy in documentation: "I want my program my implementation to be maintainable."

      • He critiques the current state of software development for its emphasis on precision in "how" rather than "what": "software without a specification is an answer without a question."

      • Bertrand Russell's philosophy on precision is highlighted to argue that informal understandings in software often lead to errors: "everything is vague to a degree you do not realize until you've tried to make it precise."

      • Elliott advocates for specifying clear semantics in software to avoid ambiguity and errors: "if you think you know what you're doing but you haven't made precise, you're probably wrong."

      • He introduces "denotational design" as a methodology that ensures precise, simple, and compelling specifications: "denotational design... gives us precise, simple and compelling specifications."

      • Elliott discusses the advantages of continuous, infinite models in functional programming, as opposed to discrete, finite models: "locations are functions in two-dimensional... infinite continuous space."

      • He proposes the use of standard algebraic abstractions like functor, monoid, and applicative to simplify and generalize software design: "standard algebraic abstractions, things like functor and monoid and applicative."

      • The talk covers the process of refining an API for image synthesis by identifying and generalizing operations: "we're going to design functional apis... for image synthesis and manipulation."

      • Elliott stresses the importance of compositionality in software design, where the meaning of compositions is defined in terms of the meanings of their components: "the meaning of a composition... is defined in terms of only the meanings of the components."
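
      That compositionality principle can be illustrated with the images-as-functions model the talk uses. A hedged Go sketch (the types and names are mine, not Elliott's Haskell): an image denotes a function from continuous 2D space to an optional color, and `over` is defined purely from the meanings of its two arguments:

```go
package main

import "fmt"

// An image denotes a function from continuous 2D space to an optional
// color (the bool reports whether the image is opaque at that point).
type Image func(x, y float64) (string, bool)

// over's meaning is defined solely from its components' meanings:
// the top image's color where it is opaque, otherwise the bottom's.
func over(top, bottom Image) Image {
	return func(x, y float64) (string, bool) {
		if c, ok := top(x, y); ok {
			return c, true
		}
		return bottom(x, y)
	}
}

// disc is a solid circle of the given color.
func disc(cx, cy, r float64, color string) Image {
	return func(x, y float64) (string, bool) {
		dx, dy := x-cx, y-cy
		return color, dx*dx+dy*dy <= r*r
	}
}

func main() {
	img := over(disc(0, 0, 1, "red"), disc(0.5, 0, 1, "blue"))
	c1, _ := img(0, 0)   // inside both discs: top wins
	c2, _ := img(1.2, 0) // only inside the blue disc
	fmt.Println(c1, c2)  // red blue
}
```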

      • He refines the design by leveraging standard abstractions and type classes, illustrating with examples like monoids, functors, and applicatives: "standard vocabulary in Haskell means type classes... comes with laws that hold."

      • Elliott concludes with examples showing how denotational design can lead to precise, efficient, and maintainable software, encouraging further exploration and application of these principles: "examples of how you can calculate a correct implementation because in this case the meaning function has an inverse."

    1. Summary of "Flecs v4.0 is out!" by Sander Mertens

      What is Flecs?

      • Flecs is an Entity Component System (ECS) for C and C++ designed for building games, simulations, and other applications.
      • “Store data for millions of entities in data structures optimized for CPU cache efficiency and composition-first design.”
      • “Find entities for game systems with a high performance query engine that can run in time critical game loops.”
      • “Run code using a multithreaded scheduler that seamlessly combines game systems from reusable modules.”
      • “Builtin support for hierarchies, prefabs and more with entity relationships which speed up game code and reduce boiler plate.”
      • “An ecosystem of tools and addons to profile, visualize, document and debug projects.”
      • Open-source under the MIT license.

      Release Highlights for Flecs v4.0

      • Over 1700 new commits since v3. “More than 1700 new commits got added since v3 with the repository now having upwards of 4700 commits in total.”
      • Closed and merged 400+ issues and PRs from community members. “More than 400 issues and PRs submitted by dozens of community members got closed and merged.”
      • Discord community grew to over 2300 members; GitHub stars doubled from 2900 to 5800.
      • Test cases increased from 4400 to 8500, with test code growing from 130K to 240K lines.

      Adoption of Flecs

      • Used by both small and large projects, including the highly anticipated game Hytale. “Flecs provides the backbone of the Hytale Game Engine. Its flexibility has allowed us to build highly varied gameplay while supporting our vision for empowering Creators.”
      • Tempest Rising uses Flecs to manage high unit counts and spatial queries. “We are using it [Flecs] mostly to leverage high count of units. Movement (forces / avoidance), collisions, systems that rely on spatial queries, some gameplay related stuff.”
      • Smaller games like Tome Tumble Tournament use Flecs for movement rules.

      Language Support and Community Contributions

      • Flecs Rust binding released, actively developed and in alpha. “An enormous amount of effort went into porting over all of the APIs, including relationships, and writing the documentation, examples, and tests.”
      • Flecs.NET (C#) has become the de facto C# binding. “The binding closely mirrors the C++ API, and comes bundled with documentation, examples and tests.”

      New Features in v4.0

      • Unified query API simplifies usage and enhances functionality. “The filter, query, and rule implementations now have been unified into a single query API.”
      • Explorer v4 offers a revamped interface and new tools. “The v4 explorer has a few new tricks up its sleeve, such as a utility to capture commands, editing multiple Flecs scripts at the same time, the ability to add & remove components, and new tools to inspect queries, systems and observers.”
      • Flecs Script for easy entity and component creation, with improved syntax and a faster template engine. “Flecs Script got completely overhauled, with an improved syntax, more powerful APIs and a much faster template engine.”
      • Sparse components for stable component pointers and performance gains. “Sparse components don’t move when entities are moved between archetypes. Besides being good for performance, this also means that Flecs now supports components that aren’t movable!”
      • Overhauled demos showcasing new features and enhanced graphics. “The Tower Defense demo has been overhauled for v4 to better showcase Flecs features, while also quadrupling the scale of the scene!”
      • Improved inheritance model, now opt-in for better performance. “When a prefab is instantiated in v4, components are by default copied to the instance.”
      • Member queries reduce overhead and simplify relationships. “In v4 queries can directly query entity members as if they were relationship targets, which is like having relationships without the fragmentation!”
      • Flecs Remote API for connecting to Flecs applications remotely. “The new Flecs Remote API includes a simpler JSON format, a new REST API with a cleaner design, and a new JavaScript library for the development of web clients that use Flecs data.”
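
      The sparse-component point is worth unpacking. The sketch below is conceptual Go, not the Flecs C/C++ API: archetype (dense) storage packs components into per-table arrays, so moving an entity between tables relocates its data, while a sparse store keyed by entity id hands out pointers that stay valid across such moves:

```go
package main

import "fmt"

type Position struct{ X, Y float64 }

// Sparse stores one component per entity id; the pointer it hands out
// never moves, no matter how the entity's archetype membership changes.
type Sparse map[int]*Position

func (s Sparse) Ensure(id int) *Position {
	if p, ok := s[id]; ok {
		return p
	}
	p := &Position{}
	s[id] = p
	return p
}

func main() {
	s := Sparse{}
	p := s.Ensure(42)
	p.X = 10
	// Dense (archetype) storage would instead keep Position in a packed
	// per-table array; moving entity 42 to another table copies the
	// struct and invalidates old references. With sparse storage the
	// same pointer stays valid:
	fmt.Println(s.Ensure(42) == p, s.Ensure(42).X) // true 10
}
```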

      Documentation and Future Directions

      • Improved and expanded documentation covering new features in depth. “Several weeks of the v4 release cycle were spent on improving the documentation and making sure it’s up to date.”
      • Future updates planned: reactivity frameworks, dense tree storage, dense/sparse tables, pluggable storages, and a node API.

      Community Acknowledgment

      • Special thanks to community members and sponsors who contributed to the development and support of Flecs v4. “A special thanks to everyone that contributed PRs and helped with the development of Flecs v4 features.”

      This summary encapsulates the key updates, features, and community efforts surrounding the release of Flecs v4.0, highlighting its impact and future potential.

    1. Erlang Mailboxes

      • Process Association: Mailboxes are tied to processes, not first-class values. "The processes are, and the mailboxes go with them."
      • Pattern Matching and Out-of-Order Reception: Messages can be received out of order using pattern matching, crucial for RPC simulations. "You send a message out to some other process, and then receive the reply by matching on the mailbox."
      • Zero-or-One Delivery: Messages are reliably delivered exactly once within the same OS process but can be lost across nodes. "Messages going to the same OS process are reliable enough to be treated as reliable exactly-once delivery but you are not really supposed to count on that."
      • Asynchronous Nature: Mailboxes operate asynchronously and do not guarantee message arrival without explicit acknowledgment. "You can not wait on 'the message has arrived at the other end' because in a network environment this isn't even a well-defined concept."
      • Erlang Terms Only: Mailboxes send and receive only Erlang terms, which are dynamically-typed without user-defined types. "Erlang terms are an internal dynamically-typed language with no support for user-defined types."
      • Single Process Visibility: Each mailbox is visible to only one Erlang process, making communication many-to-one across the cluster. "A given mailbox is only visible to one Erlang process; it is part of that process."
      • Debug Mechanisms: There are tools to inspect mailboxes live, useful for devops but not recommended for system operations. "You really shouldn't be using these debug facilities as part of your system, but you can use them as a devops sort of thing."

      Go Channels

      • First-Class Values: Channels are independent of goroutines and can be passed around freely. "One goroutine may create a channel and pass it off to two others who will communicate on it."
      • Intrinsic Order: Channels are ordered, preventing receivers from selectively pulling messages. "Receivers can't go poking along the channel to see what they want to pull out of it."
      • Select Statement Flexibility: Go's "select" statement allows a goroutine to wait on multiple communication operations, unlike Erlang's pattern matching. "A single goroutine can wait on an arbitrary combination of 'things I'm trying to send' and 'things I'm trying to receive'."
      • Synchronous Delivery: Channels ensure synchronous, exactly-once delivery, limited to a single machine and process. "It is also in general guaranteed that if you proceed past a send on a channel, that some other goroutine has received the value."
      • Typed Channels: Channels send specific Go types, with flexibility using interfaces. "Each channel sends exactly one type of Go value, though this value can be an interface value."
      • Many-to-Many Communication: Channels support many-to-many communication within a single OS process. "It is perfectly legal and valid to have a single channel value that has dozens of producers and dozens of consumers."
      • Opacity: Channels are opaque, preventing peeking into their contents, maintaining the integrity of the send-receive guarantee. "There is no 'peek', which would after all break the guarantee that if an unbuffered channel has had a 'send' that there has been a corresponding 'receive'."

      Comparative Analysis

      • Implementation Constraints: Each system has unique features that cannot be replicated in the other without losing significant capabilities. "You can not implement one in terms of the other."
      • Problem-Solving Capabilities: Both systems effectively solve most communication problems but have distinct strengths and weaknesses. "It's not clear to me that either is 'better'; both have small parts of the problem space where they are better than the other."

      This summary captures the essential differences and functionalities of Erlang mailboxes and Go channels, highlighting their unique approaches to process communication and synchronization.

    1. Summary of Joe Armstrong's Interview on Erlang

      Introduction and Current Involvement

      • Joe Armstrong is the principal inventor of Erlang and coined the term "Concurrency Oriented Programming".
      • "Today I go round and give talks about Erlang, promoting Erlang - that's one side of what I do."

      Companies Using Erlang

      • Erlang is used by Kreditor (financials), TLF (network management systems), and Synapse (mobile phone provisioning) in Sweden.
      • "Each one of them employs about 30 people and they are probably market leading in each of their areas, very niched areas."

      Popularity and Strength of Erlang in Concurrency

      • Ralph Johnson noted Erlang’s superiority in handling concurrency, allowing millions of processes compared to 10,000-20,000 in other languages.
      • "In Erlang the notion of a process is part of a programming language, is not part of the operating system."

      Theoretical Basis

      • Erlang is based on the Actors model of computation and is a pure message-passing language.
      • "The theoretical basis would be Actors model of computation, Carl Hewitt."

      Development and Changes in Erlang

      • Future changes will be minimal to avoid breaking legacy code, focusing mainly on libraries rather than syntax.
      • "I think we'll see very few changes in the language itself. We'll see changes to the libraries and things like that."

      Comparison with Object-Oriented Programming (OOP)

      • Armstrong criticizes OOP for its complexity and inefficiency, favoring Erlang’s messaging model for true object-oriented behavior.
      • "Erlang is actually more object oriented, truer to the spirit of pure object orientation than all object-oriented languages."

      Garbage Collection and Multicore Processing

      • Erlang’s soft real-time behavior minimizes issues with garbage collection, even in multicore environments.
      • "It's extremely unusual that Erlang programs are bothered by garbage collection issues."

      Interfacing with Other Languages: - Erlang deliberately avoids linking with other languages’ memory spaces to ensure fault tolerance. - "Erlang is built for fault tolerant systems and therefore it does not allow you to link anything into the same memory space."

      Philosophy of Connecting Components: - Armstrong advocates for simple, message-based connections similar to Unix pipes over complex API integrations. - "There is an easy and a difficult way to connect components together, and the easy way - the prime example is the Unix pipe mechanism."

      High-Performance Erlang (HiPE): - HiPE compiles Erlang to native code, enhancing performance. - "Yes, this is high performance Erlang, done at the University of Uppsala."

      Advantages of a Register Machine: - Erlang’s VM is a register machine, which is more efficient than a stack machine. - "It's better to have a register machine than a stack machine."
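The register-vs-stack contrast can be illustrated with a toy sketch (Python, with hypothetical instruction names, not the BEAM's actual instruction set): evaluating (a + b) * c takes five stack-machine instructions but only two register-machine instructions, because operands are named explicitly instead of being pushed and popped.

```python
def run_stack(code, env):
    """Stack machine: operands flow through an implicit stack."""
    stack = []
    for op, *args in code:
        if op == "PUSH":
            stack.append(env[args[0]])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_register(code, regs):
    """Register machine: every operand is named, so there are fewer
    instructions and no stack traffic."""
    for op, dst, x, y in code:
        if op == "ADD":
            regs[dst] = regs[x] + regs[y]
        elif op == "MUL":
            regs[dst] = regs[x] * regs[y]
    return regs["r3"]          # result register, by convention in this toy

# (a + b) * c with a=2, b=3, c=4
stack_code = [("PUSH", "a"), ("PUSH", "b"), ("ADD",), ("PUSH", "c"), ("MUL",)]
reg_code = [("ADD", "r2", "r0", "r1"), ("MUL", "r3", "r2", "rc")]

print(run_stack(stack_code, {"a": 2, "b": 3, "c": 4}))      # 20
print(run_register(reg_code, {"r0": 2, "r1": 3, "rc": 4}))  # 20
```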

      This summary encapsulates the essence of Joe Armstrong's insights on Erlang, its development, advantages, and practical applications, while highlighting key quotes and ideas from the original interview.

      • The talk honors Joe Armstrong, the principal inventor of Erlang, and discusses his significant contributions to the field of computer science.

        • "Joe Armstrong the principal inventor of Erlang who unfortunately recently passed away and so Joe sadly will not be with us anymore but his work lives on and this is what we're going to explore today."
      • The speaker shares personal enthusiasm for Erlang and Elixir, aiming to convey their benefits and joy in using these languages.

        • "I've been using Erlang and Elixir for quite some time now, and for me personally this is by far my favorite part of my own professional career."
      • The focus is on the Erlang virtual machine (BEAM), which underlies multiple languages and serves as the core of the talk.

        • "The main reason why I can do this is because I'm focusing on the common theme between these languages, and this is the runtime layer, the Erlang virtual machine, which is also known under the name BEAM."
      • The talk is structured into three parts: introduction to BEAM concurrency, a demo-driven exploration of BEAM concurrency in practice, and a think piece on alternative software building styles enabled by BEAM.

        • "The talk is going to consist of three parts. I'm going to kick off with a basic introduction to the principles and mechanics of BEAM concurrency... then finally I'm going to wrap up with a short and somewhat controversial think piece exploring a different style of building software systems."
      • BEAM-style concurrency revolves around the concept of processes, which are lightweight, isolated, and communicate via message passing.

        • "The central idea, the central concept here, is the concept of a thing called a process, and a process in BEAM terminology is a runtime execution context of code."
      • Processes in BEAM are different from OS processes or threads; they are lightweight and support high concurrency.

        • "A process is not an OS process and it is not an OS thread, and we're going to clarify this distinction a bit later on, but essentially what a process is, is basically a sequential program."
      • BEAM supports the creation of many processes, with examples showing the spawning of 10,000 processes in a second.

        • "In a matter of a single second we have 10,000 processes running around, 10,000 of these independent programs running around our system."
      • The system demonstrates fault tolerance by isolating failures in individual processes, allowing the rest of the system to continue functioning.

        • "The calculation process has crashed and burned, there was some unhandled exception in the calculation process and it crashed... we have a stable success rate of 10K successes per second."
      • BEAM's preemptive scheduling ensures fair distribution of CPU time among processes, even under heavy load.

        • "Beam scheduler performs very frequent context switching like in less than 1 millisecond intervals and it does it with proper preemption."
      • The speaker shows how to debug and fix issues in a running system using BEAM's introspection capabilities, emphasizing the runtime's observability.

        • "BEAM is a runtime which is highly debuggable and inspectable, observable if you will. So BEAM allows us to hook into the running system and peek and poke inside it and get a lot of useful information."
      • The talk highlights the importance of technical uniformity, where using BEAM and Elixir minimizes the need for multiple disparate technologies, simplifying the development and maintenance processes.

        • "The technical bar is significantly lower, there are fewer technical things to learn, and that's a good thing: when a new developer comes on board, they learn themselves a bit of Elixir and they can immediately contribute to any part of the system."
      • BEAM supports distributed systems, allowing processes to run across multiple nodes, though it has some mechanical issues that need addressing.

        • "BEAM distribution is sort of plagued with a bunch of issues, serious issues. So while many people do use distributed BEAM in production, there are also many people who would tell you maybe not to use that thing."
      • The potential of BEAM is seen in its ability to combine quick start simplicity with long-term scalability and robustness.

        • "The artificial trade-off of having to choose between a quick-start technology and the long-run one disappears, flies out the window, and I get both in the same piece of technology."
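As a rough analogy to the talk's 10,000-process demo, the sketch below (Python asyncio, my own construction, not from the talk) spawns 10,000 concurrent tasks in well under a second. The analogy is loose: asyncio tasks are cooperative and share memory, whereas BEAM processes are preemptively scheduled and fully isolated.

```python
import asyncio

async def worker(i: int) -> int:
    # yield to the scheduler once, like a process briefly waiting
    await asyncio.sleep(0)
    return i * i

async def main() -> int:
    # spawn 10,000 independent concurrent tasks
    tasks = [asyncio.create_task(worker(i)) for i in range(10_000)]
    results = await asyncio.gather(*tasks)
    return len(results)

print(asyncio.run(main()))   # 10000
```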
    1. Summary of Electric Closure Presentation

      Overview and Purpose

      • The presentation discusses the motivation, applications, and technical workings of Electric.
      • Electric is designed for enterprise apps, specifically single-page applications with complex business rules and validations.
      • It aims to solve issues encountered in CRUD apps by simplifying state management, network interaction, and rendering.

      Application Examples

      • Partner apps for internal operations, such as support apps for startups, demonstrate Electric's capabilities.
      • Electric enables the creation of apps that look and function like spreadsheets, emphasizing strong user experience over styling.

      Technical Approach

      • Electric uses a DSL (domain-specific language) to encode operational CRUD apps.
      • It allows the creation of live CRUD apps through a DSL that generates full-stack applications in real-time.
      • The key innovation is the ability to compose functions directly with the DOM, simplifying the data flow and reducing the need for heavy frameworks.

      Challenges and Solutions

      • Traditional approaches to managing CRUD apps involve complex state and network interactions.
      • Electric proposes a direct composition model, eliminating the need for frameworks like ORMs and simplifying the data flow.
      • The system uses reactive programming to manage interleaved client-server data flows efficiently.

      Architecture and Implementation

      • Electric compiles programs into a directed acyclic graph (DAG), managing data flow between client and server.
      • The DAG enables efficient rendering and state management, minimizing unnecessary computations and optimizing performance.
      • Missionary, the functional effect system used by Electric, supports glitch-free rendering, cancellation, resource lifecycle management, and composability.

      Practical Examples and Demos

      • The chat app demo shows multiplayer capabilities by streaming state changes across clients.
      • The 2D MVC demo illustrates full-stack applications created with Electric, highlighting its ability to handle real-time data entry and server-client synchronization.

      Future Directions and Goals

      • Electric aims to support more advanced use cases, such as offline-first applications and more complex data interactions.
      • The project is committed to improving build times, optimizing network traffic, and refining the development experience.
      • The team envisions Electric as a foundational technology for richer, more dynamic web applications, emphasizing strong composition and functional effects.

      Key Takeaways

      • Electric simplifies the development of enterprise applications by using a functional effect system to manage state and data flow.
      • It provides a declarative approach to building CRUD apps, reducing accidental complexity and enhancing code readability and maintainability.
      • The system's architecture ensures efficient real-time data synchronization and rendering, making it suitable for high-performance applications.

      Relevant Quotes

      • "We're going to talk about why we built this and what it's for."
      • "Electric is designed for enterprise apps, single-page applications."
      • "The system uses reactive programming to manage interleaved client-server data flows efficiently."
      • "Electric compiles programs into a directed acyclic graph (DAG), managing data flow between client and server."
      • "Missionary, the functional effect system used by Electric, supports glitch-free rendering, cancellation, resource lifecycle management, and composability."
    1. Summary of "Propagation Networks: A Flexible and Expressive Substrate for Computation" by Alexey Andreyevich Radul

      Abstract and Introduction

      • Shift in Computation Foundations: Proposes a shift from single-computer, large-memory models to networks of local, independent, stateless machines interconnected with stateful storage cells.

        • Quote: "The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells."
      • General-Purpose Propagation System: Introduces a prototype for a general-purpose propagation system to enhance expressive power and flexibility.

        • Quote: "I present in this dissertation the design and implementation of a prototype general-purpose propagation system."

      Chapter 1: Time for a Revolution

      • Expression Evaluation Limitations: Traditional programming systems are limited by time constraints and the need for complete values.

        • Quote: "The successful evaluation of an individual expression, or, equivalently, the successful production of an individual value, inescapably marks a point in time."
      • Propagation Paradigm: Argues for computation as a network of interconnected machines propagating information, allowing for non-linear time management.

        • Quote: "The commonality is to organize computation as a network of interconnected machines of some kind, each of which is free to run when it pleases, propagating information around the network as proves possible."

      Chapter 2: Design Principles

      • Propagators: Describes propagators as asynchronous, autonomous, and stateless machines communicating through cells.

        • Quote: "Let us likewise posit that our propagators are autonomous, asynchronous, and always on—always ready to perform their respective computations."
      • Simulation Until Quiescence: Simulation focuses on achieving a quiescent state where no further propagation occurs without external input.

        • Quote: "The objective of our simulation of such a system is then the faithful computation of such a steady, or quiescent, state, rather than the faithful reproduction of the intermediate states that lead up to it."
      • Cells Accumulate Information: Cells do not store values but accumulate information about values, enabling incremental computation and multidirectionality.

        • Quote: "I propose, in contrast, that we should think of a cell as a thing that accumulates information about a value."

      Chapter 3: Core Implementation

      • Basic Example - Temperature Conversion: Demonstrates a simple propagator network converting Fahrenheit to Celsius, emphasizing the structure and low-level nature of propagator networks.

        • Quote: "If we have a Fahrenheit temperature in cell f, we can put the corresponding Celsius temperature in cell c by the formula c = (f − 32) ∗ 5/9."
      • Simulation Details: Details the implementation of a basic scheduler to handle the execution of propagators, ensuring they run in an arbitrary order.

        • Quote: "This dissertation, therefore, uses a relatively boring software scheduler that maintains a queue of pending propagators, picking them off one by one and running them until there are no more."
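A minimal sketch of these two ideas together, a propagator network computing c = (f − 32) * 5/9 and a "boring" queue-based scheduler that runs propagators until quiescence, might look like this (Python, with hypothetical names; Radul's real system is multidirectional and accumulates partial information, which this toy omits):

```python
class Cell:
    def __init__(self):
        self.content = None
        self.neighbors = []          # propagators to re-run when content changes

    def add_content(self, value):
        if value is not None and value != self.content:
            self.content = value
            pending.extend(self.neighbors)

pending = []                          # the scheduler's queue of pending propagators

def propagator(inputs, output, fn):
    def run():
        vals = [cell.content for cell in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for cell in inputs:
        cell.neighbors.append(run)
    pending.append(run)

def run_until_quiescent():
    # pick pending propagators off one by one until none remain
    while pending:
        pending.pop(0)()

f, c = Cell(), Cell()
propagator([f], c, lambda fv: (fv - 32) * 5 / 9)   # c = (f - 32) * 5/9
f.add_content(212.0)
run_until_quiescent()
print(c.content)                      # 100.0
```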

      Chapter 4: Dependencies

      • Dependency Tracking: Introduces dependency tracking to improve expressiveness and support for alternate worldviews, contradictions, and search optimization.
        • Quote: "Dependencies track Provenance."

      Chapter 5: Expressive Power

      • Applications: Discusses various applications of propagation networks, including constraint satisfaction, logic programming, and probabilistic programming.
        • Quote: "Propagation systems allow us to embed constraint satisfaction naturally."

      Chapter 6: Towards a Programming Language

      • Language Design: Explores potential programming languages based on propagation networks, focusing on conditionals, abstraction mechanisms, and partial information strategies.
        • Quote: "Conditionals just work."

      Chapter 7: Philosophical Insights

      • Broader Implications: Reflects on the philosophical implications of propagation networks on concurrency, time, space, and side effects.
        • Quote: "On Concurrency."

      Appendix

      • Implementation Details: Provides detailed implementation notes, code fragments, and design considerations for the prototype propagation system.

      This summary encapsulates the essence and key points of Radul's dissertation, offering insights into the innovative propagation network model and its implications for the future of computation.

    1. Summary of Gerald J. Sussman's Tech Talk on Programming

      Introduction

      • Event and Speaker Introduction:
        • Jointly sponsored by IEEE Computer Society chapter and Greater Boston chapter of ACM.
        • Speaker: Gerald J. Sussman, a professor of electrical engineering at MIT, co-author of the influential "Structure and Interpretation of Computer Programs" book used globally for computer science education.
      • Event Context:
        • Talks usually occur monthly and are still virtual due to COVID-19.
        • Recordings available on chapter websites.

      Programming History and Evolution

      • Early Programming Experiences:
        • Sussman began programming in 1962.
        • Initial experiences with large, expensive computers like the IBM 7090 system and later at MIT with the PDP-6.
        • Cost and capabilities of early computers highlighted (e.g., the IBM 7090 cost over $15 million with less than a megabyte of memory).

      Philosophical Insights on Programming

      • Nature of Programming:
        • Programming is not merely coding but a form of abstract engineering design.
        • Specifications are often unclear or evolve during the development process.
        • Analogous to creative arts like poetry, music, and architecture.
      • Creative Process:
        • Importance of planning and debugging, comparing programming to the creation of artistic works and scientific theories.
        • Emphasis on programming as a process of continuous refinement and exploration.
        • Quote: "Programming is fun because it allows for creative problem-solving and exploration of new ideas."

      Programming as a Learning Tool

      • Education and Influence:
        • Shared experiences from his freshman year at MIT learning LISP and its transformative impact.
        • The role of programming in enhancing understanding of complex subjects like physics and mathematics.
      • Historical Comparisons:
        • Comparison of programming concepts like the eval-apply interpreter to Maxwell’s equations in physics.
        • Philosophical reflections on dualities and the nature of identity in programming.

      The Joy of Debugging

      • Debugging as a Learning Opportunity:
        • Bugs are not failures but opportunities to learn and improve designs.
        • Debugging described as a detective adventure.
        • Quote: "Interesting bugs are a consequence of doing things right because they reveal the limitations of our initial assumptions."

      Modern Programming Challenges and Fun

      • Modern Tools and Systems:
        • Critique of languages like C and Python for their inconsistencies and potential to introduce errors.
        • Praise for the flexibility and expressiveness of languages like LISP and Scheme.
      • Innovations in Programming:
        • Shared personal anecdote about developing automatic differentiation using dual numbers.
        • Collaboration with other researchers to extend programming capabilities in areas like classical mechanics and differential geometry.
        • Quote: "Programming languages need to be flexible enough to allow for abstract thinking and extension of primitive operations."
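Forward-mode automatic differentiation with dual numbers is easy to sketch (Python; my own illustration, not Sussman's scmutils code): extend arithmetic to numbers of the form a + b·ε with ε² = 0, and the ε coefficient carries the derivative along for free.

```python
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b        # value part and derivative part

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).b         # seed with dx/dx = 1

f = lambda x: x * x * x + 2 * x      # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
print(derivative(f, 2.0))            # 14.0
```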

      Conclusions

      • Programming as an Intellectual Pursuit:

        • Emphasis on programming as a source of intellectual pleasure through discovery, philosophical contemplation, and creative problem-solving.
        • Encouragement to view programming as a means of achieving clarity in thought and expression.
        • Quote: "The fun in programming comes from the joy of discovery and the satisfaction of creating something that works beautifully."
      • Call to Action:

        • Encouragement to share work with others and contribute to free software initiatives.
        • Quote: "Programming is most fun when you can share your work with others. Write and use free software."

      Additional Resources and References

      • Further Learning:
        • Mention of influential books and works by Sussman and collaborators, such as "Structure and Interpretation of Computer Programs" and "Software Design for Flexibility."
        • References to external resources like Spivak's "Calculus on Manifolds" for mathematical notation and theories.

      This summary captures the main ideas and key points from Gerald J. Sussman's talk, highlighting the evolution of programming, the joy of creative problem-solving, and the importance of sharing knowledge and tools within the programming community.

    1. Cell Pond was to be able to do everything that this language can do, which is called SpaceTode, and everything that I've ever made in SpaceTode (which I have in a repo on GitHub) I need to be able to make in Cell Pond, so those primitives need to do all of that, and I just kept iterating.
      • Work toward a collection of use-cases
      • Always design & test against all of them
      • Introduction to Presentation

        • Speaker introduces themselves as Lou or Luke and begins the presentation titled "Cell Pond: Spatial Programming Without Escape."
        • The presentation is structured into three parts: spatial programming, Cell Pond, and the concept of programming without escape.
      • Part 1: Spatial Programming

        • Lou shares a personal story from 23 years ago about their interest in trains and computers, learning to code with Stagecast Creator.
        • Stagecast Creator involved drag-and-drop spatial rules, where actions were defined by before-and-after pictures, such as character movement and obstacle navigation.
        • Despite its popularity, spatial programming had limitations, especially in 3D environments, requiring many rules for simple tasks.
        • Block-based coding became the norm, leading to a decline in spatial programming's popularity.
      • Spatial Programming Tools

        • Various spatial programming tools are introduced, such as Splats, Color Code, NetBoden, and Viskit, each serving different purposes like education, art, simulation, and games.
        • Lou introduces their own tool, Sand Pond, which uses before-and-after pictures to code elements like sand, water, and rock.
      • Challenges of Spatial Programming

        • The main challenge is the reliance on escape hatches, where more complex tasks require traditional programming, not just spatial rules.
        • Tools like Stagecast Creator and others incorporate lines of code for advanced functions, highlighting the limitation of pure spatial programming.
      • Part 2: Cell Pond

        • Lou demonstrates Cell Pond, a tool where users can paint and create rules using color channels and directional triangles to manipulate spatial elements.
        • Example: A rule turning green into pink is shown, highlighting how spatial rules work dynamically.
        • Users can create complex simulations with multiple rules, illustrating conservation of properties like color across rules.
        • A key feature is creating generalized rules that apply to multiple elements, such as sand, water, and rock.
      • Concept of Conservation in Cell Pond

        • Conservation ensures that properties remain consistent when elements interact, demonstrated through color copying.
        • Complex behaviors like diffusion are shown, where rules allow elements to swap positions randomly.
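A diffusion rule of the kind described, randomly swapping neighboring cells while conserving the elements themselves, can be sketched outside Cell Pond in a few lines (Python; illustrative only, not Cell Pond's actual rule engine):

```python
import random

def diffuse(grid, steps, seed=0):
    rng = random.Random(seed)
    grid = list(grid)
    for _ in range(steps):
        i = rng.randrange(len(grid) - 1)
        # the rule's "after" picture: the two cells trade places,
        # conserving the multiset of elements
        grid[i], grid[i + 1] = grid[i + 1], grid[i]
    return grid

start = list("sssswwww")             # sand on the left, water on the right
end = diffuse(start, 200)
# conservation: nothing is created or destroyed, only rearranged
assert sorted(end) == sorted(start)
print("".join(end))
```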
      • Advanced Features of Cell Pond

        • Cell Pond supports splitting and merging cells to increase or decrease resolution and store more data.
        • Example: Creating a fractal by continuously splitting cells, demonstrating Cell Pond's potential for complex simulations.
      • Part 3: Programming Without Escape

        • Lou questions if it's possible to create a powerful spatial programming tool without escape hatches.
        • The breakthrough came from creating a virtual machine called Dragon, which includes instructions for memory operations, streamlining the creation of spatial rules.
      • Implications and Future Directions

        • Lou reflects on how the virtual machine (VM) simplifies the process, leading to better UI development.
        • The goal is to develop higher-level spatial programming languages, exploring what a "C of spatial programming" might look like.
      • Q&A Session

        • Lou discusses the iterative design process and the importance of strict goals to achieve the intended functionality of Cell Pond.
        • Comparisons to other games like "Falling Sand" and "Baba is You" are made, highlighting influences and potential future directions.
        • Lou addresses the balance between adding UI features and maintaining VM capabilities, emphasizing the VM's core role in enabling advanced functions.
        • The presentation concludes with a discussion on live programming's role in creativity and how it fosters exploration and discovery in computational creativity.
  15. Jun 2024
    1. Summary of David Nolan's Talk on Predicate Dispatch and Pattern Matching in Clojure

      • Introduction to Predicate Dispatch:

        • David Nolan introduces the concept of predicate dispatch and its benefits, calling it "incredible benefits of wishful thinking".
        • Nolan references an email from Rich Hickey mentioning predicate dispatch as an improvement over current multi-methods.
      • Historical Context and Personal Journey:

        • Nolan recounts his journey from discovering William Byrd’s dissertation on miniKanren to implementing core.logic in Clojure.
        • He acknowledges the influence of logic programming and the flexibility of Clojure in incorporating such advanced concepts.
      • Core.logic and Performance Enhancements:

        • Nolan's implementation of core.logic made it idiomatic to Clojure with significant performance enhancements.
        • He highlights features like knowledge base facilities similar to Prolog, which allow defining FAQs and running queries.
      • Pattern Matching Exploration:

        • Nolan describes his deep dive into the theory of pattern matching, inspired by William Byrd's alphaKanren and Luc Maranget's work on compiling pattern matching to decision trees.
        • Maranget's approach uses lazy pattern matching to create shorter decision trees, enhancing efficiency.
      • Predicate Dispatch in Detail:

        • Nolan discusses Craig Chambers’ paper on efficient predicate dispatch, noting its comprehensive handling of single and multiple dispatching, predicate classes, and classifiers in pattern matching.
        • He explains the challenges in implementing a flexible and efficient predicate dispatch system, mentioning the hardwired handling of numeric operations in Chambers’ system.
      • Limitations of Multi-Methods in Clojure:

        • Multi-methods in Clojure are powerful but limited by the dispatch function, which can restrict future extensions.
        • Nolan emphasizes the need for a more flexible system like predicate dispatch, which allows for dynamic and open-ended dispatching.
      • Core.match and Extensibility:

        • Nolan's core.match library aims to provide a pattern matching facility in Clojure with extensibility in mind.
        • Core.match uses a pattern matrix to optimize decision trees, allowing for efficient pattern matching by reordering tests based on the most constrained variables.
      • Examples of Pattern Matching:

        • Nolan provides examples of matching sequences and maps, demonstrating how core.match handles different data structures efficiently by analyzing and reordering patterns.
        • The algorithm ensures minimal testing by leveraging random access in maps and optimized testing order for sequences.
      • Challenges in Dynamic Predicate Dispatch:

        • Nolan addresses the complexity of making predicate dispatch dynamic and extensible, highlighting the need for namespace-local changes to avoid breaking existing code.
        • He proposes using core.logic to drive the core.match compiler, ensuring logical consistency and efficient dispatch tree generation.
      • Future Directions and Open Questions:

        • Nolan suggests several strategies for maintaining performance in a dynamic system, such as lazy compilation and grouping predicate dispatch definitions.
        • He invites the community to contribute ideas and solutions to the challenges of dynamic predicate dispatch, emphasizing Clojure’s flexibility and the importance of non-overlapping clauses.
      • Conclusion and Community Engagement:

        • Nolan concludes by encouraging the community to experiment and innovate with predicate dispatch and pattern matching in Clojure, aiming for next year's advancements to be a reality rather than wishful thinking.
      • Introduction to re:Invent 2021:

        • Werner Vogels, CTO of Amazon.com, celebrates the 10-year anniversary of re:Invent.
        • Emphasis on the evolution and innovation in cloud computing over the past decade.
      • Historical Context and AWS Lambda:

        • Introduction of AWS Lambda as an event-driven compute service eliminating the need for servers: "introduce AWS Lambda, which is an event-driven compute service for a dynamic application, and you have to run no servers, no instances, server-free back ends."
        • Reflection on the cloud's impact on innovation and resource accessibility: "Suddenly, getting access to capacity was just a click of a button."
      • Evolution of AWS Services:

        • Transition from physical to programmable resources, enhancing scalability and flexibility: "all the hardware pieces... now became virtually programmable."
        • Development of specialized EC2 instances for various needs (compute, memory, storage): "Now, you want to store the optimized ones, you want compute optimized, you want to have large memory instances where you could run your SAP online."
      • Technological Innovations and Nitro Hypervisor:

        • Investment in data centers and introduction of the Nitro hypervisor for enhanced performance: "The Nitro hypervisor made it possible to introduce all these new hardware platforms for you."
        • Launch of Mac EC2 M1 instances, highlighting Apple's shift to custom silicon: "Today, I'm happy to announce that you will get your hands on the Mac EC2 M1 instances."
      • Scaling and Utilization of EC2:

        • Significant increase in EC2 usage: "60 million launches of EC2 each day," doubling since 2019.
        • Importance of scaling both up and down to optimize resource use and energy consumption.
      • AWS Global Footprint and Latency Reduction:

        • Expansion to 25 regions and 81 Availability Zones, with plans for more: "there are the nine more regions planned that we will bring online in the coming two years."
        • Development of Local Zones and AWS Wavelength for low-latency applications.
      • Advances in Networking and AWS Cloud WAN:

        • Introduction of AWS Cloud WAN for building and managing global networks: "AWS Cloud WAN, which gives you the ability to build, manage, and monitor global private wide area networks using AWS."
        • Shift from EC2-Classic to VPC as the default network model.
      • Edge Computing and IoT:

        • FreeRTOS and AWS IoT Core for managing billions of devices: "for that, we give you FreeRTOS as a stable base as an operating system for these devices."
        • Introduction of AWS Panorama Appliance for video stream analysis in various environments.
      • AWS in Space:

        • AWS Ground Station enabling space data processing and innovation: "Ground Station, data comes off to the Ground Station, you basically just went a 10-hour time to get your data of your satellites and move into AWS."
        • Case study of the Mohammed Bin Rashid Space Center using AWS for Mars exploration data.
      • Development Tools and Services:

        • Launch of AWS Amplify Studio for low-code development: "AWS Amplify Studio, which is a completely visual environment to build feature rich apps in hours and weeks."
        • Introduction of new SDKs for Swift, Kotlin, and Rust, expanding developer tools.
      • Customer Success and Best Practices:

        • Liberty Mutual's journey using AWS CDK for rapid, compliant development: "delivering industry leading capabilities like our unstructured data ingestion pipeline... all built using well architected CDK patterns."
        • Emphasis on reusable constructs and Well-Architected Framework to optimize development processes.
      • Sustainability and Energy Efficiency:

        • AWS's commitment to reducing carbon footprints through efficient cloud solutions: "moving on premises workloads to AWS can lower your carbon footprint by 88%."
        • Introduction of the AWS Carbon Footprint Tool and new Sustainability Pillar in the Well-Architected Framework.
      • Gaming and Distributed Systems:

        • Overview of New World, an MMORPG built on AWS architecture: "New World is a game designed by Amazon Games... a massively multiplayer online game where players from all over the world can connect together and play together."
        • Details on the scalable, serverless architecture enabling immersive gameplay and rapid scaling: "in New World, we were able to do this 30 times a second."
      • Final Advice and Vision:

        • Importance of security, simplicity, and innovation in building modern applications: "protecting your customers will be forever, should be forever your number one priority."
        • Encouragement to build with AWS's broad array of services and tools, emphasizing sustainability and efficient resource use.
    1. Summary of a Research Paper on Tokenization and the Less-is-Better (LiB) Model

      Abstract

      • Influence of Tokenization: Tokenization significantly influences language models (LMs)’ performance.
      • Evolution of Tokenizers: Traces the evolution from word-level to subword-level tokenizers.
      • Challenges with Subword Tokenizers: Subword tokenizers face difficulties with non-Latin languages and require extensive training data.
      • Principle of Least Effort: Introduces the Principle of Least Effort from cognitive science to improve tokenizers.
      • Less-is-Better (LiB) Model: Proposes the LiB model, which learns an integrated vocabulary of subwords, words, and multiword expressions (MWEs).
      • Evaluation of LiB Model: Comparative evaluations show that the LiB tokenizer outperforms existing word and BPE tokenizers.

      Introduction

      • Simplification of Information: Our brains simplify vast or intricate information into smaller segments to better understand and remember it.
      • Impact of Tokenizers on LMs: The choice of tokenizer crucially impacts the performance of language models.
      • Tokens and Types: Investigates the roles of tokens and types in tokenizer design to optimize performance.
      • Human Language Processing: Argues that tokenizers should emulate human language processing methods.

      From Word-level Tokenizers to Subword-level Tokenizers

      • Initial Word-level Tokenizers: Word-level tokenizers initially divided text into words using spaces and punctuation.
      • Limitations: Word-level tokenizers struggled with languages without clear word boundaries and flexible morphological inflections.
      • Rise of Subword Tokenizers: Subword tokenizers became mainstream due to their flexibility and ability to generalize.

      Balancing Tokens and Types by Subwords

      • Core Consideration: Balancing the number of tokens and types is crucial in transitioning from word-level to subword-level tokenizers.
      • Example of BPE and WordPiece: These tokenizers handle rare vocabularies by merging frequently occurring character pairs.
      • Efficiency of Subword Tokenization: Subword tokenizers reduce the number of types significantly while slightly increasing the number of tokens, enhancing model performance and adaptability.
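      The pair-merging mechanism described above can be sketched in a few lines. This is an illustrative Python toy of one BPE-style merge step, not the actual BPE or WordPiece implementations; `bpe_merge_step` and the sample corpus are invented for the example.

```python
from collections import Counter

def bpe_merge_step(corpus):
    """One BPE-style merge: find the most frequent adjacent symbol pair
    across the corpus and fuse it into a single new symbol (a new type)."""
    pairs = Counter()
    for word in corpus:
        pairs.update(zip(word, word[1:]))
    if not pairs:
        return corpus, None
    (a, b), _ = pairs.most_common(1)[0]
    merged = []
    for word in corpus:
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == (a, b):
                out.append(a + b)   # fuse the pair into one longer symbol
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged.append(out)
    return merged, (a, b)

corpus = [list("lower"), list("lowest"), list("low")]
corpus, pair = bpe_merge_step(corpus)
# pair == ('l', 'o'); every word now starts with the fused symbol 'lo'
```

      Repeating this step grows a vocabulary of increasingly long subwords while shrinking the token count of the corpus, which is exactly the token/type trade-off the paper describes.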

      Current Marginalization of Multiword Expressions (MWEs) in Language Models

      • Importance of MWEs: MWEs play a crucial role in everyday language but are often overlooked in LM development.
      • Challenges: Introducing MWEs increases the number of types and complexity, leading to rare or domain-specific MWEs being underrepresented in training data.
      • Potential Benefits: Direct recognition of MWEs can enhance the model’s language comprehension and accuracy.

      Optimizing Future Tokenizers

      • Human Language Processing: Emulating human language processing methods can optimize tokenizer design.
      • Principle of Least Effort (PLE): The PLE suggests that people follow paths that minimize effort, and this principle can guide the design of efficient tokenizers.

      Principle of Least Effort

      • Human Tokenization: Humans learn and recognize cognitive units (words, subwords, MWEs) that reduce language complexity.
      • Cognitive Units: These units are adaptable in size and form, reflecting how humans process language.
      • Zipf’s Principle: Zipf’s Principle of Least Effort states that people strive to minimize total work in solving immediate and future problems.

      LiB Model: An Implementation of ‘Principle of Least Effort’

      • Model Mechanism: The LiB model consists of a “Memorizer” that merges tokens into longer units and a “Forgetter” that removes less useful units, balancing the number of tokens and types.
      • Flexibility and Performance: The LiB model autonomously learns a vocabulary that includes subwords, words, and MWEs, performing better than traditional tokenizers in various evaluations.
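      A loose Python toy of the Memorizer/Forgetter loop, not the paper's actual algorithm (the function names, the `min_count` threshold, and the sample sequence are invented): the Memorizer fuses the most frequent adjacent pair of units into a longer unit, and the Forgetter prunes vocabulary entries the current encoding no longer uses, so the token count shrinks while the type count stays bounded.

```python
from collections import Counter

def memorize(seq, min_count=2):
    """Memorizer: fuse the most frequent adjacent pair of units into one
    longer unit, if it occurs often enough to be worth remembering."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs or pairs.most_common(1)[0][1] < min_count:
        return seq, None
    a, b = pairs.most_common(1)[0][0]
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            out.append(a + b)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, a + b

def forget(vocab, seq):
    """Forgetter: drop vocabulary units the current encoding no longer uses."""
    used = set(seq)
    return {u for u in vocab if u in used}

seq = list("abcabcabd")
vocab = set(seq)
while True:
    seq, unit = memorize(seq)
    if unit is None:
        break
    vocab.add(unit)
    vocab = forget(vocab, seq)
# seq is now ['abc', 'abc', 'ab', 'd'] with vocab {'abc', 'ab', 'd'}
```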

      This structured summary encapsulates the essence of the research paper while providing clear references to the original text for verification and further reading.

    1. Summary of "A History of Clojure" by Rich Hickey

      Key Points and Quotes:

      • Introduction and Design Goals:

        • "Clojure was designed to be a general-purpose, practical functional language, suitable for use by professionals wherever its host language, e.g., Java, would be."
        • It aims to combine the features of functional programming and Lisp while running on the runtime of another language such as the JVM.
      • Unique Positioning:

        • "Most of the ideas in Clojure were not novel, but their combination puts Clojure in a unique spot in language design (functional, hosted, Lisp)."
        • This combination allows Clojure to leverage existing libraries and interoperate efficiently with its host language.
      • Concurrency and State Management:

        • "It complements programming with pure functions of immutable data with concurrency-safe state management constructs that support writing correct multithreaded programs without the complexity of mutex locks."
        • This feature is crucial for developing robust multithreaded applications.
      • Adoption and Impact:

        • "In spite of combining two (at the time) rather unpopular ideas, functional programming and Lisp, Clojure has since seen adoption in industries as diverse as finance, climate science, retail, databases, analytics, publishing, healthcare, advertising, and genomics, and by consultancies and startups worldwide."
        • Clojure's adoption across various industries showcases its versatility and effectiveness.
      • Evolution and Community:

        • "This paper recounts the motivation behind the initial development of Clojure and the rationale for various design decisions and language constructs. It then covers its evolution subsequent to release and adoption."
        • The language has evolved through community contributions and the author's vision, continuously adapting to new challenges and requirements.
      • Host Language Integration:

        • "Clojure is intentionally hosted, in that it compiles to and runs on the runtime of another language, such as the JVM."
        • This intentional design choice allows Clojure to benefit from the performance and ecosystem of its host language.

      Main Ideas and Takeaways:

      • Practical Functional Language: Designed to be practical and usable in professional environments, leveraging the strengths of functional programming and Lisp.
      • Concurrency-Safe: Emphasizes immutability and concurrency-safe constructs to facilitate the development of correct and efficient multithreaded applications.
      • Wide Adoption: Despite initial skepticism, Clojure's unique approach has led to its adoption across a diverse range of industries.
      • Community and Evolution: The language's evolution is driven by both community input and the original design vision, ensuring it remains relevant and robust.
      • Integration with Host Languages: By compiling to and running on established runtimes like the JVM, Clojure can interoperate with existing libraries and technologies efficiently.

      This summary captures the essence of the provided text.

    1. Summary of "The Essence of Dataflow Programming"

      Introduction

      • Comonadic Approach to Dataflow Computation:
        • "We propose a novel, comonadic approach to dataflow (stream-based) computation."
        • The approach is based on comonads, which structure context-dependent computations.

      Key Concepts and Takeaways

      • Pure and Impure Computations:

        • Examples of pure and impure functions are provided to illustrate how impure computations can be handled using monads.
        • "Ever since the work by Moggi and Wadler, we know how to reduce impure computations with errors and non-determinism to purely functional computations."
      • Dataflow Computation:

        • Dataflow computation involves programming with streams or signals in discrete time.
        • "The style of programming is functional, but any expression denotes a stream (a signal)."

      Comonads vs. Monads in Dataflow

      • Limitations of Monads:

        • Monads are not suitable for structuring dataflow computations.
        • "Could it be that monads are capable of structuring notions of dataflow computation as well? No, there are simple reasons why this must be impossible."
      • Advantages of Comonads:

        • Comonads provide a better framework for dataflow computation.
        • "Comonads are even better, as there is more structure to comonads than to arrow types."

      Implementation and Examples

      • Comonadic Interpreter:

        • The paper develops a generic comonadic interpreter for context-dependent computation, specifically for stream-based computation.
        • "We show that general and causal stream functions... are elegantly described in terms of comonads."
      • Higher-order Dataflow Languages:

        • The approach allows for higher-order dataflow language designs with minimal effort.
        • "Remarkably, we get elegant higher-order language designs with almost no effort."

      Combining Effects and Context-dependence

      • Distributive Laws:
        • The paper discusses the use of distributive laws to combine effectful and context-dependent computations.
        • "We show how effects and context-dependence can be combined in the presence of a distributive law of the comonad over the monad."

      Conclusion

      • Relevance of Comonads:
        • Comonads are presented as highly relevant for programming language semantics, particularly for context-dependent computations.
        • "While notions of dataflow computation cannot be structured with monads, they can be structured perfectly with comonads."

      This summary encapsulates the critical points and concepts presented in the provided document on the comonadic approach to dataflow programming. It highlights the advantages of comonads over monads in this context, and provides an overview of the implementation and potential applications of this approach.

    1. Summary of "A History of Clojure" by Rich Hickey

      Key Points and Quotes:

      • Introduction and Design Goals:

        • "Clojure was designed to be a general-purpose, practical functional language, suitable for use by professionals wherever its host language, e.g., Java, would be."
        • It aims to combine the features of functional programming and Lisp while running on the runtime of another language such as the JVM.
      • Unique Positioning:

        • "Most of the ideas in Clojure were not novel, but their combination puts Clojure in a unique spot in language design (functional, hosted, Lisp)."
        • This combination allows Clojure to leverage existing libraries and interoperate efficiently with its host language.
      • Concurrency and State Management:

        • "It complements programming with pure functions of immutable data with concurrency-safe state management constructs that support writing correct multithreaded programs without the complexity of mutex locks."
        • This feature is crucial for developing robust multithreaded applications.
      • Adoption and Impact:

        • "In spite of combining two (at the time) rather unpopular ideas, functional programming and Lisp, Clojure has since seen adoption in industries as diverse as finance, climate science, retail, databases, analytics, publishing, healthcare, advertising, and genomics, and by consultancies and startups worldwide."
        • Clojure's adoption across various industries showcases its versatility and effectiveness.
      • Evolution and Community:

        • "This paper recounts the motivation behind the initial development of Clojure and the rationale for various design decisions and language constructs. It then covers its evolution subsequent to release and adoption."
        • The language has evolved through community contributions and the author's vision, continuously adapting to new challenges and requirements.
      • Host Language Integration:

        • "Clojure is intentionally hosted, in that it compiles to and runs on the runtime of another language, such as the JVM."
        • This intentional design choice allows Clojure to benefit from the performance and ecosystem of its host language.

      Main Ideas and Takeaways:

      • Practical Functional Language: Designed to be practical and usable in professional environments, leveraging the strengths of functional programming and Lisp.
      • Concurrency-Safe: Emphasizes immutability and concurrency-safe constructs to facilitate the development of correct and efficient multithreaded applications.
      • Wide Adoption: Despite initial skepticism, Clojure's unique approach has led to its adoption across a diverse range of industries.
      • Community and Evolution: The language's evolution is driven by both community input and the original design vision, ensuring it remains relevant and robust.
      • Integration with Host Languages: By compiling to and running on established runtimes like the JVM, Clojure can interoperate with existing libraries and technologies efficiently.

      This summary captures the essence of the provided text while adhering to the specified guidelines.

      • Introduction and Context:

        • The talk focuses on "rethinking identity with Clojure" by a senior software engineer at Vouch.
        • Vouch is developing a platform to address identity and trust issues among people, machines, and processes.
        • The speaker emphasizes leveraging emerging technologies for significant improvements in current best practices.
        • "At Vouch, we love Clojure and ClojureScript" for tackling ambitious problems with a small team.
      • Clojure Philosophy:

        • The Clojure philosophy, which emphasizes separating concerns and simplifying complexity, guides Vouch's approach to problem-solving.
        • The speaker references Rich Hickey's talk "Simple Made Easy," which highlights the importance of decoupling elements to reduce complexity.
        • "As software developers, we often do this thing called complecting," where unrelated aspects are combined, creating complications.
      • Security and User Experience:

        • A fundamental flaw in security systems is poor user experience, particularly in how users secure their accounts.
        • "If the user experience is wrong, there's nothing you can do because in any system, people are often the weakest link."
        • The speaker argues that complexity in security measures, like passwords, leads to vulnerabilities due to user error.
      • Clojure's Approach to Simplifying Complexity:

        • Clojure decouples programs from state, a core principle borrowed from functional programming.
        • This separation simplifies development and reduces bugs, particularly in concurrent programming.
        • "Clojure adopted the idea from Ruby, Python, and JavaScript that the common pattern is data literals."
      • Innovations in Clojure and Its Impact:

        • Rich Hickey innovated by proving that persistent data structures can perform well on modern hardware.
        • "Persistent data structures...give me these certain properties in a concurrent scenario but...in a very easy to use ergonomic way."
      • Applying Clojure Principles to Identity and Security:

        • Vouch uses secure enclaves in devices like iPhones and Google's Titan M to manage cryptographic keys securely.
        • "What just happened there are no secrets. The secret isn't on our server. We don't know anything about secrets."
        • This method ensures that even if a device is lost, unauthorized access is prevented due to the secure enclave.
      • Blockchain and Distributed Systems:

        • Vouch leverages blockchain (Hyperledger Fabric) to ensure tamper-evidence and secure user actions.
        • "All interactions with our system write to the blockchain...prevent tampering with the actions that users have taken."
      • Developer Experience with Clojure:

        • Clojure allows Vouch's small team to be highly productive, with one language for all system parts.
        • "We actually get a feedback loop into every piece of our architecture...we have a shell into it and can interact with it using our language."
        • ClojureScript enables writing code once and deploying it across multiple platforms, saving development time.
      • Practical Demonstration:

        • The speaker showcases an app that uses biometrics for logging into services, demonstrating a superior user experience compared to traditional passwords.
        • "The user experience needs to be so good that this feels better than the password approach."
      • Closing Thoughts:

        • The speaker encourages finding and addressing sources of complexity to improve user and developer experience.
        • "Complexity is not just the software; it will eventually burden your users."
        • Revisiting Clojure can benefit teams building ambitious full-stack solutions with many moving parts.
      • Q&A Highlights:

        • Addressing concerns about biometrics and fallback mechanisms, such as using blockchain for account recovery.
        • Clarifying the differences between Clojure spec and other type systems, emphasizing runtime-oriented specifications.
        • Discussing the soundness and limitations of using Clojure in security-focused projects.
      • The speaker, an experienced software developer with expertise in C++, Java, and Clojure, introduces key concepts in Clojure programming, emphasizing the shift from Java to Clojure for enhanced code efficiency and readability.

        • "I've been doing software development since I was kid and professionally for you know 1517 something like that years now I did a little bit of C++ work and then spent a long time in Java."
      • The talk focuses on the differences between values and objects, highlighting the immutability of values in Clojure and contrasting it with the mutability of objects in Java.

        • "I'm going to start by talking about values and what is the value... all of these things are immutable."
      • Values in Clojure have intrinsic, semantically evident meanings, allowing for precise and undisputed representation and equality comparison.

        • "It has a precise meaning and it's semantically evident what we mean by that... all of these things are immutable."
      • Composite values in Clojure allow for creating larger data structures from primitives while retaining immutability, ensuring consistent behavior and avoiding common issues with mutable objects.

        • "With a composite value, you should not have to write an equals method; it should be semantically evident whether two things are equal or not."
      • Clojure collections (vectors, lists, maps, sets) are introduced as immutable composite values that offer extensive functionality and efficiency despite initial concerns about performance.

        • "Vectors are what you use constantly in Clojure... they are expandable indexed collections."
      • Sequences in Clojure are immutable views into collections, providing a powerful abstraction for functional programming and enabling efficient handling of large or infinite data sets.

        • "Sequences are really immutable views into a collection... logically a list and you can get back from it the first."
      • Generic data interfaces in Clojure facilitate the handling of diverse data types through consistent, reusable functions, enhancing code reusability and reducing redundancy.

        • "Closure does not do that and that is one of the things I really like about it."
      • Identity and state are treated distinctly in Clojure, with immutable values representing states at particular points in time, offering a clearer and more predictable model for concurrent programming.

        • "Identity is an abstraction that represents an entity whose state changes over time, while state is the value of an identity at a point in time."
      • Clojure's software transactional memory (STM) model provides robust mechanisms for managing state changes in concurrent environments, ensuring consistency and reliability.

        • "STM protects you from inconsistent views and ensures that the states you perceive are coherent and correct."
      • The speaker emphasizes that transitioning to programming with immutable composite values significantly changes one's approach to software development, leading to more maintainable and less error-prone code.

        • "I say this night seriously it will change the way you think about programming to start from immutability like this you'll design and write code differently."
      • Metaphor of Knowledge and Innocence in "The Time Machine"

        • "HG Wells' 'The Time Machine' describes a world populated by two races, the Eloi and the Morlocks."
        • "The Eloi live in a paradise they don't fully understand, and sometimes what they don't know comes up and bites them."
      • Foundations of Software Engineering

        • "As software engineers, we build on foundations that we don't understand...we sit atop 60 years of accreted abstractions."
        • "At some point, we have to be content with what we know and build upwards."
      • Hierarchical Structures in Problem Solving

        • "We're taught ways to cope with the fact that we don't understand the systems that we build in their entirety, and chief among them is the hierarchy or the tree."
        • "For some problems, this is a completely accurate representation... but the systems we build look more like this (diagram of connected components)."
      • Polite Fiction and Model Limitations

        • "We spend a lot of our time pretending that these two things (simple hierarchies and complex systems) are the same... it's a necessary story because otherwise, we can't make progress."
        • "It's easy to forget that the model we have is not the world as it actually is."
      • Historical Example: Scientific Forestry

        • "Scientific forestry is roughly the process of going from a natural forest to a monoculture for efficient lumber production."
        • "The bureaucrat understood that his model was not the forest as it actually was... he made the world conform to his understanding of it."
      • Systems Thinking and Complexity

        • "Gerald Weinberg defines a space with two axes: complexity and randomness."
        • "Heuristic devices are lossy ways of looking at the world... effective models but not true models."
      • Mid-Century Critical Theory and Heuristic Devices

        • "Deleuze and Guattari talk about the interplay between the state (hierarchical structures) and the Nomad (local understanding)."
        • "They describe 'rhizomatic' structures versus 'arborescent' or tree-like structures."
      • Borges' Parable: The Map and the Territory

        • "In Borges' story 'On Exactitude in Science,' a map the size of the empire becomes useless."
        • "We use maps because they are reductive... as we become more reductive, the utility of the map falls off."
      • Modern Urban Design and Control

        • "Baron Haussmann's redesign of Paris included wide roads to prevent slums from seceding."
        • "Le Corbusier's sketches for the new center of Moscow had no concern for the context."
        • "Brasilia's construction showed the failure of a completely planned city, leading to disconnected spaces and 'Brasília' syndrome."
      • Pattern Language and Quality without a Name

        • "Christopher Alexander's 'A Pattern Language' and 'A Timeless Way of Building' describe a holistic quality in buildings."
        • "Alexander emphasizes that this quality is contextual and cannot be reduced."
      • Software Design Patterns and Habitability

        • "The Design Patterns book tries to give universal constructs, missing the contextual nature of Alexander's ideas."
        • "Richard Gabriel's 'Patterns of Software' focuses on habitability... the degree to which you can make a codebase your own and adapt to changes."
      • Timeless Knowledge versus Cunning

        • "The Greeks had two words for knowledge: 'techne' (timeless knowledge) and 'metis' (cunning)."
        • "In practice, everything depends on the domain and our experience, not on universal truths."
      • Conclusion and Broader Lessons

        • "We should plan for the eventuality that our models and systems will be wrong."
        • "The goal is to inform intuition and provide a vocabulary for discussing complex systems and their trade-offs."
    1. A person named Frank McSherry, it turns out, has solved this: he and his team back at Microsoft Research thought about this reactive iteration problem quite a bit, and they actually solved it in an interesting system called Naiad, which was released quite a few years ago. Differential dataflow is a complete reimplementation of that system by Frank, written entirely in Rust.

      https://github.com/TimelyDataflow/differential-dataflow https://github.com/TimelyDataflow/timely-dataflow

    2. these systems share the interesting property that data coordinates code and not the other way around

      Data arrival triggers computation, ensuring systems stay in sync with real-time changes.

      • Introduction and Context:

        • "Thank you for coming, this is reactive data log for data..."
        • Nicolas Gruber, a software consultant and graduate student at ETH Zurich, discusses reactive systems and introduces 3DF, a system for reactive data log evaluation.
      • Definition and Importance of Reactive Systems:

        • "We want to make reactive systems that don't poll and we want those systems to get a consistent view of the world."
        • Reactive systems trigger computations based on new data arrivals instead of polling at regular intervals.
        • Examples include live-updating web applications, alerting systems, and real-time dashboards.
      • Key Characteristics of Reactive Systems:

        • "All these systems share the interesting property that data coordinates code and not the other way around."
        • Data arrival triggers computation, ensuring systems stay in sync with real-time changes.
      • Reactive Data Management Approach:

        • Nikita Prokopov's vision for future web applications involves handling unbounded data streams and deriving refined views for individual users.
        • "We want this derivation to stay in sync as new data arrives."
      • Practical Example and Demonstration:

        • The example domain involves persons issuing loans with hierarchical bank branch structures and user permissions.
        • "For this child branch we have read permission initially and we have two initial loans..."
        • The demo illustrates how the system maintains real-time updates and permission adjustments using Datalog rules.
      • System Overview - 3DF:

        • 3DF performs reactive Datalog evaluation on a distributed, data-parallel stream processor.
        • "We've been developing that for the better part of this year now."
      • Core Concept of Incremental Computation:

        • "We need the modified operator...that gives us the exact set of changes that are required."
        • 3DF propagates only changes to optimize performance and ensure real-time updates.
      • Handling Complex Queries and Iterative Computations:

        • "We want to do something that I actually didn't know at the beginning of this year was considered almost impossible..."
        • Iterative computations and reactive iteration are managed using the differential dataflow framework developed by Frank McSherry.
      • Implementation Details and Differential Dataflow:

        • "Differential dataflow manages persistent multi-sets...logical collection of edges as a sequence of additions and retractions..."
        • Utilizes multi-dimensional timestamps for consistent, efficient reactive iteration.
      • User and System Interaction:

        • External services and users can send Datalog queries to 3DF clusters, which update computations based on new data from sources like Datomic.
        • "External users other services can first of all of course transact data to Datomic..."
      • Applications and Advantages:

        • 3DF enables scalable, distributed reactive systems beyond the limits of a single database peer.
        • "It lets you scale queries beyond the limits of a single peer in the way I discussed."
      • Future Work and Community Involvement:

        • Continued development focuses on scalability, performance isolation, and robust integrations.
        • "We're working on bulletproofing that and we're working in general towards a better release of that system."
        • Encourages collaboration and engagement with others facing similar challenges.

      This summary encapsulates the essence of the talk, highlighting the main ideas and essential information while maintaining clarity and conciseness.
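      The change-propagation idea above (only deltas flow through operators, with retractions expressed as negative counts) can be sketched in Python. This is an illustrative toy in the spirit of differential dataflow's multisets of additions and retractions, not its actual API; `IncrementalMap` is an invented name.

```python
from collections import Counter

class IncrementalMap:
    """A map operator that consumes (record, delta) changes and emits the
    corresponding output deltas, instead of recomputing from scratch."""
    def __init__(self, f):
        self.f = f
        self.output = Counter()  # multiset of f(record) with signed counts

    def apply(self, changes):
        out = []
        for rec, delta in changes:   # delta > 0: addition, < 0: retraction
            mapped = self.f(rec)
            self.output[mapped] += delta
            out.append((mapped, delta))
        return out

double = IncrementalMap(lambda x: 2 * x)
double.apply([(1, +1), (2, +1)])   # two additions flow through
double.apply([(1, -1)])            # one retraction flows through
assert double.output[4] == 1 and double.output[2] == 0
```

      Chaining such operators gives a dataflow where each stage touches only the records that changed, which is what makes reactive evaluation cheap compared with re-running the query.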

  16. May 2024
    1. Summary of "Revised Report on the Propagator Model" by Alexey Radul and Gerald Jay Sussman

      Introduction

      • Main Problem: Traditional programming models hinder extending existing programs for new situations due to rigid commitments in the code.
      • Quote: "The most important problem facing a programmer is the revision of an existing program to extend it for some new situation."
      • Solution: The Propagator Programming Model supports multiple viewpoints and integration of redundant solutions to aid program extensibility.
      • Quote: "The Propagator Programming Model is an attempt to mitigate this problem."

      Propagator Programming Model

      • Core Concept: Autonomous machines (propagators) communicate via shared cells, continuously adding information based on computations.
      • Quote: "The basic computational elements are autonomous machines interconnected by shared cells through which they communicate."
      • Additivity: New contributions are seamlessly integrated by adding new propagators without disrupting existing computations.
      • Quote: "New ways to make contributions can be added just by adding new propagators."

      Propagator System

      • Language Independence: The model can be implemented in any programming language as long as a communication protocol is maintained.
      • Quote: "You should be able to write propagators in any language you choose."
      • Cell Operations: Cells support adding content, collecting content, and registering propagators for notifications on content changes.
      • Quote: "Cells must support three operations: add some content, collect the content currently accumulated, register a propagator to be notified when the accumulated content changes."

      Implementing Propagator Networks

      • Creating Cells and Propagators: Cells store data, while propagators compute based on cell data. Propagators are attached using d@ (diagram style) or e@ (expression style) for simpler cases.
      • Quote: "The cells' job is to remember things; the propagators' job is to compute."
      • Example: Adding two and three using propagators.
      • Quote: "(define-cell a) (define-cell b) (add-content a 3) (add-content b 2) (define-cell answer (e:+ a b)) (run) (content answer) ==> 5"

      Advanced Features

      • Conditional Network Construction: Delayed construction using conditional propagators like p:when and p:if to control network growth.
      • Quote: "The switch propagator does conditional propagation -- it only forwards its input to its output if its control is 'true'."
      • Partial Information: Cells accumulate partial information, which can be incrementally refined.
      • Quote: "Each 'memory location' of Scheme-Propagators, that is each cell, maintains not 'a value', but 'all the information it has about a value'."

      Built-in Partial Information Structures

      • Types: Nothing, Just a Value, Numerical Intervals, Propagator Cells, Compound Data, Closures, Truth Maintenance Systems, Contradiction.
      • Quote: "The following partial information structures are provided with Scheme-Propagators: nothing, just a value, intervals, propagator cells, compound data, closures, supported values, truth maintenance systems, contradiction."

      Debugging and Metadata

      • Debugging: Scheme's built-in debugger aids in troubleshooting propagator networks. Metadata tracking for cells and propagators enhances debugging.
      • Quote: "The underlying Scheme debugger is your friend."
      • Metadata: Tracking names and connections of cells and propagators helps navigate and debug networks.
      • Quote: "Inspection procedures using the metadata are provided: name, cell?, content, propagator?, propagator-inputs, propagator-outputs, neighbors, cell-non-readers, cell-connections."

      Benefits of the Propagator Model

      • Additivity and Redundancy: Supports incremental additions and multiple redundant computations, enhancing flexibility and resilience.
      • Quote: "It is easy to add new propagators that implement additional ways to compute any part of the information about a value in a cell."
      • Intrinsic Parallelism: Each component operates independently, making the model naturally parallel and race condition-resistant.
      • Quote: "The paradigm of monotonically accumulating information makes [race conditions] irrelevant to the final results of a computation."
      • Dependency Tracking: Facilitates easier integration and conflict resolution via premises and truth maintenance.
      • Quote: "If the addition turns out to conflict with what was already there, it (or the offending old thing) can be ignored, locally and dynamically, by retracting a premise."

      Conclusion

      • Goal Achievement: The Propagator Model approaches goals of extensibility and additivity by allowing flexible integration and redundancy in computations.
      • Quote: "Systems built on the Propagator Model of computation can approach some of these goals."
      • The speaker initially describes the excitement and complexity of rendering a simple red triangle in Clojure using a Java library.

        • "I found a Java library that gave bindings I started playing around with it and after quite a bit of struggle I was able to get a red triangle to show up on the screen."
      • The creation of more complex shapes like the Sierpinski pyramid, which involves recursive geometric transformations, serves as an example to explore different programming abstractions.

        • "It's actually fairly straightforward and I'll switch to two dimensions for your convenience, you start with a triangle and you shrink it down to half its size and then you copy it twice."
      • The speaker introduces macros as a way to handle repetitive code by creating a reusable "Sierpinski macro" that can be nested to achieve different levels of detail.

        • "We can write ourselves a Sierpinski macro which takes a body which is really just anything and first we wrap it in a function and then we create a scoped transform which shrinks everything that we draw by half."
      • Despite their usefulness, macros have limitations in terms of composition and downstream flexibility, prompting the exploration of functions as a better abstraction.

        • "Macros are not necessarily our most composable sort of abstraction and this has two issues... there is no potential for downstream composition."
      • Functions, though still limited, allow for more composable and testable abstractions by introducing indirection and treating rendering operations as data.

        • "We take our draw triangle function and we call it now it's a renderer and now we have a render function which simply invokes it."
      • The speaker demonstrates how higher-order functions and mapping over data can create more flexible and testable code, moving from functions to data-centric approaches.

        • "We defined three of these that will offset and scale appropriately and now we Define a Sierpinski method which takes a list of shapes and returns a list of shapes which is three times larger with all of them sort of offset and scaled appropriately."
      • The comparison between data, functions, and macros emphasizes the generality and composability of data-centric approaches, albeit with the necessity of grounding them in executable code.

        • "Data doesn't do anything by itself we have to sort of eventually ground out in code that does something we have to wrap our data in functions and execute our functions somewhere within our code."
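
        The data-centric version the talk describes can be sketched roughly like this (the shape representation and names are assumptions, not the speaker's actual code): a shape is a map of position and scale, and one Sierpinski step is a plain function from a list of shapes to a list three times larger.

        ```clojure
        ;; data-centric sketch: each shape becomes three half-size copies,
        ;; offset to the corners of the containing triangle
        (defn sierpinski-step [shapes]
          (mapcat (fn [{:keys [x y scale]}]
                    (let [s (/ scale 2)]
                      [{:x x             :y y       :scale s}
                       {:x (+ x s)       :y y       :scale s}
                       {:x (+ x (/ s 2)) :y (+ y s) :scale s}]))
                  shapes))

        ;; three iterations triple the shape count each time: 3^3 shapes
        (count (nth (iterate sierpinski-step [{:x 0 :y 0 :scale 1}]) 3))
        ;; → 27
        ```

        Because the result is just data, it composes downstream: anything can filter, transform, or render the shape list before it is finally grounded out in drawing code.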
      • The speaker critiques taxonomies while acknowledging their utility in providing a framework for discussion and suggesting approaches.

        • "Taxonomies are not a way to perfectly model the world... but they give us both a vocabulary to talk about it and they give us sort of a predictiveness."
      • Applying these abstractions to more practical examples, like sequence transformations and transducers, illustrates the balance between abstraction and practical utility.

        • "We can use the double threaded arrow to structure it so that the flow of data is left to right rather than inside out."
      • The exploration of automaton theory and finite state machines (like Automat) showcases the power of data-driven approaches in providing flexibility and compositional capabilities beyond traditional regular expressions.

        • "Automaton Theory... there's a much richer set of tools there than I'd originally realized and I tried to encompass this in a library called Automat."
      • The discussion on backpressure and causality in asynchronous programming highlights the importance of managing side effects and execution order in concurrent systems.

        • "Backpressure here is this emergent property because we have this structural relationship between this it's just that one happens before the other and that one can cause the other not to happen."
      • The introduction of streams and deferreds in the Manifold library exemplifies a less opinionated approach to handling eventual values and asynchronous computation, allowing for greater flexibility in execution models.

        • "A stream is lossy right you take a message it's gone a deferred represents a single unrealized value and you can sort of consume it as many times as you like."
      • The concept of "let-flow" in Manifold, which optimizes concurrency by analyzing data dependencies, demonstrates advanced techniques for achieving optimal parallel execution.

        • "Let-flow will basically walk the entire let binding figure out what the data dependencies are and execute them in the correct order."
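
        The dependency-driven execution let-flow performs can be imitated in plain Clojure with futures (a sketch of the idea only, not Manifold's API): independent bindings run concurrently, and a dependent binding derefs its inputs, so the data dependencies fix the order.

        ```clojure
        ;; independent bindings run concurrently as futures;
        ;; c waits on a and b because it derefs them
        (def result
          (let [a (future (+ 1 2))     ; independent
                b (future (* 3 4))     ; independent, concurrent with a
                c (future (+ @a @b))]  ; depends on both a and b
            @c))

        result ; → 15
        ;; note: futures use a non-daemon pool; scripts should end with
        ;; (shutdown-agents)
        ```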
      • The speaker concludes by emphasizing the importance of understanding the different forces at play in software composition and encourages ongoing discussion to improve the ecosystem.

        • "This is what causes our ecosystem to flourish or dwindle right the degree to which all the disparate pieces can work together."
      • In the Q&A, the speaker briefly touches on the position of monads in the spectrum and compares Automat to other finite state machine libraries like Regal.

        • "Where do you put monads in your spectrum... somewhere between functions and data right."
        • "Automat... is meant to be sort of a combinator version of Regal effectively."
      • Introduction

        • Jerry Sussman discusses the importance of flexible programming systems.
        • "I want to show the ways of making systems that have the property that you write this big pile of code, and then all of a sudden it's: gee, I have a different problem to solve; why can't I use the same piece of code?"
      • Personal Background

        • Sussman's extensive experience as a programmer since the 1960s.
        • "I'm the oldest guy here probably and way back when I first started programming computers that's what they looked like."
      • Goals for Robust Systems

        • Desires systems that are generalizable, evolvable, and tough.
        • "What I want is robust systems that have the following property: they're generalizable in the sense that... they're evolvable in the sense that they can be adapted to new jobs without modification."
      • Critique of Programming 'Religions'

        • Emphasizes the need for diverse tools and approaches in programming.
        • "We have a lot of people who have religions about how to program... Each of which is good for some particular problems but not good for a lot of problems."
      • Body Plans in Engineering and Programming

        • Discusses the concept of body plans from biology and its application in engineering.
        • "Superheterodyne radio receiver... a body plan... that separates band selectivity from inter-channel selectivity."
      • Multiple Approaches in Problem Solving

        • Highlights the value of having multiple methods to solve the same problem, inspired by biology.
        • "There are two ways to make a frog... why don't we do that in programming?"
      • Biological Inspiration in Programming

        • Advocates learning from nature's solutions, such as the flexible human genome.
        • "One of the things nature does is it's very expensive... generates and tests... we don't do that very much in programming because it's expensive."
      • Generating and Testing in Programming

        • Discusses McCarthy's amb operator for non-deterministic search.
        • "McCarthy's amb operator... models a non-deterministic automaton."
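
        The generate-and-test style amb enables can be sketched in Clojure with a `for` comprehension standing in for true backtracking (a model of the idea, not an amb implementation):

        ```clojure
        ;; non-deterministically "choose" a, b, c; keep choices that
        ;; satisfy the constraint (here: Pythagorean triples)
        (defn pythagorean-triples [n]
          (for [a (range 1 n)
                b (range a n)
                c (range b n)
                :when (= (* c c) (+ (* a a) (* b b)))]
            [a b c]))

        (take 2 (pythagorean-triples 20)) ; → ([3 4 5] [5 12 13])
        ```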
      • Generic Operations and Extensibility

        • Explains the power of generic operations and their application in programming.
        • "Most people think generic operations... in fact, automatic differentiation is just a generic extension of arithmetic."
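
        The claim that automatic differentiation is a generic extension of arithmetic can be illustrated with forward-mode dual numbers (a minimal sketch; the names `d+`/`d*` are invented here, not Sussman's system):

        ```clojure
        ;; a dual number [x dx] carries a value and its derivative;
        ;; + and * are extended to propagate derivatives by the usual rules
        (defn d+ [[x dx] [y dy]] [(+ x y) (+ dx dy)])
        (defn d* [[x dx] [y dy]] [(* x y) (+ (* x dy) (* y dx))])

        (defn derivative [f]
          (fn [x] (second (f [x 1]))))  ; seed dx = 1, read off the derivative

        ;; f(x) = x^2 + 3x, so f'(x) = 2x + 3
        (defn f [x] (d+ (d* x x) (d* [3 0] x)))

        ((derivative f) 4) ; → 11
        ```

        Any code written against the generic operations differentiates itself for free; that is the power (and, as the next point notes, the danger) of generic extension.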
      • Risks and Benefits of Generic Extensions

        • Acknowledges the dangers of generic programming while emphasizing its power.
        • "Generic arithmetic... very dangerous... the only thing that scares me in programming is floating point."
      • Teaching and Practical Applications

        • Emphasizes the importance of teaching robust, flexible programming techniques.
        • "I've only given you three of them... I have hundreds of them; I just understand how to avoid programming yourself into a corner."
      • Conclusion

        • Discusses his books and teaching philosophy, integrating traditional and programming languages for clarity.
        • "I've written several books about this... using programming as the way of expressing the ideas, in addition to the traditional mathematics; you make it unambiguous and clear, and therefore easy to read."
    1. Old School

      Back before there were computers there were databases and transactions. Databases were realized as accumulate-only ledgers and transactions were realized with atomically executed contracts.

      Datomic’s data and transaction models are highly analogous to these real-world constructs. Datomic accumulates facts (datoms) and, like a ledger, has no addressable places nor semantics for updating thereof.

      Datomic’s transactions are like contracts. A contract has a bunch of clauses that, while appearing in order, do not specify a procedure executed in that order. Instead they are a bunch of declarations (of rights, obligations etc) that will all become true together upon execution of the contract (or not at all!), typically by signing of the parties. There is no partial contract along the way - within a contract there is no notion of time or imperative execution, no partial operations on the world etc. Contract execution has no temporal extent - you sign it and it all becomes true.

      A contract execution thus identifies a point in time - that point dividing the time before the execution of the contract from the time after, in which the contract (in toto) is in effect. A Datomic transaction does the same.

      Obviously, not being a procedure bundling up imperative operations, there is nothing analogous to a traditional DB “stored procedure” in a contract. But Datomic doesn’t offer stored procedures. Instead it has “transaction functions” which, given the state of the db immediately preceding the transaction, calculate values for incorporation within it.

      Do contracts have “transaction functions”? Of course they do! Clauses such as “the buyer shall pay the NYSE opening share price on the day of closing + 0.1%” or “the buyer will reimburse the seller for utilities paid for the month of the closing pro-rated by the number of days elapsed as of the closing”, or “the purchaser shall get the contents of the house as of the closing except for the washer/dryer” etc all use a function of the state immediately preceding the moment of execution to calculate values utilized in the contract.

      Why do contracts, and Datomic transactions, have such functions? Because they allow you to define transactions that are more flexible as to when they are applied vs contracts/transactions which explicitly supply all values and thus are brittle (and much longer!) and need to be rewritten as the circumstances in which they are to execute change.

      That all such clauses/transaction-functions have the immediate past as their (fixed) basis is an essential feature. Having a fixed basis means they can’t be directly composed (i.e. the output of one can’t feed the input of another). In practice that means that there will be only one such clause/function that calculates any particular value, and if it requires compound logic it will be a compound clause, or in the case of a Datomic transaction function, leverage composition in the language (Clojure/Java) in which you write it.
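
      As a sketch, a transaction function is just a pure function of the db value immediately preceding the transaction, returning values for incorporation in it (the db shape, attribute names, and tx-data format below are illustrative assumptions, not the real Datomic API):

      ```clojure
      ;; "the buyer will reimburse the seller for utilities ... pro-rated
      ;; by the number of days elapsed as of the closing"
      (defn prorate-utilities
        "Given the prior db state, compute tx-data for the reimbursement."
        [db buyer days-elapsed days-in-month]
        (let [monthly (get-in db [:utilities :monthly-total])]
          [[:db/add buyer :owes (* monthly (/ days-elapsed days-in-month))]]))

      ;; the value is calculated from the state at execution time,
      ;; so the "contract" need not be rewritten as circumstances change
      (prorate-utilities {:utilities {:monthly-total 300}} :buyer-1 10 30)
      ;; → [[:db/add :buyer-1 :owes 100]]
      ```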

      A lot of benefits accrue to Datomic’s “old-school” approach to transactions. I hope the above helps people better understand them.

      Rich Hickey https://clojurians.slack.com/archives/C03RZMDSH/p1716049896478429?thread_ts=1716049896.478429&cid=C03RZMDSH

    1. Here is a detailed and concise summary of the article on Clojure's agents and asynchronous actions:

      • Nature of Agents in Clojure: Unlike Refs which support coordinated changes, agents in Clojure enable independent and asynchronous updates to individual storage locations. "Agents provide independent, asynchronous change of individual locations."

      • Lifetime and Mutation of Agents: Agents are tied to a single mutable location for their entire lifecycle and can only mutate through specific actions. "Agents are bound to a single storage location for their lifetime, and only allow mutation of that location to occur as a result of an action."

      • Action Functions: Actions on agents are asynchronous functions or multimethods, allowing polymorphism and an open set of potential actions. "Actions are functions that are asynchronously applied to an Agent’s state and whose return value becomes the Agent’s new state."

      • Reactive Agents and State: Agents in Clojure are reactive; they do not operate under an autonomous imperative message loop, and their immutable state is always readily available for reading without blocking or coordination. "Clojure’s Agents are reactive, not autonomous - there is no imperative message loop and no blocking receive."

      • Agent Action Dispatch Process: Dispatching an action to an agent involves applying a function and its arguments to the agent's state, with changes being validated and, if valid, updating the agent’s state. "At some point later, in another thread, the return value of fn will become the new state of the Agent."

      • Error Handling: Any exceptions within agent actions prevent further nested dispatches, and errors are stored within the agent until they are cleared. "If any exceptions are thrown by an action function, no nested dispatches will occur, and the exception will be cached in the Agent itself."

      • Concurrency and Execution Order: Agent actions are managed concurrently in a thread pool, ensuring no more than one action per agent is executed at any time, with actions occurring in the order they were sent. "At any point in time, at most one action for each Agent is being executed."

      • Integration with STM and Thread Pool: Agents work in conjunction with Clojure's Software Transactional Memory (STM), holding actions until transactions commit, and operate on non-daemon threads that require explicit shutdown. "Agents are integrated with the STM - any dispatches made in a transaction are held until it commits."

      • Practical Example: A practical implementation example demonstrated is the "send-a-message-around-a-ring" test, showing the setup and dispatch mechanism in a real scenario. "This example is an implementation of the send-a-message-around-a-ring test."
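
      The dispatch semantics above can be seen in a few lines (a minimal example, simpler than the ring test):

      ```clojure
      ;; actions are queued and applied asynchronously, at most one at a
      ;; time per agent, in the order sent
      (def counter (agent 0))

      (send counter inc)   ; state becomes 1
      (send counter + 10)  ; then 11

      (await counter)      ; block until the queued actions have run
      @counter             ; → 11  (reads never block)

      ;; agents run on non-daemon threads; scripts should end with
      ;; (shutdown-agents)
      ```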


    1. I haven't gotten rid of any coupling but the coupling that's there is cheaper and that's what cohesion is: an element is cohesive if the sub-elements are coupled to each other

      We want every element of our system to be cohesive, and decoupled with respect to other elements of its size.

    2. if I move this function into the same directory as the other functions it's coupled to I've reduced the cost of making changes to it
      • Why do we split code into files?
      • Is it a good way to restrict context?
      • We can afford coupling in small contexts, but as the context gets large, coupling becomes expensive.
      • Therefore we might trade off context size against coupling.
      • "Context" needs to be defined more precisely.
    3. it's not your fault kinda not your fault because if you how far how deep do I want to go into this I get so excited about this if you if you change the axes of this log and I've got some stuff published about this and feel free to ask me about it more I promise it's only going to last one more minute if you change the axes of that that graph to logarithmic one ten a hundred thousand million or ten thousand hundred thousand million and the same you get an absolutely straight line how does that happen well coupling is exactly one of these phenomenon

      This explanation shows the phenomenon is natural, but in software design and software engineering it's not necessarily inevitable: when we engineer something, we often bypass natural courses by precisely understanding and avoiding their causes.

    1. So a solution that the Clojure people proposed, and we implement in our C++ alternative, is to say: okay, this function is like a transaction, you could say, where I'm interested in the vector that comes in as an argument and the vector that is returned at the end. Everything that happens in between, I don't care if it's doing mutation

      https://clojure.org/reference/transients
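
      A minimal transients example of that "function as transaction" shape: persistent vector in, local transient mutation inside, persistent vector out; callers never observe the mutation.

      ```clojure
      ;; conj! mutates a transient locally; persistent! seals the result
      (defn into-vec [v xs]
        (persistent!
          (reduce conj! (transient v) xs)))

      (into-vec [0] (range 1 5)) ; → [0 1 2 3 4]
      ```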

    1. this wasn't necessarily what was invented first in JavaScript this is very recent actually at least the first part

      Distinction Between Prototypal and Classical Inheritance Models: The talk concludes with a discussion on how JavaScript’s inheritance differs from classical models like Java, focusing on how JavaScript uses a prototypal model which is more flexible.

    2. what we just saw prototypes with pointers to other objects prototypes chains message passing that sort of thing

      Practical Application of Prototypes in JavaScript: The speaker emphasizes the practicality of prototypes in JavaScript, explaining how modern JavaScript incorporates prototypal inheritance through syntax like Object.create.

    3. if the child gets a message that it doesn't know how to respond to it can ask its prototype and attach on themselves

      Delegation in Object Prototypes: When an object receives a request (message) for a property it lacks, it delegates the request to its prototype. This system allows objects to utilize properties and behaviors of their ancestors

    4. objects aren't just about manipulating static things objects can do things and the way to get an object to do something is to send it a message

      Object Behavior and Communication: Objects in JavaScript communicate through a mechanism where messages are sent and received, with prototypes playing a key role in delegating responses when objects lack certain properties

    5. this really underscores this idea of differential inheritance my chair is like your chair except

      Prototypal Chain and Differential Inheritance: The process involves creating objects based on a prototype and then customizing new objects, illustrating how inheritance chains work by describing objects in terms of their differences.

    6. if we add a rocker to the prototype the objects inherit the rocker

      Shared Characteristics and Modifications: The speaker discusses how changes to a prototype affect all objects derived from it, using an example of adding features to a chair to demonstrate shared and differential inheritance.

    7. a prototype is your chair and I want to copy that

      Definition and Function of a Prototype: In computing, a prototype is a model used to create copies (instances) with the same properties. Prototypes in JavaScript are foundational for creating objects that can inherit properties from other objects.

    8. did you realize that some things are more convenient with computers like think of the chair that you're sitting on

      Utility of Prototypal Inheritance: Prototypal inheritance allows for more flexible manipulation of object properties compared to the static nature of physical objects. The speaker uses the analogy of a chair to illustrate copying and modifying objects in computing.

      • Clarification of Terminology: The speaker begins by clarifying that "prototypal inheritance" is the correct term, not "prototypical inheritance."

        • "First thing I learned is that this is not called prototypical inheritance, that's something else."
      • Utility of Prototypal Inheritance: Prototypal inheritance allows for more flexible manipulation of object properties compared to the static nature of physical objects. The speaker uses the analogy of a chair to illustrate copying and modifying objects in computing.

        • "Did you realize that some things are more convenient with computers like think of the chair that you're sitting on."
      • Definition and Function of a Prototype: In computing, a prototype is a model used to create copies (instances) with the same properties. Prototypes in JavaScript are foundational for creating objects that can inherit properties from other objects.

        • "A prototype is your chair, and I want to copy that."
      • Shared Characteristics and Modifications: The speaker discusses how changes to a prototype affect all objects derived from it, using an example of adding features to a chair to demonstrate shared and differential inheritance.

        • "If we add a rocker to the prototype, the objects inherit the rock."
      • Prototypal Chain and Differential Inheritance: The process involves creating objects based on a prototype and then customizing new objects, illustrating how inheritance chains work by describing objects in terms of their differences.

        • "This really underscores this idea of differential inheritance, my chair is like your chair except..."
      • Object Behavior and Communication: Objects in JavaScript communicate through a mechanism where messages are sent and received, with prototypes playing a key role in delegating responses when objects lack certain properties.

        • "Objects aren't just about manipulating static things, objects can do things, and the way to get an object to do something is to send it a message."
      • Delegation in Object Prototypes: When an object receives a request (message) for a property it lacks, it delegates the request to its prototype. This system allows objects to utilize properties and behaviors of their ancestors.

        • "If the child gets a message that doesn't know how to respond to it, it can ask its prototype and attach themselves."
      • Practical Application of Prototypes in JavaScript: The speaker emphasizes the practicality of prototypes in JavaScript, explaining how modern JavaScript incorporates prototypal inheritance through syntax like Object.create.

        • "What we just saw prototypes with pointers to other objects prototypes chains message passing that sort of thing."
      • Distinction Between Prototypal and Classical Inheritance Models: The talk concludes with a discussion on how JavaScript’s inheritance differs from classical models like Java, focusing on how JavaScript uses a prototypal model which is more flexible.

        • "This wasn't necessarily what was invented first in JavaScript, this is very recent actually at least the first part is."


    1. it's certainly possible that in doing the implementation details (I talked about this before) you may actually need to go back and alter your scope; we thought we were going to do this

      Concludes with the implementation phase where the selected approach is detailed out, emphasizing the importance of revisiting and possibly revising strategic decisions based on practical considerations during implementation. "we thought we were going to do this and we didn't find a way to implement so that wasn't going to be too much work."

    2. if you do this during the day I promise you you're going to get new ideas when you hit the hammock or the bed

      The use of decision matrices and structured design processes encourages reflective thinking and abstraction, leading to more innovative and effective solutions. "you're starting to do abstraction...if you do this during the day I promise you you're going to get new ideas when you hit the hammock or the bed"

    3. you're going to have shared understanding you're going to be able to say I don't think you're saying that right I don't think that you know that is the case that's good that's the Socratic method

      Discusses how a decision matrix helps maintain a clear focus on the problem, facilitates shared understanding, and drives thoughtful examination of each potential solution. "you're going to have shared understanding...you're going to be able to say I don't think you're saying that right."

    4. use colors right which you saw before some colors on this sheet to show subjectivity

      Advises against subjective judgments within the matrix, recommending the use of color coding to visually represent the evaluation of each criterion. "use colors...to show subjectivity."

    5. you want things that are salient or relevant salient means it's an aspect of this thing that sticks out right and relevant means that it's an aspect of this thing that matters to our problem

      Stresses the selection of relevant and salient criteria for evaluating approaches, differentiating between merely describing features and critically analyzing their implications. "you want things that are salient or relevant."

    6. A1 will be what what decision are you trying to make what problem are you working on always A1

      Details on how to set up a decision matrix, emphasizing the importance of clearly defining the problem in cell A1 and using the subsequent columns and rows to evaluate different approaches against defined criteria
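
      A decision matrix is just tabular data; as a sketch in the document's own language (every name and rating below is invented for illustration):

      ```clojure
      ;; the problem statement goes in "A1"; approaches are columns,
      ;; criteria are rows
      (def matrix
        {:problem    "How should users export reports?"
         :criteria   [:effort :flexibility :performance]
         :approaches {"batch job" {:effort :low  :flexibility :low  :performance :high}
                      "user tool" {:effort :high :flexibility :high :performance :medium}}})

      ;; compare one criterion across all approaches
      (update-vals (:approaches matrix) :flexibility)
      ;; → {"batch job" :low, "user tool" :high}
      ```

      In practice the talk recommends a shared spreadsheet rather than code; the point of the sketch is only the structure being recommended.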

    7. what is a decision matrix it's a spreadsheet it's a spreadsheet that more than one person can see at the same time and edit at the same time

      Introduces the decision matrix as a critical tool for evaluating different approaches, using a live-editing spreadsheet format to facilitate collaborative design. "what is a decision matrix it's a spreadsheet that more than one person can see at the same time and edit at the same time."

    8. the best first phase for use cases is to talk about only what people intend to accomplish

      Highlights the process of creating use cases focused solely on user intentions, leaving out the implementation details for later phases. "the best first phase for use cases is to talk about only what people intend to accomplish."

    9. you're then going to start thinking about what are the ways that you could possibly address it we're going to call them approaches

      Emphasizes the need to explore different strategies (e.g., automated solutions versus tools for users) to address user objectives more effectively. "you're then going to start thinking about what are the ways that you could possibly address it."

    10. at the direction stage it's about strategy right strategy means to be a general or to lead and fundamentally it means about where are you going

      Strategy is defined as determining the direction, focusing on user intentions and potential approaches to meet those intentions without specifying how they will be achieved initially. "strategy means to be a general or to lead and fundamentally it means about where are you going."

    11. if it's a bigger thing you will likely have two phases here you'll have a direction setting moment and then you'll have many implementation decision moments

      The speaker discusses the importance of distinguishing between strategic direction setting and tactical implementation decisions in design, particularly for larger projects

    12. at the point you've got a problem statement you're going to be able to do two things with your top story
      • you're going to be able to modify the top story title to try to make it about the problem
      • the other thing you're going to do is you're going to now add another thing to that top story you had the title description now you're going to have that problem statement
    13. a problem statement is this succinct statement of unmet objectives

      Covers the transition from identifying and understanding design problems to formulating solutions, emphasizing the necessity of a clear, succinct problem statement in this process.

    14. you need to have a ticket that says this is what the overarching plan is

      Describes the 'top story' as a pivotal part of project management in design, which synthesizes understanding and outlines the plan forward. "you need to have a ticket that says this is what the overarching plan is."

    15. this is reflective you're thinking about your thinking this is super important being aware of what you're thinking about helps you think it also helps agendaize your background thinking so that's where it becomes reflective so we're going to call this reflective inquiry

      Explains the concept of reflective inquiry where questioning one's own thoughts helps advance understanding in design. "reflective inquiry... helps you think it also helps agendaize your background thinking."

    16. another technique I recommend is to discover read about and utilize the Socratic method

      Introduces the Socratic method as a tool for probing and understanding through structured inquiry, which helps uncover underlying truths about design challenges.

    17. questions this is a very powerful tool asking questions and it's an old tool and one of the beautiful things about asking a question and formulating a question is you've made clear that you're looking for something

      Discusses the strategic use of questions to clarify goals and intentions in the design process, promoting a deeper exploration of problems

    18. add a glossary to your set of stuff that you're building while you're working

      Recommends creating a glossary to consistently define and use terms, enhancing understanding within teams

    19. choosing good words is super critical

      Advocates for precise use of language to avoid ambiguity and ensure that design intentions are clearly communicated and understood.

      • Be precise about the meaning
        • Builds better understanding (of differences and similarities)
      • Keeps everyone on the same page => better collaboration
    20. this is about writing as part of thinking

      Stresses the role of writing in design, not as documentation, but as a tool to visualize and refine ideas collaboratively:

      • Forces us to make our thoughts concrete
      • Allows us not to lose threads
      • Allows us to share our thoughts

    21. I think design is something that you can learn to do

      Emphasizes that design is not a magical skill but involves concrete, learnable practices that enhance team-based software development.

    1. Transformers give Clojurists some of the benefits of "Object Orientation" without many of the downsides Clojurists dislike about objects.

      1. Objects couple behaviors required by multiple callers into a single class, while transformers by default do not change existing behaviors for existing callers
      2. Objects push inheritance-first design, whereas transformer inheritance falls out of shared structure between Clojure data structures derived from one another, and design is driven by concrete implementation needs, like regular Clojure
      3. Objects couple state and methods in spaghetti ways, while transformers are just immutable maps. And just as Clojure lets you stash stateful things like atoms in functions, transformers let you build stateful transformers, but as in Clojure the default convention is to do everything immutably
      4. Objects try to provide data hiding through encapsulation, whereas transformers do the opposite, exposing data otherwise hidden by a closure

      There are many strategies for reusing code in the software industry. In Clojure, we use what some call a "lego" method: building small, single-purpose functions that can be used in a million different contexts, thanks to a tasteful use of simplicity in the right places. This works tremendously well for 95% of use cases. For certain use cases, like building hierarchies of highly self-similar functions (as in UI toolkits), transformers provide a better alternative. Transformers let you build a UI toolkit with 25% of the code of normal function composition, and 25% of the code required to evolve the widgets in that hierarchy over time. The lego method is great for vertically composing things together, but when you want to make lateral changes for only certain callers in the tree, you have to defensively copy code between duplicative implementation trees, call them "grandpa-function-1" and "grandpa-function-2", and then make versions 1 and 2 of every function that wrapped the grandpa-functions. Transformers provide a solution for the rare cases where we end up in that situation in Clojure, without the downsides of a traditional object system.
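The lateral-change problem described above can be illustrated outside Clojure as well. The TypeScript sketch below is purely hypothetical (the names and shapes are mine, not the transformers library): it contrasts plain function composition, where changing one stage for one caller forces copying the chain, with a pipeline represented as an immutable map of stages that a derived widget can override laterally.

```typescript
// A widget pipeline as plain data: an immutable map of named stages.
// Because stages call each other through `self`, a derived widget can
// override one stage without copying the rest of the tree.
type Stages = {
  base: (s: string) => string;
  wrap: (self: Stages, s: string) => string;
};

const run = (self: Stages, s: string) => self.wrap(self, s);

const widget: Stages = {
  base: (s) => s.toUpperCase(),
  wrap: (self, s) => `[${self.base(s)}]`,
};

// Lateral change for one caller: override `base`, reuse `wrap` untouched.
const shouty: Stages = { ...widget, base: (s) => s.toUpperCase() + "!" };

const plain = run(widget, "hi"); // "[HI]"
const loud = run(shouty, "hi");  // "[HI!]"
```

The late binding through `self` is what makes the override visible to the stages that were not copied, which is the property the annotation attributes to transformers being "just immutable maps".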

  17. Apr 2024
    1. Goals
      • Concise expression of application logic
      • Actually being incremental
      • Tab focusing
      • Simple types
      • Simple control flow
      • Overall system complexity
    2. Summary of Raph Levien's Blog: "Towards principled reactive UI"

      Introduction

      • Diversity of Reactive UI Systems: The blog notes the diversity in reactive UI systems primarily sourced from open-source projects. Levien highlights a lack of comprehensive literature but acknowledges existing sources offer insights into better practices. His previous post aimed to organize these diverse patterns.
        • "There is an astonishing diversity of 'literature' on reactive UI systems."

      Goals of the Inquiry

      • Clarifying Inquiry Goals: Levien sets goals not to review but to guide inquiry into promising avenues of reactive UI in Rust, likening it to mining for rich veins of ore rather than stamp collecting.
        • "I want to do mining, not stamp collecting."

      Main Principles Explored

      • Observable Objects vs. Future-like Polling: Discusses the importance of how systems manage observable objects or utilize future-like polling for efficient UI updates.
      • Tree Mutations: How to express mutation in the render object tree is crucial, focusing on maintaining stable node identities within the tree.
        • "Then I will go deeper into three principles, which I feel are critically important in any reactive UI framework."

      Crochet: A Research Prototype

      • Introduction of Crochet: Introduces 'Crochet', a prototype exploring these principles, acknowledging its current limitations and potential for development.
        • "Finally, I will introduce Crochet, a research prototype built for the purpose of exploring these ideas."

      Goals for Reactive UI

      • Concise Application Logic: Emphasizes the need for concise, clear application logic that drives UI efficiently, with reactive UI allowing declarative state expressions of the view tree.
        • "The main point of a reactive UI architecture is so that the app can express its logic clearly and concisely."
      • Incremental Updates: Advocates for incremental updates in UI rendering to avoid performance issues related to full re-renders, highlighting the limitations of systems like imgui and the potential of systems like Conrod, despite its shortcomings.
        • "While imgui can express UI concisely, it cheats somewhat by not being incremental."

      Evaluation of Existing Systems

      • Comparison with Other Systems: Mentions SwiftUI, imgui, React, and Svelte, discussing their approaches to handling reactive UI and their adaptability to Rust.
        • "SwiftUI has gained considerable attention due to its excellent ergonomics in this regard."

      Technical Challenges and Proposals

      • Challenges in Tree Mutation and Stable Identity: Discusses the challenges in tree mutation techniques and the importance of stable identity in UI components to preserve user interaction states.
        • "Mutation of the DOM is expressed through a well-specified and reasonably ergonomic, if inefficient, interface."

      Conclusion and Future Work

      • Future Directions and Experiments: Encourages experimentation with the Crochet prototype and discusses the ongoing development and research in making reactive UIs more efficient and user-friendly.
        • "I encourage people to experiment with the Crochet code."

      This blog post encapsulates Levien's ongoing exploration into developing a principled approach to reactive UI in Rust, highlighting the complexity of the task and his experimental prototype, Crochet, as a step towards solving these challenges.

    3. Summary of "Towards Principled Reactive UI" by Raph Levien (September 25, 2020)

      • Introduction and Motivation:

        • Levien revisits the topic of reactive UI in Rust, building on his previous work to further explore efficient expression methods in the context of Rust's capabilities and limitations.
        • Discusses the diversity of reactive UI frameworks and the potential for adopting successful concepts from existing systems without needing to create an entirely new framework.
        • "It is not the intent of this post to provide a comprehensive review of the literature" – Indicates a focus on identifying promising techniques rather than exhaustive cataloging.
      • Goals for Reactive UI Systems:

        • Emphasizes the need for concise expression of application logic, with references to various frameworks like SwiftUI and imgui that have addressed this in differing ways.
        • Highlights the necessity of incremental updates to UI elements, criticizing some approaches like imgui for not truly being incremental.
        • Discusses challenges such as implementing tab focusing and handling simple types within the UI toolkit, pointing out the difficulties faced by frameworks like Iced in these areas.
      • Principles for Building Reactive UI Frameworks:

        • Identifies three critical principles for any reactive UI framework: use of observable objects, expression of mutations in the render object tree, and stable identity of nodes within that tree.
        • Questions the standard use of observable objects due to their complexity and potential inefficiency, especially in a Rust context.
        • Advocates for a system where UI logic can trigger re-computation without requiring detailed context about what changed, potentially borrowing concepts from Rust's async infrastructure.
      • Introduction of Crochet Prototype:

        • Introduces Crochet, a research prototype designed to explore and implement the principles discussed.
        • Crochet aims to simplify the reactive UI model by not relying on observables and using a simpler system for tracking UI changes.
        • Discusses potential advantages of Crochet, such as better integration with Rust's async features and a simpler approach to handling UI mutations.
      • Comparative Analysis with Other Frameworks:

        • Compares Crochet's approach to other frameworks like Jetpack Compose and imgui, highlighting differences in handling actions from widgets and mutations within the UI tree.
        • Describes Crochet's approach to avoiding recomposition from the root for performance efficiency and its potential impact on the development of reactive UIs in Rust.
      • Closing Thoughts and Future Directions:

        • Levien encourages experimentation and feedback on the Crochet prototype to refine and validate the proposed principles.
        • Notes ongoing discussions and contributions from the community that influence the development of Druid and Crochet.

      Concluding Insights:

      • This blog post represents Levien's ongoing efforts to refine the theoretical foundations and practical implementations of reactive UI frameworks in Rust, highlighting new research directions and the introduction of the Crochet prototype as a platform for further exploration and development.
    1. Summary of the Talk on the Future of Web Frameworks by Ryan Carniato

      • Introduction and Background:

        • Ryan Carniato, creator of SolidJS, has extensive experience in web development spanning 25 years, having worked with various technologies including ASP.NET, Rails, and jQuery.
        • SolidJS was started in 2016 and reflects a shift towards new paradigms in web frameworks, particularly in the front-end JavaScript ecosystem.
        • Quote: "I've been doing web development now for like 25 years... it wasn't really until the 2010s that my passion reignited for front-end JavaScript."
      • Core Themes and Concepts:

        • Modern front-end development heavily relies on components (e.g., class components, function components, web components) which serve as fundamental building blocks for creating modular and composable applications.
        • Components have runtime implications due to their update models and life cycles, influencing the performance and design of web applications.
        • Traditional component models use either a top-down diffing approach (like virtual DOM) or rely on compilation optimizations to enhance performance.
        • Quote: "Modern front-end development for years has been about components... however, in almost every JavaScript framework components have runtime implications."
      • Reactive Programming and Fine-Grained Reactivity:

        • Ryan advocates for a shift towards reactive programming to manage state changes more efficiently. This approach is likened to how spreadsheets work, where changes in input immediately affect outputs without re-execution of all logic.
        • Fine-grained reactivity involves three primitives: signals (atomic units of state), derived state (computeds or memos), and side effects (effects). These primitives help manage state and side effects without heavy reliance on the component architecture or compilation.
        • Quote: "What if the relationship held instead? What if whenever we changed B and C, A also immediately updated? That's basically what reactive programming is."
      • Practical Demonstration and Code Examples:

        • Ryan demonstrated the implementation of fine-grained reactivity using SolidJS, showing how state management and updates can be handled more efficiently compared to traditional methods that rely heavily on component re-renders and hooks.
        • The examples provided emphasized how reactive programming can simplify state management and improve performance by only updating components that need to change, reducing unnecessary re-renders.
        • Quote: "The problem is that if any state in this component changes, the whole thing reruns again... what if we didn't? What if components didn't dictate the boundary of our performance?"
      • Performance Implications and Advantages:

        • The "reactive advantage" in SolidJS and similar frameworks lies in their ability to run components minimally, avoiding stale closures and excessive dependencies that can degrade performance.
        • Ryan highlighted that in reactive frameworks, component boundaries do not dictate performance; instead, performance optimization is achieved through smarter state management and reactive updates.
        • Quote: "Components run once... state is independent of components. Component boundaries are for your sake, how you want to organize your code, not for performance."
      • Future Directions and Framework Evolution:

        • The talk touched on the broader impact of reactive programming and fine-grained reactivity on the evolution of web frameworks. This includes the potential integration with AI and compilers to further optimize performance and developer experience.
        • Ryan suggested that the future of web development might see more frameworks adopting similar reactive principles, possibly leading to a "reactive renaissance" in the industry.
        • Quote: "A revolution is not in the cards, maybe just a reactive Renaissance."
      • Q&A and Additional Insights:

        • During the Q&A, Ryan discussed the potential application of SolidJS principles in environments like React Native and native code development, indicating the flexibility and broad applicability of reactive programming principles across different platforms and technologies.
        • Quote: "The custom renderer and stuff is not something you need a virtual DOM to... the reactive tree as it turns out is completely independent."
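The three primitives named in the talk (signals, derived state, effects) can be sketched in a few dozen lines. This is a toy, eagerly updating illustration, not SolidJS's actual internals or API:

```typescript
// Minimal fine-grained reactivity: signals, effects, and memos.
type Sub = () => void;
let currentSub: Sub | null = null; // the effect currently being tracked

function signal<T>(initial: T): readonly [() => T, (next: T) => void] {
  let value = initial;
  const subs = new Set<Sub>();
  const read = () => {
    if (currentSub) subs.add(currentSub); // auto-track the reading effect
    return value;
  };
  const write = (next: T) => {
    value = next;
    for (const sub of [...subs]) sub(); // push the change to subscribers
  };
  return [read, write] as const;
}

function effect(fn: () => void): void {
  const run = () => {
    const prev = currentSub;
    currentSub = run;
    try { fn(); } finally { currentSub = prev; }
  };
  run();
}

// A memo is an effect that writes its result into its own signal.
function memo<T>(fn: () => T): () => T {
  const [read, write] = signal<T>(undefined as unknown as T);
  effect(() => write(fn()));
  return read;
}

// Spreadsheet-style: A updates immediately when B or C changes,
// with no component re-render in sight.
const [bSig, setB] = signal(1);
const [cSig, setC] = signal(2);
const aMemo = memo(() => bSig() + cSig());
setB(5); // aMemo() is now 7
```

This mirrors the quote about the relationship holding: writing `B` re-runs only the computations that read `B`, which is why component boundaries stop mattering for performance.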
    1. Summary of "Xilem: An Architecture for UI in Rust" by Raph Levien (May 7, 2022)

      • Introduction and Motivation:

        • Levien introduces Xilem as a new UI architecture specifically tailored for Rust, addressing the unique challenges posed by Rust's aversion to shared mutable state.
        • The architecture aims to combine modern reactive and declarative UI paradigms in a way that is idiomatic to Rust, learning from the limitations of previous architectures like Druid and other Rust UI projects.
        • "Rust is an appealing language for building user interfaces for a variety of reasons, especially the promise of delivering both performance and safety" – Explains the motivation behind developing a Rust-specific UI framework.
      • Xilem Architecture Overview:

        • Xilem leverages a view tree for declarative UI descriptions, supports incremental updates through diffing, and features an innovative event dispatch system using id paths.
        • Introduces Adapt nodes, an evolution of the lens concept from Druid, allowing for better composition by managing mutable state access between components.
        • "Like most modern UI architectures, Xilem is based on a view tree which is a simple declarative description of the UI" – Describes the core structural concept underpinning Xilem.
      • Comparison with Existing Architectures:

        • Levien critiques existing architectures like immediate mode GUI, The Elm Architecture, and React adaptations in Rust, identifying their limitations in the context of Rust's programming model.
        • Highlights the challenges with integrating asynchronous operations and complex state management in existing systems.
        • "Another common architecture is immediate mode GUI, both in a relatively pure form and in a modified form" – Discusses how Xilem addresses deficiencies in these models.
      • Innovative Features of Xilem:

        • Xilem's event handling distinguishes itself by not relying on shared mutable state but instead uses a path-based dispatch system to manage state mutations.
        • The architecture supports highly granular change propagation and efficient state management through features like memoization nodes and adaptive state nodes.
        • "The most innovative aspect of Xilem is event dispatching based on an id path, at each stage providing mutable access to app state" – Highlights the unique approach to event handling in Xilem.
      • Challenges and Future Directions:

        • Discusses potential areas for further development, including multithreading support, fine-grained change propagation, and integration with dynamic languages through Python bindings.
        • Expresses interest in exploring other domains like text-based user interfaces (TUI) or adapting Xilem principles to web-based environments.
        • "We haven’t yet built up real UI around the new architecture" – Acknowledges that Xilem is still in the conceptual stage and requires practical validation.

      Concluding Insights:

      • Xilem represents Levien's ongoing efforts to refine UI architecture in Rust, addressing the specific challenges posed by Rust's language features while embracing the principles of modern UI design. This architecture aims to provide a robust foundation for building intuitive, high-performance, and safe user interfaces in Rust, encouraging feedback and experimentation to validate and expand its capabilities.
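The id-path dispatch idea can be made concrete with a small sketch. The TypeScript below is a hypothetical illustration, not Xilem's actual (Rust) types: an event records the path of node ids from the root to the originating widget, and dispatch walks that path downward while holding mutable access to app state.

```typescript
// Event dispatch along an id path, with mutable access to app state.
type Id = number;

interface Node<S> {
  id: Id;
  children: Node<S>[];
  onEvent?: (state: S) => void; // leaf handler mutates app state
}

function dispatch<S>(root: Node<S>, path: Id[], state: S): boolean {
  if (path.length === 0 || path[0] !== root.id) return false;
  if (path.length === 1) {
    root.onEvent?.(state); // reached the originating widget
    return true;
  }
  // Descend only into the child on the recorded path.
  return root.children.some((c) => dispatch(c, path.slice(1), state));
}

// Example: a counter button that lives at id path [0, 2].
const app = { count: 0 };
const tree: Node<typeof app> = {
  id: 0,
  children: [
    { id: 1, children: [] },
    { id: 2, children: [], onEvent: (s) => { s.count += 1; } },
  ],
};
dispatch(tree, [0, 2], app); // app.count becomes 1
```

In Xilem each level can additionally adapt (narrow) the state it hands to its subtree, which is what the Adapt nodes described above are for; this sketch passes the whole state through for brevity.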
    2. Summary of "Xilem: an architecture for UI in Rust" from raphlinus.github.io

      • Motivation for Xilem: Existing UI architectures in Rust like Druid face limitations due to Rust's avoidance of shared mutable state, making them a poor fit for conventional UI systems. Xilem aims to provide an architecture better suited to Rust's paradigms, inspired by existing works and innovations specific to UI needs.

        • "However, finding a good architecture is challenging... Rust is a poor fit for UI."
      • Design Goals and Influence: Xilem seeks to blend modern UI design principles such as reactive and declarative UIs with Rust's emphasis on performance and safety. It draws inspiration from SwiftUI, Flutter, and React.

        • "The goals include expression of modern reactive, declarative UI, in components which easily compose, and a high performance implementation."
      • Core Components of Xilem:

        • View Tree and Widget Tree: Utilizes a declarative view tree for UI description, which is diffed against its successive versions to update a more traditional widget tree. This setup helps manage UI state and changes efficiently.
          • "Xilem is based on a view tree which is a simple declarative description of the UI."
        • Incremental Computation: At its core, Xilem incorporates an incremental computation engine that optimizes UI updates and event handling.
          • "Xilem also contains at heart an incremental computation engine with precise change propagation, specialized for UI use."
        • Event Dispatching: Introduces an innovative event dispatching mechanism using 'id paths', allowing mutable access to app state throughout the component hierarchy.
          • "The most innovative aspect of Xilem is event dispatching based on an id path."
      • Comparative Analysis:

        • Druid and Immediate Mode GUI: Xilem addresses challenges like static vs dynamic widget hierarchies and complex state management seen in Druid and immediate mode GUIs.
          • "The existing Druid architecture has some nice features, but we consistently see people struggle with common themes."
        • Elm Architecture and React-like Implementations: Discusses the limitations of message dispatch in Elm and the adaptation challenges of React patterns in Rust.
          • "The Elm documentation specifically warns against components, saying, 'actively trying to make components is a recipe for disaster in Elm.'"
      • Technical Implementation:

        • Example: Uses a simple counter to demonstrate the declarative construction of UI and event handling within Xilem, showing how the architecture supports mutable access to app state without the typical challenges of Rust.
          • "fn app(count: &mut u32) -> impl View<u32> { ... }"
        • Identity and Event Propagation: Explains the use of stable identities and id paths for event propagation, which is crucial for maintaining state consistency across UI updates.
          • "A specific detail when building the widget tree is assigning a stable identity to each view."
      • Future Directions and Integration:

        • Async Integration: Discusses the potential integration of asynchronous operations, which are crucial for responsive UIs.
          • "Async and change propagation for UI have some common features, and the Xilem approach has parallels to Rust’s async ecosystem."
        • Environmental and Threading Considerations: Outlines plans for handling environment settings and multithreading, which could enhance performance and scalability.
          • "Another very advanced topic is the ability to exploit parallelism (multiple threads) to reduce latency of the UI."

      The post concludes by acknowledging the conceptual stage of Xilem, inviting feedback, and highlighting its potential application across various UI domains.
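Stable identity is easiest to see in a keyed reconcile step. The sketch below (hypothetical TypeScript shapes, not Xilem's API) matches new views to old widgets by key, so per-widget state such as focus follows the logical node across a reorder instead of sticking to a position:

```typescript
// Keyed reconciliation: widget state follows identity, not index.
interface View { key: string; label: string; }
interface Widget { key: string; label: string; hasFocus: boolean; }

function reconcile(old: Widget[], next: View[]): Widget[] {
  const byKey = new Map(old.map((w) => [w.key, w]));
  return next.map((v) => {
    const prev = byKey.get(v.key);
    return prev
      ? { ...prev, label: v.label }                       // update, keep state
      : { key: v.key, label: v.label, hasFocus: false };  // mount fresh
  });
}

const widgets: Widget[] = [
  { key: "a", label: "A", hasFocus: false },
  { key: "b", label: "B", hasFocus: true }, // user focused "b"
];

// The app reorders the views; focus should follow node "b", not index 1.
const next = reconcile(widgets, [
  { key: "b", label: "B" },
  { key: "a", label: "A!" },
]);
```

Without stable keys, a positional diff would hand "b"'s focus to whatever widget ends up in slot 1, which is exactly the interaction-state bug the post warns about.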

    1. Summary of "Towards a Unified Theory of Reactive UI" by Raph Levien (November 22, 2019)

      • Overview and Motivation:

        • Levien explores various reactive UI frameworks to develop a cohesive understanding and communicate effective strategies for the Druid UI system.
        • Reactive UI allows UI construction and updates to be more intuitive and less redundant compared to traditional object-oriented approaches.
        • "My own main motivation for exploring reactive UI is primarily because the object-oriented idioms don’t translate well to Rust" – This highlights the shift towards reactive UI due to language and paradigm limitations in Rust concerning ownership and callbacks.
      • Theoretical Framework and Tree Transformations:

        • Reactive UIs fundamentally involve transforming UI component trees across various stages: user data → widget tree → render object tree → draw tree.
        • Levien discusses tree representation either as a data structure or as an execution trace, both impacting how UI transformations are managed.
        • "The main theoretical construct is that reactive UI is at heart a pipeline of tree transformations." – This signifies the central role of managing UI component hierarchies through transformations.
      • Diversity in Framework Implementations:

        • Despite common goals, implementation details in reactive UI frameworks vary significantly, influencing how application state is managed and updated.
        • Differences are often seen in how these frameworks handle state, incremental updates, and dependency tracking.
        • "The details of implementation still seem wildly divergent." – Points to a lack of standardization in implementing reactive principles across different systems.
      • Incremental Transformations and Diffing:

        • Emphasizes the importance of incremental updates in UI frameworks to minimize the performance costs of updating UI states.
        • Discusses strategies like diffing, which involves comparing new and old UI trees to determine necessary updates.
        • "One of the fundamental goals of a UI framework is to keep the deltas flowing down the pipeline small." – Highlights the challenge of efficiently propagating changes through the UI system.
      • Push vs. Pull Interfaces in UI Frameworks:

        • The distinction between push and pull interfaces in managing data flow through UI systems is crucial, with many systems using hybrid approaches.
        • "Pulling from a tree basically means calling a function to access that part of the tree." – Describes how data is retrieved or updated in UI components.
      • Case Studies and Examples:

        • Provides insights into specific UI frameworks like Druid, Imgui, Flutter, Jetpack Compose, and React, each illustrating unique approaches to tree transformations and state management.
        • These examples highlight the practical applications of theoretical concepts discussed throughout the blog.
      • Future Directions and Academic Gaps:

        • Levien seeks further discussion and academic input to refine these theories and potentially translate them into systematic, robust UI development practices.
        • "I’m also curious if there’s good academic literature I’m missing." – Shows an openness to expanding the theoretical underpinnings of reactive UI based on community and academic feedback.

      Concluding Thoughts:

      • Raph Levien’s exploration into reactive UI frameworks is a deep dive into understanding and characterizing the diverse implementation strategies that exist, aiming to refine and communicate effective methods for building reactive user interfaces, particularly in the context of systems like Druid that utilize Rust’s programming paradigm.
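The push/pull distinction can be shown with a toy example (TypeScript, purely illustrative): a pull interface does its work when the consumer asks, by calling a function into the tree; a push interface does its work when the producer writes, by notifying subscribers.

```typescript
// Pull: the consumer calls a function to read; work happens on read.
const source = { value: 2 };
const pullDouble = () => source.value * 2; // recomputed at every read

// Push: the producer invokes subscribers; work happens on write.
let pushed = pullDouble();
const subscribers: Array<(v: number) => void> = [(v) => { pushed = v * 2; }];

function setValue(v: number) {
  source.value = v;
  for (const notify of subscribers) notify(v); // push the delta downstream
}

setValue(5);
// pullDouble() returns 10 when asked; `pushed` was already 10 at write time.
```

Hybrid systems, as the post notes, mix the two: push a cheap "something changed" notification, then pull the actual values lazily.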
    1. Here is a detailed summary of the article "Super Charging Fine-Grained Reactive Performance" by Milo:

      1. Introduction to Reactivity in JavaScript

        • Definition and Importance: "Reactivity allows you to write lazy variables that are efficiently cached and updated, making it easier to write clean and fast code."
        • Introduction to Reactively: "I've been working on a new fine grained reactivity library called Reactively inspired by my work on the SolidJS team."
      2. Characteristics of Fine-Grained Reactivity Libraries

        • Library Examples and Usage: "Fine-grained reactivity libraries... Examples include new libraries like Preact Signals, µsignal, and now Reactively, as well as longer-standing libraries like Solid, S.js, and CellX."
        • Functionality and Advantages: "With a library like Reactively, you can easily add lazy variables, caching, and incremental recalculation to your typescript/javascript programs."
      3. Core Concepts in Reactively

        • Dependency Graphs: "Reactive libraries work by maintaining a graph of dependencies between reactive elements."
        • Implementation Example: "import { reactive } from '@reactively/core'; const nthUser = reactive(10);"
      4. Goals and Features of Reactive Libraries

        • Efficiency and State Consistency: "Efficient: Never overexecute reactive elements... Glitch free: Never allow user code to see intermediate state where only some reactive elements have updated."
      5. Comparison Between Lazy and Eager Evaluation

        • Evaluation Strategies: "A lazy library... will first ask B then C to update, then update D after the B and C updates have been completed."
        • Algorithm Challenges: "The first challenge is what we call the diamond problem... The second challenge is the equality check problem."
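The diamond problem can be shown with a toy counter (TypeScript, not the Reactively code): D reads B and C, which both read A. A naive eager push re-runs D once per parent notification, while marking dirty and evaluating lazily on read runs D exactly once per read, however many writes occurred.

```typescript
// Diamond: D depends on B and C, which both depend on A.
let a = 1;
let dRuns = 0;
const b = () => a + 1;
const c = () => a * 2;
const d = () => { dRuns += 1; return b() + c(); };

// Naive eager push: a write to A notifies B and C, and each notification
// re-runs D, so D executes twice for a single change.
function setAEager(v: number) { a = v; d(); d(); }
setAEager(2);
const eagerRuns = dRuns; // 2 runs for one change

// Lazy: a write only marks D dirty; D is evaluated once on the next read,
// no matter how many writes happened in between.
dRuns = 0;
let dirty = true;
let cached = 0;
function setALazy(v: number) { a = v; dirty = true; }
function readD(): number {
  if (dirty) { cached = d(); dirty = false; }
  return cached;
}
setALazy(3);
setALazy(4);
readD();
readD();
const lazyRuns = dRuns; // 1 run for two changes and two reads
```

The equality-check problem is the complementary failure: an eager system may propagate a write whose computed value did not actually change, re-running everything downstream for nothing.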
      6. Algorithm Descriptions

        • MobX: "MobX uses a two pass algorithm, with both passes proceeding from A down through its observers... MobX stores a count of the number of parents that need to be updated with each reactive element."
        • Preact Signals: "Preact checks whether the parents of any signal need to be updated before updating that signal... Preact also has two phases, and the first phase 'notifies' down from A."
        • Reactively: "Reactively uses one down phase and one up phase. Instead of version numbers, Reactively uses only graph coloring."
      7. Benchmarking Results

        • Performance Observations: "In early experiments with the benchmarking tool, what we've discovered so far is that Reactively is the fastest."
        • Framework Comparisons: "The Solid algorithm performs best on wider graphs... The Preact Signal implementation is fast and very memory efficient."

      This summary encapsulates the key concepts, methodologies, and findings presented in the article, focusing on the innovations and performance of various fine-grained reactivity libraries, especially the newly introduced Reactively.
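The down/up phases and graph coloring that the article attributes to Reactively can be approximated in toy form (TypeScript, simplified, hypothetical names). A write colors descendants "check" going down; a read resolves parents going up before recomputing. Unlike the real algorithm, this sketch always recomputes a "check" node instead of first verifying that a parent's value actually changed, but it still shows the no-op-write optimization and that the diamond shape recomputes each node only once per change.

```typescript
type Color = "clean" | "check" | "dirty";

interface Cell {
  color: Color;
  value: number;
  compute?: () => number; // absent for source cells
  parents: Cell[];
  children: Cell[];
  runs: number; // how many times compute executed
}

function cell(value: number): Cell {
  return { color: "clean", value, parents: [], children: [], runs: 0 };
}

function derived(compute: () => number, parents: Cell[]): Cell {
  const c: Cell = { color: "dirty", value: 0, compute, parents, children: [], runs: 0 };
  for (const p of parents) p.children.push(c);
  return c;
}

// Down phase: on a real change, color all descendants "check" once.
function write(c: Cell, v: number) {
  if (c.value === v) return; // equal write: nothing downstream is touched
  c.value = v;
  const mark = (n: Cell) => {
    for (const child of n.children) {
      if (child.color === "clean") { child.color = "check"; mark(child); }
    }
  };
  mark(c);
}

// Up phase: resolve parents first, then recompute this cell once.
function read(c: Cell): number {
  if (c.color !== "clean") {
    for (const p of c.parents) read(p);
    if (c.compute) { c.value = c.compute(); c.runs += 1; }
    c.color = "clean";
  }
  return c.value;
}

// Diamond: d0 <- (b0, c0) <- a0.
const a0 = cell(1);
const b0 = derived(() => a0.value + 1, [a0]);
const c0 = derived(() => a0.value * 2, [a0]);
const d0 = derived(() => b0.value + c0.value, [b0, c0]);

read(d0);     // initial evaluation: each derived cell runs once
write(a0, 2); // down phase: b0, c0, d0 colored "check"
read(d0);     // up phase: each derived cell runs exactly once more
write(a0, 2); // equal value: nothing marked
read(d0);     // nothing recomputed
```

Because marking stops at already-colored nodes, the diamond only gets traversed once per write, and `d0` runs once per change rather than once per parent.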

    1. if joining data from many sources is cheap we don't need big data you can have small data all over the place

      Is Big-Data about storage and joins? Or is it about computing efficiently over massive scales?

    2. if I add a couple of facts that say oh hey guess what if you're a sister then that's a sub-property of family I add that one bit of data and now I can write just give me all the family members of this thing and because SPARQL is this beautiful graph query language we can handle recursive queries really well

      Are these two different features? OWL for defining rules? and SPARQL for recursive queries?
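They are two cooperating features, and the interplay can be unpacked with a toy model (TypeScript, not OWL or SPARQL syntax): the rule part lifts "sister" and "brother" triples to the "family" super-property, and the query part is a recursive walk over the inferred graph, which a recursive SPARQL property path expresses declaratively.

```typescript
// RDF-style triples: [subject, predicate, object].
type Triple = [string, string, string];

const data: Triple[] = [
  ["alice", "sister", "bob"],
  ["bob", "brother", "carol"],
];

// The "one bit of data": sister and brother are sub-properties of family.
const subPropertyOf: Record<string, string> = {
  sister: "family",
  brother: "family",
};

// Rule application (the OWL-ish part): lift each triple to its super-property.
const inferred: Triple[] = data.map(
  ([s, p, o]): Triple => [s, subPropertyOf[p] ?? p, o],
);

// Recursive query (the SPARQL-ish part): everyone reachable via "family".
function familyOf(start: string, seen = new Set<string>()): Set<string> {
  for (const [s, p, o] of inferred) {
    if (s === start && p === "family" && !seen.has(o)) {
      seen.add(o);
      familyOf(o, seen);
    }
  }
  return seen;
}
// familyOf("alice") contains both "bob" and "carol"
```

In SPARQL the recursion would be a property path like `family+` rather than an explicit traversal, but the division of labor is the same: vocabulary rules enrich the graph, the query language walks it.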

    3. for validation we use the shape constraint language SHACL it's got a bunch of different constraints it's a very expressive data DSL for specifying validation rules

      SHACL

    4. Here's a detailed summary of Daniel Petronik's talk about Fluree, an immutable graph database with Clojure, and its significance in the context of data sharing across trust boundaries:

      • Introduction of Speaker and Topic
      • "my name is Daniel Petronik and I work at Fluree where we are making an immutable graph database with Clojure" - Introduction to the speaker and the subject of the talk.

      • Historical Context of Shipping

      • "stepping back into a serious problem of shipping boxes in the days of yore" - Petronik starts with a historical analogy to illustrate a complex system of logistics and shipping before modern containerization.
      • "it's a very long journey and then you go across the ocean eventually you'll reach your Port of Call" - Description of the arduous process involved in shipping goods before the advent of container ships.

      • Introduction of Containerization

      • "shipping things isn't that expensive we have containers" - Introduction to containerization as a transformative technology that simplified and reduced the cost of shipping goods globally.

      • Parallels to Data Sharing

      • "shipping data across trust boundaries is expensive, risky and not worth it for many categories of applications" - Draws a parallel between the old shipping methods and the contemporary challenges in data sharing across organizational and technical boundaries.

      • Detailed Problems in Data Sharing

      • "every step of the way when somebody touches it you've got to pay for Freight, you've got to pay for labor" - Analogy to the logistical and bureaucratic hurdles in managing data across different systems.

      • Introduction to Semantic Web and RDF

      • "at the foundation of all these things that they came up with is the resource description framework rdf which is just a notation for graphs" - Introduces RDF as a foundational technology for managing data more effectively.

      • Proposal for Simplifying Data Interaction

      • "let's simplify the shape of the data and get rid of that application server that special snowflake" - Proposes a simplification of data handling to improve interoperability and reduce complexity.

      • Role of Fluree in This Paradigm

      • "Fluree, right, we've got a database" - Discusses how Fluree fits into this new paradigm by offering a graph database that incorporates these simplified, standardized approaches to data handling.

      • Future Vision and Call to Action

      • "this isn't going to cover every use case but just like containers didn't take over everything in order to provide value they took over enough that it made a huge difference" - Reflects on the potential impact of Fluree and similar technologies in changing how data is shared and managed.

      This talk emphasizes the need for new methodologies in data management that reduce friction and lower costs, drawing an extended metaphor between the evolution of physical goods shipping and data sharing practices.

    1. 99% of the time, these systems will be doing no work. This wastes time, as the scheduler must constantly poll to see if anything needs to be done.

      Reactivity?

    2. Summary of "So you want to build an ECS-backed GUI framework"

      • Introduction to ECS and Bevy for GUI: The authors discuss using the Entity-Component-System (ECS) framework within Rust, specifically Bevy, to build a UI framework. They highlight the unconventional nature of ECS-based GUIs but reference prior implementations like flecs and various Bevy experiments demonstrating the feasibility and potential of this approach.
      • "It's a type-safe, trendy solution for state management and most importantly: it'll be blazing fast" and "existing experiments like belly, bevy_lunex, bevy_ui_dsl, cuicui_layout and kayak_ui show a ton of promise using Bevy's ECS."

      • Challenges of Developing bevy_ui: The text outlines that the issues with bevy_ui stem not from its ECS or Rust foundations, but rather from the inherent complexity and labor intensity of GUI framework development, which involves numerous components and meticulous coordination.

      • "most of the problems that plague bevy_ui aren't driven by the decision to use an ECS, or even to use Rust. They're the boring, tedious and frustrating ones: writing GUI frameworks is a lot of work with many moving parts."

      • Authors’ Background and Disclaimer: Alice is a maintainer (not the lead) of Bevy and Rose uses Bevy professionally, indicating their deep involvement but also stressing that their opinions are personal and not official.

      • "Alice is a maintainer of Bevy, but not the project lead or even a UI subject-matter-expert. Rose is an employee at the Foresight Spatial Labs."

      • Vision for bevy_ui: The post aims to clarify the rationale behind using ECS for GUI, addressing common critiques and misconceptions, and outlining necessary improvements for making bevy_ui competitive and functional.

      • "This post aims to record how you might make a GUI framework, why we're using an ECS at all, and what we need to fix to make bevy_ui genuinely good."

      • Common Misconceptions and Arguments: The discussion addresses several typical arguments against developing a native Bevy GUI framework, like the feasibility of a single framework satisfying diverse application requirements and the tendency to prefer existing solutions to avoid 'Not Invented Here' syndrome.

      • "One GUI framework to rule them all?" and "Bevy should just use an existing GUI framework."

      • Technical and Social Reasons for a Bevy-Specific GUI: The authors argue for a Bevy-specific GUI to ensure consistency, ease of maintenance, integration with Bevy’s core features, and avoiding dependency risks which can complicate maintenance and updates.

      • "Consistency with the rest of the engine is valuable in its own right. It makes for an easier and more consistent learning experience for new users."

      • Implementation Challenges: The article details specific technical challenges involved in building bevy_ui, such as managing a UI tree structure, input collection, text rendering, and integrating with Bevy's system for rendering, state management, and data transfer.

      • "Storing a tree of nodes... In bevy_ui, this is stored in the World: each node is an entity with the Node component."

      • Strategic Plan for Improvement: The authors propose a mix of straightforward, controversial, and research tasks to progressively refine and enhance bevy_ui. These include enhancing documentation, adopting new layout strategies, creating a styling abstraction, and improving integration with accessibility features.

      • "We can split the work to be done into three categories: straightforward, controversial and research."

      • Conclusion: The text concludes with optimism about the future of bevy_ui, encouraging the Bevy community to engage in its development to realize its potential fully.

      • "But critically, none of it is impossible. If we (the Bevy developer community) can come together and steadily fix these problems, one at a time, we (Alice and Rose) genuinely think bevy_ui will one day live up to the standard for quality, flexibility and ergonomics that we expect out of the rest of the engine."
    1. Here's a detailed and comprehensive summary of the information provided about the Sciter documentation on "Flows and Flexes":

      • Sciter's Layout Responsiveness Principles
      • Flex Units: Defined as `0.5*`, `1*`, `2*`, or simply `*`, these are used in layout properties such as width, height, margins, paddings, and border-width. "The flex unit (`width:1*` or just `width:*`) can be applied to pretty much any existing CSS property."
      • Flow Property: Manages the layout inside an element, similar to display:flexbox and display:grid in W3C CSS. "The flow property - declaration of layout manager used inside the element."

      • Flexible Margins

      • Affects horizontal positioning inside a container using left/right flex ratios. Example: "These rules will shift a child horizontally inside a container with left(0.7)/right(0.3) ratio."

      • Flexible Dimensions

      • Child elements can be set to fill the container's space completely using width: 1 and height: 1. "These rules will instruct a child to fill a container's free inner area in full."

      • Flexible Border Spacing

      • Distributes spacing evenly among children within a container. "These rules will instruct children to be spread inside a container's with equal spacing."

      • Flow Configurations

      • Default: Automatically discovers flow based on content.
      • Horizontal: Positions children in a single row. "Children are replaced in single row."
      • Horizontal-wrap: Arranges children in multiple rows. "Children are replaced in multiple rows, if they do not fit in one row."
      • Vertical: Organizes children in a single column. "Children are replaced in single column."
      • Vertical-wrap: Similar to horizontal-wrap but for vertical arrangements.
      • Stack: Stacks children on top of each other. "Children are replaced on top of one another."
      • Grid(): Places elements in a defined grid layout. "The grid flow allows to replace elements as grid cells using 'ascii template'."
      • Row(): Similar to grid but organizes content by columns automatically. "The row() flow is a variant of a grid() flow and is different only by method of grid content definition."

      • Specific Layout Tips

      • Tips for using flex units to fill vertical space, aligning children using vertical-align and horizontal-align, and breaking rows or columns explicitly are provided to optimize layout management.

      This summary condenses the critical aspects of the Sciter documentation on CSS flows and flexes, focusing on how these properties can be implemented to manage layout responsiveness effectively.

    1. Here's a structured summary of the article "Why Not Flexbox?" discussing Subform's decision to develop its own layout engine instead of using Flexbox:

      • Background on Flexbox and Subform: Initially, Subform used Flexbox post-Kickstarter launch, but user feedback revealed its complexity as a barrier for designers familiar with Photoshop or Sketch. "Flexbox was too difficult to learn."

      • Complexity of Flexbox: Flexbox adds new concepts like flex, justify-content, and align-items to existing CSS ideas (margins, padding, etc.), creating a complex, often unintuitive system. "Flexbox introduces new concepts...on top of existing CSS concepts."

      • Subform's Layout Engine:

      • Simpler Conceptual Model: Designed to be more intuitive with fewer concepts applied more uniformly. "Compared to Flexbox, the Subform layout engine has fewer concepts, applied uniformly."
      • Design Choices:

        • Uniform Usage: Utilizes the same units (pixels, percentages, stretch) uniformly across different settings. "Uniform stretch units."
        • Default Settings: Includes 'default between' settings to avoid common CSS issues like margin manipulations. "Default between."
        • Expressiveness: Facilitates more specific spacing configurations between elements. "This approach is far more expressive than Flexbox."
      • Learning and Implementation: The article encourages experimentation with Subform's layout engine and provides resources for further learning. "The best way to learn the layout system is to just play around with it."

      For further details and in-depth understanding, you can check out the full article here: Why Not Flexbox?

    1. The solution is just to have the computations register with their parent computation and for clean up the same way we do subscriptions whenever that parent re-evaluates.

      Does this mean removing child computations from the graph every time the parent re-evaluates?
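      A minimal sketch of that ownership idea (names and structure are hypothetical, not Solid's actual internals): each computation registers with the computation that created it, and a parent disposes its children before every re-evaluation, so the children really are removed from the graph and recreated as the parent re-runs.

      ```typescript
      // Ownership sketch: a computation registers itself with the computation
      // that created it, and a parent disposes (and forgets) its children
      // each time it re-evaluates. Illustrative names, not Solid's real API.
      type Computation = {
        fn: () => void;
        children: Computation[];
        disposed: boolean;
      };

      let currentOwner: Computation | null = null;

      function createComputation(fn: () => void): Computation {
        const node: Computation = { fn, children: [], disposed: false };
        // Register with the parent so the parent can clean us up later.
        if (currentOwner) currentOwner.children.push(node);
        run(node);
        return node;
      }

      function dispose(node: Computation): void {
        for (const child of node.children) dispose(child);
        node.children = [];
        node.disposed = true;
      }

      function run(node: Computation): void {
        // Re-evaluating a parent first disposes everything it created on the
        // previous run, then recreates whatever its body needs this time.
        for (const child of node.children) dispose(child);
        node.children = [];
        node.disposed = false;

        const prevOwner = currentOwner;
        currentOwner = node;
        try {
          node.fn();
        } finally {
          currentOwner = prevOwner;
        }
      }
      ```

      This is the same shape as subscription cleanup: the parent's re-run is the natural point where stale child computations get torn down.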

    2. 5. Optimize reactivity for creation

      Solid's design minimizes creation overhead by using efficient data structures for managing subscriptions and avoiding unnecessary memory allocation. "Signals hold the list of subscribers so that they can notify them when they update..."
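      A minimal sketch of "signals hold the list of subscribers" (illustrative only; Solid's real implementation auto-subscribes the running computation rather than exposing subscribe()):

      ```typescript
      // Each signal keeps its own subscriber set and notifies on write --
      // no central registry lookup is needed. Hypothetical API for clarity.
      type Subscriber = () => void;

      function createSignal<T>(value: T) {
        const subscribers = new Set<Subscriber>();
        // In a real system, reading inside a computation would auto-subscribe
        // it; we expose subscribe() explicitly to keep the sketch small.
        const read = (): T => value;
        const write = (next: T): void => {
          value = next;
          // The signal owns its subscriber list, so notification is a
          // direct iteration over a local Set.
          for (const sub of subscribers) sub();
        };
        const subscribe = (sub: Subscriber) => subscribers.add(sub);
        return { read, write, subscribe };
      }
      ```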

    3. 4. Use less computations

      Balanced Granularity: By reducing the granularity where it pays off the most, such as by diffing a few values for attributes, Solid optimizes performance without sacrificing reactivity. "So what if we only made one for each template to handle all attributes as a mini diff..."

    4. 3. Loosen the granularity

      Balanced Granularity: By reducing the granularity where it pays off the most, such as by diffing a few values for attributes, Solid optimizes performance without sacrificing reactivity. "So what if we only made one for each template to handle all attributes as a mini diff..."

    5. But where is the highest creation cost? Creating all those computations. So what if we only made one for each template to handle all attributes as a mini diff, but still create separate ones for inserts. It's a good balance since the cost of diffing a few values to be assigned to attributes costs very little, but saving 3 or 4 computations per row in a list is significant. By wrapping inserts independently we still keep from doing unnecessary work on update.
      • What if we wrote a self-adjusting reactivity graph?
      • A reactive graph is composed of computations that depend on each other.
      • Managing those dependencies has overhead.
      • The more granular the graph, the more overhead.
      • At some point the cost outweighs the value; there's an optimum to be found here.
      • What if we wrote this graph by hand?
        • We'd probably write fewer nodes.
        • Things that we know don't change wouldn't even be in the graph.
        • Things that change frequently together might be clumped together (kinda like neurons).
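      The "one computation per template for all attributes" trade can be sketched like this (DOM-free and hypothetical: the element is modeled as a plain record so we can count writes):

      ```typescript
      // Granularity trade-off: instead of one computation per attribute,
      // one per template that mini-diffs all attribute values against the
      // previous run and only writes what changed.
      type Attrs = Record<string, string>;

      function createAttributeEffect(
        element: Attrs,
        compute: () => Attrs
      ): { update: () => number } {
        let previous: Attrs = {};
        const update = (): number => {
          const next = compute();
          let writes = 0;
          for (const key of Object.keys(next)) {
            // The mini diff: only touch the element when the value changed.
            if (previous[key] !== next[key]) {
              element[key] = next[key];
              writes++;
            }
          }
          previous = next;
          return writes;
        };
        update();
        return { update };
      }
      ```

      Diffing a few strings is cheap; what it buys is never allocating the 3-4 per-attribute computations per row in the first place.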
    6. But where is the highest creation cost? Creating all those computations. So what if we only made one for each template to handle all attributes as a mini diff, but still create separate ones for inserts. It's a good balance since the cost of diffing a few values to be assigned to attributes costs very little, but saving 3 or 4 computations per row in a list is significant. By wrapping inserts independently we still keep from doing unnecessary work on update.

      IMPORTANT:
      • It's better to clump all attributes together and diff at the leaf.
      • I'm not sure about the insertion.

    7. However, when you consider batching updates and that order matters it isn't that simple to solve.
      • I think that means: reactivity over ordered, reduced lists is hard to make granular.
    8. Where are we paying the highest cost on update? Nesting. Doing unnecessary work reconciling lists by far. Now you might be asking why even reconcile lists at all?
      • What does "reconciling lists" mean in this context?
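      As I understand it, "reconciling" means matching the new list against what was already rendered so existing nodes are reused instead of recreated. A toy sketch (illustrative; real reconcilers also compute minimal moves):

      ```typescript
      // Given the previously rendered keyed rows and the new list of keys,
      // reuse an existing row when its key survives and create one otherwise.
      type Row = { key: string; created: boolean };

      function reconcile(old: Row[], nextKeys: string[]): Row[] {
        const byKey = new Map<string, Row>();
        for (const row of old) byKey.set(row.key, row);
        return nextKeys.map((key) => {
          const existing = byKey.get(key);
          // Reuse beats rebuild: a kept row carries its DOM (and state) along.
          return existing ?? { key, created: true };
        });
      }
      ```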
    9. With a compiler, you can remove this iteration and decision tree and simply just write the exact instructions that need to happen.
      • Does the translation happen at compilation (build) time, or at run time?
      • At compilation time we are bound to the syntax (i.e. if the compiler is implemented for JSX we can't use it with CLJS).
      • Also it can't be dynamic (kind of the point of compilation).
      • If it happens at runtime, is translation more performant than a decision tree?
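      A string-based toy of what "removing the iteration and decision tree" means (purely illustrative, not Solid's actual compiler output): the interpreter walks a template description and branches on every node, while the "compiled" version is straight-line code because the shape was resolved at build time.

      ```typescript
      // Runtime interpreter: iterates children and branches on node shape.
      type VNode = { tag: string; text?: string; children?: VNode[] };

      function interpret(node: VNode): string {
        if (node.text !== undefined) {
          return `<${node.tag}>${node.text}</${node.tag}>`;
        }
        const inner = (node.children ?? []).map(interpret).join("");
        return `<${node.tag}>${inner}</${node.tag}>`;
      }

      // What a compiler could emit for the specific template
      // <div><span>{name}</span></div>: exact instructions, no iteration,
      // no decisions -- only the dynamic part (name) remains a parameter.
      function compiledGreeting(name: string): string {
        return `<div><span>${name}</span></div>`;
      }
      ```

      Both produce the same output; the compiled form just skips the per-node bookkeeping on every render.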
    10. The next morning he had come back with his new library taking an almost too simple Set lookup combined with these techniques. And guess what it was smaller and about the same performance. Maybe even better.

      Efficiency through Simple Algorithms: The story of udomdiff illustrates that sometimes simpler, practical approaches to algorithm design can yield high performance. The library leverages a basic Set lookup combined with common list manipulation techniques, demonstrating efficiency through simplicity. "And guess what it was smaller and about the same performance. Maybe even better."
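      A loose sketch of the "simple Set lookup" idea (a simplification of the approach, not udomdiff's code): build Sets of the old and new keys so membership checks are O(1), then classify each item as added, removed, or kept.

      ```typescript
      // Classify a keyed list update with two Set lookups per item.
      function diffKeys(oldKeys: string[], newKeys: string[]) {
        const oldSet = new Set(oldKeys);
        const newSet = new Set(newKeys);
        return {
          added: newKeys.filter((k) => !oldSet.has(k)),
          removed: oldKeys.filter((k) => !newSet.has(k)),
          kept: newKeys.filter((k) => oldSet.has(k)),
        };
      }
      ```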

    11. Now to be clear there is a reason we hit the same performance plateau in the browser. The DOM. Ultimately that is our biggest limitation.

      Performance Plateaus and the DOM's Role: The author begins by discussing how despite various optimizations, web development libraries face performance limitations due to the fundamental constraints of the DOM. "The DOM. Ultimately that is our biggest limitation."

  18. Mar 2024
    1. For example, a player entity could have a bullet component added to it, and then it would meet the requirements to be manipulated by some bulletHandler system, which could result in that player doing damage to things by running into them.

      When this is properly partitioned and named it is just a set of rules and properties that compose to express complex behaviors.

      Just like physics in real life.
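      The player/bullet example can be sketched with a minimal ECS (all names here -- World, Bullet, bulletHandler -- are illustrative, not any specific framework's API): adding the Bullet component is what makes the player entity match the system's query.

      ```typescript
      // Minimal ECS: components live in per-type maps keyed by entity ID;
      // a system queries for entities holding every required component.
      type EntityId = number;

      class World {
        private components = new Map<string, Map<EntityId, unknown>>();

        add(entity: EntityId, componentType: string, data: unknown): void {
          if (!this.components.has(componentType)) {
            this.components.set(componentType, new Map());
          }
          this.components.get(componentType)!.set(entity, data);
        }

        query(...componentTypes: string[]): EntityId[] {
          if (componentTypes.length === 0) return [];
          const base =
            this.components.get(componentTypes[0]) ?? new Map<EntityId, unknown>();
          const rest = componentTypes.slice(1);
          return [...base.keys()].filter((e) =>
            rest.every((t) => this.components.get(t)?.has(e))
          );
        }
      }

      // The bulletHandler "system": acts on every entity that currently has
      // both a Position and a Bullet component, whatever kind of entity it is.
      function bulletHandler(world: World): EntityId[] {
        return world.query("Position", "Bullet");
      }
      ```

      Behavior is purely a function of which components an entity carries right now, which is what makes the composition rule-like.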

    2. The normal way to transmit data between systems is to store the data in components, and then have each system access the component sequentially.

      If the systems communicate sequentially using data changes to components of entities (state), it seems like order is a concern. This sounds like manual state propagation; can automatic propagation (reactive, propagator) solve ordering and improve efficiency?
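      A tiny sketch of why the ordering matters (hypothetical system names): physicsSystem writes Position from Velocity, renderSystem reads Position, and running them in the wrong order makes the renderer observe stale state.

      ```typescript
      // Two systems communicating only through component data.
      type Components = { position: number; velocity: number };

      function physicsSystem(c: Components): void {
        c.position += c.velocity; // writes state a later system depends on
      }

      function renderSystem(c: Components, frame: number[]): void {
        frame.push(c.position); // reads whatever physics left behind
      }
      ```

      A reactive/propagator scheduler would derive this ordering from the read/write dependencies instead of requiring it to be declared by hand.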

    3. One of those features is the support of the type hierarchy for the components. Each component can have a base component type (or a base class) much like in OOP. A system can then query with the base class and get all of its descendants matched in the resulting entities selection.

      Semantic type system can be extremely powerful declarative tool
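      A sketch of that base-class query (illustrative: real ECS frameworks use registered type metadata rather than a prototype-chain check): querying for the base component type matches entities holding any of its subclasses.

      ```typescript
      // A component hierarchy: both collider kinds share a base class.
      class Collider {}
      class BoxCollider extends Collider {
        constructor(public size: number) { super(); }
      }
      class SphereCollider extends Collider {
        constructor(public radius: number) { super(); }
      }

      type Entity = { id: number; components: object[] };

      function queryByBase<T extends object>(
        entities: Entity[],
        base: new (...args: any[]) => T
      ): Entity[] {
        // instanceof walks the prototype chain, so subclasses match too.
        return entities.filter((e) => e.components.some((c) => c instanceof base));
      }
      ```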

    4. An entity only consists of an ID for accessing components.

      Don't we also need some kind of associative store that maps entity IDs to components?

    5. This eliminates the ambiguity problems of deep and wide inheritance hierarchies often found in Object Oriented Programming techniques that are difficult to understand, maintain, and extend.

      How?

    6. Systems act globally over all entities which have the required components

      This approach especially suits game engines, which operate on a single attribute of all entities (i.e. the position of all entities, the shadow of all entities, etc.) rather than on all the attributes of a single entity.