46 Matching Annotations
  1. Nov 2023
    1. The author states that they believe EventStores are more valuable because they maintain context, whereas Datomic records changes without context. Datomic arguably leaves other things out... it doesn't, for example, truly provide an entity kind or a firm, consistent schema. It's up to application code to define more formal relationships between kinds of data. Arguably, you could also say it's up to the application to capture context changes. I do wonder if a hybrid model between these two would result in an overall better solution.

  2. Jul 2023
    1. You might construct this view a little bit differently each time as you learn more about yourself, as you evolve, and as you discover different ways new interfaces allow you to do what you need to do better

      An early thought here... there are a lot of micro decisions that have to be made when building out a view layer. Reducing the design space is a necessary component to prevent analysis paralysis. I'm reminded of Charles Chamberlin's Apricot.

  3. May 2023
    1. You don’t think you have a good plan. Sometimes I want to write and it’s impossible. I’ve come to think that’s often because Jim is smarter than me. He recognizes that I haven’t organized my thoughts and I need to do more research or make an outline.

      This is often a barrier for me. As a matter of fact, I've got a big life decision to make that I keep putting off because I don't feel like I have a good enough plan. Unfortunately, the plan is work too, ha.

  4. Apr 2023
    1. The above discussion argues that capabilities would have been a good way to build systems in an ideal world. But given that most current operating systems and programming languages have not been designed this way, how useful is this approach?

      This was my main question too. It seems like a novel exploration, but definitely not tenable in most software systems, given the inability to restrict things like global variables.

    2. Rust doesn't allow global mutable state as it wouldn't be able to prevent races accessing it from multiple threads

      TIL. Makes sense though.

    3. All of the above problems stem from trying to separate security from the code. If the code were fully correct, we wouldn't need the security layer. Checking that code is fully correct is hard, but maybe there are easy ways to check automatically that it does at least satisfy our security requirements...

      A compelling insight.

    4. Having to read every line of every version of each of these packages in order to decide whether it's safe to generate the blog clearly isn't practical.

      Ever the challenge of relying on dependencies. A necessary tradeoff, but a tradeoff nonetheless. This loops back into something I've been thinking about recently, which is that our means of sharing code is critical to the overall stability of software. Having no code-sharing solution obviously isn't ideal, but modern package management leaves something to be desired. I think languages that handle this as a first-class feature (https://www.unison-lang.org/, https://scrapscript.org/) have a leg up in this regard.

    5. Capabilities offer an elegant solution, but seem to be little known among functional programmers

      I've been reading a lot more about capabilities lately. This reminds me of Capability-Based Computer Systems, the book by Henry M. Levy published in 1984.

      https://homes.cs.washington.edu/~levy/capabook/

    1. Using the llamaIndex toolkit, we don’t have to worry about the API calls in OpenAI, because concerns about the complexity of embedding usage or prompt size limitations are easily removed by its internal data structure and LLM task management.

      I hadn't heard of LlamaIndex before. It can be found here: https://github.com/jerryjliu/llama_index

      A summary from the repo:

      LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data.

    1. The first thing I know about burnout is this: burnout happens when we become locked in a cycle of caring about the results of our actions but having no meaningful control over those outcomes

      A simple yet profound observation.

    1. A few other good (related) resources:
      - http://ravichugh.github.io/sketch-n-sketch/
      - https://maniposynth.org/

    2. So what we’ll likely need in the future is a highly customizable wasm-based JS interpreter that supports time-travel-friendly state snapshots out of the box.

      This is really fascinating. I wonder if the replay.io folks would have thoughts on this. It seems like the closest we could get today is a QuickJS integration (though it's notable that QuickJS doesn't seem to be that actively maintained these days). I do wonder if perhaps a language that's more embeddable (like Lua) would be a better fit?

    1. If we want to cram n states for what she’s doing and m states for what she’s carrying into a single machine, we need n × m states. With two machines, it’s just n + m.

      This is fascinating! I wonder what mathematical expressions other relationships from statecharts would yield?
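
      A minimal sketch of the combinatorial point in TypeScript (the state names are invented for illustration): a single flattened machine needs the n × m product of states, while two parallel machines only need n + m.

      // n = 3 states for what she's doing
      type Activity = "walking" | "running" | "resting";
      // m = 2 states for what she's carrying
      type Carrying = "umbrella" | "nothing";

      // One flat machine: every combination is its own state (3 × 2 = 6 states)
      type Combined = `${Activity}-${Carrying}`;

      // Two parallel machines: only 3 + 2 = 5 states to define and reason about
      interface Parallel {
        activity: Activity;
        carrying: Carrying;
      }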

    2. If we want to stick to the confines of an FSM, we have to double the number of states we have

      Interestingly, this is part of the problem David Harel sought to solve when coming up with statecharts. Per the abstract of Statecharts: A visual formalism for complex systems:

      Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication

      https://www.sciencedirect.com/science/article/pii/0167642387900359

    3. This pairing echoes the early days of artificial intelligence. In the ’50s and ’60s, much of AI research was focused on language processing. Many of the techniques compilers now use for parsing programming languages were invented for parsing human languages.

      This is a fascinating aside! #note-to-self read up more on the early history of parsing techniques and AI research.

    4. State

      Haha, I love the little GoF (Gang of Four) superscript! Link decorations aren't used nearly enough, imo.

  5. Mar 2023
    1. My favorite feature of Janet, though, is something that sounds really dumb when I say it out loud: you can actually distribute Janet programs to other people. You can compile Janet programs into statically-linked native binaries and give them to people who have never even heard of Janet before. And they can run them without having to install any gems or set up any virtual environments or download any runtimes or anything else like that.

      I'd love to see this sort of feature exposed more often from programming systems. Portability is huge and being able to easily share software is (or should be) one of its most important aspects.

    1. I really love the design of this page. It's clear and to the point. The little Wikipedia superscript icons and hover cards are amazing.

    1. DO encourages us to represent data without the need to specify its shape in advance

      This seems to suggest structural typing. It makes me think a bit of components in Entity Component Systems.
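
      A quick TypeScript illustration of the structural-typing reading (the names here are hypothetical): any value with a matching shape is accepted, and nothing about its "kind" has to be declared up front.

      // Only the shape matters: anything with a `name: string` field will do.
      function describe(entity: { name: string }): string {
        return `This is ${entity.name}`;
      }

      const player = { name: "Ada", position: { x: 0, y: 0 }, health: 100 };
      describe(player); // OK: extra fields are ignored, no nominal type required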

    1. Separate code from data Keep data immutable Represent data with generic data structures

      Because the highlight formatting is terrible, I'll repeat it below (with a small sketch after the list):

      1. Separate code from data
      2. Keep data immutable
      3. Represent data with generic data structures
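
      A small TypeScript sketch of what those three principles can look like in practice (the domain and names are invented):

      // Data: a plain, generic structure rather than a class (principles 1 and 3)
      type Book = Readonly<Record<string, unknown>>;

      const book: Book = { title: "Data-Oriented Programming", year: 2022 };

      // Code: a standalone function that returns a new value instead of mutating (principle 2)
      function withYear(b: Book, year: number): Book {
        return { ...b, year };
      }

      const updated = withYear(book, 2023); // `book` itself is untouched
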
    1. Although Cambria has support for providing default data in the case of added or removed fields, it does not provide a mechanism to look up missing data, and so cannot support this kind of change today.

      This makes me think of StateML's approach to statecharts. Essentially, they define statecharts as a simplified DSL that externalizes all behavior and contextual state to the executing system. If a similar approach were taken with Cambria, then you could specify that external behavior is required to do a certain type of manipulation and leave the specifics of that out of the lens. The tradeoff here is that the lens itself isn't sufficient to do the translation fully, but it does remove the need to embed a full Turing-complete language inside the lens.

    2. It might be practical to support transformations which do not succeed on all possible data and produces errors on some input

      I think the key aspect here is that this highlights an incompatibility point that requires migration. It also makes me wonder whether expressing complex transformations breaks down once you go beyond a simple DSL?

    3. Data translations in decentralized systems should be performed on read, not on write.

      My interpretation of this section is that transformation should happen on the outer bounds of a system, not its inner bounds. E.g., in a web application you'd want translation happening at the API layer, not at the database layer.

    4. If at some point the costs outweigh the benefits, the best option might be to require all collaborators to upgrade. Cambria doesn’t force users to collaborate across versions with imperfect compatibility, but it provides the option of doing so.

      So the ability to specify at some point that users must upgrade is a key component. That doesn't necessitate requiring it all the time, but it acknowledges the reality that it may happen.

    5. perfect compatibility is impossible

      I think this is really key. While Cambria (and approaches like that) help provide some approximation of compatibility, whether that is sufficient or not really depends on the underlying semantics of the translation.

    6. mapping

      So the mapping property of convert is a 2-element array with the first element representing the transition from original to evolved and the second element representing evolved to original. I would've likely kept them explicit.
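
      Purely hypothetical TypeScript to illustrate the preference (none of these names come from the article):

      // As described: a positional pair, forward transform first, backward second
      type MappingPair = [
        (original: unknown) => unknown, // original -> evolved
        (evolved: unknown) => unknown   // evolved -> original
      ];

      // What "keeping them explicit" might look like instead
      interface Mapping {
        toEvolved: (original: unknown) => unknown;
        toOriginal: (evolved: unknown) => unknown;
      }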

    7. These lenses must be kept in a place where even old versions of the program can retrieve them, such as in a database, at a well-known URL, or else as part of the document itself.

      This is interesting as it means there's at least one unbreakable promise that must be provided.

    8. Translation logic is defined by composing bidirectional lenses, a kind of data transformation that can run both forward and backward.

      Lenses are an interesting concept that already exists as a concrete notion in software. The Racket language has a notion of a lens that it defines as:

      A lens is a value that composes a getter and a setter function to produce a bidirectional view into a data structure

      https://hyp.is/LrFtPsq6Ee21rSsGHCRKRw/docs.racket-lang.org/lens/lens-intro.html

      See also the mention of lenses later in the document: https://hyp.is/V6TPVsstEe2G1a_KcmF7ww/www.inkandswitch.com/cambria/

      They explicitly link the Edit Lenses paper by Hofmann, Pierce, and Wagner.
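
      A minimal TypeScript sketch of the getter/setter framing from the Racket docs (this is not Cambria's actual lens format, just the general idea of a bidirectional view):

      // A lens pairs a getter with a setter over a source type S and a focus A.
      interface Lens<S, A> {
        get: (source: S) => A;
        set: (source: S, value: A) => S;
      }

      // "Forward" reads the focused value; "backward" writes an updated value back.
      const titleLens: Lens<{ title: string; body: string }, string> = {
        get: (doc) => doc.title,
        set: (doc, title) => ({ ...doc, title }),
      };

      const doc = { title: "Cambria", body: "..." };
      titleLens.get(doc);                    // "Cambria"
      titleLens.set(doc, "Project Cambria"); // { title: "Project Cambria", body: "..." }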

    9. Kafka’s streams can require both backward and forward compatibility

      I didn't realize this!

    10. because Stripe’s system uses dates to order its migrations, it is limited to a single linear migration path

      I want to understand how Cambria resolves this too. The dates are signifiers for breaking API changes... It seems to me that you wouldn't want anything other than a single linear migration path. Though I guess the argument here is for local-first software, where you may end up hitting a mix of newer and older version endpoints. The really tricky thing here is that it's easy enough to reason about what's a backwards-breaking change, but technically any new field becomes a forward-breaking change. Resolving that seems like a challenge.

    11. Developers writing migration rules must implement translations by hand and write tests to ensure they are correct

      I don't (yet) see how Cambria avoids this. I suppose it entirely depends on what they mean by "implement translations by hand." From my perspective, having a DSL to accomplish the translation doesn't make it any less by hand, nor does it remove the need for test coverage of the translations. I suppose it could lessen it to some extent?

    1. LLMs aren’t some dumb fad, like crypto. Yes, crypto was a dumb fad. This is not that.

      Haha 🔥

    1. Another awesome article. I didn't really understand abilities before this aside from knowing they were some sort of algebraic effect implementation (which I also didn't understand).

      The exceptions example and the note about dynamic scoping helped me tie it together mentally. It's like exceptions, except it's two-way, not one-way: it bubbles up to some point, is handled, and then the calling code resumes.

    2. This type variable will match zero or more additional abilities used by the wrapped function, and will add then to the result. (I call that parameter rest, but the Unison documentation tends to use g.

      So g is just a generic catch-all for abilities.

    3. watch expressions

      I'm not sure what a watch expression is...

    4. So you can’t implement abilities using exceptions, but you can implement exceptions using abilities

      This is fascinating. Not disputing what they're saying here, but React's whole suspense system kind of works this way. When it suspends, it's actually just throwing a promise that, when resolved, lets React's scheduler know it's okay to continue. Internally it's probably doing some crazy rehydration of the internal state of the component, but that's effectively how I understand it to work.

      It's cool that Unison has this mechanism as a first-class feature.
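
      A toy TypeScript sketch of that "throw a promise, retry when it resolves" idea (this is my mental model, not React's actual implementation; all names are made up):

      // A tiny "resource" that throws its pending promise until the data arrives.
      function createResource<T>(promise: Promise<T>) {
        let status: "pending" | "done" = "pending";
        let result!: T;
        const suspender = promise.then((value) => {
          status = "done";
          result = value;
        });
        return {
          read(): T {
            if (status === "pending") throw suspender; // caught by the runner below
            return result;
          },
        };
      }

      // A scheduler-ish wrapper: catch the thrown promise, await it, then re-run.
      async function runWithSuspense<T>(render: () => T): Promise<T> {
        while (true) {
          try {
            return render();
          } catch (thrown) {
            if (thrown instanceof Promise) await thrown; // resume once resolved
            else throw thrown;
          }
        }
      }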

    5. If we want to evaluate it, we need to turn this into a function call. That’s where the exclamation point comes in

      So when you're defining a zero-arity function in Unison, prepending ! executes the function.

      E.g.

      func = '42

      !func -- reports 42

    6. You’ll see this zero-arity functions called both thunks and deferred functions in the Unison documentation (although I don’t like the latter: it’s a deferred expression, not function, but…)

      This is interesting... is that what a thunk typically is?

    7. When you call handle to associate an ability with a function, you can pass it some initial state. The handler then passes this state into the implementation functions along with any parameters passed by the client code. When the implementation function returns, it does so by calling handle again, which means it can pass in updated state.

      So... it gets some setup and teardown data?

    8. Dynamic Scoping

      Modern programming languages implement global and/or module scope and/or lexical scope. A name defined globally is available everywhere in the code. A name given module scope is only directly accessible within that module (and may be available outside if qualified with the module name). A name defined with lexical scope is available inside the current lexical block and (typically) the blocks it encloses. All three of these are statically defined: the meaning of a variable name can be determined at compilation time.

      In the past, languages such as Perl also offered dynamic scope. This looks a little like lexical scope, except the names defined in a block are available not just in that block but also in all the functions invoked by that block, and functions invoked below them, and so on. The scope is only determined at runtime: the name exists for the duration of the block that defines it, and it exists in all functions executed during that time. As you can imagine, this was both powerful and widely abused: it’s hard to know just what a name means when its definition depends on the execution flow. This is one reason we don’t often see dynamic scoping in current languages.

      Unison’s abilities are a form of dynamic scoping. However, they overcome many of the issues with previous kinds of dynamic scoping because they are fully type safe. You cannot accidentally use a name injected from a higher context, and you always know where every name comes from.

      This is really fascinating! In some ways this makes me think of React's context, which enables passing data deeply down a component tree.
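
      A minimal example of what I mean by React context (the component and context names are made up):

      import { createContext, useContext } from "react";

      // The "dynamically scoped" value: visible to everything rendered below the Provider.
      const ThemeContext = createContext("light");

      function DeeplyNestedButton() {
        // Reads whatever value the nearest enclosing Provider supplied at runtime.
        const theme = useContext(ThemeContext);
        return <button className={theme}>Save</button>;
      }

      function App() {
        return (
          <ThemeContext.Provider value="dark">
            {/* ...arbitrarily deep component tree... */}
            <DeeplyNestedButton />
          </ThemeContext.Provider>
        );
      }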

    1. The term originated as a whimsical irregular form of the verb think. It refers to the original use of thunks in ALGOL 60 compilers, which required special analysis (thought) to determine what type of routine to generate.

    1. Fantastic article on how Unison uses content addressability to ensure renaming things doesn't break the world.

    2. Imagine coming back to that code two years later and expecting it to just run. Why wouldn’t it? Nothing has changed.

      It's notable that while this is true for internal system concerns, it may not necessarily be true for external system concerns. It does avoid the whole left-pad situation, though.

    3. If you’ve come across Smalltalk, this is quite similar to it’s idea of an image

      I'm not familiar with the concept of an image in Smalltalk.

      Here's what wikipedia has to say:

      Many Smalltalk systems, however, do not differentiate between program data (objects) and code (classes). In fact, classes are objects. Thus, most Smalltalk systems store the entire program state (including both Class and non-Class objects) in an image file. The image can then be loaded by the Smalltalk virtual machine to restore a Smalltalk-like system to a prior state.

      https://hyp.is/JSSGksUWEe2szz-6MaOv-w/en.wikipedia.org/wiki/Smalltalk