37 Matching Annotations
  1. Last 7 days
    1. This preserves a clear separation between text and annotations, keeps the original text freely editable, and avoids circular feedback loops that could happen if annotations were themselves searchable

      Metaprogramming

    2. The point is to defer this process until it’s absolutely needed. It’s okay to end up with structured schemas when we need them to support computational features, but when they’re not necessary, text is a perfectly adequate representation for humans to interact with

      Lazy formalization. Intuitively this makes the most sense to me, though I do wonder if there are often cases where you want to reason with intention up front, and deferring formalization causes a burden.

    3. Notably, this process can’t avoid the need to teach the computer how to interpret meaning from freeform data

      Formalism considered a requirement

    4. There’s no way to ambiguously assign a recipe to either Tuesday or Wednesday, which would be natural to do in a paper notebook.

      Refinement types or arrays in place of scalars
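      A sketch of what I mean by arrays in place of scalars (the recipe data shape here is my own invention, not from the paper): if the day field holds a list instead of a single value, the Tuesday-or-Wednesday ambiguity becomes representable.

```python
# Hypothetical sketch: a "day" field that is a list, not a scalar, so a
# recipe can be ambiguously assigned to two days, as in a paper notebook.
recipes = [
    {"name": "chili",   "day": ["Tue"]},
    {"name": "lasagna", "day": ["Tue", "Wed"]},  # deliberately ambiguous
]

def planned_for(day, recipes):
    """All recipes whose day list includes the given day."""
    return [r["name"] for r in recipes if day in r["day"]]

print(planned_for("Wed", recipes))  # → ['lasagna']
```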

    5. In Potluck, we encourage people to write data in freeform text, and define searches to parse structure from the text.

      From a gradual enrichment standpoint I understand but from a data entry standpoint this seems like more work.
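      To make the quote concrete, here's a minimal sketch in the spirit of Potluck's approach, not its actual implementation (the note text and the pattern are made up): the freeform text stays the source of truth, and a search pattern parses structure out of it on demand.

```python
# A "search" that pulls structured (quantity, ingredient) pairs out of
# freeform notes, leaving the original text freely editable.
import re

note = """
Tue: chili - 2 lbs beans, 1 onion
Wed: lasagna - 1 box noodles
"""

pattern = re.compile(r"(\d+)\s+(?:lbs\s+|box\s+)?(\w+)")
ingredients = [(int(n), item) for n, item in pattern.findall(note)]
print(ingredients)  # → [(2, 'beans'), (1, 'onion'), (1, 'noodles')]
```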

    6. Formulas are written using JavaScript, in a live programming environment that resembles a spreadsheet

      Interesting the programming environment is separate from the note text.

  2. Sep 2022
    1. As this quote implies, formalisms are often difficult for people to use because they need to take many extra steps (and make additional decisions) to specify anything.

      I wonder whether, if the formalisms were more carefully baked into the affordances and interactions, they could feel like less work, since you'd be getting something for the formalism rather than doing extra work to embed it.

    2. Experiences with workflow systems, systems which automatically route documents and work through defined procedures, show that systems without the ability to handle exceptions to the formalized procedure cannot support the large number of cases when exceptional procedures are required (Ellis, Gibbs, Rein, 1991).

      Feels like formalizing organic processes will have this outcome. The goal should be to make an actual formal workflow or build tools that enable users rather than trying to mix them.

    3. Of course, training and supervision helped users learning the general techniques for hypermedia authoring, but they tended to avoid (or lose interest in) the more sophisticated formalisms

      What affordances were they given in exchange for the formalisms?

    4. Many times he struggled to create a title for his note; he often claimed that the most difficult aspect of this task was thinking of good titles

      Avoid requiring canonical naming

    5. Thus, hypertexts end up as hierarchical outlines with full pages of text connected by a single link to the next page of text

      Clearly this is just historical context. I'm wondering if we still have issues with hypertext authoring; there seems to be a stronger intuition now about how to separate pages of information. I'm curious whether we could, or should, be doing better.

    6. This level of formalization enables the system to apply knowledge-based reasoning techniques to support users by performing tasks such as automated diagnosis, configuration, or planning.

      What I'm getting so far is that formalization is what gives users the affordances for certain features. I'd imagine sophisticated data-mining techniques (text search, classification, etc.) can partially alleviate the need for it, but formalization is always going to be useful. It would be beneficial to opt into the formalism explicitly for the affordances, while maintaining bidirectional links to the non-formalized representations. In other words, you want the ability to create a formalized view.

    7. The authors propose, based on these experiences, that the cause of a number of unexpected difficulties in human-computer interaction lies in users’ unwillingness or inability to make structure, content, or procedures explicit

      I'm curious if this is because of unwillingness or difficulty.

    1. The scalability issue is somewhat related to the versionability issue. In the small, checked exceptions are very enticing. With a little example, you can show that you've actually checked that you caught the FileNotFoundException, and isn't that great? Well, that's fine when you're just calling one API. The trouble begins when you start building big systems where you're talking to four or five different subsystems. Each subsystem throws four to ten exceptions. Now, each time you walk up the ladder of aggregation, you have this exponential hierarchy below you of exceptions you have to deal with. You end up having to declare 40 exceptions that you might throw. And once you aggregate that with another subsystem you've got 80 exceptions in your throws clause. It just balloons out of control. In the large, checked exceptions become such an irritation that people completely circumvent the feature. They either say, "throws Exception," everywhere; or—and I can't tell you how many times I've seen this—they say, "try, da da da da da, catch curly curly." They think, "Oh I'll come back and deal with these empty catch clauses later," and then of course they never do. In those situations, checked exceptions have actually degraded the quality of the system in the large.

      This is another case where I think inference would solve most of the issue.

    2. Anders Hejlsberg: Let's start with versioning, because the issues are pretty easy to see there. Let's say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing caller of that method will almost certainly not handle that exception. Adding a new exception to a throws clause in a new version breaks client code. It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, because any implementation of it might have the methods that you want to add in the next version. So you've got to create a new interface instead. Similarly with exceptions, you would either have to create a whole new method called foo2 that throws more exceptions, or you would have to catch exception D in the new foo, and transform the D into an A, B, or C. Bill Venners: But aren't you breaking their code in that case anyway, even in a language without checked exceptions? If the new version of foo is going to throw a new exception that clients should think about handling, isn't their code broken just by the fact that they didn't expect that exception when they wrote the code? Anders Hejlsberg: No, because in a lot of cases, people don't care. They're not going to handle any of these exceptions. There's a bottom level exception handler around their message loop. That handler is just going to bring up a dialog that says what went wrong and continue. The programmers protect their code by writing try finally's everywhere, so they'll back out correctly if an exception occurs, but they're not actually interested in handling the exceptions. 
       The throws clause, at least the way it's implemented in Java, doesn't necessarily force you to handle the exceptions, but if you don't handle them, it forces you to acknowledge precisely which exceptions might pass through. It requires you to either catch declared exceptions or put them in your own throws clause. To work around this requirement, people do ridiculous things. For example, they decorate every method with, "throws Exception." That just completely defeats the feature, and you just made the programmer write more gobbledy gunk. That doesn't help anybody.

      The issue here seems to be transitivity. If method A calls B, which in turn calls C, then when C adds a new checked exception, B has to add it too, even if B is just proxying it and A is already doing its cleanup via try/finally. This seems like an inference problem to me: if method B could infer its checked exceptions automatically, this wouldn't be as big of an issue.

      You also probably want effect polymorphism for the exceptions so you can handle it for higher order functions.
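      As a sketch of the inference idea (in Python rather than Java, with `raises` and `inferred` as invented names): an intermediate proxy function could derive its exception set from its callees instead of re-declaring it by hand.

```python
# Hypothetical sketch: a proxy function's declared exception set is
# computed as the union of its callees' sets, so adding an exception
# deep in the call chain doesn't require hand-editing every throws list.

def raises(*excs):
    """Attach an explicitly declared exception set to a function."""
    def wrap(fn):
        fn.declared = set(excs)
        return fn
    return wrap

def inferred(*callees):
    """Declare a function's exception set as the union of its callees'."""
    def wrap(fn):
        fn.declared = set().union(*(c.declared for c in callees))
        return fn
    return wrap

@raises(FileNotFoundError, PermissionError)
def read_config():          # plays the role of method C
    ...

@inferred(read_config)      # B just proxies C; its set tracks C's automatically
def load_settings():
    return read_config()

print(sorted(e.__name__ for e in load_settings.declared))
# → ['FileNotFoundError', 'PermissionError']
```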

    1. Plus, if we can do this recursively, expanding inline items within inline items, we end up with something familiar: an outliner

      Outline view for a recursive hierarchical structure.

    2. Let’s say you’re in a workspace, listening to a podcast episode. Maybe you opened the podcast episode from a webpage you had open. As the episode plays, you realize that you would like to take some related notes. You open a new pane within your workspace, and take your notes. You can pause and play the podcast in the pane on the left, and you can take your notes in the pane on the right.

      This has me thinking about some sort of parametric workspace/view, where you could "pull out" the podcast episode and have a generic podcast listening/note view that changes which note you're looking at based on which podcast you're listening to.

    3. The concept involves having an item just like every other in your itemized system: it has a type, attributes, and references to other items.

      I wonder if there's a good way to evolve wiki systems into the OS of the future by adding more sophisticated views and the ability to do computation. It seems like a good way to get multiplayer built in from the start. Maybe add the ability to have content-addressed items as well.
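      A minimal sketch of the itemized model as I read it (the names and store layout are mine, not from the essay): every item has a type, attributes, and references, which is also enough to recover wiki-style backlinks.

```python
# Hypothetical sketch of "an item just like every other": type,
# attributes, and references to other items, held in a simple id -> item
# store. Backlinks fall out of scanning the references.
from dataclasses import dataclass, field

@dataclass
class Item:
    type: str
    attributes: dict = field(default_factory=dict)
    references: list = field(default_factory=list)  # ids of other items

store = {
    "ep1":   Item("podcast_episode", {"title": "On Formalism"}),
    "note1": Item("note", {"text": "lazy formalization"}, references=["ep1"]),
}

def backlinks(item_id, store):
    """Items that reference the given item (the bidirectional-link part)."""
    return [k for k, v in store.items() if item_id in v.references]

print(backlinks("ep1", store))  # → ['note1']
```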

    1. Now, not every programmer prefers that kind of development. Some programmers prefer to think of development as a process of designing, planning, making blueprints, and assembling parts on a workbench. There’s nothing wrong with that. Indeed, a multibillion-dollar international industry has been built upon it.

      I still think they should worry about it. Production systems need to evolve and contain data; reasoning about the systems completely statically from the source code with no regard to the existing data is a lot more complicated than it needs to be.

    2. In fact, there’s a style of programming, well known in Lisp and Smalltalk circles, in which you define a toplevel function with calls to other functions that don’t yet exist, and then define those functions as you go in the resulting breakloops. It’s a fast way to implement a procedure when you already know how it should work.
    3. Moreover, because the entire language and development system are available, unrestricted, in the repl, you can define the missing function bar, resume foo, and get a sensible result.

      This seems like one of the key points: the ability to edit computations while they're running. Type holes with resuming get you most of the way there, but you probably also want modification of live state. I wonder how you keep it from getting confusing. Something similar to FRP?

  3. Aug 2022
    1. At 3 am he realized he needed to change the process scheduler. He read enough code to find the right method, changed it, and continued with his project

      How do we enable this while preventing people from accidentally nuking their systems?

    2. Dan Ingalls implemented the first opaque overlapping windows to let users see more code and other objects on the screen

      This is interesting context. I wonder if that need has gone away with large screens, or if we're just not using them the way they were originally intended. My intuition is that auto-layout is generally better, but for smaller pieces of data, ad hoc overlaps seem fine.

    1. when a programming technology is "too simple", it loses generality, but to compensate it often accretes unguessable magic, which leads to yet more complexity.

      Is there a way to make the spectrum smoother, so that we can have immense simplicity that transitions not into "unguessable magic" but into comprehensible compositionality, without requiring naive users to grok it?

    2. when you start with something simple but special purpose, it inevitably accretes features that attempt to increase its generality, as users run into its limitations. But the result of this evolutionary process is usually a complicated mess compared to what could be achieved by designing for generality up-front, in a more holistic way.

      I think this is true, but it's often difficult to design for generality up front. A nice approach is making sure you're able to back into it and modify things after the fact.

      We should be trying to make more of our technology decisions "two-way doors".

    3. you resort to complex hacks to subvert its limitations or combine it with some other special purpose technologies

      A lot of harm has come from using hacks or smoke and mirrors to advance technology instead of "wearing the hair shirt" and actually carrying the technology along. This was once necessary, when our computing power and software were vastly limited, but now it holds us back more than it advances us.

    1. One problem is that a person can spend years reading analogies about black hole evaporation, quantum teleportation, and so on. And at the end of all that reading they typically have… not much genuine understanding to show for it. The analogies and heuristic reasoning simply don’t go far. They may be entertaining and produce some feeling of understanding.

      Limits to learning by example

    2. good tools for thought arise mostly as a byproduct of doing original work on serious problems

      In the context of use

  4. May 2022
    1. URLs are not democratic.

      I think being monolithic might be a bigger issue than being undemocratic

  5. Mar 2022
    1. A mixin class is a parent class that is inherited from - but not as a means of specialization. Typically, the mixin will export services to a child class, but no semantics will be implied about the child "being a kind of" the parent.

      Does this have to be done via inheritance?
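      Answering my own question with a sketch: the same exported service can come from a mixin parent class or from plain composition, with no is-a relationship implied in the second case (the classes here are invented examples).

```python
# Two ways to get the same exported service: inherit it from a mixin, or
# delegate to it via composition, avoiding any is-a relationship.
class JsonMixin:                      # classic mixin: inherited services
    def to_json(self):
        import json
        return json.dumps(self.__dict__)

class PointA(JsonMixin):              # mixin via inheritance
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointB:                         # same service via delegation
    def __init__(self, x, y):
        self.x, self.y = x, y
    def to_json(self):
        return JsonMixin.to_json(self)  # reuse the service, no is-a link

print(PointA(1, 2).to_json())  # → {"x": 1, "y": 2}
print(PointB(1, 2).to_json())  # → {"x": 1, "y": 2}
```

      Whether the second version still counts as "a mixin" is arguably just naming; the services transfer either way.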

    1. Unlike the web, the classic desktop computing paradigm makes a distinction between apps and files. The default way for a desktop app to save data is to save it to an external file.

      I think the most important part of the file-vs-web dichotomy is that desktop applications HAVE to store their entire state in files, while web APIs are designed custom for a presupposed use case. Often the public UX isn't even built upon the original APIs, and even when it is, the APIs are built up only as needed to facilitate the UX.

      Files, on the other hand, are the base-level state of the data and therefore a level playing field for all consumers. I believe a similar outcome would follow from exposing first-class database constructs or other forms of state management. Files are too anemic to be a general-purpose solution, in my opinion.

    2. Because integrations are part of the application code, the developer of the app is responsible for integrating that app with other tools

      There's some need for something like dependency injection here on the part of the user.

    3. One-off API integration doesn’t scale

      My interpretation of this is that there are no language-level semantics for REST APIs. In regular programming languages we have mechanisms to facilitate abstraction (interfaces, typeclasses, data structures). Those don't exist to the same capacity at the service level, so it's hard to swap implementations or have encapsulation at the REST level.
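      As a sketch of the missing construct, using a Python Protocol as a stand-in for a "service-level interface" (the Storage name and methods are hypothetical): callers depend on the interface, so implementations can be swapped, which one-off REST integrations don't allow.

```python
# Callers are written against an interface, not any particular service,
# so an S3- or REST-backed implementation could replace the in-memory one.
from typing import Protocol

class Storage(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStorage:
    """One interchangeable implementation satisfying the Protocol."""
    def __init__(self):
        self._data = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def backup(store: Storage) -> bytes:
    # Depends only on the Storage interface.
    store.put("note", b"hello")
    return store.get("note")

print(backup(InMemoryStorage()))  # → b'hello'
```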

    1. For example, the range of sin is only defined for values [-1, 1]

      For some reason I feel like just taking mod 2 and subtracting 1 would be useful, since you could at least recapture the cyclical nature of sin.
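      One way to read my own suggestion, as a sketch: wrap any value cyclically into [-1, 1) instead of rejecting it, preserving the periodic flavor of sin rather than raising a domain error.

```python
# Wrap x into [-1, 1) by shifting into [0, 2), taking mod 2, and
# shifting back, so out-of-range inputs wrap around instead of failing.
def wrap_unit(x):
    return ((x + 1) % 2) - 1

print(wrap_unit(1.5))    # → -0.5
print(wrap_unit(-1.25))  # → 0.75
```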

    2. It’s worth noting that runtime errors can be avoided with systems like a static type checker, pushing the problem into compile time. The result is that your program is often in an in-between state where it won’t even start until all errors have been resolved

      You can definitely do the typechecking at edit time, which is probably more important than runtime in this context.

    3. You can still easily get an unexpected result, but you’ll always get some result. In our experience, this approach promotes tinkering and feels much closer to sketching than to programming.

      I don't like this approach in general. I've often found constraints crucial to the problem-solving and thinking process. Having more ways to experiment and expand the solution space is wonderful, but it's important to have contexts where behavior is predictable, so there isn't dissonance between what is happening and what you think is happening.

    1. If you find a published item of a type your system has never seen before, your system loads the item with the item view that the publisher included

      I wonder how this is manifested. Is the view delivered coupled together with the data, along with "sub-views", such that a user can choose to modify the views being used at will? Or is the data its own structure, with a set of default views shipped separately that can be modified?

      Both options seem intriguing, but I think I lean a bit towards the second, since it may allow saner self-describing data and defaults.