920 Matching Annotations
  1. Aug 2023
    1. [500, "kilo", "bytes"]

      Would it be useful for "memory" (and other byte value fields) to support number value in bytes?

    2. [500, "milli", "seconds"]

      Would it be useful for "timeout" to support number value in milliseconds.

      It's a rather standard approach, may be easy to use.

    1. The operation of adding up all changes is stream integration.

      Akin to reduce(previousDB, tx) => currentDB

    2. ΔV = D(↑Q(DB)) = D(↑Q(I(T)))

      ^Q can be generalized as yet another T, denoted as ^T (^ hints that this "live" T may be applied on top of other Ts / maintains a "live" view).

      This gives the ability for a ^T to depend on other ^Ts.

      So, for each ^T in Ts, ^T(I(Ts)) = ^T(DB).


      Additionally, DB is a snapshot. Perhaps ^T(DB) is better denoted as ^T(Ts).

      Thus the relation can be written as

      ΔV = D(^T(Ts))

      Additionally, D is akin to Δ, denoting it as such we end up with

      ΔV = Δ^T(Ts), for each ^T in Ts.

      And since Ts are versioned, ^T(TsN) implicitly has access to ^T(TsN-1).

      I.e., TsN contains ^T(TsN-1), for each ^T.

      Which allows ^T to be incrementally computed over its previous value.

      ^T(^T(TsN-1), TN)

      ^T has a function signature akin to that of reduce, i.e., ^T(accumulator, sequence element) - see the sketch below.
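
      A quick sketch of that reduce shape in Clojure (the tx data shape is hypothetical): a "live" ^T maintained as a reduce over the Ts, where version N extends the value at N-1 instead of recomputing from scratch.

      ```clojure
      ;; Hypothetical tx shape: {:added [[e a v] ...]}.
      ;; The live transform here is a running count of added facts.
      (defn apply-tx
        "Signature mirrors reduce: (accumulator, sequence element)."
        [view tx]
        (update view :fact-count + (count (:added tx))))

      (def txs [{:added [[:e1 :name "a"]]}
                {:added [[:e2 :name "b"] [:e2 :age 7]]}])

      ;; Full recomputation over all Ts...
      (reduce apply-tx {:fact-count 0} txs)
      ;=> {:fact-count 3}

      ;; ...equals incrementally extending ^T(TsN-1) with TN:
      (apply-tx (reduce apply-tx {:fact-count 0} (butlast txs)) (last txs))
      ;=> {:fact-count 3}
      ```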

    1. However, while developing a system, classes will be defined in various places, and it makes sense to be able to see relevant (applicable) methods adjacent to these classes.

      Classes / ontologies are not a core feature of the language.

      That's why we have RDF and OWL - they're separate.

      Classes can be built on top of pure functions and data - these two are the core, nothing else.

      Perhaps even functions can be eliminated from the core. A function is a template of some computation; it can be baked into the program, since names are a user-level feature.

      So we end up with data and ops on it as the core, plus some flow-control primitives (perhaps or and and are enough). The rest can be built on top. As to what data to provide, multisets seem to be the most universal / least restrictive data structure, out of which more specialized data structures can be derived. And with the advent of SSDs we are no longer limited by performance to sequential reads, so perhaps it wouldn't be all too crazy to switch from lists to multisets as the basic structural block of programs.

    2. There will also be means of associating a name with the generic function.

      A naming system is not a core part of a language.

      A naming system serves two purposes:

      1. Create structure of a program

      2. Give a user-friendly interface

      You don't need 2. in the core of your language. How data (your program) is displayed should be up to the end user (the programmer). If they want to see it as text formatted as a LISP - their choice; as text in a Java-like style - ok; Haskell-like - sure; visual - no prob.

      Having the language as data allows just that. It helps us get rid of the accidental complexity of managing a syntax-heavy bag of text files (and of having compilers). E.g., how the Unison lang has its AST as a data structure and a text-based interface to tweak it.

      Having code as data would also make run-time tweaking easier, bringing us closer to the promise of LISP.

      And also all the rest of the neat features on top of content-addressing of code that are now waaay easier to implement, such as incremental compilation, distributed compute, and caching.

      Have names be a user-level feature - personal dictionaries. Some will call the reducing function reduce, some fold, some foldr; some will represent it as a triangle (for visual code management). A sketch of such dictionaries below.
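
      A toy of what that could look like (sha-256-hex and the example form are stand-ins, not any real system's API): definitions are addressed by content hash, and names live in per-user dictionaries on top.

      ```clojure
      (import 'java.security.MessageDigest)

      (defn sha-256-hex [s]
        (let [d (.digest (MessageDigest/getInstance "SHA-256")
                         (.getBytes s "UTF-8"))]
          (apply str (map #(format "%02x" %) d))))

      ;; One definition, addressed by its content:
      (def code-hash (sha-256-hex (pr-str '(fn step [acc x] (conj acc x)))))

      ;; Two users, two personal dictionaries, the same definition:
      (def alice {"reduce" code-hash})
      (def bob   {"foldl"  code-hash})
      ```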

    3. more elaborate object-oriented support

      It is in no part a core feature for a language.

      The mainstream OOP is complex and has many responsibilities.

      OOP as envisioned by its creator is the actor model - state management (state + a managing actor) paired with linking actors together - a complex approach. It can be broken down into its primitives, and OOP can be constructed out of them, if so desired, but not at the core level.

      A good reference of language decomplection is Clojure.

    4. In the second layer I include multiple values

      Treating single values as a special case of multiple values is generally more performant.

    5. the elaborate IO functions

      IO is not the core of the language. It's more of a utility layer that allows the language to speak to the outside world.

    6. macros

      Would be nice to have them at run-time.

    7. and very basic object-oriented support.

      OOP is an abstraction that is useful for a narrow set of use cases, adding accidental complexity to the rest. Personally, I'd leave it out of the core.

    8. I believe nothing in the kernel need be dynamically redefinable.

      This moves us away from the value of LISP as a meta-language that can change itself. We have macros at compile time, not at run-time. Having them at run-time gives us the power we were originally promised by the LISP philosophy. Having no run-time dynamism would not allow for this feature.

      I.e., having codebase as persistent data structure, tweakable at run-time sounds powerful.

    9. Our environments should not discriminate against non-Lisp programmers the way existing environments do. Lisp is not the center of the world.

      A possible approach to that is having LISP hosted, providing a simpler interface on top of established but less simple and expressive environments. E.g., the Clojure way.

    10. All interested parties must step forward for the longer-term effort.

      This is an effort to battle the LISP Curse. Would be a great movement, as it's one of LISP's weak spots (another is adoption, which Clojure tries to solve).

    11. And soon they will have incremental compilation and loading.

      That would be great. And it is. Unison Lang gives us this power. Hope for wider adoption.

      Having content-addressable codebase simplifies it a ton.

      Perhaps an AST can be built on top of IPVM, akin to Unison Lang, but for any of the 40+ WASM langs out there.

    12. and using the same basic data representations

      Having common data structures would be great.

      Atm each language implements the same data structures on its own.

      Would be nice to have them protocol-based, implementation-agnostic.

      Likely, IPLD gives us the common data structures.

    13. The very best Lisp foreign functionality is simply a joke when faced with the above reality.

      Been true. Fixed in Clojure. Clojure is top-tier in integration with host platforms.

    14. The real problem has been that almost no progress in Lisp environments has been made in the last 10 years.

      That's a good point. Again, not something inherent in LISPs. Possibly due to lack of adoption and the LISP Curse.

    15. Seventh, environments are not multi-user when almost all interesting software is now written in groups.

      Not an inherent trait of LISP envs. And LISPs do have multi-user envs, e.g., a REPL to which you can connect multiple clients.

    16. Sixth, using the environment is difficult. There are too many things to know. It’s just too hard to manage the mechanics.

      Environments for LISPs are simpler due to simplicity of the language they're for. For a more complex language they'd be only more complex and hence more difficult to use.

    17. what is fully defined and what is partially defined

      Not sure I get that

    18. Fifth, information is not brought to bear at the right times.

      How is that a trait of non-LISP environments?

      More than that, it seems LISPs give you greater introspection - e.g., because of the REPL, and because of macros that can sprinkle code with introspection in the dev environment and be rid of it in the prod env.

    19. Fourth, they do not address the software lifecycle in any extensive way.

      How is that an inherent property of LISP environments?

      Surely they can be extended with all the mentioned properties.

      It does require effort to implement, and with sufficient resources behind an environment it would be wise to invest there.

      E.g., Emacs has great docs. Clojure ecosystem has great docs and is a joy to work in.

      I'd say LISPs have greater potential user-friendliness due to simplicity of interface. Simple interface + good docs is more user-friendly than complex interface + great docs.

      And you don't need that much docs in the first place for simple interface.

      As well, a simple, well-designed interface can serve as documentation itself, because you can grok it instead of going through docs. You mostly need docs for the mindset.

    20. Third, they are not multi-lingual even when foreign interfaces are available.

      This is great. It's a desirable trait. But I don't see how that is a unique value available only to non-LISP environments.

    21. Files are used to keep persistent data -- how 1960s.

      What's wrong with that? Files are universally accessible on a machine (and with IPFS - across machines); it seems a good design for the times. Any program can interoperate through files - a common interface.

      Files are 'caches' of computations made.

      Sure, it would be nice to capture the computations behind them as well, although that was not practical back then and is not that much needed.

      But nowadays IPVM does just that at a globe scale. And, thankfully, we also have data structures as the means to communicate, not custom text.

      I don't see what's wrong with that approach. Taking it further (as IPVM does) gives a next level of simplicity and interoperability, along with immutability/persistence - a game changer.

    22. In fact, I believe no currently available Lisp environment has any serious amount of integration.

      Well, that's a shame. Composability is a valuable trait of computer programs, user interfaces included. The fact that they're not composable may mean that the problem domain is not well known, so it wasn't clear what the components were. Perhaps with time it'll become clearer. This, interestingly, is a not-the-right-thing approach: UI got shipped to satisfy the need without covering all the cases (integration of UIs). A lean-startup approach. E.g., Emacs started as non-composable and is now turning composable.

    23. The virus lives while the complex organism is stillborn. Lisp must adapt, not the other way around.

      What's "right" is context-dependent. For programmers the right thing will be a simple and performant and mainstream etc. language.

      LISP did not check all the boxes back then. Clojure now tries to get closer to checking what a programmer may need in production, and has a broader success.

      Clojure's design put effort into making it a simple-interface thing, and it's excellent at that. It also put effort into being easy to adopt. So it's a well-designed virus. The right thing. Virality is one of the traits of the right thing in the context of production programming.

    24. You know, you cannot write production code as bad as this in C.

      Performance is not the only metric of "goodness" of code. Simplicity is another.

    25. The following examples of badly performing Lisp programs were all written by competent Lisp programmers while writing real applications that were intended for deployment

      Often performance is not the highest value for a business. It is especially so when we have ever more powerful hardware.

      LISP allows for simplicity of interface. You can iterate on making the implementation performant later on, if you need to.

    26. The lesson to be learned from this is that it is often undesirable to go for the right thing first.

      Great - don't go 100% in, especially since those last 20% take 80% of the time. But please do have a good interface design in those 50%. It is here to stay.

    27. The right thing is frequently a monolithic piece of software

      Unix is a mono kernel. C is a fixed language, whereas LISP can be extended with macros.

      Composability is a trait of good design. I'd expect the Right Thing approach to produce composable products, and the Worse Is Better approach to produce complected ones.

    28. there are many more compiler experts who want to make C compilers better than want to make Lisp compilers better

      It is more fun to play with simple things. They're more rewarding.

      C is simpler in implementation than a LISP; it's more fun to play with its compiler. LISP is simpler in interface than C; it's more fun to play with it as a language (hence the LISP Curse).

      I wonder why we don't have a C Curse at the level of compilers, though. Or do we?

    29. and third will be improved to a point that is almost the right thing

      Huge doubts there. The original interface will stay there in one way or another. And it is complex. So users will pay its cost from then on.

      E.g., Java is OOP; it introduced a functional style, but OOP stays. Some folks would like to switch to functional, but there is pressure from legacy codebases and the legacy mindset around them to carry on in the same fashion.

      E.g., C is still C. C++ and the other C* are not far off in the simplicity of their interfaces.

      Unix is still a text-based-chatter mono kernel.

      It seems hard, if not impossible, to change the core design, so in Worse is Better it stays Worse.

    30. namely, implementation simplicity was more important than interface simplicity.

      Can you go far with such a design before drowning in accidental complexity from complex abstractions?

      Abstracting away is the bread and butter of programming. To do it efficiently you need simple abstractions - a simple interface. For this task interface simplicity is way more valuable than implementation simplicity. E.g., you may have a functional interface on top of an imperative implementation (a sketch below).
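
      Clojure itself is an example: into and friends expose a pure interface while using mutable transients inside for speed. A minimal sketch of the same move:

      ```clojure
      (defn pure-concat
        "Pure from the caller's perspective; imperative (a transient) inside."
        [xs ys]
        (persistent! (reduce conj! (transient (vec xs)) ys)))

      (pure-concat [1 2] [3 4]) ;=> [1 2 3 4]
      ```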

    31. Early Unix and C are examples of the use of this school of design

      C is complex, compared to LISPs.

      Unix has a mono kernel and a ton of C.

      Are they the crux of simplicity, which is the highest value of Worse is Better?

  2. Jun 2023
    1. and is often intentionally limited further to reduce instability introduced by a fluctuating tickrate

      Although an alternative approach of processing inputs as soon as they arrive is used as well, and may provide a better experience.

      E.g., as seen in CS2.

    1. The second solution for persistent replication has to do with swapping pubsub for IPNS.

      This is a fine solution for discovering the latest known version of a machine's log. However, using it as the primary way would mean machines need to store the log in IPFS and publish it to IPNS, while others need to resolve IPNS -> IPFS (and perhaps get notified in some way that there's a change) and fetch from IPFS. As a solution for syncing state between two machines it will be pretty costly in time and computation; as a solution for persisting a local log for others' occasional offline access it seems fine.

    2. since pubsub messages are not persistent

      They can be made persistent by storing a log of messages to IPFS though, as OrbitDB does.

      The fact that they're not persistent by default may be a plus, as persistence is cost-heavy and can be done when required rather than always.

  3. May 2023
    1. The rule QuadData, used in INSERT DATA and DELETE DATA, must not allow variables in the quad patterns.

      .

    2. because there is no match of bindings and so no solutions are eliminated.

      Didn't get it; I thought MINUS acts as set difference.

    1. Furthermore, there are concerns regarding relevance and trustworthiness of results, given that sources are selected dynamically.

      Perhaps immutability can be provided by having the domain name of a URL point to an immutable value, say content-addressable RDF or a content-addressable log of txes.

    1. This leads to an eventually consistent semantics for queries.

      Hmm. The query store may be eventually consistent with the command store. However, the issued query returned a stale result, and there is no 'eventually getting the correct result' for it.

      So we may well end up in an inconsistent state, where there is a command that transacts stuff based on a stale query.

      Capturing the query dependencies of a command in the command itself would allow re-evaluating the queries against a consistent state (a sketch below).
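
      A sketch of what capturing those dependencies could look like (the keys and the query runner are hypothetical, not any particular store's API):

      ```clojure
      (def cmd
        {:op         :reserve-seat
         :args       {:seat "12A"}
         ;; queries the command was based on, with the results it saw:
         :query-deps [{:query [:free? "12A"] :expected true}]})

      (defn still-valid?
        "Re-runs each query the command depended on against the current
        state; the command should be applied only if all results match."
        [run-query cmd]
        (every? (fn [{:keys [query expected]}]
                  (= expected (run-query query)))
                (:query-deps cmd)))

      ;; e.g., with a toy query runner over a set of free seats:
      (still-valid? (fn [[_ seat]] (contains? #{"12A" "14C"} seat)) cmd)
      ;=> true
      ```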

    2. Where CRUD offers four operations to apply modifications, event-sourced systems are restrained to only one operation: append-only.

      CRUD operations can be captured as events in an event log, though (a sketch below).
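
      E.g., a toy event-log shape (hypothetical) where the CRUD state is just a reduction over the log:

      ```clojure
      (def log
        [{:op :create :id 1 :doc {:name "a"}}
         {:op :update :id 1 :doc {:name "b"}}
         {:op :delete :id 1}])

      (defn apply-event [db {:keys [op id doc]}]
        (case op
          :create (assoc db id doc)
          :update (update db id merge doc)
          :delete (dissoc db id)))

      (reduce apply-event {} log) ;=> {}  (state is derived; history stays)
      ```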

    1. Peerings are bidirectional

      That's a strange restriction. A full-message connection has its cost, and I'd imagine peers would like to minimize that by being able to set it unidirectionally and per-topic.

    1. Run your own instance of *-star signalling service. The default ones are under high load and should be used only for tests and development.

      .

    1. QuadData denotes triples to be removed and is as described in INSERT DATA, with the difference that in a DELETE DATA operation neither variables nor blank nodes are allowed

      .

    1. path.friends.filter

      Can this be captured as a SPARQL FILTER instead? Then we can keep building the query and delegate filtering to the SPARQL engine.

    1. Array slots which are null are not required to have a particular value; any “masked” memory can have any value and need not be zeroed, though implementations frequently choose to zero memory for null values.

      May not zeroing result in unintended access to data previously stored in that physical memory?

      E.g., if I intentionally create fully masked arrays and read the previous physical layout.

  4. arrow.apache.org
    1. The Apache Arrow format allows computational routines and execution engines to maximize their efficiency when scanning and iterating large chunks of data.

      This will be damn handy for ECS engines. The memory layout, shown in the associated figure, organizes data in the way such engines query for it.

  5. Apr 2023
    1. As for the benefit of breakfast for weight loss - it, too, has not yet been established.

      Time-restricted feeding causes fat mass reduction, according to Dr. Satchin Panda.

      It may be implemented as skipping breakfast, but it need not necessarily be breakfast.

      However, there are benefits found in adopting early time-restricted feeding (skipping supper), as presented in this vid.

    1. In the study by Burke DG [6], scientists examined the 24-hour excretion of creatine and its breakdown product creatinine from the body and concluded that no more than 50 mg/kg of the supplement is absorbed per day; the rest is excreted in urine.

      Not sure that's the takeaway of the study

    2. This means there is no point in taking more than 5-7 g of creatine per day.

      The loading phase does increase muscle creatine levels drastically, and it relies on ~20 g/day. So there are effects from high dosages.

    3. right after a workout, not before it

      That may not be correct.

      According to this figure, metabolic changes that increase absorption begin to appear during training. If we are to take advantage of them, perhaps we'd like to have peak creatine levels at that point. However, it takes about 45 minutes for blood creatine levels to peak. Thus, it may be beneficial to ingest creatine 45 minutes prior to training.

    1. Trusty URIs are URLs just for convenience, so you can use good old HTTP to get them (in most cases), but the framework described above gives you several options that are highly unlikely to all fail at once.

      To me Trusty URIs seem to complect two concepts, which makes them bad at both.

      These concepts are: mutable name + immutable content.

      If you care about content-based addressing - then a mutable name as part of it is of no value.

      If you care about resolution of immutable content from a name - you're locked into the domain name service. Whereas there may be many parties online that have the content you want, and it could have been resolved from them.


      To me it seems IPNS got it right, decoupling the two, allowing mutable names on top of immutable content.

      So you can resolve name -> content-based name, from peers.

      So you can refer by content-based name.

      So you can deref content-based name, from peers.

    1. it prevents the right to be forgotten

      It seems that by maintaining a 'blacklist' of removed entries per DMC we can both preserve the log of ops intact and remove the information of the removed log entry.

    2. Implementations MAY also maintain a set of garbage collected block references.

      I'd imagine it's a MUST. Otherwise a replica may receive content it just removed and happily keep on storing it. Keeping such a 'blacklist' is an elegant solution. Such lists are ever-growing, however, and perhaps could be trimmed in some way - e.g., after everybody who's been holding the content has signed that it's been removed. Although even then nothing stops somebody from re-uploading the newly deleted content.

      I guess another solution would be not to delete somebody's content but to depersonalize that somebody. The content stays intact, signed by that somebody, while any personal information of theirs gets removed, leaving only their public key. That would require personal information to be stored in one mutable place that is not content-addressed.

    3. However, this creates permanent links to past operations that can no longer be forgotten. This prevents the right to be forgotten and does not seem like a viable solution for DMC (Section 6.2.1).

      That is a valid concern.

      Perhaps we could have them both - partial order + removal of entities.

      I guess it could be achieved by having an op log with a 'remove entity by its hash' op. Logs would not preserve data for such entities. However, in order not to re-hash log entries from the removed entry onward, logs could keep the hash of the removed entry (but not its data).

      Maybe that's the way removal's done in orbitdb.

    4. Public-key cryptographic signatures are used to ensure that operations are issued by authorized entitites.

      Speaking about mutable graphs, signing ops seems to be a superior technique compared to signing entities, as when signing ops, triples get signed, giving finer granularity. So there may exist entities that are combined out of triples signed by different authorities. Finer profiling.

    5. Operation to add an additional authorized key to a container (dmc:AddKey see Section 4.6)

      Reifying key addition to be yet another op seems like a good idea. More generally, it's about managing the container's meta in the container's state. One great benefit from it is that meta-ops and ops converge.

    6. mutating operations and key revocations do not commute.

      Perhaps having a deterministic order for operations would solve that problem. Then if key revocation happens before the op with that key, the op is dropped; if after, it is preserved.

      Akin to how ops are ordered in ipfs-log.

      That requires key revocation to be reified - to be a plain op log entry.

    1. For example, if you want to find all entities that have component types A and B, you can find all the archetypes with those component types, which is more performant than scanning through all individual entities.

      Archetypes seem to be a kind of index. However, for the example given, that index does not get used to its full purpose. It seems a better-fitting solution would be to keep an index of entities per set of components that your code actually filters by - e.g., such sets would come from Systems (a sketch below).
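
      A sketch of such an index (all data shapes hypothetical): entity ids keyed by the exact component sets the Systems query for.

      ```clojure
      (def entities
        {1 {:pos [0 0] :vel [1 1]}
         2 {:pos [5 5]}
         3 {:pos [2 2] :vel [0 1]}})

      (defn index-for
        "Builds {component-set #{entity-ids}} for the given query sets."
        [entities query-sets]
        (into {}
              (for [qs query-sets]
                [qs (set (keep (fn [[id components]]
                                 (when (every? #(contains? components %) qs)
                                   id))
                               entities))])))

      (index-for entities #{#{:pos :vel}})
      ;=> {#{:pos :vel} #{1 3}}
      ```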

    1. Responsibilities as a fiscal host.

      We may remove it, as this reference was meant for internal use

    2. Benefits from using OpenCollective (OC) with CarbonUnits (CU):

      We can rephrase it as: "Benefits from using Open Collective with Carbon Units:"

    3. OpenCollective

      To be more professional we can write it the correct way: Open Collective

    4. OC

      We can replace it with the full name: Open Collective

    5. This also allows for currently censored teammates (e.g., in Russia) to get paid.

      This can be removed.

    6. Bring more services to Web3Workshop

      Can be rephrased to: "Provide more services to Web3Workshop participants"

    7. (perhaps) Close integration with IPFS project-raising ecosystem

      Can be rephrased to: "Close integration with ProtocolLabs project-raising ecosystem"

    8. (perhaps) Attract the first customer that wants a Web3 application

      Can be rephrased to: "Attract customers that want Web3 applications"

  6. Mar 2023
    1. In order to allow Comunica to produce more efficient query plans, you can optionally expose a countQuads method that has the same signature as match, but returns a number or Promise<number> that represents (an estimate of) the number of quads that would match the given quad pattern.

      The number of quads in a source may be ever-growing.

      Then we couldn't count them. Is that a problem for Comunica, or will it handle infinite streams fine?

      I.e., the stream returned by match() keeps accreting new values (e.g., as they are being produced by somebody).

    2. If Comunica does not detect a countQuads method, it will fallback to a sub-optimal counting mechanism where match will be called again to manually count the number of matches.

      Can't Comunica count quads as they arrive through the stream returned by match()?

    1. WITH <http://example/bookStore> DELETE { ?book ?p ?v } WHERE { ?book dc:date ?date ; dc:type dcmitype:PhysicalObject . FILTER ( ?date < "2000-01-01T00:00:00-02:00"^^xsd:dateTime ) ?book ?p ?v }

      Can the DELETE clause be right below the above INSERT clause? So we don't need to repeat the same WHERE twice.

      Also would be nice to have transactional guarantees on the whole query.

    1. Fiscal hosting enables Collectives to transact financially without needing to legally incorporate.

      .

    2. The fiscal host is responsible for taxes, accounting, compliance, financial admin, and paying expenses approved by the Collective’s core contributors (admins).

      .

    1. An improvement for helia would be to switch this around and have the monorepo push changes out to the split-out repos, the reason being the sync job runs on a timer and GitHub disables the timer if no changes are observed for a month or so, which means a maintainer has to manually go through and re-enable the timer for every split-out repo periodically - see ipfs-examples/js-ipfs-examples#44

      The ideal design, it seems to me, is the monorepo pulling from its split-out repos. It would allow for granular codebase management in the repos, yet you keep discoverability via the monorepo. Also, it would not complect the repos with knowledge of the monorepo.

  7. Feb 2023
    1. What about companies for whom core-js helped and helps to make big money? It's almost all big companies. Let's rephrase this old tweet: Company: "We'd like to use SQL Server Enterprise" MS: "That'll be a quarter million dollars + $20K/month" Company: "Ok!" ... Company: "We'd like to use core-js" core-js: "Ok! npm i core-js" Company: "Cool" core-js: "Would you like to help contribute financially?" Company: "lol no"

      Corps optimise for money. Giving away money for nothing in return goes against their nature.

    1. Turnover (within the last 12 month period) of the sponsoring LLC or IE should exceed 50,000 GEL for each foreigner (director or employee) in the business

      .

  8. Dec 2022
    1. "scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }

      An alternative domain model could be: {"a": "/a.js" "b": "/b.js" "path1": {"a": "/a1.js" {"path11": {"b": "/b11.js"}}}} This way path to a name is a 'scope'/'namespace'. Also we're spared the need of "/" in scope's names. It does look harder to parse visually than a flat model..

    2. "scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }

      An alternative domain model could be: {:imports {} :scopes {:imports {} :scopes {}}} This is more verbose, but more uniform.

  9. Nov 2022
    1. Attributes of type :db.type/bytes cannot be found by value in queries (see bytes limitations).

      Bytes could be treated as an entity and resolved by a CID.

    2. Datomic does not know that the variable ?reference is guaranteed to refer to a reference attribute, and will not perform entity identifier resolution for ?country

      Datomic may learn at query execution time that it's a ref attribute; I'd expect it to be able to resolve a ref by its id in that case.

  10. Oct 2022
    1. In Georgia, and many other countries, income which is earned through actual work performed within the country, whether the income comes from a foreign source or not, is considered to be Georgian source income. So, if you have a client who pays you from abroad, direct to a foreign bank account, even if the money never arrives in Georgia, it is still Georgian source income as the work was performed here.

      .

  11. Sep 2022
    1. The form state support is about just that: form state. It basically keeps a "pristine" copy of the data of one or more entities and allows you to track if (and how) a particular field has been interacted with.

      Having modifications done to the actual entity in the db results in reactive behaviour.

      E.g., you have an app called "Test App", and you display "Test App" in the navbar in place of a logo. You also have a form that lets you modify it. When you modify "Test App" to "My Supa Test App" it gets updated in the navbar as you type (and across all other places). That may not be what the user wants, as likely they want to set a new value. This is akin to the validation problem, where we don't want to show "field's incorrect" while the user hasn't touched the field or is still touching it.

      Perhaps keeping the "dirty" form state separate and making it the subject of modification would be a solution, leaving the actually used value in the DB pristine. I.e., swap "dirty" and "pristine" (a sketch below).

    1. #com.wsscode.pathom3.connect.operation.Resolver

      May also contain inferred-input (it seems to be Pathom's effort to guess the input by analyzing fn param destructuring).

    2. #com.wsscode.pathom3.connect.operation.Resolver

      Also may contain docstring

    3. requires

      Lists only the required attrs, optionals (via pco/?) will be listed under optionals.

  12. Aug 2022
    1. DISPLAY=$DISPLAY

      Can't we pass it via --share as we did with others?

    2. XAUTHORITY=$XAUTHORITY

      Didn't we already pass the XAUTHORITY env via --share?

    3. $PROFILE/lib=/lib64

      Couldn't we add a symlink from /lib64 pointing to /lib of this environment?

    4. $PROFILE/lib=/lib

      Why not /lib instead? It seems to do exactly the same, as there is no difference whether /lib comes from a stand-alone built environment or this one; the manifest is the same.

    5. I need to know where in the store the profile is created for my environment

      Why?

    1. By popular demand, Bevy now supports States. These are logical "app states" that allow you to enable/disable systems according to the state your app is in.

      Could it be solved in a more ECS fashion by having a system that manages other systems?

      That would require having systems as entities, however.

    2. Systems can now have inputs and outputs. This opens up a variety of interesting behaviors, such as system error handling:

      Hmm, I'd have thought an ECS-style solution would be to have errors as entities and error handling as a system.

      But perhaps then we wouldn't have static analysis helping us by validating that all errors are being handled.

    3. It made sense to break up these concepts. In Bevy 0.4, Query filters are separate from Query components. The query above looks like this:

      That's sick! It's always good to see things being decomplected. Looks similar to Datomic's rules and pull pattern (rules being the second query expression here in Bevy, and the pull pattern being the first).

    1. in order

      How would order be guaranteed if events can come to both streams at any time? E.g., at time1: Stream1 [], Stream2 [2].

      It gets processed, outputting 2.

      And at time2: Stream1 [1], Stream2 [].

      And 1 is output, resulting in [2 1] downstream.

      It's nitpicking the terminology, but perhaps 'order' is not the right term, maybe 'precedence'?

    2. Slice

      Would be interesting to have a version with predicates. Idx-based slicing seems useful in a limited number of cases.

    3. Skip

      Why not 'drop'?

    4. rx/from-atom

      Why not rx/from?

  13. Jul 2022
    1. (rx/subs #(rx/push! output-sb %))

      Won't this keep potoks alive forever? If so, it seems an app's potok pool would grow forever, eventually dragging performance to a halt.

    2. fn

      Why is it not defined in the let, like the others?

    3. output-sb

      Why pass it as a prop and not get it from the scope?

    4. (rx/filter (fn [e] (not ^boolean (.has ^js failset e))))

      Why does this filter not appear in the sync state transformation loop?

    5. swap! state*

      If subs are run async, then swap!s may execute out of order, mightn't they?

    6. It also can be interpreted like a reducer.

      Would it be better to have a [state event] signature, to comply with the reduce function?

    1. It controls the default parameters used by mke2fs(8) when it is creating ext2, ext3, or ext4 file systems.

      .

    1. These requirements are satisfiable throughthe use of a cryptographic primitive called homomorphic hashing.

      Sounds interesting! I wonder if it puts the approach of accumulative hashing of sets to use (a toy sketch below).
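
      A toy of the idea (XOR-based, not cryptographically sound - real homomorphic hashes use stronger constructions): a set digest that updates in O(1) per inserted or removed element instead of re-scanning the whole set.

      ```clojure
      (defn elem-hash [x] (long (hash x))) ; stand-in for a real hash fn

      (defn add-elem    [digest x] (bit-xor digest (elem-hash x)))
      (defn remove-elem [digest x] (bit-xor digest (elem-hash x)))

      (def d0 (reduce add-elem 0 #{:a :b :c}))

      ;; Updating the digest touches only the changed elements:
      (= (remove-elem (add-elem d0 :d) :a)
         (reduce add-elem 0 #{:b :c :d}))
      ;=> true
      ```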

    2. However, the main drawback to directly signing thedatabase in this manner is that for each update published, the distributor must iterate over theentire database to produce the signature.

      Persistent data structures address this drawback by representing the database as a tree and re-hashing only the part of the tree that changed. E.g., how it's done in Clojure (and some other languages, which I fail to recall, that borrowed it).

      As mentioned in Approach 3, via Merkle tries.

    3. To illustrate the wastefulness of this approach, consider the case in which the first row of thedatabase holds an integer counter, and each update simply increments this integer by 1. If there arem such updates, then the batch update and offline database validation operations could involvem signature validations, even though the complete sequence of transformations trivially updates asingle row of the database.

      Re-validating the whole database seems to be of no use, since validation is already performed before an update is inserted into the db.

      I can see how having to validate every update can be wasteful if what's wanted is the final value; then the most efficient way seems to be to have that value signed and validated, as would be the case in Approach 2. But having signed updates may be desirable for some cases (e.g., for collaborative editing, as shown useful in pijul); perhaps a composed solution can be used to get the pros of the two approaches.

    1. The inclusion of a name and email address can be spoofed in a way that a key signature cannot.

      Authorship can be spoofed; a solution is to sign.

  14. Jun 2022
    1. theme

      Excalidraw allows disabling the UI theme toggle.

    2. This prop controls Excalidraw's theme. When supplied, the value takes precedence over intialData.appState.theme, the theme will be fully controlled by the host app, and users won't be able to toggle it from within the app. You can use THEME to specify the theme.

      Excalidraw can be supplied with a 'theme' argument.

    3. Excalidraw is using CSS variables to style certain components.

      Excalidraw allows styling with CSS.

    1. if an individual is not a tax resident of any state, including the Republic of Belarus, they are recognized as a tax resident of the Republic of Belarus if, in the calendar year for which tax residency is determined, they hold citizenship of the Republic of Belarus or a permit for permanent residence in the Republic of Belarus (residence permit)

      Belarus sets "default tax residency" for its citizens and people allowed to stay.

    1. One of the key concepts of tax legislation is the PLACE OF CARRYING OUT ACTIVITY. The place of activity determines which country's legislation applies. For example, you have a Georgian sole proprietorship, but no employees, and all activity is conducted on the territory of Belarus via the internet. The place of activity is Belarus, and it is in Belarus that the sole proprietorship must be registered and taxes paid.

      Countries may want to tax you when you work on their territory.

    1. "source": "http://example.org/page1", "selector": "http://example.org/paraselector1"

      Why have both "source" and "selector" when "source" can be a URI of a selector?

  15. May 2022
    1. Each browser performs MIME sniffing differently and under different circumstances.

      An app had better not rely on it.

    1. and common chunks across files can be reused in order to minimize storage costs.

      Content on IPFS can share common parts.

    1. Click 3 dots and go to Plugins, then Marketplace

      Did not appear to me on the latest (atm) logseq dev build.

    1. Currently, Logseq plugin can only run in developer mode. Under this mode, plugin has ability to access notes data. So if you're working with sensitive data, you'd better confirm the plugin is from trusted resource before installing it.

      .

  16. Apr 2022
    1. While it is possible using only the constructions described above to create Annotations that reference parts of resources by using IRIs with a fragment component, there are many situations when this is not sufficient. For example, even a simple circular region of an image, or a diagonal line across it, are not possible. Selecting an arbitrary span of text in an HTML page, perhaps the simplest annotation concept, is also not supported by fragments.

      Fragments are not expressive enough to address an arbitrary structure.

    2. Annotations have 1 or more Targets.

      Web Annotation allow for multiple Web Annotation Targets.

    1. Zotero instantly creates references and bibliographies for any text editor, and directly inside Word, LibreOffice, and Google Docs.

      Zotero has integrations with editors.

    2. Cite in style.

      Zoterro allows to cite.

    3. Collect with a click.

      Zoterro can take and store a snapshot of a document.

    1. The plugin might be related to user data security.

      .

    2. The other day, somebody told me that some of others use children and order (abbr: order-solution for convenience) rather than parent_id and left_id(abbr: left-solution for convenience) to process the relation of blocks, and I might think about it. (PS: Datascript can only save look-refs as set rather than vector.)

      .

    3. We should move all the string and byte calculations out of the outliner logic in the first step. So we had moved the logic of serialization and deserialization of a block into plugins. Doing this brings another significant benefit: we can easily implement multiple persistent storages such as markdown files, org-mode files, AsciiDoc files, or SQLite, etc., by just writing different serialize & deserialize adaptors.

      .

    4. The complexity and no boundaries pulled the developer's legs. To my surprise, I found that supporting Markdown & org-mode might already be our limit that Logseq can support.

      .

    1. React components render for the first time. Event handlers are registered.

      Shouldn't event handlers for about-to-get-rendered React components be registered prior to their render?

    2. Most of Logseq's application state is divided into two parts. Document-related state (all your pages, blocks, and contents) is stored in DataScript. UI-related state (such as the current editing block) is kept in Clojure's atom.

      Why not stick to DataScript altogether?

    3. handler

      Why not handlers?

    4. /src/test contains all the test

      Why is it /src/test and not /test?

      Not that it matters much, but I had the impression that that's the convention.

    5. /src/electron and /src/main/electron contains code specific to the desktop app.

      That comment would be cool to have as metadata on those folders.

    6. counterparts

      What's meant by that?

    7. With the advent of web assembly, you can use almost any language to write browser apps.

      How does that work?

  17. Mar 2022
    1. This is because the editors are not powerful web browsers.

      Web browsers are a mess:

      * design-wise (HTML, CSS, JS, web workers - all those known to me leave one hoping for a better world)
      * performance-wise (can't compete with a game engine)
      * cross-browser interoperability-wise (especially in small details; good luck getting it pixel-perfect)

      E.g., Tonsky's rants and his work on having a better desktop program execution environment.

    2. Many of the Clojure editors that we have today are still basically text buffers. Very little innovation occurs around visualising code inside the editors.

      Should there be? The only responsibility of a text editor is to allow managing text. They do it best; there are many editors, up to the user's taste. There is LSP, which eases navigation around the text. Editors should stay editors. I'd love so much to have one editor to edit text everywhere in my OS, be it in this comment box, somewhere else in a browser, or while editing text files.

      A viz tool can be a stand-alone thing, handling its own responsibility.

    3. as a medium for code serialization.

      As an alternative, a graph model + labels could be serialized.

    4. Text is a weak basis for understanding and thinking about code. Instead the semantics of the language must be the primary basis.

      An example of that is that some people love ligatures, as they make it easier to reason about code, presenting semantic symbols instead of sequences of chars.

    1. Following Meyer and McRobbie [57], the use of square brackets to represent anmset has become almost standard.

      Square brackets notation is used to denote a multiset.

    1. they are used in membrane computing

      multisets are used in membrane computing

    1. In order to represent any sequence all we need to do is use a multiset system containing the multiset of elements up to a point in the sequence for each point in the sequence.

      A way to model sequences via multisets (sketched below).
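
      A sketch of the construction: each prefix of the sequence becomes a multiset (a frequency map here); since each prefix adds exactly one element, the order is recoverable from the unordered collection of prefixes.

      ```clojure
      (defn seq->msets [xs]
        (set (map frequencies (rest (reductions conj [] xs)))))

      (seq->msets [:a :b :a])
      ;=> #{{:a 1} {:a 1, :b 1} {:a 2, :b 1}}

      (defn msets->seq [msets]
        (->> (sort-by #(reduce + (vals %)) msets) ; prefixes by size
             (reductions (fn [[_ prev] m]
                           ;; the one element whose count grew is next:
                           [(some (fn [[k n]]
                                    (when (not= n (get prev k 0)) k))
                                  m)
                            m])
                         [nil {}])
             rest
             (map first)))

      (msets->seq (seq->msets [:a :b :a])) ;=> (:a :b :a)
      ```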

  18. lisp-ai.blogspot.com
    1. Sets can be defined as a particular type of multisets whose values are all equal to one

      Sets can be modeled via multisets.

    1. files are serialized from graph objects in git so it may be possible to directly map this tools graph into the git representation

      Git objects are:

      - blob
      - tree
      - commit

      As mentioned here.

      A blob is a file's content; in the case of a Clojure program such content is the text of a .clj or .edn or some other file.

      From that, it seems we'd still need to derive the text representation out of a graph model of a program.

    1. [?cq :codeq/code ?cs]

      What does it capture?

    2. Thus, one way to understand codeq is as an extension of the git model and tree down to a finer granularity, aligned to program semantics.

      I wonder how granular it goes.

    1. Again, I will stress the point that, while this use case is rare

      Perhaps, to some degree, due to the inability to have it in the first place with non-ECS systems?

    2. To put it simply, nodes are just interfaces to the actual data being processed inside servers, while in ECS the actual entities are what gets processed by the systems.

      It may be useful to have nodes as an optional interface, as some may find it easier. It is not core to processing; as pointed out, I'd like to have my hands on the core - data + logic.

    3. taking the complexity away from the user

      Complexity is not taken away this way; it's hidden. It's still there, but behind an 'easier' interface, perhaps.

    4. In other words, Godot as an engine tries to take the burden of processing away from the user, and instead places the focus on deciding what to do in case of an event.

      'event' is an easy interface.

      However, it adds complexity.

      The need to describe/"decide" is still there.

      "event" captures data needed for a decision. "event handler" decides.

      We can't escape the need of data+logic.

      What I like in ECS is how data and logic are separated and how that results in a generic processing system (a sketch below).

      Whereas with events it's... yet another system to process data.
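
      That separation, in miniature (all names hypothetical): entities are plain data, and a System is just a function over that data.

      ```clojure
      (def world
        {1 {:pos 0 :vel 2}
         2 {:pos 5 :vel -1}
         3 {:pos 9}}) ; no :vel -> ignored by the movement system

      (defn movement-system [world dt]
        (reduce-kv (fn [w id {:keys [pos vel] :as e}]
                     (if vel
                       (assoc w id (assoc e :pos (+ pos (* vel dt))))
                       w))
                   world world))

      (movement-system world 1)
      ;=> {1 {:pos 2, :vel 2}, 2 {:pos 4, :vel -1}, 3 {:pos 9}}
      ```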

    5. Godot uses plenty of data-oriented optimizations for physics, rendering, audio, etc. They are, however, separate systems and completely isolated.

      Why should they be? Why is it not the one system to crunch data?

    6. Godot just has nodes, and a scene is simply a bunch of nodes forming a tree, which are saved in a file.

      And ECS just has Entities.

      Also, Systems can be Entities as well; however, we'd need a piece of logic to bootstrap.

      And Entities are IDs - we don't need that either. Or, if we do, a content-based id can be used.

    1. A predicate is a para-operator with Boolean values, and at least one argument but no Boolean argument.

      I would call the function "boolean?(value)" a predicate. However, the range to which "value" is bound contains the values "true" and "false". Would this function then fail to classify as a predicate, being a para-operator instead?

    2. The notion of operation generalizes that of function, by admitting a finite list of arguments (variables with given respective ranges) instead of one.

      Can the one argument to a function be a composite?

      E.g., a dictionary/map.

      If so, then function and operation seem to be equal in expressive power (a sketch below).
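
      Concretely (a toy example): a one-argument function over a map expresses the same thing as an arity-2 operation.

      ```clojure
      (defn area-op [width height] (* width height))   ; arity-2 operation

      (defn area-fn [{:keys [width height]}]           ; one composite argument
        (* width height))

      (= (area-op 3 4) (area-fn {:width 3 :height 4})) ;=> true
      ```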

    3. 2.3 will show how to construct operations with arity > 1 by means of functions

      Functions can be used to construct n-ary operations.

    4. list

      Passing values as a list embeds semantics into order/places. Place-based semantics/addressing is no good. Is there a better way?

    5. arguments

      I thought "argument" is a name for a function's input value symbol, whereas for an operation it's "operand".

    6. by admitting a finite list of arguments

      Why should it be finite / bound to a particular size? I can see how an operator "+" could take any number of arguments. (+ 1 2 3 4 ...)

    7. The notion of operation generalizes that of function, by admitting a finite list of arguments (variables with given respective ranges) instead of one.

      A function in programming languages can take an arbitrary number of inputs, whereas operators are usually ternary at most.

    8. Not all variables of set theory will have a range.

      I thought a bound variable would have at least an empty range. When would there be no range at all?

    9. possible or authorized values

      What's the difference between "possible" and "authorized" variable values?

  19. Dec 2021
    1. EPA dosage efficacy The dosage of EPA supplementation ranged from 180 mg/d to 4000 mg/d. We separated the EPA-pure and EPA-major groups into ≤1 g/d and >1 g/d, depending on the EPA dosage. The results indicated that with an EPA dosage ≤1 g/d, the EPA-pure and EPA-major groups demonstrated significant beneficial effects on the improvement of depression (SMD = −0.50, P = 0.003, and SMD = −1.03, P = 0.03, for the fixed-effects and random-effects models, respectively). We then set a dosage boundary of 1.5 g/d and 2 g/d, but no significant results were detected.

      The EPA dosage optimal for antidepressant effects is ≤1 g/d.

    1. In humans, serotonin is a neurotransmitter used throughout the body having action of 14 variants

      There are 14 variants of serotonin receptors in the human body.

    1. There are seven subtypes of serotonin receptors present in the body.[2]

      There are seven subtypes of serotonin receptors in the human body.

    2. The most clinically relevant function of serotonin is in psychiatric disorders; most commonly, its absence appears to be related to depression, anxiety, and mania.[5][4]

      Absence of serotonin is related with depression.

    1. With this suppression of GABA, the result is an increase in production and action of dopamine

      Inhibition of GABA results in increased production and action of dopamine

    2. Of the three endorphin types, beta-endorphins have been the most studied and prevalent, accounting for the majority of the functional properties of endorphins as generalized and understood as a whole.

      beta-endorphin seems to account for the majority of the functional properties of endorphins

    3. The pain relief experienced as a result of the release of endorphins has been determined to be greater than that of morphine

      Analgesic effect of endorphins is more potent than that of morphine.

    4. Thus, the sequences of beta-endorphins and gamma-endorphins essentially have the sequence of alpha-endorphins nested within them.

      gamma-endorphin nests alpha-endorphin

    5. Thus, the sequences of beta-endorphins and gamma-endorphins essentially have the sequence of alpha-endorphins nested within them.

      beta-endorphin nests gamma-endorphin

    1. The prototypical μ-opioid receptor agonist is morphine

      morphine is an agonist of mu-opioid receptors

    1. maps are equal if they have the same number of entries, and for every key/value entry in one map an equal key is present and mapped to an equal value in the other.

      It is possible to compare equality of maps solely by the second predicate: "for every key/value entry in one map an equal key is present and mapped to an equal value in the other".

      Perhaps the definition of equality of maps can be amended to consist solely of the second predicate?

    2. sets are equal if they have the same count of elements and, for every element in one set, an equal element is in the other.

      It is possible to compare whether two sets are equal solely by the second predicate: "for every element in one set, an equal element is in the other".

      The first predicate - "sets are equal if they have the same count of elements" - does not check for equality; rather, it is a fast check for inequality.

      Perhaps the definition of equality of sets can be amended to consist solely of the second predicate?

    3. Unicode characters are represented with \uNNNN as in Java.

      \uNNNN fits code points that take up to three bytes in UTF-8, whereas UTF-8 allows for code points that take up to four bytes. Can there be \uNNNNNN?

    1. β-Endorphins are produced primarily by both the anterior lobe of the pituitary gland [2,3], and in pro-opiomelanocortin (POMC) cells primarily located in the hypothalamus [3,4]

      beta-endorphin is produced by pituitary gland

    2. β-Endorphins are produced primarily by both the anterior lobe of the pituitary gland [2,3], and in pro-opiomelanocortin (POMC) cells primarily located in the hypothalamus [3,4]

      beta-endorphin is produced by hypothalamus

    1. Two such snapshots thus encode the future.

      It seems to me the only way to be able to derive any possible state of a system is by having a snapshot of that system,

      where the snapshot includes the laws by which it operates.

    2. If you know where something is at two closely spaced instants, you can tell its speed and direction.

      The proposed constraints seem not to be enough to determine the one way snapshot2 can be derived out of snapshot1.

    3. All the world's history can be determined from two snapshots taken in quick succession.

      It seems that there may be an infinite number of ways snapshot2 can be derived out of snapshot1.

    4. Thus, experienced time is linear, it can be measured and it has an arrow.

      A study suggests that our perception of time is not linear and correlates with the level of dopamine.

    5. All these phenomena, like memories, define a direction in time, and they all point the same way. Time has an arrow.

      So, perception of the universe by matter, namely humans, defines properties of that universe, such as the order of Nows?

    1. dopamine cell

      What is meant by it?

    2. Together, these features may allow brain circuits to toggle between two distinct dopamine messages, for learning and motivation respectively.

      What affects setting state to 'performance'?

    3. Together, these features may allow brain circuits to toggle between two distinct dopamine messages, for learning and motivation respectively.

      What affects setting state to 'learning'?

    4. dopamine effects on plasticity can be switched on or off by nearby circuit elements

      Does it mean that dopamine's effect on a circuit can be both learning and performance at the same time?