1,065 Matching Annotations
  1. Apr 2023
    1. This means there is no point in taking more than 5-7 g of creatine per day. [translated from Russian]

      The loading phase does drastically increase muscle creatine levels, and it relies on ~20 g/day. So there are effects from high dosages.

    2. immediately after the workout, not before it [translated from Russian]

      That may not be correct.

      According to this figure, metabolic changes that increase absorption begin to appear during the workout. If we want to take advantage of them, we would presumably like creatine levels to peak at that point. However, it takes about 45 minutes for blood creatine levels to peak. Thus, it may be beneficial to ingest creatine about 45 minutes before the workout.

    1. Trusty URIs are URLs just for convenience, so you can use good old HTTP to get them (in most cases), but the framework described above gives you several options that are highly unlikely to all fail at once.

      To me, Trusty URIs seem to have complected two concepts, making them bad at both.

      These concepts are: mutable name + immutable content.

      If you care about content-based addressing, then a mutable name as part of it is of no value.

      If you care about resolving immutable content from a name, you're locked into the domain name system, whereas there may be many parties online that hold the content you want, and it could have been resolved from them.


      To me it seems IPNS got it right, decoupling the two, allowing mutable names on top of immutable content.

      So you can resolve name -> content-based name, from peers.

      So you can refer by content-based name.

      So you can deref content-based name, from peers.

    1. it prevents the right to be forgotten

      It seems that by maintaining a 'blacklist' of removed entries per DMC we can both preserve the log of ops intact and remove the information of the removed log op entry.

    2. Implementations MAY also maintain a set of garbage collected block references.

      I'd imagine it's a MUST. Otherwise a replica may receive content it just removed and happily keep storing it. Keeping such a 'blacklist' is an elegant solution. Such lists are ever-growing, however, and perhaps could be trimmed in some way, e.g., after everybody who has been holding the content signs that it has been removed. Although even then, nothing stops somebody from uploading the newly deleted content again.

      I guess another solution would be not to delete somebody's content but to depersonalize that somebody. The content stays intact, signed by that somebody, but any personal information about them gets removed, leaving only their public key. That would require personal information to be stored in one mutable place that is not content-addressed.
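      A sketch of that 'blacklist' idea in Python (the names and structure are mine, not the DMC spec's): a replica remembers the hashes of garbage-collected blocks and refuses to store them again when gossip re-delivers them.

```python
import hashlib

def block_id(data: bytes) -> str:
    # Content address: hash of the block's bytes.
    return hashlib.sha256(data).hexdigest()

class Replica:
    """Hypothetical replica keeping a 'blacklist' of removed blocks."""
    def __init__(self):
        self.blocks = {}      # id -> data
        self.removed = set()  # ids of garbage-collected blocks

    def receive(self, data: bytes) -> bool:
        bid = block_id(data)
        if bid in self.removed:
            return False      # re-delivered after removal: drop it
        self.blocks[bid] = data
        return True

    def remove(self, bid: str):
        self.blocks.pop(bid, None)
        self.removed.add(bid)  # remember, so gossip can't resurrect it
```

      The `removed` set is exactly the ever-growing list discussed above; trimming it safely is the open question.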

    3. However, this creates permanent links to past operations that can no longer be forgotten. This prevents the right to be forgotten and does not seem like a viable solution for DMC (Section 6.2.1).

      That is a valid concern.

      Perhaps we could have both - partial order + removal of entries.

      I guess it could be achieved by having an op log with a 'remove entry by its hash' op. Logs would not preserve data for such entries. However, in order not to re-hash log entries from the removed entry onward, logs could keep the hash of the removed entry (but not its data).

      Maybe that's how removal is done in OrbitDB.
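      A toy sketch of that scheme (my own naming, not OrbitDB's): each entry stores the hash of its predecessor, so forgetting an entry's data while keeping its hash leaves the rest of the chain verifiable.

```python
import hashlib, json

def entry_hash(prev_hash: str, data) -> str:
    payload = json.dumps({"prev": prev_hash, "data": data}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class OpLog:
    """Hash-chained op log with 'forget data, keep hash' removal."""
    def __init__(self):
        self.entries = []  # [{"hash": ..., "data": ...}]

    def append(self, data):
        prev = self.entries[-1]["hash"] if self.entries else ""
        self.entries.append({"hash": entry_hash(prev, data), "data": data})

    def forget(self, h: str):
        # Drop the payload but keep the hash, so later entries
        # (whose hashes chain over this one) need no re-hashing.
        for e in self.entries:
            if e["hash"] == h:
                e["data"] = None

    def verify(self) -> bool:
        prev = ""
        for e in self.entries:
            # Forgotten entries are taken on faith: their stored
            # hash still anchors the chain for the entries after.
            if e["data"] is not None and entry_hash(prev, e["data"]) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```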

    4. Public-key cryptographic signatures are used to ensure that operations are issued by authorized entitites.

      Speaking of mutable graphs, signing ops seems to be a superior technique compared to signing entities: when signing ops, individual triples get signed, giving finer granularity. So there may exist entities combined out of triples signed by different authorities. Finer profiling.

    5. Operation to add an additional authorized key to a container (dmc:AddKey see Section 4.6)

      Reifying key addition as yet another op seems like a good idea. More generally, it's about managing a container's metadata in the container's state. One great benefit of it is that meta-ops and ops converge together.

    6. mutating operations and key revocations do not commute.

      Perhaps having a deterministic order for operations would solve that problem. Then, if the key revocation is ordered before an op signed with that key, the op is dropped; if after, it is preserved.

      Akin to how ops are ordered in ipfs-log.

      That requires key revocation to be reified - to be a plain op log entry.
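      A minimal sketch of the idea in Python (the op shape is made up for illustration): order ops deterministically by a (time, tiebreak) key, then drop writes whose signing key was revoked earlier in that order.

```python
def apply_ops(ops):
    """ops: dicts like {"t": logical_time, "id": tiebreak,
    "kind": "write"|"revoke", "key": signer_or_revoked_key, "value": ...}.
    Deterministic total order: (t, id). Writes signed with a key that
    was revoked earlier in that order are dropped."""
    revoked = set()
    state = []
    for op in sorted(ops, key=lambda o: (o["t"], o["id"])):
        if op["kind"] == "revoke":
            revoked.add(op["key"])
        elif op["key"] not in revoked:
            state.append(op["value"])
    return state
```

      Since every replica sorts by the same key, writes and revocations commute: delivery order no longer matters.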

    1. For example, if you want to find all entities that have component types A and B, you can find all the archetypes with those component types, which is more performant than scanning through all individual entities.

      Archetypes seem to be a kind of index. However, for the example given, that index does not get used to full effect. A more fitting solution might be to keep an index of entities per set of components that your code actually filters by - e.g., such sets would come from Systems.
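      A sketch of such an index (a hypothetical API, not any real ECS): the query signatures are registered up front, and each entity insertion updates the matching indexes, so a query is a single lookup.

```python
class World:
    """Entities as component dicts, plus one index per query signature."""
    def __init__(self, signatures):
        self.entities = {}  # id -> {component_name: value}
        self.index = {frozenset(s): set() for s in signatures}

    def add(self, eid, components):
        self.entities[eid] = components
        for sig, members in self.index.items():
            if sig <= components.keys():  # entity has every component in sig
                members.add(eid)

    def query(self, *components):
        # O(1) lookup instead of scanning all entities or all archetypes.
        return self.index[frozenset(components)]
```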

  2. Mar 2023
    1. In order to allow Comunica to produce more efficient query plans, you can optionally expose a countQuads method that has the same signature as match, but returns a number or Promise<number> that represents (an estimate of) the number of quads that would match the given quad pattern.

      The number of quads in a source may be ever-growing.

      Then we couldn't count them. Is that a problem for Comunica, or will it handle infinite streams fine?

      I.e., the stream returned by match() keeps accreting new values (e.g., as they are being produced by somebody).

    2. If Comunica does not detect a countQuads method, it will fallback to a sub-optimal counting mechanism where match will be called again to manually count the number of matches.

      Can't Comunica count quads as they arrive through the stream returned by match()?

    1. WITH <http://example/bookStore> DELETE { ?book ?p ?v } WHERE { ?book dc:date ?date ; dc:type dcmitype:PhysicalObject . FILTER ( ?date < "2000-01-01T00:00:00-02:00"^^xsd:dateTime ) ?book ?p ?v }

      Can the DELETE clause be placed right below the INSERT clause above, so we don't need to repeat the same WHERE twice? (SPARQL 1.1 Update does allow a single DELETE { ... } INSERT { ... } WHERE { ... } operation.)

      Also, it would be nice to have transactional guarantees on the whole query.

    1. The fiscal host is responsible for taxes, accounting, compliance, financial admin, and paying expenses approved by the Collective’s core contributors (admins).

      .

    1. An improvement for helia would be to switch this around and have the monorepo push changes out to the split-out repos, the reason being the sync job runs on a timer and GitHub disables the timer if no changes are observed for a month or so, which means a maintainer has to manually go through and re-enable the timer for every split-out repo periodically - see ipfs-examples/js-ipfs-examples#44

      An ideal design, it seems to me, would be the monorepo pulling from its dependent repos. That would allow granular codebase management in the individual repos while keeping discoverability via the monorepo. It also would not complect the repos with knowledge of the monorepo.

  3. Feb 2023
    1. What about companies for whom core-js helped and helps to make big money? It's almost all big companies. Let's rephrase this old tweet: Company: "We'd like to use SQL Server Enterprise" MS: "That'll be a quarter million dollars + $20K/month" Company: "Ok!" ... Company: "We'd like to use core-js" core-js: "Ok! npm i core-js" Company: "Cool" core-js: "Would you like to help contribute financially?" Company: "lol no"

      Corps optimise for money. Giving away money for nothing in return goes against their nature.

  4. Dec 2022
    1. "scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }

      An alternative domain model could be:

          {"a": "/a.js",
           "b": "/b.js",
           "path1": {"a": "/a1.js",
                     "path11": {"b": "/b11.js"}}}

      This way the path to a name is a 'scope'/'namespace'. Also, we're spared the need for "/" in scope names. It does look harder to parse visually than a flat model, though.
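      A sketch of how resolution could work against this nested model (hypothetical, just to test the idea): walk down the scope path and let the deepest binding of the name win.

```python
def resolve(name, path, scopes):
    """scopes: nested dict where string values are bindings and dict
    values are child scopes; path: list of scope names to descend.
    The deepest binding of `name` along the path wins."""
    binding = scopes.get(name) if isinstance(scopes.get(name), str) else None
    node = scopes
    for part in path:
        node = node.get(part)
        if not isinstance(node, dict):
            break  # path leaves the scope tree; keep what we have
        if isinstance(node.get(name), str):
            binding = node[name]
    return binding
```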

    2. "scopes": { "/scope2/": { "a": "/a-2.mjs" }, "/scope2/scope3/": { "b": "/b-3.mjs" } }

      An alternative domain model could be:

          {:imports {}
           :scopes {:imports {}
                    :scopes {}}}

      This is more verbose, but more uniform.

  5. Nov 2022
    1. Datomic does not know that the variable ?reference is guaranteed to refer to a reference attribute, and will not perform entity identifier resolution for ?country

      Datomic may learn at query execution time that it's a ref attribute; I'd expect it to be able to resolve a ref by its id in such a case.

  6. Oct 2022
    1. In Georgia, and many other countries, income which is earned through actual work performed within the country, whether the income comes from a foreign source or not, is considered to be Georgian source income. So, if you have a client who pays you from abroad, direct to a foreign bank account, even if the money never arrives in Georgia, it is still Georgian source income as the work was performed here.

      .

  7. Sep 2022
    1. The form state support is about just that: form state. It basically keeps a "pristine" copy of the data of one or more entities and allows you to track if (and how) a particular field has been interacted with.

      Having modifications done to the actual entity in the db results in reactive behaviour.

      E.g., you have an app called "Test App" and you display "Test App" in the navbar in place of a logo. You also have a form that lets you modify it. When you modify "Test App" to "My Supa Test App", it gets updated in the navbar as you type (and across all other places). That may not be what the user wants, as likely they want to set a new value first. This is akin to the validation problem, where we don't want to show "field's incorrect" when the user hasn't touched the field or is still typing in it.

      Perhaps keeping the "dirty" form state separate and making it the subject of modification would be a solution, leaving the actual value used in the DB pristine. I.e., swap "dirty" and "pristine".

    1. #com.wsscode.pathom3.connect.operation.Resolver

      May also contain inferred-input (it seems to be Pathom's effort to guess the input by analyzing the fn's param destructuring).

  8. Aug 2022
    1. $PROFILE/lib=/lib

      Why not /lib instead? It seems to do exactly the same thing, as there is no difference whether /lib comes from an environment built stand-alone or from this one; the manifest is the same.

    1. By popular demand, Bevy now supports States. These are logical "app states" that allow you to enable/disable systems according to the state your app is in.

      Could it be solved in a more ECS fashion by having a system that manages other systems?

      That would require systems to be entities, however.

    2. Systems can now have inputs and outputs. This opens up a variety of interesting behaviors, such as system error handling:

      Hmm, I'd have thought an ECS-style solution would be to have errors as entities and error handling as a system.

      But perhaps then we wouldn't have static analysis helping us by validating that all errors are being handled.

    3. It made sense to break up these concepts. In Bevy 0.4, Query filters are separate from Query components. The query above looks like this:

      That's sick! It's always good to see things being decomplected. Looks similar to Datomic's rules and pull pattern (rules being the second query expression here in Bevy, and pull pattern being the first).

    1. in order

      How would order be guaranteed if events can come to both streams at any time? E.g., at time1: Stream1 [] Stream2 [2]

      It gets processed, outputting 2.

      And at time2: Stream1 [1] Stream2 []

      And 1 gets output, resulting in [2 1] downstream.

      It's nitpicking the terminology, but perhaps 'order' is not the right term; maybe 'precedence'?
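      The scenario above, sketched in Python: a merge that emits items as they arrive yields arrival order, with no per-stream precedence.

```python
def merge_as_arrived(events):
    """events: (arrival_time, stream_name, value) tuples from any number
    of streams. Emitting items as they arrive yields arrival order,
    regardless of which stream each item came from."""
    return [value for _, _, value in sorted(events)]

# Stream2 delivers 2 at time 1; Stream1 delivers 1 only at time 2:
downstream = merge_as_arrived([(1, "Stream2", 2), (2, "Stream1", 1)])
```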

  9. Jul 2022
    1. (rx/subs #(rx/push! output-sb %))

      Won't this keep potoks alive forever? If so, it seems an app's potok pool would grow without bound, eventually dragging performance to a halt.

    1. These requirements are satisfiable throughthe use of a cryptographic primitive called homomorphic hashing.

      Sounds interesting! I wonder if it puts approach of accumulative hashing of sets to use.

    2. However, the main drawback to directly signing thedatabase in this manner is that for each update published, the distributor must iterate over theentire database to produce the signature.

      Persistent data structures address this drawback by representing the database as a tree and re-hashing only the part of the tree that changed - e.g., how it's done in Clojure (and some other languages, which I fail to recall, that borrowed it).

      As mentioned in Approach 3, via Merkle tries.
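      A toy illustration of the tree idea: updating one leaf re-hashes only the path from that leaf to the root, O(log n) hashes instead of O(n).

```python
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def build(leaves):
    """Binary hash tree over an even, power-of-two number of leaf strings."""
    level = [h(x) for x in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def update(tree, i, leaf):
    """Re-hash only the path from leaf i to the root; returns the new root."""
    tree[0][i] = h(leaf)
    for lvl in range(1, len(tree)):
        i //= 2
        tree[lvl][i] = h(tree[lvl - 1][2 * i], tree[lvl - 1][2 * i + 1])
    return tree[-1][0]
```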

    3. To illustrate the wastefulness of this approach, consider the case in which the first row of thedatabase holds an integer counter, and each update simply increments this integer by 1. If there arem such updates, then the batch update and offline database validation operations could involvem signature validations, even though the complete sequence of transformations trivially updates asingle row of the database.

      Re-validating the whole database seems to be of no use, since validation is already performed before an update is inserted into the db.

      I can see how validating every update can be wasteful if all that's wanted is the final value; then the most efficient way seems to be to have that value signed and validated, as in Approach 2. But signed updates may be desirable in some cases (e.g., for collaborative editing, as shown useful in pijul); perhaps a composed solution could combine the pros of the two approaches.

  10. Jun 2022
    1. This prop controls Excalidraw's theme. When supplied, the value takes precedence over intialData.appState.theme, the theme will be fully controlled by the host app, and users won't be able to toggle it from within the app. You can use THEME to specify the theme.

      Excalidraw can be supplied with a 'theme' argument.

    1. if an individual is not a tax resident of any state, including the Republic of Belarus, they are recognized as a tax resident of the Republic of Belarus if, in the calendar year for which tax residency is determined, they hold citizenship of the Republic of Belarus or a permit for permanent residence in the Republic of Belarus (residence permit) [translated from Russian]

      Belarus sets a "default tax residency" for its citizens and holders of residence permits.

    1. One of the key concepts of tax legislation is the PLACE OF CARRYING OUT ACTIVITY. The place of activity determines which country's legislation applies. For example, you have a Georgian sole proprietorship, but there are no employees and all activity is conducted on the territory of Belarus via the internet. The place of activity is Belarus, and it is in Belarus that you must register the sole proprietorship and pay taxes. [translated from Russian]

      Countries may want to tax you when you work on their territory.

    1. "source": "http://example.org/page1", "selector": "http://example.org/paraselector1"

      Why have both "source" and "selector" when "source" could be the URI of a selector?

  11. May 2022
    1. Currently, Logseq plugin can only run in developer mode. Under this mode, plugin has ability to access notes data. So if you're working with sensitive data, you'd better confirm the plugin is from trusted resource before installing it.

      .

  12. Apr 2022
    1. While it is possible using only the constructions described above to create Annotations that reference parts of resources by using IRIs with a fragment component, there are many situations when this is not sufficient. For example, even a simple circular region of an image, or a diagonal line across it, are not possible. Selecting an arbitrary span of text in an HTML page, perhaps the simplest annotation concept, is also not supported by fragments.

      Fragments are not expressive enough to address an arbitrary structure.

    1. Zotero instantly creates references and bibliographies for any text editor, and directly inside Word, LibreOffice, and Google Docs.

      Zotero has integrations with editors.

    1. The other day, somebody told me that some of others use children and order (abbr: order-solution for convenience) rather than parent_id and left_id(abbr: left-solution for convenience) to process the relation of blocks, and I might think about it. (PS: Datascript can only save look-refs as set rather than vector.)

      .

    2. We should move all the string and byte calculations out of the outliner logic in the first step. So we had moved the logic of serialization and deserialization of a block into plugins. Doing this brings another significant benefit: we can easily implement multiple persistent storages such as markdown files, org-mode files, AsciiDoc files, or SQLite, etc., by just writing different serialize & deserialize adaptors.

      .

    3. The complexity and no boundaries pulled the developer's legs. To my surprise, I found that supporting Markdown & org-mode might already be our limit that Logseq can support.

      .

    1. React components render for the first time. Event handlers are registered.

      Shouldn't event handlers for about-to-be-rendered React components be registered prior to their render?

    2. Most of Logseq's application state is divided into two parts. Document-related state (all your pages, blocks, and contents) is stored in DataScript. UI-related state (such as the current editing block) is kept in Clojure's atom.

      Why not stick to DataScript altogether?

  13. Mar 2022
    1. This is because the editors are not powerful web browsers.

      Web browsers are a mess:
      * design-wise (HTML, CSS, JS, web workers - everything known to me leaves me hoping for a better world)
      * performance-wise (they can't compete with a game engine)
      * cross-browser interoperability-wise (especially in small details; good luck getting it pixel-perfect)

      E.g., Tonsky's rants and his work on a better desktop program execution environment.

    2. Many of the Clojure editors that we have today are still basically text buffers. Very little innovation occurs around visualising code inside the editors.

      Should there be? The only responsibility of a text editor is to let you manage text. They do it best, and there are many editors to suit every user's taste. There is LSP to ease navigation around the text. Editors should stay editors. I'd love to have one editor for editing text everywhere in my OS, be it this comment box, somewhere else in a browser, or text files.

      A viz tool can be a stand-alone thing, with its own responsibility.

    3. Text is a weak basis for understanding and thinking about code. Instead the semantics of the language must be the primary basis.

      An example of this is that some people love ligatures, as they make it easier to reason about code by presenting semantic symbols instead of sequences of chars.

    1. Following Meyer and McRobbie [57], the use of square brackets to represent an mset has become almost standard.

      Square brackets notation is used to denote a multiset.

    1. In order to represent any sequence all we need to do is use a multiset system containing the multiset of elements up to a point in the sequence for each point in the sequence.

      A way to model sequences via multisets.

  14. lisp-ai.blogspot.com
    1. files are serialized from graph objects in git so it may be possible to directly map this tools graph into the git representation

      Git objects are:
      - blob
      - tree
      - commit

      As mentioned here.

      A blob is a file's content, in case of a Clojure program such content is text from a .clj or .edn or some other file.

      From that, it seems we'd still need to derive the text representation from the graph model of a program.

    1. Thus, one way to understand codeq is as an extension of the git model and tree down to a finer granularity, aligned to program semantics.

      I wonder how granular does it go.

    1. To put it simply, nodes are just interfaces to the actual data being processed inside servers, while in ECS the actual entities are what gets processed by the systems.

      It may be useful to have nodes as an optional interface, as some may find it easier. It is not core to processing, as pointed out; I'd like to have my hands on the core - data+logic.

    2. In other words, Godot as an engine tries to take the burden of processing away from the user, and instead places the focus on deciding what to do in case of an event.

      'event' is an easy interface.

      However, it adds complexity.

      The need to describe/"decide" is still there.

      "event" captures data needed for a decision. "event handler" decides.

      We can't escape the need of data+logic.

      What I like in ECS is how data and logic is separated and how it results in a generic processing system.

      Whereas with events it's... yet another system to process data.

    3. Godot uses plenty of data-oriented optimizations for physics, rendering, audio, etc. They are, however, separate systems and completely isolated.

      Why should they be? Why is it not one system to crunch the data?

    4. Godot just has nodes, and a scene is simply a bunch of nodes forming a tree, which are saved in a file.

      And ECS just has Entities.

      Also, Systems could be Entities as well; however, we'd need a piece of logic to bootstrap.

      And Entities are IDs; we don't need those either. Or, if we do, a content-based id can be used.

    1. A predicate is a para-operator with Boolean values, and at least one argument but no Boolean argument.

      I would call the function "boolean?(value)" a predicate. However, the range to which "value" is bound contains the values "true" and "false". Would this function, then, fail to classify as a predicate and instead be a para-operator?

    2. The notion of operation generalizes that of function, by admitting a finite list of arguments (variables with given respective ranges) instead of one.

      Can the one argument to a function be a composite?

      E.g., a dictionary/map.

      If so, then function and operation seem to be equal in expressive power.

    3. by admitting a finite list of arguments

      Why should it be finite / bound to a particular size? I can see how an operator "+" could take any number of arguments: (+ 1 2 3 4 ...)

    4. The notion of operation generalizes that of function, by admitting a finite list of arguments (variables with given respective ranges) instead of one.

      A function in programming languages can take an arbitrary number of inputs, whereas operators are usually ternary at most.

  15. Dec 2021
    1. EPA dosage efficacy The dosage of EPA supplementation ranged from 180 mg/d to 4000 mg/d. We separated the EPA-pure and EPA-major groups into ≤1 g/d and >1 g/d, depending on the EPA dosage. The results indicated that with an EPA dosage ≤1 g/d, the EPA-pure and EPA-major groups demonstrated significant beneficial effects on the improvement of depression (SMD = −0.50, P = 0.003, and SMD = −1.03, P = 0.03, for the fixed-effects and random-effects models, respectively). We then set a dosage boundary of 1.5 g/d and 2 g/d, but no significant results were detected.

      EPA dosage optimal for antidepressant effects is <=1g/d.

    1. The most clinically relevant function of serotonin is in psychiatric disorders; most commonly, its absence appears to be related to depression, anxiety, and mania.[5][4]

      An absence of serotonin appears to be related to depression.

    1. With this suppression of GABA, the result is an increase in production and action of dopamine

      Inhibition of GABA results in increased production and action of dopamine

    2. Of the three endorphin types, beta-endorphins have been the most studied and prevalent, accounting for the majority of the functional properties of endorphins as generalized and understood as a whole.

      beta-endorphin seems to account for the majority of the functional properties of endorphins.

    3. The pain relief experienced as a result of the release of endorphins has been determined to be greater than that of morphine

      Analgesic effect of endorphins is more potent than that of morphine.

    4. Thus, the sequences of beta-endorphins and gamma-endorphins essentially have the sequence of alpha-endorphins nested within them.

      gamma-endorphin nests alpha-endorphin

    5. Thus, the sequences of beta-endorphins and gamma-endorphins essentially have the sequence of alpha-endorphins nested within them.

      beta-endorphin nests gamma-endorphin

    1. maps are equal if they have the same number of entries, and for every key/value entry in one map an equal key is present and mapped to an equal value in the other.

      It is possible to compare maps for equality solely by the second predicate, "for every key/value entry in one map an equal key is present and mapped to an equal value in the other", provided it is checked in both directions; checked one way only, it merely shows that one map is a sub-map of the other.

      Perhaps the definition of equality of maps could be amended to consist solely of that (bidirectional) predicate?

    2. sets are equal if they have the same count of elements and, for every element in one set, an equal element is in the other.

      It is possible to compare whether two sets are equal solely by the second predicate, "for every element in one set, an equal element is in the other", provided it is checked in both directions; one direction alone only establishes a subset relation.

      The first predicate, "sets are equal if they have the same count of elements", does not check for equality; rather, it is a fast check for inequality.

      Perhaps the definition of equality of sets could be amended to consist solely of the second (bidirectional) predicate?
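      A quick counterexample: the containment predicate checked in one direction only gives a subset relation; the count check (or checking the reverse direction as well) is what restores equality.

```python
def contained_in(a, b):
    """'For every element in one set, an equal element is in the other',
    checked in one direction only."""
    return all(x in b for x in a)

a, b = {1, 2}, {1, 2, 3}
# One-directional containment holds even though the sets differ:
assert contained_in(a, b) and a != b
# The count check (or the reverse direction) rules this case out:
assert len(a) != len(b)
assert not contained_in(b, a)
```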

    3. Unicode characters are represented with \uNNNN as in Java.

      \uNNNN covers only code points up to U+FFFF (two bytes), whereas Unicode code points go up to U+10FFFF, which UTF-8 encodes in up to four bytes. Can there be \uNNNNNN?
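      For reference, JSON (like Java) spells code points above U+FFFF as a UTF-16 surrogate pair of two \uNNNN escapes; Python's json module demonstrates this.

```python
import json

# U+1F600 lies above U+FFFF, so a single \uNNNN escape cannot express it;
# JSON (like Java) spells it as a UTF-16 surrogate pair instead.
decoded = json.loads('"\\ud83d\\ude00"')
assert decoded == "\U0001F600"
assert ord(decoded) == 0x1F600
```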

    1. β-Endorphins are produced primarily by both the anterior lobe of the pituitary gland [2,3], and in pro-opiomelanocortin (POMC) cells primarily located in the hypothalamus [3,4]

      beta-endorphin is produced by pituitary gland

    2. β-Endorphins are produced primarily by both the anterior lobe of the pituitary gland [2,3], and in pro-opiomelanocortin (POMC) cells primarily located in the hypothalamus [3,4]

      beta-endorphin is produced by hypothalamus

    1. Two such snapshots thus encode the future.

      It seems to me the only way to be able to derive any possible state of a system is by having a snapshot of that system,

      where the snapshot includes the laws by which it operates.

    2. If you know where something is at two closely spaced instants, you can tell its speed and direction.

      The proposed constraints seem insufficient to determine the one way snapshot2 is derived from snapshot1.

    3. All the world's history can be determined from two snapshots taken in quick succession.

      It seems that there may be an infinite number of ways snapshot2 could be derived from snapshot1.

    4. Thus, experienced time is linear, it can be measured and it has an arrow.

      The study suggests that our perception of time is not linear and correlates with dopamine levels.

    5. All these phenomena, like memories, define a direction in time, and they all point the same way. Time has an arrow.

      So, the perception of the universe by matter, namely humans, defines properties of that universe, such as the order of Nows?

    1. Together, these features may allow brain circuits to toggle between two distinct dopamine messages, for learning and motivation respectively.

      What affects setting state to 'performance'?

    2. Together, these features may allow brain circuits to toggle between two distinct dopamine messages, for learning and motivation respectively.

      What affects setting state to 'learning'?

    3. dopamine effects on plasticity can be switched on or off by nearby circuit elements

      Does it mean that dopamine's effect on a circuit can be both learning and performance at the same time?

    1. Since version 0.6 (2002[19]), Gnutella is a composite network made of leaf nodes and ultra nodes (also called ultrapeers). The leaf nodes are connected to a small number of ultrapeers (typically 3) while each ultrapeer is connected to more than 32 other ultrapeers. With this higher outdegree, the maximum number of hops a query can travel was lowered to 4.

      It does not guarantee that a query will reach all nodes, does it?

  16. Nov 2021
    1. Piece Hash Comparisons If two torrents have pieces with the same hash, the data from those pieces will be re-used. This optimization greatly favors torrents with piece-aligned files.

      How would it escape the same-key attack mentioned here?

    1. The RDF standard introduces the notion of reification as an approach to provide a set of RDFtriples that describe some other RDF triple [HPS14].

      Named Graphs approach can be used to provide a set of triples about a set of triples.

    1. Technically, this is a fourth element, which can be attached to the <subject, predicate, object> triple

      In case there are multiple graphs that intersect on the same triples, there may be a lot of repeated triples.

    1. It requires the usage of algorithms (e.g. the Expansion Algorithm or the Framing Algorithm) that are incomprehensible and just pure madness.

      I am not familiar with how it works, but it appears to me to be a straightforward derivation both ways.

      I wonder where I might be wrong.

    1. test.txt v1

      This git object is a blob, it does not carry information that it is named "test.txt v1" under a tree. (Also, the same blob can be referenced from different trees, and it can have different names as well.)

    1. git-cat-file

      Why is it not called 'git-cat-object', since it works on objects?

      An "object" can represent not only a "file" ("blob", in git's terms) but a "directory" ("tree", in git's terms) as well.

    1. Composing knowledge from events In any multi-actor system, like a team of collaborating robots, every actor must make decisions based on some current knowledge: a local world model, or an evolving set of goals. It may not be possible to centralise that knowledge – for example if the actors can lose their network connection; or the latency or locking behaviour of a central database is not acceptable. If the contributing data events are frequent, for example from sensors, every update has to be synthesised by every actor into their knowledge. This can rapidly become a computational headache. m-ld can help. The knowledge itself is maintained as convergent shared state, and each actor only needs to apply their own updates. The rich query syntax then lets them act upon what they need to know, when they need to act.

      Actors make decisions based on their local perception of the world.

    1. There is no single best way to structure one’s data; every choice comes with significant tradeoffs. The larger our dataset becomes, the more important it is to tailor its structure to the way we intend to use and access it.

      Why not create as many indexes/data-structures as there are use-cases?

      Since any piece of content can be referenced, a file-system-like index can be as such:

      pics
      ├── 2018
      │   ├── 2018-02-23-tabby.png
      │   └── 2018-04-14-tropical.png
      ├── cats
      │   ├── 2018-02-23-tabby.png
      │   └── 2019-12-16-black.png
      └── fish
          ├── 2017-03-05-freshwater.png
          ├── 2018-04-14-tropical.png
          └── 2020-10-02-blowfish.png
      

      Although this data structure seems to serve too many use-cases, it can be split.
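      A sketch of the 'one index per use-case' idea: with content-addressed pics, several cheap indexes can coexist over the same objects (the hashes and metadata here are made up for illustration).

```python
# Hypothetical content-addressed picture store: h1..h3 stand in for
# content hashes; each index entry holds only hashes, never the bytes.
pics = {
    "h1": {"date": "2018-02-23", "tag": "cats"},
    "h2": {"date": "2018-04-14", "tag": "fish"},
    "h3": {"date": "2019-12-16", "tag": "cats"},
}

def index_by(pics, keyfn):
    """Build one index for one use-case; indexes are cheap to add."""
    idx = {}
    for pic_hash, meta in pics.items():
        idx.setdefault(keyfn(meta), []).append(pic_hash)
    return idx

# One index per use-case, all over the same underlying content:
by_tag = index_by(pics, lambda m: m["tag"])
by_year = index_by(pics, lambda m: m["date"][:4])
```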

    1. Describe the style in which the source resource should be presented for the Annotation

      Preference of how to perceive content seems to be a matter of taste and use-case, making it imposible to satisfy all possible combinations with a provided way.

      Perhaps this should be left to the end-user?

    2. or simply the location where the current harvesting system discovered the resource

      If annotation copies accrete reasoning of their own, it will be hard to connect that reasoning.

      I.e., reasoning on copy A would not know of reasoning on copy B.

      It seems that all reasoning should be performed on original.

      Perhaps, to achieve that, copies could be made unannotatable?

    3. is NOT RECOMMENDED for uses where the Body may need to be referred to from outside of the Annotation

      I'd imagine that if an Annotation is meant to stay on the web indefinitely, as would be the case on the immutable web we may get at some point, there will inevitably be a third party that wants to annotate that annotation.

      Annotations that cannot themselves be annotated would prevent further reasoning.

      For the web as an open platform, that seems ill-fitting.

    4. is NOT RECOMMENDED for uses where the Body may need to be referred to from outside of the Annotation

      That seems to be the case not only for "bodyValue", but for a "body" of type Resource as well, as it is likewise not referenceable.

    5. The identity of the SpecificResource is separate from the descriptions of the constraints.

      Why?

      A SpecificResource can be uniquely identified by a composite identity: the URI of the Resource plus its constraints.

      That would avoid the need to register yet another unique id.

      It would also allow different parties to deterministically identify the same SpecificResource without any communication.
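      One hypothetical scheme for such a composite identity (not part of the spec): hash a canonical serialization of the source URI together with the constraints, so independent parties arrive at the same id with no coordination.

      ```python
      import hashlib
      import json

      def specific_resource_id(source_uri: str, selector: dict) -> str:
          # Canonical serialization: sorted keys, no whitespace, so the
          # same inputs always produce the same bytes and the same hash.
          canonical = json.dumps({"source": source_uri, "selector": selector},
                                 sort_keys=True, separators=(",", ":"))
          return "urn:sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

      a = specific_resource_id("http://example.org/page1",
                               {"type": "TextQuoteSelector", "exact": "anotation"})
      b = specific_resource_id("http://example.org/page1",
                               {"type": "TextQuoteSelector", "exact": "anotation"})
      assert a == b  # same Resource + same constraints -> same identity
      ```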

    6. "type": "Video"

      Some clients may support one specific type and not support another specific type.

      When only a generic type is supplied, with no specific type, a client may not be able to tell whether it can handle the data.

      E.g., a client may support video/mp4 but not video/ogg, leaving it unable to predict whether it can handle data typed merely as "video".

      It seems to me that a scenario where a client cannot handle all specific types under a generic type is rather common.

    7. it is also useful for a client to know that a resource with the format text/csv should not simply be rendered as plain text

      It should be up to the client to decide how to handle data.

    8. It is useful for clients to know the general type of a Web Resource in advance.

      Can't a general type be derived from a specific one?

      E.g., the generic type can be derived from a specific MIME type, because MIME types have a type/subtype structure.

      E.g., video/mp4 -> video
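      The derivation is mechanical, since a MIME type is always type/subtype (a minimal sketch; the capability set below is an illustrative assumption):

      ```python
      def generic_type(mime: str) -> str:
          # "video/mp4" -> "video", "text/csv" -> "text"
          return mime.split("/", 1)[0]

      # What the generic type alone cannot answer: a hypothetical client's
      # actual capabilities hinge on the specific subtype.
      SUPPORTED = {"video/mp4", "image/png"}  # illustrative capability set

      def can_handle(mime: str) -> bool:
          return mime in SUPPORTED

      assert generic_type("video/mp4") == "video"
      assert can_handle("video/mp4") and not can_handle("video/ogg")
      ```

      So a specific type lets a client derive the generic one whenever it needs to, while the reverse direction loses exactly the information the client needs.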

    9. if information provided by the external resource contradicts the information provided by the annotation about it, then the external resource is authoritative and the information from the annotation should be disregarded

      An Annotation refers to an exact value of a resource.

      The annotator most likely does not expect this value to change.

      However, immutability cannot be enforced on resources.

      Allowing an annotator to record such local parameters would let them rely on at least those being immutable.

    10. if information provided by the external resource contradicts the information provided by the annotation about it, then the external resource is authoritative and the information from the annotation should be disregarded

      Isn't it up to the Annotation to decide how to view a resource?

    11. if information provided by the external resource contradicts the information provided by the annotation about it, then the external resource is authoritative and the information from the annotation should be disregarded

      Why so?

    12. processingLanguage

      Why does the Annotation take on presentation responsibilities?

      Content can be presented in a myriad of ways, and it's up to the consumer to decide how to view it according to their preferences.

    13. This information may be recorded as part of the Annotation, even if the representation of the resource must be retrieved from the Web.

      This seems necessary, because an author of an Annotation may be a third party with respect to the Target or Body, having no control over it, and yet may want to specify some data about it.

    14. the Body may also be embedded within the Annotation

      It won't be possible to reference such a Body of a published Annotation.

      So it won't be possible to annotate such an Annotation.

      I.e., it is not possible to use such an Annotation for further reasoning.

      That may be fine if no third party will ever want to annotate it or use it as part of their reasoning, but personally I would not be comfortable counting on that.

      Based on that reasoning, it seems to me that for the open web it would be more suitable to require every published Annotation to have a referenceable Body.
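      The contrast can be sketched as two annotation shapes, rendered here as Python dicts (all IRIs hypothetical; the structures follow the Web Annotation model):

      ```python
      # Embedded body: no IRI of its own, so nothing for a third party to target.
      embedded = {
          "id": "http://example.org/anno1",
          "type": "Annotation",
          "bodyValue": "Comment text",
          "target": "http://example.org/page1",
      }

      # Referenceable body: the body is an IRI in its own right ...
      referenceable = {
          "id": "http://example.org/anno2",
          "type": "Annotation",
          "body": "http://example.org/comment1",
          "target": "http://example.org/page1",
      }

      # ... so a further annotation can build on it:
      reply = {
          "id": "http://example.org/anno3",
          "type": "Annotation",
          "body": "http://example.org/reply1",
          "target": referenceable["body"],  # annotating the body itself
      }
      assert reply["target"] == "http://example.org/comment1"
      ```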