58 Matching Annotations
  1. Feb 2021
  2. Dec 2020
    1. length of time the asynchronous messaging queue can exist

      I think this statement is misleading: it implies the time is measured from the birth of the queue, when it's actually measured from the most recent transmission to the receiver. The source code (Connection.java) says:

      • How long to wait, with the receiver not accepting any messages, before kicking the receiver out of the distributed system. Ignored if asyncDistributionTimeout is zero.
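
      For reference, here's a sketch of setting the two knobs involved, as I understand them (the values are illustrative, not recommendations):

      ```java
      // Sketch: the async messaging knobs (illustrative values). Per the
      // Connection.java comment above, the queue-timeout clock resets each
      // time the receiver accepts a transmission.
      import org.apache.geode.cache.Cache;
      import org.apache.geode.cache.CacheFactory;

      public class AsyncQueueConfig {
        public static void main(String[] args) {
          Cache cache = new CacheFactory()
              .set("async-distribution-timeout", "5000") // go async after 5s of slowness
              .set("async-queue-timeout", "60000")       // drop the receiver after 60s idle
              .create();
          cache.close();
        }
      }
      ```
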
    1. We note that liveness of the consensus layer depends on a majority of both C_j and C_j+1 remaining correct to perform the 'hand-off': between the time when C_j becomes current and until C_j+1 does, no more than a minority fail in either configuration.

      Makes sense. Glad they call this out, though!

    1. until the faulty switch recovered

      Is there not a configurable timeout that would have let nodes 2 and 3 form a cluster (with node 3 still the leader) and caused node 1 to shut itself down (ostensibly ahead of the faulty switch recovering)?

    1. In parallel to the receipt of the SuspectMembersMessage

      "in parallel to receipt" confuses me. If it said "in parallel to sending the SuspectMembersMessage, the suspicious member initiates a distributed…"

    1. The coordinator sends a network-partitioned-detected UDP message

      The implication here is that the coordinator can (in general) learn of a network-partitioned-detected event from other members; that is, a non-coordinator member notifies the coordinator of such an event.

    2. total membership weight has dropped below 51%

      Seems to me that quorum is lost when the total weight is less than or equal to 50%. Put another way: quorum requires a majority, and a majority is strictly greater than 50%.
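
      In code form, the rule I have in mind (a toy sketch, not Geode's actual implementation):

      ```java
      // Toy sketch of the majority rule as I read it: quorum holds only while
      // the remaining weight is strictly more than half, so a drop to exactly
      // 50% loses quorum.
      static boolean hasQuorum(int remainingWeight, int totalWeight) {
        return 2 * remainingWeight > totalWeight;
      }
      ```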

  3. Dec 2018
    1. hosted entirely by one member

      Ah, since all data for a replicated region is stored on all members, it follows that a transaction can be run on any member.

      For a partitioned region, a transaction would have to run on the member designated as the primary member for buckets containing the data of interest, I suppose.
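
      If that's right, the API shape would be roughly this ("orders" and the key are hypothetical; on a partitioned region, every key the transaction touches would need to be co-located on this member):

      ```java
      import org.apache.geode.cache.Cache;
      import org.apache.geode.cache.CacheTransactionManager;
      import org.apache.geode.cache.CommitConflictException;
      import org.apache.geode.cache.Region;

      // Sketch only: increment a quantity inside a Geode transaction.
      void incrementInTx(Cache cache, Region<String, Integer> orders) {
        CacheTransactionManager txMgr = cache.getCacheTransactionManager();
        txMgr.begin();
        try {
          Integer qty = orders.get("order-1");
          orders.put("order-1", (qty == null ? 0 : qty) + 1);
          txMgr.commit(); // may throw CommitConflictException
        } catch (CommitConflictException e) {
          // a concurrent change won the conflict check; the tx was aborted
        } finally {
          if (txMgr.exists()) {
            txMgr.rollback(); // clean up if something failed before commit
          }
        }
      }
      ```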

    1. Modifying a value in place bypasses the entire distribution framework provided by Geode

      I think this applies regardless of the copy-on-read setting. If you need to modify a value (of an entry), you have to modify a copy and then re-set it via put(k,v) (or some other data-modifying API method).
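
      In other words, something like this, I think (PortfolioLike is a stand-in for whatever value type you store):

      ```java
      import org.apache.geode.CopyHelper;
      import org.apache.geode.cache.Region;

      // Sketch: mutate a copy, then re-put it so the change goes through the
      // distribution framework. PortfolioLike is a hypothetical value type.
      void updateStatus(Region<String, PortfolioLike> region, String key) {
        PortfolioLike copy = CopyHelper.copy(region.get(key)); // defensive copy
        copy.setStatus("active");                              // mutate the copy...
        region.put(key, copy);                                 // ...then put it back
      }
      ```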

    2. If you do not have the cache’s copy-on-read attribute set to true

      …should I infer that, if I do have copy-on-read set to true, I can change objects returned from entry access methods?
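
      My reading: with copy-on-read set to true you can safely mutate what the access methods return, because you get a copy, but the mutation won't reach the cache unless you put it back. Roughly (PortfolioLike again a hypothetical value type):

      ```java
      import org.apache.geode.cache.Cache;
      import org.apache.geode.cache.Region;

      // Sketch: with copy-on-read enabled, get() returns a copy, so mutating
      // the returned object is safe but does NOT change the cached entry.
      void demo(Cache cache, Region<String, PortfolioLike> region) {
        cache.setCopyOnRead(true);
        PortfolioLike p = region.get("k1"); // a copy, not the cached instance
        p.setStatus("changed-locally");     // invisible to the cache...
        region.put("k1", p);                // ...until re-put
      }
      ```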

  4. Nov 2018
    1. Replicated (distributed)

      This makes me think "distributed" is an alias for "replicated". But other doc pages make me think that "distributed" is a superset containing both "replicated" and "partitioned".

    1. The main choices are partitioned, replicated, or just distributed. All server regions must be partitioned or replicated

      Does this mean that only a client cache may configure a distributed region that is neither partitioned nor replicated?

    1. normal

      i.e., data policy == "normal", right? That data policy says:

      • Allows the contents in this cache to differ from other caches.
      • Data that this region is interested in is stored in local memory.
    2. performs conflict checking in the same way as for a replicated region

      If I understand this statement, then we are talking about an update event arriving at a partitioned region (on a peer JVM) or a client region (on a client JVM).

      There are, then, three kinds of region that might receive this event, I think:

      1. primary partitioned region
      2. non-primary partitioned region
      3. client region (proxy or caching proxy)

      Seems to me that the conflict checking would differ in those three cases.

    3. When a member receives an update

      I think that "receives an update" here means "receives an update event". Contrast that with client code invoking a region update method via the Geode API in a member JVM.
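
      To make the distinction concrete: "receives an update" would be what a cache listener on the member observes, as opposed to local code calling region.put(...) itself. A sketch:

      ```java
      import org.apache.geode.cache.EntryEvent;
      import org.apache.geode.cache.util.CacheListenerAdapter;

      // Sketch: this callback fires when an update *event* arrives at this
      // member, which is distinct from a thread in this JVM invoking
      // region.put(...) through the API.
      public class UpdateObserver<K, V> extends CacheListenerAdapter<K, V> {
        @Override
        public void afterUpdate(EntryEvent<K, V> event) {
          System.out.println("update for key " + event.getKey()
              + ", originated remotely? " + event.isOriginRemote());
        }
      }
      ```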

  5. Oct 2018
    1. With a redundant partitioned region, if a member that hosts primary buckets fails or is shut down, then a Geode member that hosts a redundant copy of those buckets takes over WAN distribution for those buckets.

      This is talking about failure. Does the same behavior hold during rebalancing?

    1. also

      By "also" is this contrasting regions (distributing updates) to systems distributing updates? It would help if I understood what systems meant above.

      Perhaps "system" means "cluster" and the distinction being made here is between serial and parallel replication?

    1. gemfire.resource.manager.threads

      Seems like this shouldn't be bound to the concept of a "thread". What if rebalancing a single region requires multiple threads? What if more than one region can be rebalanced concurrently by a single thread?

    1. Data and event distribution is based on a combination of the peer-to-peer and system-to-system configurations.

      This is a mysterious statement. Should it say "client/server" here instead of "system-to-system"?

  6. Sep 2018
    1. stream and aggregate the results

      Can I define a "monoidal" function in Geode such that the function can run on each partition in parallel, with results from each partition being aggregated up?
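
      Something like the sketch below is what I'm imagining, via the function execution service (SumFunction and the surrounding names are mine):

      ```java
      import java.util.List;
      import org.apache.geode.cache.Region;
      import org.apache.geode.cache.execute.Function;
      import org.apache.geode.cache.execute.FunctionContext;
      import org.apache.geode.cache.execute.FunctionService;
      import org.apache.geode.cache.execute.RegionFunctionContext;
      import org.apache.geode.cache.execute.ResultCollector;
      import org.apache.geode.cache.partition.PartitionRegionHelper;

      // Sketch: a "monoidal" sum. Each member sums its local primary buckets
      // in parallel and sends one partial result; the caller folds the partials.
      public class SumFunction implements Function<Object> {
        @Override
        public void execute(FunctionContext<Object> context) {
          RegionFunctionContext rc = (RegionFunctionContext) context;
          Region<String, Integer> local = PartitionRegionHelper.getLocalDataForContext(rc);
          long partial = 0;
          for (Integer v : local.values()) {
            partial += v;
          }
          context.getResultSender().lastResult(partial); // one partial per member
        }

        @Override
        public boolean optimizeForWrite() {
          return true; // run on primary buckets so each entry is counted once
        }

        @Override
        public String getId() {
          return "SumFunction";
        }

        // Caller side: execute over the region and fold the per-member partials.
        @SuppressWarnings("unchecked")
        public static long sum(Region<String, Integer> region) {
          ResultCollector<?, ?> collector =
              FunctionService.onRegion(region).execute(new SumFunction());
          long total = 0;
          for (Long partial : (List<Long>) collector.getResult()) {
            total += partial;
          }
          return total;
        }
      }
      ```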

    2. entity groups

      "entity groups" is mentioned in the paragraph (twice) and nowhere else on this page and nowhere in the cited paper (Helland). The paper talks about an "entity" which is a think referenced by a key. I think the "groups" here are actually groups of associations (key+entity). The grouping is based on "data affinity".

  7. Sep 2017
  8. Apr 2016
    1. I’ve long imagined a standards-based annotation layer for the web

      Me too @jon! Good luck to you!

      PS it seems like it'd be a very small step from open annotation to allowing somebody to produce a chunk of original content (that is not a reference to anything else). Then we're done :)

      :bowtie:

  9. Jun 2015
    1. Properly rewriting all the URLs in the proxied page is a tricky business

      Surely. Look at what happens to HuffPo…

      before:

      HuffPo Original

      and after:

      HuffPo via Hypothes.is proxy

      Notice that the proxied page is missing the "Iowa Straw Poll" photograph. But perhaps more importantly, it's missing the Subaru ad.

      As a result, do you think HuffPo would object to the proxied page?

  10. May 2015
  11. Mar 2015