3,755 Matching Annotations
  1. Nov 2022
    1. look at a computing engine not as a device to solve differential equations, nor to process data in any given way, but rather as an abstraction of a well-defined universe which may resemble other well-known universes to any necessary degree

      = computational universes - resemble other well-known universes

    2. Not being able to solve any one scientist's problems, they nevertheless feel that they can provide tools in which the thinker can describe his own solutions and that these tools need not treat specifically any given area of discourse

      = universal nature of information processing - requires universal solutions - where 'universal' is in the sense of a "universal machine" or 'universal function' as exemplified with LISP eval/apply mutual recursion

    1. IPFS and NDN share the same vision, that of content-addressable networks, but approach it from vastly different perspectives. NDN is a native network-layer approach, while IPFS is an application-layer approach.

      = share - vision = content-addressable networks - perspectives - network-layer - for = NDN - application-layer - for - IPFS

    1. The Bardo Thodol (Tibetan: བར་དོ་ཐོས་གྲོལ, Wylie: bar do thos grol, "Liberation Through Hearing During the Intermediate State"), commonly known in the West as The Tibetan Book of the Dead

      = Bardo Thodol - translated as = Liberation Through Hearing During the Intermediate State - commonly known - in the West - as the : Tibetan Book of the Dead

      https://upload.wikimedia.org/wikipedia/commons/e/e3/BardoThodolChenmo.jpg

    1. be able to run entire virtual worlds like actually put in a game like Minecraft um I think this isn't normally a gif but a PDF doesn't support GIF yet so I won't 00:18:41 be able to see the GIF but imagine being able to have an entire virtual world like Minecraft and all of the interactions of the players in that speed being able to be tracked with the 00:18:53 hard verifiability of um of consensus in a blockchain setting so like that's where we're headed

      = run virtual worlds - like Minecraft - all the interactions of the players - tracked with verifiability of consensus - in a blockchain setting - at speed - massive scale computations - all the shards of the world that players want to play

    2. I've been hoping to talk about this for a long time this is a really like a long-term bit of work for a lot of us um the um the consensus lab in general has been 00:00:16 um pioneering a ton of like amazing contributions to the ecosystem in terms of scaling in a bunch of different ways and this one I think is going to be one of the greatest contributions that this lab is going to make um to the whole blockchain space

      long time coming

      greatest contribution to the blockchain space

      and beyond

      = comment - seen this potential from the word GO - blockchain with its global consensus with Distributed Ledger needs Distributed Hash Tables to scale consensus and reach

    3. massive scale data science um all of this sort of coordinated and think of all the data pipelines um built with IPC

      = massive scale data science - with InterPlanetary Consensus

    4. you could do 00:12:48 traditional backends in a web 3 sort of sense um with optimistic or zero knowledge proof based verifiability which might be very secure verifiability to be a bit expensive but you you make it up in the scale out so by being able to Fan out 00:13:01 and have so many computers you can pay off a few orders of magnitude in running a traditional standard web app backend in the zero knowledge setting

      = can do = traditional back ends in web 3 - optimistic or zero knowledge proof based verifiability - could be a bit expensive - but pay off a few orders of magnitude in scale out - run subnets - in running a traditional standard web app back end - zero knowledge setting

    5. there's this consensus bottleneck where you're trying to push in tons of amounts of of transactions you everything is getting bottlenecks

      = consensus bottleneck

    6. permissionless 00:02:02 large-scale Byzantine fault tolerant networks that have an economic construction within them um it's a much stronger way of building um digital resistance and applications but we need to make them scale
      • permissionless
      • large-scale Byzantine fault tolerant
      • economic incentives
    7. why Bitcoin was an ethereum and so on were sort of like disregarded by the traditional Cloud people because it just kind of seemed crazy that you know a transactional system that could do only 00:01:49 a fraction of what your phone could do was going to run the entire monetary system

      disregarded by the traditional cloud

      transactional systems

    8. it gives you Byzantine 00:00:43 fault tolerant tolerance in a very nice scalable setting and it has very nice properties for a lot of classic distributed systems applications and so you can think of doing cluster management and large-scale computational 00:00:57 arrangements

      fault tolerance

      in scalable settings

      nice properties for distributed systems

    9. let's talk about the interplanetary 00:06:31 principle it's something that the professor Community came up with and the idea is like um you know if you remember the end-to-end principle that says that

      interplanetary principle - something the professor community came up with

      introduced by analogy to the end-to-end principle

    10. inside networks um you want to keep things dumb and stateless the endpoints have to do all 00:06:44 the work

      keep things dumb in the network

      endpoints should do all the work
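
      A minimal sketch of that division of labour (illustrative only; relay, send and receive are made-up stand-ins, not any real protocol API): the "network" just forwards opaque bytes, and the integrity work happens entirely at the endpoints.

      ```python
      import hashlib


      def relay(packet: bytes) -> bytes:
          """The 'network': dumb and stateless -- it only forwards opaque bytes."""
          return packet


      def send(payload: bytes) -> bytes:
          """Sending endpoint does the work: frame the payload with its own checksum."""
          return hashlib.sha256(payload).digest() + payload


      def receive(packet: bytes) -> bytes:
          """Receiving endpoint does the work: verify integrity before accepting."""
          digest, payload = packet[:32], packet[32:]
          if hashlib.sha256(payload).digest() != digest:
              raise ValueError("corrupted in transit")
          return payload


      assert receive(relay(send(b"hello"))) == b"hello"
      ```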

    11. if you were doing something simpler you could have a much simpler protocol if you were just kind 00:06:19 of like trying to do like um you know log machine replication or something like that or having eventually consistent structures and whatnot like if you want to be able to like have hard security um that's a much harder problem

      log replication, eventually consistent structures

    12. outside of a blockchain context because it gives you Byzantine 00:00:43 fault tolerant tolerance in a very nice scalable setting and it has very nice properties for a lot of classic distributed systems applications and so you can think of doing cluster management and large-scale computational 00:00:57 arrangements

      outside of a block chain context

    1. the fact that you got it working is only half the job 00:05:23 once the code works that's when you have to clean it no one writes clean code first nobody does because it's just too hard to get code to work so once the code works it will 00:05:37 be a mess human beings do not think in nice straight lines they don't think in if statements and while loops they cannot foresee the entire algorithm so we piece the thing together we cobble it together 00:05:50 with wire and scotch tape and then it suddenly works and we're not quite sure why and that's the moment when you say all right now i need to clean it how much time do you invest in cleaning 00:06:03 it roughly the same amount of time it took you to write it and that's the problem nobody wants to put that effort in because they think they're done when it works 00:06:13 you're not done when it works

      You are not done when it works!

    1. the morphic graphics model that I'll be working with came also John McCarthy's half-page Lisp 00:01:02 eval is just the perfect example of meta-circular programming

      = meta-circular programming
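
      A toy Python sketch of the mutually recursive eval/apply pair at the heart of McCarthy's half-page Lisp eval (only the shape of the idea, not the original Lisp; the list-based expression encoding is invented for illustration).

      ```python
      def evaluate(expr, env):
          """eval: symbols look up values, numbers are themselves, lists are applications."""
          if isinstance(expr, str):
              return env[expr]
          if isinstance(expr, (int, float)):
              return expr
          op, *args = expr
          if op == "lambda":                       # (lambda (params) body)
              params, body = args
              return ("closure", params, body, env)
          return apply_(evaluate(op, env), [evaluate(a, env) for a in args])


      def apply_(fn, args):
          """apply: built-ins run directly, closures re-enter eval with an extended env."""
          if callable(fn):
              return fn(*args)
          _tag, params, body, env = fn
          return evaluate(body, {**env, **dict(zip(params, args))})


      env = {"+": lambda a, b: a + b}
      print(evaluate([["lambda", ["x"], ["+", "x", 1]], 41], env))   # -> 42
      ```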

    1. The summum bonum is simply that numbers are ideas (mental constructs representing a perception, and in that sense, they do exist platonically). As has already been very well explained, these ideas are useful for describing the world around us, and so we continue to use and improve upon these ideas.

      numbers just ideas?

    1. “Software should be as easy to edit as a PowerPoint presentation,” Simonyi asserts. That means giving it just as intuitive an interface.

      software simple

    1. We're here at QCon London 2010 and I'm sitting here with Dan Ingalls. Dan, why don't you explain to us what you've been doing for the last 40 years?

      Dan What have you been doing for the past 40 (by now 50) years

    1. IPFS is an ambitious vision of new decentralized Internet infrastructure, upon which many different kinds of applications can be built

      = decent(ralized) Internet infrastructure

    2. Object content addressing constructs a web

      = constructs = Object content addressing = a Web - significant bandwidth optimization - untrusted content serving - permanent links - ability to make full permanent backups of - any object & its references

      = comment = for = IndyPLEX - nodes with all - outgoing and incoming - qualified links and target references

      Fundamental Unit of coherent local complete structured information pervasive and universal across all computation and communication and exchange and storage

      made permanent via IPFS

      permanence ensured by human readable composite naming conventions that encode the access and qualifying structure of links, forming shapes

      self-organizing, self-revealing, co-evolving content in contexts exchanged in trust networks with full provenance
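
      A minimal sketch of what object content addressing buys (schematic only, not IPFS's actual formats; put, get and the in-memory store are invented stand-ins): the address is derived from the bytes themselves, so any untrusted holder can serve an object and any receiver can verify it against its permanent link.

      ```python
      import hashlib
      import json

      store = {}   # stand-in for any untrusted peer or cache


      def put(obj: dict) -> str:
          """The address is the hash of the canonicalised bytes, not a location."""
          data = json.dumps(obj, sort_keys=True).encode()
          address = hashlib.sha256(data).hexdigest()
          store[address] = data
          return address


      def get(address: str) -> dict:
          """Whoever serves the bytes, the receiver verifies them against the address."""
          data = store[address]
          assert hashlib.sha256(data).hexdigest() == address, "content does not match link"
          return json.loads(data)


      link = put({"title": "note", "body": "hello"})
      print(link, get(link))
      ```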

    3. Without it, all communication of new content must happen off-band, sending IPFS links.
      • all communication must happen off-band
      • by sending IPFS links

      = comment = for : IndyWeb - instead of mutable names - rely on off-band interpersonal trust networks - themselves maintained using IPNS - instead of making data mutable, make the capability in use mutable, with permanent names

    4. represent arbitrary datastructures, e.g. file hierarchies and communication systems.

      = arbitrary = datastructures - file hierarchies - communication systems

      = consider - for = IndyWeb - linked computational capabilities / interpretations

    5. S/Kademlia [1] extends Kademlia to protect against malicious attack

      = section = S/Kademlia - extends Kademlia - protect against = malicious nodes

    6. finding nearby data without querying distant nodes” [5] and greatly reducing the latency of lookups.

      = finding nearby = data - without - querying distant = nodes

    7. Coral relaxes the DHT API

      = relaxes = DHT API = Coral - sloppy in DSHT - need only a single working peer - can distribute only a subset of the values to the nearest nodes - avoiding hot-spots

    8. Kademlia stores values in nodes whose ids are “nearest

      = stores values = Kademlia - in nearest nodes - not application data locality - ignores far nodes that may have the data

      = stores addresses of peers = Coral - can provide data blocks

    9. Efficient lookup through massive networks: queries on average contact ⌈log₂(n)⌉ nodes. (e.g. 20 hops for a network of 10,000,000 nodes)

      = efficient lookup - through - massive networks - queries on average contact ⌈log₂(n)⌉ nodes - 20 hops for a network of 10 million nodes
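
      A small sketch of the XOR metric behind that logarithmic figure (a toy with 8-bit ids, not the real routing-table logic): peers are filed by the bit length of their XOR distance, and each lookup step lands on a peer sharing one more leading bit with the key, which is where the roughly log2(n) hop count comes from.

      ```python
      def xor_distance(a: int, b: int) -> int:
          """Kademlia's metric: the distance between two node ids is simply a XOR b."""
          return a ^ b


      def bucket_index(node: int, peer: int) -> int:
          """Peers are filed by the bit length of their distance (their k-bucket)."""
          return xor_distance(node, peer).bit_length()


      node, key = 0b1011_0110, 0b1011_1101        # toy 8-bit ids for readability
      print(bin(xor_distance(node, key)))          # 0b1011 -> they share the top 4 bits
      print(bucket_index(node, key))               # 4: look in bucket 4 for a closer peer

      # Each hop lands on a peer sharing at least one more leading bit with the key,
      # so the bucket index shrinks every step -- hence roughly log2(n) hops overall.
      peer = 0b1011_1010                           # a contact taken from bucket 4
      assert bucket_index(peer, key) < bucket_index(node, key)
      ```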

    10. The central IPFS principle is modeling all data as part of the same Merkle DAG

      = central - principle - is - modelling all data as - part of the same Merkle DAG
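
      A schematic sketch of that principle (not IPFS's actual object format; put_node and the in-memory store are invented for illustration): every node is addressed by the hash of its payload plus the hashes it links to, so files, directories and commit-like records all live in one hash-linked DAG.

      ```python
      import hashlib
      import json

      objects = {}   # content-addressed object store


      def put_node(data: str, links=()) -> str:
          """A Merkle DAG node: a payload plus hash-links to child nodes."""
          node = {"data": data, "links": sorted(links)}
          cid = hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()
          objects[cid] = node
          return cid


      # A file hierarchy and a commit-like record live in the same DAG, joined only by hashes.
      readme = put_node("hello world")
      source = put_node("print('hi')")
      tree = put_node("directory", links=[readme, source])
      commit = put_node("commit: initial import", links=[tree])

      # Changing any leaf changes every hash above it, so the root hash pins the whole tree.
      print(commit, objects[commit])
      ```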

    11. Careful interface-focused integration yields a system greater than the sum of its parts.

      = careful interface-focused integration - yields = system greater than the sum of its parts

    12. New solutions inspired by Git are emerging, such as Camlistore [?], a personal file storage system, and Dat [?]

      = new solutions - inspired by = Git - Camlistore - renamed = Perkeep = personal file storage system = Dat

      = comment - missing = Named Data Networks

    13. What remains to be explored is how this datastructure can influence the design of high-throughput oriented file systems, and how it might upgrade the Web itself.

      = explore = data structures - influence = design - of - high throughput oriented file systems - might upgrade the Web

    14. its content addressed Merkle DAG data model enables powerful file distribution strategies

      = has = Git - content addressed = Merkle DAG = data model

    15. Orthogonal to efficient data distribution, version control systems have managed to develop important data collaboration workflows

      = orthogonal to data distribution = collaboration workflows

    16. Pressed by critical features and bandwidth concerns, we have already given up HTTP for different data distribution protocols.

      HTTP already given up

      for other distribution protocols

    17. boiled down to “lots of data, accessible everywhere.

      "lots of data, accessible everywhere"

      = comment : Data - Godlike - omnipresent - eternal - omnipotent

    18. But we are entering a new era of data distribution with new challenges

      = new - challenges = data distribution - hosting petabyte datasets - computing on large = data - across = organizations - high-volume, high-definition, on-demand, realtime media streams - versioning and linking massive datasets - preventing accidental disappearance of important files

    19. What is lacking is upgrading design: enhancing the current HTTP web, and introducing new functionality without degrading user experience

      = lacking = upgrading design

    20. evolving Web infrastructure is near-impossible, given the number of backwards compatibility constraints and the number of strong

      = claim - evolving Web infrastructure - near-impossible

      = cause - backward-compatibility constraints - strong parties invested

    21. fails to take advantage of dozens of brilliant file distribution techniques invented in the last fifteen years.

      = HTTP - fails to take advantage of - dozens of = file distribution techniques - emerged in the last 15 years

    22. HTTP is the most successful “distributed system of files” ever deployed

      = HTTP - is - the most successful - = "distributed system of files" - - claim : ever deployed

    23. no general file-system has emerged that offers global, low-latency, and decentralized distribution

      = - no = general file system - emerged - with - global - low-latency - decentralized distribution

      = IPFS - is - global - low-latency - decentralized distribution network

    24. deployed large file distribution systems supporting over 100 million simultaneous users

      = large file distribution systems - - supporting - over : 100 million simultaneous users

    25. IPFS has no single point of failure, and nodes do not need to trust each other

      = IPFS - has - no single point of failure

      = nodes - do not need to - trust each other - Trust but verify

    26. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace

      = IPFS - combines - = distributed hashtable - = incentivized block exchange - = self-certifying namespace

      = - comment - Once the names are exchanged within - a network of parties of interest - the original source of the names may even go away! - or on need can be recreated

      This is key to permanence!
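
      A rough sketch of the self-certifying part (in the spirit of IPNS, not its real record format; the content path is hypothetical, and the third-party cryptography package supplies Ed25519): a name is the hash of a public key, so anyone holding only the name can verify a record published under it, with no external authority.

      ```python
      import hashlib
      from cryptography.hazmat.primitives import serialization
      from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          Ed25519PrivateKey, Ed25519PublicKey,
      )

      # The name is derived from the public key itself, so it certifies its own owner.
      key = Ed25519PrivateKey.generate()
      pub = key.public_key().public_bytes(
          encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
      )
      name = hashlib.sha256(pub).hexdigest()

      # A record published under the name is just (value, signature, public key).
      value = b"/ipfs/<some-content-hash>"        # hypothetical content path
      record = {"value": value, "sig": key.sign(value), "pub": pub}

      # Anyone holding only the name can check the record without any external authority:
      assert hashlib.sha256(record["pub"]).hexdigest() == name     # the key matches the name
      Ed25519PublicKey.from_public_bytes(record["pub"]).verify(record["sig"], record["value"])
      print("record under", name[:16], "verified")
      ```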

  2. web.archive.org
    1. Kernel is eight weeks of conversation in a "block" of 250 brilliant people intended to connect creativity with care

      = what is? = Kernel - 8 weeks of conversations - 250 people - intended to - connect = creativity - with - care

    1. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.

      = have = the duty - to find = a successor

    1. The web is more a social creation than a technical one

      the current web is shaped by the properties and affordances that as a protocol it engenders.

      Location Addressing is a key feature and its naming system shaped it, and constrains all our nontechnical endeavours

    1. works similar to Android Intents, and it's a good example of how Capyloon is putting the user's experience first and creating a permissionless interface

      a new way of doing Web Intents

    2. UCANs and WNFS and how they can enable decentralized identity encrypted at rest file storage
      • User Controlled Authorization Networks
      • WebNative File System
      • decent(ralized) identity
    3. only requires HTML, CSS, and JavaScript to work. This made apps much more portable and didn't require developers to specialize in one walled garden.

      not walled gardens

    4. Learn about Capyloon, a resurrected version of Firefox OS built on top of the decentralized web technologies like the IPFS protocol.

      resurrected Firefox OS

    1. companies start with less centralized, federated solid data storage and sharing pods, and a single knowledge-graph enabled data model for all providers in the chain.

      = tweet : pipeline automation provides short-term kludges, long-term intractable complications, where each data source has its quirks

      @SoLid

      federated solid pods

    2. Data analytics pipeline best practices: Data governance Data analytics pipelines bring a plethora of benefits, but ensuring successful data initiatives also means following best practices for data governance in analytics pipelines.

      data analytics pipelines

    3. Without a transformed data-centric architecture, companies could unwittingly add to the technical and data debt they already face

      data debt

      in addition to technical debt

    1. IPLD is the data model of the content-addressable web. It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD.

      = for = Conceptipedia

      = what - is? = IPLD - data model - for the = content-addressable = web - treat = hash-linked data structures - as subsets of a - unified = information space - unifying - all = data models - that - link = data - with = hashes - as = instances - of - IPLD

    2. Through IPLD, links can be traversed across protocols, allowing you to explore data regardless of the underlying protocol

      = why = IPLD - links can be traversed across protocols - explore data regardless of the underlying protocols
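
      A schematic sketch of that property (the resolver and the two stores are invented stand-ins, though the {"/": cid} link form follows IPLD's DAG-JSON convention): a link is just a hash, so traversal does not care which protocol or store the bytes come from.

      ```python
      import hashlib
      import json

      store_a, store_b = {}, {}   # two different 'protocols' backing one information space


      def put(obj: dict, store: dict) -> str:
          raw = json.dumps(obj, sort_keys=True).encode()
          cid = hashlib.sha256(raw).hexdigest()
          store[cid] = raw
          return cid


      def resolve(cid: str) -> dict:
          """Hypothetical resolver: the hash identifies the data, not where it lives."""
          for store in (store_a, store_b):
              if cid in store:
                  return json.loads(store[cid])
          raise KeyError(cid)


      # DAG-JSON style links: {"/": "<cid>"} -- traversable regardless of the backing store.
      author = put({"name": "Ada"}, store_b)
      post = put({"title": "hello", "author": {"/": author}}, store_a)

      doc = resolve(post)
      print(doc["title"], "->", resolve(doc["author"]["/"])["name"])
      ```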


    1. We do not put a subject code in each sense definition (as [Guthrie et al., 1992] do).

      = conceptual move = no subject code - but human readable stemmed names

    2. stemmed sense definitions in LDOCE, represented as Prolog database structures such as

      = stemmed sense definitions in LDOCE Prolog database structures

    3. The conventions we use are: a) Each word to be disambiguated is the functor of a predicate, containing a list with stemmed sense definitions (in lists)

      = gloss = stemmed sense definitions named association lists
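
      The paper's convention is Prolog; a rough Python transliteration of the same idea (wholly illustrative: the example word, its senses and the crude stemmer are made up) keeps each ambiguous word as the key for a list of its stemmed sense definitions.

      ```python
      def stem(word: str) -> str:
          """Crude suffix stripping, standing in for the stemming applied to LDOCE glosses."""
          for suffix in ("ing", "ed", "es", "s"):
              if word.endswith(suffix) and len(word) > len(suffix) + 2:
                  return word[: -len(suffix)]
          return word


      def stemmed_senses(definitions: list[str]) -> list[list[str]]:
          """Each sense definition becomes a list of stems (a list of lists per word)."""
          return [[stem(w) for w in d.lower().split()] for d in definitions]


      # Hypothetical dictionary entry: the word to be disambiguated keys its stemmed senses,
      # mirroring the paper's convention of a predicate per word holding the sense lists.
      lexicon = {
          "bank": stemmed_senses(
              ["land along the side of a river", "organization that keeps and lends money"]
          )
      }
      print(lexicon["bank"])
      ```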

    1. Lexical disambiguation using Constraint Handling in Prolog (CHIP) George C. Demetriou 1993 Proceedings of the sixth conference on European chapter of the Association for Computational Linguistics -  

      original

    1. I would also like to express my appreciation to Dr Gyuri Lajos for the organisational support and advice on CHIP programming and Mr Clive Souter for his useful recommendations.  ... 

      favorable mention permalink

    1. IBM’s design of the single-level storage was originally conceived and pioneered by Frank Soltis in the late 1970s as a way to build a transitional implementation to computers with 100% solids state memory. The thinking at the time was that disk drives would become obsolete, and would be replaced entirely with some form of solid state memory.

      pioneered by = Frank Soltis

    1. Multics ("Multiplexed Information and Computing Service") is an influential early time-sharing operating system based on the concept of a single-level memory.[4][5]

      information and computing service

      single-level memory

    1. One of the interesting features of NLS was that its user interface was parametric and could be supplied by the end user in the form of a "grammar of interaction" given in their compiler-compiler TreeMeta. This was similar to William Newman's early "Reaction Handler" [Newman 66] work in specifying interfaces by having the end-user or developer construct through tablet and stylus an iconic regular expression grammar with action procedures at the states (NLS allowed embeddings via its context free rules)

      Tree-Meta
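
      A toy sketch of the "grammar of interaction" idea (not TreeMeta or NLS itself; the states, patterns and actions are invented): the interface is plain data, so an end user could in principle supply a different grammar without touching the program.

      ```python
      import re

      # The 'grammar': states, input patterns, next states and action procedures -- all data.
      grammar = {
          "idle": [
              (r"select (\w+)", "selected", lambda m: print("selected", m.group(1))),
          ],
          "selected": [
              (r"delete", "idle", lambda m: print("deleted current selection")),
              (r"rename (\w+)", "selected", lambda m: print("renamed to", m.group(1))),
          ],
      }


      def run(commands: list[str], state: str = "idle") -> str:
          """Drive the interaction purely from the grammar table."""
          for cmd in commands:
              for pattern, next_state, action in grammar[state]:
                  match = re.fullmatch(pattern, cmd)
                  if match:
                      action(match)
                      state = next_state
                      break
          return state


      run(["select note7", "rename draft", "delete"])
      ```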

    1. Camerata literally means a small orchestra or choir. This Camerata was a diverse group of people who gathered and worked on a common problem: they were bored with polyphony, the esteemed music of their day.

      = camerata - diverse group of people - musicians, artists, astrologers, philosophers, scientists - met informally - people with diverse skills and expertise working together - gathered and worked on common problems

    1. Twitter’s collapse into an unusable wreck is some time off, the engineer says, but the telltale signs of process rot are already there. It starts with the small things: “Bugs in whatever part of whatever client they’re using; whatever service in the back end they’re trying to use. They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.”

      into an unusable wreck.

    1. The main requirement was a programming system for manipulating expressions representing formalized declarative and imperative sentences so that the Advice Taker system could make deductions.

      =- requirement = a programming system for - manipulating expressions - represent formalized - declarative & imperative sentences - = Advice Taker - make = deductions
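
      A tiny sketch in that spirit (not the Advice Taker itself; the facts and the single transitivity rule are invented for illustration): sentences are plain symbolic expressions, and a small forward chainer makes deductions by matching them.

      ```python
      # Declarative sentences as symbolic expressions (tuples).
      facts = {("at", "I", "desk"), ("at", "desk", "home"), ("at", "home", "county")}


      def deduce(facts: set) -> set:
          """One piece of 'advice': at(x, y) and at(y, z) implies at(x, z)."""
          derived, changed = set(facts), True
          while changed:
              changed = False
              for (_r1, x, y) in list(derived):
                  for (_r2, y2, z) in list(derived):
                      if y == y2 and ("at", x, z) not in derived:
                          derived.add(("at", x, z))
                          changed = True
          return derived


      print(("at", "I", "county") in deduce(facts))   # True: deduced, not stated
      ```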

    1. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking

      = computing machinery - will do - routinizable work - prepare the way for insights & - decisions - technical - scientific thinking

    2. enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs.

      = enable man and computers to - cooperate in making decisions & - controlling complex situations - without = inflexible dependence on predetermined programs

    3. to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems,

      = let = computers - facilitate - = formulative thinking - as they now facilitate - solutions to = formulated problems

    1. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them,

      The downfall of computing was when they invented and enforced copyright on systems created in the open.

      As long as there is a licence we all lose!

      Then you can privatize 20 years of development in the commons called Linux

      If you can't beat them, run it inside a proprietary operating system. Better still, rely on emulators for old Windows so you no longer need to worry about backward compatibility


    1. How to Prepare for the End of Card Payments. Cash is safe—for now. Contactless payment methods, like Apple Pay or Google Wallet, are more of a threat to the existence of physical cards.

    1. “Wealth is most essentially knowledge,” Mr. Gilder says. “Let’s face it, the caveman had access to all the materials we have today. Therefore, economic growth is learning, manifested in ‘learning curves’ of collapsing costs driven by markets.” Yet these learning curves get waved away by economists. Mr. Gilder says information, not materials, drives growth: “Crash a car and all its value disappears, though every molecule remains.”

      = claim = Wealth is most essentially knowledge - = mutual learning - = symmathesy

      is =Symmathesy

    1. therefore that i i think that we all live on the intel instruction set there's a whole lot would seem to me 00:27:08 that the the way we push towards uh more diversity of of structure of algorithmic rules and and methods that actually then i suppose tie those together so a lot of these interoperability protocol 00:27:21 solutions polka dot cosmos and so forth so listen to every smart contract essentially a set of algorithmic rules uh i mean that's right

      live on the intel instruction set

    2. the ethereum virtual machine being this this base layer he thinks of the algorithm he says i 00:26:31 think of this sort of algorithmic dependency as being something like you know animal farm there's there's this one set of rules that ultimately is the base layer around which we are all compelled to live by right 00:26:43 and and that's i just found that a very disturbing way of thinking about it and it struck me that as much as you know there are great intentions behind uh all of the core developers i would imagine or most of the core developers in a lot of these decentralized systems 00:26:55 and as much as it is an open source system there is still a lot of bias that gets baked into algorithms

      bias baked into the algorithms

    3. the vision there is that instead of technology enabling a small set of equity owners to stack up value 00:25:55 more quickly and larger than ever in history uh governance can be decentralized on these platforms and people can have much greater ownership much greater agency on lots of these different networks that there's 00:26:07 pickup participating the trust layer component of this right because you know i think uh you're right there's a lot more built into this system that will hopefully prevent us from 00:26:19 ending up in some other centralized world

      equity owners stack up value

    1. We intentionally do not want to reproduce "awesome lists" here that present you with hundreds of links no-one ever reads. Such lists often induce anxiety rather than providing

      = - respond - could have those "awesome list" - as hypermapped territories - share = curated trails - designed for specific - = purpose, intent and audience, learning objectives

    2. The curated briefs are much more paradoxical documents in that they tend to be quite long. Do I contradict myself? Very well, I contradict myself. I am large, I contain multitudes.

      paradoxical documents

      I contain multitudes

  3. www.kernel.community
    1. how to free the shared record of human knowledge from closed, rent-seeking corporations and extricate ourselves from an extractive attention economy.
      • free the = shared record - of = human knowledge
      • from - closed, rent-seeking = corporations
      • extricate = ourselves
      • from extractive = attention economy
  4. www.kernel.community
    1. decentralization moving from something that's centralized to something that's no

      decentralization

      =- comment : more than that - ambient - whole - with emergent properties - and self-organization

    2. the Internet is humanity's most important technology it's our shared nervous 00:01:07 system our shared brain
      • humanity's most important technology
      • shared nervous system