1,740 Matching Annotations
  1. Apr 2023
    1. Responsive Image Gallery How to use CSS media queries to create a responsive image gallery that will look good on desktops, tablets and smart phones.
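
      A minimal sketch of the technique, assuming the images sit in a hypothetical `.gallery` container (class name and breakpoints are illustrative):

      ```css
      /* Four columns on desktops, two on tablets, one on phones. */
      .gallery { display: grid; grid-template-columns: repeat(4, 1fr); gap: 8px; }
      .gallery img { width: 100%; height: auto; }

      @media (max-width: 900px) { .gallery { grid-template-columns: repeat(2, 1fr); } }
      @media (max-width: 600px) { .gallery { grid-template-columns: 1fr; } }
      ```
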
    1. These systems provide quite powerful tools for automatic reasoning, but encoding many kinds of knowledge using their rigid formal representations requires significant, and often completely infeasible, amounts of effort.

    1. There are a few obvious objections to this mechanism. The most serious objection is that duplicate information must be maintained consistently in two places. For example, if the conference organizers decide to change the abstracts deadline from 10 August to 15 August, they'll have to make that change both in the META element in the HEAD and in some human-readable area of the BODY.

      Microdata addresses this.
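
      A minimal microdata sketch of what that looks like (the vocabulary URL, property name, and date are made up for illustration): the machine-readable date lives inside the human-visible text, so there is only one place to update.

      ```html
      <p itemscope itemtype="https://example.org/vocab/Conference">
        Abstracts are due
        <time itemprop="abstractsDeadline" datetime="2023-08-15">15 August</time>.
      </p>
      ```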

    1. And then, of course, browsers are themselves being likened to operating systems. Walled gardens, with no efficiency to speak of, with very little freedom, with too much leverage from the browser vendors. A perfect exploitation machine for keeping you within itself, all while it will do anything to harvest information about your activities, so it can show you some ads as soon as it can. An operating system alright. Yeah, just relax and no harm will come to you.
    1. Moreover, browsers are not the right way to be using web anyway. See my thought on this in the Data-Supplied Web article.
    2. The only advantage of building something in a web browser is that you can view websites right in them. If your task is not to display a webpage, or build a website, if CSS+HTML is not the limit of your imagination, then there's no reason to be building complex shit in the web browser! I can see hitching a web browser ride as a ubiquitous cross-platform graphical backend (over WebGL) if you are willing to deal with all the overhead and impact on speed. But with libraries like SDL and Skia (which browsers use), that seems kind of pointless.
    1. something so ephemeral as a URL

      Well, they're not supposed to be ephemeral. They're supposed to be as durable as the title of whatever book you're talking about.

    1. Real Graph is a model which predicts the likelihood of engagement between two users. The higher the Real Graph score between you and the author of the Tweet, the more of their tweets we'll include.

      ...who thought this was a good idea??

    2. I realized after fully digesting this document that it effectively outlines a mechanism of anti-discovery.

  2. Mar 2023
    1. the consolidation of the main actors of the Web, and more broadly the concentration of the producers of programs (such as Google, whose slightest outage is enough to disrupt a large part of how the networks function), creates the risk of a toll-gated Web in which every experience would be anticipated and calculated

      The big problem with centralizing programs and the entities that produce them: homogenization of usages, of behaviors, and, recursively, of the programs themselves; dependence on third-party structures (whose commercial interests often conflict with users' needs).

    1. Problem details for HTTP APIs. HTTP status codes are sometimes not sufficient to convey enough information about an error to be helpful. RFC 7807 defines simple JSON and XML document formats to inform the client about a problem in an HTTP API. It's a great starting point for reporting errors in your API. It also defines the application/problem+json and application/problem+xml media types.
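
      For reference, a typical application/problem+json response, closely following the out-of-credit example in RFC 7807 (values are illustrative):

      ```http
      HTTP/1.1 403 Forbidden
      Content-Type: application/problem+json

      {
        "type": "https://example.com/probs/out-of-credit",
        "title": "You do not have enough credit.",
        "status": 403,
        "detail": "Your current balance is 30, but that costs 50.",
        "instance": "/account/12345/msgs/abc"
      }
      ```
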
    1. ALIR - an interactive linear algebra textbook produced with PreTeXt (presentation in French). Link to the OER (external link). fabriqueREL (2022-23)
    1. Streaming across worker threads

      ```js
      // Main thread: create a web ReadableStream and transfer it to a worker.
      // getSomeSource() stands in for whatever underlying source you have.
      import { ReadableStream } from 'node:stream/web';
      import { Worker } from 'node:worker_threads';

      const readable = new ReadableStream(getSomeSource());

      const worker = new Worker('/path/to/worker.js', {
        workerData: readable,
        transferList: [readable],
      });
      ```

      ```js
      // Inside the worker: the transferred stream arrives as workerData.
      const { workerData: stream } = require('worker_threads');

      const reader = stream.getReader();
      reader.read().then(console.log);
      ```

    1. The common perception of the Web as a sui generis medium is also harmful. Conceptually, the most applicable standards for Web content are just the classic standards for written works generally. But because it's embodied in a computer, people end up applying the standards they have in mind for e.g. apps.

      You check out a book from the library. You read it and have a conversation about it. Your conversation partner later asks you to tell them the name of the book, so you do. Then they go to the library and try to check it out, but the book they find under that name has completely different content from what you read.

  3. Feb 2023
  4. tantek.com
    Five years ago last Monday, the @W3C Social Web Working Group officially closed^1. Operating for less than four years, it standardized several foundations of the #fediverse & #IndieWeb: #Webmention #Micropub #ActivityStreams2 #ActivityPub

    Each of these has numerous interoperable implementations which are in active use by anywhere from thousands to millions of users. Two additional specifications also had several implementations as of the time of their publication as W3C Recommendations (which you can find from their Implementation Reports linked near the top of each spec). However today they’re both fairly invisible "plumbing" (as most specs should be) or they haven’t picked up widespread use like the others: #LinkedDataNotifications (LDN) #WebSub

    To be fair, LDN was only one building block in what eventually became SoLiD^2, the basis of Tim Berners–Lee’s startup Inrupt. However, in the post Elon-acquisition of Twitter and subsequent Twexodus, as Anil Dash noted^3, “nobody ran to the ’web3’ platforms”, and nobody ran to SoLiD either.

    The other spec, WebSub, was roughly interoperably implemented as PubSubHubbub before it was brought to the Social Web Working Group. Yet despite that implementation experience, a more rigorous specification that fixed a lot of bugs, and a test suite^4, WebSub’s adoption hasn’t really noticeably grown since. Existing implementations & services are still functioning though. My own blog supports WebSub notifications for example, for anyone that wants to receive/read my posts in real time.

    One of the biggest challenges the Social Web Working Group faced was with so many approaches being brought to the group, which approach should we choose? As one of the co-chairs of the group, with the other co-chairs, and our staff contacts over time, we realized that if we as chairs & facilitators tried to pick any one approach, we would almost certainly alienate and lose more than half of the working group who had already built or were actively interested in developing other approaches.

    We (as chairs) decided to do something which very few standards groups do, and for that matter, have ever done successfully. From 15+ different approaches, or projects, or efforts that were brought^5 to the working group, we narrowed them down to about 2.5 which I can summarize as:

    1. #IndieWeb building blocks, many of which were already implemented, deployed, and showing rough interoperability across numerous independent websites
    2. ActivityStreams based approaches, which also demonstrated implementability, interoperability, and real user value as part of the OStatus suite, implemented in StatusNet, Identica, etc.
    2.5 "something with Linked Data (LD)" — expressed as a 0.5 because there wasn’t anything user-visible “social web” with LD working at the start of the Working Group, however there was a very passionate set of participants insisting that everything be done with RDF/LD, despite the fact that it was less of a proven social web approach than the other two.

    As chairs we figured out that if we were able to help facilitate the development of these 2.5 approaches in parallel, nearly everyone who was active in the Working Group would have something they would feel like they could direct their positive energy into, instead of spending time fighting or tearing down someone else’s approach. It was a very difficult social-technical balance to maintain, and we hit more than a few bumps along the way.
    However we also had many moments of alignment, where two (or all) of the various approaches found common problems, and either identical or at least compatible solutions. I saw many examples where the discoveries of one approach helped inform and improve another approach. Developing more than one approach in the same working group was not only possible, it actually worked.

    I also saw examples of different problems being solved by different approaches, and I found that aspect particularly fascinating and hopeful. Multiple approaches were able to choose & prioritize different subsets of social web use-cases and problems to solve from the larger space of decentralized social web challenges. By doing so, different approaches often explored and mapped out different areas of the larger social web space.

    I’m still a bit amazed we were able to complete all of those Recommendations in less than four years, and everyone who participated in the working group should be proud of that accomplishment, beyond any one specification they may have worked on.

    With hindsight, we can see the positive practical benefits from allowing & facilitating multiple approaches to move forward. Today there is both a very healthy & growing set of folks who want simple personal sites to do with as they please (#IndieWeb), and we also have a growing network of Mastodon instances and other software & services that interoperate with them, like Bridgy Fed^6. Millions of users are posting & interacting with each other daily, without depending on any large central corporate site or service, whether on their own personal domain & site they fully control, or with an account on a trusted community server, using different software & services.

    Choosing to go from 15+ down to 2.5, but not down to 1 approach turned out to be the right answer, to both allow a wide variety^7 of decentralized social web efforts to grow, interoperate via bridges, and frankly, socially to provide something positive for everyone to contribute to, instead of wasting weeks, possibly months in heated debates about which one approach was the one true way.

    There’s lots more to be written about the history of the Social Web Working Group, which perhaps I will do some day. For now, if you’re curious for more, I strongly recommend diving into the group’s wiki https://www.w3.org/wiki/Socialwg and its subpages for more historical details. All the minutes of our meetings are there. All the research we conducted is there.

    If you’re interested in contributing to the specifications we developed, find the place where that work is being done, the people actively implementing those specs, and even better, actively using their own implementations^8. You can find the various IndieWeb building blocks living specifications here:
    * https://spec.indieweb.org/
    And discussions thereof in the development chat channel:
    * https://chat.indieweb.org/dev
    If you’re not sure, pop by the indieweb-dev chat and ask anyway! The IndieWeb community has grown only larger and more diverse in approaches & implementations in the past five years, and we regularly have discussions about most of the specifications that were developed in the Social Web Working Group.
    This is day 33 of #100DaysOfIndieWeb #100Days
    ← Day 32: https://tantek.com/2023/047/t1/nineteen-years-microformats
    → 🔮

    Post Glossary:
    ActivityPub: https://www.w3.org/TR/activitypub/
    ActivityStreams2: https://www.w3.org/TR/activitystreams-core/ https://www.w3.org/TR/activitystreams-vocabulary/
    Linked Data Notifications: https://www.w3.org/TR/ldn/
    Micropub: https://micropub.spec.indieweb.org/
    Webmention: https://webmention.net/draft/
    WebSub: https://www.w3.org/TR/websub/

    References:
    ^1 https://www.w3.org/wiki/Socialwg
    ^2 https://www.w3.org/wiki/Socialwg/2015-03-18-minutes#solid
    ^3 https://mastodon.cloud/@anildash/109299991009836007
    ^4 https://websub.rocks/
    ^5 https://indieweb.org/Social_Web_Working_Group#History
    ^6 https://tantek.com/2023/008/t7/bridgy-indieweb-posse-backfeed
    ^7 https://indieweb.org/plurality
    ^8 https://indieweb.org/use_what_you_make

    - Tantek
    1. they had to star foreigners and be about things I could not identify with. Well, that changed when I discovered African books. There were not many available, and they were not as easy to find as the foreign ones. But thanks to writers like Chinua Achebe and Camara Laye, my perception of literature changed. I understood that people like me could also exist in literature, girls with chocolate-colored skin whose curly hair did not fall into ponytails. I began to write about things I recognized.

      PDF text

    1. My Fifth Year as a Bootstrapped Founder

      My Fifth Year as a Bootstrapped Founder
      February 10, 2023 • 12-minute read • annual review • tinypilot

      Five years ago, I quit my job as a developer at Google to create my own bootstrapped software company.

      For the first few years, all of my businesses flopped. None of them earned more than a few hundred dollars per month in revenue, and they all had negative profits.

      Halfway through my third year, I created a device called TinyPilot. It allows users to control their computers remotely without installing any software. The product quickly caught on, and it’s been my main focus ever since.

      In 2022, TinyPilot generated $812k in revenue, a 76% increase from 2021.

      In this post, I’ll share what I’ve learned about being a bootstrapped founder from my fifth year at it.

      Previous updates
      - My First Year as a Solo Developer
      - My Second Year as a Solo Developer
      - My Third Year as a Solo Developer
      - My Fourth Year as a Bootstrapped Founder

      Highlights from the year

      TinyPilot grew annual revenue to $812k

      Income/Expense | 2021 | 2022 | Change
      Sales | $459,529 | $807,459 | +$347,930 (+76%)
      Credit card rewards | $2,241 | $4,327 | +$2,086 (+93%)
      Raw materials | -$224,046 | -$333,656 | +$109,610 (+49%)
      Payroll | -$142,744 | -$206,187 | +$63,443 (+44%)
      Electrical engineering consulting | -$28,662 | -$124,643 | +$95,981 (+335%)
      Advertising | -$3,873 | -$51,764 | +$47,891 (+1,237%)
      Web design / branding | -$15,931 | -$30,215 | +$14,284 (+90%)
      Postage | -$24,227 | -$30,779 | +$6,552 (+27%)
      Cloud services | -$5,553 | -$7,865 | +$2,312 (+42%)
      Office space | -$4,400 | -$6,600 | +$2,200 (+50%)
      Equipment | -$2,083 | -$5,915 | +$3,832 (+184%)
      Everything else | -$4,902 | -$8,183 | +$3,281 (+67%)
      Net profit | $5,349 | $5,979 | +$630 (+12%)

      While it sounds impressive to grow revenue by $350k, it's a little less exciting that I'm only walking away with $6k in profit. I don't pay myself a salary, so $6k is the full amount I earned from the business in 2022. Still, I'm excited about these numbers and what they mean for 2023.

      One of the major cost increases was electrical engineering. Throughout 2021, TinyPilot’s electrical engineering vendor was struggling to keep up with TinyPilot’s growth. In late 2021, I switched to a new vendor that fits our needs better, but they cost three times as much.

      The ongoing chip shortage forced us into frequent redesigns, which bloated costs in engineering hours and raw materials. We were often in a race to redesign a circuit board before we ran out of our existing version, so we repeatedly paid a premium to expedite the process.

      We finally escaped the redesign treadmill in September. I’m hopeful that our fourth quarter results will reflect the coming year. Our profit was $28.6k for the quarter, so if we average $9.5k per month in 2023, I’ll be happy.

      TinyPilot got a new website

      When I launched TinyPilot in 2020, I told myself the website and logo were just placeholders. Then, things took off so quickly that I never had time to replace them.

      In 2022, I finally hired a design agency to create a new logo and redesign the website.

      (Screenshots of the old and new landing pages) Before and after the TinyPilot website redesign

      I wrote previously about how frustrating and expensive it was working with the design agency, but I’m pleased with the result. My old website looked like a hobby project, and the new design looks like a real company. I suspect that at least a portion of my increased sales resulted from the new design.

      The TinyPilot team grew from six people to seven

      At the end of 2021, the TinyPilot team was:

      - Me, the sole founder
      - Three part-time software developers
      - Two part-time local staff who handle assembling devices and fulfilling orders
        - One of whom also handled customer service

      By the end of 2022, we had added two support engineers and adjusted responsibilities, so the team is now:

      - Me, the sole founder
      - Two part-time software developers
      - Two part-time local staff who handle assembling devices and fulfilling orders
        - Both now work on customer service
      - Two part-time support engineers

      Adding the support engineers felt like finding the missing piece of the puzzle. Before they joined, I was the only person handling technical support, and it occupied about 20% of my time. Now, I spend less than 5% of my time on support requests, and customers receive faster support.

      The support engineers also do things I didn’t have time for, like investigating complex bugs, writing documentation, and improving our diagnostic tools.

      Growing the team stretched my skills as a manager. In 2021, TinyPilot’s workflows were fairly simple. Almost everyone did their work as a single-person unit. The results either went directly to me or to a customer. When employees needed to coordinate with each other, it was always among teammates of the same role.

      Integrating support engineers meant figuring out how different teams work together. How do support requests work when they require cooperation between fulfillment staff and support engineers? What’s the feedback loop between the support engineers and the dev team?

      PicoShare became my fastest-growing project

      One of my pet peeves in the last few years is how difficult it is to share a single file with cloud storage providers like Google Drive or Dropbox. They won’t give you a direct link to your file — just a link to their web interface, where they pressure your recipient to sign up for an account. If you upload a video to Google Drive, they make you wait 15+ minutes while they re-encode it, even if it was already optimized to play in the browser.

      As an alternative to the existing cloud storage options, I made a minimalist file-sharing app called PicoShare. You just upload a file, and it gives you a direct link that you can share. Easy! No re-encoding, no prompts to sign up for anything.

      (Animated demo of uploading a video file to PicoShare and streaming it in another browser window) Demo of PicoShare

      There are a few open-source tools that offer similar functionality, but PicoShare is unique in not requiring a database server. That means you can run it in a single Docker container, whereas other solutions require more complicated orchestration.

      PicoShare became the fastest-growing open-source project I ever published. It received 600 Github stars within two weeks of its release. As of this writing, PicoShare has over 100k installs.

      Lessons learned

      Don’t become anyone’s smallest client

      I made many mistakes throughout the whole TinyPilot website redesign fiasco, but the core problem was that the design agency was a fundamental mismatch for TinyPilot.

      The agency’s other clients had 5-20x TinyPilot’s budget. At first, I thought that was such a gift — this fancy agency with expensive clients was betting on a little company like mine.

      The reality was that TinyPilot was the agency’s lowest priority. They managed the project poorly, which drove up costs, bloated scope, and stretched out timelines.

      Now, when I work with new vendors, I ask them how my company compares to their other clients. If I’m an outlier in any important dimension like size, revenue, or industry, I look elsewhere.

      Run at 50% capacity

      Wouldn’t it be wonderful if your business’ capacity perfectly matched your customers’ needs? Your employees would fulfill every order and satisfy every support request while working exactly 40 hours per week. They’d never feel overworked nor underworked, and there’d be no idle time.

      In practice, that would be a terrible system. Running at 100% utilization would mean you have no margin for error. Ordinary occurrences like a bump in sales or an employee taking a vacation would immediately overwhelm you.

      I aim for everyone at TinyPilot to run at around 50% capacity. That is, a balance of 50% reactive work and 50% proactive work. For some roles, the balance isn’t quite 50/50, but it’s a good rule of thumb.

      The technical support team is the clearest example of a 50/50 split: they spend half of their time responding to support requests and the other half finding ways to save users from needing support. The proactive tasks include fixing bugs in the product, writing documentation, and improving our diagnostic tools.

      Every TinyPilot team comprises two people. When one person is unavailable, the other can suspend their proactive work and handle time-sensitive tasks without feeling overwhelmed. If we get a rush of orders because a popular YouTube channel mentions us, we have spare capacity to absorb it.

      Team | Reactive tasks | Proactive tasks
      Founder | Team management; Vendor management; Reviewing work; Filling gaps in responsibilities | Marketing; Sales; Re-evaluating strategy; Hiring and training
      Support engineers | Answering technical support questions | Writing documentation; Writing tutorials; Investigating difficult bugs
      Software developers | Fixing urgent bugs | Releasing new features; Improving dev experience; Creating automated tests; Fixing non-urgent bugs
      Fulfillment staff | Assembling devices; Fulfilling orders; Customer service | Creating support playbooks; Assisting in marketing

      Ansible and git are not software distribution tools

      When I started working on TinyPilot, I didn’t know how to distribute Linux software.

      To publish the prototype of TinyPilot, I used the tools I knew: bash scripts, Ansible, and git. The bash script bootstrapped an Ansible environment and executed an Ansible playbook. Ansible installed dependencies, made necessary changes to the operating system, and cloned the TinyPilot git repository.

      The installation process was okay, not great. It was slow but reliable and didn’t require the user to configure anything manually.

      Two years later, TinyPilot’s update process was a mess. It still relied on the same shaky foundations from the prototype, except now there was a complex web of interdependencies. Ansible roles depended on Git repositories, which depended on other Ansible roles, which depended on parameters in a bunch of YAML files. Minor changes swallowed weeks of development time.

      All this because I never bothered to learn standard Linux packaging tools.

      This year, the TinyPilot team learned to use Debian packages. It was far less painful than I’d feared. I thought we’d have to deploy all sorts of package servers and key servers, but it turns out we didn’t need any of that. The process was relatively easy once we found the right guides.

      Debian packages have accelerated our development. The tooling catches expensive mistakes earlier, and we can deploy pre-release versions to our test devices easily, whereas our previous installation system made that process prohibitively complex.
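
      (A hedged aside, not from the post: the packaging step it describes can be as small as this sketch; the package name and metadata below are invented for illustration.)

      ```sh
      # Minimal Debian package layout: a DEBIAN/control file plus the files to install.
      mkdir -p tinypilot-example_1.0.0/DEBIAN
      printf '%s\n' \
        'Package: tinypilot-example' \
        'Version: 1.0.0' \
        'Architecture: all' \
        'Maintainer: Example Maintainer <dev@example.com>' \
        'Description: Illustrative package, not the real TinyPilot packaging' \
        > tinypilot-example_1.0.0/DEBIAN/control

      # Produces tinypilot-example_1.0.0.deb, installable with dpkg -i.
      dpkg-deb --build tinypilot-example_1.0.0
      ```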

      Grading last year’s goals

      Last year, I set three high-level goals that I wanted to achieve during the year. Here’s how I did against those goals:

      Grow TinyPilot to $1M in annual revenue
      Result: Grew TinyPilot’s revenue by 76% to $812k
      Grade: B

      I always knew that $1M was an aggressive goal. We fell short, but I’m still impressed at how close we came.

      Manage TinyPilot on 20 hours per week
      Result: I spent more time managing TinyPilot in 2022 than in 2021.
      Grade: D

      I was hoping to automate and delegate away enough of my job to reduce my management time to 20 hours per week, but it didn’t happen. Between growing sales, spinning up the support engineering team, and putting out fires due to the chip shortage, my management time increased.

      Ship TinyPilot Voyager 3
      Result: We never even completed the design phase
      Grade: F

      TinyPilot has always used the Raspberry Pi 4B as the core hardware. There’s a wonderful ecosystem around the Pi 4B, but the hardware is relatively expensive and difficult to integrate with custom chips.

      My plan for 2022 was to create a custom circuit board for the slimmer, less expensive Raspberry Pi Compute Module 4. That could cut our manufacturing costs by up to 60% and simplify our hardware design.

      Instead, all of our hardware engineering time went to chasing down manufacturing issues and supply shortages, so we made no progress on a new product.

      Goals for year six

      Manage TinyPilot on 20 hours per week

      I failed miserably at reducing my hours last year, but it’s now my top priority. I’m hopeful about my chances this year. A lot of my 2022 work laid the groundwork to remove me from the critical path in 2023.

      Earn $100k in profit

      For TinyPilot’s first two and a half years, I focused on growth. I pay the same in hardware and software engineering costs whether I’m selling 20 devices per month or 2,000, so I needed to reach a certain scale to make the business viable.

      For most of 2023, TinyPilot’s production will be constrained by supply. It was disappointing to find out I’d have no chance at growing sales, but the silver lining is that I can slow down and focus on profit rather than growth.

      TinyPilot has always roughly broken even, but I think I can reach $100k in profit this year if I avoid further hardware redesigns. Without the hardware redesigns in 2022, I would have saved around $100k on engineering and $20k on materials. If I keep sales steady and run leaner on the hardware side, 2023 should be a profitable year.

      Close the TinyPilot office

      I’ve leased an office for TinyPilot since early 2021. We use it for assembling devices, fulfilling orders, and storing inventory.

      Having our own local office has helped us adapt quickly to changes in our hardware and processes, but it’s a lot of extra overhead. This year, I hope to transition assembly to China, where all of our parts originate. I’m also in the process of moving our fulfillment to a third-party logistics warehouse.

      Eliminating the TinyPilot office would spare us the work of maintaining a physical space, managing inventory, and tracking in-person shifts. Outsourcing manufacturing and fulfillment will also give the team more flexibility in time and location.

      Do I still love it?

      Every year, when I write these blog posts, I ask myself whether I still love what I’m doing.

      2022 was a hard year — certainly my hardest since going off on my own. I wasn’t miserable, but I can’t say I loved it.

      The global chip shortage meant we could never manufacture a batch of products the same way twice. There was always some missing component or manufacturing issue, so we were constantly racing to fix issues and adapt our processes before we ran out of stock. We got through it, and there were only a handful of days that I had to mark any product as sold out, but it was stressful.

      That said, there were certainly many things to appreciate about the year. I had a relatively small amount of time for writing and software development, but I’m proud of what I produced. Expanding the TinyPilot organization and figuring out how teams work together grew my skills as a manager. It’s been gratifying to see the team grow in their roles and expand their skills as the company evolves.

      I still prefer working for myself to having an employer. I still feel grateful for the freedom to have my own company. And I still want to do it forever.

  5. Jan 2023
    1. Mailgun is primarily a developer’s tool, so the best way to use Mailgun is through our APIs.

      developers first API first

    1. the most significant Web 2.0 creation to harness a mass audience and engage a mass audience in knowledge production and dissemination is Wikipedia

      Wikipedia really is an excellent example of why and how Web 2.0 was so impactful to online society. Unlike Web 1.0, where content consumers were mostly limited to read-only, Web 2.0 allowed content consumers to produce their own consumable content for the first time.

    1. Example 2

      ```
      HTTP/1.1 200 OK
      Content-Type: application/ld+json; profile="http://www.w3.org/ns/anno.jsonld"
      Link: <http://www.w3.org/ns/ldp#Resource>; rel="type"
      ETag: "_87e52ce126126"
      Allow: PUT,GET,OPTIONS,HEAD,DELETE,PATCH
      Vary: Accept
      Content-Length: 287

      {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "id": "http://example.org/annotations/anno1",
        "type": "Annotation",
        "created": "2015-01-31T12:03:45Z",
        "body": {
          "type": "TextualBody",
          "value": "I like this page!"
        },
        "target": "http://www.example.com/index.html"
      }
      ```
  6. Dec 2022
    1. Tom MacWright, a software developer in Brooklyn, has firsthand experience with the pitfalls of ActivityPub. As an experiment, he tried to turn his photo blog into an actor that could be followed by users via their Mastodon accounts. It worked in the end—and you can search for @photos@macwright.com from your Mastodon instance to follow his photography—but it wasn't easy.

      Example of how ActivityPub standards don't work in practice, in part because Mastodon is an 800-pound gorilla which actively flouts or adds their own "standards".

    2. "Queer people built the Fediverse," she said, adding that four of the five authors of the ActivityPub standard identify as queer. As a result, protections against undesired interaction are built into ActivityPub and the various front ends. Systems for blocking entire instances with a culture of trolling can save users the exhausting process of blocking one troll at a time. If a post includes a “summary” field, Mastodon uses that summary as a content warning.
    1. Spend some time with Arc, the new browser from The Browser Company of New York.

      https://arc.net/

      First I've heard of this.

    1. The modern internet was born out of an epic struggle between "Bellheads" (who believed centralized powers should decide how you used networks) and "Netheads" (who believed that services should be provided and consumed "at the edge"): https://www.wired.com/1996/10/atm-3/
    1. Note: it is not possible to apply a boolean scope with just the query param being present, e.g. ?active, that's not considered a "true" value (the param value will be nil), and thus the scope will be called with false as argument. In order for the scope to receive a true argument the param value must be set to one of the "true" values above, e.g. ?active=true or ?active=1.

      Is this behavior/limitation part of the web standard or a Rails-specific thing?
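
      A quick aside on that question: the URL standard only hands frameworks strings, so deciding which values count as "true" is a framework convention (here, the Rails scope helper being quoted), not part of any web standard. A small Node sketch using only URLSearchParams:

      ```js
      const bare = new URLSearchParams('?active');
      console.log(bare.get('active'));     // '' (present, but no value)

      const explicit = new URLSearchParams('?active=true');
      console.log(explicit.get('active')); // 'true' (still just a string)

      // Mapping '', '1', or 'true' onto boolean true is left entirely to the
      // framework receiving the request.
      ```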

    1. This brings interesting questions back up like what happens to your online "presence" after you die (for lack of a better turn of phrase)?

      Aaron Swartz famously left instructions predating (by years IIRC) the decision that ended his life for the way that unpublished and in-progress works should be licensed and who should become stewards/executors for the personal infrastructure he managed.

      The chrisseaton.com landing page has three social networking CTAs ("Email me", etc.) Eventually, the chrisseaton.com domain will lapse, I imagine, and the registrar or someone else will snap it up to squat it, as is their wont. And while in theory chrisseaton.github.io will retain all the same potential it had last week for much longer, no one will be able to effect any changes in the absence of an overseer empowered to act.

    1. This document is a companion to the IIIF Content Search API Specification, Version 2.0. It describes the changes to the API specification made in this major release, including ones that are backwards incompatible with version 1.0, the previous version.
    1. You can filter the resource using criteria specified as query[*]. You can provide multiple criteria, to use AND logic. You can sort the resource using parameters specified as sort[*]. You can specify multiple fields to sort by.
    2. Enum:"add" "delete" An additional flag parameter with the value add will add masks provided in the request body to the list. A flag value delete will delete masks from the list. If there's no parameter provided, masks are replaced.
    1. API Type | Mailgun API Name | Postmark API Name
       Sending Emails | Messages | Email
       Managing Suppressions | Suppressions | Suppressions
       Managing Templates | Templates | Templates
       Managing Sending Settings | - | Server
       Managing Servers | - | Servers
       Managing Sent Emails | Events | Messages
       Managing Inbound Emails | Messages, Events | Messages
       Manage Inbound Processing Settings | Routes | -
       Manage email domains you can send from | Domains | Domains
  7. Nov 2022
    1. idealised utopia

      Possibly, besides web monetization, there could be a donation basket (like Ko-fi) alongside the curation, as well as clearly linking back to the original, which can have a donation basket as well. The options are complementary.

    1. From a technical point of view, the IndieWeb people have worked on a number of simple, easy to implement protocols, which provide the ability for web services to interact openly with each other, but in a way that allows for a website owner to define policy over what content they will accept.

      Thought you might like Web Monetization.

    1. partnerships, networking, and revenue generation such as donations, memberships, pay what you want, and crowdfunding

      I have thought long about the same issue and beyond. The triple (wiki, Hypothesis, donations) could be a working way to search for OER, form a social group processing them, and optionally support the creators.

      I imagine that as follows: a person wants to learn about X. They can head to the wiki site about X and look into its Hypothesis annotations, where relevant OER with their preferred donation method can be linked. Also, study groups interested in the respective resource or topic can list virtual or live meetups there. The date of the meetups could be listed in a format that Hypothesis could search and display on a calendar.

      Wiki is integral as it categorizes knowledge, is comprehensive, and strives to address biases. Hypothesis stitches websites together for the benefit of the site owners and the collective wisdom that emerges from the discussions. Donations support the creators so they can dedicate their time to creating high-quality resources.

      Main inspirations:

      Deschooling Society - Learning Webs

      Building the Global Knowledge Graph

      Schoolhouse calendar

    1. locally-based staff and carries out its programs in conjunction with local partners. Teams of international instructors and volunteers support the programs through projects year-round.

      So many good features in your project!

      Employing local staff that know the setting and can be role models for the kids.

      Supporting mentoring by volunteers to scale.

      Working with bodies to get a visceral experience that change is possible.

      Mentoring in groups to build a community.

      Spotlighting diversity and building bridges beyond the local community.

      Some related resources: Ballet dancer from Kibera

      Fighting poverty and gang violence in Rio's favelas with ballet

    1. Publishers can create interactive stories on the platform and incorporate them in their website.

      I love this! It is similar to Prezi or VoiceThread.

      Do you also support collaborative editing (public or with invited collaborators)? If yes, a high-resolution world map could be used for collaborative pinning of local events, meetups, news, videos, and so on, such as radio.garden or YouTube Geofind.

    1. Localisation ≠ Translation To start with, we have been researching, publishing, and producing articles on the topics of localisation to gain a wider understanding for implementing it. Here's some of what we published with @sophie authoring:

      Have you thought about crowdsourcing localization via Weblate? It integrates with DeepL and can also serve as a learning ground, much like Duolingo Immersion.

    1. Creating video tutorials has been hard when things are so in flux. We've been reluctant to invest time - and especially volunteer time - in producing videos while our hybrid content and delivery strategy is still changing and developing. The past two years have been a time of experimentation and iteration. We're still prototyping!

      Have you thought about opening the project setting and the remixing to educators or even kids? That could create additional momentum.

      A few related resources you might want to check out for inspiration: Science Buddies, Seesaw, Exploratorium

    1. 11/30 Youth Collaborative

      I went through some of the pieces in the collection. It is important to give a platform to the voices that are missing from the conversation usually.

      Just a few similar initiatives that you might want to check out:

      Storycorps - people can record their stories via an app

      Project Voice - spoken word poetry

      Living Library - sharing one's story

      Freedom Writers - book and curriculum based on real-life stories

    1. not really about the content of the sessions. Or anything you take from it. The most important thing are the relationships, the connections you gain from sharing the things you're passionate about with the people who are interested in it, the momentum you build from working on your project in preparation for a session

      I somewhat disagree - I think this community building is successful precisely because there is a shared interest or goal. It goes hand in hand. If there is no connecting theme or goal, the groups fall apart.

    1. Donations

      To add some other intermediary services:

      To add a service for groups:

      To add a service that enables fans to support creators directly and anonymously via microdonations or small donations: fans pre-charge their Coil account and spend it by streaming payments to content or by tipping the creators' wallets, through a layer containing a JS script that follows the Interledger Protocol proposed to the W3C:

      If you want to know more, head to Web Monetization or Community or Explainer

      Disclaimer: I am a recipient of a grant from the Interledger Foundation, so there would be a Conflict of Interest if I edited directly. Plus, sharing on Hypothesis allows other users to chime in.
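
      For the curious: as I recall the Web Monetization draft mentioned above, the page declares a single payment pointer (the value below is a placeholder), which providers such as Coil then stream payments to while the visitor reads.

      ```html
      <meta name="monetization" content="$wallet.example.com/creator">
      ```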

    1. All at once, billions of people saw themselves as celebrities, pundits, and tastemakers.

      ...but what if some of them...didn't.

    1. The JFK assassination episode of Mad Men. In one long single shot near the beginning of the episode, a character arrives late to his job and finds the office in disarray, desks empty and scattered with suddenly-abandoned papers, and every phone ringing unanswered. Down the hallway at the end of the room, where a TV is blaring just out of sight, we can make out a rising chatter of worried voices, and someone starting to cry. It is— we suddenly remember— a November morning in 1963. The bustling office has collapsed into one anxious body, huddled together around a TV, ignoring the ringing phones, to share in a collective crisis.

      May I just miss the core of this bit entirely and mention coming home to Betty on the couch, letting the kids watch, unsure of what to do.

      And the fucking Campbells, dressed up for a wedding in front of the TV, unsure of what to do.

      Though, if I might add, comparing Twitter to the abstract of television, itself, would be unfortunate, if unfortunately accurate, considering how much more granular the consumptive controls are to the user. Use Twitter Lists, you godforsaken human beings.

    1. Digital Initiatives and Web Services team

      I somehow missed changing this to Web Technologies

    2. Beginning in August 2019, all new content will meet or exceed WCAG 2.0 AA standards.

      Needs to be updated to reflect the fact that it is now past 2019 :)

  8. tantek.com
    #TwitterMigration, first time? Have posted notes to https://tantek.com/ since 2010, POSSEd tweets & #AtomFeed. Added one .htaccess line today, and thanks to #BridgyFed, #Mastodon users can follow my #IndieWeb site @tantek.com@tantek.com

    No Mastodon install or account needed. Just one line in .htaccess:

    RewriteRule ^.well-known/(host-meta|webfinger).* https://fed.brid.gy/$0 [redirect=302,last]

    is enough for Mastodon users to search for and follow that @tantek.com@tantek.com username. Took a little more work to setup Bridgy Fed to push new posts to followers.

    Note by the way both the redundancy & awkwardness (it’s not a clickable URL) of such @-@ (AT-AT) usernames when you’re already using your own domain. Why can’t Mastodon follow a username of “@tantek.com”? Or just “tantek.com”? And either way expanding it internally if need be to the AT-AT syntax. Why this regression from what we had with classic feed readers where a domain was enough to discover & follow a feed?

    Also, why does following show a blank result? Contrast that with classic feed readers which immediately show you the most recent items in a feed you subscribed to.

    Lastly (for now), I asked around and no one knew of a simple public way to “preview” or “validate” that @tantek.com@tantek.com actually “worked”. You have to be *logged-in* to a Mastodon instance and search for a username to check to see if it works. Contrast that with https://validator.w3.org/feed/ which you can use without any log-in to validate your classic feed file. Why these regressions from the days of feed readers?

    - Tantek
    1. the platform’s reliability is entirely dependent on which one you sign up for.

      It's been fine for years! I understand the intention behind informing readers of what the onboarding experience is like at this very moment, but if you're going to be part of this absurdly latent, dense wave of folks suddenly giving Mastodon a try, I think it's important you be very explicit about your lack of experience before the most intense influx of users in the history of the Fediverse.

    2. Joining Mastodon is undoubtedly more complicated than starting a Twitter account.

      Are you sure about this argument, Janus? Are you sure you comprehensively tried all methods of onboarding?

    1. Page recommended by @wfinck. Seems @karlicoss is the author. This project seems similar to what I've been trying to do with Hypothes.is, Obsidian, Anki, Zotero, and PowerToys Run, but it goes beyond the scope of my endeavors to just quickly access whatever resource comes to mind (without creating duplicates). The things that Promnesia adds beyond my PKM stack are the following:
       - prioritizing new info
       - keeping track of which device things were read on, and for how long

    1. layers of wat are essentially hacks to build something resembling a UI toolkit on top of a document markup language

      So make your application document-driven (i.e. actually RESTful).

      It's interesting that we have Web forms and that we call them that and yet very few people seem to have grokked the significance of the term and connected it to, you know, actual forms—that you fill out on paper and hand over to someone to process, etc. The "application" lies in that latter part—the process; it is not the visual representation of any on-screen controls. So start with something like that, and then build a specialized user agent for it if you can (and if you want to). If you find that you can't? No big deal! It's not what the Web was meant for.

    2. There is no good way to develop a UI in HTML/CSS/JS

      So don't.

    1. Clean code examples (YouTube)
       Why Are You Still Creating CRUD APIs?
       Remove Your If-Else and Switch Cases
       Why Cognitive and Cyclomatic Complexity Matters in Software Development
       Writing Cleaner Code (With Examples)

       Resources for the curious
       📚 Source Code (GitHub) by Nicklas Millard, the author
       RESTful API Design by Microsoft
       Architectural Styles and the Design of Network-based Software Architectures by R.T. Fielding
       What is REST by codeacademy
       Is Crud Bad For Rest? by Boris Lublinsky
       HATEOAS Driven REST APIs by restfulapi.net
       HATEOAS — a simple explanation by Bartosz Jedrzejewski
       Why HATEOAS is useless and what it means for REST by Andreas Reiser
       RESTful Considered Harmful by Tomasz Nurkiewicz
       Task-Based UI on cqrs.wordpress.com
       CRUD is an antipattern by Mathias Verraes
       Why REST sucks by Troy A. Griffitts

      Useful links for Web & generic programming.

    2. RPC-like but still RESTful is way preferable to those rotten CRUD designs.

  9. Oct 2022
    1. Supabase is an open source Firebase alternative. Start your project with a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, and Storage.

      https://supabase.com/


      Found this as it's presumably being used by https://www.explainpaper.com/ with an improper configuration

    1. PolyScale is an intelligent, serverless database caching engine which allows low-latency reads from Supabase globally, no coding required
    1. @1:10:20

      With HTML you have, broadly speaking, an experience and you have content and CSS and a browser and a server and it all comes together at a particular moment in time, and the end user sitting at a desktop or holding their phone they get to see something. That includes dynamic content, or an ad was served, or whatever it is—it's an experience. PDF on the other hand is a record. It persists, and I can share it with you. I can deliver it to you [...]

      NB: I agree with the distinction being made here, but I disagree that the former description is inherent to HTML. It's not inherent to anything, really, so much as it is emergent—the result of people acting as if they're dealing in live systems when they shouldn't.

  10. Sep 2022
    1. "detail": [ { "loc": [ "body", "name" ], "message": "Field required" }, { "loc": [ "body", "email" ], "message": "'not-email' is not an 'email'" } ]

      not compliant with Problem Details, which requires detail to be a string
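
      A shape that would satisfy RFC 7807 (a sketch only; the names beyond the standard members are illustrative) keeps detail as a human-readable string and moves the per-field errors into an extension member:

      ```json
      {
        "type": "https://example.com/probs/validation-error",
        "title": "Request body failed validation.",
        "status": 422,
        "detail": "2 fields failed validation.",
        "errors": [
          { "loc": ["body", "name"], "message": "Field required" },
          { "loc": ["body", "email"], "message": "'not-email' is not an 'email'" }
        ]
      }
      ```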

    1. For example, let’s consider the type property. For most of the projects I am working on, it isn’t practical to have a webpage dedicated to each type of possible error.

      That's not required. The standard doesn't require this to be a URL locator — merely a URI! So you can just make up a URI and use it even if it's not resolvable. ... like you did for the URN below.
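
      So something like this is perfectly valid even though nothing is served at the identifier (the URN itself is made up):

      ```json
      {
        "type": "urn:companyname:api:error:validation:badRequest",
        "title": "Request failed validation",
        "status": 400
      }
      ```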

    2. For the instance property, the most practical way I’ve found of implementing this is to define a URN that encapsulates additional information regarding the error. Here is an example URN for reference. urn:companyname:api:error:protocol:badRequest:f29f57d7-e1f8-4643-b226-fa18f15e9b71
    1. Ever tried to look up some news from 12 years ago? Back in library days you were able to do that. On news portals, most articles are deleted after a year, and on newspaper web sites you hardly ever get access to the archives – even with a subscription.

      This is a massive failure of infrastructure (and of education/"professionalism"—by and large, most people whose careers are in operating or maintaining Web infrastructure haven't been inculcated into or adopted the sort of "code of ethics" that sees this as a failure).

      The answer might just be for something like the Internet Archive to get into training or selling professional services for handling companies' Web presence "done the right way". (This would definitely take some organizational restructuring, however.) I'd like to see, for example, IA-certified partner organizations that uphold the principles described here and the original vision for the Web, and professional associations that work hard at making sure the status quo improves a lot over what's common today (and doesn't slide back).

    1. ```yaml
       ErrorResponse:
         description: Container object for one or more errors returned by the API.
         type: object
         required:
           - errors
         properties:
           errors:
             type: array
             items:
               $ref: '#/definitions/Error'
       ```
    1. ```yaml
       FetchErrorResponse:
         type: object
         properties:
           meta:
             $ref: '#/definitions/FetchMetaResponse'
           errors:
             $ref: '#/definitions/Error'
         example:
           {
             "meta": { "req_id": "d07c8b12-c95e-4a06-8424-92aac94bb445" },
             "errors": [{
               "code": "Unauthorized",
               "detail": "A valid bearer token is required",
               "status": "401"
             }]
           }
       ```
    1. But the courts face another, much harder problem. Blocking by the ISPs is only effective if users rely on their provider's default DNS settings. A simple configuration change is therefore enough to bypass the block and regain access to Z-Lib. The only way to cut off access for good would be to locate the servers and take them down. That is a particularly difficult mission: they are spread across many countries, including Russia, which may not be inclined to follow the recommendations of the French courts at the moment.

      Bypassing ISP blocking

    1. However, links between resources need not be format specific; it can be useful to have typed links that are independent of their serialisation, especially when a resource has representations in multiple formats.
    1. CTO services

      CTO services, or CTOaaS, refers to part-time technology and business advisory from a Chief Technology Officer to assist small and medium-sized enterprises (SMEs).

      The core benefit of a fractional startup CTO over an in-house CTO is cost effectiveness: the company only pays for the services it actually needs.

  11. Aug 2022
    1. I can't get behind the call to anger here, even if I don't approve of Apple's stance on being the gatekeeper for the software that runs on your phone.

      Elsewhere (in the comments by the author on HN), he or she writes:

      The biggest problem I try to convey is that you have no way of knowing you'll get the rejection

      No, I think there were pretty good odds that before even submitting the first iteration it would have been rejected, based purely on the concept alone. This is not an app. It's a set of pages—only implemented with the iOS SDK (and without any of the affordances, therefore, that you'd get if you were visiting them in a Web browser). For whatever reason, the author thought this was a good idea, didn't review the App Store guidelines, and decided to proceed anyway.

      Then comes the part where Apple sends the rejection and tells the author that it's no different from a Web page and doesn't belong on the App Store.

      Here's where the problem lies: at the point where you're
      - getting rejections, and then
      - trying to add arbitrary complexity to the non-app for no reason other than to try to get around the rejection

      ... that's the point where you know you're wasting your time, if it wasn't already clear before—and, once again, it should have been. This is a series of Web pages. It belongs on the Web. (Or dumped into a ZIP and passed around via email.) It is not an app.

      The author in the same HN comment says to another user:

      So you, like me, wasted probably days (if not weeks) to create a fully functional app, spent much of that time on user-facing functions that you would have probably not needed

      In other words, the author is solely responsible for wasting his or her own time.

      To top it off, they finish their HN comment with this lament:

      It's not like on Android where you can just share an APK with your friends.

      Yeah. Know what else allows you to "just" share your work...? (No APK required, even!)

      Suppose you were taking classes and wanted to know the rubric and midterm schedule. Only rather than pointing you to the appropriate course Web page or sharing a PDF or Word document with that information, the professor tells you to download an executable which you are expected to run on your computer and which will paint that information on the screen. You (and everyone else) would hate them—and you wouldn't be wrong to.

      I'm actually baffled why an experienced iOS developer is surprised by any of the events that unfolded here.

    1. The “work around” was to detect users in an IAB and display a message on first navigation attempt to prompt them to click the “open in browser” button early.

      That's a pretty deficient workaround, given the obvious downsides. A more robust workaround would be to make the cart stateless, as far as the server is concerned, for non-logged-in users; don't depend on cookies. A page request instead amounts to a request for the form that has this and this and this pre-selected ("in the cart"). Like with paper.

    1. Historical Hypermedia: An Alternative History of the Semantic Web and Web 2.0 and Implications for e-Research. .mp3. Berkeley School of Information Regents’ Lecture. UC Berkeley School of Information, 2010. https://archive.org/details/podcast_uc-berkeley-school-informat_historical-hypermedia-an-alte_1000088371512. archive.org.

      https://www.ischool.berkeley.edu/events/2010/historical-hypermedia-alternative-history-semantic-web-and-web-20-and-implications-e.

      https://www.ischool.berkeley.edu/sites/default/files/audio/2010-10-20-vandenheuvel_0.mp3

      headshot of Charles van den Heuvel

      Interface as Thing - book on Paul Otlet (not released, though he said he was working on it)

      • W. Boyd Rayward 1994 expert on Otlet
      • Otlet on annotation, visualization, of text
      • TBL married internet and hypertext (ideas have sex)
      • V. Bush As We May Think - crosslinks between microfilms, not in a computer context
      • Ted Nelson 1965, hypermedia

      t=540

      • Michael Buckland book about machine developed by Emanuel Goldberg antecedent to memex
      • Emanuel Goldberg and His Knowledge Machine: Information, Invention, and Political Forces (New Directions in Information Management) by Michael Buckland (Libraries Unlimited, (March 31, 2006)
      • Otlet and Goldsmith were precursors as well

      four figures in his research: - Patrick Geddes - biologist, architect, diagrams of knowledge, metaphorical use of architecture; classification - Paul Otlet, Brussels born - Wilhelm Ostwald - Nobel Prize in chemistry - Otto Neurath, philosopher, designer of Isotype

      Paul Otlet

      Otlet was interested in both the physical as well as the intangible aspects of the Mundaneum including as an idea, an institution, method, body of work, building, and as a network.<br /> (#t=1020)

      Early iPhone diagram?!?

      (roughly) armchair to do the things in the web of life (Nelson quote) (get full quote and source for use) (circa 19:30)

      compares Otlet to TBL


      Michael Buckland 1991 <s>internet of things</s> coinage - did I hear this correctly? https://en.wikipedia.org/wiki/Internet_of_things lists different coinages

      Turns out it was "information as thing"<br /> See: https://hypothes.is/a/kXIjaBaOEe2MEi8Fav6QsA


      Suzanne Briet and Otlet<br /> "everything can be in a document"<br /> importance of evidence


      The idea of evidence implies a passiveness. For evidence to be useful then, one has to actively do something with it, use it for comparison or analysis with other facts, knowledge, or evidence for it to become useful.


      transformation of sound into writing<br /> movement of pieces at will to create a new combination of facts - combinatorial creativity idea here. (circa 27:30 and again at 29:00)<br /> not just efficiency but improvement and purification of humanity

      put things on system cards and put them into new orders<br /> breaking things down into smaller pieces, whether books or index cards....

      Otlet doesn't use the word interfaces, but makes these with language and annotations that existed at the time. (32:00)

      Otlet created diagrams and images to expand his ideas

      Otlet used octagonal index cards to create extra edges to connect them together by topic. This created more complex trees of knowledge beyond the four sides of standard index cards. (diagram referenced, but not contained in the lecture)

      Otlet is interested in the "materialization of knowledge": how to transfer idea into an object. (How does this related to mnemonic devices for daily use? How does it relate to broader material culture?)

      Otlet inspired by work of Herbert Spencer

      space and time are forms of thought, I hold myself that they are forms of things. (get full quote and source) from Spencer; influence of Plato's forms here?

      Otlet visualization of information (38:20)

      S. R. Ranganathan may have had these ideas about visualization too

      atomization of knowledge; atomist approach, 19th century examples: S. R. Ranganathan, Wilson, Otlet, Richardson (atomic notes are NOT new either...) (39:40)

      Otlet creates interfaces to the world - time with cyclic representation - space - moving cube along time and space axes as well as levels of detail - comparison to Ted Nelson and zoomable screens even though Ted Nelson didn't have screens, but simulated them in paper - globes

      Katie Berner - semantic web; claims that reporting a scholarly result won't be a paper, but a nugget of information that links to other portions of the network of knowledge.<br /> (so not just one's own system, but the global commons system)

      Mention of Open Annotation (Consortium) Collaboration:<br /> - Jane Hunter, University of Queensland, Brisbane<br /> - Tim Cole, University of Illinois Urbana-Champaign<br /> - Herbert Van de Sompel, Los Alamos National Laboratory; annotations of various media<br /> see:<br /> - https://www.researchgate.net/publication/311366469_The_Open_Annotation_Collaboration_A_Data_Model_to_Support_Sharing_and_Interoperability_of_Scholarly_Annotations - http://www.openannotation.org/spec/core/20130205/index.html - http://www.openannotation.org/PhaseIII_Team.html

      trust must be put into the system for it to work

      coloration of the provenance of links goes back to Otlet (~52:00)

      Creativity is the friction of the attention space at the moments when the structural blocks are grinding against one another the hardest. —Randall Collins (1998) The sociology of philosophers. Cambridge, MA: Harvard University Press (p.76)

    1. The management plugin is included in the RabbitMQ distribution. Like any other plugin, it must be enabled before it can be used. That's done using rabbitmq-plugins:

      RabbitMQ's web UI requires the management plugin to be enabled first.
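
      A minimal sketch of that step (standard plugin name; the port is the default):

      ```sh
      # Enable the bundled management plugin, then browse to http://localhost:15672/
      rabbitmq-plugins enable rabbitmq_management
      ```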