80 Matching Annotations
  1. Apr 2023
    1. A truly valuable principal engineer makes their whole team better by advocating for best practices, gently reminding people of why the processes we have exist, and helping the less experienced engineers find ways to ‘level up’. They can speak to technical aspects of the product, connect planned work to business strategy and to what makes the company more successful, and maybe most importantly they have the interpersonal skills to influence others around them towards these goals.
  2. Jan 2022
    1. this implies the eroding of democracy. As I said before, democratic institutions derive their legitimacy from the fact that the values and ideals they represent correspond with the values and ideals of the majority of their citizens, and from the resulting belief of these very citizens in their legitimacy. If it is possible to design and program my own institutions, laws, administrative services, etc. from my home desk, to find an unchallenged ideological home in a cloud community of my own choosing, then it is no longer necessary that I confront myself with conflicting truths and values. Then there is no need to grapple with the ideals and ideas represented by those institutions outside my door, whose function is obsolete for me, whose actions, punishments, etc. are then possibly perceived as illegitimate violence. In other words, by offering an alternative and allegedly better organization of individual or social life, blockchain technology aggravates the growing mistrust towards democratic institutions and thus undermines their ability to act as legitimate actors.

      this is interesting.

    2. we need to ask whether there is a possible and hidden reintermediation. Here, the religiously imbued veneration of their founders or “earthly representatives” as central leadership figures comes to mind. Secondly, the technical competencies and financial resources that are needed in lots of blockchain contexts clearly allow for a bundling of power among those who know what they are doing and who have the financial means to do it. Having mentioned this, the consensual basis that is so praised should therefore be taken with a grain of salt: given the computer power or financial means that are needed for the proof-of-work and the proof-of-stake procedures, these instruments for consensus building, at a second glance, turn out to be less democratic and more meritocratic, plutocratic, or oligarchic.

      I agree with everything here, but I'm not sure where this is going. Founders are supposed to be involved in their projects, and for every new type of technology one needs to know how it works and have the means to acquire it, e.g. computers in the early 2000s, GPUs for AI research. Previously the third parties were companies and organizations with interests ($$) at stake; in the blockchain ideology, are the third parties the founders and the required domain knowledge?

    3. these ideas suggest that a connection to the institutional structure we are born into or move into is no longer necessary

      I'm a bit confused about the why. What in the previous text suggests that? All the social structures we live in are mandatory, while all the blockchain-based structures are opt-in. The only feasible way for them to become mandatory is for the real-world institutional structures (governments) to make them so, which, as I see it, makes the institutional structure more necessary, not less. Why can't both things coexist?

    4. think the main risk lies in the possibility that someone manages to make use of this technology and its powerful control and steering mechanisms for other purposes than the common good – and that the traditional constitutional and democratic institutions will no longer have effective means to counter such an incident, given that, with the growing mistrust towards them, which is deepened by technologies like the blockchain, they are increasingly losing their validity and legitimacy.

      isn't this the risk of all kinds of technologies? e.g. AI, social media companies

  3. Nov 2021
    1. Using blockchains to implement new and experimental forms of ownership for land and other scarce assets, as well as new and experimental forms of democratic governance.

      this is key

    1. Exhaustion is not evidence of a lack of courage but of its abundance. To deny the struggle is to deny the very thing that allows us to triumph in the end.
    2. “Courage is a measure of our heartfelt participation with life.” The depth of my heartbreak is not just evidence of my failure. It is evidence of my courage, of the lengths I was willing to go to participate fully and completely in my pursuit of this goal.
  4. Oct 2021
    1. In particular, it's often the case that there's a seemingly obvious but actually incorrect reason something is true, a slightly less obvious reason the thing seems untrue, and then a subtle and complex reason that the thing is actually true. I would regularly figure out that the seemingly obvious reason was wrong and then ask a question to try to understand the subtler reason, which sounded stupid to someone who thought the seemingly obvious reason was correct or thought that the refutation to the obvious but incorrect reason meant that the thing was untrue.
    1. Most people consider doing 30 practice runs for a talk to be absurd, a totally obsessive amount of practice, but I think Gary Bernhardt has it right when he says that, if you're giving a 30-minute talk to a 300 person audience, that's 150 person-hours watching your talk, so it's not obviously unreasonable to spend 15 hours practicing (and 30 practice runs will probably be less than 15 hours since you can cut a number of the runs short and/or repeatedly practice problem sections). One thing to note is that this level of practice, considered obsessive when giving a talk, still pales in comparison to the amount of time a middling table tennis club player will spend practicing.

      try this for my next talk. I usually don't practice what to say, but it's worth taking the time to think about the people in the audience and how what I say can impact them

    1. Here's a framing I like from Gary Bernhardt (not set off in a quote block since this entire section, other than this sentence, is his). People tend to fixate on a single granularity of analysis when talking about efficiency. E.g., "thinking is the most important part so don't worry about typing speed". If we step back, the response to that is "efficiency exists at every point on the continuum from year-by-year strategy all the way down to millisecond-by-millisecond keystrokes". I think it's safe to assume that gains at the larger scale will have the biggest impact. But as we go to finer granularity, it's not obvious where the ROI drops off. Some examples, moving from coarse to fine:

       1. The macro point that you started with is: programming isn't just thinking; it's thinking plus tactical activities like editing code. Editing faster means more time for thinking.
       2. But editing code costs more than just the time spent typing! Programming is highly dependent on short-term memory. Every pause to edit is a distraction where you can forget the details that you're juggling. Slower editing effectively weakens your short-term memory, which reduces effectiveness.
       3. But editing code isn't just hitting keys! It's hitting keys plus the editor commands that those keys invoke. A more efficient editor can dramatically increase effective code editing speed, even if you type at the same WPM as before.
       4. But each editor command doesn't exist in a vacuum! There are often many ways to make the same edit. A Vim beginner might type "hhhhxxxxxxxx" when "bdw" is more efficient. An advanced Vim user might use "bdw", not realizing that it's slower than "diw" despite having the same number of keystrokes. (In the QWERTY keyboard layout, the former is all on the left hand, whereas the latter alternates left-right-left hands. At 140 WPM, you're typing around 14 keystrokes per second, so each finger only has 70 ms to get into position and press the key. Alternating hands leaves more time for the next finger to get into position while the previous finger is mid-keypress.)

       We have to choose how deep to go when thinking about this. I think that there's clear ROI in thinking about 1-3, and in letting those inform both tool choice and practice. I don't think that (4) is worth a lot of thought. It seems like we naturally find "good enough" points there. But that also makes it a nice fence post to frame the others.

    2. Another common reason for working on productivity is that mastery and/or generally being good at something seems satisfying for a lot of people. That's not one that resonates with me personally, but when I've asked other people about why they work on improving their skills, that seems to be a common motivation.
    3. As with this post on reasons to measure, while this post is about practical reasons to improve productivity, the main reason I'm personally motivated to work on my own productivity isn't practical. The main reason is that I enjoy the process of getting better at things, whether that's some nerdy board game, a sport I have zero talent at that will never have any practical value to me, or work. For me, a secondary reason is that, given that my lifespan is finite, I want to allocate my time to things that I value, and increasing productivity allows me to do more of that, but that's not a thought I had until I was about 20, at which point I'd already been trying to improve at most things I spent significant time on for many years.
    4. A specific example of something moving from one class of item to another in my work was this project on metrics analytics. There were a number of proposals on how to solve this problem. There was broad agreement that the problem was important with no dissenters, but the proposals were all the kinds of things you'd allocate a team to work on through multiple roadmap cycles. Getting a project that expensive off the ground requires a large amount of organizational buy-in, enough that many important problems don't get solved, including this one. But it turned out, if scoped properly and executed reasonably, the project was actually something a programmer could create an MVP of in a day, which takes no organizational buy-in to get off the ground. Instead of needing to get multiple directors and a VP to agree that the problem is among the org's most important problems, you just need a person who thinks the problem is worth solving.
    5. Unlike most people who discuss this topic online, I've actually looked at where my time goes and a lot of it goes to things that are canonical examples of things that you shouldn't waste time improving because people don't spend much time doing them. An example of one of these, the most commonly cited bad-thing-to-optimize example that I've seen, is typing speed (when discussing this, people usually say that typing speed doesn't matter because more time is spent thinking than typing). But, when I look at where my time goes, a lot of it is spent typing.
    6. It is commonly accepted, verging on a cliche, that you have no idea where your program spends time until you actually profile it, but the corollary that you also don't know where you spend your time until you've measured it is not nearly as accepted.
    7. I'm not a naturally quick programmer. Learning to program was a real struggle for me and I was pretty slow at it for a long time (and I still am in aspects that I haven't practiced). My "one weird trick" is that I've explicitly worked on speeding up things that I do frequently and most people have not.
    1. The answer is: having strong token economics for their project. We call Tokenomics (Token + Economics) all the things, enabled by strong token design, that enable participants to contribute positively. Setting up Tokenomics for a project means asking: "What can a creator put in place to allocate & incentivize a community to participate in the project?"
    1. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position. A potential way to reverse this trend is crypto tokens — a new way to design open networks that arose from the cryptocurrency movement that began with the introduction of Bitcoin in 2008 and accelerated with the introduction of Ethereum in 2014. Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.
    1. A blockchain system has no ability to regulate "the market" in the sense of people's general ability to freely make transactions. But what it can do is regulate and structure (or even create) specific markets, setting up patterns of specific behaviors whose incentives are ultimately set and guided by institutions that have anti-collusion guardrails built in, and can resist pressure from economic actors.
    2. people normally tend to focus on the unrealistic nature of perfect information and perfect rationality. But the unrealistic assumption that is hidden in the list that strikes me as even more misleading is individual choice: the idea that each agent is separately making their own decisions, no agent has a positive or negative stake in another agent's outcomes, and there are no "side games"; the only thing that sees each agent's decisions is the black box that we call "the mechanism".
    3. There is a large body of intellectual work that criticizes a bubble of concepts that they refer to as "economization", "neoliberalism" and similar terms, arguing that they corrode democratic political values and leave many people's needs unmet as a result. The world of cryptocurrency is very economic (lots of tokens flying around everywhere, with lots of functions being assigned to those tokens), very neo (the space is 12 years old!) and very liberal (freedom and voluntary participation are core to the whole thing). Do these critiques also apply to blockchain systems? If so, what conclusions should we draw, and how could blockchain systems be designed to account for these critiques? Nathan's answer: more hybrid approaches combining ideas from both economics and politics. But what will it actually take to achieve that, and will it give the results that we want?
    1. With offline-first applications, you already have real-time replication with the backend. Most offline-first databases provide some concept of changestream or data subscriptions, and with RxDB you can even directly subscribe to query results or single fields of documents. This makes it easy to have an always-updated UI whenever data on the backend changes.

      but?

    2. reducing the latency is not so easy. It is defined by the physical properties of the transfer medium, the speed of light, and the distance to the server. All three of these are hard to optimize.
    3. In offline-first apps, the operations go directly against the local storage which happens almost instantly. There is no perceptible loading time and so it is not even necessary to implement a loading spinner at all.
    4. Offline-First is a software paradigm where the software must work as well offline as it does online. To implement this, you have to store data at the client side, so that your application can still access it when the internet goes away. This can be done either with complex caching strategies, or by using an offline-first database (like RxDB) that stores the data inside of IndexedDB and replicates it from and to the backend in the background. This makes the local database, not the server, the gateway for all persistent changes in application state.
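
      As a concrete illustration of the "subscribe to query results" point above, here is a minimal sketch assuming an already-created RxDB collection named `todos` (database and storage setup are omitted, and exact API details can vary between RxDB versions):

      ```ts
      // Minimal sketch: subscribe to an RxDB query so the UI updates whenever
      // local writes or background replication change the underlying data.
      import type { RxCollection } from 'rxdb';

      export function watchOpenTodos(
        todos: RxCollection,
        render: (rows: unknown[]) => void
      ) {
        // `.find().$` exposes the query result as an observable stream.
        const sub = todos
          .find({ selector: { done: false } })
          .$.subscribe(docs => render(docs.map(d => d.toJSON())));
        return () => sub.unsubscribe(); // call to stop watching
      }
      ```
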
  5. Sep 2021
    1. Active Indexers, Curators and Delegators can earn income from the network proportional to the amount of work they perform and their GRT stake.
    2. Curators are subgraph developers, data consumers or community members who signal to Indexers which APIs should be indexed by The Graph Network. Curators deposit GRT into a bonding curve to signal on a specific subgraph and earn a portion of query fees for the subgraphs they signal on; incentivizing the highest quality data sources. Curators will curate on subgraphs and deposit GRT via the Graph Explorer dApp. Because this occurs on a bonding curve, that means that the earlier you signal on a subgraph, the greater share of the query fees you earn on that subgraph for a given amount of GRT deposited. This also means that when you go to withdraw, you could end up with more or less GRT than you started with.

      cryptoeconomics still amazes me, how everything can be an opportunity for 'investment'
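
      To build intuition for why earlier signaling earns a larger share, here is a toy bonding curve in TypeScript. It is purely illustrative: a hypothetical linear price curve, not The Graph's actual curation curve or parameters.

      ```ts
      // Toy linear bonding curve: each newly minted share costs a bit more than
      // the previous one, so a fixed deposit mints more shares when the pool is
      // young. Illustration only; not The Graph's real curve.
      function sharesForDeposit(existingShares: number, depositGrt: number, slope = 0.01): number {
        let shares = 0;
        let remaining = depositGrt;
        while (true) {
          const price = 1 + slope * (existingShares + shares); // price of the next share
          if (remaining < price) break;
          remaining -= price;
          shares += 1;
        }
        return shares;
      }

      console.log(sharesForDeposit(0, 100));    // early curator: ~73 shares
      console.log(sharesForDeposit(1000, 100)); // late curator:  ~9 shares
      ```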

    1. Liquidity pools are pools of tokens that are locked in a smart contract. By offering liquidity, they guarantee trading, and because of this, they are widely used by decentralized exchanges.
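
      A sketch of how a locked pool can quote prices on its own: the constant-product rule (x · y = k) used by Uniswap-style exchanges. Fees, slippage protection, and LP accounting are omitted; this is an illustration, not any particular DEX's implementation.

      ```ts
      // Constant-product AMM sketch: the invariant reserveA * reserveB = k must
      // hold after every swap, which is what lets the pool always quote a price.
      interface Pool { reserveA: number; reserveB: number; }

      function swapAForB(pool: Pool, amountAIn: number): number {
        const k = pool.reserveA * pool.reserveB;
        const newA = pool.reserveA + amountAIn;
        const newB = k / newA;                  // reserves that keep the invariant
        const amountBOut = pool.reserveB - newB;
        pool.reserveA = newA;
        pool.reserveB = newB;
        return amountBOut;
      }

      const pool: Pool = { reserveA: 1_000, reserveB: 1_000 };
      console.log(swapAForB(pool, 100)); // ≈ 90.9 B out for 100 A in
      ```
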
    1. Because of high demand, the Ethereum network is getting overloaded. This resulted in very high transaction fees, making it too expensive for small investors to use its dapps.

      High gas prices

    2. Then there are tokens. Tokens by definition do not run on their own blockchain, unlike a coin. They have been added to an already existing blockchain. Tokens can have the same functionality as a coin, although this is not common. Tokens that are created on the Ethereum network are typically ERC-20 tokens.
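
      For reference, the ERC-20 standard that most Ethereum tokens implement is just a small, fixed set of functions. Below they are transcribed as a TypeScript interface purely for readability (on-chain these are Solidity functions; amounts are 256-bit integers, hence bigint; the Transfer and Approval events are omitted):

      ```ts
      // The ERC-20 function set, written as a TypeScript interface for reference.
      interface Erc20 {
        totalSupply(): Promise<bigint>;
        balanceOf(owner: string): Promise<bigint>;
        transfer(to: string, amount: bigint): Promise<boolean>;
        transferFrom(from: string, to: string, amount: bigint): Promise<boolean>;
        approve(spender: string, amount: bigint): Promise<boolean>;
        allowance(owner: string, spender: string): Promise<bigint>;
      }
      ```
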
    3. Developers can program applications that can create, store and manage digital assets, also known as tokens, on the blockchain. For this to work, smart contracts and decentralized applications (DApps) are written and built. The execution of these contracts and agreements is automatically enforced if the blockchain receives the correct data. You can make complex, irreversible agreements without the need for an intermediary.
    1. When important decisions are not documented, one becomes dependent on individual memory, which is quickly lost as people leave or move to other jobs.
    2. A good manager must have unshakeable determination and tenacity. Deciding what needs to be done is easy, getting it done is more difficult. Good ideas are not adopted automatically. They must be driven into practice with courageous impatience. Once implemented they can be easily overturned or subverted through apathy or lack of follow-up, so a continuous effort is required.
    3. To do a job effectively, one must set priorities. Too many people let their “in” basket set the priorities. On any given day, unimportant but interesting trivia pass through an office; one must not permit these to monopolize his time. The human tendency is to while away time with unimportant matters that do not require mental effort or energy. Since they can be easily resolved, they give a false sense of accomplishment. The manager must exert self-discipline to ensure that his energy is focused where it is truly needed.
    4. The man in charge must concern himself with details. If he does not consider them important, neither will his subordinates. Yet “the devil is in the details.” It is hard and monotonous to pay attention to seemingly minor matters. In my work, I probably spend about ninety-nine percent of my time on what others may call petty details.
    1. My research internships shocked me because they expected me to tell them what I was going to work on. They gave me a crazy amount of freedom in order to do this. I got shockingly comfortable with wandering the office buildings and asking senior employees in other divisions for their time. As long as I could periodically show value, my mentors gave me free rein.

      I wonder how useful this can be in my environment

    1. Ensure there's only one version of your site running at once. That last one is pretty important. Without service workers, users can load one tab to your site, then later open another. This can result in two versions of your site running at the same time. Sometimes this is ok, but if you're dealing with storage you can easily end up with two tabs having very different opinions on how their shared storage should be managed. This can result in errors, or worse, data loss.

      I wonder how we can identify issues like this when they occur
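
      One hedged way to at least detect the "two versions at once" situation: listen for the service worker controller changing under the current tab, which happens when another tab (or an update) activates a newer worker. The reload-on-change strategy below is only one possible policy, and `/sw.js` is a placeholder path.

      ```ts
      // Sketch: detect that a different (likely newer) service worker has taken
      // control of this page, e.g. after an update triggered from another tab,
      // and reload so every open tab runs the same version of the site.
      if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/sw.js').catch(console.error);

        let refreshing = false;
        navigator.serviceWorker.addEventListener('controllerchange', () => {
          if (refreshing) return; // guard against reload loops
          refreshing = true;
          window.location.reload();
        });
      }
      ```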

    1. Redux was created by Dan Abramov for a talk. It is a “state container” inspired by the unidirectional Flux data flow and functional Elm architecture. It provides a predictable approach to managing state that benefits from immutability, keeps business logic contained, acts as the single source of truth, and has a very small API.
    1. redux-thunk does: it is a middleware that looks at every action that passes through the system, and if it’s a function, it calls that function.
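
      The middleware really is that small; here is a sketch close to the published implementation (typings simplified):

      ```ts
      // redux-thunk in a nutshell: if the dispatched "action" is a function,
      // call it with (dispatch, getState) instead of passing it to the reducers.
      const thunk =
        ({ dispatch, getState }: { dispatch: Function; getState: Function }) =>
        (next: Function) =>
        (action: unknown) => {
          if (typeof action === 'function') {
            return action(dispatch, getState);
          }
          return next(action);
        };
      ```
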
    1. We were able to reduce calls to Infura and save on costs. And because the proxy uses an in-memory cache, we didn’t need to call the Ethereum blockchain every time.
    2. our application continuously checks the current block and retrieves transactions from historical blocks; our application was very heavy on reads and light on writes, so we thought using ‘cached reads’ would be a good approach; we developed a small application to act as a thin proxy between our application and Infura. The proxy application is simple and it only hits Infura/Ethereum on the initial call. All future calls for the same block or transaction are then returned from cache; writes are automatically forwarded to Infura. This was a seamless optimization. We simply had to point our application to our proxy and everything worked without any changes to our application.
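
      A minimal sketch of such a read-through cache in TypeScript. The upstream URL, the set of "cacheable" methods, and the cache key are illustrative assumptions, not the authors' actual code:

      ```ts
      // Read-through cache in front of a JSON-RPC endpoint: immutable reads
      // (historical blocks/transactions) are served from memory after the first
      // call; everything else, including writes, is forwarded upstream.
      const UPSTREAM = 'https://mainnet.infura.io/v3/<project-id>'; // placeholder
      const CACHEABLE = new Set(['eth_getBlockByNumber', 'eth_getTransactionByHash']);
      const cache = new Map<string, unknown>();

      export async function rpc(method: string, params: unknown[]): Promise<unknown> {
        const key = method + ':' + JSON.stringify(params);
        // (a real version would skip caching when params contain the 'latest' tag)
        if (CACHEABLE.has(method) && cache.has(key)) return cache.get(key);

        const res = await fetch(UPSTREAM, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
        });
        const { result } = await res.json();

        if (CACHEABLE.has(method)) cache.set(key, result);
        return result;
      }
      ```
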
  6. Aug 2021
    1. Complex challenges, on the other hand, require innovative responses. These are the confounding head-scratchers with no right answers, only best attempts. There’s no straight line to a solution, and you can only know that you’ve found an effective strategy in retrospect. You never really solve your complex challenges–most of the time, you have to push forward and see how it goes.
    2. Complicated challenges are technical in nature. They have straight-line, step-by-step solutions, and tend to be predictable. People with the right expertise can usually design solutions that are easy to implement.
    3. Humans can master highly sophisticated technical and technological challenges because we’re very skilled at making linear connections from one technical feat to the next. But when it comes to multi-dimensional challenges, it’s a whole different ballgame. We can’t solve them with linear thinking or rely on technical prowess. Sometimes, they move and change at a rate faster than we can act. They don’t patiently await solutions. They are complex problems–which is a whole different ball game than merely complicated issues.
    1. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes
    1. It's time to put some of these pieces together. We know that: calling setState() queues a render of that component; React recursively renders nested components by default; context providers are given a value by the component that renders them; and that value normally comes from that parent component's state. This means that by default, any state update to a parent component that renders a context provider will cause all of its descendants to re-render anyway, regardless of whether they read the context value or not!
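
      A small sketch of that chain (component names are made up): the provider's value comes from parent state, so every update re-renders the parent and, by default, everything under it, whether or not a child reads the context.

      ```tsx
      import { createContext, useContext, useState } from 'react';

      const CountContext = createContext(0);

      function DeepChild() {
        // Reads the context, so it should update when the value changes.
        return <span>{useContext(CountContext)}</span>;
      }

      function Unrelated() {
        // Never reads the context, but still re-renders on every click,
        // simply because its parent re-rendered (React's default behavior).
        return <p>I do not use the count.</p>;
      }

      export function Parent() {
        const [count, setCount] = useState(0);
        return (
          <CountContext.Provider value={count}>
            <button onClick={() => setCount(c => c + 1)}>+1</button>
            <DeepChild />
            <Unrelated />
          </CountContext.Provider>
        );
      }
      ```
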
    2. Immutability and Rerendering

      This section is gold to use as a teaching example

    3. All of these approaches use a comparison technique called "shallow equality". This means checking every individual field in two different objects, and seeing if any of the contents of the objects are a different value. In other words, obj1.a === obj2.a && obj1.b === obj2.b && ......... This is typically a fast process, because === comparisons are very simple for the JS engine to do. So, these three approaches do the equivalent of const shouldRender = !shallowEqual(newProps, prevProps).
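
      A typical shallowEqual looks roughly like this (a sketch, not React's or Redux's exact source):

      ```ts
      // Shallow equality: same keys, and every value is identical (Object.is)
      // between the two objects. One level deep only; nested objects compare by reference.
      function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
        if (Object.is(a, b)) return true;
        const keysA = Object.keys(a);
        const keysB = Object.keys(b);
        if (keysA.length !== keysB.length) return false;
        return keysA.every(key => Object.is(a[key], b[key]));
      }

      // const shouldRender = !shallowEqual(newProps, prevProps);
      ```
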
    4. When trying to improve software performance in general, there are two basic approaches: 1) do the same work faster, and 2) do less work. Optimizing React rendering is primarily about doing less work by skipping rendering components when appropriate.
    5. After it has collected the render output from the entire component tree, React will diff the new tree of objects (frequently referred to as the "virtual DOM"), and collects a list of all the changes that need to be applied to make the real DOM look like the current desired output. The diffing and calculation process is known as "reconciliation".
    1. Junior and Senior Tranches: Borrower Pools have both a junior and senior tranche. Backers supply capital to the junior tranche, and the Senior Pool supplies capital to the senior tranche. When a borrower makes repayments, the Borrower Pool applies the amount first toward any interest and principal owed to the senior tranche at that time, and then toward any interest and principal owed to the junior tranche at that time.

      incentive to senior investors

    1. Here’s where immutability comes in: if you’re passing props into a PureComponent, you have to make sure that those props are updated in an immutable way. That means, if they’re objects or arrays, you’ve gotta replace the entire value with a new (modified) object or array. Just like with Bob – kill it off and replace it with a clone. If you modify the internals of an object or array – by changing a property, or pushing a new item, or even modifying an item inside an array – then the object or array is referentially equal to its old self, and a PureComponent will not notice that it has changed, and will not re-render. Weird rendering bugs will ensue.
    2. An easy way to optimize a React component for performance is to make it a class, and make it extend React.PureComponent instead of React.Component. This way, the component will only re-render if its state is changed or if its props have changed. It will no longer mindlessly re-render every single time its parent re-renders; it will ONLY re-render if one of its props has changed since the last render.
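
      A sketch of the failure mode described above (component names are illustrative): mutating the array keeps the same reference, so the PureComponent sees "equal" props and skips the re-render; replacing the array fixes it.

      ```tsx
      import React from 'react';

      class ItemList extends React.PureComponent<{ items: string[] }> {
        render() {
          return <ul>{this.props.items.map((item, idx) => <li key={idx}>{item}</li>)}</ul>;
        }
      }

      export class App extends React.Component<{}, { items: string[] }> {
        state = { items: ['first'] };

        addItemBuggy = () => {
          // Mutates in place: `items` is still the same array reference, so
          // ItemList's shallow prop comparison sees no change and skips rendering.
          this.state.items.push('next');
          this.setState({ items: this.state.items });
        };

        addItemCorrect = () => {
          // Replaces the array, so the shallow comparison notices the change.
          this.setState({ items: [...this.state.items, 'next'] });
        };

        render() {
          return (
            <>
              <button onClick={this.addItemBuggy}>add (buggy)</button>
              <button onClick={this.addItemCorrect}>add (correct)</button>
              <ItemList items={this.state.items} />
            </>
          );
        }
      }
      ```
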
    1. connect is pure: connect automatically makes connected components “pure,” meaning they will only re-render when their props change – a.k.a. when their slice of the Redux state changes. This prevents needless re-renders and keeps your app running fast.
    1. My personal summary is that new context is ready to be used for low frequency unlikely updates (like locale/theme). It's also good to use it in the same way as old context was used. I.e. for static values and then propagate updates through subscriptions. It's not ready to be used as a replacement for all Flux-like state propagation.
    2. One problem with the "Context vs Redux" discussions is that people often actually mean "I'm using useReducer to manage my state, and Context to pass down that value". But, they never state that explicitly - they just say "I'm using Context". That's a common cause of the confusion I see, and it's really unfortunate because it helps perpetuate the idea that Context "manages state"
    3. We can even say that server caching tools like React-Query, SWR, Apollo, and Urql fit the definition of "state management" - they store initial values based on the fetched data, return the current value via their hooks, allow updates via "server mutations", and notify of changes via re-rendering the component
    4. createContext() was designed to solve that problem, so that any update to a value will be seen in child components even if a component in the middle skips rendering.
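
      For reference, the pattern people usually mean by "I'm using Context" for state (names are illustrative): useReducer owns and updates the state, Context merely passes the current value and dispatch down the tree.

      ```tsx
      import { createContext, useContext, useReducer } from 'react';
      import type { Dispatch, ReactNode } from 'react';

      type State = { count: number };
      type Action = { type: 'increment' } | { type: 'decrement' };

      function reducer(state: State, action: Action): State {
        switch (action.type) {
          case 'increment': return { count: state.count + 1 };
          case 'decrement': return { count: state.count - 1 };
        }
      }

      // Context does no "managing" here; it only transports what useReducer produced.
      const CounterContext = createContext<[State, Dispatch<Action>] | null>(null);

      export function CounterProvider({ children }: { children: ReactNode }) {
        const value = useReducer(reducer, { count: 0 });
        return <CounterContext.Provider value={value}>{children}</CounterContext.Provider>;
      }

      export function useCounter() {
        const ctx = useContext(CounterContext);
        if (!ctx) throw new Error('useCounter must be used inside CounterProvider');
        return ctx;
      }
      ```
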
    1. The Redux FAQ has some rules of thumb to help decide whether state should go in Redux, or stay in a component. In addition, if you separate your state by domain (by having multiple domain-specific contexts), then the problem is less pronounced as well.
    2. but in a more practical scenario, you often suffer from "death by a thousand cuts" which means that there's not really a single place that's slow, so you wind up applying React.memo everywhere. And when you do that, you have to start using useMemo and useCallback everywhere as well (otherwise you undo all the work you put into React.memo). Each of these optimizations together may solve the problem, but it drastically increases the complexity of your application's code and it actually is less effective at solving the problem than colocating state because React does still need to run through every component from the top to determine whether it should re-render. You'll definitely be running more code with this approach, there's no way around that.
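
      A sketch of why React.memo tends to drag useCallback along with it (names are illustrative): a freshly created function prop defeats the memoization, because the shallow props comparison is by reference.

      ```tsx
      import { memo, useCallback, useState } from 'react';

      const ExpensiveChild = memo(function ExpensiveChild({ onSave }: { onSave: () => void }) {
        // Only re-renders when its props change by reference.
        return <button onClick={onSave}>save</button>;
      });

      export function Form() {
        const [draft, setDraft] = useState('');

        // Without useCallback this would be a brand-new function on every keystroke,
        // so ExpensiveChild would re-render anyway and React.memo would buy nothing.
        const onSave = useCallback(() => console.log('saving'), []);

        return (
          <>
            <input value={draft} onChange={e => setDraft(e.target.value)} />
            <ExpensiveChild onSave={onSave} />
          </>
        );
      }
      ```
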
    1. I consistently see developers putting all of their state into redux. Not just global application state, but local state as well. This leads to a lot of problems, not the least of which is that when you're maintaining any state interaction, it involves interacting with reducers, action creators/types, and dispatch calls, which ultimately results in having to open many files and trace through the code in your head to figure out what's happening and what impact it has on the rest of the codebase.
    1. Improving our information diet is essential, not only to avoid getting distracted, but also to put our time to much better use and learn new things and skills instead.

      That's an interesting way to put it. Information diet.

    1. When you download a torrent file directly, you are getting that .torrent file from the web server. When you use a magnet link, the URL you clicked on is passed over to your torrent client, which uses the DHT P2P network to find other torrent clients with that file and download the .torrent file from them. The original web server only gave you the original URL, and isn't involved in you fetching that content. So, magnet URLs have the advantage that they don't require the server to actually serve up the .torrent files, and they give an easy way for users to share links to torrents instead of having to share the entire .torrent file. The original web server can be years dead, yet the magnet URL can still keep working as long as there are users out there with that file.
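
      For reference, a magnet link is just a URI whose query parameters carry the info-hash (xt), a display name (dn) and optional tracker hints (tr); the hash below is a made-up placeholder.

      ```ts
      // Anatomy of a magnet link (placeholder info-hash, not a real torrent).
      const magnet =
        'magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567' +
        '&dn=example-file&tr=udp%3A%2F%2Ftracker.example.org%3A1337';

      // Everything after "?" is an ordinary query string.
      const params = new URLSearchParams(magnet.split('?')[1]);
      console.log(params.get('xt')); // urn:btih:... (enough for the DHT lookup)
      console.log(params.get('dn')); // human-readable display name
      console.log(params.get('tr')); // optional tracker hint
      ```
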
    1. If it looks ugly, it is most likely a terrible mistake.

      I have a similar rule myself: if it looks wrong or overly ingenious, it is wrong

    2. Only learn from the best. So when I was learning Go, I read the standard library.
    1. The authors of Team Topologies suggest that we flip this law on its head. If we can make teams that map to the structure that we want our software system to be like, then we’ll succeed when Conway’s Law kicks in.
    1. I can give the following advice: be a multiplier! I've seen many talented senior engineers who were very productive on their own but failed to help others to grow.
    1. When I follow a tutorial, I like to play with the code. Instead of copy/pasting the provided code verbatim, try experimenting with it: what happens if you omit one of the lines? Or if you change some of the values?
  7. Jul 2021
    1. Every commit should be runnable, that is we should be able to git checkout any commit and get a functional code base. This means no “WIP” commits or commit chains that only restore functionality at the end. This is important so that we can revert or rollback to any commit if things go sideways.

      I feel this is somewhat impractical; I may lose too much time trying to build the perfect history when I can make small PRs and squash the commits (from the PR) into one
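
      One hedged way to reconcile the two (commit freely while working, clean up before merge) uses standard git commands; `make test` below is just a placeholder for whatever build/test command applies:

      ```sh
      git commit -m "wip"                              # messy commits are fine locally
      git rebase -i origin/main                        # then squash/reword into runnable commits
      git rebase -i --exec "make test" origin/main     # optionally verify each kept commit still passes
      ```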

    1. First, try to avoid making career decisions while in a bad mental place. Take a two week vacation. Get out of your house a few times if you’ve been living and working at home for the last eighteen months. Try to restart something you loved but have stopped due to pandemic concerns. You don’t need to find a sustainable long-term solution, just enough of a break to find some space to reflect before making life-altering changes.The second piece of advice I offer is that great careers are often presented as linear growth stories, but if you dig deeply enough they often have a number of lulls embedded inside them. When you’re high energy, these lulls are opportunities to learn and accelerate your trajectory. When you’re low energy, they are a ripe opportunity to rest. It’s easy to forget about these pockets in retrospect, and I recently realized I’ve gotten so practiced at narrating my own career in a particular way that I’ve forgotten my own lulls that have made the faster periods sustainable.
  8. Jun 2021
    1. Worse still is the issue of “service” layers requiring you to basically build your own ORM. To really do a backend-agnostic service layer on top of the Django ORM, you need to replace or abstract away some of its most fundamental and convenient abstractions. For example, most of the commonly-used ORM query methods return either instances of your model classes, or instances of Django’s QuerySet class (which is a kind of chained-API results wrapper around a query). In order to avoid tightly coupling to the structure and API of those Django-specific objects, your service layer needs to translate them into something else — likely generic iterables to replace QuerySet, and some type of “business object” instance to replace model-class instances. Which is a non-trivial amount of work even in patterns like Data Mapper that are designed for this, and even more difficult to do in an Active Record ORM that isn’t.

      I see what this guy means and he has a point. However, I don't think about reimplementing these things when talking about services on Django. I want a centralized place to store business logic (not glue queries) and avoid multiple developers writing the same query multiple times in multiple places. The name "service" here sucks.

    2. A second problem is that when you decide to go the “service” route, you are changing the nature of your business. This is related to an argument I bring up occasionally when people tell me they don’t use “frameworks” and never will: what they actually mean, whether they realize it or not, is “we built and now have to maintain and train our developers on our own ad-hoc private framework, on top of whatever our normal business is”. And adopting the service approach essentially means that, whatever your business was previously, now your business is that plus developing and maintaining something close to your own private ORM.

      I don't think these two things are even close to being the same thing. Django's ORM is not replaced by services; from what I know, services still use the ORM, the difference being that the calls are concentrated in one module.

    1. This isn't about writing boilerplate setter properties for each field in the model, but rather about writing methods that encapsulate the point of interaction with the database layer. View code can still inspect any field on the model and perform logic based on that, but it should not modify that data directly. We're ensuring that there is a layer at which we can enforce application-level integrity constraints that exist on top of the integrity constraints that the database provides for us.

      Addresses the issue raised in this tweet. We are not writing getters and setters out of obligation or convention.

  9. May 2021
    1. That’s how blogging is complementary to other forms of more serious work: when you’ve done enough of it, you can get entire essays, speeches, stories, novels, spontaneously appearing in a state of near-completeness, ready to be written.

      This sounds a lot like the Zettelkasten method. If writing is your default mode, writing complex pieces is just making concrete an organization of things that were already formalized in your mind