50 Matching Annotations
  1. Dec 2024
    1. Unlike SQL designs that build on K/V stores, this allows DSQL to do much of the heavy lifting of filtering and finding data right next to the data itself, on the storage replicas, without sacrificing scalability of storage or compute.

      This is different from CockroachDB, then: there the KV layer doesn't really know how to do the filtering, aggregation, and projection work, because it takes care of reliability instead, so presumably there is a performance cost.
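
      For concreteness, a made-up query where this distinction shows up (table and values are hypothetical):

      ```sql
      -- Per the quoted claim, DSQL can evaluate this WHERE clause on the
      -- storage replicas and ship back only the matching rows, whereas a
      -- design whose K/V layer only stores and retrieves rows has to send
      -- candidate rows up to the SQL/compute layer and filter them there.
      SELECT order_id, total
      FROM orders
      WHERE customer_id = 42
        AND created_at >= DATE '2024-12-01';
      ```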

  2. Jan 2024
  3. Oct 2023
  4. Sep 2023
  5. Jul 2023
  6. Jun 2023
  7. May 2023
  8. Jan 2023
  9. Dec 2022
    1. Postgres itself is a database “server.” There are several ways to connect to Postgres via “clients,” including GUIs, CLIs, and programming languages, often via ORMs.
  10. Nov 2022
  11. Oct 2022
    1. Supabase is an open source Firebase alternative. Start your project with a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, and Storage.

      https://supabase.com/


      Found this as it's presumably being used by https://www.explainpaper.com/ with improper configurations.

  12. Jul 2022
  13. May 2021
  14. Mar 2021
  15. Feb 2021
  16. Jan 2021
  17. Oct 2020
  18. Dec 2019
  19. Jul 2019
  20. Jun 2019
    1. Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers
  21. May 2019
  22. Apr 2019
    1. It is important to understand the interaction between aggregates and SQL's WHERE and HAVING clauses. The fundamental difference between WHERE and HAVING is this: WHERE selects input rows before groups and aggregates are computed (thus, it controls which rows go into the aggregate computation), whereas HAVING selects group rows after groups and aggregates are computed. Thus, the WHERE clause must not contain aggregate functions; it makes no sense to try to use an aggregate to determine which rows will be inputs to the aggregates. On the other hand, the HAVING clause always contains aggregate functions. (Strictly speaking, you are allowed to write a HAVING clause that doesn't use aggregates, but it's seldom useful. The same condition could be used more efficiently at the WHERE stage.)

      WHERE >> AGGREGATE >> HAVING (HAVING is where aggregate functions are allowed)

      This is the order SQL applies when evaluating a query.

      And yes, HAVING does almost the same filtering job as WHERE, but it accepts the grouping and aggregate expressions that are not allowed in WHERE.
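
      A small example of that sequence, using a made-up sales table:

      ```sql
      -- Hypothetical table: sales(region text, amount numeric, sold_at date)
      -- 1. WHERE filters individual rows before any grouping happens.
      -- 2. Surviving rows are grouped and SUM(amount) is computed per region.
      -- 3. HAVING filters whole groups using the aggregate result, which a
      --    WHERE clause is not allowed to reference.
      SELECT region, SUM(amount) AS total
      FROM sales
      WHERE sold_at >= DATE '2019-01-01'   -- row-level filter, no aggregates
      GROUP BY region
      HAVING SUM(amount) > 10000;          -- group-level filter on the aggregate
      ```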

  23. Jan 2018
  24. Dec 2017
  25. Oct 2017
    1. MySQL’s replication architecture means that if bugs do cause table corruption, the problem is unlikely to cause a catastrophic failure.

      I can't follow the reasoning here. I guess statement-based replication isn't guaranteed to copy the corruption byte-for-byte the way Postgres's physical WAL replication would, but it still seems entirely possible to trigger similar or identical corruption, since the replica executes essentially the same logical statement with a similar implementation.

  26. Sep 2017
    1. if the table they would write to is not a temporary table

      Writes are allowed on temporary tables.

  27. Aug 2017
  28. Jul 2017
  29. Dec 2016
    1. The real benefit of JSONB: Indexes. We want our application to be fast. Without indexes, the database is forced to go from record to record (a table scan), checking to see if a condition is true. It’s no different with JSON data. In fact, it’s most likely worse since Postgres has to step into each JSON document as well.

      This solves the problem from the last implementation I handled, where json (not jsonb) data was stored in Postgres; a sketch of the fix is below.
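
      (Table, column, and key names here are made up.)

      ```sql
      -- Store documents as jsonb rather than json so they can be indexed.
      CREATE TABLE events (
          id   bigserial PRIMARY KEY,
          data jsonb NOT NULL
      );

      -- A GIN index lets containment queries use the index instead of
      -- scanning and re-parsing every document.
      CREATE INDEX events_data_idx ON events USING GIN (data);

      -- This containment query can use the GIN index (the planner decides
      -- based on costs).
      SELECT id
      FROM events
      WHERE data @> '{"type": "signup"}';
      ```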

    1. When you’re picking a data store, the most important thing to understand is where in your data — and where in its connections — the business value lies. If you don’t know yet, which is perfectly reasonable, then choose something that won’t paint you into a corner. Pushing arbitrary JSON into your database sounds flexible, but true flexibility is easily adding the features your business needs.

      This is an old article, but the thinking is still valuable for system design.

    1. The BSON format used by MongoDB is limited to a maximum of 64 bits for representing an integer or floating point number, whereas the JSONB format used by Postgres does not have this limit. Postgres provides data constraint and validation functions to help ensure that JSON documents are more meaningful: for example, preventing attempts to store alphabetical characters where numerical values are expected. MongoDB offers automatic database sharding for easy horizontal scaling of JSON data storage. Scaling of Postgres installations has often been vertical. Horizontal scaling of Postgres is also possible, but tends to be more involved or use an additional third party solution. MongoDB also offers the possibility of increasing write throughput by deferring writing to disk. The tradeoff is potential loss of data, but this may suit users who have less need to persist their data.

      Good pros and cons of MongoDB vs Postgres for JSONB; a constraint example is sketched below.
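
      For the constraint/validation point, a minimal Postgres sketch (table and key names are made up):

      ```sql
      -- Require a "qty" key whose value is a JSON number:
      -- '{"qty": 3}' is accepted; '{"qty": "three"}' or a missing key is rejected.
      CREATE TABLE line_items (
          id  bigserial PRIMARY KEY,
          doc jsonb NOT NULL,
          CONSTRAINT qty_is_number
              CHECK (doc ? 'qty' AND jsonb_typeof(doc -> 'qty') = 'number')
      );

      INSERT INTO line_items (doc) VALUES ('{"qty": 3}');        -- ok
      INSERT INTO line_items (doc) VALUES ('{"qty": "three"}');  -- fails the CHECK
      ```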

  30. Mar 2016
    1. The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. (These statements do not apply when dropping the system oid column; that is done with an immediate rewrite.)
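
      In practice that means a statement like the one below returns immediately but leaves the old column's bytes in place; a full-table rewrite such as VACUUM FULL reclaims the space right away, at the cost of an exclusive lock (table name is made up):

      ```sql
      -- Instant: only marks the column as dropped in the catalog.
      ALTER TABLE measurements DROP COLUMN raw_payload;

      -- Space is reclaimed gradually as existing rows are updated, or
      -- immediately by rewriting the whole table:
      VACUUM FULL measurements;   -- takes an ACCESS EXCLUSIVE lock
      ```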