authentication
draw a sequence diagram of the flow to understand auth
OpenTelemetry
takeaway
Prometheus = metric time-series engine
OpenTelemetry = unified instrumentation standard for metrics, traces, logs
Prometheus tells you “the system is slow”; OpenTelemetry tells you “this span made it slow”
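a minimal sketch of what that span-level view looks like with the OpenTelemetry Python SDK; the span names and the console exporter are just for illustration:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# wire up a tracer that prints finished spans to stdout
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("handle_request"):   # the whole request
    with tracer.start_as_current_span("db_query"):      # the slow part
        pass  # ...query here; this span's duration pinpoints the slowness
```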
command pattern
good question. they overlap a bit but they serve different purposes.
command = intent
event = outcome
command pattern is about explicit control of an action.
you create a Command object, pass it to an executor, and call execute() when you want. you know who asked for it, when it ran, and what the outcome was. it’s about invoking behavior in a controlled and traceable way.
event-driven architecture is about reactions to things that already happened. an event is a statement of fact: “user.created”. it doesn’t command anyone to do something; it just signals that something occurred. listeners may respond or ignore it, you don’t control that directly.
so:
many systems use both. commands cause events.
example:
CreateUserCommand → executed → emits UserCreatedEvent.
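a minimal sketch of both ideas together; the class and bus names are illustrative, not from any framework:

```python
from dataclasses import dataclass

@dataclass
class CreateUserCommand:   # intent: "create this user"
    email: str

@dataclass
class UserCreatedEvent:    # fact: "a user was created"
    email: str

class EventBus:
    def __init__(self):
        self.listeners = []
    def subscribe(self, listener):
        self.listeners.append(listener)
    def publish(self, event):
        for listener in self.listeners:  # listeners may react or ignore
            listener(event)

def execute(cmd: CreateUserCommand, bus: EventBus):
    # ...persist the user here (explicit, controlled invocation)...
    bus.publish(UserCreatedEvent(cmd.email))  # the command causes the event

bus = EventBus()
bus.subscribe(lambda e: print("send welcome email to", e.email))
execute(CreateUserCommand("a@example.com"), bus)
```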
service mesh (Istio, Linkerd)
Here’s how it works:
Each service in your system gets a sidecar proxy (like Envoy) deployed next to it. This proxy intercepts all inbound and outbound traffic for that service.
These proxies handle network-level logic: retries, timeouts, encryption (mTLS), rate limiting, circuit breaking, etc.
A control plane (in Istio or Linkerd) manages all these proxies. It pushes configurations dynamically — so you can, for example, shift 10% of traffic to a new version or enforce mTLS across all services without touching the app code.
Because all traffic goes through these proxies, you get automatic observability — metrics, tracing, and logs across your entire service graph.
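to make the traffic-shifting idea concrete, here’s a hedged sketch of the weighted routing a sidecar applies when the control plane says “shift 10% to v2” (service names are made up):

```python
import random

ROUTES = [("user-service-v1", 0.9), ("user-service-v2", 0.1)]

def pick_backend(routes):
    # pick a backend proportionally to its configured weight
    r = random.random()
    cumulative = 0.0
    for backend, weight in routes:
        cumulative += weight
        if r < cumulative:
            return backend
    return routes[-1][0]  # guard against float rounding

print(pick_backend(ROUTES))  # ~90% v1, ~10% v2
```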
circuit breakers
a circuit breaker is a guard in front of your call. when a downstream keeps failing beyond a set threshold (say 5 fails in a row), the circuit “opens.” once open, all future calls immediately fail instead of wasting resources on requests that’ll fail anyway. after a cooldown (timeout), it switches to half-open, allowing a few test calls through. if they succeed, it closes back to normal; if they fail, it stays open longer.
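a minimal sketch of those three states; the threshold and cooldown values are illustrative:

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold = threshold   # consecutive failures before opening
        self.cooldown = cooldown     # seconds before trying half-open
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn, *args):
        if self.state == "open":
            if time.time() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.state = "half-open"  # cooldown elapsed: allow a test call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.threshold:
                self.state = "open"   # test call failed or threshold hit
                self.opened_at = time.time()
            raise
        self.failures = 0
        self.state = "closed"         # success closes the circuit
        return result
```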
sharding
partitioning
quick takeaway
Partitioning = performance tuning (single node)
Sharding = scaling out (multiple nodes)
Replication = redundancy or read scaling (same data copied)
so, 3 separate ideas:
partition = split table
shard = split dataset across DBs
replica = copy dataset across DBs
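a tiny sketch of the “split dataset across DBs” idea: route each key to one shard (shard names are placeholders):

```python
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(user_id: int) -> str:
    # modulo routing keeps all of a user's rows on the same shard;
    # real systems often use consistent hashing so resharding moves less data
    return SHARDS[user_id % len(SHARDS)]

print(shard_for(42))  # -> db-shard-0
```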
consistency
before vs after a transaction, the db is in a valid state, and everything constraining or derived from the data still holds (like triggers, indexes, foreign keys)
read uncommitted: can read dirty data
read committed: only committed data, no dirty reads
repeatable read: same reads during a txn, may still allow phantom rows
serializable: strictest, prevents all anomalies, lowest concurrency
anomalies possible at each level:
read uncommitted → dirty reads, non-repeatable reads, phantom reads
read committed → non-repeatable reads, phantom reads (no dirty reads)
repeatable read → phantom reads only (no dirty or non-repeatable reads)
serializable → none (no dirty, non-repeatable, or phantom reads)
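a small sketch of picking a level per transaction in PostgreSQL; the DSN and the accounts table are placeholders, not from the notes:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
with conn, conn.cursor() as cur:
    # must be the first statement of the transaction
    cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
    cur.execute("SELECT balance FROM accounts WHERE id = %s", (1,))
    print(cur.fetchone())  # re-reads inside this txn see the same value
```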
covering index
engine skips main table read → serves result right from index.
indexing
primary - data stored in index order - 1-step lookup - main access key
secondary - points to primary index fields - 2-step lookup - extra filters / sorts
covered - stores full needed cols - 1-step lookup - read-heavy endpoints
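a hedged sketch of a covering index in PostgreSQL (11+); the table and column names are invented for illustration:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
with conn, conn.cursor() as cur:
    # INCLUDE stores extra columns in the index itself,
    # so the query below never touches the main table
    cur.execute("""
        CREATE INDEX IF NOT EXISTS idx_orders_user_covering
        ON orders (user_id) INCLUDE (status, total)
    """)
    # served entirely from the index (an index-only scan)
    cur.execute("SELECT status, total FROM orders WHERE user_id = %s", (42,))
```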
flexibility
no strict schema
higher throughput can degrade latency due to queuing or resource saturation
and the other way round: higher latency lowers the throughput you can sustain
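Little’s Law ties the two together: in-flight requests = throughput × latency. as a worked example, 100 req/s at 50 ms average latency means ~5 requests in flight; if saturation pushes latency to 500 ms, you’d need ~50 in flight just to hold the same throughput.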
Dayflow
what problem does it solve?
ask what would be good ideas for someone else to explore
whattttt?? blows my mind
(Exception: Don't avoid love.)
yes, sensei
Growing an audience is another: the more fans you have, the more new fans they'll bring you.
classic community sales paradigm