114 Matching Annotations
  1. Jan 2024
  2. Sep 2023
  3. Jul 2023
    1. Tell future-you why you did that thing; they can read but don't know what you intended. Oral tradition is like never writing state to disk; flush those buffers.
  4. Jun 2023
    1. Platform engineering tries to deliver the self-service tools teams want to consume to rapidly deploy all components of software. While it may sound like a TypeScript developer would feel more empowered by writing their infrastructure in TypeScript, the reality is that it is a significant undertaking to learn to use these tools properly when all one wants to do is create or modify a few resources for a project. This is also a common source of technical debt and fragility. Most users will learn only the minimum they need to make progress on their project, and often that is not the best solution for the longevity of a codebase.

      These tools straddle an awkward line that is optimized for no one. Traditional DevOps engineers are not software engineers, and software engineers are not DevOps engineers. Making infrastructure a software engineering problem puts all parties in an unfamiliar position. This is not to say that no one is capable of using these tools well; the DevOps and software engineers I have worked with are more than capable. It is a matter of attention. Given what a DevOps engineer has to deal with day in and day out, the nuances of TypeScript or Go will take a backseat; conversely, the nuances of, for example, a VPC will take a backseat for a software engineer delivering a new feature. The gap that the AWS CDK and Pulumi try to bridge is optimized for no one, and this is how we get bugs and, more dangerously, security holes.
  5. Jan 2023
    1. tl;dw (best DevOps tools in 2023)

      1. Low-budget cloud computing : Civo (close to Scaleway)
      2. Infrastructure and Service Management: Crossplane
      3. App Management - manifests : cdk8s (yes, not Kustomize or Helm)
      4. App Management - k8s operators: tie between Knative and Crossplane
      5. App Management - managed services: Google Cloud Run
      6. Dev Envs: Okteto (yeap, not GitPod)
      7. CI/CD: GitHub Actions (as it's simplest to use)
      8. GitOps (CD): Argo CD (wins over Flux due to its adoption rate)
      9. Policy Management: Kyverno (simpler to use than industry's most powerful tool: OPA / Gatekeeper)
      10. Observability: OpenTelemetry (instrumentation of apps), VictoriaMetrics (metrics - yes not Prometheus), Grafana / Loki (logs), Grafana Tempo (tracing), Grafana (dashboards), Robusta (alerting), Komodor (troubleshooting)
    1. Get production-accurate data and preview databases to code against, fast.

      A dedicated tool for standing up development databases from snapshots of the production database. It supports Postgres; as an early-stage product, the plan seems to be to build out Postgres features first and add MySQL and others later.

  6. www.dbmarlin.com
    1. DB marlin. A monitoring solution dedicated to databases. Is it better than DataDog? Monitoring seems most effective when tied into an APM, so wouldn't a dedicated database-only solution be the weaker option?

  7. Dec 2022
  8. Nov 2022
  9. Oct 2022
  10. Sep 2022
    1. DevOps is a set of practices, tools, and a cultural philosophy that automate and integrate the processes between software development and IT teams. It emphasizes team empowerment, cross-team communication and collaboration, and technology automation.

      This is important

  11. Aug 2022
    1. Computers and networks have transformed many aspects of our everyday routines. The evolution resulted in new learning and communications techniques as well as security requirements for virtual systems.

  12. Jun 2022
    1. DevOps team roles and responsibilities

      In a DevOps team, everyone participates in software development and is responsible for its early release. What sets it apart from other development methods is a process we call Continuous Deployment. Which roles do we recognize within a team, and which skills are needed?

    1. In a Staging workflow, releases are slower because of more steps, and bigger because of batching.

    2. For Staging to be useful, it has to catch a special kind of issue: one that 1) would happen in production, but 2) wouldn’t happen on a developer's laptop. What are these? They might be problems with data migrations, database load and queries, and other infra-related problems.

      How a "Staging" environment can be useful

  13. Apr 2022
    1. Which Components of IT Infrastructure do we need for DevOps?

      Many companies that want to move to DevOps eventually struggle with the question “What are the components of IT infrastructure we need?” The use of DevOps stems from the desire to release software more often and faster. The traditional operations team (OPS), however, will not wholeheartedly embrace this, because it benefits more from keeping the infrastructure stable.

  14. Mar 2022
    1. DevOps is an interesting case study for understanding MLOps for a number of reasons: It underscores the long period of transformation required for enterprise adoption. It shows how the movement is comprised of both tooling advances as well as shifts in cultural mindset at organizations. Both must march forward hand-in-hand. It highlights the emerging need for practitioners with cross-functional skills and expertise. Silos be damned.

      3 things MLOps can learn from DevOps

    1. Deployment: Significance of Branching for Continuous Delivery

      The deployment of software in DevOps is based on Continuous Delivery. Continuous Delivery enables all kinds of changes, including new features, configuration changes, bug fixes and experiments, to be put into production safely and quickly in a sustainable manner. A branching strategy and trunk-based development play an important role in this.

  15. Jan 2022
    1. What is End User Computing (EUC)? Thanks to the progressive introduction of DevOps, attention to the role of the end user in software development and testing has increased significantly. It is now important to think like an end user when we develop and test software. After all, that's what we're all doing it for.

    1. you can also mount different FastAPI applications within the FastAPI application. This would mean that every sub-FastAPI application would have its docs, would run independent of other applications, and will handle its path-specific requests. To mount this, simply create a master application and sub-application file. Now, import the app object from the sub-application file to the master application file and pass this object directly to the mount function of the master application object.

      It's possible to mount FastAPI applications within a FastAPI application

    1. There are officially 5 types of UUID values, version 1 to 5, but the most common are: time-based (version 1 or version 2) and purely random (version 4). The time-based UUIDs encode the number of 100ns intervals since October 15th, 1582 in 7.5 bytes (60 bits), which is split in a “time-low”-“time-mid”-“time-hi” fashion. The missing 4 bits are the version number used as a prefix to the time-hi field. This yields the 64 bits of the first 3 groups. The last 2 groups are the clock sequence, a value incremented every time the clock is modified, and a host unique identifier.

      There are 5 types of UUIDs (source):

      Type 1: stuffs MAC address+datetime into 128 bits

      Type 3: stuffs an MD5 hash into 128 bits

      Type 4: stuffs random data into 128 bits

      Type 5: stuffs an SHA1 hash into 128 bits

      Type 6: unofficial idea for sequential UUIDs

    2. Even though most posts are warning people against the use of UUIDs, they are still very popular. This popularity comes from the fact that these values can easily be generated by remote devices, with a very low probability of collision.
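
      The versions listed above map directly onto Python's standard `uuid` module; a quick sketch (the `example.com` name is just for illustration):

```python
import uuid

# Version 1: timestamp + host identifier (MAC address)
u1 = uuid.uuid1()

# Version 3: MD5 hash of a namespace + name (deterministic)
u3 = uuid.uuid3(uuid.NAMESPACE_DNS, "example.com")

# Version 4: 122 random bits (the usual choice)
u4 = uuid.uuid4()

# Version 5: SHA-1 hash of a namespace + name (deterministic)
u5 = uuid.uuid5(uuid.NAMESPACE_DNS, "example.com")

for u in (u1, u3, u4, u5):
    print(u.version, u)
```

      The name-based versions (3 and 5) are deterministic: the same namespace and name always yield the same UUID, while version 4 draws 122 random bits, which is what makes collisions between independent generators so unlikely.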
  16. Dec 2021
    1. Artifactory/Nexus/Docker repo was unavailable for a tiny fraction of a second when downloading/uploading packages. The Jenkins builder randomly got stuck.

      Typical random issues when deploying microservices

    2. Microservices can really bring value to the table, but the question is; at what cost? Even though the promises sound really good, you have more moving pieces within your architecture which naturally leads to more failure. What if your messaging system breaks? What if there’s an issue with your K8S cluster? What if Jaeger is down and you can’t trace errors? What if metrics are not coming into Prometheus?

      Microservices have quite many moving parts

    3. If you’re going with a microservice:

      9 things needed for deploying a microservice (listed below)

    4. Let’s take a simple online store app as an example.

      5 things needed for deploying a monolith (listed below)

    5. some of the pros for going microservices

      Pros of microservices (not always all are applicable):

      • Fault isolation
      • Eliminating the technology lock
      • Easier understanding
      • Faster deployment
      • Scalability
    1. We’ve spent a lot of time telling our developers, go faster, go faster. And then turning to our operators and saying, keep it stable. And I’ve been that operator. I know the best way to keep the system stable is to not change it. But that’s not good for our customers.

      That is a very good quote!

  17. Oct 2021
    1. In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data which is too complex to be understood and modeled by hand.

      One of the best ways to picture a difference between DevOps and MLOps

  18. Sep 2021
    1. Upcoming Trends in DevOps and SRE in 2021: DevOps and SRE are domains with rapid growth and frequent innovations. With this blog you can explore the latest trends in DevOps and SRE and stay ahead of the curve.

      Top 2021 trends for DevOps, SRE.

  19. Jul 2021
    1. This means that an event-driven system focuses on addressable event sources while a message-driven system concentrates on addressable recipients. A message can contain an encoded event as its payload.

      Event-Driven vs Message-Driven
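
      The contrast can be sketched in a few lines of Python; the class and method names here are illustrative, not from any library. An event bus lets subscribers address the *event source*, while a message router delivers to an addressable *recipient* (and a message may carry an encoded event as its payload):

```python
from collections import defaultdict, deque

class EventBus:
    """Event-driven: subscribers address the source of events;
    the emitter does not know who is listening."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self.subscribers[event_name].append(handler)

    def emit(self, event_name, payload):
        for handler in self.subscribers[event_name]:
            handler(payload)

class MessageRouter:
    """Message-driven: the sender addresses a concrete recipient;
    the payload may be an encoded event."""
    def __init__(self):
        self.mailboxes = defaultdict(deque)

    def send(self, recipient, message):
        self.mailboxes[recipient].append(message)

    def receive(self, recipient):
        return self.mailboxes[recipient].popleft()

bus = EventBus()
bus.subscribe("order_placed", lambda e: print("billing saw", e))
bus.emit("order_placed", {"order_id": 1})  # prints: billing saw {'order_id': 1}
```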

    1. we want systems that are Responsive, Resilient, Elastic and Message Driven. We call these Reactive Systems.

      Reactive Systems:

      • responsive - responds in a timely manner
      • resilient - stays responsive in the face of failure
      • elastic - system stays responsive under varying workload
      • message driven - asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency

      as a result, they are:

      • flexible
      • loosely-coupled
      • scalable
      • easy to develop and change
      • more tolerant of failure
      • highly responsive with interactive feedback
    2. Resilience is achieved by replication, containment, isolation and delegation.

      Components of resilience

    3. Today applications are deployed on everything from mobile devices to cloud-based clusters running thousands of multi-core processors. Users expect millisecond response times and 100% uptime. Data is measured in Petabytes.

      Today's demands from users

  20. Jun 2021
    1. It basically takes any command line arguments passed to entrypoint.sh and execs them as a command. The intention is basically "Do everything in this .sh script, then in the same shell run the command the user passes in on the command line".

      What is the use of this part in a Docker entry point:

      #!/bin/bash
      set -e        # exit immediately if any setup command fails
      
      ... code ...
      
      exec "$@"     # replace this shell with the command passed in
      
    1. We should think about the number of simultaneous connections (peak and average) and the message rate/payload size. I think the threshold to start thinking about AnyCable (instead of just Action Cable) is somewhere between 500 and 1000 connections on average, or 5k-10k during peak hours.
      • number of simultaneous connections (peak and average)

      • the message rate/payload size.

    1. As it stands, sudo -i is the most practical, clean way to gain a root environment. On the other hand, those using sudo -s will find they can gain a root shell without the ability to touch the root environment, something that has added security benefits.

      Which sudo command to use:

      • sudo -i <--- most practical, clean way to gain a root environment
      • sudo -s <--- secure way that doesn't let touching the root environment
    2. Much like sudo su, the -i flag allows a user to get a root environment without having to know the root account password. sudo -i is also very similar to using sudo su in that it’ll read all of the environmental files (.profile, etc.) and set the environment inside the shell with it.

      sudo -i vs sudo su. Simply, sudo -i is a much cleaner way of gaining root and a root environment without directly interacting with the root user

    3. This means that unlike a command like sudo -i or sudo su, the system will not read any environmental files. This means that when a user tells the shell to run sudo -s, it gains root but will not change the user or the user environment. Your home will not be the root home, etc. This command is best used when the user doesn’t want to touch root at all and just wants a root shell for easy command execution.

      sudo -s vs sudo -i and sudo su. Simply, sudo -s is good for security reasons

    4. Though there isn’t very much difference from “su,” sudo su is still a very useful command for one important reason: When a user is running “su” to gain root access on a system, they must know the root password. The way root is given with sudo su is by requesting the current user’s password. This makes it possible to gain root without the root password which increases security.

      Crucial difference between sudo su and su: the way the password is provided

    5. “su” is best used when a user wants direct access to the root account on the system. It doesn’t go through sudo or anything like that. Instead, the root user’s password has to be known and used to log in with.

      The su command is used to get a direct access to the root account

  21. Mar 2021
  22. Jan 2021
    1. Different data sources are better suited for different types of data transformations and provide access to different data quantities at different freshnesses

      Comparison of data sources

      • Data warehouses / lakes (such as Snowflake or Redshift) tend to hold a lot of information but with low data freshness (hours or days). They can be a gold mine, but are most useful for large-scale batch aggregations with low freshness requirements, such as “number of lifetime transactions per user.”
      • Transactional data sources (such as MongoDB or MySQL) usually store less data at a higher freshness and are not built to process large analytical transformations. They’re better suited for small-scale aggregations over limited time horizons, like the number of orders placed by a user in the past 24 hrs.
      • Data streams (such as Kafka) store high-velocity events and provide them in near real-time (within milliseconds). In common setups, they retain 1-7 days of historical data. They are well-suited for aggregations over short time-windows and simple transformations with high freshness requirements, like calculating that “trailing count over the last 30 minutes” feature described above.
      • Prediction request data is raw event data that originates in real-time right before an ML prediction is made, e.g. the query a user just entered into the search box. While the data is limited, it’s often as “fresh” as can be and contains a very predictive signal. This data is provided with the prediction request and can be used for real-time calculations like finding the similarity score between a user’s search query and documents in a search corpus.
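
      The "trailing count over the last 30 minutes" feature mentioned above can be sketched with the standard library alone; the window length and timestamps here are illustrative:

```python
from collections import deque

class TrailingCounter:
    """Count events inside a sliding time window (e.g. 30 minutes),
    the kind of high-freshness aggregation a data stream supports."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # timestamps, oldest first

    def add(self, ts):
        self.events.append(ts)

    def count(self, now):
        # Evict events that have fallen out of the window, then count.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        return len(self.events)

counter = TrailingCounter(window_seconds=30 * 60)
for ts in (0, 600, 1200, 2400):   # events at minutes 0, 10, 20, 40
    counter.add(ts)
print(counter.count(now=2400))    # → 2 (the events at minutes 20 and 40)
```

      A real implementation would live in a stream processor, but the shape of the computation is the same: keep recent events, evict old ones, aggregate over what remains.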
    2. MLOps platforms like Sagemaker and Kubeflow are heading in the right direction of helping companies productionize ML. They require a fairly significant upfront investment to set up, but once properly integrated, can empower data scientists to train, manage, and deploy ML models. 

      Two popular MLOps platforms: Sagemaker and Kubeflow

    3. …Well, deploying ML is still slow and painful

      What the typical ML production pipeline may look like:

      Unfortunately, it ties the hands of Data Scientists and takes a lot of time to experiment and eventually ship the results to production

    1. DevOps Services

      If you want to find DevOps consulting services, I suggest checking Cleveroad.

  23. Nov 2020
    1. Automation suggests that a sysadmin has invented a system to cause a computer to do something that would normally have to be done manually. In automation, the sysadmin has already made most of the decisions on what needs to be done, and all the computer must do is execute a "recipe" of tasks. Orchestration suggests that a sysadmin has set up a system to do something on its own based on a set of rules, parameters, and observations. In orchestration, the sysadmin knows the desired end result but leaves it up to the computer to decide what to do.

      Most intuitive difference between automation and orchestration

    2. For instance, automation usually involves scripting, often in Bash or Python or similar, and it often suggests scheduling something to happen at either a precise time or upon a specific event. However, orchestration often begins with an application that's purpose-built for a set of tasks that may happen irregularly, on demand, or as a result of any number of trigger events, and the exact results may even depend on a variety of conditions.

      Automation is like a subset of orchestration.

      Orchestration suggests many moving parts, while automation usually refers to a single task or a small number of strongly related tasks.
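
      The distinction can be sketched as code (the function and resource names are made up for illustration): automation executes a fixed recipe, while orchestration compares a desired state with an observed one and decides the actions itself:

```python
# Automation: the sysadmin has already decided every step.
def automate(recipe):
    results = []
    for task in recipe:
        results.append(task())
    return results

# Orchestration: the sysadmin declares the desired end state and
# leaves it to the system to decide what to do.
def orchestrate(desired, observed):
    actions = []
    for name, replicas in desired.items():
        running = observed.get(name, 0)
        if running < replicas:
            actions.append(("start", name, replicas - running))
        elif running > replicas:
            actions.append(("stop", name, running - replicas))
    return actions

print(orchestrate({"web": 3, "worker": 1}, {"web": 1, "worker": 2}))
# → [('start', 'web', 2), ('stop', 'worker', 1)]
```

      This reconciliation loop is essentially what tools like Kubernetes run continuously, whereas a cron-scheduled script is classic automation.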

  24. Oct 2020
    1. Tabular Comparison Between All Deployment Methods:

      Tabular comparison of 4 deployment options:

      1. Travis-CI/Circle-CI
      2. Cloud + Jenkins
      3. Bitbucket Pipelines/Github Actions
      4. Automated Cloud Platforms
  25. May 2020
    1. more developers are becoming DevOps skilled and distinctions between being a software developer or hardware engineer are blurring
    1. Secrets management is one of the most sensitive and critical disciplines in all of DevOps and is becoming increasingly important as we move toward a fully continuous deployment world. AWS Keys, deploy keys, ssh keys are often the key attack vector for a bad actor or insider threat, and thus all users and customers are concerned about robust secrets management.
    1. In some contexts, "ops" refers to operators. Operators were the counterparts to Developers represented in the original coining of the term DevOps.

      I have always believed the Ops was short for Operations, not Operators.

      https://en.wikipedia.org/wiki/DevOps even confirms that belief.

    1. Continuous Delivery or Deployment is about running checks as thorough as you can to catch issues in your code. Completeness of the checks is the most important factor. It is usually measured in terms of code coverage or functional coverage of your tests. Catching errors early prevents broken code from getting deployed to any environment and saves the precious time of your test team.

      Continuous Delivery or Deployment (quick summary)

    2. Continuous Integration is a trade-off between the speed of the feedback loop to developers and the relevance of the checks you perform (build and test). No code that would impede the team's progress should make it to the main branch.

      Continuous Integration (quick summary)

    3. A good CD build: ensures that as many features as possible are working properly. The faster the better, but it is not a matter of speed; a 30-60 minute build is OK.

      Good CD build

    4. A good CI build: ensures that no code which breaks basic stuff and prevents other team members from working is introduced to the main branch; is fast enough to provide feedback to developers within minutes, to prevent context switching between tasks.

      Good CI build

    5. The idea of Continuous Delivery is to prepare artefacts as close as possible to what you want to run in your environment. These can be jar or war files if you are working with Java, or executables if you are working with .NET. These can also be folders of transpiled JS code or even Docker containers: whatever makes deploys shorter (i.e. you have pre-built as much as you can in advance).

      Idea of Continuous Delivery

    6. Continuous Delivery is about being able to deploy any version of your code at all times. In practice it means the last or second-to-last version of your code.

      Continuous Delivery

    7. Continuous Integration is not about tools. It is about working in small chunks, integrating your new code into the main branch, and pulling frequently.

      Continuous Integration is not about tools

    8. The app should build and start. Most critical features should be functional at all times (user signup/login journey and key business features). Common layers of the application that all the developers rely on should be stable; this means unit tests on those parts.

      Things to be checked by Continuous Integration

    9. Continuous Integration is all about preventing the main branch of being broken so your team is not stuck. That’s it. It is not about having all your tests green all the time and the main branch deployable to production at every commit.

      Continuous Integration prevents other team members from wasting time by pulling faulty code

  26. Apr 2020
    1. While talking about DevOps, three things are important: continuous integration, continuous delivery, and continuous deployment.

      DevOps process

      • Continuous Integration - code gets integrated several times a day (checked by automated pipeline(server))
      • Continuous Delivery - introducing changes with every commit, making code ready for production
      • Continuous Deployment - deployment in production is automatic, without explicit approval from a developer

    2. Basic prerequisites to learn DevOps

      Basic prerequisites to learn DevOps:

      • Basic understanding of Linux/Unix system concepts and administration
      • Familiarity with command-line interface
      • Knowing how build and deployment process works
      • Familiarity with text editor
      • Setting up a home lab environment with VirtualBox
      • Networking in VirtualBox
      • Setting up multiple VMs in VirtualBox
      • Basics of Vagrant
      • Linux networking basics
      • Good to know basic scripting
      • Basics of applications - Java, NodeJS, Python
      • Web servers - Apache HTTPD, Gunicorn, PM2
      • Databases - MySQL, MongoDB
    3. DevOps benefits

      DevOps benefits:

      • Improves deployment frequency
      • Helps with faster time to market
      • Lowers the failure rate of new releases
      • Increased code quality
      • More collaboration between the teams and departments
      • Shorter lead times between fixes
      • Improves the mean time to recovery
    4. Operations in the software industry include administrative processes and support for both hardware and software for clients as well as internal to the company. Infrastructure management, quality assurance, and monitoring are the basic roles for operations.

      Operations (1/2 of DevOps):

      • administrative processes
      • support for both hardware and software for clients, as well as internal to the company
      • infrastructure management
      • quality assurance
      • monitoring
    1. I set it with a few clicks at Travis CI, and by creating a .travis.yml file in the repo

      You can set CI with a few clicks using Travis CI and creating a .travis.yml file in your repo:

      language: node_js
      node_js: node
      
      before_script:
        - npm install -g typescript
        - npm install codecov -g
      
      script:
        - yarn lint
        - yarn build
        - yarn test
        - yarn build-docs
      
      after_success:
        - codecov
      
    3. Continuous integration makes it easy to check against cases when the code: does not work (but someone didn’t test it and pushed haphazardly), works only locally because it is based on local installations, or works only locally because not all files were committed.

      CI - Continuous Integration helps to check the code when it:

      • does not work (but someone didn’t test it and pushed haphazardly),
      • works only locally, as it is based on local installations,
      • works only locally, as not all files were committed.
    4. With Codecov it is easy to make jest & Travis CI generate one more thing:

      Codecov lets you generate a score on your tests:

    1. Continuous Deployment is the next step. You deploy the most up-to-date and production-ready version of your code to some environment; ideally production, if you trust your CD test suite enough.

      Continuous Deployment

  27. Mar 2020
  28. Feb 2020
    1. DevOps has taught us that the software development process can be generalized and reused for dealing with change not just in application code but also in infrastructure, docs and tests. It can all just be code.
  29. Dec 2019
    1. Environment variables are 'exported by default', making it easy to do silly things like sending database passwords to Airbrake.

      Airbrake: a monitoring service

  30. May 2019
    1. Valdomiro Bilharvas - More efficient squads with DevOps

      Another practical case that shows the importance of preparing a development environment that makes everyone's life easier and guarantees continuous, high-quality deliveries. The subject cuts across several topics of our DevOps Tools certification.

    2. Daniel Wildt, Guilherme Lacerda - Going back to the roots of Agile Development

      If you think DevOps is a fad, something new, watch the talk by Daniel and Guilherme, an always motivating duo.

    3. João Brito - CI/CD - Think a bit beyond the tools

      Continuous Integration and Delivery are also important topics of the LPI DevOps Tools certification, but as João Brito will explain in this talk, it is important to understand the reasons behind these tools.

      701.4 Continuous Integration and Continuous Delivery (weight: 5)

    4. Allan Moraes - Automating Infrastructure Monitoring

      Docker, Grafana and Ansible are part of Allan's talk and are also topics covered in the Linux Professional Institute's DevOps Tools exam.

      705.1 IT Operations and Monitoring (weight: 4)

    5. Amanda Matos - Metrics & DevOps - Why should you measure to conquer?

      Operations and monitoring have a dedicated topic among the LPI requirements for the DevOps certification. Amanda will explain how to implement metrics with open-source tools. Keep an eye out!

      705.1 IT Operations and Monitoring (weight: 4)

    6. Aurora Li Min de Freitas Wang - Being a dev amid DevOps: Changing the culture from the bottom up

      A DevOps career is quite attractive and challenging, but it requires a change in the prevailing culture. See what Aurora has to say about it.

    7. Mateus Prado - DevOps Engineers: why is it so hard to hire them?

      There is a shortage of professionals ready for a market that demands DevOps. Compare what Mateus needs with the topics we cover in our LPI DevOps Tools certification. Follow the topics at https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1 to build a study plan and become a good DevOps professional.

    8. Tiago Roberto Lammers - Our DevOps journey at Delivery Much toward microservices, and what we learned

      Microservices are one of the themes covered by the Linux Professional Institute's DevOps Tools certification and also a decisive subject when choosing tools for a DevOps professional's utility belt. Take the opportunity to talk with Tiago about his experience using Docker, a subject that also appears on the exam.

      Topics (among others):

      701.1 Modern Software Development (weight: 6) 701.4 Continuous Integration and Continuous Delivery (weight: 5) 702.1 Container Usage (weight: 7)

    9. Program

      Here in the DevOpsDays Porto Alegre program I have commented on the talks that can motivate you and give you more information about the topics covered in our DevOps Tools certification exam. Visit https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1 for the complete list of topics.

  31. Mar 2019
    1. End the chaos in your pipeline with 4 metrics and control tools

      Remember to click the title to see more annotations on the highlighted words and phrases!

    2. Closing and Raffles

      Will there be an LPI raffle? Stick around at the track! ;-)

    3. Hashicorp Vault: One-Time Password for SSH

      Now here is a subject I want to learn about! It is not explicitly covered by the DevOps certification topics, but take a look at the subjects covering ssh and security (also search for vault at https://wiki.lpi.org/wiki/DevOps_Tools_Engineer_Objectives_V1).

    4. How did iFood build its own RDS?

      It is always good to learn about real company cases (and to ask a lot of questions) to understand what the work of a DevOps professional really is, especially if you are a newcomer or want to become one.

    5. A free private NPM repository with Verdaccio and AWS

      Excellent for understanding Cloud Deployment in practice (one of our important subtopics!). You will also leave the talk with more tools for your utility belt!

    6. Using Traefik to automate the reverse proxy of your docker containers

      Even though this subject is not directly tested on the exam, these are tools that should be part of a good DevOps professional's utility belt. Search for "container" in our topics, at that link, and you will discover how important it is to know the subject well.

    7. How to build an ELK stack without headaches

      A subject extensively covered by subtopic 705.2 - Log Management and Analysis. Also take a look at this post on our blog.

    8. A CI/CD pipeline on Kubernetes using Jenkins and Spinnaker

      Wow! Many subjects of the LPI DevOps exam are explored in this talk. Keep an eye on the topic: 702 Container Management.

    9. Implementing CI with GitLab

      Even though the LPI DevOps exam topics do not cover Git only for continuous integration (it is used especially in Source Code Management), it is very important to know the continuous integration and delivery concepts covered in this talk. They are in this topic:

      701.4 Continuous Integration and Continuous Delivery

    10. DevOps Tools Track

      How about using the TDC DevOps Tools track to prepare for the LPI DevOps exam? My annotations on this page connect the subjects developed in the talks with the exam topics.

      In each talk, expand the subject (by clicking its title) to see the additional annotations I made.

    11. Jenkins Pipelines

      See this specific topic: 701.4 Continuous Integration and Continuous Delivery (weight: 5), especially the subject:

      • Understand how Jenkins models continuous delivery pipelines and implement a declarative continuous delivery pipeline in Jenkins

      More information also on our blog.

    12. Grafana

      Part of the first subject in topic 705 Service Operations: 705.1 IT Operations and Monitoring (weight: 4)

      Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana

      More information also in this post on our blog.

    13. Prometheus

      Part of the first subject in topic 705 Service Operations: 705.1 IT Operations and Monitoring (weight: 4)

      Understand the architecture of Prometheus, including Exporters, Pushgateway, Alertmanager and Grafana

      More information also in this post on our blog.

    14. Lunch Break

      Shall we use this break to talk about careers and certifications?

    15. Can you visualize the health of your application?

      Even though the certification topics do not cover exactly this subject, monitoring the health of a system and its applications is part of the DevOps professional's mission. Pay attention to the topics:

      701 Software Engineering 701.1 Modern Software Development (weight: 6)

      and

      705.2 Log Management and Analysis (weight: 4)

  32. Oct 2018
    1. We have also begun developing an instrument to assess organizations' readiness to adopt Agile and DevOps. We would welcome opportunities to pilot this assessment instrument with your organization.

      How do we get involved in the pilot?

  33. Jan 2018
    1. There is only one codebase per app, but there will be many deploys of the app

      Typically Terraform violates the spirit of this principle. Though each deploy may be defined in the same repo (typically as an environment), the codebase is different. We work around this by making heavy use of modules to limit divergence between deploys.

  34. Oct 2017
  35. Feb 2017
  36. Jan 2014
    1. Is it because ops care deeply about systems while devs consider them a tool or implementation detail?

      What is the divide?

    2. When I look at the DevOps “community” today, what I generally see is a near-total lack of overlap between people who started on the dev side and on the ops side.

      I see this same near-total lack of overlap. There is a different language, mindset, and approach.