[{"content":"I\u0026rsquo;ve been doing some cleanup of our image builds. With that I\u0026rsquo;ve wanted to add test-kitchen to the process, as it\u0026rsquo;s a nice wrapper that allows for running and re-running tests and ansible playbooks without too much legwork. If you don\u0026rsquo;t know test-kitchen, sometimes called KitchenCI, take a look here: https://github.com/test-kitchen/test-kitchen\nPre-reqs test-kitchen is a Ruby gem and can either be installed via gem install test-kitchen or as part of Chef Workstation. As we are also planning on using inspec, I went with the latter. I use bundle to manage the deployment of gems and ruby as\n","permalink":"https://staggerlee011.github.io/posts/test-kitchen-by-example/","summary":"template and steps to set up a test-kitchen with ansible, ec2 and inspec","title":"test-kitchen with ansible, ec2 and inspec"},{"content":"pytest is a popular python test framework. Below is a collection of snippets and examples for usage.\n Configuration files  pytest.ini conftest.py   Markers  Document markers in pytest.ini Run tests against markers   Fixtures Parameterize  Configuration files pytest is based on 2 files that you host at the root of your testing suites.\npytest.ini example:\n[pytest] python_files = test_* python_classes = *Tests python_functions = test_* markers = smoke: collection of smoke tests to be run on every build slow: slow collection of tests logic: all logic tests conftest.py example:\nfrom pytest import fixture @fixture def global_example(): print(\u0026#34;called global_example\u0026#34;) Markers Allow you to group tests together by adding a marker against each function or class.\nfrom pytest import mark @mark.my_class class my_classTests: @mark.smoke @mark.input def test_input_value(): assert True @mark.output def test_output_value(): assert True The above will mark:\n test_input_value with my_class, smoke, input test_output_value with my_class, output  Document markers in pytest.ini You can add documentation to 
markers via updating the pytest.ini file with:\nmarkers = smoke: collection of smoke tests to be run on every build math: all math logic tests logic: all logic tests You can return the markers information via:\npytest --markers Run tests against markers To run tests against markers you have some basic SQL-like syntax:\npytest -m smoke pytest -m \u0026#34;input or output\u0026#34; pytest -m \u0026#34;smoke and input\u0026#34; pytest -m \u0026#34;not slow\u0026#34; Fixtures Allow for reusable snippets of code.\nAdd the fixture to conftest.py\nfrom pytest import fixture @fixture def global_example(): print(\u0026#34;called global_example\u0026#34;) Use the fixture in a test:\ndef test_global_example(global_example): \u0026#34;\u0026#34;\u0026#34; calls the global_example fixture \u0026#34;\u0026#34;\u0026#34; assert True Parameterize def test_is_palindrome_empty_string(): assert is_palindrome(\u0026#34;\u0026#34;) def test_is_palindrome_single_character(): assert is_palindrome(\u0026#34;a\u0026#34;) def test_is_palindrome_mixed_casing(): assert is_palindrome(\u0026#34;Bob\u0026#34;) def test_is_palindrome_with_spaces(): assert is_palindrome(\u0026#34;Never odd or even\u0026#34;) def test_is_palindrome_with_punctuation(): assert is_palindrome(\u0026#34;Do geese see God?\u0026#34;) def test_is_palindrome_not_palindrome(): assert not is_palindrome(\u0026#34;abc\u0026#34;) def test_is_palindrome_not_quite(): assert not is_palindrome(\u0026#34;abab\u0026#34;) The above can be refactored into:\n@pytest.mark.parametrize(\u0026#34;palindrome\u0026#34;, [ \u0026#34;\u0026#34;, \u0026#34;a\u0026#34;, \u0026#34;Bob\u0026#34;, \u0026#34;Never odd or even\u0026#34;, \u0026#34;Do geese see God?\u0026#34;, ]) def test_is_palindrome(palindrome): assert is_palindrome(palindrome) @pytest.mark.parametrize(\u0026#34;non_palindrome\u0026#34;, [ \u0026#34;abc\u0026#34;, \u0026#34;abab\u0026#34;, ]) def test_is_palindrome_not_palindrome(non_palindrome): assert not is_palindrome(non_palindrome) 
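The parametrized tests above assume an is_palindrome helper that the post never shows; a minimal sketch (my own, not from the original post) that satisfies every case listed is:

```python
def is_palindrome(text):
    # Normalise first: ignore case and anything that is not a letter
    # or digit, so 'Never odd or even' and 'Do geese see God?' pass.
    cleaned = ''.join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]
```

With this helper in place, all of the tests above pass unchanged.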
","permalink":"https://staggerlee011.github.io/posts/python-pytest-by-example/","summary":"Examples of usage and configuration for pytest","title":"Pytest by Example"},{"content":"We currently have a terraform code in multiple versions at present. Below are the basic steps to upgrade code from 0.12 - 0.14. Note this solution is dependant on using tfenv which lets you install multiple versions of terraform.\nFirst here\u0026rsquo;s the error you will see:\nUpgrading to Terraform v0.14 - Terraform by HashiCorpwww.terraform.io › upgrade-guides › 0-14 You will need to successfully complete a terraform apply at least once under Terraform v0. 13 https://www.terraform.io/upgrade-guides/0-14.html remove terrafomr (may have to remove terragrunt as well if you use it)\nbrew remove terraform\nbrew install tfenv\ntfenv install latest tfenv install latest:^0.13\nfind all versions possible to install@ tfenv list-remote list all versions installed@ tfenv list\nget error:\nuse specific version of tf\ntfenv use tfenv use latest\ntfenv use 0.13.6\n RUN UPGRADE:\ntfenv use 0.13.6\nterraform upgrade 0.13upgrade\n$ terraform 0.13upgrade\rThis command will update the configuration files in the given directory to use\rthe new provider source features from Terraform v0.13. 
It will also highlight\rany providers for which the source cannot be detected, and advise how to\rproceed.\rWe recommend using this command in a clean version control work tree, so that\ryou can easily see the proposed changes as a diff against the latest commit.\rIf you have uncommitted changes already present, we recommend aborting this\rcommand and dealing with them before running this command again.\rWould you like to upgrade the module in the current directory?\rOnly 'yes' will be accepted to confirm.\rEnter a value: yes\r-----------------------------------------------------------------------------\rUpgrade complete!\rUse your version control system to review the proposed changes, make any\rnecessary adjustments, and then commit.\rterraform apply error:\nyou need to run terraform init after upgrade!\n$ terraform apply\rError: Could not load plugin\rPlugin reinitialization required. Please run \u0026quot;terraform init\u0026quot;.\rPlugins are external binaries that Terraform uses to access and manipulate\rresources. The configuration provided requires plugins which can't be located,\rdon't satisfy the version constraints, or are otherwise incompatible.\rTerraform automatically discovers provider requirements from your\rconfiguration, including providers used in child modules. 
To see the\rrequirements and constraints, run \u0026quot;terraform providers\u0026quot;.\r2 problems:\r- Failed to instantiate provider \u0026quot;registry.terraform.io/hashicorp/aws\u0026quot; to\robtain schema: unknown provider \u0026quot;registry.terraform.io/hashicorp/aws\u0026quot;\r- Failed to instantiate provider \u0026quot;registry.terraform.io/-/aws\u0026quot; to obtain\rschema: unknown provider \u0026quot;registry.terraform.io/-/aws\u0026quot;\rterraform init:\n$ terraform init\rInitializing modules...\rInitializing the backend...\rInitializing provider plugins...\r- Finding hashicorp/aws versions matching \u0026quot;~\u0026gt; 3.7.0, \u0026gt;= 2.23.*, \u0026gt;= 2.23.*, \u0026gt;= 2.50.*, \u0026gt;= 2.50.*\u0026quot;...\r- Finding latest version of -/aws...\r- Installing hashicorp/aws v3.7.0...\r- Installed hashicorp/aws v3.7.0 (signed by HashiCorp)\r- Installing -/aws v3.25.0...\r- Installed -/aws v3.25.0 (signed by HashiCorp)\rThe following providers do not have any version constraints in configuration,\rso the latest version was installed.\rTo prevent automatic upgrades to new major versions that may contain breaking\rchanges, we recommend adding version constraints in a required_providers block\rin your configuration, with the constraint strings suggested below.\r* -/aws: version = \u0026quot;~\u0026gt; 3.25.0\u0026quot;\rTerraform has been successfully initialized!\rYou may now begin working with Terraform. Try running \u0026quot;terraform plan\u0026quot; to see\rany changes that are required for your infrastructure. All Terraform commands\rshould now work.\rIf you ever set or change modules or backend configuration for Terraform,\rrerun this command to reinitialize your working directory. If you forget, other\rcommands will detect it and remind you to do so if necessary.\rversions.tf\nterraform { required_version = \u0026#34;\u0026gt;= 0.14\u0026#34; } $ terraform init Initializing modules... Initializing the backend... 
Error: Invalid legacy provider address This configuration or its associated state refers to the unqualified provider \u0026#34;aws\u0026#34;. You must complete the Terraform 0.13 upgrade process before upgrading to later versions. ","permalink":"https://staggerlee011.github.io/posts/terraform-code-upgrade/","summary":"How to upgrade your legacy Terraform code to 0.14","title":"Terraform Code Upgrades"},{"content":"This is my current setup for terraform (running on WSL2 Ubuntu 18)\nInstall software I currently use the following software to manage and interact with terraform:\n tfenv tfsec driftctl  brew You can install all the above software via brew\nbrew install tfenv tfsec driftctl tfenv Terraform releases are quick and keeping all our environments on the same version isn\u0026rsquo;t possible. tfenv resolves that by letting us have multiple versions installed and easily managed.\ntfsec Performs static analysis of your terraform code to flag potential security issues.\ntfenv configuration We currently have code on 0.12, 0.13 and 0.14. We upgrade any terraform that is edited or when we have time in a sprint to refactor. With that I install the latest terraform 0.14 and 0.13.\ntfenv install latest:^0.14 tfenv install latest:^0.13 tfenv use latest tfenv basic syntax  List all versions of terraform available: tfenv list-remote List installed versions of terraform: tfenv list (This also shows the current version in use) Define terraform version to use: tfenv use latest or tfenv use 0.13.6  tfenv support If you run tfenv list and get the below error, it is because you have not set which version of terraform to use. 
Once defined, the error goes away, ie: tfenv use latest\n$ tfenv list cat: /home/linuxbrew/.linuxbrew/Cellar/tfenv/2.0.0/version: No such file or directory Version could not be resolved (set by /home/linuxbrew/.linuxbrew/Cellar/tfenv/2.0.0/version or tfenv use \u0026lt;version\u0026gt;) Driftctl ","permalink":"https://staggerlee011.github.io/posts/terraform-workstation/","summary":"My workstation setup for Terraform","title":"Terraform Workstation Setup"},{"content":"Another collection of snippets, this time for NPM\nSetup Collection of init, load basic examples\nCreate package.json Creates an empty package.json file with standard metadata\nnpm init Load package.json modules Update package.json ","permalink":"https://staggerlee011.github.io/posts/npm-by-example/","summary":"Example Setup and config for NPM","title":"NPM by Example"},{"content":"Below are step by step instructions for setting up SSH access to a GitLab git repository\nCreate SSH key First create a key via ssh-keygen\ncd ~/.ssh ssh-keygen -f gitlab -t rsa -b 4096 You will be asked for a passphrase. I\u0026rsquo;ve had issues using one with VSCode / Remote WSL, so I suggest not setting one.\nA typical output will look like below:\n$ ssh-keygen -f gitlab -t rsa -b 4096 Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in gitlab. Your public key has been saved in gitlab.pub. The key fingerprint is: SHA256:1nDTaZZ7vUjfYjxcSwW+OEEpCGUrhcJ8HgShPd+iv+Y stephen@navi The key\u0026#39;s randomart image is: +---[RSA 4096]----+ | oo+o++. ... | | o+ +o...o.+ . | | . o+..o o.B . .| | o.o + + + o.| | o S . +.o.o| | . o .=+.+| | . .*+.| | .. . o | | oE. 
| +----[SHA256]-----+ Gitlab Configuration Now log into your Gitlab account, from the upper-right corner, click your profile photo, then click Edit Profile and click SSH keys\nCopy and Paste your new public ssh key into the window:\ncat ~/.ssh/gitlab.pub Configure SSH Next you need to configure SSH via the ~/.ssh/config, to use the new ssh key for your repos. If the file doesn\u0026rsquo;t exist, create it via touch ~/.ssh/config. You will need to create / update it like below\n# GitLab.com Host gitlab.com PreferredAuthentications publickey IdentityFile ~/.ssh/gitlab # Private GitLab instance (update Host to your URL) Host gitlab.company.com PreferredAuthentications publickey IdentityFile ~/.ssh/gitlab Test access Now we just test our connection\nssh -T git@gitlab.company.com You should return a Welcome message.\n","permalink":"https://staggerlee011.github.io/posts/git-ssh-gitlab/","summary":"Steps to connect to a GitLab via SSH","title":"Connect to GitLab via SSH"},{"content":"I got good news today, in that I passed the CKAD exam I took yesterday :)\nBelow are some notes on my study resources and exam tips.\nResources Below are the primary resources I used to study from:\n O\u0026rsquo;Reilly CKAD livelessons by Sander van Vugt O\u0026rsquo;Reilly CKAD Study Guide Udemy Kubernetes Mastery: Hands-On Lessons From A Docker Captain Kubernetes docs  Practice Exams As it\u0026rsquo;s a practical exam, practice is key! I strongly recommend starting at the top and, as you get closer, working your way down, taking the killer_ exams the day before. I\u0026rsquo;m sure there\u0026rsquo;s plenty more out there (especially the github style ones; they\u0026rsquo;re all great and well worth going through).\n Github bbachi - CKAD Practice Questions Github dgkanatsios CKAD Exercises KubeAcademy killer_  Exam Tips This was my first lab-based exam, so I didn\u0026rsquo;t really know what to expect, and I had a lot of questions. 
Hopefully the below is helpful.\nAlways check you\u0026rsquo;re in the right k8s cluster For each question you are told which cluster to connect to. Make sure you double check it, as you don\u0026rsquo;t want to waste time doing something in the wrong place!\nBookmark relevant links You\u0026rsquo;re allowed to have 1 browser tab open, so you can bookmark a collection of resources to help you. Pick and choose the ones you like, but I strongly recommend having at least these 2 ready.\n Kubectl Reference Docs kubectl Cheat Sheet  Time Management Apparently the exam used to be 3 hours. That sounds very nice to me! When I was trying the KubeAcademy exams the day before my exam, I found I was constantly getting close or timing out (If I hadn\u0026rsquo;t done those labs I think I would 100% have failed, because they taught me to speed up!). Keep an eye on the time and use time management on the tasks.\nYou can pick and choose which questions to answer, so you can skip a question and come back to it. Each question has a weight to it for how much of the passing mark it is worth. You can also mark questions to highlight ones you want to come back to!\nI ended up skipping 4 questions, based on either not being sure how to answer them, or knowing they would take a big chunk of my time to complete. Once I had completed all the rest of my questions (I only had 15 mins left of my 2 hours!), I reviewed the ones I had missed and completed the ones with the highest worth! I still finished with 2 questions unanswered.\nComputer / home setup I had to spend my first 40 minutes trying to get the exam proctor to view my drivers license. I apparently have a pretty poor webcam and the new UK drivers licenses have very small text. That, combined with weird lighting in the room, meant spending far too long trying to get everything to align so he could read my name (I even ran to the kitchen to get a cup with water to see if that could help magnify the text enough for him!). 
We got there eventually thanks to running off and finding my daughter\u0026rsquo;s magnifying glass!\nMake sure your desk is clear. In other exams they were ok with me having speakers on the desk and a mouse mat; on this one, it was a 100% clear desk. The only things I could have were clear water in a cup, mouse and keyboard. Save yourself the stress and effort, get it all off the desk before you start, so it\u0026rsquo;s not another thing to have to deal with!\nYou are allowed to use multiple screens for the exam (Most don\u0026rsquo;t let you do that) which is a real help with having the kubernetes docs open on one screen and the exam on the other!\n","permalink":"https://staggerlee011.github.io/posts/exam-ckad/","summary":"I passed the CKAD exam today!","title":"Passed CKAD"},{"content":"I\u0026rsquo;ve just published a new github repo that contains a collection of base best practices for your rds instances.\nhttps://github.com/Staggerlee011/rds-bp-benchmark Example usage Create your inspec profile (For help see my blog post: https://blog.serialexperiments.co.uk/posts/inspec-by-example/)\nUpdate the file inspec.yml depends on section with rds-bp-benchmark\ndepends: - name: inspec-aws url: https://github.com/inspec/inspec-aws/archive/master.tar.gz - name: rds-bp-benchmark git: https://github.com/Staggerlee011/rds-bp-benchmark branch: master Add file controls/include.rb and edit\ninclude_controls \u0026#39;rds-bp-benchmark\u0026#39; Add or update inputs.yml\nrds_name: \u0026#39;my-rds-instance\u0026#39; region: \u0026#39;eu-west-2\u0026#39; rds_engine: \u0026#39;postgres\u0026#39; rds_securitygroup: \u0026#39;rds-sg\u0026#39; Run inspec\ninspec exec . -t aws:// --input-file inputs.yml I\u0026rsquo;ve put each test into its own control so you can skip them if you wish, as well as making most of the controls have editable values. Again you can see more of how to do that in my blog post inspec by example.\nI\u0026rsquo;m hoping this helps you and others. 
Please feel free to offer updates.\n","permalink":"https://staggerlee011.github.io/posts/inspec-rds-bp-benchmark/","summary":"Inspec github for rds best practices","title":"Inspec rds-bp-benchmark"},{"content":"Collection of examples and commands to run, manage and develop with inspec:\n Installation  Install plugin pre-steps Install plugin   Using Inspec  Create a new profile Execute a profile  Execute profile with Input values     Development with Inspec  Inspec.lock Depends_on  depends_on github depends_on git depends_on local   Managing dependency tests  Skipping controls  Skip specific control Run specific dependency control   Edit dependency controls   Libraries   Resources  Installation Install inspec\nbrew install ruby # you likely have ruby installed already, but upgrade unless you specifically need an older version gem install inspec Install plugin pre-steps Inspec is built around plugin extensions. I had to install a few extra bits first to get extensions installing\nsudo gem install chef-utils -v 16.6.14 Install plugin You may wish to add a plugin; that can be done via:\nsudo gem install train-kubernetes Using Inspec Create a new profile You can create a new profile and base it on some pre-created profiles; the below creates an inspec-aws based basic profile:\ninspec init profile --platform aws my-profile Execute a profile To run a profile you use the exec command. The below is an example of running a test against an aws:// resource:\ninspec exec . -t aws:// inspec exec . -t aws://\u0026lt;aws profile name\u0026gt; Execute profile with Input values inspec exec . -t aws:// --input-file inputs.yml inspec exec . -t aws:// --input rds_name=myrdsinstance Development with Inspec Collection of examples for editing and developing inspec profiles.\nInspec.lock This file locks your inspec.yml so all future runs are the same. This means that any dependency or config changes to inspec.yml will not be made if you keep the inspec.lock. 
To run updated tests you will need to delete the file.\nDepends_on You may want to build your profile on other profiles. Using this kind of modularization lets you re-use your tests in different environments.\ndepends_on github Example shows how you load up the inspec-aws profile\ndepends: - name: inspec-aws url: https://github.com/inspec/inspec-aws/archive/master.tar.gz depends_on git Example using git, which gives good version locking via branches/tags\ndepends: - name: git-profile git: http://url/to/repo branch: desired_branch tag: desired_version commit: pinned_commit version: semver_via_tags relative_path: relative/optional/path/to/profile depends_on local Example shows how you load up a file from local storage:\ndepends: - name: profile path: ../path/to/profile Managing dependency tests When you pull in a set of tests, you need to reference the tests to have them run. I do this via adding a new file under controls called include.rb with a reference to each profile you want to add:\ninclude_controls \u0026#39;rds-bp-benchmark\u0026#39; Skipping controls You may want to ignore some controls. 
This can be done in 2 ways:\nSkip specific control via updating the include.rb\ninclude_controls \u0026#39;rds-bp-benchmark\u0026#39; do skip_control \u0026#39;snapshot tags\u0026#39; end Run specific dependency control Alternatively you can only run specific controls you want from the dependency via:\nrequire_controls \u0026#39;rds-bp-benchmark\u0026#39; do control \u0026#39;snapshot tags\u0026#39; end Edit dependency controls You can also edit a dependency\u0026rsquo;s controls on the fly, changing values via:\nrequire_controls \u0026#39;rds-bp-benchmark\u0026#39; do control \u0026#39;snapshot tags\u0026#39; do impact 0.1 end end Libraries This allows you to add ruby-based code to your profile.\nFor examples see: https://github.com/Staggerlee011/rds-bp-benchmark/blob/master/libraries/rds_helper.rb\nResources  Inspec Glossary Inspec Profile inheritance  ","permalink":"https://staggerlee011.github.io/posts/inspec-by-example/","summary":"Example commands for Inspec","title":"Inspec by Example"},{"content":"You may be aware of tools like Grammarly that assist in fixing grammar and spelling issues in your email. Vale is a similar tool, but it\u0026rsquo;s an open source CLI, so it can be used to help automate and standardize your team\u0026rsquo;s prose. Below is their description of the product from the github page\nVale is a command-line tool that brings code-like linting to prose. It\u0026#39;s fast, cross-platform (Windows, macOS, and Linux), and highly customizable. Install To install vale you can use brew; for other options please see: https://docs.errata.ai/vale/install\nbrew install vale Configuration Add a .vale.ini file to the root of your repo; below is a basic example\nStylesPath = styles Vocab = tech [*.md] BasedOnStyles = Google StylesPath This is your root folder for 3rd party styles, language rules etc.\nVocab You can add single or multiple values here. It\u0026rsquo;s a section for adding words that you want to ignore or highlight from your linting. 
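As a quick sketch of that vocab layout (folder and word names here are illustrative, and a temp dir is used so it is safe to run anywhere), the files can be created like so:

```python
import tempfile
from pathlib import Path

# Illustrative sketch: a folder named after the Vocab value under your
# StylesPath, holding the accept.txt / reject.txt pair described here.
styles_path = Path(tempfile.mkdtemp()) / 'styles'
vocab = styles_path / 'tech'
vocab.mkdir(parents=True)
(vocab / 'accept.txt').write_text('aws')  # words to allow
(vocab / 'reject.txt').write_text('')     # words to flag
print(sorted(p.name for p in vocab.iterdir()))
```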
I have added a tech folder and put in words like aws, kubernetes etc.\nYou create a folder under your StylesPath with the vocab name and then add 2 files, accept.txt and reject.txt\nFile types Next, list which file types you wish to run vale against; for example [*.md] will only check markdown files\nBasedOnStyles This is where you pick which styles you want to run. In my example I have downloaded the Google style. But many others exist (like Microsoft, Write-good, etc)\nAdd a 3rd party style Here you can just copy the folder of yaml rules from your chosen style guide. I copied the Google folder from https://github.com/errata-ai/Google under my styles folder\nVSCode Setup You can install the vscode extension that allows you to see the suggestions in the problems tab. The installation is standard: search for vale in the extensions section. Remember to update the extension\u0026rsquo;s CLI settings and to restart vscode to enable it.\nCli usage As well as integrating it with vscode you can also run vale via the command line. The easiest option is to cd into the root of the folder, or wherever you have the .vale.ini\nAll files To run vale against all files with a matching format run\nvale . Specific file To run vale against a specific file\nvale content/posts/aws-configure.md Next steps One thing that isn\u0026rsquo;t available out of the box is a pre-commit hook to add your linting to standard commit workflows.\nThe other thing to look at is whether you want to use a standard style or create your own. I\u0026rsquo;m happy to use a standard one and have it as a simple improvement to my current writing, but you may want to refine it.\nI really like the option of adding some linting to my and the team\u0026rsquo;s docs prose, and this is a great, easy option. 
Hope it\u0026rsquo;s helpful to you too.\nResources  Vale docs Example boilerplate usage vscode extension Style Options  ","permalink":"https://staggerlee011.github.io/posts/vale-by-example/","summary":"Setting up and running Vale","title":"Vale - linting for prose"},{"content":"You never want to keep your secrets in plaintext and you never want to keep your plaintext secrets in source control!\nWith terraform, a lot of the time you are creating objects, and you can use the random resource to generate a secret and either push it to an output or to a secure service to store it (We use parameter store/ssm).\nBut in some cases you may have to add a secret into terraform that has already been created, and there lies the problem! Below is a possible solution for AWS using aws-kms and ssm.\nPre-reqs  An aws-kms key to connect to and generate your secrets with aws-cli installed (\u0026gt; v2 in the below examples)  Generate plaintext file Create a temporary text file; this will contain your secret in plaintext and uses key : value pairing.\n`We will delete it after generating our secret file!`\r Example file An example file would be:\npassword: examplepassword Generate secret file Run the below command, updating the following:\n profile = aws named profile, or remove the line if you use default key-id = the key-id of your kms, you can get this from the console or output from terraform region = the region your kms is stored in plaintext = the location of the file with the plaintext secret  aws kms encrypt \\  --profile mgmt-eks \\  --key-id 346e8eab-39a6-455b-ac88-fcd8a6cf7043 \\  --region eu-west-2 \\  --plaintext fileb://user.yml \\  --output text \\  --query CiphertextBlob This will generate the secret; copy the output to a file named: password.yml.encrypted and save it (I put my secrets in a folder with the same name as the tf file)\nDelete the `plaintext` secret!\r Update TF Now we have a secret that can be saved to source control. 
Add a data source pointing to the file\ndata \u0026#34;aws_kms_secrets\u0026#34; \u0026#34;myapp\u0026#34; { secret { name = \u0026#34;password\u0026#34; payload = file(\u0026#34;${path.module}/myapp/password.yml.encrypted\u0026#34;) } } Add a local call to the decrypted answer. In the below example I\u0026rsquo;m just taking the secret and saving it to ssm; this can then be called in tf or any other app and never be hardcoded.\nlocals { myapppass = yamldecode(data.aws_kms_secrets.myapp.plaintext[\u0026#34;password\u0026#34;]) } resource \u0026#34;aws_ssm_parameter\u0026#34; \u0026#34;myapp_password\u0026#34; { name = \u0026#34;/myapp/user/admin/password\u0026#34; description = \u0026#34;password for admin user\u0026#34; type = \u0026#34;SecureString\u0026#34; value = local.myapppass.password tags = { \u0026#34;Terraform\u0026#34; = \u0026#34;true\u0026#34; \u0026#34;myapp\u0026#34; = \u0026#34;true\u0026#34; } } There we have it. You are ready to terraform apply. This will leave your source control clean and your secrets safe.\nThere are other options for keeping your secrets safe. But this is the easiest to set up and run with.\n","permalink":"https://staggerlee011.github.io/posts/terraform-secrets/","summary":"How to safely store secrets in your terraform source control using AWS SSM / KMS","title":"Terraform secrets using SSM and KMS"},{"content":"Below are step by step instructions for setting up SSH access to a Github git repository\nCreate SSH key First create a key via ssh-keygen\ncd ~/.ssh ssh-keygen -f github -t rsa -b 4096 You will be asked for a passphrase. I\u0026rsquo;ve had issues using one with VSCode / Remote WSL, so I suggest not setting one.\nA typical output will look like below:\n$ ssh-keygen -f github -t rsa -b 4096 Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in github. Your public key has been saved in github.pub. 
The key fingerprint is: SHA256:1nDTaZZ7vUjfYjxcSwW+OEEpCGUrhcJ8HgShPd+iv+Y stephen@navi The key\u0026#39;s randomart image is: +---[RSA 4096]----+ | oo+o++. ... | | o+ +o...o.+ . | | . o+..o o.B . .| | o.o + + + o.| | o S . +.o.o| | . o .=+.+| | . .*+.| | .. . o | | oE. | +----[SHA256]-----+ Github Configuration Now log into your Github account, from the upper-right corner, click your profile photo, then click Settings\nClick SSH and GPG keys\nClick New SSH Key\nCopy and Paste your new public ssh key into the window and give it a name:\ncat ~/.ssh/github.pub If prompted, confirm your Github password.\nConfigure ssh-agent Check the ssh-agent is running\neval `ssh-agent -s` If you get a response of Agent pid \u0026lt;number\u0026gt; then it\u0026rsquo;s up\nAdd your new ssh key\nssh-add ~/.ssh/github\rIf successful you get a message like: Identity added: /home/stephen/.ssh/github (/home/stephen/.ssh/github)\n","permalink":"https://staggerlee011.github.io/posts/git-ssh-github/","summary":"Steps to connect to a Github via SSH","title":"Connect to Github via SSH"},{"content":"Below are step by step instructions for setting up SSH access to an azure-repo\nCreate SSH key First create a key via ssh-keygen\ncd ~/.ssh ssh-keygen -f azure-repo -t rsa -b 4096 You will be asked for a passphrase. I\u0026rsquo;ve had issues using one with VSCode / Remote WSL, so I suggest not setting one.\nA typical output will look like below:\n$ ssh-keygen -f azure-repo -t rsa -b 4096 Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in azure-repo. Your public key has been saved in azure-repo.pub. The key fingerprint is: SHA256:1nDTaZZ7vUjfYjxcSwW+OEEpCGUrhcJ8HgShPd+iv+Y stephen@navi The key\u0026#39;s randomart image is: +---[RSA 4096]----+ | oo+o++. ... | | o+ +o...o.+ . | | . o+..o o.B . .| | o.o + + + o.| | o S . +.o.o| | . o .=+.+| | . .*+.| | .. . o | | oE. 
| +----[SHA256]-----+ Configure SSH Next you need to configure SSH via the ~/.ssh/config, to use the new ssh key for all your repos. If the file doesn\u0026rsquo;t exist, create it via touch ~/.ssh/config. You will need to create / update it with the below:\nHost ssh.dev.azure.com IdentityFile ~/.ssh/azure-repo IdentitiesOnly yes Azure Devops Configuration Now log into your Azure DevOps account and open SSH public keys via:\nUser Settings icon -\u0026gt; SSH public keys Select New Key\nCopy and Paste your new public ssh key into the web portal of Azure DevOps:\ncat ~/.ssh/azure-repo.pub Test SSH Now test your connection:\nssh -T git@ssh.dev.azure.com This should output the below:\nremote: Shell access is not supported. shell request failed on channel 0 If you don\u0026rsquo;t get this message, check your config or look at the resources section below for more troubleshooting steps.\nConnect to Azure-Repo You\u0026rsquo;re now ready to connect to all repos in the azure-devops organization (Unless RBAC has been implemented) via your normal git clone. First get the URL for ssh from the repo:\nThen run the clone command\n$ git clone git@ssh.dev.azure.com:v3/\u0026lt;YOUR ORG\u0026gt;/\u0026lt;PROJECT\u0026gt;/\u0026lt;REPO\u0026gt; Cloning into \u0026#39;\u0026lt;REPO\u0026gt;\u0026#39;... remote: Azure Repos remote: Found 69 objects to send. (96 ms) Receiving objects: 100% (69/69), 21.50 KiB | 1.02 MiB/s, done. 
Resources  Azure-Repo Questions and Troubleshooting  ","permalink":"https://staggerlee011.github.io/posts/git-ssh-azure-repo/","summary":"Steps to connect to a Azure-Repo via SSH","title":"Connect to Azure-Repo via SSH"},{"content":"Below are step by step instructions for setting up SSH access to a CodeCommit git repository\nCreate SSH key First create a key via ssh-keygen\ncd ~/.ssh ssh-keygen -f codecommit -t rsa -b 4096 You will be asked for a passphrase. I\u0026rsquo;ve had issues using one with VSCode / Remote WSL, so I suggest not setting one.\nA typical output will look like below:\n$ ssh-keygen -f codecommit -t rsa -b 4096 Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in codecommit. Your public key has been saved in codecommit.pub. The key fingerprint is: SHA256:1nDTaZZ7vUjfYjxcSwW+OEEpCGUrhcJ8HgShPd+iv+Y stephen@navi The key\u0026#39;s randomart image is: +---[RSA 4096]----+ | oo+o++. ... | | o+ +o...o.+ . | | . o+..o o.B . .| | o.o + + + o.| | o S . +.o.o| | . o .=+.+| | . .*+.| | .. . o | | oE. | +----[SHA256]-----+ CodeCommit Configuration Now log into the AWS Console, navigate to the IAM service and select the User you wish to add the ssh key to. Choose the Security Credentials tab, scroll down and select Upload SSH public key. Copy and Paste your new public ssh key into the console:\ncat ~/.ssh/codecommit.pub This generates an SSH key ID; note this down! Configure SSH Next you need to configure SSH via the ~/.ssh/config, to use the new ssh key for your repos. If the file doesn\u0026rsquo;t exist, create it via touch ~/.ssh/config. 
You will need to create / update it like below. Note: update the User key with the SSH key ID generated above\nHost git-codecommit.*.amazonaws.com User APKA6N2TQ6WGE2NZ6M4O IdentityFile ~/.ssh/codecommit Test SSH Now test your connection:\nssh git-codecommit.us-east-2.amazonaws.com This should output something like below:\nYou have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported. Connection to git-codecommit.us-east-2.amazonaws.com closed by remote host. Connection to git-codecommit.us-east-2.amazonaws.com closed. If you don\u0026rsquo;t get this message, check your config or look at the resources section below for more troubleshooting steps.\nConnect to CodeCommit Repo You\u0026rsquo;re now ready to connect to repos from CodeCommit. Then run the clone command\n$ git clone ssh://git-codecommit.eu-west-2.amazonaws.com/v1/repos/kubernetes Cloning into \u0026#39;kubernetes\u0026#39;... The authenticity of host \u0026#39;git-codecommit.eu-west-2.amazonaws.com (52.94.48.161)\u0026#39; can\u0026#39;t be established. RSA key fingerprint is SHA256:r0Rwz5k/IHp/QyrRnfiM9j02D5UEqMbtFNTuDG2hNbs. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added \u0026#39;git-codecommit.eu-west-2.amazonaws.com,52.94.48.161\u0026#39; (RSA) to the list of known hosts. warning: You appear to have cloned an empty repository. Resources  CodeCommit AWS Docs - Setting up SSH CodeCommit AWS Docs - Use SSH keys and SSH with CodeCommit  ","permalink":"https://staggerlee011.github.io/posts/git-ssh-code-commit/","summary":"Steps to connect to CodeCommit via SSH","title":"Connect to CodeCommit via SSH"},{"content":"driftctl is a new, recently released tool that reports on drift of your terraform code in AWS.\nRunning a scan will output all objects created in a region that are not part of your terraform code. 
We\u0026rsquo;ve been using it to find drift in terraform code and to catch rogue, manually created objects.\nSo far I\u0026rsquo;m really liking it and expect more functionality to be added as it evolves. It\u0026rsquo;s definitely worth checking out if you want to add more testing around your IaC.\nInstall Currently there are no package management options, but you can install via\ncurl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl chmod +x driftctl sudo mv driftctl /usr/local/bin/ Compare against s3 state using an AWS named profile Run a driftctl scan against ALL tfstate in an s3 bucket\nAWS_PROFILE=eng driftctl scan \\ --from tfstate+s3://\u0026lt;S3 Bucket\u0026gt;/ Run a scan against specific tagged resources You may want to check for drift against deployed IaC which is tagged. The below will only show drift for objects with a tag key of TerraformWorkspace and value of core\nAWS_PROFILE=\u0026lt;Profile Name\u0026gt; driftctl scan --from tfstate+s3://\u0026lt;S3 Bucket\u0026gt;/core/terraform.tfstate --filter \u0026#34;Attr.Tags.TerraformWorkspace == \u0026#39;core\u0026#39;\u0026#34; Ignore objects There are going to be objects created outside of terraform that you want to ignore, things like your tfstate s3 bucket / dynamodb table. Or maybe objects created via the Serverless Framework or SAM, which overlay onto Cloudformation\nCreate a file in the location you are running the scan from named: .driftignore\nThe format is like:\n## terraform state management aws_s3_bucket.engineering-statefile aws_dynamodb_table.engineering-locks ## ignore ami created via packer aws_ami:* Resources  driftctl github driftctl discord server driftctl resources lists  ","permalink":"https://staggerlee011.github.io/posts/driftctl-by-example/","summary":"Examples of driftctl","title":"Driftctl by Example"},{"content":"Pre-commit is an easy-to-use tool that allows you to add in git hooks for your repos. 
This means that every time you run a commit command, pre-commit will run whatever apps you\u0026rsquo;ve told it to. This is great for things like linting and formatting.\ninstall There\u0026rsquo;s a couple of ways to install pre-commit but I use brew\nbrew install pre-commit .pre-commit-config.yaml For pre-commit to run, you need to add a .pre-commit-config.yaml file to the root of your git repo. Below is a common example I use of loading markdownlint, detect-secrets and terraform fmt (Note these are from different pre-commit repos; you can stack as many as you like!)\nrepos: - repo: git://github.com/antonbabenko/pre-commit-terraform rev: v1.47.0 # Get the latest from: https://github.com/antonbabenko/pre-commit-terraform/releases hooks: - id: terraform_fmt - repo: https://github.com/Yelp/detect-secrets rev: v1.0.1 hooks: - id: detect-secrets args: [\u0026#39;--baseline\u0026#39;, \u0026#39;.secrets.baseline\u0026#39;] exclude: package.lock.json - repo: https://github.com/igorshubovych/markdownlint-cli rev: v0.26.0 hooks: - id: markdownlint init To connect your pre-commit file to a repo you need to run install in the root of the repo\npre-commit install  commands Collection of common commands\nrun all hooks against all files This will run all your hooks against all files in the repo\npre-commit run -a run specific hook against all files  Note that terraform_fmt is the id of a hook you wish to run  pre-commit run terraform_fmt -a Resources  pre-commit website  ","permalink":"https://staggerlee011.github.io/posts/pre-commit-by-example/","summary":"Example setup and config for pre-commit","title":"Pre-Commit by Example"},{"content":"This came up today when I created a sealedsecret and wanted to confirm the secret had the correct value. 
Normally I can just use -o jsonpath=\u0026quot;{.data.password}\u0026quot; to parse out the json value I want, but this time the key I wanted was myfile.conf, so jsonpath came up empty as it was looking for a nested path that didn\u0026rsquo;t exist. The answer is to escape out with the below:\nk get secrets mysecret -o jsonpath=\u0026#34;{.data.myfile\\.conf}\u0026#34;  ","permalink":"https://staggerlee011.github.io/posts/kubernetes-escaping-jsonpath/","summary":"Note on how to escape when using \u003ccode\u003e-o jsonpath=\u003c/code\u003e in kubernetes","title":"Escaping kubernetes jsonpath"},{"content":"I normally just keep VSCode on the default dark theme and be done with it. But the vast number of dev.to and medium posts on it finally made me give it a try. There are hundreds of posts out there on picking themes, but below are some commands to help switch around and change things.\nChange theme After installing your 100 different themes you will want to switch between them.\nManual: File \u0026gt; Preferences \u0026gt; Color Theme\nShortcut: ctrl+k ctrl+t\nEither option opens up the Command Palette and lists out all themes you currently have installed, ordered by theme type.\nChange icon theme You can also change the icon theme if you wish and have some installed:\nManual: File \u0026gt; Preferences \u0026gt; File Icon Theme\nChange font You can also change your font\nManual: File \u0026gt; Preferences \u0026gt; Settings then in settings Text editor \u0026gt; Font \u0026gt; Font Family\nReferences  VSCode Docs - Get Started - Themes  ","permalink":"https://staggerlee011.github.io/posts/vscode-themes/","summary":"Controls and management of VSCode","title":"VSCode Themes"},{"content":"I don\u0026rsquo;t use windows-terminal much. But I do want it to default to my WSL Ubuntu-18 profile instead of PowerShell. 
To set that up you just need to:\nGet Profile ID for Desired Profile Open Windows Terminal -\u0026gt; Settings; this opens the settings.json\nBelow is an example:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://aka.ms/terminal-profiles-schema\u0026#34;, \u0026#34;profiles\u0026#34; : [ { \u0026#34;guid\u0026#34; : \u0026#34;{61c54bbd-c2c6-5271-96e7-009a87ff44bf}\u0026#34;, \u0026#34;icon\u0026#34; : \u0026#34;ms-appx:///ProfileIcons/{61c54bbd-c2c6-5271-96e7-009a87ff44bf}.png\u0026#34;, \u0026#34;name\u0026#34; : \u0026#34;Windows PowerShell\u0026#34;, \u0026#34;startingDirectory\u0026#34; : \u0026#34;%USERPROFILE%\u0026#34;, \u0026#34;useAcrylic\u0026#34; : false }, { \u0026#34;guid\u0026#34;: \u0026#34;{c6eaf9f4-32a7-5fdc-b5cf-066e8a4b1e40}\u0026#34;, \u0026#34;hidden\u0026#34;: false, \u0026#34;name\u0026#34;: \u0026#34;Ubuntu-18.04\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;Windows.Terminal.Wsl\u0026#34;, \u0026#34;snapOnInput\u0026#34; : true, \u0026#34;startingDirectory\u0026#34;: \u0026#34;/mnt/e\u0026#34; }, { \u0026#34;guid\u0026#34;: \u0026#34;{46ca431a-3a87-5fb3-83cd-11ececc031d2}\u0026#34;, \u0026#34;hidden\u0026#34;: false, \u0026#34;name\u0026#34;: \u0026#34;kali-linux\u0026#34;, \u0026#34;source\u0026#34;: \u0026#34;Windows.Terminal.Wsl\u0026#34; } ], Note the guid for the profile you want to use.\nUpdate the settings.json defaultProfile Add a defaultProfile key with the guid of your chosen profile, like below:\n{ \u0026#34;$schema\u0026#34;: \u0026#34;https://aka.ms/terminal-profiles-schema\u0026#34;, \u0026#34;defaultProfile\u0026#34;: \u0026#34;{c6eaf9f4-32a7-5fdc-b5cf-066e8a4b1e40}\u0026#34;, \u0026#34;profiles\u0026#34; : Save and close.\n","permalink":"https://staggerlee011.github.io/posts/windows-terminal-setting-default-profile/","summary":"Steps to change the default profile on Windows Terminal from PowerShell to WSL","title":"Setting the Default Profile for Windows Terminal"},{"content":"Quick note for when you see an error like the 
below:\n$ k port-forward pgadmin -n pgadmin 8080:80 Unable to listen on port 8080: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:8080: bind: address already in use unable to create listener: Error listen tcp6 [::1]:8080: bind: address already in use] error: unable to listen on any of the requested ports: [{8080 80}] This is caused by kubectl not releasing its port binding. You can manually kill the pid via the below (Example is based on trying to run an 8080 port forward)\nlsof -i :8080 After getting the pid you can then kill it via the standard kill -9 12345\n","permalink":"https://staggerlee011.github.io/posts/kubernetes-port-forward-already-in-use/","summary":"Note on fixing kubectl port-forward address already in use errors","title":"Kubectl port forwarding - address already in use"},{"content":"Networking Security Groups for Pods  If you use the default CNI aws-node then you are limited to hosting a number of pods based on the instance type:  https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI\n If you wish to use security groups for pods you have to use an ec2 type from the list below:  https://docs.amazonaws.cn/en_us/eks/latest/userguide/security-groups-for-pods.html#supported-instance-types\n If you have run kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true and still see vpc.amazonaws.com/has-trunk-attached=false for all nodes in the cluster, try rotating your nodes (ie auto-scaling instance refresh) OR again check whether your nodes are on the supported instance types list above (This was our problem! 
and wasted half of my day :()  Troubleshooting  You can safely ignore the below logs, which can be seen in k describe pod  Normal SecurityGroupRequested 8m18s vpc-resource-controller Pod will get the following Security Groups [sg-01abfab8503347254] Normal ResourceAllocated 8m17s vpc-resource-controller Allocated [{\u0026#34;eniId\u0026#34;:\u0026#34;eni-0bf8102e8bf0fa369\u0026#34;,\u0026#34;ifAddress\u0026#34;:\u0026#34;02:78:59:8f:ee:b2\u0026#34;,\u0026#34;privateIp\u0026#34;:\u0026#34;10.243.50.203\u0026#34;,\u0026#34;vlanId\u0026#34;:1,\u0026#34;subnetCidr\u0026#34;:\u0026#34;10.243.48.0/20\u0026#34;}] to the pod Warning FailedCreatePodSandBox 8m17s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \u0026#34;bdacc9416438c30c46cdd620a382a048cb5ad5902aec9bf7766488604eef6a60\u0026#34; network for pod \u0026#34;pgadmin\u0026#34;: networkPlugin cni failed to set up pod \u0026#34;pgadmin_pgadmin\u0026#34; network: add cmd: failed to assign an IP address to container Normal SandboxChanged 8m16s kubelet Pod sandbox changed, it will be killed and re-created.  You can see if your pod has connected to the sg and eni via running a k describe pod, 
as you should get an output like:  Annotations: kubernetes.io/psp: eks.privileged vpc.amazonaws.com/pod-eni: [{\u0026#34;eniId\u0026#34;:\u0026#34;eni-0bf8102e8bf0fa369\u0026#34;,\u0026#34;ifAddress\u0026#34;:\u0026#34;02:78:59:8f:ee:b2\u0026#34;,\u0026#34;privateIp\u0026#34;:\u0026#34;10.243.50.203\u0026#34;,\u0026#34;vlanId\u0026#34;:1,\u0026#34;subnetCidr\u0026#34;:\u0026#34;10.243.48.0/20\u0026#34;}] Limits: vpc.amazonaws.com/pod-eni: 1 Requests: vpc.amazonaws.com/pod-eni: 1 As well as the logs from describe showing:\nPod will get the following Security Groups [sg-01abfab8503347254] ","permalink":"https://staggerlee011.github.io/posts/kubernetes-eks-resources/","summary":"Collection of resources for learning / managing EKS","title":"EKS Resources"},{"content":"Below is a collection of Blogs / Videos / GitHub repos / Courses / Tools on kubernetes that I have found useful.\nNOTE: This list will be regularly updated when I find new resources to add. Kustomize  Video - 20m basic example usage of kustomize  ConfigMap / Secrets  Video - 15m showing examples of env_var and volume mounting both secrets and configmaps Video - 15m explaining sealedsecrets\nNetworkPolicies  Video - 30m explanation of networkpolicies GitHub - Collection of example networkpolicies Tool - Tufin NetworkPolicy Viewer  ","permalink":"https://staggerlee011.github.io/posts/kubernetes-resources/","summary":"Collection of resources for learning kubernetes","title":"Kubernetes Resources"},{"content":"Secrets in kubernetes didn\u0026rsquo;t make sense to me (storing passwords in base64). Once they are in a cluster I could understand them being secured by RBAC, but to get them in there you either don\u0026rsquo;t have them in a manifest, making gitops harder, or worse, you have a password in plain text in your source control (don\u0026rsquo;t do that!).\nAs we migrate more apps to kubernetes I did like the idea of the krew plugin ssm-secret, which means your passwords live in ssm and you have a one-time process of pushing them 
into kubernetes. But as that breaks away from gitops and keeping everything together and simple, I wanted something else. That\u0026rsquo;s when I found sealedsecrets. This lets you keep manifests for your secrets but the passwords are encrypted; they are then decrypted on apply to the kubernetes cluster.\nSetup You need to install the client on your workstation\nbrew install kubeseal Then deploy the application to kubernetes. This can be done via manifest or helm. We use kustomize, so just downloaded the release (currently v0.13.1)\nkubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml Kustomize install We use private eks clusters, so can\u0026rsquo;t just download from public container repos. With that, we created a simple kustomize file for each cluster to pull from ecr. Example below of an overlay\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base images: - name: quay.io/bitnami/sealed-secrets-controller:v0.13.1 newName: xxx.dkr.ecr.eu-west-2.amazonaws.com/sealedsecrets newTag: v0.13.1 With an even simpler base:\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ./controller.yml Backup private key To create a backup of the private key used by sealedsecrets run:\nkubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml \u0026gt; master.key NOTE: THIS KEY SHOULD NOT BE STORED IN SOURCE CONTROL! Restore private key To restore, apply the master key and delete the controller pod\nkubectl apply -f master.key kubectl delete pod -n kube-system -l name=sealed-secrets-controller Create Secret using SealedSecret The only difference between creating a kubernetes secret and a sealed secret is that you pipe the file / command to kubeseal and have that output the file.\nFor secret examples please see my post: Kubernetes Secrets by Example\nformat Like all manifests you can either format to json or yaml. 
I prefer yaml so use\nkubeseal --format yaml | tee name-of-manifest-file-to-store-in-source-control.yml Leaving kubeseal without the --format switch will output json\nexample \u0026ndash;from-literal This will create a new manifest called wordpress-user-password-secure.yml which can be kept with the other manifests for your application as the password is now encrypted.\nk create secret generic wordpress-user-password --dry-run=client --from-literal password=MySuperSecretPassword --output yaml | kubeseal --format yaml | tee wordpress-user-password-secure.yml Notes on sealedsecrets  You must be connected to the cluster you wish to deploy to with kubectl when running kubeseal You can\u0026rsquo;t update the sealedsecret after it\u0026rsquo;s created and redeploy it. It won\u0026rsquo;t work! (This is a good thing) You can\u0026rsquo;t deploy the same secret to different namespaces. It won\u0026rsquo;t work! (This is also a good thing)  Resources  GitHub Repo for Sealed-Secrets The DevOps Toolkit Series - Bitnami Sealed Secrets - How To Store Kubernetes Secrets In Git Repositories Kubernetes Secrets by Example  ","permalink":"https://staggerlee011.github.io/posts/kubernetes-sealedsecrets-by-example/","summary":"Deployment and usage of sealed secrets","title":"SealedSecrets by Example"},{"content":"Collection of examples for running kube-linter and applying its suggested fixes to kubernetes manifests\ninstall You can install kube-linter with brew\nbrew install kube-linter Run against single or multiple manifests To run kube-linter\nkube-linter lint . Possible fixes to kube-linter Below are warnings you could get from kube-linter and example solutions to them.\ncontainer \u0026ldquo;xxx\u0026rdquo; does not have a read-only root file system apiVersion: v1  kind: Pod  metadata: name: xxx  spec: containers: # specification of the pod’s containers  # ...  
securityContext: readOnlyRootFilesystem: true container \u0026ldquo;xx\u0026rdquo; is not set to runAsNonRoot apiVersion: v1  kind: Pod  metadata: name: xxx  spec: containers: # specification of the pod’s containers  # ...  securityContext: runAsNonRoot: true ","permalink":"https://staggerlee011.github.io/posts/kubernetes-kubelinter/","summary":"Example usage and possible fixes to kube-linter","title":"Kube-linter by Example"},{"content":"Collection of kubernetes secrets by example:\nCreate secret Examples of how to create a secret\n\u0026ndash;from-literal Creates the manifest file from the command line:\nk create secret generic wordpress-user-password --dry-run=client --from-literal password=MySuperSecretPassword --output yaml You can pass multiple --from-literal values into the secret if you wish\n\u0026ndash;from-file Create a secret from a file (ie a txt file with a password in it)\nNOTE: When creating a secret from a file, remember the name of the file is used as the key, ie mypassword.txt = MySuperSecretPassword for below\necho MySuperSecretPassword | tee mypassword.txt You can then create a secret from the file\nk create secret generic wordpress-user-password --dry-run=client --from-file=./mypassword.txt --output yaml You can pass multiple --from-file values into the secret if you wish\nSee Secrets in Kubernetes Collection of example commands to see secrets you have put in kubernetes\nList secrets k get secrets pgadmin-secret -o yaml Get secret value k get secrets pgadmin-secret -o jsonpath=\u0026#34;{.data.password}\u0026#34; | base64 --decode \u0026amp;\u0026amp; echo Use secret Once you have created a secret we can now use it via:\nExpose secret as an environment variable to container In this example we will create a secret to use with pgadmin. 
To run pgadmin you need to pass it a default user and password via environment variables.\nCreate the secret via\nk create secret generic pgadmin-secret --dry-run=client \\ --from-literal email=admin@admin.com \\ --from-literal password=SuperSecretPassword \\ --output yaml The secret manifest would look like:\napiVersion: v1 data: email: YWRtaW5AYWRtaW4uY29t password: U3VwZXJTZWNyZXRQYXNzd29yZA== kind: Secret metadata: creationTimestamp: null name: pgadmin-secret We would then use the secrets via:\napiVersion: v1 kind: Pod metadata: name: pgadmin spec: containers: - name: pgadmin image: 991775749516.dkr.ecr.eu-west-2.amazonaws.com/pgadmin:4.28 env: - name: PGADMIN_DEFAULT_PASSWORD # name of the environment var valueFrom: secretKeyRef: name: pgadmin-secret # name of the secret  key: password # name of the key in the secret - name: PGADMIN_DEFAULT_EMAIL valueFrom: secretKeyRef: name: pgadmin-secret key: email Pass secret to file in container You can also create a file (like you can with configMaps) that you can pass to a container. 
This can be a solution to the idea of not wanting to put passwords into configMaps, but it does then put all that text and keys into the secret that you don\u0026rsquo;t need (A good solution for that will be in a future post!).\nCreate a secret to file:\napiVersion: v1 kind: Secret metadata: name: flyway-secret type: Opaque data: flyway.secret: |flyway.url=jdbc:postgresql://\u0026lt;RDS-INSTANCE\u0026gt;:54321/db flyway.user=\u0026lt;FLYWAY-ACCOUNT\u0026gt; flyway.password=\u0026lt;PASSWORD-FOR-ACCOUNT\u0026gt; Deploy a container with the file attached via a volume\napiVersion: batch/v1 kind: Job metadata: name: flyway spec: template: metadata: name: flyway spec: containers: - name: flyway image: flyway/flyway command: [\u0026#34;flyway\u0026#34;, \u0026#34;migrate\u0026#34;] volumeMounts: - name: flyway-secret-volume mountPath: /flyway/conf volumes: - name: flyway-secret-volume secret: name: flyway-secret restartPolicy: Never Decode kubernetes secret This section is a quick reminder / steps to show why kubernetes secrets should not be in source control:\nYou can create a kubernetes secret via:\nk create secret generic wordpress-user-password --dry-run=client --from-literal password=MySuperSecretPassword --output yaml This would generate:\napiVersion: v1 data: password: TXlTdXBlclNlY3JldFBhc3N3b3Jk kind: Secret metadata: creationTimestamp: null name: wordpress-user-password We can reverse the secret via (hence why we should not leave secrets in source control):\necho TXlTdXBlclNlY3JldFBhc3N3b3Jk | base64 --decode Resources  Kubernetes secrets doc YouTube Video - ThatDevOps Guy - Kubernetes Secret Management Explained SealedSecrets by Example  ","permalink":"https://staggerlee011.github.io/posts/kubernetes-secrets-by-example/","summary":"Examples to create and use kubernetes secrets","title":"Kubernetes Secrets by Example"},{"content":"Managing database migrations via code is good! Back in my DBA days I used SSDT and RedGate tools to do my SQL Server deployments. 
In my current shop we normally use postgres, so I needed to find a new tool. Searching around I found flyway, which also happens to be owned by RedGate, and this lets you do migration deployments to postgres, sql server, mysql and a host of others I\u0026rsquo;m sure.\nRunning it locally is super easy: download the client, create your conf file with the database you want to deploy to, and write your .sql files. But when you have a more locked down production environment that you need to copy the .sql files into, get the tool onto a box, be allowed patching and all the rest of it, we ran into problems. My solution (though some may say not very elegant) is to create a docker image from the flyway base image, load in the sql files as part of CI/CD, and push it to kubernetes. Then I just needed to run it, which is a perfect example of a Job, and pass it the conf (hello configmap).\nCode for the solution is below\nDockerfile FROM flyway/flyway:7.3.2 RUN [\u0026#34;rm\u0026#34;, \u0026#34;-fr\u0026#34;, \u0026#34;/flyway/sql\u0026#34;] COPY sql/ /flyway/sql/ ENTRYPOINT [\u0026#34;flyway\u0026#34;, \u0026#34;migrate\u0026#34;, \u0026#34;-url=jdbc:postgresql://localhost:5432/customerdb\u0026#34;, \u0026#34;-user=postgres\u0026#34;, \u0026#34;-password=postgres\u0026#34;] Kubernetes Manifests I used a configmap to create my flyway.conf file. 
You can do it as a secret or, I believe, pass in values as environment variables\nConfigMap\napiVersion: v1 kind: ConfigMap metadata: name: flyway-configmap data: flyway.conf: |flyway.url=jdbc:postgresql://\u0026lt;RDS-INSTANCE\u0026gt;:5432/\u0026lt;database\u0026gt; flyway.user=\u0026lt;FLYWAY-ACCOUNT\u0026gt; flyway.password=\u0026lt;PASSWORD-FOR-ACCOUNT\u0026gt; Then you just need a job to call the image and run it\napiVersion: batch/v1 kind: Job metadata: name: flyway spec: template: metadata: name: flyway spec: containers: - name: flyway image: ECR_URL:\u0026lt;TAG\u0026gt; command: [\u0026#34;flyway\u0026#34;, \u0026#34;migrate\u0026#34;] volumeMounts: - name: flyway-config-volume mountPath: /flyway/conf volumes: - name: flyway-config-volume configMap: name: flyway-configmap restartPolicy: Never With it now being in kubernetes you can use kustomize to allow easy deployments to different environments and image upgrades.\nHope this helps!\n","permalink":"https://staggerlee011.github.io/posts/kubernetes-flyway-job/","summary":"Running flyway database migrations in kubernetes","title":"Flyway database migrations on Kubernetes"},{"content":"To run kubectl commands against an EKS cluster you must first authenticate with it. kubectl manages credentials via the ~/.kube/config file. To get your credentials for a new eks cluster you will need to use the below aws-cli command.\nPre-Req  kubectl installed aws named profile set up for the account IAM permissions to access the EKS cluster  Command aws eks --region eu-west-2 update-kubeconfig --name \u0026lt;EKS-CLUSTERNAME\u0026gt; --profile \u0026lt;AWS-PROFILE-NAME\u0026gt; ","permalink":"https://staggerlee011.github.io/posts/aws-eks-kube-config/","summary":"Code for connecting to a new EKS Cluster","title":"EKS Kube Config"},{"content":"If you run your AWS environment via multiple accounts, then you will probably end up with multiple AWS Named Profiles to manage access to each account. 
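As background, an AWS named profile is just a stanza in ~/.aws/credentials (or ~/.aws/config). A minimal sketch below; the profile name and key values are hypothetical placeholders, not from this post:

```shell
# sketch: define a named profile; the name and keys below are fake placeholders
mkdir -p ~/.aws
cat >> ~/.aws/credentials <<'EOF'
[my-profile]
aws_access_key_id = AKIAEXAMPLEPLACEHOLD
aws_secret_access_key = examplePlaceholderSecretKeyDoNotUse
EOF
```

Any command that takes --profile my-profile (or -ProfileName my-profile in PowerShell) will then resolve credentials from this stanza.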
When pushing a new image to an ECR repo, a standard quick cheat is to use the View Push Commands button on the AWS Console, as it describes the steps to deploy. The issue with this is that using a named profile means adding an extra switch in; sadly this is different for PowerShell and Linux (and I always forget what it is!). Below answers that\nmacOS/Linux macOS and Linux use the --profile switch\naws ecr get-login-password --region eu-west-2 --profile \u0026lt;my-profile\u0026gt; | docker login --username AWS --password-stdin xxx.dkr.ecr.eu-west-2.amazonaws.com PowerShell PowerShell uses the -ProfileName switch\n(Get-ECRLoginCommand -Region eu-west-2 -ProfileName \u0026lt;my-profile\u0026gt;).Password | docker login --username AWS --password-stdin xxx.dkr.ecr.eu-west-2.amazonaws.com ","permalink":"https://staggerlee011.github.io/posts/aws-ecr-named-profile/","summary":"Quick example of using the profile switches for macOS/Linux and Windows","title":"Push Images to ECR via AWS Named Profile"},{"content":"At present I\u0026rsquo;m using the serverless framework to deploy and manage all my lambda functions. For more details on how to use the serverless framework please see my links at the bottom of the post. These are quick example templates I use to cut and paste into new projects to get going a bit quicker.\n Basics  Create a new project Deploy project Test Function Serverless.yml template   Plugins  Add a plugin Serverless-iam-roles-per-function  Usage Add serverless-iam-roles-per-function Add to serverless.yml     Serverless.yml Functions  CRON job Using SSM values Deploy function to VPC   Serverless.yml Resources  Dynamodb  References     References  Basics Collection of simple snippets to get started\nCreate a new project Creates a new serverless.yml and framework files. 
To deploy to a specific folder add -p and the name of the folder you want\nsls create --template aws-python3 Deploy project We use aws named profiles; if you are deploying straight to the default profile you can remove the switch\nsls deploy --aws-profile \u0026lt;aws profile\u0026gt; Test Function To test a single deployed function you can run invoke\nsls invoke -f \u0026lt;function name\u0026gt; --aws-profile \u0026lt;aws profile\u0026gt; Serverless.yml template Below is a starting template for serverless that sets up the following:\n secure s3 bucket (Blocks public access and encrypted at rest) Adds some standard tags (serverless = true) iam separation per function using the plugin serverless-iam-roles-per-function  service: name: ${self:custom.application} frameworkVersion: \u0026#39;2.4.0\u0026#39; custom: application: ses-test provider: name: aws runtime: python3.8 region: eu-west-2 stackName: ${self:custom.application} tags: Application: ${self:custom.application} Serverless : \u0026#34;true\u0026#34; deploymentBucket: maxPreviousDeploymentArtifacts: 1 blockPublicAccess: true serverSideEncryption: AES256 tags: Application: ${self:custom.application} Serverless : \u0026#34;true\u0026#34; functions: ses-test: name: \u0026#34;ses-email-test\u0026#34; description: \u0026#34;sends a test email from ses\u0026#34; handler: functions/ses-test.handler timeout: 60 iamRoleStatementsName: ses-test-role iamRoleStatements: - Effect: \u0026#34;Allow\u0026#34; Action: - ses:SendEmail - ses:SendRawEmail Resource: \u0026#34;*\u0026#34; - Effect: \u0026#34;Allow\u0026#34; Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: \u0026#34;arn:aws:logs:*:*:*\u0026#34; plugins: - serverless-iam-roles-per-function Plugins Add a plugin To add a plugin you will need to add it under the plugins: section of serverless.yml\nplugins: - serverless-iam-roles-per-function You will also need to install the npm package for the plugin:\nnpm install serverless-iam-roles-per-function 
Below are regular plugins I use and their configs\nServerless-iam-roles-per-function This plugin allows you to set individual iam roles per function instead of the standard single iam role for all.\nNOTE: serverless-iam-roles-per-function stopped working on serverless@2.5; ensure you pin serverless to 2.4!!!\nUsage The below example creates an iam role called ses-test-role with specific permissions to ses and logs\nNOTE: Always add the logs action for monitoring of the function\niamRoleStatementsName: ses-test-role iamRoleStatements: - Effect: \u0026#34;Allow\u0026#34; Action: - ses:SendEmail - ses:SendRawEmail Resource: \u0026#34;*\u0026#34; - Effect: \u0026#34;Allow\u0026#34; Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: \u0026#34;arn:aws:logs:*:*:*\u0026#34; Add serverless-iam-roles-per-function npm install serverless-iam-roles-per-function Add to serverless.yml plugins: - serverless-iam-roles-per-function Serverless.yml Functions Collection of snippets to use in serverless.yml\nCRON job Setting up a cron job for a function\nfunctions: start-rds: name: \u0026#34;start-rds\u0026#34; description: \u0026#34;start rds instances based on tagging\u0026#34; handler: functions/start-rds.handler timeout: 60 iamRoleStatementsName: start-rds-role iamRoleStatements: - Effect: \u0026#34;Allow\u0026#34; Action: - rds:* Resource: \u0026#34;*\u0026#34; - Effect: \u0026#34;Allow\u0026#34; Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: \u0026#34;arn:aws:logs:*:*:*\u0026#34; events: - schedule: cron(00 08 ? * MON-FRI *) # 08:00 on every day-of-week from Monday through Friday. 
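For reference, AWS schedule expressions take six fields rather than the standard five used by Unix cron. The annotation below breaks down the expression used above; it is my own note, not from the original post:

```shell
# AWS cron() expressions have six fields: minute hour day-of-month month day-of-week year
# cron(00 08 ? * MON-FRI *)
#      00      - minute 00
#      08      - hour 08 (schedules are evaluated in UTC)
#      ?       - day-of-month unspecified (one of day-of-month / day-of-week must be ?)
#      *       - every month
#      MON-FRI - Monday through Friday
#      *       - every year
```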
Using SSM values I like separating config out and using ssm to store it\nfunctions: example: name: \u0026#34;example-ssm\u0026#34; description: \u0026#34;example function that pulls a value from ssm\u0026#34; handler: functions/example.handler timeout: 60 iamRoleStatementsName: example-ssm-role iamRoleStatements: - Effect: \u0026#34;Allow\u0026#34; Action: - ssm:* Resource: \u0026#34;*\u0026#34; vpc: securityGroupIds: - ${ssm:/global/sec/serverless/id~true} subnetIds: - ${ssm:/global/subnets/intra/az/a/id~true} - ${ssm:/global/subnets/intra/az/b/id~true} Deploy function to VPC Deploy a function to a VPC and specific subnets\nfunctions: my-function: name: \u0026#34;My Function\u0026#34; description: \u0026#34;Lambda function\u0026#34; handler: functions/example.handler  timeout: 20 vpc: securityGroupIds: - ${ssm:/global/sec/serverless/id~true} subnetIds: - ${ssm:/global/subnets/intra/az/a/id~true} - ${ssm:/global/subnets/intra/az/b/id~true} Serverless.yml Resources Collection of resource snippets.\nDynamodb Example dynamodb table\n KeyType - The role that the key attribute will assume:  HASH - partition key RANGE - sort key    resources: Resources: Templates: Type: AWS::DynamoDB::Table Properties: TableName: Templates AttributeDefinitions: - AttributeName: user_id AttributeType: S - AttributeName: template_id AttributeType: S KeySchema: - AttributeName: user_id KeyType: HASH - AttributeName: template_id KeyType: RANGE ProvisionedThroughput: ReadCapacityUnits: 1 WriteCapacityUnits: 1 References  AWS Dynamodb Create Table Serverless Dynamodb  References Collection of primary resources to troubleshoot and learn more:\n Serverless Framework Reference Doc  ","permalink":"https://staggerlee011.github.io/posts/serverless-by-example/","summary":"Example setup and usage of the Serverless Framework with AWS and python","title":"Serverless Framework by Example"},{"content":"NOTE: This guide is for using kustomize in kubectl, which uses an old version of kustomize. This means 
you\u0026rsquo;re writing a lot of deprecated code, like using bases.\nKustomize is a standalone tool to customize Kubernetes objects through a kustomization file. It has been part of kubectl since v1.14. These examples are based on using the built-in version in kubectl, but it is strongly suggested to migrate away and use the latest standalone version.\nRun kustomization file To apply or delete a set of manifests you use the -k flag\nkubectl apply -k k8s/my-app/overlays/production/ View generated output of kustomization To view the manifest that gets generated from kustomize you can run the below command, pointing at the base or overlays/env folder\nkubectl kustomize k8s/my-app/overlays/production/ base settings Below are examples of the base kustomization file\nresources This is the most basic usage of kustomization, allowing you to deploy a namespace and other manifests at the same time\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization namespace: confluence resources: - ./namespace.yml - ./storage.yml - ./deployment.yml - ./service.yml overlays settings Below are examples of overlays for various manifests and options.\nimage or tag Update the tag of the image pulled via overlay:\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base images: - name: nginx newTag: 1.19 You can also update the image name, say if you need to use a different repository\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base images: - name: nginx # note this is the image: tag not the name of the container newName: \u0026lt;NEW ECR PATH\u0026gt;/nginx newTag: 3.4.5 patchesStrategicMerge If you need to update a value of a manifest file from the base, the easiest way is to create a new file in the overlays/env folder and reference it via patchesStrategicMerge.\nThe file will need to re-create the yml down to the value you are overwriting, for example:\ndeployment example A common issue will be updating the env variables for the pods 
in a deployment for each overlay. This can be done by creating a file (below it is called: db-deployment.yml) containing your env settings like:\napiVersion: apps/v1 kind: Deployment metadata: name: confluence spec: template: spec: containers: - name: app env: - name: ATL_JDBC_URL value: \u0026#34;jdbc:postgresql://my-prod-rds-server:5432/db\u0026#34; - name: ATL_JDBC_USER value: \u0026#34;postgres\u0026#34; and then add a patchesStrategicMerge section to kustomization.yml and reference the file db-deployment.yml\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base patchesStrategicMerge: - db-deployment.yml service example Another example would be updating the ARN for a certificate in each overlay. Again you would simply create a new file in the overlays/env folder (in this example it\u0026rsquo;s called arn-service.yml)\narn-service.yml will re-create the yml from the base service but ignore all other values that are set, so it would look like:\napiVersion: v1 kind: Service metadata: name: dos namespace: dos annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:xx:certificate/xxx Again you would update kustomization.yml in the overlays/env folder to reference the new resource\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base patchesStrategicMerge: - arn-service.yml config-maps You can either write the configmap into the kustomization file or keep it in an external file. 
I prefer the latter:\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base patchesStrategicMerge: - config-map.yml ","permalink":"https://staggerlee011.github.io/posts/kustomization-by-example/","summary":"Example syntax for Kustomization files using kubectl","title":"Kubectl Kustomization by Example"},{"content":"Below is a collection of examples of how to achieve different tasks using Terraform\nRemove a specific resource from an environment Example scenario: you created an ECR but no longer need it, as the project has failed or it\u0026rsquo;s been moved to a different location. Either way, you have something in Terraform and you no longer want it there!\nFirst get the name of the resource you want to delete:\nterraform state list In my case the output showed 2 ECR resources (the policy and the ECR)\nFirst let\u0026rsquo;s delete the policy:\nterraform destroy -target=aws_ecr_lifecycle_policy.life_policy Then let\u0026rsquo;s delete the ECR\nterraform destroy -target=aws_ecr_repository.dos You can now remove the file from your workspace and job done :)\nRemove an object from Terraform State In this example someone has kindly destroyed the object in the aws console and you now need to remove the resource from the terraform statefile\nAgain use state list to get the resource name:\nterraform state list Now we run state rm\nterraform state rm module.foo ","permalink":"https://staggerlee011.github.io/posts/terraform-by-example/","summary":"Examples of using the Terraform CLI","title":"Terraform by Example"},{"content":"Another blog post on something super simple that I always forget the syntax for. 
Note that aws-cli has to be v2 for some of these commands.\nNamed profiles Create a named profile:\naws configure --profile \u0026lt;new profile name\u0026gt; List named profiles List all AWS profiles you saved:\naws configure list-profiles ","permalink":"https://staggerlee011.github.io/posts/aws-configure/","summary":"Basic usage aws-cli for account management","title":"aws-cli Configure by Example"},{"content":"I\u0026rsquo;m always on the hunt for an easy way to create good architecture diagrams. I would love to be good at visio, but anyone who has worked with me will tell you I am most definitely not. So the idea of creating diagrams via code is definitely something that interests me.\nPython Diagrams Github - diagrams\nDiagrams is a python package that lets you create some very pretty diagrams via code. It outputs a .png file, let\u0026rsquo;s do a quick example:\nSetup To use diagrams you first need to install graphviz. I try and keep to a central package manager and currently use brew, but you can also install it numerous other ways, such as choco install graphviz -y for Windows.\nbrew install graphviz We then need to set up a python environment that is 3.6+. I create a virtual environment, install diagrams and output that to a requirements.txt\n# create virtualenv python3 -m venv env source env/bin/activate ## install diagrams python3 -m pip install diagrams python3 -m pip freeze \u0026gt; requirements.txt Create a diagram There are lots of examples on the site, but I created a simple one, creating a file called my-app-diagram.py with the below code:\nfrom diagrams import Cluster, Diagram from diagrams.aws.network import VPC, ELB from diagrams.aws.compute import EKS from diagrams.aws.mobile import APIGateway from diagrams.aws.compute import Lambda from diagrams.aws.database import RDS with Diagram(\u0026#34;My-app\u0026#34;, show=False): lb = ELB(\u0026#34;lb\u0026#34;) k8s = EKS(\u0026#34;EKS Cluster\u0026#34;) igw = APIGateway(\u0026#34;API Gateway\u0026#34;) with 
Cluster(\u0026#34;Lambda Functions\u0026#34;): svc_group = [Lambda(\u0026#34;fnc1\u0026#34;), Lambda(\u0026#34;fnc2\u0026#34;), Lambda(\u0026#34;fnc3\u0026#34;)] rds = RDS(\u0026#34;Postgres RDS\u0026#34;) lb \u0026gt;\u0026gt; k8s \u0026gt;\u0026gt; igw \u0026gt;\u0026gt; svc_group \u0026gt;\u0026gt; rds Nothing too scary in there: we import all the icons we want to use, open a with block, set the name of the diagram and start listing all the objects. Then at the bottom we link them together via \u0026gt;\u0026gt;.\nGenerate a diagram To generate the diagram we then just run the python script:\npython3 my-app-diagram.py This creates a file called my-app.png which looks like this:\nAnd there we have it. There are a lot of nice examples in the diagrams repo to look through, and I\u0026rsquo;m currently a bit of a fan to say the least. I think it would fall down if you\u0026rsquo;re trying to diagram an AZ- and EC2-heavy environment, and I\u0026rsquo;ve not found a good example of doing that, but for simple diags it looks a really nice choice.\n","permalink":"https://staggerlee011.github.io/posts/aws-diagrams-python-diagrams/","summary":"Create AWS diagrams from python","title":"AWS Diagrams via Python Diagrams"},{"content":"Below are some basic python dependency management steps 
that can be used to control your python work.\nNew Project If you\u0026rsquo;re starting a new python script/function/module/package you\u0026rsquo;ll probably want to create one or more virtual environments. I like virtualenv (read my post here about setting this up)\nInstall dependencies Next you\u0026rsquo;ll want to install some modules/packages to support your work\npython3 -m pip install x y z Save dependencies After installing we will want to save them to a requirements.txt for future builds / setups\npython3 -m pip freeze \u0026gt; requirements.txt Load dependencies from requirements If you have a requirements.txt file you can then load all dependencies in a single command via\npython3 -m pip install -r requirements.txt ","permalink":"https://staggerlee011.github.io/posts/python-dependency-management/","summary":"Basic python dependency management steps","title":"Python Dependency Management"},{"content":"After spending some time trying to get different confluence HELM charts to work and failing (I\u0026rsquo;m sure it was me, not the code!), I gave up and wrote my own manifest files to deploy it. 
You can find the code in my github repo: atlassian-docker.\nThe deployment gives you:\n A single pod confluence deployment Offloaded HTTPS to a custom URL using a loadbalancer EFS storage to allow for HA  Pre-reqs / Notes My manifests and deployment are specific to running kubernetes on AWS and use EFS storage to allow HA via having an EKS cluster over 2 AZs.\nYou will need to have the EFS CSI driver or some other variation to allow kubernetes to connect a PersistentVolume to EFS, see below for more details:\nhttps://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html\nAWS Configuration  Kubernetes deployed (I use EKS) Either upload a certificate or generate one for HTTPS RDS Postgres database accessible by the EKS cluster EFS deployed with connection to the EKS cluster  Update Kubernetes Manifests Update the below manifest settings to match your environment.\ndeployment.yml The below values need updating to connect to the database:\n Update ATL_JDBC_URL value Update ATL_JDBC_USER value Update ATL_JDBC_PASSWORD value  Set the proxy name for your URL (ie confluence.mysite.com)\n Update the ATL_PROXY_NAME value  service.yml Add the ARN for your certificate:\n Update service.beta.kubernetes.io/aws-load-balancer-ssl-cert with the cert ARN  storage.yml Add the EFS id to the PersistentVolume\n Update volumeHandle with the EFS ID  Kubernetes Deployment Once all the manifests have been updated, you\u0026rsquo;re ready to deploy.\nkubectl apply -f confluence/namespace kubectl apply -f confluence And there you have it, with fingers crossed you should now be able to go to your URL and see a confluence setup page :)\n","permalink":"https://staggerlee011.github.io/posts/confluence-on-kubenetes/","summary":"Run Confluence on Kubernetes","title":"Deploying Confluence to Kubernetes"},{"content":"Todo-tree is a handy little extension to track issues and comments in your code (I\u0026rsquo;m not going to get into the debate of whether you should put a TODO comment in code or in a story board; that\u0026rsquo;s for you to 
decide). It adds a new pane to vscode letting you quickly look over a repo/page\u0026rsquo;s outstanding issues, or things to note, see below\nIt\u0026rsquo;s really simple to use (you add a TODO into your code and a new line pops up in the pane, showing where it is), but I couldn\u0026rsquo;t find the default options anywhere on the extension\u0026rsquo;s site: https://marketplace.visualstudio.com/items?itemName=Gruntfuggly.todo-tree or an easy guide to customize it should I want to.\nStandard options By installing you get the options of:\nTODO: Creates a todo note FIXME: Creates a bug like note Customize via palette You can add a new tag via opening the Command Palette and typing in Todo Tree: add tag you then populate it with the name of the tag (say NOTE) and job done, you can add a note (it looks like this:)\nAs you can see it uses the same icon as TODO, so while quick and easy to add, not great.\nCustomize via settings.json If you want to add your own tag with a whole bunch of customization options (read the docs for more info) then you need to edit a file. This is where I got lost, as most examples talk about editing a file, but I could never see what the file they edited was! Turns out it\u0026rsquo;s the settings.json in vscode. This file is used by all extensions so be careful, as you don\u0026rsquo;t want to mess other extensions up when updating your Todo-tree settings.\nTo edit settings.json open the Command Palette and type: Preferences: Open Settings (JSON) then add in your customized code. 
There\u0026rsquo;s a few out there if you google around; I like jsonasbn\u0026rsquo;s dev blog post (note: if you use his, copy the updated code from the comments).\nMine is currently:\n\u0026#34;todo-tree.highlights.defaultHighlight\u0026#34;: { \u0026#34;type\u0026#34;: \u0026#34;text-and-comment\u0026#34; }, \u0026#34;todo-tree.general.tags\u0026#34;: [ \u0026#34;TODO\u0026#34;, \u0026#34;FIXME\u0026#34;, \u0026#34;NOTE\u0026#34; ], \u0026#34;todo-tree.highlights.customHighlight\u0026#34;: { \u0026#34;TODO\u0026#34;: { \u0026#34;foreground\u0026#34;: \u0026#34;black\u0026#34;, \u0026#34;background\u0026#34;: \u0026#34;#22B965\u0026#34;, \u0026#34;iconColour\u0026#34;: \u0026#34;#22B965\u0026#34;, \u0026#34;icon\u0026#34;: \u0026#34;squirrel\u0026#34;, }, \u0026#34;FIXME\u0026#34;: { \u0026#34;foreground\u0026#34;: \u0026#34;black\u0026#34;, \u0026#34;background\u0026#34;: \u0026#34;#B4292B\u0026#34;, \u0026#34;iconColour\u0026#34;: \u0026#34;#B4292B\u0026#34;, \u0026#34;icon\u0026#34;: \u0026#34;bug\u0026#34; }, \u0026#34;NOTE\u0026#34;: { \u0026#34;foreground\u0026#34;: \u0026#34;black\u0026#34;, \u0026#34;background\u0026#34;: \u0026#34;#2B6DD5\u0026#34;, \u0026#34;iconColour\u0026#34;: \u0026#34;#2B6DD5\u0026#34;, \u0026#34;icon\u0026#34;: \u0026#34;octoface\u0026#34; } } I was trying to get the colours from the ubuntu wsl terminal as I like them against the dark theme in vscode, but they\u0026rsquo;re not quite right. But after spending far too long playing around with it, this will do for now! :)\nIt gives me the below\nAs a note, you can pick the icons from the octicons set. And that\u0026rsquo;s it, hope this helps.\n","permalink":"https://staggerlee011.github.io/posts/vscode-todotree/","summary":"Basic usage of the vscode extension todo-tree","title":"VSCode Todo Tree"},{"content":"With PSP being deprecated in 1.21 and fully removed in 1.25 (see the github conversation here) it\u0026rsquo;s time to start looking around at other options. 
At present that really sits with OPA, which means learning a new language/syntax that doesn\u0026rsquo;t seem too friendly to me, or Kyverno, which uses native kubernetes manifests to let you deal with your policy management. For me, as we don\u0026rsquo;t have too many policies at the moment, kyverno fits our needs better. Below are basic syntax and usage examples.\nInstall You can install via manifest or HELM. We use kustomize, so download the install.yaml file and use that as a base, then overlay our ECR images\nkubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml Basic overlay example\napiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base images: - name: ghcr.io/kyverno/kyverno newName: dkr.ecr.eu-west-2.amazonaws.com/kyverno newTag: v1.3.2-rc1 - name: ghcr.io/kyverno/kyvernopre newName: dkr.ecr.eu-west-2.amazonaws.com/kyvernopre newTag: v1.3.2-rc1 Reading Policies Policies are split between namespace and cluster\nNamespace kubectl get policyreport -A Cluster kubectl get clusterpolicyreport -A View Violations kubectl describe polr -A | grep -i \u0026#34;status: \\+fail\u0026#34; -B10 or specific to namespace\nkubectl describe polr polr-ns-default | grep \u0026#34;Status: \\+fail\u0026#34; -B10 Example policy Example policy to audit the use of the label app: \u0026quot;?*\u0026quot;\napiVersion: kyverno.io/v1 kind: ClusterPolicy metadata: name: audit-app-label spec: validationFailureAction: audit rules: - name: check-for-app-labels match: resources: kinds: - Pod validate: message: \u0026#34;The label `app` is required.\u0026#34; pattern: metadata: labels: app: \u0026#34;?*\u0026#34; ","permalink":"https://staggerlee011.github.io/posts/kubernetes-kyverno/","summary":"Examples and usage of Kyverno on kubernetes","title":"Kubernetes Policy as Code with Kyverno"},{"content":"You\u0026rsquo;ve deployed an RDS instance for your EKS/kubernetes cluster into a private subnet and 
don\u0026rsquo;t have a bastion up to run pgadmin on.\nYou want to connect to a postgres database quickly\nSolution Bring up a pod with pgadmin (if you\u0026rsquo;re running a private EKS you need to use a private ECR for the --image value)\nkubectl run pgadmin --image dpage/pgadmin4 --env=\u0026#34;PGADMIN_DEFAULT_EMAIL=admin@admin.com\u0026#34; --env=\u0026#34;PGADMIN_DEFAULT_PASSWORD=logmein\u0026#34; port-forward into pgadmin\nkubectl port-forward pgadmin 8080:80 Open your favourite web browser and go to http://localhost:8080\nAnd there it is, you can now enjoy the joys of pgadmin to connect to your private database server without the need for jump boxes, external load balancers etc. All locked down to only those that can connect to your cluster via kubectl\nClean up Delete the pod until needed again\nkubectl delete pod pgadmin ","permalink":"https://staggerlee011.github.io/posts/kubernetes-pgadmin-port-fowarding/","summary":"Kubernetes port-forwarding pgadmin","title":"Port-forwarding pgadmin"},{"content":"This is my current setup for kubernetes (running on WSL2 ubuntu-18)\nInstall software I currently use the following software to manage and interact with k8s:\nKubectl Standard k8s cli\n Link to Kubectl  Kube-ps1 Visualizes which k8s cluster you are connected to\n Link to Kube-ps1  Kubectx Easily switch between k8s clusters and re-name them!\n Link to Kubectx  Octant Web based dashboard that uses port-forwarding to access the k8s cluster\n Link to Octant  KubeSeal Aka sealedsecrets. 
Used to encrypt secrets on file.\n Link to sealedsecrets  Kustomize kubectl comes with a very old version of kustomize; it\u0026rsquo;s well worth sticking with the latest version.\n Link to kustomize  KubeLinter Analyses Kubernetes YAML files and Helm charts, and checks them against a variety of best practices, with a focus on production readiness and security.\n Link to KubeLinter  bash-completion So you can get tab completion with kubernetes\nInspektor Gadget Collection of tools to debug and inspect kubernetes applications\n Link to Inspektor Gadget  ssm-secret Allow import/export of kubernetes secrets to/from AWS SSM\n Link to kubectl-ssm-secret  Install via Brew All of these can be installed via brew:\nbrew install kubectl kube-ps1 kubectx octant kube-linter kustomize kubeseal bash-completion Install via Krew krew is a tool that allows you to add plugins to kubectl\n Link to Krew Install Krew  Note run: source ~/.bashrc to refresh wsl\nkubectl krew install gadget kubectl krew install ssm-secret Set up kubectl alias and tab completion As someone who can\u0026rsquo;t spell or type, aliases / tab completion are my friends\nAlias I use the common alias of k = kubectl to try and lower my command line mistakes\nsudo vim ~/.bash_aliases Insert into the file the below:\nalias k=\u0026#39;kubectl\u0026#39; Save the changes :wq and exit out\nTab Completion Not something I\u0026rsquo;m a big fan of as it seems slow and unresponsive, but worth having anyway.\nsource \u0026lt;(kubectl completion bash) echo \u0026#39;source \u0026lt;(kubectl completion bash)\u0026#39; \u0026gt;\u0026gt;~/.bashrc Configure kube-ps1 After installing kube-ps1 you will also need to update ~/.bashrc\nsudo vim ~/.bashrc insert into the file MAKE SURE TO DO THIS AT THE BOTTOM OF THE FILE! 
the code below, then save and exit with :wq\nsource \u0026#34;$(brew --prefix)/opt/kube-ps1/share/kube-ps1.sh\u0026#34; PS1=\u0026#39;$(kube_ps1)\u0026#39;$PS1 Once you\u0026rsquo;ve saved the file re-source it and it should load up in your terminal\nsource ~/.bashrc Kubectx renaming I also then use kubectx to rename all my EKS clusters, otherwise my terminal would be full before I even started writing anything!\nFor example if I had an EKS cluster that was in a development VPC I could\nkubectx # select the development eks cluster kubectx development=. # updates the cluster to be named \u0026#34;development\u0026#34; Summary And that\u0026rsquo;s it for the moment. I really like kube-ps1 for the easy knowledge that I\u0026rsquo;m in the right cluster, and kubectx for the naming and ease of switching context between them. Octant I\u0026rsquo;ve not used much, but it looks a good replacement for the risk / issues of using the kubernetes dashboard.\n","permalink":"https://staggerlee011.github.io/posts/kubernetes-workstation/","summary":"My workstation setup for kubernetes","title":"Kubernetes Workstation Setup"},{"content":"This is a quick gist to install wsl2, download a distro and enable it.\n References  Manually download Windows Subsystem for Linux distro packages  ","permalink":"https://staggerlee011.github.io/posts/wsl2-setup/","summary":"Quick script to install and set up wsl2","title":"WSL2 Setup and Configuration"},{"content":"Quick reference commands for dealing with wsl in windows10\nlist wsl distro this also returns the version of the distro you are running\nwsl -l -v upgrade distro from wsl1 to wsl2 Get the distro name from wsl -l -v in the below example I\u0026rsquo;m upgrading ubuntu from wsl1 to 2\nwsl --set-version Ubuntu-18.04 2 set new default distro wsl -s Ubuntu-18.04 restart wsl distro wsl -t Ubuntu-18.04 uninstall single distro go into windows apps and features, select the distro you wish to uninstall and select remove\nunregister via the command line wsl --unregister 
Ubuntu-18.04 install single distro go to windows store and search for wsl\ninstall via the command line Invoke-WebRequest https://aka.ms/wsl-kali-linux-new -OutFile kali.appx -UseBasicParsing Add-AppxPackage .\\kali.appx wslconfig as well as wsl there is also a wslconfig command\nlist distros wslconfig.exe /l set new default distro wslconfig.exe /setdefault Ubuntu-18.04 ","permalink":"https://staggerlee011.github.io/posts/wsl-commands/","summary":"Set of helpful WSL commands","title":"WSL Helpful Commands"},{"content":"After getting myself into far too many messes using both powershell and wsl, I\u0026rsquo;m uninstalling all things windows and trying to only run work apps via the wsl ubuntu image. With that, I still use vscode for all my coding, with the terminal open for all commands that I need. Updating it to use wsl is super easy:\nSteps to update vscode default terminal  open the terminal select the dropdown on the right hand side of the terminal bar select \u0026ldquo;Select default shell\u0026rdquo;  This opens the command palette with the options you can switch to\n select WSL bash  Common issues I found that my default wsl image was docker when setting this up. So after completing the above I would then get an error saying\nThe terminal process \u0026#34;C:\\Windows\\System32\\wsl.exe\u0026#34; failed to launch (exit code: 1). 
The fix is to update wsl and set the ubuntu image as your default distro:\nrunning the below will list out all your wsl images as well as indicate which is your current default:\nwslconfig.exe /l run the below to update it to your preferred distribution:\nwslconfig.exe /setdefault Ubuntu-18.04 confirm via running wslconfig again\nYou should now be able to open wsl from vscode\nReferences  vscode terminal integrations troubleshooting vscode terminal launch  ","permalink":"https://staggerlee011.github.io/posts/vscode-terminal-wsl/","summary":"Configure VS Code terminal to use WSL","title":"VSCode Terminal Set to WSL"},{"content":"When you deploy Terraform you\u0026rsquo;ll want to have a remote state setup to manage team access. For AWS the standard is to use an S3 bucket. As you can\u0026rsquo;t store the state of the bucket IN the bucket, it\u0026rsquo;s one of the only things that you have to leave outside of being controlled via the remote state.\nFor our teams we manage this by still creating the s3 bucket in Terraform and keeping the code in source control; this is normally stored in a .state folder along with the other workspaces.\n- terraform - .state - core - eks - rds The below is a gist example of the code we use, I\u0026rsquo;d suggest also adding a version.tf in the folder that matches the rest of your workspaces.\n ","permalink":"https://staggerlee011.github.io/posts/terraform-statefile/","summary":"Terraform code to generate a secure S3 bucket for remote state","title":"Terraform Remote Statefile Creation"},{"content":"Quick note on running python virtualenv, it\u0026rsquo;s a repetitive task that I always seem to forget the steps for :/\nInstallation python3 -m pip install virtualenv Create virtual environment (shorthand) python3 -m venv env Create virtual environment (specify version of python) python3 -m virtualenv -p python3 venv Activate environment Note, as I was switching between windows 10 and WSL Ubuntu I found out you can\u0026rsquo;t create an environment in one and use it in the other!\nWindows: 
.\\env\\Scripts\\activate.ps1\nUbuntu: source env/bin/activate\nExit environment Windows: deactivate\nUbuntu: deactivate\n","permalink":"https://staggerlee011.github.io/posts/python-setup-virtualenv/","summary":"Setup python virtualenv for Windows or Ubuntu","title":"Setup Python virtualenv"},{"content":"It\u0026rsquo;s really simple and to be honest doesn\u0026rsquo;t need a blog post, but since I managed to ignore all the warning signs, someone else might :).\nPre-reqs none\nSteps The steps can be followed by reading the install output as it happens; if you miss it like I did, read on:\nInstall brew the url and home of brew for linux is here: https://brew.sh/ (this may have an updated url so please check if you get errors)\n/bin/bash -c \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\u0026#34; Configure brew this is where I messed up! Upon finishing the install, it feeds you lots of HELPFUL info saying you need to update your PATH and suggests installing some other software. I ignored this and spent a few hours moaning to my team that things don\u0026rsquo;t work like they should, trying to work out why I could install things but not use them \u0026gt;\u0026lt;\nthis is an example solution! 
if your ubuntu login is not stephen this won\u0026rsquo;t work for you!\necho \u0026#39;eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)\u0026#39; \u0026gt;\u0026gt; /home/stephen/.profile eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv) it also suggests installing the below\nsudo apt-get install build-essential brew install gcc Test That\u0026rsquo;s it, you should now be good to go and install all the lovely software and have it work properly!\nbrew help ","permalink":"https://staggerlee011.github.io/posts/install-brew-on-ubunutu/","summary":"Steps to install and configure brew for Ubuntu-18","title":"Install Brew on Ubunutu-18"},{"content":"","permalink":"https://staggerlee011.github.io/archive/","summary":"archive","title":"Archive"},{"content":"","permalink":"https://staggerlee011.github.io/search/","summary":"search","title":"Search"}]