[{"content":"Overview I was recently looking to quickly add notifications in an internal tool. Nothing fancy, just a simple toast message I could easily pop up when necessary. Usually this is something like &ldquo;I didn&rsquo;t load the resource you wanted to look at because it isn&rsquo;t available&rdquo;.\nThere are a ton of ways to plumb notifications in, but I wanted to write the minimal amount of code possible. Polling, SSE, websockets are all options &ndash; and I plan to dig into those at some point! I already knew about the Flask flashing system so I opted to start there and build something out using HTMX hx-swap-oob.\nThe basic flow is this:\nDuring a request, call flash(&lt;some_message&gt;) to generate a notification While processing the response, either the normal template will render the messages OR we will add the messages to the rendered HTML before sending it back to the browser Initialize any new toast elements that show up in the DOM Message Flashing This part is pretty simple. During request handling, use the flash function to record a message. The message is added to the session cookie and can be retrieved during template rendering. This is fine for my current use case where I just want to give simple feedback on user actions, not a full blown notification subsystem!\nSimple, Contrived Example:\nfrom flask import flash from uuid import uuid4 from my_app import app # Just an example so we can pretend to define a route @app.route(&#39;\/message&#39;) def message(): # THIS IS ALL IT TAKES flash(f&#39;Random message: {str(uuid4())}&#39;) # NO REALLY ^^ this becomes a notification for the current user! return &#39;ok&#39; HTML Rendering I have hx-boost set on the &lt;body&gt; tag in my base template. There are some elements that are added directly to the document body by 3rd part JS, and these need to be left alone when navigating. So I already have hx-target and hx-select in place in my base template. 
Since I want to attach new messages to any HTMX request, I can&rsquo;t rely on my base template.\nInstead, I created a new template for my notifications. This template is included in my base template, outside my main content. This allows me to render any notifications during a normal page load, but it will be ignored during an HTMX request.\n{% set category_icon_map = { &#34;error&#34;: &#34;fa-exclamation-triangle&#34;, &#34;info&#34;: &#34;fa-info-circle&#34;, } %} &lt;div id=&#34;alerts&#34; class=&#34;toast-container position-absolute top-0 end-0 p-3&#34; style=&#34;z-index: 100;&#34; hx-swap-oob=&#34;afterbegin&#34;&gt; {% with messages = get_flashed_messages(with_categories=True) %} {% for category, message in messages %} &lt;div class=&#34;toast align-items-center bg-secondary text-white bg-gradient border-0&#34; role=&#34;alert&#34; aria-live=&#34;assertive&#34; aria-atomic=&#34;true&#34;&gt; &lt;div class=&#34;d-flex&#34;&gt; &lt;div class=&#34;toast-body&#34;&gt; &lt;span class=&#34;fa fa-s {{ category_icon_map[category] }} fa-lg me-1&#34;&gt;&lt;\/span&gt; &lt;span class=&#34;fw-normal&#34;&gt; {{ message }} &lt;\/span&gt; &lt;\/div&gt; &lt;button type=&#34;button&#34; class=&#34;btn-close btn-close-white me-2 m-auto&#34; data-bs-dismiss=&#34;toast&#34; aria-label=&#34;Close&#34;&gt;&lt;\/button&gt; &lt;\/div&gt; &lt;\/div&gt; {% endfor %} {% endwith %} &lt;\/div&gt; Yeah yeah ignore the bootstrap stuff ;)\nNow that I have notifications in a standalone template, I can render this template in isolation. This is where hx-swap-oob comes in: I can add a Flask after_request handler to render this template and add it to the response. Since the rendered HTML is at the top level of whatever response is provided, HTMX will see the hx-swap-oob attribute and go update the appropriate element wherever it actually exists in the DOM based on the id. 
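Stripped of Flask, the response rewrite is a single decision: if the rendered response does not already contain the alerts container, append the freshly rendered fragment so HTMX swaps it out-of-band. A standalone sketch of that check (hypothetical helper name, assuming the container id of "alerts" used above):

```python
def append_oob_alerts(body: bytes, alerts_fragment: str) -> bytes:
    """Append an hx-swap-oob alerts fragment, unless the response already
    rendered the #alerts container (i.e. it was a full page load)."""
    if b'div id="alerts"' in body:
        # the base template already rendered the flashed messages
        return body
    return body + alerts_fragment.encode("utf-8")
```

The substring check is crude but cheap; a fuller implementation might parse the response or key off a template flag instead.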
I am using &quot;afterbegin&quot; to append new notifications to the ones already present.\nThe handler is pretty simple (and probably needs refining):\nfrom flask import Response, render_template, request @app.after_request def render_messages(response: Response) -&gt; Response: if request.headers.get(&#34;HX-Request&#34;) and response.data.find(b&#34;div id=\\&#34;alerts\\&#34;&#34;) == -1: messages = render_template(&#34;includes\/alerts.jinja2&#34;) response.data = response.data + messages.encode(&#34;utf-8&#34;) return response Voila, now I can call flash(&lt;some_message&gt;) during a request and it automatically shows up during both normal and HTMX requests.\nNotification Display Not too tricky, and an area I&rsquo;d like to improve on. That said&hellip; it works and I&rsquo;ve spent more time on this post than on the actual implementation :P\n\/\/ Normal page load $(&#39;.toast&#39;).map(function (index, element) { if (element.classList.contains(&#39;hide&#39;)) { return; } let toast = new bootstrap.Toast(element); toast.show(); setTimeout(function () { element.remove(); }, 15 * 1000); }); \/\/ HTMX requests htmx.onLoad(function (content) { [content].map(el =&gt; { if (el.classList.contains(&#39;toast&#39;)) { let toast = new bootstrap.Toast(el); toast.show(); setTimeout(function () { el.remove(); }, 15 * 1000); } }); }); Basically, on a normal page load find any elements with a toast class and init the Toast. On an HTMX request, just inspect the new content and do the same. This way we don&rsquo;t get old toasts popping up again :D\nEnd There you have it, stupid simple toast notifications without extra tools\/processes\/etc. This is definitely not an ideal state, but it works reasonably well now and I can tackle a more elegant solution later. Probably.\n","permalink":"https:\/\/blog.jesseops.net\/posts\/htmx-flask-notifications\/","summary":"Overview I was recently looking to quickly add notifications in an internal tool. 
Nothing fancy, just a simple toast message I could easily pop up when necessary. Usually this is something like &ldquo;I didn&rsquo;t load the resource you wanted to look at because it isn&rsquo;t available&rdquo;.\nThere are a ton of ways to plumb notifications in, but I wanted to write the minimal amount of code possible. Polling, SSE, websockets are all options &ndash; and I plan to dig into those at some point!","title":"Notifications using Flask & HTMX"},{"content":"My usual method for setting up a new computer is to install Syncthing and add all my main synchronized folders, then rsync any specific dotfiles (.vimrc, .gitconfig, etc) to the new system.\nIt&rsquo;s never super clean, but this does speed me up enough to get moving quickly. I&rsquo;ve considered setting up a more global $HOME sync folder with a ton of excludes, but I want to be a bit more intentional about updating my system configuration.\nI decided to just use a private GitHub repo to store these dotfiles, taking a lot of care to avoid committing any secrets (I like to assume that anything I store on someone else&rsquo;s computer will eventually be public).\nThis turned out to be so much easier than I expected, just a simple alias to override the $GIT_DIR and a custom exclude file to make sure I only add very specific files to this repo.\ngit init --bare $HOME\/.cfg echo &#34;*&#34; &gt;&gt; $HOME\/.cfg\/info\/exclude alias dotfile=&#34;git --git-dir=$HOME\/.cfg\/ --work-tree=$HOME&#34; I have the above alias added to my .zshrc (so bonus: it is automatically sync&rsquo;d!) and can do all the normal git commands by calling dotfile:\ndotfile remote add origin git@github.com:private\/repo.git dotfile add .gitconfig dotfile add .zshrc ... # etc dotfile commit -m &#39;add my dotfiles&#39; dotfile push -u origin main Now, to set up a new machine, I need to set up my dotfile repo in my home directory. 
Just as before when I initialized a bare git repo with a custom GIT_DIR, I need to clone my repo on the new system with the bare flag and pointing to the same .cfg directory for consistency. You can test this non-destructively by creating a temporary folder and referencing it instead of $HOME in the commands below:\n&#x26a0;&#xfe0f; This will overwrite any tracked files in $HOME &#x26a0;&#xfe0f;\ngit clone --bare git@github.com:private\/repo.git $HOME\/.cfg git --git-dir=$HOME\/.cfg --work-tree=$HOME checkout -f Now I can do things like manage OS-specific entries and files via branches, or try out major changes without worrying about leaving myself in a broken state on this machine or another one.\n","permalink":"https:\/\/blog.jesseops.net\/posts\/manage-dotfiles\/","summary":"My usual method for setting up a new computer is to install Syncthing and add all my main synchronized folders, then rsync any specific dotfiles (.vimrc, .gitconfig, etc) to the new system.\nIt&rsquo;s never super clean, but this does speed me up enough to get moving quickly. I&rsquo;ve considered setting up a more global $HOME sync folder with a ton of excludes, but I want to be a bit more intentional about updating my system configuration.","title":"Managing Dotfiles"},{"content":"Here&rsquo;s a simple way to expose an Ingress controller running in a private network to the public. My use case is for adhoc standup of Kubernetes clusters via K3d on my computer while testing automation (eg, self managed ArgoCD) so that I can validate cert-manager &amp; external-dns.\nI purposefully kept the tools required to a minimum - this is proof of concept stage more than anything. The concepts aren&rsquo;t new, but so far I haven&rsquo;t seen this exact solution shared. Not that there isn&rsquo;t prior art in the space: see InletsPRO or ngrok. 
Both are paid tools - which is a good thing, pay people for good tools!\nMy use case can be met with simpler tooling though:\n1 VPS w\/ public IP (DigitalOcean is a great option at $5\/mo) An SSH client sidecar attached to an Ingress controller Pod That&rsquo;s it! I don&rsquo;t have this polished up, but it is working on my home cluster right now.\nGenerate SSH Key Use key-based authentication. Just do it.\n\u276f ssh-keygen -t ed25519 -f .\/digitalocean.clusteraccess -C &#34;remote port forward DO to private ingress&#34; Don&rsquo;t create a passphrase. Copy the digitalocean.clusteraccess.pub file generated and use it when creating the VPS.\nConfigure VPS Again, this isn&rsquo;t polished - ideally I&rsquo;d have a simple command here using doctl to create the perfect droplet with the appropriate sshd_config. Suck it up, we&rsquo;re using the cloud console:\nLog in or create a DigitalOcean account - if it&rsquo;s a new account you can get $5 credit by getting an invite from someone (drop me a line on Twitter). You don&rsquo;t need anything crazy, just a basic droplet in a reasonably close zone. Under Authentication choose New SSH Key. Paste in the .pub key you copied in the previous step.\nOnce your droplet is created, grab the public IP address and log in:\n\u276f ssh root@161.35.225.155 -i .\/digitalocean.clusteraccess We need to modify the sshd config to enable GatewayPorts. This will permit listening on the public interface so our SSH port forwarding will have the desired result. Edit \/etc\/ssh\/sshd_config using your favorite editor, vim. Uncomment or add the GatewayPorts entry and set it to yes. Then reload ssh: service ssh restart.\nValidate it works Run an http server on your system. I already had mkdocs running on localhost:8000, but you could as easily do the following:\n\u276f echo &#34;&lt;b&gt;hello world&lt;\/b&gt;&#34; &gt; index.html \u276f python3 -m http.server 8080 Serving HTTP on 0.0.0.0 port 8080 (http:\/\/0.0.0.0:8080\/) ... 
127.0.0.1 - - [10\/Jul\/2021 08:59:45] &#34;GET \/ HTTP\/1.1&#34; 200 - Use whatever port you like; just make sure the remote forward below targets whatever port your local server is actually listening on. I overrode the default 8000 to 8080 because&hellip; mkdocs was running.\nNow, test your SSH forwarding:\n\u276f ssh root@161.35.225.155 -i .\/digitalocean.clusteraccess -R 80:localhost:8080 You should now be able to curl 161.35.225.155:\n\u276f curl 161.35.225.155 &lt;b&gt;hello world&lt;\/b&gt; Patch Ingress Controller Here&rsquo;s the really fun part, and it&rsquo;s going to depend heavily on how you deployed your Ingress controller. If you like, you could easily just edit the Deployment object directly. I use helmfile so I opted to use the built-in Kustomize support to patch it in.\nI created a ConfigMap in my ingress controller namespace:\n&#x26a0;&#xfe0f; this is just for the proof of concept, this should really go into a secret!\napiVersion: v1 kind: ConfigMap metadata: name: digitalocean-externalip-key data: ssh_key: | -----BEGIN OPENSSH PRIVATE KEY----- ********************************************************************** ********************************************************************** *************************itsasecretyo********************************* ********************************************************************** ********************************************************************** *****= -----END OPENSSH PRIVATE KEY----- And then patched my Ingress controller deployment (note, this is a helmfile specific example):\n- name: ingress-nginx-external namespace: nginx-system chart: nginx-stable\/nginx-ingress labels: repo: nginx-stable chart: nginx-ingress component: ingress domain: external version: ~0.9.3 values: - controller: defaultTLS.secret: nginx-system\/default-{{ .Environment.Values | get &#34;external_domain&#34; &#34;unknown&#34; }}-tls wildcardTLS.secret: nginx-system\/default-{{ .Environment.Values | get &#34;external_domain&#34; &#34;unknown&#34; }}-tls ingressClass: nginx-external service: annotations: 
metallb.universe.tf\/address-pool: k8s-services strategicMergePatches: - apiVersion: apps\/v1 kind: Deployment metadata: name: ingress-nginx-external-nginx-ingress namespace: nginx-system spec: template: spec: containers: - name: ssh-sidecar image: gfleury\/ssh-client command: [&#34;ssh&#34;] args: - &#34;-i&#34; - &#34;\/etc\/sshkeys\/ssh_key&#34; - &#34;-o&#34; - &#34;UserKnownHostsFile=\/dev\/null&#34; - &#34;-o&#34; - &#34;StrictHostKeyChecking=no&#34; - &#34;-N&#34; - &#34;-R&#34; - &#34;80:0.0.0.0:80&#34; - &#34;-R&#34; - &#34;443:0.0.0.0:443&#34; - &#34;-o&#34; - &#34;ExitOnForwardFailure=yes&#34; - &#34;root@xxx.xxx.xxx.xx&#34; volumeMounts: - name: sshkey-volume mountPath: \/etc\/sshkeys volumes: - name: sshkey-volume configMap: name: digitalocean-externalip-key defaultMode: 256 You&rsquo;ll note the extra options in the SSH command. First, I want both 80 and 443 available. Since I&rsquo;m not running interactively, I&rsquo;d prefer to skip the host key checking. Finally, if the port forwarding fails I want to retry: with ExitOnForwardFailure set, the SSH command will exit, causing the container to be restarted. Nifty huh? You can verify by tailing the logs of the ssh-sidecar container, or connecting to :80 on the public IP. You should get an nginx 404.\nCreate an Ingress Ok, almost done! Now to create an Ingress. 
I like using httpbin as a test application.\nRun httpbin &amp; create a Kubernetes service:\n\u276f kubectl -n default run httpbin --image=kennethreitz\/httpbin:latest --port=80 --expose=true And apply this ingress:\n&#x26a0;&#xfe0f; This assumes you have cert-manager already set up, otherwise drop the cert-manager &amp; tls items\napiVersion: networking.k8s.io\/v1 kind: Ingress metadata: annotations: cert-manager.io\/cluster-issuer: letsencrypt name: httpbin namespace: default spec: ingressClassName: nginx-external rules: - host: httpbin.somedomain.tld http: paths: - backend: service: name: httpbin port: number: 80 path: \/ pathType: Prefix tls: - hosts: - httpbin.somedomain.tld secretName: httpbin-tls I created the appropriate DNS record on my domain, but if you&rsquo;re not using cert-manager you could as easily just add a host entry.\nLet&rsquo;s test:\ncurl -X GET &#34;https:\/\/httpbin.somedomain.tld\/base64\/ZHVkZSwgdGhpcyBpcyBhYnNvbHV0ZWx5IGFtYXppbmcu&#34; -H &#34;accept: text\/html&#34; Next Steps &amp; Acknowledgements This could definitely use polishing up. I added a firewall to my DigitalOcean droplet to allow port 80\/443 from anywhere but only accept 22 connections from my home IP. The whole VPS configuration could be a simple Terraform or doctl automation. The SSH key should go in a secret, and this solution only works with a single Ingress controller instance at a time. But for a quick hack this works nicely!\nI modified my ssh-sidecar config from the one found here. Thanks for the great example gfleury!\n","permalink":"https:\/\/blog.jesseops.net\/posts\/simple-externalip-for-home-kubernetes-ingress\/","summary":"<p>Here&rsquo;s a simple way to expose an Ingress controller running in a private network to the public. 
My\nuse case is for adhoc standup of Kubernetes clusters via K3d on my computer while testing automation\n(eg, self managed ArgoCD) so that I can validate cert-manager &amp; external-dns.<\/p>","title":"Simple External IP for Home Kubernetes Ingress"},{"content":"I&rsquo;m sure you&rsquo;ve never had to do a search for &ldquo;fix author git multiple commits&rdquo;. Right? Well, I&rsquo;m not going to talk about how to do that fix (ugh it&rsquo;s a pain&hellip;). What I DO have to share is a way to hopefully avoid that mess in the future without any crazy arcane incantations or embracing tedious per-repository authorship edits.\nGlobal Config A little background &ndash; Git supports multiple layers of configuration at system, global, repository, and of course, CLI levels. The nearest config entry wins (system takes lowest priority with CLI highest).\nIt&rsquo;s pretty common to simply run git config --global user.email &#34;me@myself.i&#34; and call it a day. This command updates your global Git config (located at ~\/.gitconfig on Linux) with your preferred email address.\nThat works if you always use the same email regardless of repository. But if you prefer to keep (for example) personal projects separate from open source contributions, this gets to be a pain as you need to override at least the email config entry per repository.\nOverride Config Just like my trick to use a different SSH key for some repositories, we can use built-in git config directives to make this work. Specifically, includeIf permits including config files based on some conditional.\nI always check my repositories out under ~\/code, and further group related repos together (eg, ~\/code\/work\/{this,that,another}_repo) in subdirectories. It&rsquo;s pretty simple to check if the current repo is in a specific subdirectory using gitdir. So I just add a .gitconfig to the subdirectory and in my global config add a conditional include for that subdirectory. 
See below for an example!\nOne additional trick that I use in tandem with my SSH configuration is to rewrite the git remote URL. This allows me to clone exactly as I normally would, but rewrite the URL with something specific to that project. From there my SSH config handles rewriting it back when actually interacting with the git remote.\nTurns out I was missing a really simple way to avoid this URL hack: the core.sshCommand config entry. This is a ton simpler and doesn&rsquo;t confuse other tools.\nExample Here are my config files, lightly edited:\nUser configuration [user] name = &#34;Jesse Roberts&#34; email = &#34;*****@**********.***&#34; [core] editor = vim [init] defaultBranch = main [rebase] autoStash = true [push] default = upstream [includeIf &#34;gitdir:~\/code\/work\/&#34;] path = &#34;~\/code\/work\/.gitconfig&#34; Subdirectory configuration overrides [user] email = work@workplace.tld name = &#34;Jesse Roberts&#34; [core] sshCommand = ssh -i ~\/.ssh\/&lt;my-work-ssh-key&gt; ","permalink":"https:\/\/blog.jesseops.net\/posts\/override-git-config-multiple-repositories\/","summary":"<p>I&rsquo;m sure you&rsquo;ve never had to do a search for <em>&ldquo;fix author git multiple commits&rdquo;<\/em>. Right? Well, I&rsquo;m\nnot going to talk about how to do that fix (ugh it&rsquo;s a pain&hellip;). What I DO have to share is a way to\nhopefully avoid that mess in the future without any crazy arcane incantations or embracing tedious\nper-repository authorship edits.<\/p>","title":"Override Git Config For Multiple Repositories"},{"content":"Port forwarding is amazingly useful. If you&rsquo;ve never used this feature before, the tl;dr is you can bind a port on your local system to a port on a remote system. This allows you to interact with a service as if it were running on your computer (eg, connect to a database that is firewalled off).\nssh can also perform &ldquo;dynamic&rdquo; port forwarding. This creates a SOCKS http proxy through the remote host. 
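Whether local (-L) or dynamic (-D), the mechanics are the same: a listener relaying bytes between two sockets, with ssh encrypting the hop in between. A bare-bones stdlib sketch of the local-forward plumbing (hypothetical helper names, no SSH and no encryption involved — just the byte shoveling):

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass  # the other direction closed the pair out from under us
    finally:
        dst.close()

def forward(local_port: int, remote_host: str, remote_port: int) -> socket.socket:
    """Listen on 127.0.0.1:local_port and relay each connection to
    remote_host:remote_port -- conceptually `ssh -L local:remote_host:remote`."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", local_port))
    listener.listen(5)

    def accept_loop():
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection((remote_host, remote_port))
            # one thread per direction; both exit when either side closes
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener
```

ssh does the same dance, except the "upstream" connection is opened from the remote end of an encrypted channel, which is why the target service sees the connection coming from the SSH server rather than from you.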
Any tool that respects the https_proxy environment variable can access services on (or access the internet through) the remote server.\nExample: accessing remote Kubernetes API I run k3d on my desktop as a local kubernetes &ldquo;cluster&rdquo;. Sometimes I want to sit on the couch and keep working on a project. It&rsquo;s easy enough to use either method to get access working as if I were sitting at my desk!\n&#x26a0;&#xfe0f; k3d binds to 0.0.0.0 on a random port by default. I prefer to set it to 127.0.0.1 and a known port, making simple port binding more predictable.\nDynamic (socks proxy) Easy to set up, but requires the https_proxy environment variable to be set. Not all applications work out of the box.\n# Starting out on my laptop, can&#39;t access kubernetes API \u276f kubectl --context k3d-blog-example get node The connection to the server 127.0.0.1:6550 was refused - did you specify the right host or port? \u276f screen -S k3d-tunnel ... # Inside screen window \u276f ssh -NnD 127.0.0.1:9001 $user@$desktop ... # `Ctrl+a d` to detach from screen [detached from 53775.k3d-tunnel] \u276f kubectl --context k3d-blog-example get node The connection to the server 127.0.0.1:6550 was refused - did you specify the right host or port? # THAT&#39;S MORE LIKE IT \u276f https_proxy=socks5:\/\/127.0.0.1:9001 kubectl --context k3d-blog-example get node NAME STATUS ROLES AGE VERSION k3d-blog-example-server-0 Ready control-plane,master 111s v1.20.6+k3s1 Local port binding A little more basic, but this binds the desired port locally &ndash; effectively behaving as if the service was truly on my local system. The downside is I need to bind a new port for every service I wish to access (e.g., 8080 &amp; 8443 to access the loadbalancer service of my local cluster).\n# Starting out on my laptop, can&#39;t access kubernetes API \u276f kubectl --context k3d-blog-example get node The connection to the server 127.0.0.1:6550 was refused - did you specify the right host or port? 
\u276f screen -S k3d-tunnel ... # Inside screen window \u276f ssh -NnT -L 6550:localhost:6550 $user@$desktop ... # `Ctrl+a d` to detach from screen [detached from 53775.k3d-tunnel] \u276f kubectl --context k3d-blog-example get node NAME STATUS ROLES AGE VERSION k3d-blog-example-server-0 Ready control-plane,master 1m v1.20.6+k3s1 ","permalink":"https:\/\/blog.jesseops.net\/posts\/access-remote-services-via-ssh\/","summary":"<p>Port forwarding is amazingly useful. If you&rsquo;ve never used this feature before, the tl;dr is you can\nbind a port on your <em>local<\/em> system to a port on a <em>remote<\/em> system. This allows you to interact with\na service as if it were running on your computer (eg, connect to a database that is firewalled off).<\/p>","title":"Access Remote Services Over SSH"},{"content":"While managing servers directly via SSH is mostly an anti-pattern these days (there&rsquo;s always that red-headed stepchild of a host that runs those cron entries you&rsquo;re just not sure if you can delete), I still use it heavily. Here are some tips and usage patterns I&rsquo;ve integrated into my workflow &ndash; and I&rsquo;m sure I have barely scratched the surface!\nKeep config in sync across workstations I really like to keep my configuration in sync across devices (and backed up!). I use syncthing to accomplish this. I have all my machines (NAS, desktop, laptop{0&hellip;?}, phone, work machine) set up to sync. This is crazy helpful to avoid losing keys and keep consistent configuration.\nI recommend using the .stignore file to avoid synchronizing certain files (eg, work keys\/config, known_hosts, authorized_keys). This is configured per-device!\n\/\/ avoid sync conflicts with known hosts known_hosts \/\/ I prefer to directly manage which machines I can ssh into via my key authorized_keys* \/\/ skip synchronizing work keys\/conf **work** Let your config files do the work Entropy is great isn&rsquo;t it? 
What was once a pretty simple id_rsa you set up when first cloning a git repo or configuring a raspberry pi has proliferated into a folder full of keys as you migrated between computers and set up a homelab and forgot old passphrases. How to keep track of which key to use on which servers? And forget keeping track of usernames&hellip;\nLuckily, your ssh config can remember all of that!\n&#x2705; Tip &#x2705;\nOn Linux, the global ssh config is located at \/etc\/ssh\/ssh_config. It&rsquo;s a great place to start familiarizing yourself with the available options and to learn how ssh prioritizes them. You can also man ssh_config.\nI keep my ssh config split up into appropriate files. This enables selective syncing and makes it much easier to manage over time.\nBase config &ndash; ~\/.ssh\/config I use this mostly to load in my application or environment specific configs, as well as to set some global Host settings:\n# Apply to all hosts unless overridden by more explicit match Host * # Only use identities (eg, keys) explicitly defined via CLI or ssh config # This avoids extra work trying invalid keys IdentitiesOnly yes # Check if remote server is alive, terminate session after 90s if not ServerAliveInterval 30 ServerAliveCountMax 3 # Automatically add keys to ssh-agent AddKeysToAgent yes # Include my env or account specific options Include config.d\/*.conf Home config &ndash; ~\/.ssh\/config.d\/$HOMENETWORK.conf This is for things like my NAS, other personal computers, etc.\nHost $DESKTOP # match exactly my desktop name # Specify my home username &amp; personal ssh key User jesseops IdentityFile ~\/.ssh\/jesseops # Override IP to get around VPN issues capturing DNS lookups Hostname 192.168.x.x Host *.localnet # Match any host on my localnet User $AUTOMATIONUSER ForwardAgent yes # Let me jump from one host to another IdentityFile ~\/.ssh\/localnet_infra Git config &ndash; ~\/.ssh\/config.d\/git.conf # git specific keys! 
Host github.com IdentityFile ~\/.ssh\/github Host bitbucket.org IdentityFile ~\/.ssh\/bitbucket Work config &ndash; ~\/.ssh\/config.d\/work{,_git}.conf Here I keep track of customizations for $current_job and can exclude it from synchronizing to my personal machines.\n# NOTE: I now use the _proper_ way, ie the core.sshCommand gitconfig entry instead # of hacky URL rules. Much easier. # Neat little trick in combination with my git config that allows me to override # the keys I use to access github\/bitbucket (useful for public hosted organizations) # Host github.com-work # HostName github.com # IdentityFile ~\/.ssh\/work # Specify multiple hostname patterns Host *.workdomain *.workdevdomain IdentityFile ~\/.ssh\/work User $USERID # Add a bastion for accessing eg cloud hosted systems Host work-bastion HostName $bastionip User ec2-user IdentityFile ~\/.ssh\/bastion-key StrictHostKeyChecking no # not secure _but_ for ephemeral instances the host key changes often UserKnownHostsFile=\/dev\/null # Let&#39;s just skip tracking host keys anyway # Access host via bastion Host someinternal.cloud HostName $internalip User ec2-user # May not be available in your ssh version, check out # https:\/\/superuser.com\/questions\/1253960\/replace-proxyjump-in-ssh-config ProxyJump work-bastion # exactly what it sounds like ","permalink":"https:\/\/blog.jesseops.net\/posts\/ssh-config\/","summary":"<p>While managing servers directly via SSH is <em>mostly<\/em> an anti-pattern these days (there&rsquo;s always that\nred-headed stepchild of a host that runs those <code>cron<\/code> entries you&rsquo;re just <em>not sure<\/em> if you can\ndelete), I still use it heavily. Here are some tips and usage patterns I&rsquo;ve integrated into my\nworkflow &ndash; and I&rsquo;m sure I have barely scratched the surface!<\/p>","title":"SSH Configuration Tips"},{"content":"Hi there! 
This is the first &ndash; alternatively, last if you&rsquo;re reading archives &ndash; post on this blog.\nI&rsquo;ve started this blog as a journal of sorts, documenting projects I&rsquo;m working on and saving notes about things I&rsquo;ve learned to refer back to later. Some of these may be useful to you. If so, feel free to drop me a line by twitter or email.\n","permalink":"https:\/\/blog.jesseops.net\/posts\/hello-world\/","summary":"Hi there! This is the first &ndash; alternatively, last if you&rsquo;re reading archives &ndash; post on this blog.\nI&rsquo;ve started this blog as a journal of sorts, documenting projects I&rsquo;m working on and saving notes about things I&rsquo;ve learned to refer back to later. Some of these may be useful to you. If so, feel free to drop me a line by twitter or email.","title":"Hello World"}]