Stop Blaming Scale: Why Ownership Matters

12/02/2025

Do you dread opening tickets when an internal tool is broken? Do you struggle to find someone to call when a production-critical service goes down? Have you ever asked someone why something is broken and gotten a shrug in response?

If you answered yes to these questions, whether you've noticed it or not, you likely have an ownership problem in your organization.

What Does It Mean to Own?

Here, I define 'ownership' as the explicit accountability and responsibility for the long-term success and health of a specific area, whether it's a service, project, or process. Ownership in an equity sense certainly has its benefits, but I'm not here to talk about that (at least today).

It seems that all too often, companies allow a culture to permeate that doesn't reward demonstration of responsibility and mastery. This can be triggered by many things, such as:

  • fear of blame
  • unclear role definitions
  • rewarding output over impact

Isn't This Just Describing Working at a Large Company?

While some of these may seem like natural problems that come with scaling a company, I disagree. I've worked at a large (>11k headcount) company where ownership was prioritized, and the impact it had on the work environment was notable.

One management exercise, performed on a semi-annual basis, involved a group of the division's managers meeting to discuss what areas of ownership their direct reports had, if they were satisfied with their responsibilities, and what their growth path was within those areas. This acted as an accountability measure, ensuring that every manager was thinking about this for their team members. If someone didn't already own something, we could discuss as a group opportunities that fit their skill set and growth path. It also encouraged regular check-ins with team members to understand their skill growth goals and how you as a manager could facilitate them.

Why Should I Solve This?

For those that *do* have high ownership, having to work with others who don't is frustrating. Unchecked over time, this can become a negative feedback loop, starting with apathy, and ending with "quiet quitting", or even departure.

Direct ownership also encourages mastery. Having a discrete list of responsibilities, whether assigned or chosen, offers a clear path of knowledge and skill growth that aligns with organizational needs. This has the added benefit of professional development, bolstering an individual's resume with specific skills that can help them land their next job.

Pride is a powerful motivating factor. Being responsible for the perception of your area's success and failures, combined with a culture of excellence, can drive positive, long-term outcomes. This allows someone to continue building up an area that has been an organization's pain point in the past. This can also help with building visibility and working towards promotions or other opportunities.

Another reason to solve this is to avoid a double standard. Any user should have confidence in issue resolution and service reliability. If your organization provides customer-facing support, your internal support should carry the same attitude and be held to the same standard. Otherwise, you risk employees noticing the double standard and considering internal support a cushy gig, solely because it isn't held to the same expectations.

To summarize:

                  High Ownership               Low Ownership
Accountability    Well-defined roles           Ambiguous, unclear roles
Personal Growth   Opportunity for mastery      Stagnation
Team Culture      Excellence and support       Frustration and apathy
Outcomes          Improved results and morale  Increased failures and turnover

Okay, I'm Convinced. How Do I Address the Problem?

If you are in management, making sure this culture grows is your responsibility, which includes actively sharing feedback. However, many of these steps can be encouraged, or even taken directly, without a management dynamic.

The most straightforward step to addressing this is to ensure that everyone has at least 1-2 areas that they will own. This means thinking of processes, projects, services, and tools that might not have an owner. For example:

  • Who is responsible for making sure GitLab runners are consistently available for jobs?
  • Who owns the integration test suite?
  • Who's the expert on using sysinternals tools?

For these "named owners" to become "true owners", they will need:

  • to collect feedback from their "users"
  • the autonomy to make decisions for their area
  • the expectation of responsibility for the successes and failures of their area
  • the understanding that it may take time to develop expertise in their area

These are critical to transforming a simple responsibility into ownership. They allow an individual to feel empowered in their daily work.

The more institutional way to tackle this is with recognition and mentoring. Highlighting individuals who demonstrate what ownership means to your division or team is important for building a culture where people are rewarded (including financially) for going above and beyond. This can take the simple form of giving shoutouts to coworkers in a meeting, or giving out awards to those who best embody ownership in your organization.

Finally, in a discrete sense, having an easily accessible, easily searchable, healthy index of assets and services (an ITIL CMDB) can help here. All employees should have read access to it, and be trained to use it (or have a guide they can bookmark). Not to mention there are potential compliance obligations that a proper CMDB can help satisfy.
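As a sketch of what a single index entry could capture, consider something like the following. The field names here are illustrative, not a prescribed ITIL schema, and the values are hypothetical:

```yaml
# Hypothetical service-catalog entry -- adapt fields to your CMDB of choice
service: gitlab-runners
owner: jdoe              # the named owner, accountable for long-term health
backup_owner: asmith     # bus-factor mitigation: a second person with context
tier: internal
escalation: "#ci-support" # where users go when something breaks
docs: https://wiki.example.com/ci/runners
```

Even a lightweight entry like this answers the "who do I call?" question from the opening of this post.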

But What About Burnout?

There is a distinction here that should be made: I am not advocating for over-working employees, or throwing individuals into the deep end in a new area. Ownership means being the primary point of contact and driving an issue or project to resolution, while still leveraging resources and co-workers where appropriate. I'm also not suggesting an owner be responsible and on-call 24/7. If your organization has SLAs that need to be met, an on-call rotation that includes cross-training is a far more sustainable approach for maintaining a system.

For managers, make sure you are checking in with your team members: see if they have grown bored with their existing responsibilities or are feeling overworked. It's very possible they need more support, or would benefit from rotating to different ownership areas where they can continue to grow.

The Bus Factor

It's also important to call out that area and process knowledge needs to be distributed, not siloed with a single person, lest there be an unforeseen circumstance. More than one person in your organization should always have an understanding of the work being done in any given area to minimize risk; you don't want to be stuck learning about a service while troubleshooting a critical production issue. That said, it's not bad to have a single person *responsible* for the long-term success of that service.

Conclusion

I hope this has served as a chance to consider how ownership looks in your own team or organization. If change is needed, start a conversation about fixing it with someone else who *cares*: your manager, PM, etc. If you're in management, champion a culture where taking responsibility is celebrated, not just expected.

If your org doesn't address this, your exceptional and transformative co-workers can and will find a place that already has.

Wrangling LLM Web Crawlers

04/14/2025

Background

There's been a lot of media coverage recently around LLM data scraping, particularly of resources in the open source community. These scrapers often ignore robots.txt, leading to inspiration for tools like Nepenthes, Cloudflare's recent implementation of the same idea, iocaine, Anubis, and many others. As mentioned in most of those sources, the amount of traffic generated by these bots, particularly on popular open source websites, has been concerning, and has increased hosting costs. The current scheme of training AI models will continue to need more data, and as more competitors enter the space, it seems that this won't be slowing down any time soon.

While these tools are a great attempt to address this problem, most lack a simple method for deploying and integrating with your existing networking solution. I wanted to make integrating these tools as easy as possible, and to ensure they only get applied to undesired web crawlers.

Writing a Traefik Plugin

As a Traefik user for my Kubernetes cluster, I've long been a user of the CrowdSec Traefik Plugin, which is effectively a native fail2ban solution, with the additive benefit of ban-list sharing among users. This has been a good solution for bad actors in general, as IPs that submit requests matching known vulnerability heuristics are blocked. Knowing this type of request handling was possible through a Traefik plugin led me to follow the same path for this project.

Traefik middleware plugins are Go modules that implement the http.Handler interface's ServeHTTP method, modifying the request or response as it passes through. These are indexed by Traefik by searching GitHub and running tests against the package; if a user has specified a plugin in their Traefik instance's static configuration, it will be loaded at runtime through the Yaegi interpreter. Yaegi has worked great for me while working on this project, though notably the unsafe stdlib package cannot be loaded. While I didn't plan to use this package directly, I was hoping to use zerolog for logging, so that logs could be written in JSON and easily match Traefik's native logs. However, zerolog ends up depending on unsafe. As a result, I implemented a custom log handler that, for the time being, writes simple strings to stdout/stderr.

The foundation for the plugin is the ai-robots.txt project, a community-maintained list of LLM bot user agents, their operators, and additional information. The plugin caches this list on a regular basis and uses it in two ways. The first is to generate a robots.txt file dynamically when requested for an ingressroute that implements the middleware. This file is used for the Robots Exclusion Protocol, meant to tell bots which resources on the site, if any, they are allowed to visit. However, it's been well documented that many of these LLM data-scraping bots do not respect the contents of this file.
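For example, a robots.txt generated from such a list might look like the following. The agents shown are a small illustrative subset, not the full ai-robots.txt contents:

```txt
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: CCBot
Disallow: /
```

Per the Robots Exclusion Protocol, a group of User-agent lines followed by `Disallow: /` tells each listed bot it may not visit any resource on the site.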

If a request is not to /robots.txt and comes from one of these bots' user agents, then the request can be simply logged, rejected with a 403 error, or proxied to a custom service. The proxy feature is meant to forward the request to a "tarpit"-like app such as Nepenthes, iocaine, and others. The proxy is configured to be unbuffered, such that a tarpit that intentionally trickles information to the client does not impact performance.
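To make the matching step concrete, here is a minimal sketch (not the plugin's actual code) of checking a request's User-Agent against a cached list. The `blockedAgents` contents and function name are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// blockedAgents stands in for the cached ai-robots.txt list;
// the real plugin fetches and refreshes this at runtime.
var blockedAgents = []string{"GPTBot", "ClaudeBot", "CCBot"}

// isBlockedUA reports whether a User-Agent header matches a known
// LLM crawler. A substring match is used because real user agents
// embed the bot token among browser-like details.
func isBlockedUA(ua string) bool {
	for _, bot := range blockedAgents {
		if strings.Contains(ua, bot) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isBlockedUA("Mozilla/5.0 (compatible; GPTBot/1.1)")) // true
	fmt.Println(isBlockedUA("Mozilla/5.0 (X11; Linux x86_64)"))      // false
}
```

In the middleware, a match would then route the request to the configured action: log, 403, or proxy to the tarpit.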

The combination of these two features ensures that only bots that ignore the Robots Exclusion Protocol are being acted upon. Users can also configure the application with their own custom robots.txt index, if they want to allow specific bots, or block even more user agents that may not be encompassed in the list.

The plugin is available through the Traefik Plugin Catalog, and a configuration and deployment guide can be found on the project's README.

Future Plans

While this plugin addresses controlling traffic when the request from a web crawler provides an accurate user agent, it does not handle the case where the user agent is forged or omitted from the request headers. To address this, I'd like to work on a feature where requests can be evaluated by alternative methods, such as needing to complete a Proof-of-Work challenge like Anubis or Altcha.

I also plan to improve the logging feature to properly log in JSON, so that the logs can be ingested into tools like Loki and queried more easily.

Streaming Live TV with Plex and HDHomeRun inside Kubernetes

01/29/2025

Background

HDHomeRuns are a great way to stream live TV to your home. These are devices produced by Silicon Dust that connect to an OTA antenna (or cable box) and can be connected to clients via USB or Ethernet. These can be found used for pretty cheap on eBay, but even their new products are reasonably priced. What piques most folks' interest in these is their ability to integrate with Plex, allowing you to stream live TV directly within Plex clients.

In my case, I chose an Ethernet-connected HDHomeRun; however, this presented a challenge when running Plex inside Kubernetes. HDHomeRun connections are initiated by the client sending a UDP broadcast packet from port 65001. This is done to discover the HDHomeRun device. Once the HDHomeRun device is discovered, Plex will present it as a connection option, as demonstrated in this official Plex guide. While this can easily work within a given network, what if you run your IoT devices in one VLAN, but run Plex in another? Furthermore, what if you are running Plex inside Kubernetes, which uses NAT between the pod networks and the hosts?

Solving with socat

Thankfully this is where socat can help us. Socat is a versatile utility for bidirectional data transfer between two independent data streams, supporting various protocols like TCP, UDP, pipes, and files. As meckhert on the Unifi forum shared, socat can be used to handle the VLAN traversal. In our case it can also handle the NATing issues from inside the Kubernetes cluster! By running socat in a sidecar container with Plex, it will be able to receive the broadcast packets Plex sends out, and forward them directly to the HDHomeRun device.

An example deployment of this:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: plex
  name: plex
spec:
  replicas: 1
  selector:
    matchLabels:
      app: plex
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: plex
    spec:
      containers:
        - image: plexinc/pms-docker:1.41.3.9314-a0bfb8370
          name: plex
          env: [] # your env vars here
          volumeMounts: [] # your volume mounts here
        - image: alpine/socat:1.8.0.0
          name: socat
          command: ["socat", "-d", "-d", "-v", "udp4-recvfrom:65001,broadcast,fork", "udp4-sendto:192.168.20.20:65001"] # replace 192.168.20.20 with your HDHomeRun's IP
      volumes: [] # your volumes here
```

Don't forget to set up any necessary firewall rules. I found the following needed to be allowed between my IoT and Kubernetes VLANs:

Allow HDHomeRun Discovery

  • Protocol: IPv4 UDP
  • Source: Kubernetes Host IPs
  • Destination: HDHomeRun IP
  • Destination port: 65001

Allow HDHomeRun Streaming

  • Protocol: IPv4 TCP
  • Source: Kubernetes Host IPs
  • Destination: HDHomeRun IP
  • Destination Ports: 80, 5004

Hopefully by sharing this I can help someone else who runs into this case. Happy streaming!

Using a Transparent Proxy to bypass TLS Fingerprinting

11/22/2024

Background

As an avid user of Home Assistant, I was excited years ago when I found there was an integration to control my car's "smart" features from within Home Assistant (remote start, lock, monitoring fuel and tire pressure, etc.). However, Mazda issued a bad-faith DMCA takedown of the Python library that supported it, with no first-party alternative beyond their slow, unresponsive app. Read more at Ars Technica here.

While the code continued to function for a while, this summer Mazda's servers stopped responding to traffic generated by this Python library. After some testing by myself and other members of the HA community, we determined it was due to an issue with the TLS handshake, possibly TLS fingerprinting. Since this connection expects TLS 1.3, openssl provides limited options for modifying cipher lists, so I investigated transparent proxies as a method to solve this by renegotiating connections.

Experimenting

Running the traffic for this integration through a transparent HTTPS proxy allows us to change the TLS fingerprint (JA3/JA4) that the Mazda server sees. I had tested this a few months back running Burp Suite on a Kali Linux VM, but I had a good deal of trouble getting the connection to be reliable, and was mostly using it to troubleshoot the connection.

I revisited this the other day with mitmproxy. I initially tried the transparent proxy feature it provides, but was having issues when trying to use a NAT to send it traffic, at which point I discovered the builtin WireGuard mode, which is also a transparent proxy, but traffic is sent to it via a WireGuard client. This pairs excellently with an add-on for Home Assistant that allows it to be a WireGuard client.

mitmproxy provides standalone executables, but there is also a Docker image that works well; I have this deployed in a Kubernetes cluster. I will also note that there was a mitmproxy Home Assistant addon someone was maintaining, but it appears to have been recently deprecated.

Setup Steps

1. Prepare configuration for mitmproxy

Mitmproxy uses configuration files, which are searched for in the container's /home/mitmproxy/.mitmproxy folder. With this, we can map a volume containing our config files to that folder path.

You will need the below config.yaml file:

```yaml
web_host: 0.0.0.0 # listen on all interfaces (not just loopback)
mode:
  - wireguard
```

If you have your own Certificate Authority, create a PEM encoded private key+cert file named mitmproxy-ca.pem.

The below YAML describes the Deployment and Service resources I'm using in my setup. Note the need to expose the WireGuard port as a UDP service.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mitm-proxy
  name: mitm-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mitm-proxy
  template:
    metadata:
      labels:
        app: mitm-proxy
    spec:
      containers:
        - image: mitmproxy/mitmproxy:latest
          name: mitm-proxy
          command:
            - mitmweb
            - --web-host
            - "0.0.0.0"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mitm-proxy
  name: mitm-proxy
spec:
  selector:
    app: mitm-proxy
  type: NodePort
  ports:
    - name: mitm-web
      port: 8087
      targetPort: 8081
      protocol: TCP
    - name: mitm-wg
      port: 51820
      targetPort: 51820
      protocol: UDP
  externalIPs:
    - 192.168.109.2
```

2. Test accessing the web interface, and save the WireGuard client configuration from the Capture menu. Note that you need to access it by IP, or follow specific reverse proxy setup steps that support its DNS rebinding protection. I have found it easiest for this use case to access via the load balanced IP directly.

3. Get HA to trust your Certificate Authority (or one generated by mitmproxy) with the hass-additional-ca addon by Athozs. This is pretty easy and he has good documentation on the linked project's GitHub page.

4. Install the HA WG client addon and configure it. You'll need to translate the traditional text WireGuard config to the YAML expected by the addon, but it's fairly easy. For the allowed_ips value, you will need to enter the IPs for (or a CIDR range containing) the two servers for your region from here. Presently, this is what I have for the US.

```yaml
allowed_ips:
  - 23.45.46.0/24
```

Results

While this has allowed me to connect to the Mazda API again and once again unify my smart devices into a single pane of glass, there have been some interruptions to this connection (~95% uptime). Hopefully the community can find a true permanent solution to this connection issue; I know I am still interested in solving this for good.

Containerizing Web Apps and CI Build Pipeline

03/21/2024

Overview

I wanted to share my recent journey of containerizing two of my projects, a Flask app (this website) and a Go web app (ginrcon), and then building a CI/CD pipeline to automate building and publishing the Docker image. Let's dive in!

The Motivation Behind Containerization

As the deployment complexity of my projects grew, I found managing VMs, dependencies, scaling, and patching to be increasingly challenging. I've been consuming and deploying Docker containers for awhile for various services at home, such as Pi-hole, a Unifi network server, a Matrix stack, and more. I've seen the advantages of containers as an administrator, so from the development perspective, I knew containers promised consistency across different environments, simplified deployments, and enhanced scalability.

Flask App Containerization

I started by creating a Dockerfile for my Flask app. This involved specifying a base image, copying my app's code into the container, installing dependencies, and exposing the necessary ports. Since I had already written a Linux service unit file, I stuck with running the Flask app with Gunicorn.
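A minimal sketch of such a Dockerfile follows. The app layout, port, and a requirements.txt that includes Gunicorn are assumptions for illustration, not my exact project structure:

```dockerfile
# Illustrative sketch -- paths, port, and module name are assumptions
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer caches between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

EXPOSE 8000
# Run under Gunicorn, assuming the Flask app object is `app` in app.py
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

Copying and installing requirements before the rest of the source keeps dependency installation cached across rebuilds.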

To ensure the web app would also run highly available, I planned to deploy it in a replicated fashion, so I utilized Docker Compose for my Swarm. This allowed me to define the replication settings in a simple YAML file, alongside the image spec.
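A minimal replicated service definition for Swarm might look like the following sketch; the image name, replica count, and port are illustrative:

```yaml
# Illustrative Swarm stack file -- values are placeholders
version: "3.8"
services:
  web:
    image: ghcr.io/example/flask-app:latest
    deploy:
      mode: replicated
      replicas: 3   # Swarm schedules three copies across the cluster
    ports:
      - "8000:8000"
```

With a stack file like this, `docker stack deploy` handles spreading the replicas across Swarm nodes.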

Go App Containerization

One of the perks of Go is its ability to compile to a single binary. Leveraging this, I created a Dockerfile for my Go app, ensuring it downloaded dependencies and compiled within the build container, on a pinned Go version, for consistency.

Since Go compiles to a binary, my Dockerfile for the Go app focused on creating a minimalistic image, resulting in faster builds and smaller image sizes. This is why I chose to use Alpine Linux as the final image. This led me to use a multi-stage build to accomplish all intended goals.
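A multi-stage Dockerfile along those lines might look like this sketch; the Go version, module layout, and binary name are assumptions rather than my exact file:

```dockerfile
# Illustrative multi-stage build -- versions and names are placeholders

# Build stage: pinned Go version for consistency
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary so it runs on a bare Alpine base
RUN CGO_ENABLED=0 go build -o /out/ginrcon .

# Final stage: minimal Alpine image containing only the binary
FROM alpine:3.20
COPY --from=build /out/ginrcon /usr/local/bin/ginrcon
ENTRYPOINT ["ginrcon"]
```

Only the final stage ships, so the Go toolchain and source never appear in the published image.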

CI Pipeline

After determining the image build and publishing workflows needed to ensure my images were available on the GitHub Container Registry (GHCR), I figured setting up a CI pipeline to handle those steps for me automatically would further contribute to a consistent image, in addition to being easier to manage. While I have much experience with GitLab CI professionally, these projects were both already hosted on GitHub, so I chose to learn how to use GitHub Actions to run these pipelines.

Leveraging GitHub Actions' flexibility, I wrote a workflow to build Docker images that works for both apps, triggered by Releases on the project. Once the image is built successfully, another step in the workflow pushes these images to the associated package repo (GHCR), with the version auto-detected based on the Release version.
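A workflow along these lines might look like the following sketch; the job names, action versions, and tag format are illustrative rather than my exact workflow:

```yaml
# Illustrative GitHub Actions workflow -- adapt names and versions
name: build-and-publish
on:
  release:
    types: [published]   # trigger on project Releases
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write     # required to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          push: true
          # version auto-detected from the Release tag
          tags: ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }}
```

The release tag doubles as the image version, so publishing a Release is the only manual step.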

Conclusion

The migration of my Flask and Go web apps into containers, coupled with the establishment of a Docker image build and publish pipeline, has improved my development and deployment workflows. Containerization has not only enhanced portability, scalability, and availability, but also simplified the management of dependencies. With a reliable CI/CD pipeline in place, I'm better equipped to iterate on my projects in the future.

For future improvements, I'd like to work on developing unit tests for these projects as well as create GitHub Actions to trigger them automatically upon new commits/PRs.

Highly Available DNS for Home Network

11/23/2023

A recent project I worked on was improving the fault tolerance of my home network, specifically DNS. Previously, I was running a single instance of Pi-hole, which filters out unwanted DNS queries and forwards the rest to my upstream Windows Domain Controllers with integrated DNS. From there, queries go out to a public resolver.

This approach had a few drawbacks. Two issues stemmed from the fact that the Pi-hole instance was running bare-metal on a Raspberry Pi, which, while usually reliable, was not tolerant of hardware issues. Patching the Raspberry Pi or rebooting it for other reasons would also cause a DNS service outage, which was undesirable. The Raspberry Pi was also being used for other services, which occasionally introduced undesirable system load. Another issue: if the upstream Windows DNS servers stopped responding to queries, Pi-hole cached these failed lookups, which would persist even after the upstream issue was resolved, requiring a service restart.

The solution I designed is pictured below. DNS queries are now sent to a single IP address (192.168.1.2), provided via DHCP, which is a load balanced IP address on my ADC (now NetScaler again) VPX appliance, pointed at two Pi-hole instances.

A diagram showing the DNS architecture. A Citrix NetScaler is the entrypoint, which forwards queries to one of two Pi-Hole containers, running under Docker Swarm. Those Pi-Hole containers forward queries upstream to Windows DNS servers, which in turn forward queries to Cloudflare at 1.1.1.1.

NetScaler Configuration

Getting a NetScaler instance up and running is actually pretty easy, since as of v12.1, Citrix offers a Freemium licensing option, which is bandwidth restricted to 20 Mbps and doesn't provide access to certain features like GSLB or Citrix Gateway, but neither limitation is an issue for this use case. Configuring a simple load balancer for servers on a NetScaler isn't particularly difficult and many general guides exist. At a high level, you need to:

  • Define the servers that will provide the DNS service.
  • Define a Load Balancing Service Group containing those servers.
  • Define a Load Balancing Virtual Server, with a Virtual IP listening at the IP address you'll be pointing clients to, and bind the above Service Group.

Additionally, you can bind a monitor for the Service Group to ensure DNS lookups function properly, rather than servers just responding to pings or other simple health checks. I configured a DNS monitor with the parameters shown below, specifically to query for my local domain name and ensure it resolves to one of the IP addresses of my domain controllers. Multiple IP addresses can be added to the list to be considered a valid response. Don't forget to save your changes, since they won't persist through a reboot otherwise!

the parameters provided to the NetScaler health check configuration. The settings are: an interval of 5 seconds, a timeout of 2 seconds, a query for domain.local a query type of address, and an expected IP address of 192.168.1.2
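For reference, the equivalent NetScaler CLI commands look roughly like the following; the server names, IPs, and monitor values are illustrative and should be adapted to your environment:

```
# Illustrative NetScaler CLI -- names and addresses are placeholders
add server pihole-1 192.168.1.10
add server pihole-2 192.168.1.11
add serviceGroup sg-dns DNS
bind serviceGroup sg-dns pihole-1 53
bind serviceGroup sg-dns pihole-2 53
add lb vserver vs-dns DNS 192.168.1.2 53
bind lb vserver vs-dns sg-dns
add lb monitor mon-dns DNS -query domain.local -queryType Address -IPAddress 192.168.1.5
bind serviceGroup sg-dns -monitorName mon-dns
save ns config
```

The final `save ns config` is the CLI counterpart to saving changes in the GUI so they survive a reboot.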

Pi-hole Container Setup

The upstream Pi-hole instances are configured with Docker Compose, deployed as containers on a Docker Swarm cluster, and managed via Portainer. I opted for Docker Swarm over a more complex tool like Kubernetes given the relatively low complexity of this project's requirements. I may follow up with migrating these containers to Kubernetes in the future. Creating a Docker Swarm and joining nodes to it is fairly straightforward, and Docker's own documentation is pretty great for those steps (link). Managing these Pi-hole containers via Docker Compose and deploying them to the cluster was more complex, since not a lot of reference documentation existed. The Docker Compose YAML used for this is shown below. A couple things to note about the Compose file:

This is running in replicated mode with the intent to be deployed to two specific nodes. This is handled via the settings under "deploy". Specifically, note that the target nodes require the label "pihole==true". This can be set via the command line from the Swarm leader:

```sh
docker node update --label-add pihole=true <node_id>
```

I'm directly publishing the container's ports to the corresponding ports on the host. This setup also uses named volumes for storage on the nodes, rather than bind mounts.

Most of the Pi-hole settings are configurable via the Compose file. However, not all of them are, particularly custom-defined allow/blocklist entries, Client Group Management, and others. For these settings, I recommend exporting/importing via the Teleporter backup feature under the settings page. These will be stored in the "pihole.etc" volume.

Example Compose File

```yaml
version: '3.8'
services:
  pihole:
    image: pihole/pihole:latest
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        delay: 30s
      placement:
        max_replicas_per_node: 1
        constraints: [node.labels.pihole==true]
      restart_policy:
        condition: on-failure
        max_attempts: 3
        delay: 30s
        window: 120s
    ports:
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
      - target: 80
        published: 80
        protocol: tcp
        mode: host
    environment:
      DHCP_ACTIVE: 'false'
      DNSMASQ_LISTENING: 'all'
      DNS_BOGUS_PRIV: 'true'
      DNS_FQDN_REQUIRED: 'true'
      PIHOLE_DNS_: '192.168.1.5;192.168.1.6;fe80::b1f2:c67d:5464:e10f;fe80::f576:da56:d322:4dc'
      REV_SERVER: 'true'
      REV_SERVER_CIDR: '192.168.0.0/16'
      REV_SERVER_TARGET: '192.168.1.5'
      REV_SERVER_DOMAIN: 'domain.lan'
      TZ: 'America/Chicago'
      WEBTHEME: 'default-dark'
    volumes:
      - pihole.etc:/etc/pihole/
      - pihole.dnsmasqd:/etc/dnsmasq.d/
    networks:
      - host
networks:
  host:
    external: true
volumes:
  pihole.etc:
  pihole.dnsmasqd:
```

Takeaways

Advantages:

  • A single IP to point to for DNS queries reduces network complexity.
  • When defining two DNS servers for clients, clients only fail over to the secondary if the first is unavailable. This setup instead allows DNS queries to be consistently balanced between both Pi-hole nodes, reducing system load.
  • The custom DNS monitor ensures my upstream Windows servers are answering domain queries with healthy responses.
  • Pi-hole containers defined via YAML increase the flexibility to deploy additional nodes if needed.

Disadvantages:

  • The primary disadvantage of this setup is that a single point of failure remains: the NetScaler node is the listening IP for DNS queries. I found this risk tolerable, though, since I don't use my NetScaler VPX for other purposes, and it's significantly more stable compared to the DNS servers themselves.

In the future, I'd like to further investigate maintaining synchronicity between the Pi-hole Docker nodes. I plan to do this with either the handy gravity-sync tool by vmstan, or by using shared storage for the volumes used by the Docker nodes.

Breaking out of the Lenovo Smart Clock

04/16/2023

Background

This spring I stumbled across an interesting thread over at the XDA Forums, where @willbilec found that the Lenovo Smart Clock 2 could be broken out of its custom Android interface and used to run any Android application.

He documented a method where usage of the "Talkback" accessibility feature could be used to access a basic Android browser, at which point one could download, install, and run any APK of your choice.

Getting Started

I picked up one of these 2nd-gen clocks on eBay after reading about the possibility for customization, but some folks had already mentioned that Talkback support was removed from the Google Home app, preventing the previously used method of breaking out of the Google Home software.

After many different attempts, I was able to find an alternative breakout method, detailed below, that follows the basic idea of the original.

Digging In

We start with following a couple menus within settings:

Settings menu -> Send Feedback -> (Say Something) -> Legal Help Page

This opens a locked-down browser that we still need to break out of, since it can't download anything.

My goal at this point was to find any link that used HTTP. This is because the locked-down browser refuses to navigate to a site with a certificate error.

I was able to navigate from this Legal site to Lumen, where one of their blog posts from 1/26/2023 used an HTTP link.

a browser screenshot of a lumen database blog post. A hyperlink is being inspected in the browser developer tools. The hyperlink is shown to be a HTTP link.

I set up an A record in my home DNS for this website, and pointed it at a self-hosted webserver hosting a single page with a plain-text link to F-Droid, a website with open source apps available for download as APKs.

a browser screenshot of a very basic web server. The only content is a plain-text link to http://fdroid.org

While it might seem unintuitive for this to be plain text, we need it that way to exploit another feature.

Pressing and holding to highlight the full URL presents an "Open" option in the context menu. Once clicked, nothing will appear to happen, but closing the locked-down browser window and quitting out of the settings menu reveals a full browser awaiting you. From here it's trivial to download packages from F-Droid and start customizing the smart clock.

Once you can install APKs, the first priority should be an on-screen keyboard, since the smart clock doesn't have one by default. I started by installing Unexpected Keyboard and Text Launcher to provide some basic interface options. At the recommendation of @j.smith I also installed Key Mapper, which allows remapping the "bump" input on the smart clock, normally used to snooze alarms.

Wrap Up

I'm sure there is a more reliable approach than this method, and I won't be surprised if Lenovo/Google is watching our thread over on XDA and patching these breakouts. There is also always the option of opening up the device and connecting over ADB, which ThomasPrior over on GitHub has a great guide for.

Configuring a Hub-Spoke VPN with WireGuard

05/13/2022

When I recently moved, I unfortunately found that my ISP used double-NAT for their customers. This meant that services I run on my home network that don't support IPv6, such as Plex or my file share, were unreachable externally. To address this, I determined that a hub-spoke VPN configuration would let me reach my home network while on the go. I chose WireGuard as the VPN protocol for a multitude of reasons: it is highly efficient compared to older protocols like OpenVPN or IPsec, it is natively included in the Linux kernel starting with version 5.6, and it is configurable as a typical network interface.

By utilizing a server that is publicly accessible, you can route bi-directional traffic from a client not on-prem, into your home network:

a network diagram showing the flow of traffic. An Amazon EC2 instance acts as a gateway and router for the VPN connection, distributing traffic to and from clients. These clients include a home router, a macbook being used remotely, and a smartphone being used remotely. Each is assigned a client IP address, with the EC2 gateway server addressed as 10.10.10.1.

DNS for clients is routed back to my home DNS server (Pi-hole), with my internal domain configured as the search domain. This lets clients resolve names on my home network, and gives them Pi-hole ad blocking on the go.

Setup

Installation and setup of WireGuard is fairly easy, and there are plenty of guides available for the details. Besides configuring the WireGuard interfaces with the configuration specific to this setup (see below), the only additional step was to configure the security group in the AWS web console to allow inbound traffic to the port noted in the server's WireGuard config:

  1. Navigate to your EC2 Management Console and select your server instance.
  2. Click the security group attached to the instance, found under the "Security" tab.
  3. Under "Inbound Rules", click "Edit inbound rules".
  4. Add a rule with the following:
  • Type: "Custom UDP"
  • Port range: enter the port from your server's WireGuard config
  • Source: Anywhere-IPv4

Takeaways

Configuring this did not come without challenges. Most notable was the undertaking of re-IPing my home network. This was not strictly necessary, but it helps avoid IP conflicts between whatever network I'm connected to on the go and my home network. This configuration also selectively routes only my private network ranges through the VPN: if I want to stream a TV show, that traffic doesn't take the inefficient round trip through the tunnel.
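The split-tunnel logic can be sketched with Python's ipaddress module (the two ranges come from the AllowedIPs in the configs later in this post):

```python
import ipaddress

# Only these ranges are routed through the tunnel (matching the
# WireGuard AllowedIPs); everything else uses the default route.
VPN_ROUTES = [
    ipaddress.ip_network("10.10.0.0/16"),      # VPN overlay network
    ipaddress.ip_network("192.168.111.0/24"),  # home LAN
]


def goes_through_vpn(dest: str) -> bool:
    """Return True if traffic to dest would be sent over the tunnel."""
    ip = ipaddress.ip_address(dest)
    return any(ip in net for net in VPN_ROUTES)
```

So a lookup against the home Pi-hole rides the tunnel, while a streaming service's public IP goes straight out the local connection.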

When I first implemented this setup, I was using a Raspberry Pi as the gateway into my home network. However, I soon realized this task was best suited for my router. I have a Ubiquiti router that runs their EdgeOS, which doesn't have native WireGuard support. I was able to find this fantastic WireGuard package that implements WireGuard on Vyatta, which EdgeOS is built on top of. Since Vyatta is a fork of Debian Linux, getting this configured was fairly straightforward; their GitHub wiki covers the steps in detail.

I route a 10.10.0.0/16 network through the VPN. Initially this was just a single /24 subnet, but I've found that having each client correspond to its own network interface on the AWS server allows for special configuration, such as mDNS reflection, without causing a reflection loop. This is next on my list to implement, since it will unlock access to even more services on my home network.

Configuration Details

# AWS Server Config
[Interface]
Address = 10.10.10.1/24
PrivateKey = <private key here>
ListenPort = 53131
PostUp = iptables -A FORWARD -i %i -j ACCEPT
PostDown = iptables -D FORWARD -i %i -j ACCEPT

# LAN gateway
[Peer]
PublicKey = <public key here>
AllowedIPs = 10.10.10.2/32, 192.168.111.0/24
PersistentKeepalive = 25

# Mobile client
[Peer]
PublicKey = <public key here>
AllowedIPs = 10.10.10.3/32

# Router (as gateway) Config
[Interface]
PrivateKey = <private key here>
Address = 10.10.10.2/24

[Peer]
PublicKey = <public key here>
AllowedIPs = 10.10.0.0/16
Endpoint = <public server domain name>:53131
PersistentKeepalive = 25

# Client Config
[Interface]
PrivateKey = <private key here>
Address = 10.10.10.3/24
DNS = 192.168.111.2, <local domain>

[Peer]
PublicKey = <public key here>
AllowedIPs = 10.10.0.0/16, 192.168.111.0/24
Endpoint = <vps server>:53131
PersistentKeepalive = 25

Crowdsourced Drink Prices: Web App using ReactJS

06/02/2020

My latest project came about while out the other night with some friends. We were at a crowded bar and I wanted to order a cocktail, but I was hesitant: some bars have ridiculously high prices for certain drinks, and bars rarely post their prices for each drink, especially cocktails. I came up with the idea of building an app to crowdsource drink prices for any bar, inspired by other crowdsourced apps like Waze and GasBuddy. The "Drink Up, Pay Less" App (DU-PL) was born.

Front End:

  • React app written with JS ES6, HTML5, and CSS
  • Makes asynchronous requests to back end server, receives JSON data back
  • Hosted on AWS Amplify

Back End Server (RESTful Web API):

  • Nginx server acts as a reverse proxy, forwarding traffic to a Node.js server
  • Node.js Server interfaces with MariaDB to retrieve data
  • MariaDB holds a database with a unique table for each bar
  • Hosted on Amazon EC2 (Elastic Compute Cloud)

Front End Detail:

I had a solid foundation in JavaScript going into this project, but had never worked with UI libraries like React or Angular. I chose React due to its rising popularity over recent years, especially for building PWAs (Progressive Web Apps). I decided to build a web app instead of a native Android or iOS app due to the fees required to join their app store developer programs. Progressive Web Apps are growing in support, allowing a webpage to be installed to a user's home screen, cache content for faster load times, and offer offline functionality for features that don't directly require internet access.

Back End Detail:

The internet-facing server, which receives HTTPS requests, runs Nginx and forwards requests to an Express (a Node.js web application framework) server. This is done so that Node is not run as a superuser, which would pose a security risk. I generated a CA-signed SSL certificate so that the front-end app, served over HTTPS, is able to send requests to the server. I used MariaDB (a fork of MySQL) for the relational database that stores the venue and drink data submitted by users. Tables are generated on the fly if they do not exist for a specific venue, so the app is easily expandable for a multi-regional user base. Initially I tried hosting the server myself, but ran into problems with my ISP blocking requests to ports 80 and 443, even though they claim not to. I moved it all to Amazon's EC2 service, which allows for scalable operations and adds some security to my home network by not hosting it myself.
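As a sketch of the on-the-fly table creation, here is the same pattern with Python and SQLite standing in for the real Node.js/MariaDB stack (function names, column names, and the table-naming scheme are all hypothetical):

```python
import re
import sqlite3


def venue_table(venue: str) -> str:
    # Derive a safe table name from the venue's name; stripping
    # non-word characters also guards against SQL injection, since
    # identifiers can't be bound as query parameters.
    return "venue_" + re.sub(r"\W+", "_", venue.strip().lower())


def add_price(conn, venue: str, drink: str, price: float):
    table = venue_table(venue)
    # Create the venue's table on first submission, so new venues
    # (and new regions) need no manual schema changes.
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} (drink TEXT, price REAL)")
    conn.execute(f"INSERT INTO {table} VALUES (?, ?)", (drink, price))


def prices(conn, venue: str, drink: str):
    table = venue_table(venue)
    rows = conn.execute(
        f"SELECT price FROM {table} WHERE drink = ?", (drink,)
    ).fetchall()
    return [r[0] for r in rows]
```

The real backend does the equivalent over the MariaDB connection from Express route handlers.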

Finished Project:

The app is currently live at: du-pl.com. Please check it out and feel free to leave feedback! I found the process very enjoyable and am happy I was able to sharpen my JavaScript development skills and gain valuable experience with popular libraries and tools like React, Node.js, Nginx, and MariaDB.

Future Plans:

Moving forward I plan to continue adding features to the site, like user accounts to favorite/bookmark venues, and the ability to search local venues for specific drinks. Ideally I would eventually port the app to React Native, which would let me compile it for iOS and Android. I would then publish it on the iOS App Store and Google Play Store, since users tend to prefer a native app to a web app: updates can be pushed automatically, and the app is easier for new users to discover.

Donating CPU Cycles to Fight COVID-19!

04/13/2020

Reading the title of this post might be a bit confusing, so let me start from the beginning. I was browsing YouTube this weekend when I saw that ETAPRIME, who does Raspberry Pi and other SBC computer projects, had made a video with a similar title. I watched his video and learned about a program called BOINC, the Berkeley Open Infrastructure for Network Computing. This program distributes a workload for solving complex problems across many computers, effectively acting as a supercomputer.

There are various projects run on the infrastructure, but one in particular, called Rosetta@home, is run by The Baker Lab at the University of Washington, which is dedicated to understanding the complex structure and function of proteins. They have turned their attention to the SARS-CoV-2 virus and recently received help getting their software running on ARM-based computers.

I spent the weekend setting up my Raspberry Pi 4 (4GB RAM) to run uninterrupted on the Rosetta@home project, since I wasn't using it for any personal projects. However, after checking the available workload page, I noticed there was a lot more work available for Intel-based CPUs, so I also set up my laptop to run BOINC and Rosetta@home overnight, as I wouldn't be using it at night anyway. I'm happy to be contributing in any way I can while following stay-at-home orders!

You can view my total contributions at the link below. In the next few days I will be setting up BOINC on my old laptop, as well as on another SBC I have, an ODROID XU4, so that I can contribute as much as possible!

Check out my stats here!

Sneaker Bot (Python and Selenium)

04/05/2020

Having been an avid sneaker collector since high school, I've always had an eye open to combining my technical knowledge with my love for sneakers. In the sneaker community it has become very commonplace to use "bots" to purchase the latest sneaker releases, since speed and repeated attempts are key to getting a pair. Back in high school I had written a simple JavaScript Chrome extension to automate a few button clicks, but now that my coding experience and knowledge have grown, I thought I would take a crack at making a fully encapsulated desktop program.

I chose Python as the primary language for this project since I hadn't built a GUI in Python before and thought that would be a good exercise. I used the popular automated web-testing framework Selenium: Selenium WebDriver gives you control of a browser window through code, including window positioning, button clicks, and more. It is a very straightforward package to install and use, with an active community and good resources. I built the GUI with PyQt, a Python binding of the popular cross-platform toolkit Qt. This gave me easy access to pre-built elements like buttons and text fields, which I could manipulate as objects in my code.

Current Features

  • The list of URLs, sizes selected, and proxy info can be saved to a custom ".hcp" format file allowing for import and export.
  • Preview images of the selected sneaker.
  • Notification if a specified size is out of stock.
  • Circumvention of basic automated access detection.
  • An API call to my webserver that checks a verification code before the program can be used.
  • Proxy support, as IP based bans are possible if a website is able to detect you are using a bot.
  • Implementation of multi-threading in order to allow multiple bot instances to run at once.
  • Threads can be terminated by the user via flags shared between functions.
  • Any carted item can be opened and checked out by the user by transferring cookies between browser sessions.
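The flag-based thread termination in the list above can be sketched with Python's threading.Event (an illustrative stand-in, not the bot's actual code):

```python
import threading
import time


def bot_task(stop_flag: threading.Event, attempts: list):
    # Keep retrying until the user sets the stop flag from the GUI.
    while not stop_flag.is_set():
        attempts.append("checked stock")  # stand-in for a Selenium page check
        stop_flag.wait(0.01)              # brief pause between attempts


stop = threading.Event()
log = []
t = threading.Thread(target=bot_task, args=(stop, log))
t.start()
time.sleep(0.05)
stop.set()   # e.g. the user clicks "Stop" in the GUI
t.join()
```

Each bot instance gets its own event, so multiple threads can run and be stopped independently.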

Wrap Up

The current version of the "HolyCopBot (hcp)" is available on my Github at this page, with future revisions planned!

Arduino Heater Controller

01/06/2020

With the cold rolling in over the past few weeks, I've been running the space heater in my bedroom, but its temperature control is horrible. It consistently overshoots and undershoots the set temperature, which I assume is due either to a cheap temperature sensor or to the sensor being located directly next to the heating element. I finally decided to do something about it.

The Plan

To use a microcontroller to monitor the room's temperature, then trigger temperature changes on the heater or shut it off using infrared signals spoofed from the original remote.

BOM

  • Arduino Mini (only used this specifically since I had a spare)
  • DHT11/22 Temperature Sensor
  • Infrared LED
  • Infrared Sensor (for capturing codes only)
  • 2N2222 NPN BJT
  • HC-05 Bluetooth Module
  • 2x 470k Ohm Resistors
  • Hookup wire to use as jumpers on the board
  • 5V Power Supply (minimum 0.5A)
  • PCB proto-board/breadboard

Hardware

First, I organized and then soldered everything onto the board. The NPN transistor lets the IR LED be driven from 5V rather than from an Arduino pin, giving a stronger signal. The BT module will be used for external commands from my Raspberry Pi over the serial port. I spaced the DHT sensor as far as possible from the Arduino and the BJT to prevent heat interference. With all that done, let's move on to the software!

A picture of the assembled hardware on a table. A breadboard acts as a mounting plate for the other components: a temperature sensor, infrared LED, Arduino microcontroller, and bluetooth module board. A power cord is connected and runs out of view.
An alternative view of the assembled project's hardware. The bluetooth module is more clearly shown.

Software

I knew I needed a few libraries before I began to code so I added them through the Arduino IDE library manager:

  • DHT.h: Interfacing with the DHT sensor (very straightforward to use with either DHT11/22)
  • IRremote.h: Very helpful IR library for spoofing IR commands
  • Adafruit_SleepyDog.h: Library for sleeping with the watchdog timer
  • SoftwareSerial.h: Using digital pins as serial tx/rx pins

The Adafruit SleepyDog library is very helpful for projects where low power is a concern: the power-down sleep mode on ATmega328P chips draws very little power and can be woken by the watchdog timer (compare the other sleep modes in the 328P's datasheet). Even though I planned to use a power supply for this project, I wanted the flexibility to run on a battery, and to lower power consumption even when plugged in all the time. Since the watchdog timer can only run for up to 8 seconds at a time, a loop was necessary to sleep for the planned 2-3 minutes.
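That sleep loop boils down to repeatedly taking the largest watchdog-sized chunk until the target duration is reached. Sketched in Python for illustration (on the Arduino each chunk is one `Watchdog.sleep()` call; the function name here is hypothetical):

```python
MAX_WATCHDOG_SLEEP_S = 8  # longest single watchdog-timed sleep on the ATmega328P


def sleep_chunks(total_seconds: int) -> list:
    """Split a long sleep into watchdog-sized pieces.

    This just illustrates the arithmetic: a 150 s (2.5 min) sleep
    becomes 18 full 8 s chunks plus one final 6 s chunk.
    """
    chunks = []
    remaining = total_seconds
    while remaining > 0:
        step = min(MAX_WATCHDOG_SLEEP_S, remaining)
        chunks.append(step)
        remaining -= step
    return chunks
```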

The IRremote.h library was the backbone of this project, as my space heater is paired with a simple IR remote. I was able to spoof the remote's commands by using an IR sensor and reading the raw bytes, as well as the hex data of each button's signal. Luckily, the remote uses the NEC IR protocol, which is included in the library, so I can send the hex data instead of raw timings with bursts and spaces.
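For context on why the hex data suffices: an NEC payload is 32 bits, with each byte followed by its bitwise inverse for error checking, and the library generates the modulated bursts and spaces from it. A small Python illustration (the byte layout shown is one common convention, not necessarily my heater's):

```python
def nec_frame(address: int, command: int) -> int:
    """Pack address, ~address, command, ~command into a 32-bit NEC payload.

    The inverted copies let the receiver validate the frame before
    acting on it.
    """
    inv_addr = address ^ 0xFF
    inv_cmd = command ^ 0xFF
    return (address << 24) | (inv_addr << 16) | (command << 8) | inv_cmd
```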

I used the SoftwareSerial library to add an HC-05 Bluetooth module to the controller. The serial port is read on every wake cycle, looking for a string sent by my Raspberry Pi, which is parsed character by character to execute specific commands. I did this so I can use a Python script on the Raspberry Pi to interface with the Arduino; the HC-05 stays paired to the Pi and can be re-paired by running a Python script if it disconnects. The commands the Pi sends include power on/off, changing the set temperature, and setting a timer. The temperature is automatically raised during the day and lowered at night by two Python scripts in crontab, executing at 8:00AM and 11:30PM respectively.
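The day/night schedule is just two crontab entries on the Pi (the paths and script names here are hypothetical):

```conf
# Raise the set temperature in the morning, lower it at night.
0 8 * * *    /usr/bin/python3 /home/pi/heater/day_temp.py
30 23 * * *  /usr/bin/python3 /home/pi/heater/night_temp.py
```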

Wrap Up

After putting this controller into use for a few days and debugging, I realized the DHT11 temperature sensor's error range of +/-2.0 degrees C is too large for room temperature control. After replacing it with a DHT22 (same pinout, minor code changes), the controller works as expected and my bedroom's temperature is much more consistent!

The Arduino code as well as the python scripts can be found on my Github!