[{"content":"At the time of writing, attempting to run Claude Code on Termux works; however, any background tasks involved will return an error similar to this:\nEACCES: permission denied, mkdir '\/tmp\/claude\/-data-data-com-termux-files-home\/tasks' The issue here is that Claude assumes that the user has permission to create files within \/tmp, which is not true on Termux.\nWhile this should be fixed properly, you can use PRoot in the meantime as a workaround.\nproot -b \/data\/data\/com.termux\/files\/usr\/tmp:\/tmp claude You\u2019ll probably want to turn this into a shell alias\/function for convenience.\n","slug":"claude-code-termux-tmp-tasks","tags":null,"title":"How can I get Claude Code to execute background tasks on Termux?"},{"content":"A little while ago, I was asked about some EC2 hosts running CrowdStrike, particularly which versions they were running.\nWhile CrowdStrike was running as a systemd daemon, it wasn\u2019t immediately clear how to poke at it to get at any configuration info.\nIt turns out that CrowdStrike\u2019s daemon shipped with a CLI tool available at \/opt\/CrowdStrike\/falconctl.\nYou can use the -g flag to \u201cGET\u201d options followed by whichever flag might be useful.\n-h is your friend here.\nFor getting the version, I was able to do that like so:\n$ \/opt\/CrowdStrike\/falconctl -g --version ","slug":"crowdstrike-cli","tags":["security"],"title":"How can I view CrowdStrike configuration on a given host?"},{"content":"When trying to debug things in the browser, it\u2019s common to pop open your browser\u2019s dev tools or reach for an OS level proxy such as Proxyman.\nOne alternative for Chromium browsers that surfaces useful information is chrome:\/\/net-export. 
That URL should resolve both in Google Chrome as well as Chromium-based browsers such as Brave, Edge and so on.\nYou\u2019ll be presented with a pretty plain looking page.\nYou can basically pick whether you want to strip private info and what the maximum log size should be.\nClicking \u201cStart Logging to Disk\u201d will start spooling data to a JSON file until you stop doing so.\nSo, once you\u2019ve got a NetLog dump, now what?\nYou can use the browser-based Netlog Viewer which will render the contents within your browser rather than sending them off to a third party.\nThis can be useful if you want to share your net dump with a coworker who may be helping to debug a technical issue for example.\nThere are a lot of useful insights in here such as a full timeline of traffic, including disk caching, which wouldn\u2019t be captured by a standard man in the middle proxy.\nThere are also a lot of other standard things like DNS lookups which can be valuable for debugging.\nYou can even see what extensions were loaded at the time, which may help to figure out if any of them may have been modifying traffic in a way that causes problems.\n","slug":"chrome-netlog-dump","tags":["browsers","chrome","networking","mitm"],"title":"Capturing web traffic with Chrome's NetLog dumper"},{"content":"When creating a Kafka message, it\u2019s possible to set a key in order to ensure that all messages with that same key end up on the same partition.\nPreviously, the default strategy (aptly named DefaultPartitioner) was to hash the message key using murmur2.\nYou could use this calculator combined with the default Kafka seed (9747b28c) to figure out the murmur hash and then modulo by the number of partitions you had.\nFor example:\n9747b28c (seed) + 12413413 (input) -> 3242098085 (hash) 3242098085 (hash) % 16 (partitions) = 5 This would mean that a message with a key of 12413413 sent to a topic with 16 partitions would be assigned to Partition 
5\n","slug":"kafka-default-partitioner","tags":["kafka","events","messaging"],"title":"Kafka Default Partitioner"},{"content":"Nowadays, the default partitioner is apparently UniformStickyPartitioner although at the time of writing IBM\/sarama defaults to HashPartitioner and segmentio\/kafka-go has round-robin as its default.\nWhen it comes to the hash strategy, I recently found myself wondering how to determine the assigned partition for a program using kafka-go\u2019s hash partitioner.\nThis led to me slicing up the default hasher implementation into a small Go playground program.\nNOTE: While I\u2019ve manually run the test cases through this small program and they all passed, I haven\u2019t exercised it in any depth. It\u2019s just a reference for myself in the future. You should probably use the actual library directly.\npackage main import ( \"fmt\" \"hash\" \"hash\/fnv\" ) func main() { partitions := generatePartitions(3) key := \"blah\" hasher := fnv.New32a().(hash.Hash32) hasher.Reset() if _, err := hasher.Write([]byte(key)); err != nil { panic(err) } partition := int32(hasher.Sum32()) % int32(len(partitions)) if partition < 0 { partition = -partition } fmt.Println(partition) } func generatePartitions(partitionCount int) []int { partitions := []int{} for i := 0; i <= partitionCount-1; i++ { partitions = append(partitions, i) } return partitions } ","slug":"kafka-golang-hash-strategy","tags":["kafka","events","messaging","kafka-go","sarama","golang"],"title":"Kafka Golang Hash Strategy"},{"content":"Recently, I was working on an instance of Backstage and when reinstalling node_modules from scratch, I ran into this interesting error. I\u2019ve removed most of the output for clarity.\n$ yarn install \u27a4 YN0000: \u00b7 Yarn 4.3.1 [...] 
\u27a4 YN0000: \u250c Link step \u27a4 YN0007: \u2502 isolated-vm@npm:4.5.0 must be built because it never has been before or the last one failed \u27a4 YN0009: \u2502 isolated-vm@npm:4.5.0 couldn't be built successfully (exit code 1, logs can be found here: \/private\/var\/folders\/_g\/k61b_sdn0q14vn94nkln53_h0000gq\/T\/xfs-a558ccfc\/build.log) \u27a4 YN0000: \u2514 Completed in 2s 414ms \u27a4 YN0000: \u00b7 Failed with errors in 3s 254ms From here, checking the mentioned logs gave me this:\n$ cat \/private\/var\/folders\/_g\/k61b_sdn0q14vn94nkln53_h0000gq\/T\/xfs-a558ccfc\/build.log # This file contains the result of Yarn building a package (isolated-vm@npm:4.5.0) # Script name: install gyp info it worked if it ends with ok gyp info using node-gyp@9.3.1 gyp info using node@20.12.2 | darwin | arm64 gyp info find Python using Python version 3.12.7 found at \"\/Users\/marcus\/.local\/share\/mise\/installs\/python\/3.12.7\/bin\/python3\" gyp info spawn \/Users\/marcus\/.local\/share\/mise\/installs\/python\/3.12.7\/bin\/python3 gyp info spawn args [...] Traceback (most recent call last): File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/gyp_main.py\", line 42, in <module> import gyp # noqa: E402 ^^^^^^^^^^ File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/pylib\/gyp\/__init__.py\", line 9, in <module> import gyp.input File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/pylib\/gyp\/input.py\", line 19, in <module> from distutils.version import StrictVersion ModuleNotFoundError: No module named 'distutils' gyp ERR! configure error gyp ERR! stack Error: `gyp` failed with exit code: 1 gyp ERR! stack at ChildProcess.onCpExit (\/Users\/marcus\/blah\/node_modules\/node-gyp\/lib\/configure.js:325:16) gyp ERR! stack at ChildProcess.emit (node:events:518:28) gyp ERR! stack at ChildProcess._handle.onexit (node:internal\/child_process:294:12) gyp ERR! System Darwin 24.1.0 gyp ERR! 
command \"\/Users\/marcus\/.local\/share\/mise\/installs\/node\/20.12.2\/bin\/node\" \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/bin\/node-gyp.js\" \"rebuild\" \"--release\" \"-j\" \"4\" gyp ERR! cwd \/Users\/marcus\/blah\/node_modules\/isolated-vm gyp ERR! node -v v20.12.2 gyp ERR! node-gyp -v v9.3.1 gyp ERR! not ok For clarity, here\u2019s the actually important part pulled out:\nTraceback (most recent call last): File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/gyp_main.py\", line 42, in <module> import gyp # noqa: E402 ^^^^^^^^^^ File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/pylib\/gyp\/__init__.py\", line 9, in <module> import gyp.input File \"\/Users\/marcus\/blah\/node_modules\/node-gyp\/gyp\/pylib\/gyp\/input.py\", line 19, in <module> from distutils.version import StrictVersion ModuleNotFoundError: No module named 'distutils' gyp ERR! configure error gyp ERR! stack Error: `gyp` failed with exit code: 1 What we can see here is that some part of building node-gyp attempted to import the Python module distutils, which we don\u2019t have installed.\nAt this point, I realised I had upgraded a bunch of language runtimes recently and sure enough, distutils is no longer shipped with Python by default.\nThis was done via PEP 632 and announced at the very top of the Python 3.12 release notes that I haven\u2019t read.\nAnyway, for myself on macOS, I did the following to get distutils all set up:\n$ brew install python-setuptools $ pip install setuptools Both steps are required I believe but I haven\u2019t tested to see what happens if you only have one or the other. 
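As an aside, if you want to confirm whether distutils resolves (and where from) without kicking off another full yarn install, a quick probe along these lines can help; this is just a diagnostic sketch, not part of the fix itself:

```python
import importlib.util

# Probe for distutils: on Python >= 3.12 it only resolves when
# setuptools is installed and provides its compatibility shim.
spec = importlib.util.find_spec("distutils")
if spec is None:
    print("distutils is missing - install setuptools")
else:
    print(f"distutils resolves from: {spec.origin}")
```

On an affected machine this prints the "missing" message; once setuptools is in place, it should point at the shim inside your site-packages.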
I do know that running pip install fixed it but some claim that brew install was the missing half for them.\nAnyway, you should be back on your way to javascripting in no time.\n","slug":"nodegyp-python-distutils-missing","tags":["node","python","node-gyp","distutils","dependencies","breaking-changes"],"title":"Why is node-gyp complaining about missing distutils under Python 3.12?"},{"content":"Skip integrity check for yay package manager ==> Making package: yacreader-bin 9.14.2-4 (Sat 02 Nov 2024 13:43:07) ==> Checking runtime dependencies... ==> Checking buildtime dependencies... ==> Retrieving sources... -> Found yacreader_9.14.2-1_amd64.deb ==> Validating source_x86_64 files with sha256sums... yacreader_9.14.2-1_amd64.deb ... FAILED ==> ERROR: One or more files did not pass the validity check! -> error making: yacreader-bin-exit status 1 -> Failed to install the following packages. Manual intervention is required: yacreader-bin - exit status 1 yay -S --mflags --skipinteg <package name> ","slug":"arch-linux","tags":["linux","arch"],"title":"Arch Linux"},{"content":"Querying for JSON embedded in fields You can use JSON_EXTRACT_SCALAR to extract content out of a text field like so:\n-- Extract regular nested JSON SELECT JSON_EXTRACT_SCALAR(message, '$.blah.bleh') AS cool_field FROM bluh -- Extract JSON where fields are dot notated keys -- Supposedly you can do '$.\"blah.bleh\"' but this never worked for me SELECT JSON_EXTRACT_SCALAR(message, '$[\"blah.bleh\"]') AS cool_field FROM bluh -- Filter for only JSON deserialisable messages SELECT message FROM bluh WHERE JSON_EXTRACT(message, '$') IS NOT NULL -- Annotate the type of a message as a column SELECT message, CASE WHEN JSON_EXTRACT(message, '$') IS NOT NULL THEN 'JSON' ELSE 'PLAIN_TEXT' END as message_type FROM bluh ","slug":"athena-cheat-sheet","tags":["athena","sql","amazon","aws","cheatsheet","reference"],"title":"Athena Cheat Sheet"},{"content":"Apache Lucene is a search library used by the popular Kibana 
and OpenSearch Dashboards projects.\nWhile both projects have their own DSLs for searching1, they also support Lucene as a fallback.\nThere are often useful dashboard queries that can only be performed by dropping down to Lucene.\nChecking if a field doesn\u2019t exist SYNTAX: !_exists_:<field> EXAMPLE: !_exists_:http.status NOTES: Only matches if a key doesn't exist. Will not find fields that are empty or set to nil values. Checking if a field does exist SYNTAX: _exists_:<field> EXAMPLE: _exists_:http.header.user_agent NOTES: Matches as long as the key is present. Empty or nil values will still match. Finding text \u201csimilar\u201d to existing fields (ie; Levenshtein distance) SYNTAX: <query>~ EXAMPLE: awetome~ NOTES: This example would find documents containing \"awesome\" and other variations. Kibana Query Language (KQL) for Kibana and Dashboards Query Language (DQL) for OpenSearch Dashboards\u00a0\u21a9\ufe0e\n","slug":"lucene-cheat-sheet","tags":["cheatsheet","kibana","lucene","opensearch","reference"],"title":"Lucene Cheat Sheet"},{"content":"Recently, I was querying for some ALB load balancer logs. 
It\u2019s not something that happens too often but I was surprised to find that the query rows were empty except for the date column.\nI quickly figured out that only logs from May 30th, 2024 and onwards were affected and after a quick comparison, realised that the log formats had changed.\nSure enough, as noted in a banner on this page, the ALB access log format changed, adding classification, classification_reason and conn_trace_id fields.\nUnfortunately for anyone with AWS Athena tables, that means your pre-existing tables will likely have an out-of-date regex if you\u2019re using org.apache.hadoop.hive.serde2.RegexSerDe as your row format.\nI don\u2019t believe there is any way to update this and it requires recreating each table given that Athena doesn\u2019t support altering SERDEPROPERTIES.\nHonestly, I found this whole thing kind of disappointing and I\u2019m not sure what prevents it from happening again in future but as mentioned, I don\u2019t personally use Athena often enough to feel the pain.\n","slug":"athena-empty-logs","tags":["amazon","athena","aws","cloud","loadbalancers","sql"],"title":"Athena Empty Logs"},{"content":"I always forget the various flags for tcpdump but also how to actually get captures off of machines in the rare case that it\u2019s useful for low level debugging.\n$ tcpdump -i <interface> -s 65535 -w <file> If you\u2019re capturing packets inside of a Docker container, you can export the file like so:\n$ docker cp <container_id>:<container_path> ~\/Desktop\/file.pcap You can also capture specific ports:\n$ tcpdump -i <interface> port 8126 -s 65535 -w <file> As for getting files off the host, you\u2019ll want to do this:\nscp <host>:<path>\/file.pcap ~\/Desktop\/file.pcap ","slug":"tcpdump-exfiltration","tags":["wireshark","networking"],"title":"How can I capture network packets with tcpdump?"},{"content":"When wanting to create an S3 bucket, you don\u2019t always want to go to the effort of actually attempting to make one.\nThankfully 
it\u2019s possible to see if a name is taken without any special auth required.\n$ curl -si https:\/\/blah.s3.amazonaws.com | grep bucket-region x-amz-bucket-region: us-east-1 $ curl -si https:\/\/fesuihfseiu.s3.amazonaws.com | grep bucket-region \/\/ no response so it's available Here\u2019s a full response to illustrate:\n$ curl -si https:\/\/blah.s3.amazonaws.com HTTP\/1.1 403 Forbidden x-amz-bucket-region: us-east-1 x-amz-request-id: QHNTTS080AYYG8H6 x-amz-id-2: 5HgWDKk7cDQ41J9zy6kKIdbMA57rtB4NaK\/9zzceuNaHa2SSGMJFjHdLlba2j1TFsp35GLBPcvU= Content-Type: application\/xml Transfer-Encoding: chunked Date: Mon, 01 Jan 2024 05:19:00 GMT Server: AmazonS3 <?xml version=\"1.0\" encoding=\"UTF-8\"?> <Error><Code>AccessDenied<\/Code><Message>Access Denied<\/Message><RequestId>QHNTTS080AYYG8H6<\/RequestId><HostId>5HgWDKk7cDQ41J9zy6kKIdbMA57rtB4NaK\/9zzceuNaHa2SSGMJFjHdLlba2j1TFsp35GLBPcvU=<\/HostId><\/Error> ","slug":"s3-bucket-naming","tags":["aws","s3"],"title":"How can I check if an S3 bucket name is available?"},{"content":"As discussed in High Performance Browser Networking, TCP Slow-Start is an important element of the TCP protocol as it ensures that both client and server start out with an acceptable throughput, so as to not overwhelm one another.\nVery quickly, they both increment their windows until they\u2019re at the allowed maximum.\nFor server to server communication however, this can cause a performance impact as long-lived TCP connections that idle are generally subject to this window.\nUnlike the client and server case however, it doesn\u2019t necessarily provide much value, as the network conditions between two servers are generally stable, with bandwidth rarely varying.\nIf you are running a Linux system, you can check whether Slow-Start after idle is active like so:\n$ sysctl net.ipv4.tcp_slow_start_after_idle net.ipv4.tcp_slow_start_after_idle = 1 In the event that it is set, you can disable it:\n$ sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0 
net.ipv4.tcp_slow_start_after_idle = 0 ","slug":"tcp-slowstart","tags":["software","networking","tcp","unix"],"title":"How can I check whether a server may be unnecessarily slow-starting?"},{"content":"As discussed in High Performance Browser Networking, TCP Flow Control is a pretty important mechanism for ensuring that clients and servers don\u2019t get overloaded.\nFlow control is a mechanism to prevent the sender from overwhelming the receiver with data it may not be able to process\u2014the receiver may be busy, under heavy load, or may only be willing to allocate a fixed amount of buffer space. To address this, each side of the TCP connection advertises (Figure 2-2) its own receive window (rwnd), which communicates the size of the available buffer space to hold the incoming data.\nWhile TCP window scaling should be enabled by default on almost every platform these days, you can check that it is enabled on a Linux system like so:\n$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_window_scaling = 1 In the event that it isn\u2019t set, you can fix it like so:\n$ sudo sysctl -w net.ipv4.tcp_window_scaling=1 net.ipv4.tcp_window_scaling = 1 ","slug":"tcp-window-scaling","tags":["software","networking","tcp","unix"],"title":"How can I check whether TCP window scaling is enabled?"},{"content":"When trying to cross-compile applications for different operating systems and architectures, you can often hit some niche configurations that are difficult to accommodate.\nThankfully Zig can help to alleviate this pain point as it bundles standard libraries for all major platforms ie; GNU libc, musl libc and so on.\nAssuming you have Zig already installed, you can cross compile a Go application like so:\n$ CGO_ENABLED=1 GOOS=linux GOARCH=amd64 CC=\"zig cc -target x86_64-linux\" CXX=\"zig c++ -target x86_64-linux\" go build --tags extended \ud83c\udfcf There are some custom names at play here You should note that where Go uses the term amd64, Zig uses x86_64.\nWhile the 
list of supported targets is very long, here are some of the most common combinations:\nx86_64-linux-gnu x86_64-windows-gnu x86_64-macos-none aarch64-macos-none As you can tell, the format followed is <arch>-<os>-<abi>\n","slug":"zig-golang-crosscompile","tags":["golang","compilation","zig","softwaredevelopment"],"title":"How can I cross-compile Golang software using Zig?"},{"content":"Source: https:\/\/everything.curl.dev\/usingcurl\/tls\/sslkeylogfile\ncurl supports an environment variable called SSLKEYLOGFILE out of the box.\nSetting it will write the TLS session keys used to negotiate sessions to that path, which you can then load into Wireshark to inspect secure sessions.\nMost browsers support it as well.\nWith that key log in hand, you can provide it to Wireshark like so:\nUnder Preferences, head to Protocols -> TLS and set the (Pre)-Master-Secret log filename to the path of your key log file.\nWireshark will then use it to decrypt matching TLS traffic.\n","slug":"decrypting-wireshark-ssl-traffic","tags":["wireshark","networking","reverseengineering"],"title":"How can I decrypt SSL traffic with Wireshark?"},{"content":"Source: Originally mentioned by strogonoff on Hacker News\nYou can use sips together with iconutil to generate a complete .icns file for your app from a single 1024 by 1024 PNG without any third party software:\nmkdir MyIcon.iconset cp Icon1024.png MyIcon.iconset\/icon_512x512@2x.png sips -z 16 16 Icon1024.png --out MyIcon.iconset\/icon_16x16.png sips -z 32 32 Icon1024.png --out MyIcon.iconset\/icon_16x16@2x.png sips -z 32 32 Icon1024.png --out MyIcon.iconset\/icon_32x32.png sips -z 64 64 Icon1024.png --out MyIcon.iconset\/icon_32x32@2x.png sips -z 128 128 Icon1024.png --out MyIcon.iconset\/icon_128x128.png sips -z 256 256 Icon1024.png --out MyIcon.iconset\/icon_128x128@2x.png sips -z 256 256 Icon1024.png --out MyIcon.iconset\/icon_256x256.png sips -z 512 512 Icon1024.png --out MyIcon.iconset\/icon_256x256@2x.png sips -z 512 512 Icon1024.png --out 
MyIcon.iconset\/icon_512x512.png iconutil -c icns MyIcon.iconset As a bonus, generate .ico with ffmpeg:\nffmpeg -i MyIcon.iconset\/icon_256x256.png icon.ico ","slug":"macos-generating-iconsets","tags":["macos","design","utilities","macos","resources","design"],"title":"How can I easily bulk generate icons on macOS?"},{"content":"I used this sometimes for October whenever new Kobo devices come out\nIdentifiers can be found by downloading the relevant firmware update and then opening KoboRoot\/usr\/local\/Kobo\/libnickel.so.1.0.0 in a hex editor\nThe strings are about 8\/10th of the way down but you can jump there by searching 000000003\nYou should find some strings like so:\n00000000-0000-0000-0000-000000000310\ufffd\ufffd\ufffd\ufffdKobo Touch\ufffd\ufffd00000000-0000-0000-0000-000000000330\ufffd\ufffd\ufffd\ufffdKobo Glo\ufffd\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000340\ufffd\ufffd\ufffd\ufffdKobo Mini\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000350\ufffd\ufffd\ufffd\ufffdKobo Aura HD\ufffd\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000360\ufffd\ufffd\ufffd\ufffdKobo Aura\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000370\ufffd\ufffd\ufffd\ufffdKobo Aura H2O\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000371\ufffd\ufffd\ufffd\ufffdKobo Glo HD\ufffd00000000-0000-0000-0000-000000000372\ufffd\ufffd\ufffd\ufffdKobo Touch 2.0\ufffd\ufffd00000000-0000-0000-0000-000000000373\ufffd\ufffd\ufffd\ufffdKobo Aura ONE\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000374\ufffd\ufffd\ufffd\ufffdKobo Aura H2O Edition 2\ufffd00000000-0000-0000-0000-000000000375\ufffd\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000376\ufffd\ufffd\ufffd\ufffdKobo Clara HD\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000377\ufffd\ufffd\ufffd\ufffdKobo Forma\ufffd\ufffd00000000-0000-0000-0000-000000000382\ufffd\ufffd\ufffd\ufffdKobo Nia\ufffd\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000383\ufffd\ufffd\ufffd\ufffdKobo 
Sage\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000384\ufffd\ufffd\ufffd\ufffdKobo Libra H2O\ufffd\ufffd00000000-0000-0000-0000-000000000386\ufffd\ufffd\ufffd\ufffdKobo Clara 2E\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000387\ufffd\ufffd\ufffd\ufffdKobo Elipsa\ufffd00000000-0000-0000-0000-000000000388\ufffd\ufffd\ufffd\ufffdKobo Libra 2\ufffd\ufffd\ufffd\ufffd00000000-0000-0000-0000-000000000389\ufffd\ufffd\ufffd\ufffdKobo Elipsa 2E Depending on how new the device you\u2019re after, you may need to download a newer update\n","slug":"kobo-version-numbers","tags":["kobo","hex"],"title":"How can I extract a list of Kobo versions from a device?"},{"content":"From time to time, it can be useful to know who signed a given macOS application.\nYou can do that like so:\n$ cd \/tmp\/codesign $ codesign --display --extract-certificates $(which curl) \/\/ This will create some files in the current directory $ ls codesign0 codesign1 codesign2 $ qlmanage -c public.x509-certificate -p codesign* Running qlmanage will pop open a Finder preview window with the metadata for the attached signature\n","slug":"macos-extracting-signatures","tags":["macos","codesigning","certificates","x509"],"title":"How can I extract macOS signatures from binaries?"},{"content":"Source: https:\/\/alinpanaitiu.com\/blog\/turn-off-macbook-display-clamshell\/\nSince macOS Big Sur shipped, Apple have started shipping system libraries as one big cache blob.\nWe can use dyld-shared-cache-extractor to pull out these libraries for fun and reverse engineering.\nThis will require a full version of XCode to be installed however.\nOn macOS Ventura and above, you can run the following to extract shared libraries:\n$ dyld-shared-cache-extractor \/System\/Volumes\/Preboot\/Cryptexes\/OS\/System\/Library\/dyld\/dyld_shared_cache_arm64e \/tmp\/libraries We can also extract lists of symbols from System frameworks like so:\n$ mkdir symbols private-symbols $ fd --maxdepth 1 -t f \\ . 
.\/System\/Library\/*Frameworks\/*.framework\/Versions\/A\/ \\ -x sh -c 'nm --demangle --defined-only --extern-only {} > symbols\/{\/}' $ fd --maxdepth 1 -t f \\ . .\/System\/Library\/*Frameworks\/*.framework\/Versions\/A\/ \\ -x sh -c 'nm --demangle --defined-only {} > private-symbols\/{\/}' ","slug":"macos-extracting-shared-dyld-cache","tags":["macos","libraries","reverseengineering"],"title":"How can I extract stuff from the macOS shared dyld cache?"},{"content":"Here is how to generate a self-signed client CA for an extremely long period of time. You probably shouldn\u2019t do this for anything important though!\n$ openssl genpkey -algorithm RSA -out ca-key.pem $ openssl req -new -x509 -key ca-key.pem -out ca-cert.pem -days 9999 Next, let\u2019s generate a server key and certificate\n$ openssl genpkey -algorithm RSA -out server-key.pem $ openssl req -new -key server-key.pem -out server-csr.pem $ openssl x509 -req -in server-csr.pem -CA ca-cert.pem -CAkey ca-key.pem -out server-cert.pem -CAcreateserial -days 9999 Finally, we\u2019ll do the same for the client\n$ openssl genpkey -algorithm RSA -out client-key.pem $ openssl req -new -key client-key.pem -out client-csr.pem $ openssl x509 -req -in client-csr.pem -CA ca-cert.pem -CAkey ca-key.pem -out client-cert.pem -CAcreateserial -days 9999 You can use the following command to test a request to the server using the client certificate\n$ openssl s_client -connect <host>:<port> -CAfile ca-cert.pem -cert client-cert.pem -key client-key.pem ","slug":"generating-selfsigned-ca","tags":["security","certificates","openssl"],"title":"How can I generate a self-signed CA certificate chain?"},{"content":"From time to time, it can be useful to explore what your applications are doing.\nSometimes, you may find horrifying things such as with How Setting the TZ Environment Variable Avoids Thousands of System Calls.\nstrace strace is your main friend here when it comes to seeing what processes are up to.\nThe most common command will be using strace 
to connect to a specific process.\n$ strace -p <pid> strace: Process 2253233 attached epoll_pwait(3, [], 128, 0, NULL, 1825221) = 0 epoll_pwait(3, [], 128, 1, NULL, 1825221026013077) = 0 epoll_pwait(3, [], 128, 0, NULL, 1825221) = 0 epoll_pwait(3, [], 128, 1, NULL, 1825221026013077) = 0 futex(0xc000060948, FUTEX_WAKE_PRIVATE, 1) = 1 futex(0xc000060548, FUTEX_WAKE_PRIVATE, 1) = 1 write(7, \"datadog.dogstatsd.client.aggrega\"..., 385) = 385 You\u2019ll see a bunch of system calls dumped out which can often be quite hard to parse.\nlsof Another approach is using lsof in order to see what files and sockets a process has open\n$ lsof -p <pid> COMMAND PID USER FD TYPE DEVICE SIZE\/OFF NODE NAME features 2253233 root cwd DIR 0,89 4096 14194727 \/opt\/features features 2253233 root rtd DIR 0,89 4096 14194750 \/ features 2253233 root txt REG 0,89 25723368 14194681 \/opt\/features\/features features 2253233 root 0u CHR 1,3 0t0 5 \/dev\/null features 2253233 root 1w FIFO 0,13 0t0 207407931 pipe features 2253233 root 2w FIFO 0,13 0t0 207407932 pipe features 2253233 root 3u a_inode 0,14 0 13467 [eventpoll] This will give you an idea of open files and then you can track down which ones might stick out\nfs_usage On macOS, this is a pretty strong tool to leverage.\nYou\u2019ll generally want to filter what is otherwise a pretty noisy stream of events.\n$ sudo fs_usage -f filesys 98642 21:20:06.930527 stat64 s\/Mainframe\/Investigating system calls.md 0.000063 Obsidian.2653255 21:20:06.930685 open F=58 (R_____N___V_) frame\/Investigating system calls.md 0.000050 Obsidian.2719054 21:20:06.930692 fgetattrlist F=58 0.000006 Obsidian.2719054 21:20:06.930696 getattrlist s\/Mainframe\/Investigating system calls.md 0.000059 Obsidian.2653255 21:20:06.930703 close F=58 0.000011 Obsidian.2719054 21:20:06.930822 getattrlist s\/Mainframe\/Investigating system calls.md 0.000023 Obsidian.2653255 21:20:06.934896 stat64 \/Contents\/PlugIns\/PinShareExtension.appex 0.000014 Obsidian.2653255 Here you can 
see calls made by Obsidian (PID 98642) filtered by file system access.\n","slug":"investigating-system-calls","tags":["software","debugging","investigation"],"title":"How can I go about investigating system calls?"},{"content":"Oftentimes, it can be useful to inspect the state of socket connections made by applications.\nYou can use the tool ss to inspect and even kill connections.\n$ ss # See a list of all sockets $ ss -t -p # See all TCP sockets with process information $ ss -K -t '( dport = :6514 )' # Kill all TCP connections open to destination port 6514 ","slug":"inspecting-socket-connections","tags":["software","networking","unix","sockets"],"title":"How can I inspect socket connection states?"},{"content":"It\u2019s pretty rare but sometimes it can be useful to capture statsd metrics at the source.\nHere\u2019s an example that uses tshark to capture some dogstatsd metrics on the fly.\n$ tshark -f \"udp port 8125\" -i any -T fields -e data | xxd -p -r Running as user \"root\" and group \"root\". This could be dangerous. 
Capturing on 'any' 2 datadog.trace_agent.receiver.rate_response_bytes:82|h|#version:7.44.1,lang:cpp,lang_version:201402,tracer_version:v1.3.6,endpoint_version:v0.4,endpoint:traces_v0.4 datadog.trace_agent.receiver.serve_traces_ms:0.084921|h|#version:7.44.1,lang:cpp,lang_version:201402,tracer_version:v1.3.6,endpoint_version:v0.4,success:true datadog.trace_agent.stats_writer.flush_duration.avg:66|g|#version:7.44.1 datadog.trace_agent.internal.process_payload_ms.avg:0.21428571428571427|g|#version:7.44.1 datadog.trace_agent.trace_writer.compress_ms.max:7|g|#version:7.44.1 ","slug":"tshark-statsd-metrics","tags":["tshark","wireshark","datadog"],"title":"How can I live tail statsd metrics?"},{"content":"A very niche one, but I once had a use case for flattening a JSON object such that it went from this\n{ \"source\": { \"service\": \"blah\" } } into this\n{ \"source.service\": \"blah\" } In this case, the latter was easy to parse dynamically because it means the file would always have a depth of 1 instead of a possibly infinite depth to recurse through.\nAnyway, this was thankfully easier than I expected using a Python library called FlatDict\nHere\u2019s an example of how to use it:\nimport flatdict d = flatdict.FlatDict({\"source\": {\"service\": \"blah\"}}, delimiter='.') How easy was that?\nI haven\u2019t checked if the FlatDict type is serialisable as JSON but if not, you can just convert back to a dictionary using dict(d).\nA fuller example can be found in the form of this GitHub Gist.\n","slug":"json-dict-flattening","tags":["json","python","software","programming"],"title":"How can I quickly flatten a JSON dictionary?"},{"content":"First, you\u2019ll want to find the resource identifier for the item you\u2019re looking to delete\n$ terraform state list aws_ecr_lifecycle_policy.ecr-expiry-policy aws_ecr_repository.blah-ecr [...] 
For this case, we\u2019ll use aws_ecr_repository.blah-ecr as our example identifier.\nYou can then remove it from state like so:\n$ terraform state rm 'aws_ecr_repository.blah-ecr' Removed aws_ecr_repository.blah-ecr Successfully removed 1 resource instance(s). Terraform has now \u201cforgotten\u201d that the resource exists.\nThis is handy for things that are part of a stack but that you don\u2019t want to delete, such as KMS keys, which are generally retained for audit purposes or to ensure that no data ends up encrypted with no way to decrypt it in future.\n","slug":"terraform-remove-state","tags":["terraform","software","operations"],"title":"How can I remove an item from Terraform state?"},{"content":"When compiling applications on macOS, they get invisibly signed by default in order to protect them from tampering.\nThis isn\u2019t always desirable, such as if you want to use dtruss and other applications to monitor their system calls.\nYou can check if an application is signed like so:\n$ codesign -dv .\/main Executable=\/Users\/marcus\/Code\/main Identifier=a.out Format=Mach-O thin (arm64) CodeDirectory v=20400 size=15614 flags=0x20002(adhoc,linker-signed) hashes=485+0 location=embedded Signature=adhoc Info.plist=not bound TeamIdentifier=not set Sealed Resources=none Internal requirements=none You can remove the signature like so:\n$ sudo codesign --remove-signature .\/main $ codesign -dv .\/main .\/main: code object is not signed at all At the time of writing, I don\u2019t believe this is enough to actually let you run most things due to System Integrity Protection but it\u2019s worth a shot.\n","slug":"macos-strip-code-signatures","tags":["macos","libraries","reverseengineering"],"title":"How can I strip code signatures from macOS binaries?"},{"content":"All YouTube channels have an RSS feed which can be found at the following endpoint:\nhttps:\/\/www.youtube.com\/feeds\/videos.xml?channel_id=<channel_id> In order to get a 
user\u2019s channel ID, you just need to navigate to About, click the Share icon on the right sidebar and then hit \u201cCopy channel ID\u201d.\nI\u2019ve used this myself for years instead of having a YouTube account and it\u2019s nice.\n","slug":"youtube-rss","tags":["youtube","rss"],"title":"How can I subscribe to a YouTube channel with an RSS reader?"},{"content":"It can be interesting to unpack Electron apps to see how they function or just generally try to reimplement a feature in a different language.\nYou\u2019ll first need to open up the application itself which varies from platform to platform. In our case, we\u2019ll have a look at the Obsidian macOS app as an example.\n$ cd \/Applications\/Obsidian.app\/Contents $ ls CodeResources Info.plist PkgInfo _CodeSignature Frameworks MacOS Resources $ grep 'ElectronAsar' Info.plist -A 2 <key>ElectronAsarIntegrity<\/key> <dict> <key>Resources\/app.asar<\/key> The thing that we\u2019re after is an asar file which is effectively just a tar-style archive by another name.\nHere we can see that we need to navigate into the Resources folder to get our hands on app.asar.\n$ cd Resources $ pwd \/Applications\/Obsidian.app\/Contents\/Resources $ ls -la | grep .asar -rw-r--r--@ 1 marcus admin 948879 10 May 03:35 app.asar drwxr-xr-x@ 3 marcus admin 96 10 May 03:35 app.asar.unpacked -rwxr-xr-x@ 1 marcus admin 18912821 10 May 03:35 obsidian.asar We have two .asar files which may or may not be the case for whatever you\u2019re looking for.\nWe\u2019ll want to grab the bigger obsidian.asar as that\u2019s where all of the content actually lives but again, this may differ between applications.\nUnpacking the asar file is pretty straightforward using the asar npm package.
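Before extracting anything, it can be handy to see every .asar archive an app bundle contains. Here's a small sketch of a helper for that (my own hypothetical function, not part of the asar tooling):

```python
from pathlib import Path

def find_asar_files(app_root: str) -> list[str]:
    """Recursively list .asar archives under an application bundle."""
    return sorted(str(p) for p in Path(app_root).rglob("*.asar"))
```

Pointing it at /Applications/Obsidian.app/Contents should surface both app.asar and obsidian.asar.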
Let\u2019s extract this file to the desktop.\n$ cd ~\/Desktop $ npx @electron\/asar extract \/Applications\/Obsidian.app\/Contents\/Resources\/obsidian.asar obsidian-unpack $ ls obsidian-unpack app.css i18n main.js sim.js app.js i18n.js package-lock.json starter.html enhance.js icon.png package.json starter.js help.html index.html public worker.js help.js lib sandbox While you shouldn\u2019t use this to cause too much trouble, I didn\u2019t feel too bad using Obsidian as an example. While it is closed source, it\u2019s not that much effort to pop open Chrome DevTools and poke around those same files.\nEverything is all minified and uglified too. Not exactly a security measure (nor is it intended to be) but most of the effort in poking around is interpreting all this nonsense rather than actually unpacking it in the first place.\nOnce that\u2019s done, you can also repack the asar file like so:\n$ npx @electron\/asar pack obsidian-unpack obsidian.asar ","slug":"electron-unpacking","tags":["electron","reverseengineering","software"],"title":"How can I unpack the contents of an Electron app?"},{"content":"sudo pfctl -s all The rdr pass entries might be of interest.\nYou can then use tools like lsof and pstree to find out more ie; lsof -iUDP:53535 -n -P\nConfig can be flushed with sudo pfctl -f \/etc\/pf.conf\n","slug":"macos-view-filter-rules","tags":["macos","software","networking"],"title":"How can I view macOS network filter rules?"},{"content":"Source: Making a Go program 42% faster with a one character change\nYou can compile your program with a special flag to see the output of how variables are stored\n$ go build -gcflags=-m *.go 2>&1 | tee inline.txt # command-line-arguments .\/client_redis.go:28:6: can inline NewClient .\/client_redis.go:71:25: inlining call to \"net\/http\".(*Client).Do .\/client_kafka.go:17:6: can inline (*KafkaClient).getType .\/client_kafka.go:24:34: inlining call to strings.Split .\/client_kafka.go:73:47: group.GroupId escapes to heap
.\/client_kafka.go:73:78: clusterName escapes to heap You\u2019ll see two terms: \u201cinline\u201d and \u201cescapes to heap\u201d\nThe former is good, meaning that the variable only exists for the lifetime of the function (ie; it is added to the stack) while the latter is probably not ideal as it means that value persists on the heap until the garbage collector is run.\nA common scenario to avoid is accidentally copying values instead of taking a pointer to them. Passing the value copy into a function closure will cause it to be allocated to the heap as well.\n","slug":"golang-stack-heap-vis","tags":["golang","performance","garbagecollection","software","optimisation"],"title":"How can I visualise stack and heap allocations in Go?"},{"content":"This has been something that has plagued me for years and I\u2019ve never sat down to properly fix it.\nInstead, I\u2019ve just added .DS_Store to .gitignore files probably over one hundred times by now.\nAnyway, the git documentation mentions the existence of a variable called core.excludesFile.\nIf you don\u2019t set it, and $XDG_CONFIG_HOME isn\u2019t overridden, you can add global ignores to $HOME\/.config\/git\/ignore.\nLet\u2019s see this in action.
First we\u2019ll make a brand new Git repository and add a .DS_Store file.\n> mkdir sports > cd sports > git init Initialized empty Git repository in \/Users\/marcus\/Code\/sports\/.git\/ > touch .DS_Store > git status On branch main No commits yet Untracked files: (use \"git add <file>...\" to include in what will be committed) .DS_Store nothing added to commit but untracked files present (use \"git add\" to track) Ah yes, the perpetual hell but let\u2019s try out our new trick.\n> echo \".DS_Store\" >> ~\/.config\/git\/ignore > git status On branch main No commits yet nothing to commit (create\/copy files and use \"git add\" to track) Mwah, beautiful.\n","slug":"git-globally-ignore-files","tags":["git"],"title":"How can I globally ignore files?"},{"content":"I recently came across an unsecured Selenium instance but I wanted to confirm my findings by making a basic request.\nWhile I opted to use the Python bindings for Selenium, there wasn\u2019t a quick start guide on how to remotely connect to an instance.\nHere\u2019s how you can quickly connect to a Selenium instance and do a basic request using Python:\n>>> from selenium.webdriver.common.desired_capabilities import DesiredCapabilities >>> from selenium import webdriver >>> hub_url = \"http:\/\/example.com:4444\/wd\/hub\" >>> driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=DesiredCapabilities.CHROME) >>> driver.get(\"https:\/\/news.ycombinator.com\") >>> driver.find_element_by_tag_name(\"img\").get_attribute(\"src\") 'https:\/\/news.ycombinator.com\/y18.svg' ","slug":"selenium-remote-connection","tags":["python","selenium"],"title":"How can I remotely connect to a Selenium cluster"},{"content":"This is arguably one of the more obscure commands I\u2019ve come across. 
At the time, a coworker of mine was having issues where his laptop would restart seemingly at random.\nWe were able to find out a bit more with the following command:\nlog show -predicate 'eventMessage contains \"Previous shutdown cause\"' -last 24h It may take a minute or so to actually find some logs but it should reveal a shutdown code.\nI don\u2019t remember where I dug it up but you can see a list of shutdown causes and their meanings in this PDF.\nHere\u2019s how the result looks on my machine where I had performed a normal shutdown as a test\nIf we compare the shutdown code to the PDF above, we can see the description is Correct shut down which lines up exactly.\nNow let\u2019s take this information and use it to see what was potentially happening to my coworker\u2019s laptop.\nHere\u2019s a screenshot of his terminal window with the same command:\nGoing back to the PDF again, we can see that -128 is an alias for -112. Checking -112 tells us that it is \u201cProbably memory related\u201d which at least narrows it down.\nI don\u2019t doubt that result since some of the most authoritative information can often be found in PDFs randomly floating around the internet!\nFor anyone wondering, my coworker has a new laptop on the way regardless, since he can\u2019t work with it constantly rebooting.\n","slug":"macos-check-shutdown-cause","tags":["crashes","logging","macos"],"title":"How can I find out why my Mac has restarted?"},{"content":"This question was trending on Hacker News but the thread in question never addressed it.\nBuried down in the comments was a technical fix suggested by torstenvl.\nSafari has a few configuration entries accessible via defaults read com.apple.coreservices.uiagent.\nWhile I haven\u2019t tested this personally, torstenvl recommended stubbing out the notification with the following commands:\ndefaults write com.apple.coreservices.uiagent CSUIHasSafariBeenLaunched -bool YES defaults write com.apple.coreservices.uiagent
CSUIRecommendSafariNextNotificationDate -date 2050-01-01T00:00:00Z defaults write com.apple.coreservices.uiagent CSUILastOSVersionWhereSafariRecommendationWasMade -float 99.99 If this works for you, let me know. I\u2019m currently running the macOS Monterey beta at the time of writing and as I\u2019ve already used Safari, I don\u2019t believe I get this notification anymore.\n","slug":"macos-disable-safari-recommendation","tags":["macos","safari","software"],"title":"How can I disable the 'Try the new Safari' notification?"},{"content":"I\u2019ve been fiddling a bit with Wails recently and I gave the unreleased v2 alpha a try.\nOut of the box, it binds to Port 5000 and I was surprised to receive a 403 Forbidden.\nDefinitely not what I expected.\nWe can use the lsof utility to figure out what\u2019s holding on to Port 5000. You\u2019ll see in the screenshot below that I use a shell function called whomport but under the hood, it\u2019s running lsof -nP -i4TCP:5000 | grep LISTEN. Let\u2019s see what the output looks like.\nThis doesn\u2019t really help us much since Control Centre could be anything but a bit of searching brings up that this change was introduced in macOS Monterey.\nIn particular, there\u2019s a new setting under System Preferences -> Sharing called AirPlay Receiver. Let\u2019s toggle it off.\nOnce this is done, you should find Port 5000 instantly freed up.
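If you'd rather verify this from a script than remember lsof incantations, here's a quick Python sketch (a hypothetical helper of my own, nothing macOS-specific) that checks whether anything is accepting connections on a given port:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success, an errno otherwise
        return s.connect_ex((host, port)) == 0
```

With AirPlay Receiver enabled you'd expect port_in_use(5000) to come back True, and False once it's toggled off.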
It\u2019s weird that Apple would pick such a commonly used port, especially for developers!\n","slug":"macos-port-5000-monterey","tags":["airplay","macos","monterey","receiver"],"title":"What is using Port 5000 on macOS Monterey?"},{"content":"While you can delete stock folders such as Templates, Public and so on, they\u2019ll still appear in the sidebar of your file explorer.\nThe good news is that they\u2019re pretty easy to disable.\nReferring to the xdg-user-dirs manual shows us that there is a configuration file of \u201cwell known\u201d user directories that lives at $HOME\/.config\/user-dirs.dirs by default\nSimply deleting the various entries inside might break a number of things but if you look closely, you\u2019ll spot that changing a directory to point to your home directory will disable it\nFor example:\n> cat $HOME\/.config\/user-dirs.dirs XDG_TEMPLATES_DIR=\"$HOME\" # templates is now disabled This should cause the Templates folder to disappear from the sidebar of Nautilus although you might need to restart first.\n","slug":"linux-disable-stock-folders","tags":["housekeeping","linux","xdg"],"title":"How can I get rid of the default application folders that ship with my Linux distro?"},{"content":"Recently, a coworker of mine got a new laptop and needed to connect to the printer at work. One of the dialog boxes asked for the \u201cprint queue\u201d.\nFor the unfamiliar, here\u2019s what the macOS printer settings look like.\nI can\u2019t see any queue settings so let\u2019s dive a little deeper.\nNothing here either but surely there must be something under the hood. Thankfully, there\u2019s a built in command called lpstat that allows all sorts of printer configuration.\n> man lpstat | grep lpstat lpstat(1) Apple Inc. 
lpstat(1) lpstat - print cups status information 26 April 2019 CUPS lpstat(1) In order to find the printer queue name, I was able to make use of lpstat -s like so:\n> lpstat -s system default destination: example_printer device for example_printer: ipp:\/\/example-printer\/my-fake-queue Ah, so the queue name is my-fake-queue. I wish the System Preferences pane had just said so earlier.\nWhile there, I also discovered a bunch of my old print jobs as well!\n> lpstat -W completed -l example_printer-3 marcus 59392 Wed 28 Apr 09:40:30 2021 Status: The printer is not responding. Alerts: processing-to-stop-point queued for example_printer example_printer-2 marcus 113664 Wed 17 Mar 15:36:56 2021 Status: The printer is unreachable at this time. Alerts: job-canceled-by-user queued for example_printer example_printer-1 marcus 51200 Thu 8 Oct 11:14:01 2020 Status: Alerts: processing-to-stop-point queued for example_printer Hopefully this makes your printing life easier, or at least gives you some closure on why those months old jobs refused to print.\n","slug":"macos-printer-cli","tags":["macos","printers"],"title":"How can I configure my printer via terminal on macOS?"},{"content":"You can check whether a string contains characters matching a pattern by using the -cmatch operator (a case-sensitive regex match) like so:\n$word = \"Hello\" $word -cmatch \"[A-Z]\" # True ","slug":"powershell-regex","tags":["powershell"],"title":"How can I perform a regex search in Powershell?"},{"content":"This one had me scratching my head a bit as I wasn\u2019t quite sure if Kubernetes was the right place to do this.\nDepending on your use case, it might make sense to terminate traffic before it reaches your cluster but that may have the effect of filtering traffic to other applications if not done properly.\nIn this instance, the Kubernetes cluster in question makes use of the NGINX Ingress Controller and as such, honours a whole bunch of flags.\nBefore we get into the details, let\u2019s set up a small example.\nWe\u2019ll pretend our desktop
has an IP address of 192.0.2.3 exactly. We want to allow a network range of just a single address so that, say, our mobile device with the address 192.0.2.2 can\u2019t connect but our desktop can.\nIn CIDR notation, this would be represented as 192.0.2.3\/32, with the 32 effectively meaning \u201cJust this one address\u201d instead of any other devices on the 192.0.2 range, or broader.\nWith our address block defined, let\u2019s look at an ingress:\napiVersion: networking.k8s.io\/v1 kind: Ingress metadata: name: my-cool-ingress annotations: nginx.ingress.kubernetes.io\/whitelist-source-range: \"192.0.2.3\/32\" spec: rules: - host: example.com http: paths: - path: \/ backend: service: name: example-docs port: name: http-example-docs Ok, we\u2019ve allowed our desktop to connect but let\u2019s try connecting to this ingress from a device we know isn\u2019t allowed, such as our laptop on 192.0.2.6:\n> curl https:\/\/example.com\/ <html> <head><title>403 Forbidden<\/title><\/head> <body> <center><h1>403 Forbidden<\/h1><\/center> <hr><center>nginx<\/center> <\/body> <\/html> Alright, and now from our desktop at 192.0.2.3, which we allowed explicitly:\n> curl example.com <!doctype html> <html> <head> <title>Example Domain<\/title> [...] Success!
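If the CIDR arithmetic ever feels opaque, Python's ipaddress module makes a nice sanity check for what a given mask will and won't admit (using the same example addresses as above):

```python
import ipaddress

# Our /32 admits exactly one host: the desktop
allowed = ipaddress.ip_network("192.0.2.3/32")
print(ipaddress.ip_address("192.0.2.3") in allowed)  # True
print(ipaddress.ip_address("192.0.2.2") in allowed)  # False

# A /24, by contrast, would admit the entire 192.0.2.x range,
# including the mobile device we wanted to keep out
wider = ipaddress.ip_network("192.0.2.0/24")
print(ipaddress.ip_address("192.0.2.2") in wider)  # True
```
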
We\u2019ve managed to use nothing but an ingress to block specific traffic but you might wonder, why would I ever use this?\nOne use case may be exposing applications that require the use of a public endpoint, such as Microsoft Teams or Slack.\nOften, you can\u2019t make use of OAuth but you want to protect against random internet traffic so you can explicitly allow known IP ranges.\nWith Azure for example, they publish a full list of their active IP ranges so if you can\u2019t simply make use of a VNet, this may be the next best thing.\n","slug":"kubes-ingress-ip-range","tags":["allowlist","kubernetes"],"title":"How can I restrict which traffic is allowed to pass through a Kube ingress?"},{"content":"Let\u2019s say that you have a variable that contains a string:\n$a = \"abc\" That\u2019s neat but what if I want to view the possible methods that are available on the string object? You can use the Get-Member cmdlet. You can also use the shorthand version gm.\n$a = \"abc\" $a | gm # Name | MemberType | Definition # Clone | Method | System.Object Clone() [...] # ... # Length | Property | int Length {get;} $a.Length # 3 You can also view the static methods associated with an object:\n\"abc\" | gm -Static # Compare | Method | static int Compare(string strA, string strB)...
","slug":"powershell-object-methods","tags":["powershell"],"title":"How can I view the methods associated with an object?"},{"content":"One of the primary considerations of the HTTP2 Working Group was that encouraging HTTPS meant a more secure web.\nMore practically however, there had been previous experiments using WebSockets and SPDY which showed that regular HTTP requests were highly prone to failure due to things like proxies interrupting negotiation.\nOften times, an Upgrade header was supplied with the initial HTTP negotiation and both sides then upgraded shortly after, but if HTTPS was used from the outset, protocol negotiation became significantly easier.\nThere is an overhead to establishing a TLS connection of course but the price pays off in the form of HTTP2 multiplexing and so on.\n","slug":"http-non-encryption-benefits","tags":["historical","http","reliability"],"title":"What non-encryption benefits are provided by HTTPS?"},{"content":"Rob Pike explained his understanding in a now-dead Google+ post back in 2012. The use of . and .. had appeared in early versions of the Unix file system, as a quick way to navigate around. They would appear when using ls to view the contents of a directory so a line was added that ignored anything where the first character was a period.\nThis, of course, meant that any files starting with a . were also hidden and so began years of bad practices. Rather than think \u201cWhere should I store my configuration folder\u201d, the easy option became storing a dotfile instead. It may be messy but if no one can see it, is it really so bad?\nRob also points out that configuration could just as easily be stored in $HOME\/cfg or $HOME\/lib as was the case in Plan 9. He doesn\u2019t dispute that dotfiles have their uses but emphasizes that the file itself serves the purpose.
Prepending a dot does not a configuration file make.\n","slug":"linux-why-do-dotfiles-exist","tags":["historical","linux","macos"],"title":"Why are dot files a thing?"},{"content":"It was common to have images on a subdomain and the bulk of the site at the root of a domain such as nytimes.com and img.nytimes.com\nCaching is widely understood as the main value today but it doesn\u2019t capture the historical context behind the introduction of this tactic.\nAnother aspect is that the size of headers bloated significantly, sometimes to the point where cookies associated with a request would be larger than a single TCP packet, which is about 1.5kb.\nIn order to reduce latency, it made sense to move resources that didn\u2019t require cookies to a separate domain, so that those requests didn\u2019t inherit excess headers. While the overhead was small on a single request, it quickly ballooned across the many requests a page made for its assets.\nThis practice was colloquially referred to as a \u201ccookie-less domain\u201d.\n","slug":"http-domain-splits","tags":["cookies","headers","historical","http"],"title":"Why did sites split their assets across multiple domains back in the day?"},{"content":"No! As I learned recently, only a handful of regions feature multiple availability zones.\nHaving originally started out using AWS, I had assumed that every region has multiple availability zones.\nAzure maintains a list of regions with multiple AZs so if you need redundancy, you\u2019re best off picking one of these.\nSome services may refuse to deploy entirely (such as \u201cgeo-redundant\u201d gateways in us-east as I found out recently)\n","slug":"azure-regions-alike","tags":["availability","azure","cloud","microsoft"],"title":"Are all Azure regions alike?"},{"content":"The following should roughly do it.
Your mileage may vary!\ngit clone -b emacs-27 git:\/\/git.sv.gnu.org\/emacs.git cd emacs sudo apt-get build-dep emacs .\/autogen.sh .\/configure --with-x-toolkit=lucid --with-mailutils make -j4 .\/src\/emacs # test that it's working sudo make install ","slug":"emacs-compile-from-source","tags":["emacs"],"title":"How can I compile Emacs from source?"},{"content":"If you\u2019re trying to test out a job, and don\u2019t want to wait for however long, you can manually create a job instance.\nAssuming our cronjob is called sports-leaderboard-calc, you can create it like so:\n> kubectl create job instance-name --from=cronjob\/sports-leaderboard-calc job.batch\/instance-name You\u2019ll then see the resulting job and pod under kubectl get job and kubectl get pod respectively.\n","slug":"kubes-create-cron-instance","tags":["cronjob","kubernetes"],"title":"How can I create an instance of a Kube cronjob?"},{"content":"I suppose they aren\u2019t used too much anymore but I\u2019ve started using them as a preview window for my projects page.\nIt can be handy to act differently depending on whether you\u2019re inside an iFrame, such as scaling the viewport.\nYou can\u2019t do something like this:\niframe > canvas { width: 500px; } canvas { width: 100%; } but you can use Javascript inside an iFrame and make the changes within the frame itself, rather than from the outside:\nfunction insideIframe() { try { return window.self !== window.top; } catch (e) { return true; } } if (insideIframe()) { \/\/ perhaps change the size of something or just act differently } ","slug":"js-detect-iframe-parent","tags":["iframe","javascript"],"title":"How can I determine if my code is inside of an iFrame?"},{"content":"Using pg_dump, which ships alongside the psql executable, it\u2019s a pretty simple process\npg_dump --dbname={{DBNAME}} --host={{HOST}} --port={{PORT}} --username={{USERNAME}} --password --format=c > {{NAME}}.dump # The c in --format=c stands for custom If you\u2019re dumping to a directory archive format1, you
can use -j to parallelize this operation ie; pg_dump -j 4.\nhttps:\/\/www.postgresql.org\/docs\/current\/backup-dump.html#BACKUP-DUMP-LARGE\u00a0\u21a9\ufe0e\n","slug":"postgres-export-db","tags":["databases","postgres"],"title":"How can I export a Postgres database?"},{"content":"This error is usually pretty cryptic and I often forget how to debug it so let\u2019s look at a sample error:\nError: The module '\/Users\/marcus\/Code\/octowise\/node_modules\/better-sqlite3\/build\/Release\/better_sqlite3.node' was compiled against a different Node.js version using NODE_MODULE_VERSION 83. This version of Node.js requires NODE_MODULE_VERSION 89. I often remember that I need to possibly use a different version of nodejs but I never remember how to tell which one.\nThe official NodeJS site has a table with version numbers and their corresponding NODE_MODULE_VERSION available here.\nIn the case of this error, I think I probably want to downgrade to Node.js 14.x? It\u2019s all very confusing.\n","slug":"nodejs-module-version","tags":["javascript","nodejs"],"title":"How can I find my current NODE_MODULE_VERSION?"},{"content":"You can see a list of current auth-sources by evaluating the auth-sources variable in elisp\n> auth-sources (password-store \"~\/.authinfo.gpg\") ","slug":"emacs-auth-sources","tags":["elisp","emacs"],"title":"How can I find out where Emacs is checking for passwords?"},{"content":"Using pg_restore, it\u2019s almost the same process as pg_dump but in reverse\npg_restore --dbname={{DBNAME}} --host={{HOST}} --port={{PORT}} --username={{USERNAME}} --password --jobs 2 {{NAME}}.dump You can use -j to parallelize restoration ie; pg_restore -j 41.\nhttps:\/\/www.postgresql.org\/docs\/current\/backup-dump.html#BACKUP-DUMP-LARGE\u00a0\u21a9\ufe0e\n","slug":"postgres-import-db","tags":["databases","postgres"],"title":"How can I import a dumped database into Postgres?"},{"content":"Until recently, I never had to go near SAML with a 10 foot pole but I was recently helping out a
coworker with adding SAML authentication to an Elasticsearch cluster.\nI had never seen one before but a SAML request looks a little like this:\nhttps:\/\/idp.example.org\/SAML2\/SSO\/Redirect?SAMLRequest=fZFfa8IwFMXfBb9DyXvaJtZ1BqsURRC2Mabbw95ivc5Am3TJrXPffmmLY3%2FA15Pzuyf33On8XJXBCaxTRmeEhTEJQBdmr%2FRbRp63K3pL5rPhYOpkVdYib%2FCon%2BC9AYfDQRB4WDvRvWWksVoY6ZQTWlbgBBZik9%2FfCR7GorYGTWFK8pu6DknnwKL%2FWEetlxmR8sBHbHJDWZqOKGdsRJM0kfQAjCUJ43KX8s78ctnIz%2Blp5xpYa4dSo1fjOKGM03i8jSeCMzGevHa2%2FBK5MNo1FdgN2JMqPLmHc0b6WTmiVbsGoTf5qv66Zq2t60x0wXZ2RKydiCJXh3CWVV1CWJgqanfl0%2Bin8xutxYOvZL18NKUqPlvZR5el%2BVhYkAgZQdsA6fWVsZXE63W2itrTQ2cVaKV2CjSSqL1v9P%2FAXv4C I took this example from Wikipedia and it\u2019s a pretty good illustration of where the juicy part of the request probably is.\nA basic way to inspect this request in Python would look like the following. I don\u2019t claim that this will work on all requests. For that, try something like python3-saml.\nfrom base64 import b64decode from urllib.parse import unquote import zlib url = \"fZFfa8IwFMXfBb9DyXvaJtZ1BqsURRC2Mabbw95ivc5Am3TJrXPffmmLY3%2FA15Pzuyf33On8XJXBCaxTRmeEhTEJQBdmr%2FRbRp63K3pL5rPhYOpkVdYib%2FCon%2BC9AYfDQRB4WDvRvWWksVoY6ZQTWlbgBBZik9%2FfCR7GorYGTWFK8pu6DknnwKL%2FWEetlxmR8sBHbHJDWZqOKGdsRJM0kfQAjCUJ43KX8s78ctnIz%2Blp5xpYa4dSo1fjOKGM03i8jSeCMzGevHa2%2FBK5MNo1FdgN2JMqPLmHc0b6WTmiVbsGoTf5qv66Zq2t60x0wXZ2RKydiCJXh3CWVV1CWJgqanfl0%2Bin8xutxYOvZL18NKUqPlvZR5el%2BVhYkAgZQdsA6fWVsZXE63W2itrTQ2cVaKV2CjSSqL1v9P%2FAXv4C\" urldecoded_url = unquote(url) b64decoded_url = b64decode(urldecoded_url) request = zlib.decompress(b64decoded_url, -15).decode() print(request) # '<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\r\\n<samlp:AuthnRequest\\r\\n xmlns:samlp=\"urn:oasis:names:tc:SAML:2.0:protocol\"\\r\\n xmlns:saml=\"urn:oasis:names:tc:SAML:2.0:assertion\"\\r\\n ID=\"aaf23196-1773-2113-474a-fe114412ab72\"\\r\\n Version=\"2.0\"\\r\\n IssueInstant=\"2004-12-05T09:21:59Z\"\\r\\n AssertionConsumerServiceIndex=\"0\"\\r\\n
AttributeConsumingServiceIndex=\"0\">\\r\\n <saml:Issuer>https:\/\/sp.example.com\/SAML2<\/saml:Issuer>\\r\\n <samlp:NameIDPolicy\\r\\n AllowCreate=\"true\"\\r\\n Format=\"urn:oasis:names:tc:SAML:2.0:nameid-format:transient\"\/>\\r\\n<\/samlp:AuthnRequest>\\r\\n' If you\u2019re feeling a bit lazy, like I often am, you can use any of the online decoders, such as this one by PingID.\nIf you\u2019re dealing with sensitive credentials however, it\u2019s best to decode it locally rather than trusting a third party.\n","slug":"saml-inspect-request","tags":["authentication","saml"],"title":"How can I inspect a SAML request?"},{"content":"While you could provide a button, some parts of a site can look quite nice if they automatically transition between light and dark mode.\nYou can listen for these changes like so:\nwindow .matchMedia(\"(prefers-color-scheme: dark)\") .addEventListener(\"change\", (e) => { const updatedScheme = e.matches ? \"dark\" : \"light\"; if (updatedScheme === \"dark\") { \/\/ change something to dark mode } else { \/\/ change something to light mode } }); ","slug":"js-colour-scheme-listener","tags":["darkmode","javascript"],"title":"How can I listen for user changes to their colour scheme (ie dark mode)?"},{"content":"DNS! It\u2019s always the answer for your woes :)\nWhile there are a myriad of HTTP servers for seeing your external IP address, you can also use one of the various DNS based services on offer.\nThese will give you an IPv4 address.
The -4 flag isn\u2019t necessarily required but without explicitly providing it, you\u2019ll be gambling on the return type.\n> dig @resolver3.opendns.com myip.opendns.com +short -4 > dig @resolver4.opendns.com myip.opendns.com +short -4 > dig @ns1-1.akamaitech.net ANY whoami.akamai.net +short -4 > dig @ns1.google.com TXT o-o.myaddr.l.google.com +short -4 and likewise, for IPv6\n> dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6 > dig @ns1.google.com TXT o-o.myaddr.l.google.com +short -6 You can read more, and see some other providers I left out, in this detailed StackOverflow thread but generally speaking, I\u2019ve found OpenDNS\u2019s resolver4 to be the fastest of the lot on offer.\nA very handy thing to have aliased and way quicker than clicking 5 times to navigate to a webpage.\n","slug":"dns-lookup-current-ip","tags":["dig","dns"],"title":"How can I look up my current external IP address?"},{"content":"For large downloads, such as macOS updates, it can be annoying that tools like Self Service don\u2019t surface download metrics\nThankfully, we can find the download on disk and watch as the file size increases\nIn the case of macOS, downloads live at \/Library\/Application\\ Support\/JAMF\/Downloads\nI\u2019m no shell scripting master but the following is a quick hack to view the progress in real time\nThere are better tools like watch but eh, this works fine enough\nHere\u2019s the script I\u2019ve been using but it requires gnumfmt which you can install with brew install coreutils\n> while (true) do echo $(sudo ls -l \/Library\/Application\\ Support\/JAMF\/Downloads | grep macOS | awk '{ print $5 }' | gnumfmt --to iec --format \"Downloaded: %8.1f\"); sleep 15; done Downloaded: 6.9G Downloaded: 7.0G Downloaded: 7.0G Downloaded: 7.1G Downloaded: 7.1G That\u2019s not particularly readable so here\u2019s a bit of an explainer:\nwhile (true) do echo $( sudo ls -l \/Library\/Application\\ Support\/JAMF\/Downloads | # (1) grep macOS | # (2)
awk '{ print $5 }' | # (3) gnumfmt --to iec --format \"Downloaded: %8.1f\" # (4) ); sleep 15; # (5) done\n1. Annoyingly, JAMF\/Downloads is a restricted directory so we have to be a superuser in order to operate within that folder\n2. We\u2019re only concerned with one download in particular; in my case it\u2019s the macOS Big Sur DMG\n3. Let\u2019s fetch the current file size, though just seeing 8466481152 is not particularly useful\n4. We can use gnumfmt, a GNU coreutils implementation of numfmt, given the latter only exists on Linux systems. gnumfmt is available via Homebrew as mentioned above\n5. We just run this script continually until Ctrl-C is invoked. Over an average speed proxy, it takes about 45 seconds to download 100MB so there\u2019s no value personally in setting something like sleep 5\nEnjoy your window into frustration as you realise just how long waiting will take\n","slug":"macos-monitor-jamf-downloads","tags":["enterprise","jamf","macos","software"],"title":"How can I monitor JAMF downloads on macOS?"},{"content":"If you have a cronjob that you\u2019d like to pause while doing some maintenance for example, you can use the suspend attribute.\n> kubectl patch cronjobs does-something -p '{\"spec\": {\"suspend\": true }}' cronjob.batch\/does-something patched Once you\u2019re done, you can just flip true to false\n","slug":"kubes-pause-recurring-cronjob","tags":["cronjob","kubernetes"],"title":"How can I pause a recurring Kube cronjob?"},{"content":"Often times, it can be useful to check the value of a Kubernetes secret, to check that it lines up with what an application is receiving.
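One thing worth remembering up front: secret values are only base64-encoded, not encrypted, so decoding them locally is trivial. A quick sketch with Python's standard library (the fruit here is just example data):

```python
import base64

# Kubernetes stores secret values base64-encoded, which is an
# encoding, not encryption; anyone who can read it can decode it
encoded = base64.b64encode(b"strawberries").decode()
print(encoded)                             # c3RyYXdiZXJyaWVz
print(base64.b64decode(encoded).decode())  # strawberries
```
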
An example might be a randomly generated secret that is shared between multiple Kubernetes resources.\nLet\u2019s have a look at a mock secret:\n> kubectl describe secret dummy-secret Name: dummy-secret Namespace: sports Labels: app.kubernetes.io\/managed-by=Helm Annotations: meta.helm.sh\/release-name: sports meta.helm.sh\/release-namespace: sports Type: Opaque Data ==== MY_FAVOURITE_FRUIT: 12 bytes So here we have a secret called dummy-secret and one of the values within it has the name MY_FAVOURITE_FRUIT.\nWe can fetch it like so:\n> kubectl get secret dummy-secret -o jsonpath=\"{.data.MY_FAVOURITE_FRUIT}\" | base64 --decode strawberries ","slug":"kubes-read-secret","tags":["credentials","kubernetes","security"],"title":"How can I read a Kubernetes secret?"},{"content":"By default, kubectl will search the default namespace for any newly added clusters to your context, which can be quite annoying.\nYou can of course tack on -n <namespace> manually or make your own little wrapper around kubectl as I have.\nA simpler version though is to just do the following:\nkubectl config set-context --current --namespace=baseball Context \"sports\" modified Where baseball is the name of your namespace of course.\nGoing forward, any commands will default to use the baseball namespace but you can override them as always with -n.\n","slug":"kubes-default-namespace","tags":["defaults","kubectl","kubernetes"],"title":"How can I set a default kubectl namespace for a given cluster?"},{"content":"Let\u2019s assume you have multiple networks set up under System Preferences > Networks.\nYou might have \u201cWork\u201d which has a bunch of proxy configuration specified and \u201cHome\u201d which just disabled proxy configuration.\nIf you left the former \u201cWork\u201d network selected, then went to a place that can\u2019t access the proxy server, you wouldn\u2019t be able to access the internet and vice versa.\nTo make automating this a little bit easier, there\u2019s a command line 
tool called scselect\nHere\u2019s an example of what it looks like in action:\n> scselect Defined sets include: (* == current set) * <guid> (Work) <guid> (Home) In this example, we can see the Work network is selected.\nIf we now wanted to change to the Home network, we could do so manually in System Preferences or run scselect with the name of the network we want to change to like so:\n> scselect Home CurrentSet updated to <guid> (Home) > scselect Defined sets include: (* == current set) <guid> (Work) * <guid> (Home) As you can see, the Home network is now selected.\nI only recently discovered this tool so I haven\u2019t automated it yet but it\u2019s probably feasible to have a file with your working hours and then if it\u2019s within those hours, toggle on the Work network (and all of the proxy configuration that comes with it)\nThe reason you might want to use a schedule and not eg; WiFi name is that you might be working from home over a VPN for example.\n","slug":"macos-configured-networks","tags":["macos","networking"],"title":"How can I view configured networks in my macOS terminal?"},{"content":"The iex interpreter includes a function called h which can be used to show documentation for a module\nh String # h\/1 ","slug":"elixir-help-docs","tags":["elixir"],"title":"How can I view help documentation for an Elixir module?"},{"content":"Let\u2019s say we have the following module\ndefmodule Reminder do def alarm(time, day) do end end We can check what methods are on it by providing a :functions atom\nReminder.__info__(:functions) # [alarm: 2] As we can see, this Reminder module has an alarm method, with an arity of 2.\n","slug":"elixir-object-methods","tags":["elixir"],"title":"How can I view methods associated with an Elixir object?"},{"content":"According to the Storage part of the Prometheus documentation, a single sample is somewhere between 1 - 2 bytes.\nYou can roughly calculate how much storage you\u2019ll need with the following
formula:\ndisk_space = retention_time_in_seconds * samples_ingested_per_second * 2 bytes (take the upper bound to be safe) By that logic, if we were ingesting 4000 samples per second and we were retaining them for 15 days (the default), it would look something like this:\ndisk_space = 1296000 * 4000 * 2 disk_space \/\/ 10368000000 bytes disk_space in gigabytes \/\/ 10.37 gigabytes Given this, you can see the levers you have are decreasing the amount of sampling going on, reducing the amount of time samples are retained for or simply buying more disk space as you go on.\nRemember as well that we took the high end of the estimation and it could be as low as 5.185 gigabytes if we\u2019re extremely lucky on compression and\/or presumably we have next to no labels on each sample.\nYou would also need to factor in many other things such as the write-ahead log but I don\u2019t pretend to know what any of these things are.\nI just use Prometheus! I don\u2019t actually maintain a cluster or anything like that.\n","slug":"prometheus-sample-size","tags":["monitoring","prometheus","timeseries"],"title":"How large is a single Prometheus sample?"},{"content":"Back in the day, there was just one file: HOSTS.TXT.\nIt contained a name-to-address mapping for every entity within ARPANET.\n\/etc\/hosts used to be compiled from HOSTS.TXT\nIt didn\u2019t scale for a number of reasons:\nAs soon as administrators pulled the latest version of HOSTS.TXT, it would already be out of date There was no way to enforce constraints eg; no duplicates on hostnames It took a lot of resources to serve it up to every administrator ","slug":"dns-original-implementation","tags":["dns","historical"],"title":"How was DNS originally implemented?"},{"content":"An initial thought might be that it would help to capture all context about everything, all of the time but that would soon get very expensive to store.\nProfiling takes the approach of capturing as much context as possible for a certain period of time, generally for
use in debugging.\nContinually gathering information, such as how long each function took to execute, in a production environment would very quickly impact end users so this is best suited for validating targeted assumptions of what might be going wrong.\n","slug":"monitoring-what-is-profiling","tags":["instrumentation","monitoring"],"title":"What is profiling?"},{"content":"The root node of DNS has a null label\nThe DNS tree is restricted to 127 levels of depth so you could only.have.a.domain.name.one.hundred.and.twenty.seven.levels.deep.com\n. is used to mark a domain as absolute eg; utf9k.net.\nBehind the scenes, a full domain name would be www.google.com.<root\/null>\nSome websites, or perhaps more accurately the load balancers and proxies in front of them, don\u2019t acknowledge the existence of such a thing.\nOne high profile example is Amazon. If you visit https:\/\/amazon.com., you\u2019ll see a blank page with the title x. Note the period on the end of the URL to see this issue in effect.\n","slug":"dns-trailing-period","tags":["dns","historical","networking"],"title":"What is the period you sometimes see at the end of a domain name?"},{"content":"In the same vein that it\u2019s not often feasible to capture all data, all of the time, tracing is concerned with sampling a subset of events such as every 50th incoming request.\nGenerally most tracing implementations will show you how much time is spent at each step of the way from establishing an SSL connection through to how long is spent talking with any given database.\nDistributed tracing is this same idea but\u2026 well, distributed.\nMore specifically, interactions are \u201ctagged\u201d, whether it be an HTTP header or an attribute within an RPC call. 
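As a rough sketch of how that tagging might look (the header name and sampling logic here are invented for illustration, not taken from any particular tracing library):

```shell
# A unique ID minted once at the edge of the system (any unique string works).
request_id="req-$$-$(date +%s%N)"

# Every downstream hop forwards the same ID, e.g. as an HTTP header, so the
# individual spans can later be stitched back together:
#   curl -H "X-Request-ID: $request_id" http://downstream-service/

# Deterministic 1-in-100 sampling: hash the ID so that every service in the
# chain makes the same keep/drop decision for a given request.
id_hash=$(printf '%s' "$request_id" | cksum | cut -d ' ' -f 1)
if [ $(( id_hash % 100 )) -eq 0 ]; then
  echo "sampled: record a full trace for $request_id"
else
  echo "not sampled: $request_id"
fi
```

The important property is that the decision is a pure function of the ID, so no service needs to coordinate with any other to agree on which requests get traced.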
While those interactions may pass the boundaries of any one service, they can be \u201cstitched\u201d back together by matching up the associated request IDs.\nThe idea here being that you can trace a request through a system oriented around microservices, as if it were just one regular application.\nGiven that only a subset of interactions (ie 1 in 100) are sampled, this solves the storage issues presented by full on profiling all of the time.\n","slug":"monitoring-tracing-overview","tags":["instrumentation","monitoring"],"title":"What is tracing?"},{"content":"Services and libraries have different needs. Further, not all services are alike in the types of work they perform or what types of work are important to measure\nOnline-serving systems These are services that have a person or client waiting for a response.\nAs such, the RED method captures key metrics which are Requests, Errors and Duration.\nIt\u2019s worth noting that there may be a tendency to exclude failed requests when capturing duration but this temptation should be avoided.\nIn the event that you only captured successes, a long running request that ultimately failed after 15 seconds would be excluded for example, despite any reasonable initial assumption that errors may tend towards having a lower duration.\nOffline-serving systems These are services that operate continually in the background.
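Returning to durations for a moment, the point about not excluding failed requests can be sketched like so (the metric name and label are made up for illustration):

```shell
# Time a command and emit a duration sample either way; failed "requests" get
# the same treatment as successful ones rather than being silently dropped.
measure() {
  local start end status
  start=$(date +%s%N)
  if "$@" >/dev/null 2>&1; then status="success"; else status="error"; fi
  end=$(date +%s%N)
  echo "request_duration_seconds{status=\"$status\"} $(( (end - start) / 1000000000 ))"
}

measure sleep 1   # a "request" that succeeds after ~1s
measure false     # a "request" that fails immediately, but is still recorded
```

The try-either-way shape is the whole trick: the timer stops and the sample is emitted regardless of the outcome.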
Their workloads are generally in batches and may utilise multiple steps, buffered with a queuing system.\nThe USE method captures key metrics which are Utilisation, Saturation and Errors.\nBatch jobs Similar to offline-serving systems, these may be kicked off upon request (ie sending an email in the background) or something akin to a cronjob.\nGiven that they aren\u2019t suitable for serving a persistent HTTP endpoint for scraping, it\u2019s best to push metrics to something like the Prometheus Pushgateway upon work being completed.\n","slug":"monitoring-what-to-instrument","tags":["instrumentation","monitoring"],"title":"What is worth instrumenting?"},{"content":"The Playstation 1 uses CD-ROMs with the XA extension.\nIf you open a PS1 ISO within a hex editor, you\u2019ll want to scroll down to offset 37656\nWithin a normal CD (or ISO), it will have a header offset size of 2048.\nIt seems a little arbitrary but you can read the table of contents by multiplying the header offset by 16.\nFor a normal CD, this would be 2048 * 16 = 32768\nIn the case of PS1 discs, the header offset size is 2352 for reasons I don\u2019t understand so they start at 2352 * 16 = 37632\nLastly, and somewhat arbitrarily, you\u2019ll want to jump forward by an additional 24 bytes in order to come to the starting point of 37656\n","slug":"ps1-disc-offset","tags":["hexedit","playstation","videogames"],"title":"Why do Playstation 1 discs start at offset 37656?"},{"content":"As an example of what I mean, org-roam had seemingly the same function names at one point, despite the only difference being some double dashes\nHere is an example\nAt first glance, the naming differences between org-roam-capture--get-point and org-roam--capture-get-point seem completely arbitrary\nSupposedly, since there is no such thing as internal vs external functions, it\u2019s a convention for declaring that a function should be considered private or internal only\nI still don\u2019t understand the above example
since they both have double hyphens\n","slug":"emacs-function-double-dash","tags":["elisp","emacs"],"title":"Why do some Emacs functions have double dashes?"},{"content":"Lists that start with a ` end up having values interpolated.\nCompare the following two examples:\n'(,(concat \"Hello, \" \"World\"), \"Nice to meet you?\") ; (,(concat \"Hello, \" \"World\") ; ,\"Nice to meet you?\") As you can see, we got the exact same list that we defined when starting with a '\nHow about using a `?\n`(,(concat \"Hello, \" \"World\"), \"Nice to meet you?\") ; (\"Hello, World\" \"Nice to meet you?\") The concat expression is evaluated and we get back two strings!\n","slug":"emacs-list-backtick","tags":["elisp","emacs"],"title":"Why do some Emacs lists start with a backtick instead of a comma?"},{"content":"I\u2019ve known about this tool for some time now but I\u2019m writing about it because I ALWAYS forget to use it.\npbcopy, and as I just discovered, pbpaste are two tools that are built into macOS.\nYou can pipe data into the former to add it to your clipboard and similarly, you can use the latter as input into a unix pipeline.\nLet\u2019s look at an example:\n> echo \"see you on the other side\" | pbcopy You can now use Cmd+V to paste this text into any GUI application. Saves you having to mouse over to the terminal and highlight text but I still do it every darn time.\nWe can also use our clipboard contents as input, as mentioned. You could have copied some text from a GUI application and you want to use it in your terminal.\n# Clipboard contains \"utf9k.net\" that I copied from my browser > pbpaste | xargs dig TXT | grep \"I see\" utf9k.net.
3444 IN TXT \"I see you snoopin' around ;) If you're after something, you can feel fr\\010ee to email me at marcus@utf9k.net\" What I\u2019m trying to say is that I have all of the tools at my disposal to avoid RSI but I just need to remember they exist\u2026\n","slug":"macos-clipboard-piping","tags":["clipboard","macos","terminal"],"title":"How can I access my clipboard contents inside my terminal?"},{"content":"Often times, you might want to test connectivity to a container but without doing so from within the container itself. You could just exec into a neighbouring pod but it may not have networking tools installed or even network connectivity if there\u2019s a network policy in the mix.\nA quick way to deploy a curl container has been shared before in the Kubernetes docs and it looks like this:\n> kubectl run curl --image=radial\/busyboxplus:curl -i --tty Unable to use a TTY - container curl did not allocate one If you don't see a command prompt, try pressing enter. I think the output looks something like that but this is a bit more involved as my work makes use of policies in our cluster.\nNow normally I just keep a file called curl-debug.yml sitting around my hard drive and deploy it using kubectl apply -f curl-debug.yml but you can also deploy it inline using a hideously long container override.\nYou may need more (or less) override fields depending on eg; if your network policy only allows pods with certain annotations or metadata to connect to what you\u2019re testing.\nAn unprivileged curl pod would look something like this.
Note that I\u2019ve removed -i --rm --tty as it always seems buggy to me and I much prefer to just manually run kubectl exec -it curl -- sh than have my terminal hanging.\n> kubectl run curl --image=radial\/busyboxplus:curl --overrides='{ \"spec\": { \"securityContext\": { \"runAsUser\": 1000, \"runAsGroup\": 1000, \"seccompProfile\": { \"type\": \"RuntimeDefault\" }}, \"containers\": [{ \"name\": \"curl\", \"image\": \"radial\/busyboxplus:curl\", \"command\": [ \"\/bin\/sh\", \"-c\", \"--\" ], \"args\": [ \"while true; do sleep 30; done; \" ], \"securityContext\": { \"runAsNonRoot\": true, \"allowPrivilegeEscalation\": false }}]}} pod\/curl created and for those who don\u2019t love huge eyesores, here\u2019s the contents of the pod spec I alluded to earlier:\napiVersion: v1 kind: Pod metadata: name: \"curl\" labels: app: \"my-cool-app\" service: \"some-other-identifier\" spec: securityContext: runAsUser: 1000 runAsGroup: 1000 seccompProfile: type: RuntimeDefault containers: - name: \"curl\" image: \"radial\/busyboxplus:curl\" command: [\"\/bin\/sh\", \"-c\", \"--\"] args: [\"while true; do sleep 30; done;\"] securityContext: runAsNonRoot: true allowPrivilegeEscalation: false Ah right, the actual point of the question. Once you have curl running, and you\u2019re inside the container, you can then use curl to test out the connectivity of things.\nFor example, earlier today I was moving a container to a new cluster and it was using the URL that the ingress was listening to. Let\u2019s use https:\/\/sports.example.com in this case and say that the service was called sports.\nThe ingress URL changed from being internally accessible to publicly accessible, although behind an OAuth2 proxy of course.\nI noticed this change by doing the following:\n> curl --head --location http:\/\/sports HTTP\/1.1 200 OK Server: nginx [...] Ok, it resolves the internal service perfectly fine.
How about the public one?\n> curl --head --location https:\/\/sports.example.com HTTP\/1.1 302 Moved Temporarily Location: https:\/\/example.com\/_oauth2\/start?rd=https:\/\/sports.example.com [...] HTTP\/1.1 302 Found Location: https:\/\/login.microsoftonline.com\/common\/oauth2\/authorize?a_very_long_string [...] HTTP\/1.1 200 OK Content-Length: 186288 [...] Now, I don\u2019t even have to look at the payload to infer that we probably just hit an OAuth2 login page and that\u2019s exactly what was happening.\nIn the previous cluster, we were using internal links, so the external OAuth proxy was never involved. Admittedly, this was rationalised as \u201cDNS just magically knows to resolve the request to the service right next door\u201d and perhaps this is true but maybe not!\nAnyway, a third case that you might run into is the following:\n> curl --head --location http:\/\/something # the command just sits with no output forever! If you check my curl-debug.yml, I have specific labels that the network policy looks for. Because this curl pod is missing them, it can\u2019t make any requests.\nThis could be anything from protocol (TCP\/UDP), port number, whitelisted namespaces, whitelisted resources and so on.
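For reference, a policy that would cause exactly this silent hang for an unlabelled debug pod might look something like the sketch below. This is illustrative only; the labels mirror the ones from the pod spec earlier, but your cluster\u2019s real policies will differ:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-labelled-pods-only
spec:
  # The pods being protected; traffic to them is denied unless matched below.
  podSelector:
    matchLabels:
      app: my-cool-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Only pods carrying this label may connect; a bare curl pod without
        # it will just hang, exactly as described above.
        - podSelector:
            matchLabels:
              service: some-other-identifier
      ports:
        - protocol: TCP
          port: 80
```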
If you have this problem, either check your various log messages for reference to a required policy or check for an existing one that needs to be updated.\nDoing a search for \u201ckind: NetworkPolicy\u201d should help narrow down which files are relevant and\/or if they even exist in the first place.\nHappy debugging!\n","slug":"kubes-namespace-connectivity","tags":["curl","debugging","kubernetes","networking"],"title":"How can I test connectivity within my Kube namespace?"},{"content":"While there\u2019s the classic Apple menu -> About This Mac -> System Report, a terminal-based alternative is the system_profiler command.\nYou can view a list of queryable types like so:\n> system_profiler -listDataTypes Available Datatypes: SPParallelATADataType SPUniversalAccessDataType [...] Once you\u2019ve found one or more types you\u2019re interested in, just append them after the command like so: system_profiler <type1> <type2>\nLet\u2019s see how it looks in action:\n> system_profiler SPPowerDataType Power: Battery Information: Model Information: Manufacturer: DSY Device Name: bq20z451 Pack Lot Code: 0 PCB Lot Code: 0 Firmware Version: 1002 Hardware Revision: 1 Cell Revision: 2400 Charge Information: Fully Charged: No Charging: Yes Full Charge Capacity (mAh): 4569 State of Charge (%): 74 Health Information: Cycle Count: 81 Condition: Normal This is just an excerpt of what is otherwise a whole bunch of information.\nParticularly interesting is the SPAirPortDataType which can be queried to see a list of SSIDs in the environment.\n","slug":"macos-view-hardware","tags":["hardware","macos"],"title":"How can I find out more about the hardware inside my Mac?"},{"content":"From time to time, I have troubles with Firefox since it seems to clash with a corporate proxy we use.\nUsing the built-in certificate store rather than Firefox\u2019s own managed store seemed to \u201cfix\u201d this issue.\nTo do this, you\u2019ll want to navigate to about:config and then toggle
security.enterprise_roots.enabled to true.\n","slug":"firefox-local-cert-store","tags":["browsers","enterprise","firefox","software"],"title":"How can I use my local certificate store with Firefox?"},{"content":"There is a CLI tool called xcall which seems to be the only way I\u2019ve seen to actually interact with x-callback-url outside of other applications.\nIt\u2019s a bit wonky in that you have to drag xcall.app to your Applications folder and then either add that to your path or reference the cli tool inside directly.\nHere\u2019s an example of it in use:\n> \/Applications\/xcall.app\/Contents\/MacOS\/xcall -url \"things:\/\/\/version\" -activateApp NO { \"x-things-client-version\" : \"31310506\", \"x-things-scheme-version\" : \"2\" } Annoyingly, this will activate the application in question, if it isn\u2019t already open, but that is the nature of x-callback-url after all.\nIt will take the foreground view upon opening but further invocations won\u2019t trigger it, assuming you use -activateApp NO. If you want it to appear, such as when triggering a search, you can use -activateApp YES instead.\n","slug":"macos-invoke-x-callback-url","tags":["macos","x-callback-url"],"title":"How can I try out x-callback-url commands on macOS?"},{"content":"Recently I noticed that some shell commands on my laptop were executing surprisingly slowly.\nLike most things in the tech world, it was due to a piece of JAMF software locking up anything that was being read.\nI managed to validate this assumption using the command fs_usage which requires sudo.
Here\u2019s an example of it in action.\n> sudo fs_usage | grep zshrc Password: 16:19:22 open \/Users\/marcus\/dotfiles\/zsh\/zshrc.md 0.000021 lugh 16:19:22 open \/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000137 lugh 16:19:22 WrData[A] \/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000324 W lugh 16:19:22 lstat64 \/System\/Volumes\/Data\/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000015 fseventsd 16:19:22 lstat64 dotfiles\/zsh\/.zshrc 0.000005 perl5.28 16:19:22 lstat64 .zshrc 0.000007 perl5.28 16:19:22 lstat64 .zshrc 0.000004 perl5.28 16:19:22 readlink .zshrc 0.000004 perl5.28 16:19:22 stat64 dotfiles\/zsh\/.zshrc\/.stow 0.000002 perl5.28 16:19:22 stat64 dotfiles\/zsh\/.zshrc\/.nonstow 0.000001 perl5.28 16:19:22 stat64 dotfiles\/zsh\/.zshrc 0.000004 perl5.28 16:19:22 fsgetpath \/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000005 Finder 16:19:22 getattrlist \/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000014 Finder 16:19:22 fsgetpath \/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000005 Finder 16:19:22 fsgetpath \/Users\/marcus\/dotfiles\/zsh\/zshrc.md 0.000005 Finder 16:19:22 getattrlist \/Users\/marcus\/dotfiles\/zsh\/zshrc.md 0.000012 Finder 16:19:22 fsgetpath \/Users\/marcus\/.zshrc 0.000005 Finder 16:19:22 getattrlist \/Users\/marcus\/.zshrc 0.000015 Finder 16:19:22 fsgetpath \/Users\/marcus\/zshrc.md 0.000005 Finder 16:19:22 getattrlist \/Users\/marcus\/zshrc.md 0.000014 Finder 16:19:22 fsgetpath \/Users\/marcus\/zshrc.md 0.000003 Finder 16:19:22 getxattr dotfiles\/zsh\/zshrc.md 0.000014 Finder 16:19:22 fsgetpath \/Users\/marcus\/zshrc.md 0.000004 Finder 16:19:22 fsgetpath \/Users\/marcus\/zshrc.md 0.000003 Finder 16:19:23 lstat64 \/System\/Volumes\/Data\/Users\/marcus\/dotfiles\/zsh\/.zshrc 0.000005 fseventsd Now this output doesn\u2019t actually come from my work computer so you won\u2019t see the mentioned JamfAgent but we can walk through this anyway.\nFirst is lugh, a custom and possibly temporary literate markdown tool I use on my dotfiles. 
Next is perl, in the form of GNU Stow followed by macOS Finder doing some things. This gives a really nice breakdown of what is going on.\nYou can even use it to better understand applications, like if you run git status and see all the files that were touched within the .git folder.\nI actually spotted that Yet Another Daemon was touching some of my .git files on my work laptop too. Shoo!\n","slug":"macos-see-file-usage","tags":["enterprise","jamf","macos","performance","terminal"],"title":"How can I see what applications are making my shell commands slow?"},{"content":"If you\u2019ve ever seen those pesky default folders like Public and Movies, the good news is that you can get rid of them.\nYou can\u2019t, or more specifically, you shouldn\u2019t fully delete them as some applications may assume their existence but you can get close enough.\nLet\u2019s say we want to hide Public. You can hide it from Finder like so:\nchflags hidden ~\/Public The next time you navigate to your Home directory using Finder, you\u2019ll see that it\u2019s magically disappeared\nIf you want to hide multiple folders at once, you can use shell brace expansion:\nchflags hidden ~\/{Downloads,Public} If, for whatever reason, you wanted to block anyone or anything from accessing those folders as well, you could use chmod to do that:\nchmod 000 ~\/{Downloads,Public} Personally, I don\u2019t bother with this step but you might have a use for it.\nThe one issue with the above is that those folders will still appear in your terminal and I don\u2019t know about you but that basically makes this whole exercise pointless.\nThere are ways to hide them there too but I haven\u2019t looked into them myself.\n","slug":"macos-hide-home-folders","tags":["housekeeping","macos"],"title":"How can I hide folders in my Home directory?"},{"content":"For those of us who are subject to using corporate VPNs, all sorts of wackiness can occur such as 127.0.0.1 being routed first to another country before trying to resolve
locally.\nYou can see both IPv4 and IPv6 routing entries by running netstat -rn. Personally, I like to just show IPv4 addresses.\nHere\u2019s an example of my route table with WiFi (and ethernet) interfaces disabled:\n> netstat -nr -f inet Routing tables Internet: Destination Gateway Flags Netif Expire 127 127.0.0.1 UCS lo0 127.0.0.1 127.0.0.1 UH lo0 111.0.0 link#1 UmCS lo0 I\u2019ve changed the last entry since I don\u2019t actually know if it\u2019s an internal work address.\n","slug":"macos-view-route-table","tags":["macos","networking","vpn"],"title":"How can I see my route table?"},{"content":"This issue is particularly annoying and I only just discovered it today for the first time.\nHere\u2019s an example of what it looks like\nIn order to install the application so that it bypasses Gatekeeper, you can rerun brew cask install like so:\n> brew cask install --no-quarantine blah > brew reinstall --no-quarantine blah If you\u2019d like to keep this flag enabled all the time, and honestly you might as well, you can also do the following:\n> export HOMEBREW_CASK_OPTS=\"--no-quarantine\" > brew cask install blah ","slug":"macos-homebrew-app-blocked","tags":["gatekeeper","homebrew","macos"],"title":"How can I run a Homebrew application being blocked by Gatekeeper?"},{"content":"I recently ran into this issue when switching my distro to Manjaro.\nI\u2019d find that whenever a different audio source would start playing such as a voice call, notification or even a silent video on the web, my Spotify audio would drop to 0 instantly\nIn order to fix this, all I needed to do was unload the module-role-cork module presumably used by pulseaudio\nYou can toggle it via your terminal to test that it works like so:\n> pactl unload-module module-role-cork # disabled, try spotify and another audio source > pactl load-module module-role-cork # enabled, spotify should be interrupted While I\u2019m not sure how long unload-module persists (I\u2019m guessing until the next restart), you
can achieve the same effect by commenting out the module in the configuration for pulseaudio like so:\n> grep \"cork\" \/etc\/pulse\/default.pa -B 3 ### Cork music\/video streams when a phone stream is active # Disabling this allows audio streams to run over the top of each other # Before this, a newer stream (notification, video) would mute Spotify #load-module module-role-cork Once that\u2019s done, you should be good to go. It seems to work as expected for me anyway.\n","slug":"linux-audio-muting-suddenly","tags":["audio","bugs","linux","spotify"],"title":"Why do some of my applications suddenly get muted on Linux?"},{"content":"On April 20th 2020, oil futures fell to $-37.63 per barrel but how is that possible? That would suggest people are literally paying customers to take oil off their hands.\nIn a sense, that\u2019s exactly the case but perhaps not for quite the reasons you might expect.\nWhat are oil futures? with the pandemic bringing the economy to a standstill, there is so much unused oil sloshing around that American energy companies have run out of room to store it. And if there\u2019s no place to put the oil, no one wants a crude contract that is about to come due.\nIn reality, traders hold a contract that, generally after about 3 months, is translated into a physical delivery of oil.\nFor example, someone may pay $40 per barrel in January. 
Oil may be worth $60 in March so a trader might then sell the contract for a $20 profit, as I understand it anyway.\nFutures are designed for people who actually want to purchase oil, or any future-able item like wheat, corn and so on.\nThat doesn\u2019t stop a whole portion of Wall Street who speculate on these futures however.\nThere are some funny stories about junior traders who have forgotten to sell their contract and have been required to take delivery of hundreds of physical barrels of oil.\nReceiving delivery The city of Cushing, Oklahoma is called the \u201cPipeline Crossroads of the World\u201d and is where a great deal of oil is stored in the United States.\nWhen it comes to oil futures, Cushing is also the one and only designated delivery point.\nNormally, oil is stored there on behalf of those who lease between 50 - 80 million barrels worth of storage.\nWhy did oil futures turn negative? Given Coronavirus meant that most of the world was at home, oil producers had little option but to store their oil.\nWhen it came time for those futures to convert into physical oil, there was no actual storage left for traders to purchase.\nWith nowhere to store their oil, and the foreboding promise of exorbitant storage fees at the delivery site, the holders of oil delivery contracts were forced to pay buyers with oil storage contracts to take the product off their hands, upending the market and sending the entire industry into unknown, negative territory.\nFaced with the horrifying idea of having to drive all the way to Cushing, Oklahoma to somehow receive thousands of barrels of oil, it became more appealing to just pay almost $40\/barrel for someone else to take the future (and impending delivery) off of their hands\nStorage may have been possible but at a time where it was in high demand, the expenses would have been astronomical.\nNot to mention, there was already an overabundance of oil for the foreseeable future so it would take a long time to realise a profit,
if at all.\nSources https:\/\/www.bloomberg.com\/opinion\/articles\/2020-04-22\/nobody-wants-much-oil-right-now https:\/\/www.bloomberg.com\/opinion\/articles\/2020-04-28\/oil-traders-not-sure-they-like-oil https:\/\/www.reuters.com\/article\/us-global-oil-usa-storage\/no-vacancy-main-us-oil-storage-in-cushing-is-all-booked-idUSKCN22332W https:\/\/www.cushingcitizen.com\/news\/oil-turns-red Further reading https:\/\/www.npr.org\/sections\/money\/2016\/08\/26\/491342091\/planet-money-buys-oil ","slug":"finance-oil-futures-negative","tags":["finance","futures"],"title":"Why did oil futures go negative in April 2020?"}]