[{"content":"Introduction My journey started with the simple SDR experiments from Reverse engineering a digital radio signal, but I wanted more.\nI wanted to be able to decode real-world signals! So I stood up from my chair and started looking around my house.\nSoon I found two interesting items:\nThe gas meter, which has a pretty nice label with 169MHz written on it. The heat allocators on the radiators, which are not labeled with a frequency, but must communicate somehow, so I guess they also use radio signals. Although I really wanted to go straight to the radio signals, I decided to approach them step by step, so I started searching for documentation.\nWireless M-Bus Starting point As soon as I started researching, I found out that most of the meters in the world (and especially in Europe) use a protocol called Wireless M-Bus (or WMBus) to communicate their readings.\nWireless M-Bus is a wireless extension of the already existing M-Bus (Meter-Bus) protocol, which is a European standard (EN 13757-2 and EN 13757-3) for reading water, gas, heat, and electricity meters using a wired connection.\nBut where should I start? I needed a real signal to analyze. I tried to capture some signals with my RTL-SDR, but I had no idea what to look for, and so I eventually gave up.\nInstead, after digging deeper into the internet, I discovered several useful resources.\nFirst of all I discovered that:\n\u26a0\ufe0f Wireless M-Bus is a complex protocol with many modes. According to the specification EN 13757-4, Wireless M-Bus defines several modes (S, T, R2, C, N, F). The most supported ones are S, T and C. I needed Mode N, because it is the only one used for the 169MHz frequency band.\nUseful resources First of all, the classic rtl_433 has partial Wireless M-Bus support. However, it only supports a few modes, and mode N is not one of them.\nThen I stumbled upon rtl_wmbus, a tool specifically designed to decode Wireless M-Bus signals captured with RTL-SDR or rtl_433. 
However, again, it only supported a few modes, and mode N was not one of them.\nHowever, I found a particularly interesting GitHub issue in that repository, where contributors were discussing adding Wireless M-Bus Mode N support.\nThe interesting part was that @optiluca had already uploaded some sample captures of Wireless M-Bus signals and had given the link to download those captures!\nAs such, I downloaded the sample captures right away and started analyzing them.\nAnalyzing the captures Even though the captures were not recorded by me, they were still a great starting point.\nI already had the I\/Q Quicklook by triq.org installed, so I chose a nice looking capture by looking at the waterfall, and opened it.\nYou can clearly see that the signal oscillates between two frequencies, which is a strong hint that the modulation used is Frequency Shift Keying (FSK), in this case 2-FSK.\nURH I loaded the capture in URH, set the modulation, and started decoding the signal.\nI started manually decoding the bits in a simple file, but after an enormous amount of effort (weeks passed) I was not satisfied with the manual decoding; there clearly were strange values for simple fields: I was struggling a lot.\nSo I started experimenting with other tools. In particular, with GNU Radio.\nGNU Radio Building a GNU Radio flowgraph was insane to me; I had never experienced such frustration before. Luckily for me, nowadays LLMs are available, so I used one to help me build the flowgraph. Of course it hallucinated many times, but at least it gave me a starting point.\nSo, first of all, I built a simple flowgraph to convert the .cu8 file to a complex signal:\nThen, I just added some normalization to help me with the playground. The throttle block allowed me to slow down the playback speed so that I could visualize parts of the signal more easily when I needed to. 
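The .cu8-to-complex conversion that the flowgraph starts with can also be sketched offline in a few lines of Python. This is my own minimal NumPy sketch, not part of the original flowgraph; the exact centering constant (127.5) is an assumption, since different tools center unsigned 8-bit IQ data slightly differently:

```python
import numpy as np

def cu8_to_complex(raw: bytes) -> np.ndarray:
    """Convert interleaved unsigned 8-bit I/Q samples (the .cu8 format)
    into complex64 samples centered around 0 and scaled to roughly [-1, 1]."""
    iq = np.frombuffer(raw, dtype=np.uint8).astype(np.float32)
    iq = (iq - 127.5) / 127.5  # map [0, 255] -> [-1, 1] (assumed centering)
    return (iq[0::2] + 1j * iq[1::2]).astype(np.complex64)

# Two interleaved I/Q pairs: (255, 128) and (0, 128)
samples = cu8_to_complex(bytes([255, 128, 0, 128]))
```

This is handy for quick experiments in a notebook when you don't want to spin up the whole flowgraph just to look at a capture.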
The resampling was just to use a sample rate that was a multiple of the baud rate.\nThen, I centered the signal around 0 and applied a low-pass filter to remove the noise:\nAnd confirmed that the result was good with a QT GUI Sink block:\nAfter that I tried and retried a lot. I think I created and deleted hundreds of blocks just to finally arrive at a working flowgraph. Then, finally, I kept two different working flowgraphs, that I can tweak in similar but different ways.\nThe first one is simpler; it just uses a built-in GFSK Demod block. The second one uses a Quadrature Demod block followed by a Symbol Sync block and a Binary Slicer block. Both do the same thing, but in different ways: the GFSK Demod block sensitivity (2 * math.pi * fsk_deviation_hz \/ samp_rate) is the inverse of the Quadrature Demod gain (samp_rate \/ (2*math.pi*fsk_deviation_hz)).\nHere is the full flowgraph:\nDownloadable here (right click, save link as): wmbus-mode-n.grc\nThe flowgraph, when run, produces two files that should contain almost the same bitstream.\nTo decode Wireless M-Bus frames from the bitstream, I used a simple Python script, that you can find below:\ndecode-wmbus.py (Click to expand) # decode-wmbus.py from struct import unpack def crc16_en13757_bytes(data): POLY = 0x3D65 INIT = 0x0000 XOROUT = 0xFFFF crc = INIT for byte in data: for i in range(8): bit = (byte &gt;&gt; (7 - i)) &amp; 1 c15 = (crc &gt;&gt; 15) &amp; 1 crc = ((crc &lt;&lt; 1) &amp; 0xFFFF) if c15 ^ bit: crc ^= POLY return crc ^ XOROUT files = [ &#39;g26_gfsk_demod.dat&#39;, &#39;g26_quad_demod_symbol_sync.dat&#39; ] for f in files: print(f&#34;\\n\\nFile: {f}&#34;) # Original bits array from file o_bits = [(byte &gt;&gt; (7 - i)) &amp; 1 for byte in open(f, &#39;rb&#39;).read() for i in range(8)] # Search for preamble and sync preambles_indices = [] preamble = [0,1]*8 sync1 = [1,1,1,1,0,1,1,0] preamble_plus_sync1 = preamble + sync1 for i in range(len(o_bits) - len(preamble_plus_sync1)): if 
o_bits[i:i+len(preamble_plus_sync1)] == preamble_plus_sync1: preambles_indices.append(i) print(f&#34;Preamble found at bit position: {i}&#34;) print(f&#34;Total preambles found: {len(preambles_indices)}&#34;) if not preambles_indices: print(&#34;No preamble found.&#34;) continue for preamble_index in preambles_indices: # Re-align data to preamble_index % 8 aligned_bits = o_bits[preamble_index:] data = bytes( sum(bit &lt;&lt; (7 - i) for i, bit in enumerate(chunk)) for chunk in [aligned_bits[i:i+8] for i in range(0, len(aligned_bits), 8)] if len(chunk) == 8 ) print(f&#34;Realigned data (length {len(data)} bytes): {data.hex()}&#34;) # L1 sync preamble = unpack(&#39;&gt;H&#39;, data[0:2])[0] sync = unpack(&#39;&gt;H&#39;, data[2:4])[0] # L2 fields L_field = unpack(&#39;&gt;B&#39;, data[4:5])[0] C_field = unpack(&#39;&gt;B&#39;, data[5:6])[0] M_field = unpack(&#39;&gt;H&#39;, data[6:8])[0] A1_field = unpack(&#39;&gt;I&#39;, data[8:12])[0] A2_field = unpack(&#39;&gt;H&#39;, data[12:14])[0] # decode M field alphabet = &#34;_ABCDEFGHIJKLMNOPQRSTUVWXYZ+-*\/&#34; M_third_letter = alphabet[(M_field &amp; 0b11111)] M_second_letter = alphabet[(M_field &gt;&gt; 5) &amp; 0b11111] M_first_letter = alphabet[(M_field &gt;&gt; 10) &amp; 0b11111] M_top_bit = (M_field &gt;&gt; 15) &amp; 0b1 # decode A field A_device_type = A2_field &amp; 0xFF A_version = (A2_field &gt;&gt; 8) &amp; 0xFF A_id = A1_field print(f&#34;At preamble index {preamble_index}:&#34;) print(f&#34; Preamble: 0x{preamble:04X}&#34;) print(f&#34; Sync: 0x{sync:02X}&#34;) print(f&#34; L field: 0x{L_field:02X} (length = {L_field} bytes)&#34;) print(f&#34; C field: 0x{C_field:02X}&#34;) print(f&#34; M field: 0x{M_field:04X} ({M_first_letter}{M_second_letter}{M_third_letter}, top bit: {M_top_bit})&#34;) print(f&#34; A field: 0x{A1_field:08X}{A2_field:04X} (ID: 0x{A_id:04X}, Version: 0x{A_version:02X}, Device Type: 0x{A_device_type:02X})&#34;) print() format = None if sync == 0b11110110_10001101: format = &#34;A&#34; elif 
sync == 0b11110110_01110010: format = &#34;B&#34; else: print(f&#34;Unknown mode\/format {sync:04X}&#34;) continue L7_data = bytes() print(f&#34;Mode N, Format {format}&#34;) if format == &#34;A&#34;: CRC_field = unpack(&#39;&gt;H&#39;, data[14:16])[0] actual_crc = crc16_en13757_bytes(data[4:14]) print(f&#34; CRC field: 0x{CRC_field:04X}&#34;) if actual_crc == CRC_field: print(f&#34; CRC valid: 0x{CRC_field:04X}&#34;) else: print(&#34; &#34; + &#34;-&#34;*8 + f&#34;&gt; CRC invalid! Computed: 0x{actual_crc:04X}, Expected: 0x{CRC_field:04X}&#34;) # ... # TODO # ... elif format == &#34;B&#34;: CI_field = unpack(&#39;&gt;B&#39;, data[14:15])[0] print(f&#34; CI field: {CI_field:02X}&#34;) length = min(128, L_field) - 13 print(f&#34; DATA field length: {length} bytes&#34;) DATA_field = data[15:15+length] L7_data += DATA_field CRC_field = unpack(&#39;&gt;H&#39;, data[15+length:15+length+2])[0] actual_crc = crc16_en13757_bytes(data[4:15+length]) print(f&#34; DATA field: {DATA_field.hex()}&#34;) if actual_crc == CRC_field: print(f&#34; CRC valid: 0x{CRC_field:04X}&#34;) else: print(&#34; &#34; + &#34;-&#34;*8 + f&#34;&gt; CRC invalid! Computed: 0x{actual_crc:04X}, Expected: 0x{CRC_field:04X} &lt;&#34; + &#34;-&#34;*8) # If L_field is greater than 128, then there is another optional frame if L_field &gt; length: OPT_DATA_field = data[15+length+2:15+length+2+L_field-129] L7_data += OPT_DATA_field OPT_CRC_field = unpack(&#39;&gt;H&#39;, data[15+length+2+L_field-129:15+length+2+L_field-129+2])[0] actual_opt_crc = crc16_en13757_bytes(OPT_DATA_field) print(f&#34; OPT DATA field: {OPT_DATA_field.hex()}&#34;) if actual_opt_crc == OPT_CRC_field: print(f&#34; OPT CRC valid: 0x{OPT_CRC_field:04X}&#34;) else: print(&#34; &#34; + &#34;-&#34;*8 + f&#34;&gt; OPT CRC invalid! 
Computed: 0x{actual_opt_crc:04X}, Expected: 0x{OPT_CRC_field:04X} &lt;&#34; + &#34;-&#34;*8) print(f&#34;\\nLayer 7 data: {L7_data.hex()}&#34;) This script allowed me to programmatically decode the bitstreams into the following (I stripped one of the two almost identical outputs for brevity):\nPreamble found at bit position: 563 Total preambles found: 1 Realigned data (length 243 bytes): 5555f6728c4434352554741021037db33000800101db00753000001bc688bb11ab6992bfe71c7f4130ed586a02a8eef4bbd86eb0919a242577df74a2a2a0ca187c617c34cf70c9fb4bcbbf3da84c8bf6c48778b6e6213d8885020143a0152c1d3ed5ce45a7c5ce0fb4c5f084e5d8459976e902e2230fd8c13aec20d4e20f08f7022397da7a00ae56451f4e7e157a923e241007e535b071678cc024d505a2a1135bd95326c7f0023584579cd8147359db8996218f38e8258f9ee098dcb8850ec982a25491e404b1e4e915af725c0514513c50d7749d4ba117caa0bb70183209a51ec8134decafeb8498630c06ce686f8949e633 At preamble index 563: Preamble: 0x5555 Sync: 0xF672 L field: 0x8C (length = 140 bytes) C field: 0x44 M field: 0x3435 (MAU, top bit: 0) A field: 0x255474102103 (ID: 0x25547410, Version: 0x21, Device Type: 0x03) Mode N, Format B CI field: 7D DATA field length: 115 bytes DATA field: b33000800101db00753000001bc688bb11ab6992bfe71c7f4130ed586a02a8eef4bbd86eb0919a242577df74a2a2a0ca187c617c34cf70c9fb4bcbbf3da84c8bf6c48778b6e6213d8885020143a0152c1d3ed5ce45a7c5ce0fb4c5f084e5d8459976e902e2230fd8c13aec20d4e20f08f70223 CRC valid: 0x97DA OPT DATA field: 7a00ae56451f4e7e157a92 OPT CRC valid: 0x3E24 Layer 7 data: b33000800101db00753000001bc688bb11ab6992bfe71c7f4130ed586a02a8eef4bbd86eb0919a242577df74a2a2a0ca187c617c34cf70c9fb4bcbbf3da84c8bf6c48778b6e6213d8885020143a0152c1d3ed5ce45a7c5ce0fb4c5f084e5d8459976e902e2230fd8c13aec20d4e20f08f702237a00ae56451f4e7e157a92 And with that, I was finally able to decode Wireless M-Bus Mode N signals!\nInterpreting the Layer 7 data is another story, but at least I managed to get this far.\n","permalink":"https:\/\/bennesp.github.io\/posts\/007-wmbus\/","summary":"My journey to 
demodulate and decode Wireless M-Bus Mode N signals using SDR","title":"Reverse engineering Wireless M-Bus Mode N"},{"content":"1. Explore the spectrum with SDR++ SDR++ is one of the best SDR applications available. It is really user-friendly and feature-rich.\nI generally use SDR++ to explore the radio spectrum and find interesting signals.\nIt also allows you to listen to cleartext AM\/FM signals, which is always fun.\n2. Recording a signal TLDR: Do not record the signal with SDR++. Instead, use rtl_sdr to record the IQ data.\nAfter finding an interesting signal, I usually want to record it for further analysis. SDR++ allows you to record the bandwidth of the signal you are interested in to a WAV file. However, for proper signal analysis, I prefer to record the original IQ data. IQ data preserves all the signal information needed for digital signal processing.\nWhile SDR++ can record the entire baseband spectrum, working with these large files can be challenging for targeted analysis. For more focused signal investigation, specialized tools like rtl_sdr or rtl_433 are often handier.\nBoth tools are capable of recording raw IQ data directly from RTL-SDR devices. The real difference for our purpose lies in how they handle the recorded data.\n2.1. Using rtl_sdr to record IQ data To record the IQ data with rtl_sdr, you can use the following command:\nrtl_sdr -f 433.92M -s 2M -g 20 output.complex16u where:\n-f 433.92M specifies the frequency of the signal you want to record (in this case, 433.92 MHz) -s 2M sets the sample rate to 2 MHz, which determines how many samples per second are captured -g 20 sets the gain to 20 dB, which amplifies the signal for better reception output.complex16u is the name of the output file where the IQ data will be saved The gain in particular is one of the parameters you can play with in SDR++ before recording the signal. It can significantly affect the quality of the recorded IQ data.\n2.2. 
Using rtl_433 to record IQ data (and possibly decoding it) rtl_433 is not only capable of recording data, but also capable of demodulating and decoding signals from various known devices.\nTo record the IQ data with rtl_433, you can use the following command:\nrtl_433 -f 433.92M -s 2M -g 20 -w output.cu8 where:\n-f 433.92M specifies the frequency of the signal you want to record (in this case, 433.92 MHz) -s 2M sets the sample rate to 2 MHz, which determines how many samples per second are captured -g 20 sets the gain to 20 dB, which amplifies the signal for better reception -w output.cu8 is the name of the output file where the IQ data will be saved The real power of the recording feature in rtl_433 is that you don't have to save data continuously: you can choose to save only when a signal is detected. This can significantly reduce the size of the recorded file and make it easier to analyze later.\nTo save signals that are known to rtl_433, you can use the -S option followed by known. If you instead want to save unknown signals, you can use -S unknown. To save all signals, you can use -S all.\nFor example, to record data on 169.4MHz only when an unknown signal is detected, you can use the following command:\nrtl_433 -f 169.4M -s 250k -g 20 -S unknown This command will generate multiple files, each containing the IQ data of a detected unknown signal.\n3. Analyze the recorded signal in URH URH is another great tool. It allows you to deeply analyze the recorded IQ data and reverse engineer the signal.\nBeware that when you import a file into URH, the extension is very important. 
URH supports these formats:\n.complex for float32 I and float32 Q .complex16u for unsigned 8bit I and unsigned 8bit Q .complex16s for signed 8bit I and signed 8bit Q .wav for PCM audio A comprehensive list of supported file formats is also given by rtl_433 itself at this link (Spoiler: cu8 is the same as complex16u)\n(see more about URH in the URH user guide)\nSince rtl_sdr and rtl_433 produce uint8 interleaved IQ files, you can import the recorded signal into URH simply by using the .complex16u or .cu8 extensions.\n4. Analyze the signal 4.1. Isolate the signal The very first thing to do in URH is to isolate the signal of interest from the rest of the recorded data. By doing so, every step afterwards will be much easier.\nGo to &ldquo;Signal view&rdquo; -&gt; &ldquo;Spectrogram&rdquo; to visualize the signal in the frequency domain.\nHere you can play with the FFT window size:\nIf very small, the time domain resolution will be very high, but the frequency domain resolution will be very low. (note how many yellow rectangles you can see in the spectrogram in the horizontal direction) If very large, the time domain resolution will be very low, but the frequency domain resolution will be very high. (note how many different frequencies you can see in the spectrogram in the vertical direction: it seems a completely different signal, but it&rsquo;s the same one as before!) 
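The trade-off between the two extremes can be made concrete with a quick calculation: each spectrogram column spans fft_size \/ samp_rate seconds, while each row spans samp_rate \/ fft_size hertz. The 2.4 MHz sample rate and the FFT sizes below are hypothetical numbers, just to show the arithmetic:

```python
samp_rate = 2_400_000  # Hz (hypothetical sample rate)

def spectrogram_resolution(fft_size: int, samp_rate: float):
    """Return (time resolution in seconds, frequency resolution in Hz)
    for a spectrogram computed with the given FFT window size."""
    return fft_size / samp_rate, samp_rate / fft_size

t_small, f_small = spectrogram_resolution(64, samp_rate)     # ~27 us per column, 37500 Hz per row
t_large, f_large = spectrogram_resolution(16384, samp_rate)  # ~6.8 ms per column, ~146 Hz per row
```

Growing the window buys frequency resolution at the cost of time resolution, which is exactly what you see when you drag the FFT window size slider.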
For isolating the signal frequencies, a large FFT window size is what we need.\nAfter finding your signal by playing with the FFT window size, you can isolate it by selecting your signal, right-clicking on it, and choosing &ldquo;Apply bandpass filter&rdquo;.\nThis will create a new signal that contains only the frequencies of interest, making it easier to analyze.\nYou can repeat this process with different filter parameters (using narrower bandpass filters if you have multiple frequencies near each other) until you can clearly see only the signal of interest in the spectrogram and also in the time domain view.\n4.2. Reduce the noise URH allows you to leverage the &ldquo;Noise&rdquo; parameter to ignore every signal below a certain threshold (the red rectangular line in the time domain view).\nYou must play with this parameter to find the best value that ignores the noise while keeping the red rectangular line below the signal of interest.\n4.3. Demodulate the signal: choosing the modulation type Now you have many different options to choose from to demodulate the signal. URH supports:\nASK (so also OOK) FSK PSK 4.3.1. OOK OOK is very easy to visually recognize in the waterfall: you will see a clear on-off keying pattern, where the signal is either present (high) or absent (low), as if it were a Morse code signal. The example already seen (also shown below) is a perfect example of OOK.\nOOK can be demodulated by selecting ASK in URH, since OOK is a special case of ASK where the signal switches between zero amplitude and a single fixed level.\n4.3.2. ASK ASK is very similar to OOK, but the number of amplitude levels is not limited to just &ldquo;off&rdquo; and &ldquo;on&rdquo;. 
In fact, ASK has different levels of amplitude, which can be used to encode more than just binary data.\nTo recognize ASK, you can look at the frequency domain (the waterfall) where you will see a signal around a single frequency that increases and decreases in amplitude.\nSee the example from hackaday comparing ASK (on the left) with OOK (on the right):\n4.3.3. (G)FSK FSK (Frequency Shift Keying) encodes data by switching between N frequencies. The most common implementation is 2-FSK (also called BFSK), which alternates between two frequencies to represent binary 0s and 1s.\nFSK signals are easily recognizable by looking in the frequency domain, since you will see the signal jumping between two (or more) distinct frequencies.\nSince FSK transitions are abrupt (and therefore spectrally wide), the Gaussian variant of FSK (GFSK) is often used instead, which smooths the transitions between frequencies, reducing bandwidth and minimizing interference.\nThe visual difference between FSK and GFSK is that GFSK has smoother transitions between frequencies, while FSK has more abrupt changes. However, in practice, FSK and GFSK can be treated very similarly for demodulation purposes, so you don&rsquo;t need to worry too much about distinguishing between them.\nIn the example above, I highlighted in blue the two frequencies used by the 2-GFSK signal. The signal pattern repeats multiple times, making the frequency switching behavior easier to observe.\n4.3.4. PSK I&rsquo;ve not yet encountered a PSK signal, so I will need to do more research on this topic. :)\n4.4. Demodulate the signal: parameters Once you have chosen the modulation type, you can start demodulating the signal by playing with the parameters.\nThe most important one is the &ldquo;Sample\/Symbol&rdquo; parameter, which defines how many samples are used to represent a single symbol (bit).\nGiven the sampling rate Fs and the baud rate Rb, then sps can be calculated as:\nsps = Fs \/ Rb So if you are sampling at 2.4MHz and you know the baud rate is 4.8Kbps, then the Sample\/Symbol is:\nsps = 2,400,000 \/ 4,800 = 500 If you don&rsquo;t know the baud rate, you must proceed with trial and error, which can be frustrating at the beginning. However, a &ldquo;guided&rdquo; trial and error can be done in URH with the following tips:\nTry the autodetect feature in URH, which can give you a good starting point Try to imagine what protocol can be used by the device you are trying to reverse engineer, and look for its baud rate online And of course use the &ldquo;Demodulated&rdquo; view! Even if it sounds like you are not achieving anything (see screenshot below)\nmaybe it&rsquo;s just because you need to zoom in a little bit more&hellip;\nAlso always remember you can select a portion of the signal and see the corresponding demodulated bits only for that portion and vice versa.\n4.5. Decoding the signal Once the demodulation seems good enough, you can proceed to decode the signal. In reality, you will likely need to go back and forth between demodulation and decoding multiple times until you get a good result.\nDecoding the signal involves finding the right frame structure, preamble, sync word, and encoding scheme used by the protocol. Without at least a vague idea of the protocol, it is just a matter of guessing.\nBelow you can see an example of a successfully demodulated and decoded Wireless M-Bus Mode N signal.\nConclusions That&rsquo;s it for now! 
I hope this page will help at least someone to get started with reverse engineering digital radio signals using SDR.\n","permalink":"https:\/\/bennesp.github.io\/posts\/006-sdr-pipeline\/","summary":"Explore the process of reverse engineering a digital radio signal using SDR.","title":"Reverse engineering a digital radio signal"},{"content":"Cilium is now installed after these steps, but it doesn&rsquo;t mean much as of now. To see the tip of the benefits iceberg, we need to explore Hubble.\nHubble is the observability layer of Cilium. It allows you to monitor the network traffic flowing through your nodes and pods.\n1. Install Hubble Install it with:\nhelm upgrade cilium cilium\/cilium \\\n  --namespace kube-system \\\n  --reuse-values \\\n  --set hubble.relay.enabled=true \\\n  --set hubble.ui.enabled=true\nAnd wait for Cilium to be ready with cilium status --wait:\ncilium status --wait after some minutes while Hubble is being installed\n2. Explore Hubble UI Hubble UI is the web interface to explore the data collected by Hubble.\nEnable it with:\ncilium hubble ui\nOpen the URL in your browser and select the namespace you want to explore.\nHubble UI service graph, in a namespace of choice\n3. Explore Hubble CLI The main potential of Hubble can be glimpsed by using its CLI.\nFirst of all, let&rsquo;s start the port-forwarding to the Hubble Relay:\nkubectl port-forward -n kube-system deploy\/hubble-relay 4245:grpc\nThen, we can use the hubble observe command to see the traffic flowing through the network (-f follows the traffic):\nhubble observe --namespace traefik -f\n4. Explore L7 visibility Cilium and Hubble can provide Layer 7 visibility, meaning that you can see the HTTP or DNS requests flowing through your network in real-time, and even filter them.\nTo see L7 flows, we need to enable L7 visibility for a specific namespace or pod. 
However, before enabling L7 visibility, we need to have a CiliumNetworkPolicy in place.\nThis is a simple tailored policy for one of my application pods.\napiVersion: cilium.io\/v2\nkind: CiliumNetworkPolicy\nmetadata:\n  name: immich-l7-visibility\n  namespace: immich\nspec:\n  endpointSelector:\n    matchLabels:\n      k8s:io.kubernetes.pod.namespace: immich\n      app.kubernetes.io\/name: server\n  ingress:\n  # Allow all traffic coming from otel namespace to metrics ports\n  - fromEndpoints:\n    - matchLabels:\n        io.kubernetes.pod.namespace: otel\n    toPorts:\n    - ports:\n      - port: &#34;8081&#34;\n        protocol: TCP\n      rules:\n        http: [{}]\n    - ports:\n      - port: &#34;8082&#34;\n        protocol: TCP\n      rules:\n        http: [{}]\n  # Allow all traffic coming from the internet to http port\n  - fromEndpoints:\n    - matchLabels:\n        app.kubernetes.io\/name: traefik\n        io.kubernetes.pod.namespace: traefik\n    toPorts:\n    - ports:\n      - port: &#34;2283&#34;\n        protocol: TCP\n      rules:\n        http: [{}]\n  egress:\n  # Allow all traffic going to the DNS server\n  - toEndpoints:\n    - matchLabels:\n        io.kubernetes.pod.namespace: kube-system\n    toPorts:\n    - ports:\n      - port: &#34;53&#34;\n        protocol: UDP\n      rules:\n        dns:\n        - matchPattern: &#34;*&#34;\n  # Allow all traffic going to Redis\n  - toEndpoints:\n    - matchLabels:\n        app.kubernetes.io\/name: redis\n        io.kubernetes.pod.namespace: immich\n    toPorts:\n    - ports:\n      - port: &#34;6379&#34;\n        protocol: TCP\n  # Allow all traffic going to PostgreSQL\n  - toEndpoints:\n    - matchLabels:\n        app.kubernetes.io\/name: postgresql\n        io.kubernetes.pod.namespace: immich\n    toPorts:\n    - ports:\n      - port: &#34;5432&#34;\n        protocol: TCP\n  # Allow all traffic going to ML microservices\n  - toEndpoints:\n    - matchLabels:\n        app.kubernetes.io\/name: machine-learning\n        io.kubernetes.pod.namespace: immich\n    toPorts:\n    - ports:\n      - port: &#34;3003&#34;\n        protocol: TCP\nNote: the http and dns rules blocks are the ones that enable L7 visibility for the specified ports.\nIn 
this case I enabled L7 visibility for 4 ports: 8081, 8082, 2283, and 53.\nAfter that, we can simply observe L7 traffic with:\nhubble observe --namespace immich -f -t l7\nand we can see all the flows with their details, including HTTP URLs, status codes and DNS queries!\nHubble CLI showing L7 visibility for a specific namespace\n","permalink":"https:\/\/bennesp.github.io\/posts\/005-cilium-hubble\/","summary":"Some neat features of Hubble, leveraged by Cilium","title":"Explore Cilium Hubble \ud83d\udef0\ufe0f"},{"content":"Suppose you have a running k3s cluster and would like to explore Cilium. You don&rsquo;t want to start from scratch because you would have to redeploy all your applications and possibly lose data.\nHowever, migrating the k3s CNI doesn&rsquo;t seem to be officially supported (see this discussion).\nSo here you are, copying and pasting from the internet and hoping it works.\nThese are the (minimum) steps I followed to migrate my k3s cluster to Cilium during a coffee break from work \u2615\ufe0f\nSteps Prerequisites A running K3s cluster The Cilium CLI tool installed \ud83d\udd0c 1. Disable previous CNI Edit file \/etc\/rancher\/k3s\/config.yaml (create it if it doesn&rsquo;t exist)\nflannel-backend: none\ndisable-network-policy: true\nAnd simply restart k3s to apply the changes.\nsudo systemctl restart k3s \ud83d\udd0c 2. Remove old flannel interface I am not sure why this is not done automatically, but it is a mandatory step: otherwise Cilium will fail to start, complaining that it cannot create a vxlan interface.\nip link delete flannel.1 \ud83d\udce6 3. Install Cilium export KUBECONFIG=\/etc\/rancher\/k3s\/k3s.yaml\ncilium install --set ipam.operator.clusterPoolIPv4PodCIDRList=&#34;10.42.0.0\/16&#34;\ncilium status --wait\nAfter some seconds\/minutes, Cilium will be ready.\nCilium after some minutes when it is ready\n\u26a0\ufe0f 4. 
Restart all the pods Since Cilium is now managing the network, but pods are still using the old CNI, you must restart all your pods.\nThis is one of the reasons I believe that many providers do not support this migration.\n4.1. Restart all the pods safely (more or less) kubectl get ns -o name | sed &#39;s|namespace\/||&#39; | xargs -I{} kubectl rollout restart deployment -n {}\nkubectl get ns -o name | sed &#39;s|namespace\/||&#39; | xargs -I{} kubectl rollout restart statefulset -n {}\nkubectl get ns -o name | sed &#39;s|namespace\/||&#39; | xargs -I{} kubectl rollout restart daemonset -n {}\n4.2. Simpler way of restarting all the pods However, if you can accept the risk of doing so (and of course seconds\/minutes of downtime), you can simply delete all the pods, causing them to be restarted with the new CNI.\nkubectl delete pod --all --all-namespaces ","permalink":"https:\/\/bennesp.github.io\/posts\/004-k3s-cilium\/","summary":"A step-by-step guide to migrate your K3s cluster from default CNI to Cilium","title":"Migrate K3s to Cilium"},{"content":"Brewfile Brewfile is a feature of brew, also referred to as brew bundle.\nWith a Brewfile you can declaratively install things like:\nnormal packages (as with brew install package) taps (as with brew tap homebrew\/cask) casks (as with brew install --cask package) mas (Mac App Store packages, as with mas install package) whalebrew (as with whalebrew install repo\/docker-image) Usage From scratch Create a file called Brewfile wherever you want (for example in your home directory) and type brew bundle in a terminal to let brew read, update and install everything.\nFrom existing system If you already have packages installed and don&rsquo;t want to write a Brewfile from scratch, you can dump your packages with brew bundle dump.\nForce cleanup If you want your Brewfile to match 1:1 with your system, you can use brew bundle cleanup --force to remove all packages that are not in your Brewfile.\nExample Example of a Brewfile, 
taken from the official GitHub repository:\n# &#39;brew tap&#39;\ntap &#34;homebrew\/cask&#34;\n# &#39;brew tap&#39; with custom Git URL\ntap &#34;user\/tap-repo&#34;, &#34;https:\/\/user@bitbucket.org\/user\/homebrew-tap-repo.git&#34;\n# set arguments for all &#39;brew install --cask&#39; commands\ncask_args appdir: &#34;~\/Applications&#34;, require_sha: true\n# &#39;brew install&#39;\nbrew &#34;imagemagick&#34;\n# &#39;brew install --with-rmtp&#39;, &#39;brew services restart&#39; on version changes\nbrew &#34;denji\/nginx\/nginx-full&#34;, args: [&#34;with-rmtp&#34;], restart_service: :changed\n# &#39;brew install&#39;, always &#39;brew services restart&#39;, &#39;brew link&#39;, &#39;brew unlink mysql&#39; (if it is installed)\nbrew &#34;mysql@5.6&#34;, restart_service: true, link: true, conflicts_with: [&#34;mysql&#34;]\n# &#39;brew install --cask&#39;\ncask &#34;google-chrome&#34;\n# &#39;brew install --cask --appdir=~\/my-apps\/Applications&#39;\ncask &#34;firefox&#34;, args: { appdir: &#34;~\/my-apps\/Applications&#34; }\n# always upgrade auto-updated or unversioned cask to latest version even if already installed\ncask &#34;opera&#34;, greedy: true\n# &#39;brew install --cask&#39; only if &#39;\/usr\/libexec\/java_home --failfast&#39; fails\ncask &#34;java&#34; unless system &#34;\/usr\/libexec\/java_home --failfast&#34;\n# &#39;mas install&#39;\nmas &#34;1Password&#34;, id: 443987910\n# &#39;whalebrew install&#39;\nwhalebrew &#34;whalebrew\/wget&#34;\n","permalink":"https:\/\/bennesp.github.io\/posts\/brewfile\/","summary":"Brewfile is literally a package.json for macOS <code>brew<\/code>. Every installed software on your system is defined in the Brewfile, in a <strong>declarative<\/strong> way.","title":"~\/Brewfile"},{"content":" \u2139\ufe0f Every snippet that uses a file can be onelined using cat, like:\ncat &lt;&lt; EOF | ldapcommand\nldif file content here\nEOF\nCommands In the snippets below we use some arguments. 
Here are the arguments per-tool, explained:\nldapadd, ldapmodify, ldapsearch -D &quot;cn=admin,dc=example,dc=org&quot; is the username, called &ldquo;Binding DN&rdquo; -w admin is the password -f filename.ldif specifies which LDIF file to use to add\/modify\/search ldapsearch only -b &quot;dc=example,dc=org&quot; is the base address where to start the search Snippets Create a new user # file.ldif\n# mandatory:\ndn: cn=billy,dc=example,dc=org\ncn: billy\nobjectClass: inetOrgPerson\nsn: Jones # sn=Surname\nuserPassword: billy\n# userPassword can be hashed with slappasswd, see next snippet\n# userPassword: {SSHA}g1RqOK2kk\/nV241xD2BZRdxdVuRqGNnG\n# optionally:\nuid: 123\ngivenName: Billy &amp; Mandy\nmail: billy@example.org\nldapadd -D &#34;cn=admin,dc=example,dc=org&#34; -w admin -f file.ldif\nGenerate hashed password slappasswd -g # generates a random non-hashed password\nslappasswd # hashes a password with the default hash (like SSHA = Salted SHA)\nslappasswd -h &#39;{CRYPT}&#39; # generates a hashed password with the CRYPT hash algorithm\n# oneline:\nPASSWORD=$(slappasswd -g) &amp;&amp; echo $PASSWORD &amp;&amp; slappasswd -s &#34;$PASSWORD&#34;\nExport all the readable objects ldapsearch -D &#34;cn=admin,dc=example,dc=org&#34; -w admin -b &#34;dc=example,dc=org&#34;\nAllow every authenticated user to read the directory dn is the selector of the database (you could have hdb instead of mdb if your LDAP is older) olcAccess {1} is the priority among all the other rules (can be negative, rules are applied from the smallest number to the highest) to * is the resource to which the rule applies, could be to dn.base=&quot;dc=example,dc=org&quot; or similar, see docs for more info by anonymous auth allows every un-authenticated user to authenticate by users read allows every authenticated user to read the object specified in the to clause # config.ldif\ndn: olcDatabase={1}mdb,cn=config\nchangetype: modify\nadd: olcAccess\nolcAccess: {1}to * by anonymous auth by users read 
ldapmodify -D &#34;cn=admin,cn=config&#34; -w config -f config.ldif ","permalink":"https:\/\/bennesp.github.io\/posts\/ldap-snippets\/","summary":"Useful snippets for managing LDAP from the CLI","title":"LDAP Snippets"},{"content":"Snippets If you are looking only for snippets, go to Snippets.\nWhy LDAP If you need to use LDAP, I&rsquo;m really sorry, it simply sucks.\nIt is a very old idea with many limitations, but sadly still widely used.\nIf you search in Google Images or similar for &ldquo;LDAP&rdquo; you will only find interfaces that remind me of Windows 95, like the following:\nA very old and ugly interface that I don&rsquo;t want to see in my life\nBut as I previously said, it is just used a lot, so sooner or later we will have to face it.\nWhat is LDAP LDAP is a protocol for organizing data in a hierarchical structure. Often this data represents the &ldquo;users&rdquo; and &ldquo;organizations&rdquo; of a company.\nAn implementation of LDAP is OpenLDAP, and a convenient container image of OpenLDAP is osixia\/openldap.\nHow LDAP Basic setup LDAP Server setup The simplest setup, using Docker:\ndocker run --rm -ti -p 389:389 osixia\/openldap:1.5.0 or using docker-compose:\nservices: ldap: image: osixia\/openldap:1.5.0 ports: - &#34;389:389&#34; You can now play with it using a tool to manage the LDAP server.\nLDAP Client setup There are a lot of tools, but the only usable, non-antiquated one I found is jxplorer. It is written in Java, so it is pretty portable. It has two &ldquo;modes&rdquo;: &ldquo;HTML View&rdquo; and &ldquo;Table Editor&rdquo;. &ldquo;HTML View&rdquo; is terrible, so I always use the &ldquo;Table Editor&rdquo;.\nIt looks like this:\nJxplorer\nYou will want to log in to the LDAP server. The &ldquo;login&rdquo; process in LDAP is called the &ldquo;binding&rdquo; process. 
So you need to &ldquo;bind&rdquo; to the server.\nThe default credentials that the container image provides are currently:\nUser DN: cn=admin,dc=example,dc=org Password: admin Login window\nIf they differ, refer to the documentation of the image on GitHub.\ncn? dc? ou? In LDAP we refer to an object in a tree with its full &ldquo;path&rdquo;.\nSo cn=admin,dc=example,dc=org means &ldquo;the object called cn=admin, inside the object called dc=example, which is inside the object called dc=org&rdquo;.\nWhy cn and dc?\nBecause objects are just a bunch of properties, and thus we need to define a property that identifies the object. Why not use the same property for every object? Who knows.\nAnyway:\ndc (domain component) is used for directories cn (common name) is used for leaf nodes like users ou (organizational unit) is used for &hellip;yes, organizational units It is just a convention.\nThis is an example structure (with the first three nodes compressed):\nExample of structure\nHow to add an object? Since an object is just a bunch of properties inside a directory, we need to understand two things to add an object:\nWhat is the &ldquo;path&rdquo; of the object? What are the properties of the object? Let&rsquo;s suppose we want to create a top-level organizational unit, for example the organizational unit for the &ldquo;Quality&rdquo; department of our company, which is called example.org.\nWe will create the object:\nInside dc=example,dc=org, so that it will be ou=Quality,dc=example,dc=org With the properties that an organizational unit should have Properties are defined in groups by &ldquo;objectClasses&rdquo;. 
Each object class has a name and a collection of properties.\nIf we create a new node and search among the available object classes, we can find organizationalUnit:\nobjectClasses\nAdding it and submitting the final object will create a new object in the directory structure.\nQuality organizational unit added\nCLI ftw&hellip; or wtf Personally I really like UIs, since they make it easier to understand what is going on, but when automation is needed, the CLI is the obvious best option.\nBut, as I said before, LDAP is old AF. So the CLI is terrible.\nThese are some tools (pre-installed on macOS):\nldapadd ldapdelete ldapmodify ldappasswd ldapurl ldapcompare ldapexop ldapmodrdn ldapsearch ldapwhoami slapacl slapauth slapconfig slapindex slapschema slapadd slapcat slapdn slappasswd slaptest Yes, there are a lot of them, but the most useful are ldapadd and ldapsearch.\n","permalink":"https:\/\/bennesp.github.io\/posts\/ldap-setup\/","summary":"Why, what and how LDAP","title":"LDAP Setup"}]
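The same ou=Quality object created through jxplorer above can also be sketched from the CLI. This is a sketch, not the post's own steps: the file name quality.ldif is arbitrary, and the Bind DN and password are the container defaults (cn=admin,dc=example,dc=org / admin) mentioned earlier.

```shell
# quality.ldif: the object's "path" (DN) plus the objectClass that
# provides the properties an organizational unit should have
cat > quality.ldif <<'EOF'
dn: ou=Quality,dc=example,dc=org
objectClass: organizationalUnit
ou: Quality
EOF

# bind as the admin user and add the object
ldapadd -D "cn=admin,dc=example,dc=org" -w admin -f quality.ldif

# verify it was created by searching from its DN as the base
ldapsearch -D "cn=admin,dc=example,dc=org" -w admin \
  -b "ou=Quality,dc=example,dc=org"
```

This mirrors the two questions from the post: the dn line answers "what is the path?", and organizationalUnit answers "what are the properties?".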