[{"content":"The MVG Observatory Project collects real-time departure data from Munich&rsquo;s public transport system. The data is organized hierarchically: the top level contains date-based folders, each containing subfolders named after station IDs. These station folders hold multiple JSON files that capture departure information throughout the day.\nEach station&rsquo;s data is stored in two types of files: *_body.json and *_meta.json. The body files contain either API error messages or JSON arrays of responses, which are imported into the mvg.responses table in Clickhouse. The corresponding meta files store request metadata (sharing the same timestamp as their body files) and are imported into the mvg.requests table.\nAt the end of each day, the top-level folder is automatically archived into a zstd compressed file. To analyze this data, the contents must be imported into a Clickhouse database for processing.\n20240615\/ \u251c\u2500\u2500 de:09162:1 \u2502 \u251c\u2500\u2500 1718409659_body.json \u2502 \u251c\u2500\u2500 1718409659_meta.json \u2502 \u251c\u2500\u2500 ... \u251c\u2500\u2500 de:09166:1 \u2502 \u251c\u2500\u2500 1718409734_body.json \u2502 \u251c\u2500\u2500 1718409734_meta.json \u2502 \u251c\u2500\u2500 ... \u251c\u2500\u2500 ... Status Quo: mvg-analyser The initial data processing solution, mvg-analyser, is a Ruby script that extracts and processes data from data.mvg.auch.cool into Clickhouse. The script streams compressed archives directly into memory, avoiding the need to write decompressed files to disk.\nHowever, both the request and response data lack essential contextual information. 
To facilitate debugging during analysis, each entry needs to be enriched with:\nDateTime (from the folder name) Station ID (from the subfolder name) Timestamp (from the file prefix) The current workflow operates as follows:\nParse JSON from each file Extract context from the filepath Enrich the data by adding context to the hash object Use the clickhouse-ruby gem to serialize and submit via HTTP(S) While the script reduces HTTP overhead by batching 100,000 entries before submission, it has several limitations:\nBottlenecks Sequential Processing\nCurrent dataset: 250 archives containing ~270,000 files each Single-threaded execution leaves most CPU cores idle Expensive Data Operations\nEach file requires individual JSON parsing Data enrichment is performed on every record High cumulative processing overhead across millions of files Language Limitations\nRuby, while flexible, isn&rsquo;t optimized for high-performance data processing Benefits Resource Efficiency Minimal memory footprint Safe interruption at archive boundaries Checkpointing enables reliable resume functionality Current Performance Metrics Environment: Hetzner CX32 Processing time: ~12 hours for a complete dataset Even with more hardware, the only factor that would significantly improve performance is clock speed. So even with the dedicated server used below, the compute time would not improve much.\nMaking it Fast: Leveraging Clickhouse&rsquo;s Native Capabilities While the initial plan was to rewrite the tool in Go with parallelization in mind, exploring Clickhouse&rsquo;s rich feature set revealed a more elegant solution. 
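To make the status quo concrete, the per-file work of the Ruby script can be sketched roughly like this. This is a simplified illustration, not the actual mvg-analyser code; the method and variable names are made up, only the path layout is taken from the archive structure above:

```ruby
require "json"

# Simplified sketch of the per-file work: parse the JSON body, then
# enrich every record with context taken from the file path, e.g.
# "20240414/de:09162:1/1713052847_body.json".
def enrich(path, json)
  datestring, station, filename = path.split("/")
  timestamp = filename.split("_").first.to_i
  JSON.parse(json).map do |record|
    record.merge(
      "datestring" => datestring,
      "station"    => station,
      "timestamp"  => timestamp
    )
  end
end

rows = enrich("20240414/de:09162:1/1713052847_body.json", '[{"label":"N40"}]')
```

The real script batches 100,000 such rows before handing them to clickhouse-ruby, which keeps HTTP overhead low but still pays the parse-and-merge cost for every single record.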
The key breakthrough came from utilizing Clickhouse&rsquo;s built-in data handling functions.\nDirect File Processing Clickhouse&rsquo;s file function enables direct data reading from various sources:\nSELECT * FROM file(&#39;cleaned\/20240807\/de:09162:1\/1722990560_body.json&#39;) LIMIT 1 \u250c\u2500plannedDepartureTime\u2500\u252c\u2500realtime\u2500\u252c\u2500delayInMinutes\u2500\u252c\u2500realtimeDepartureTime\u2500\u252c\u2500transportType\u2500\u252c\u2500label\u2500\u252c\u2500divaId\u2500\u252c\u2500network\u2500\u252c\u2500trainType\u2500\u252c\u2500destination\u2500\u252c\u2500cancelled\u2500\u252c\u2500sev\u2500\u2500\u2500\u252c\u2500stopPositionNumber\u2500\u252c\u2500messages\u2500\u252c\u2500bannerHash\u2500\u252c\u2500occupancy\u2500\u252c\u2500stopPointGlobalId\u2500\u252c\u2500platform\u2500\u252c\u2500platformChanged\u2500\u2510 1. \u2502 1722990900000 \u2502 true \u2502 0 \u2502 1722990900000 \u2502 TRAM \u2502 N17 \u2502 32917 \u2502 swm \u2502 \u2502 Effnerplatz \u2502 false \u2502 false \u2502 2 \u2502 [] \u2502 \u2502 LOW \u2502 de:09162:1:2:3 \u2502 \u1d3a\u1d41\u1d38\u1d38 \u2502 \u1d3a\u1d41\u1d38\u1d38 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 This produces a nicely formatted table with all departure data. 
The function also supports:\nRemote files via url() or s3() functions Automatic file format detection (CSV, JSON, Parquet) Direct reading of compressed files with automatic format detection Native Zstandard support Archived File Processing But as mentioned earlier, one major benefit of our initial tool is direct in-memory processing of the data, and decompressing it would take quite a lot of disk space. Luckily, Clickhouse can read from compressed files (the compression algorithm is automatically detected), and Zstandard is of course supported.\nSELECT * FROM file(&#39;cleaned\/20240807.tar.zst&#39;) LIMIT 1 \u250c\u2500c1\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u
2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 1. \u2502 20240807\/0000755000000000000000000000000014654534474010502 5ustar rootroot20240807\/de:09162:1\/0000755000000000000000000000000014655005141011722 5ustar rootroot20240807\/de:09162:1\/1722988805_body.json0000644000000000000000000002504314654534407014752 | \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 But that does no longer look like the nicely formatted table we got when 
parsing the file directly. That is due to the fact that Clickhouse expects only one file in the archive to be parsed. Not only do we have multiple files in different subfolders, we also have two types of files (requests and responses) that we want to import into different tables.\nBut there is a solution for that: Since Clickhouse 23.8, file() supports specifying the path attribute that also supports globs. That means we can explicitly filter for our _meta.json and _body.json files like so:\nSELECT * FROM file(&#39;*.tar.zst :: *\/*\/*_body.json&#39;) LIMIT 1 \u250c\u2500plannedDepartureTime\u2500\u252c\u2500realtime\u2500\u252c\u2500delayInMinutes\u2500\u252c\u2500realtimeDepartureTime\u2500\u252c\u2500transportType\u2500\u252c\u2500label\u2500\u252c\u2500divaId\u2500\u252c\u2500network\u2500\u252c\u2500trainType\u2500\u252c\u2500destination\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500cancelled\u2500\u252c\u2500sev\u2500\u2500\u2500\u252c\u2500stopPositionNumber\u2500\u252c\u2500messages\u2500\u252c\u2500bannerHash\u2500\u252c\u2500occupancy\u2500\u252c\u2500stopPointGlobalId\u2500\u252c\u2500platform\u2500\u252c\u2500platformChanged\u2500\u2510 1. 
\u2502 1713052800000 \u2502 true \u2502 1 \u2502 1713052860000 \u2502 BUS \u2502 N40 \u2502 33N40 \u2502 swm \u2502 \u2502 Klinikum Gro\u00dfhadern \u2502 false \u2502 false \u2502 8 \u2502 [] \u2502 \u2502 LOW \u2502 de:09162:1:7:7 \u2502 \u1d3a\u1d41\u1d38\u1d38 \u2502 \u1d3a\u1d41\u1d38\u1d38 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Data Enrichment That 
is almost everything we need. But if we remember that the main reason for parsing the JSON in the first place was to enrich the entry, we notice that these three fields are still missing. In the Ruby tool, that information is parsed from the filename, and coincidentally, Clickhouse provides a _file variable when using the file() function with a path. We can use it like this:\nSELECT _file FROM file(&#39;*.tar.zst :: *\/*\/*_body.json&#39;) LIMIT 1 \u250c\u2500_file\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 1. \u2502 20240414\/de:09162:1\/1713052847_body.json \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Together with some Clickhouse functions, we can parse the three required fields from the path:\nSELECT splitByChar(&#39;\/&#39;, _file)[2] AS station, splitByChar(&#39;_&#39;, splitByChar(&#39;\/&#39;, _file)[3])[1] AS timestamp, splitByChar(&#39;\/&#39;, _file)[1] AS datestring FROM file(&#39;*.tar.zst :: *\/*\/*_body.json&#39;) LIMIT 1 \u250c\u2500station\u2500\u2500\u2500\u2500\u252c\u2500timestamp\u2500\u2500\u252c\u2500datestring\u2500\u2510 1. 
\u2502 de:09162:1 \u2502 1713052847 \u2502 20240414 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 Final Solution With everything assembled together (we need to specify what goes in which column) and also converting milliseconds to seconds, the final query looks like this:\nResponses INSERT INTO mvg.responses ( plannedDepartureTime, realtime, delayInMinutes, realtimeDepartureTime, transportType, label, divaId, network, trainType, destination, cancelled, sev, stopPositionNumber, messages, bannerHash, occupancy, stopPointGlobalId, platform, platformChanged, station, timestamp, datestring ) SELECT intDiv(plannedDepartureTime, 1000), realtime, delayInMinutes, intDiv(realtimeDepartureTime, 1000), transportType, label, divaId, network, trainType, destination, cancelled, sev, stopPositionNumber, messages, bannerHash, occupancy, stopPointGlobalId, platform, platformChanged, splitByChar(&#39;\/&#39;, _file)[2], splitByChar(&#39;_&#39;, splitByChar(&#39;\/&#39;, _file)[3])[1], splitByChar(&#39;\/&#39;, _file)[1] FROM file(&#39;*.tar.zst :: *\/*\/*_body.json&#39;, &#39;JSONEachRow&#39;) SETTINGS input_format_allow_errors_ratio = 1; Requests INSERT INTO mvg.requests ( station, timestamp, datestring, appconnect_time, connect_time, httpauth_avail, namelookup_time, pretransfer_time, primary_ip, redirect_count, redirect_url, request_size, request_url, response_code, return_code, return_message, size_download, size_upload, starttransfer_time, total_time, headers, request_params, request_header ) SELECT splitByChar(&#39;\/&#39;, _file)[2], splitByChar(&#39;_&#39;, splitByChar(&#39;\/&#39;, _file)[3])[1], splitByChar(&#39;\/&#39;, _file)[1], appconnect_time, connect_time, httpauth_avail, namelookup_time, pretransfer_time, primary_ip, redirect_count, redirect_url, request_size, 
request_url, response_code, return_code, return_message, size_download, size_upload, starttransfer_time, total_time, headers, request_params, request_header FROM file(&#39;*.tar.zst :: *\/*\/*_meta.json&#39;, &#39;JSONEachRow&#39;) SETTINGS input_format_allow_errors_ratio = 1; The setting input_format_allow_errors_ratio = 1 is provided because some files do not contain JSON but the error messages mentioned earlier; we simply ignore them.\nResults Clickhouse was made for efficiently processing large amounts of data: it processes multiple archives in parallel and can utilize much more hardware. Therefore, I upgraded the initial CX32 to (relatively speaking) much beefier machines, and with a dedicated server the whole import finishes in just 60 minutes for the responses and 50 minutes for the requests.\nCCX33 (Hetzner Cloud, dedicated CPU, 8 threads, 32 GB memory) 0 rows in set. Elapsed: 5392.159 sec. Processed 977.30 million rows, 362.32 GB (181.24 thousand rows\/s., 67.19 MB\/s.) Peak memory usage: 4.47 GiB. The request import could not be completed on this machine because, for some reason, its peak memory usage is much higher than for the responses (55.63 GiB vs. 4.56 GiB) and exceeds the available 32 GB.\nDedicated Server (Hetzner, i7-8700, 12 threads, 128 GB memory) Elapsed: 3630.785 sec. Processed 977.30 million rows, 362.32 GB (269.17 thousand rows\/s., 99.79 MB\/s.) Peak memory usage: 4.56 GiB. Elapsed: 3117.311 sec. Processed 33.59 million rows, 37.85 GB (10.78 thousand rows\/s., 12.14 MB\/s.) Peak memory usage: 55.63 GiB. 
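As a sanity check, the splitByChar-based enrichment used in the queries above can be mirrored in plain Ruby (Clickhouse arrays are 1-indexed, Ruby arrays 0-indexed, hence the shifted indices):

```ruby
# Mirrors the path parsing from the INSERT queries above.
file = "20240414/de:09162:1/1713052847_body.json"

parts      = file.split("/")
station    = parts[1]               # splitByChar('/', _file)[2]
timestamp  = parts[2].split("_")[0] # splitByChar('_', splitByChar('/', _file)[3])[1]
datestring = parts[0]               # splitByChar('/', _file)[1]

# intDiv(plannedDepartureTime, 1000): the API reports epoch milliseconds,
# while the DateTime columns expect seconds.
planned_s = 1713052800000 / 1000
```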
Overall, we saved about 83% of the processing time and replaced a whole Ruby script with a single SQL query.\nAppendix The following database schema is used for mvg.requests and mvg.responses:\nCREATE TABLE mvg.responses ( `datestring` Date CODEC(Delta(2), ZSTD(3)), `timestamp` DateTime CODEC(Delta(4), ZSTD(3)), `station` LowCardinality(String) CODEC(ZSTD(3)), `plannedDepartureTime` DateTime CODEC(Delta(4), ZSTD(3)), `realtime` Bool CODEC(ZSTD(3)), `delayInMinutes` Int32 CODEC(ZSTD(3)), `realtimeDepartureTime` DateTime CODEC(Delta(4), ZSTD(3)), `transportType` LowCardinality(String) CODEC(ZSTD(3)), `label` LowCardinality(String) CODEC(ZSTD(3)), `divaId` LowCardinality(String) CODEC(ZSTD(3)), `network` LowCardinality(String) CODEC(ZSTD(3)), `trainType` String CODEC(ZSTD(3)), `destination` LowCardinality(String) CODEC(ZSTD(3)), `cancelled` Bool CODEC(ZSTD(3)), `sev` Bool CODEC(ZSTD(3)), `platform` Int32 CODEC(ZSTD(3)), `platformChanged` Bool CODEC(ZSTD(3)), `stopPositionNumber` Int32 CODEC(ZSTD(3)), `messages` String CODEC(ZSTD(3)), `bannerHash` String CODEC(ZSTD(3)), `occupancy` LowCardinality(String) CODEC(ZSTD(3)), `stopPointGlobalId` String CODEC(ZSTD(3)) ) ENGINE = MergeTree PARTITION BY datestring ORDER BY (label, destination, station, plannedDepartureTime, timestamp) CREATE TABLE mvg.requests ( `datestring` Date CODEC(Delta(2), ZSTD(3)), `timestamp` DateTime CODEC(Delta(4), ZSTD(3)), `station` LowCardinality(String) CODEC(ZSTD(3)), `appconnect_time` Float64 CODEC(ZSTD(3)), `connect_time` Float64 CODEC(ZSTD(3)), `httpauth_avail` Int32 CODEC(ZSTD(3)), `namelookup_time` Float64 CODEC(ZSTD(3)), `pretransfer_time` Float64 CODEC(ZSTD(3)), `primary_ip` LowCardinality(String) CODEC(ZSTD(3)), `redirect_count` Int32 CODEC(ZSTD(3)), `redirect_url` String CODEC(ZSTD(3)), `request_size` Int32 CODEC(ZSTD(3)), `request_url` String CODEC(ZSTD(3)), `response_code` Int16 CODEC(ZSTD(3)), `return_code` LowCardinality(String) CODEC(ZSTD(3)), `return_message` LowCardinality(String) 
CODEC(ZSTD(3)), `size_download` Float32 CODEC(ZSTD(3)), `size_upload` Float32 CODEC(ZSTD(3)), `starttransfer_time` Float32 CODEC(ZSTD(3)), `total_time` Float32 CODEC(ZSTD(3)), `headers` String CODEC(ZSTD(3)), `request_params` String CODEC(ZSTD(3)), `request_header` String CODEC(ZSTD(3)) ) ENGINE = MergeTree PARTITION BY datestring ORDER BY (station, timestamp) ","permalink":"https:\/\/auch.cool\/posts\/2024\/zstd-json-clickhouse-import\/","summary":"<p>The <a href=\"https:\/\/mvg.auch.cool\">MVG Observatory Project<\/a> collects real-time departure data from Munich&rsquo;s public transport system.\nThe data is organized hierarchically: the top level contains date-based folders, each containing subfolders named after station IDs.\nThese station folders hold multiple JSON files that capture departure information throughout the day.<\/p>\n<p>Each station&rsquo;s data is stored in two types of files: <code>*_body.json<\/code> and <code>*_meta.json<\/code>.\nThe body files contain either API error messages or JSON arrays of responses, which are imported into the <code>mvg.responses<\/code> table in Clickhouse.\nThe corresponding meta files store request metadata (sharing the same timestamp as their body files) and are imported into the <code>mvg.requests<\/code> table.<\/p>","title":"Clickhouse: Import compressed JSON fast"},{"content":"","permalink":"https:\/\/auch.cool\/enten\/","summary":"<div class=\"flourish-embed flourish-number-ticker\" data-src=\"visualisation\/17756932\"><script src=\"https:\/\/public.flourish.studio\/resources\/embed.js\"><\/script><\/div>","title":"Enten \ud83e\udd86"},{"content":"I got dragged into the rabbit hole of taking analog photos about a year ago and enjoying it since.\nBut when you want to board a plane with your camera and a bunch of film rolls you might ask yourself: What to do with it during the x-ray at the security check. 
Will it harm or destroy my film?\nFilm condition This concern applies to film that is either new or exposed and wound back, in other words, unprocessed. A film that has already been processed by a lab and whose negatives you have been handed back is unaffected and safe.\nFilm speed Also, the amount of possible damage to the film depends on its speed (the ISO or Exposure Index). Kodak lists the limit at 400, indicating that anything slower\/lower than 400 should be fine with a non-CT scan.\nFor the sake of simplicity, I&rsquo;ll leave that part out of consideration and treat every film the same way in the following.\nType of baggage Your stuff can travel in an airplane either as carry-on or checked baggage. The latter usually means that you check in your bag before the security check and receive it on the baggage claim belt after the flight.\nChecked baggage You want to avoid putting your film in baggage that is checked, as it will go through a different scanner which usually utilizes higher energy for its X-rays. This will likely harm your unprocessed film. This is also true for any carry-on that has to be checked during boarding, for example due to the flight being fully booked. Remove your film from a carry-on that has to be checked, as it will likely go through the same harmful scanner as normal checked baggage.\nCarry-on The baggage you carry with you will go through less harmful scanners, but depending on the film or the scanner type you may want to avoid that too. Nevertheless, this is the most flexible and safe option, so this is the way to go.\nScanner types In airports, you usually find a variety of different scanners which all work slightly differently and can harm the film in different ways. You will usually encounter one of two common scanner types.\nFor both, no definitive conclusion can be drawn. Every scanner type from every manufacturer is a little different. It also depends a lot on the operator. 
They can for example scan your bag or portions of it multiple times if they want to get a more in-depth look at a specific area. This can drastically affect how badly the film gets damaged.\nClassic\/normal X-ray Most likely you will hit a &lsquo;normal&rsquo; X-ray scanner: these are currently the most common and are the ones where you must not have any liquid in the bag and are asked to remove electronic devices such as laptops and put them into a separate bin.\nDepending on your film and the operator you are probably fine with your film being scanned by one of those scanners. They don&rsquo;t emit a very high amount of X-rays, and at least Kodak says that for normal film, five or fewer scans are fine. Of course, you can be hit by bad luck, but there are also people out there who carry film that has been scanned much more than five times and their images still turned out well.\nComputed Tomography (CT) Another type is the CT (or sometimes CAT) scanner, which is becoming more and more common as it already replaces normal scanners at multiple airports. They usually look bigger and you can identify them pretty easily because you don&rsquo;t have to take any electronics or liquids out of your bag.\nSince these scanners create a much more in-depth image of your baggage they also harm your film much more. Depending on the scanner and the operator your film might be damaged after a single scan. Avoiding these is highly recommended.
Also, they either sample some rolls or inspect all of them with a drug test. Carrying them all in one bag makes it easier to just hand them the bag and they&rsquo;ll take care of the rest.\nThe FAA explicitly grants passengers in the US this right in their regulations (108.17). I also never had any issues with flights within Europe. After asking nicely for a check without scanning, this was almost always done without further discussion.\nExamples ISO 400, no scan ISO 400, one CT scan ISO 200, one CT scan Conclusion Although there is no guarantee that a scan from any scanner will damage your film (there are a lot of people who have taken beautiful pictures with X-rayed film), it does not hurt to be a little prepared and to ask nicely for a hand check. And even if your film goes through a normal scanner, it might still turn out fine.\nTLDR Avoid film in checked baggage Avoid CT scanners Travel with film in a transparent plastic bag without housing Ask nicely for a hand-check Take awesome photos ","permalink":"https:\/\/auch.cool\/posts\/2024\/sensible-equipment\/","summary":"<p>I got dragged into the rabbit hole of taking analog photos about a year ago and enjoying it since.<\/p>\n<p>But when you want to board a plane with your camera and a bunch of film rolls you might ask yourself:\nWhat to do with it during the x-ray at the security check. Will it harm or destroy my film?<\/p>\n<h2 id=\"film-condition\">Film condition<\/h2>\n<p>This concern is valid for a film that is either new or exposed and wound back which therefore is <strong>unprocessed<\/strong>. 
A film that has already been processed by a lab and you have been handed the negatives back is unaffected and safe.<\/p>","title":"Fly with Sensible Photo Equipment"},{"content":"When switching between multiple systems including macOS you may have noticed those strange-looking files starting with ._.\nThey regularly break Switch firmware upgrades and might be in the way during other filesystem operations as well.\nThere are multiple solutions out there in the wild, including removing them or moving them away using rsync. The most convenient way on macOS is using the dot_clean command which is shipped with macOS by default.\nIt merges the metadata with the normal files and removes all the dotfiles. After that you can just copy your stuff around as usual. See the man page for more details.\n","permalink":"https:\/\/auch.cool\/posts\/2023\/til-dot-clean\/","summary":"<p>When switching between multiple systems including macOS you may have noticed those strange-looking files starting with <code>._<\/code>.<\/p>\n<p>They regularly break Switch firmware upgrades and might be in the way during other filesystem operations as well.<\/p>\n<p>There are multiple solutions out there in the wild, including removing them or moving them away using <code>rsync<\/code>. The most convenient way on macOS is using the <code>dot_clean<\/code> command which is shipped with macOS by default.<\/p>","title":"TIL: 'dot_clean' in macOS"},{"content":"This year&rsquo;s Advent of Code is a little special because I use my own language RocketLang to solve the puzzles. I&rsquo;ll probably explain this in a separate post at some point, so let&rsquo;s have a look at the first puzzle of the year:\nProblem The task is to calculate the amount of calories the elves are carrying. For example:\n1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 This shows us 5 elves with different amounts of items (with different calories each).\nSolution Part 1 Part 1 is to find the elf that carries the most calories. 
I did think a little bit ahead and thought that maybe the number of the elf (e.g. the second elf has XY calories) might be important in part 2, and created a more complicated approach using a map instead of an array which looks like this:\ninput = IO.open(&#34;.\/input&#34;).lines() count = 0 elves = {0: 0} foreach item in input if (item == &#34;&#34;) count = count + 1 elves[count] = 0 end elves[count] = elves[count] + item.plz_i() end puts(&#34;Part 1: %d&#34;.format(elves.values().sort()[-1])) It iterates over each line, sums up the calories and bumps the count on each empty line. At the end I only take the values of that map (as an array), sort it to get the maximum easily and return the last value, which is the sum of all calories the elf with the most calories is carrying.\nPart 2 Unfortunately (or luckily?) part 2 does not require us to get any data from a specific elf, so we do not really use our map.\nInstead we need to get the sum of the calories of the three elves with the most calories. Luckily this is quite simple as we already have our sorted array, so we can just go ahead and sum up the last 3 values like so:\nsum = 0 foreach item in elves.values().sort()[-3:] sum = sum + item end puts(&#34;Part 2: %d&#34;.format(sum)) This solves part 2 and we&rsquo;re done for today.\nConclusion I try to add a conclusion to every day with the thoughts about RocketLang I had whilst solving the puzzle.\nFor solving these kinds of puzzles it would be pretty neat to have a small helper method, similar to Ruby, which allows summing up the values of an array directly. 
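For comparison, here is what both parts look like in Ruby, where sum and max(n) are built in (using the example input from the problem statement above):

```ruby
# Both parts in Ruby for comparison, using the example input above:
# split into per-elf blocks, sum each block, then take the max / top three.
input = "1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000"

totals = input.split("\n\n").map { |elf| elf.lines.sum(&:to_i) }

part1 = totals.max        # the single best-stocked elf
part2 = totals.max(3).sum # the three best-stocked elves combined
```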
I already opened an issue for that.\nCode ","permalink":"https:\/\/auch.cool\/posts\/2022\/aoc-day-1\/","summary":"<p>This year&rsquo;s Advent of Code is a little special because I use my own language <a href=\"https:\/\/rocket-lang.org\/\">RocketLang<\/a> to solve the puzzles.\nI&rsquo;ll probably explain this in a separate post at some point, so let&rsquo;s have a look at the first puzzle of the year:<\/p>\n<h2 id=\"problem\">Problem<\/h2>\n<p>The task is to calculate the amount of calories the elves are carrying. For example:<\/p>\n<pre tabindex=\"0\"><code>1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000\n<\/code><\/pre><p>This shows us 5 elves with different amounts of items (with different calories each).<\/p>","title":"Advent of Code - RocketLang Edition"},{"content":"This post misses a lot of technical details as I reconstruct the changes and implementations out of my memory and my image archive. The latest Blumentopf iteration will contain all technical details you need.\nIn 2020 I remembered a Self-Watering Planter I printed a few years ago and thought it would be nice to reuse this one to replace my chunky prototype with a cleaner and more aesthetic version.\nSince I grew a lot of chilis that needed a similar amount of water, I thought it would be easier to only use one pump and chain the pots together. This would reduce the number of pumps needed and also the number of (expensive Tinkerforge) relays.\nI revived my control unit and bought a new 12V pump.\nI printed a couple of pots for my new chili plants and lined them up on my windowsill.\nAs you can see, the idea here is to use the built-in reservoir of the pots to water the plants. I used a generic DIY watering kit for tubing everything up.\nIn an additional overhaul I improved the tubing and placed everything nice and clean. As you can see, this is already a huge improvement over the lil` chonker from the beginning.\nOne difficulty was that plants need more water the more they grow. 
They also started to diverge in their water requirements. I thought I might run into trouble if the reservoir filled up more and more and the plants ended up standing in water at some point. Especially chilis do not like that. So I needed a way to check the moisture of the pots.\nTherefore I ordered a set of moisture sensors (make sure you buy capacitive ones as they last longer), wired them up to an ESP8266 (I have mine soldered to a nodeMCU 0.9) and stuffed it into my controller box.\nThe capacitive sensors return an analog value depending on how much water they have contact with. I noted down the values for air and pure water to use as a reference. After that, I wrote a little program for my ESP to return these values.\nI hooked everything up to my Prometheus and Grafana and from then on I had some values to monitor while the plants were automatically watered.\nI soon noticed that the numbers were a little off, but they were precise enough for me to continue to compact the setup.\n","permalink":"https:\/\/auch.cool\/posts\/2022\/blumentopf-2\/","summary":"<p>This post misses a lot of technical details as I reconstruct the changes and implementations out of my memory and my image archive. The latest Blumentopf iteration will contain all technical details you need.<\/p>\n<p>In 2020 I remembered a <a href=\"https:\/\/www.thingiverse.com\/thing:903411\">Self-Watering Planter<\/a> I printed a few years ago and thought it would be nice to reuse this one to replace my chunky prototype with a cleaner and more aesthetic version.<\/p>\n<p><img loading=\"lazy\" src=\"\/img\/blumentopf\/01\/07.jpg\" title=\"Chunky prototype in case you have forgotten it\"><\/p>","title":"\ud83e\udeb4 Blumentopf - Automated Plant Watering #2"},{"content":"This post misses a lot of technical details as I reconstruct the changes and implementations out of my memory and my image archive. 
The latest Blumentopf iteration will contain all technical details you need.\nFour years ago I started working on a never-ending side project with which I wanted to automate the watering of some of my plants.\nI fiddled around with a few modules from Tinkerforge which I love for jumping into a project. The modules are relatively pricey but have excellent documentation and work very well.\nThe very first idea was to place the plants in a box with an outflow at the bottom to create a water loop with a connected reservoir. I drew some basic sketches and my dad created the boxes out of metal.\nThen, we 3D-printed an adapter for the tubes:\nI used a Raspberry Pi to control the Tinkerforge modules and wired everything up to fit in a small box.\nIn detail this involved a Master Brick, a voltage converter to convert 12V to 5V for operating the modules (pumps need 12V), two Voltage\/Current Bricklets to measure the current of the pumps - the idea here was to detect when the water reservoir was empty - a Dual Relais Bricklet and some TSSS Brushless Water Pumps.\ngraph TB; A[Raspberry Pi Zero + brickd]--&gt;B[Master Brick]; B--&gt;C[Dual Relais] C--&gt;D[Relay 1] D--&gt;E[Current Bricklet] E--&gt;F[Pump 1]\nC--&gt;G[Relay 2] G--&gt;H[Current Bricklet] H--&gt;I[Pump 2]\nWired together, the system looked like this:\nWith this setup, it was possible to &ldquo;flood&rdquo; the smaller box with water, let it sit there for a few minutes to give the plants enough time to suck in some water and then pump the rest of the water back into the reservoir. Since the plants were elevated they never stood in water for a long period of time.\nAs you can probably guess from the pictures, the whole thing was very chunky and also not very beautiful. 
It stayed on my desk for a couple of weeks and watered some plants, but was abandoned after some time for the reasons above.\nDue to some moves and other side projects, things were delayed for a while and continued in 2020 - see the next iteration in the next post.\n","permalink":"https:\/\/auch.cool\/posts\/2022\/blumentopf-1\/","summary":"<p>This post misses a lot of technical details as I reconstruct the changes and implementations out of my memory and my image archive. The latest Blumentopf iteration will contain all technical details you need.<\/p>\n<p>Four years ago I started working on a never-ending side project with which I wanted to automate the watering of some of my plants.<\/p>\n<p>I fiddled around with a few modules from <a href=\"https:\/\/www.tinkerforge.com\/en\/\">Tinkerforge<\/a> which I love for jumping into a project. The modules are relatively pricey but have excellent documentation and work very well.<\/p>","title":"\ud83e\udeb4 Blumentopf - Automated Plant Watering #1"},{"content":"Recently we tried out sous vide cooking and needed quite some time to find sensible parameters on the internet or in cookbooks.\nTo preserve the tweaks we made to them, I wanted to put them up here - some of you might want to give it a try too.\nBaseline Parameters The following parameters can be used for a cold start and might need adjustments depending on your tools and taste.\nDuration\nSteak thickness cooking time\n2 cm 60 min\n3 cm 90 min\n4 cm 120 min\n5 cm 160 min\n6 cm 210 min\n7 cm 240 min\n8 cm 180 min\nTemperature\ncooking level temperature\nblue rare 40\u00b0C - 45\u00b0C\nrare 45\u00b0C - 50\u00b0C\nmedium rare 50\u00b0C - 54\u00b0C\nmedium 54\u00b0C - 56\u00b0C\nmedium-well 56\u00b0C - 60\u00b0C\nwell-done 60\u00b0C - 65\u00b0C\nFirst Try For the first try we bought a Dry Aged Entrecote (Rib Eye) from Simmental cattle (Simmentaler) with a thickness of 2.5 cm to 3 cm.\nWe wanted a medium-rare steak with a tendency to rare and therefore 
configured our sous vide device to heat the water to 52\u00b0C for 90 minutes.\nFor our taste it was almost medium, so next time we will lower either the time or the temperature.\n","permalink":"https:\/\/auch.cool\/posts\/2021\/sous-vide\/sous-vide-steak-1.0\/","summary":"<p>Recently we tried out sous vide cooking and needed quite some time to find sensible parameters on the internet or in cookbooks.<\/p>\n<p>To preserve the tweaks we made to them, I wanted to put them up here - some of you might want to give it a try too.<\/p>\n<h2 id=\"baseline-parameters\">Baseline Parameters<\/h2>\n<p>The following parameters can be used for a cold start and might need adjustments depending on your tools and taste.<\/p>","title":"Sous Vide Steak 1.0"},{"content":"I like books and sometimes I even read them! Below you can find an unsorted list of books I&rsquo;ve read and enjoyed:\nDie Redaktion by Benjamin Fredrich\nNa Servus!: Wie Ich Lernte, Die Bayern Zu Lieben by Sebastian Glubrecht\nAt the Edge: Riding for My Life by Danny MacAskill\nHerr Sonneborn geht nach Br\u00fcssel by Martin Sonneborn\nElon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future by Ashlee Vance\nThe Phoenix Project by Gene Kim, Kevin Behr, George Spafford\nPermanent Record by Edward Snowden\nHow Google Works by Eric Schmidt and Jonathan Rosenberg\nfritz gegen Goliath by Mirco Wolf Wiegert\nDie kleinste gemeinsame Wirklichkeit by Mai Thi Nguyen-Kim\nCurrently I am reading:\nKill It with Fire: Manage Aging Computer Systems by Marianne Bellotti\nCommand-Line Rust: A Project-Based Primer for Writing Rust CLIs by Ken Youens-Clark\nMission Erde \u2013 Die Welt ist es wert, um sie zu k\u00e4mpfen by Robert Marc Lehmann\nWhy We Sleep: Unlocking the Power of Sleep and Dreams by Matthew Walker\nIf you think I missed a really good one, let me know :)\nLast modified Oct 15, 2024 ","permalink":"https:\/\/auch.cool\/books\/","summary":"<p>I like books and sometimes I even read them!\nBelow you can find an unsorted list of books I&rsquo;ve 
read and enjoyed:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/54430395-die-redaktion\">Die Redaktion<\/a> by Benjamin Fredrich<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/2608806-na-servus\">Na Servus!: Wie Ich Lernte, Die Bayern Zu Lieben<\/a> by Sebastian Glubrecht<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/30052267-at-the-edge\">At the Edge: Riding for My Life<\/a> by Danny MacAskill<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/44290242-herr-sonneborn-geht-nach-br-ssel---abenteuer-im-europaparlament\">Herr Sonneborn geht nach Br\u00fcssel<\/a> by Martin Sonneborn<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/25541028-elon-musk\">Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future<\/a> by Ashlee Vance<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/17255186-the-phoenix-project\">The Phoenix Project<\/a> by Gene Kim, Kevin Behr, George Spafford<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/46223297-permanent-record\">Permanent Record <\/a> by Edward Snowden<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/23158207-how-google-works\">How Google Works<\/a> by Eric Schmidt and Jonathan Rosenberg<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/58990454-fritz-gegen-goliath\">fritz gegen Goliath<\/a> by Mirco Wolf Wiegert<\/li>\n<li><a href=\"https:\/\/www.goodreads.com\/book\/show\/56951280-die-kleinste-gemeinsame-wirklichkeit-wahr-falsch-plausibel-die-gr-t\">Die kleinste gemeinsame Wirklichkeit<\/a> by Mai Thi Nguyen-Kim<\/li>\n<\/ul>\n<p>Currently I am reading:<\/p>","title":"Books"},{"content":"Places to eat or drink that I enjoyed and found above average.\nCanada Montreal The Coldroom | Bar Jatoba | Restaurant Germany Munich Azuki | Restaurant Bar GAR\u00c7ON | Bar Call Soul | Bar Pacific Times | Bar sansaro | Restaurant Schreiberei | Restaurant Iceland Akureyri Cafe Berlin | Breakfast Rub23 | Restaurant 
Reykjavik Grillmarka\u00f0urinn | Restaurant ROK | Restaurant Tides | Restaurant V\u00edk Berg Restaurant | Restaurant Ireland Dublin Vintage Cocktail Club | Bar USA Honolulu DECK | Restaurant Eggs &rsquo;n Things Saratoga | Breakfast Paia, Maui Mama&rsquo;s Fish House | Restaurant Wailea, Maui Monkeypod Kitchen | Restaurant ","permalink":"https:\/\/auch.cool\/food\/","summary":"<p>Places to eat or drink that I enjoyed and found above average.<\/p>\n<h2 id=\"canada\">Canada<\/h2>\n<ul>\n<li>Montreal\n<ul>\n<li><a href=\"https:\/\/www.thecoldroommtl.com\/\">The Coldroom<\/a> | Bar<\/li>\n<li><a href=\"https:\/\/www.jatobamontreal.com\/\">Jatoba<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2 id=\"germany\">Germany<\/h2>\n<ul>\n<li>Munich\n<ul>\n<li><a href=\"https:\/\/azukimunich.com\/\">Azuki<\/a> | Restaurant<\/li>\n<li><a href=\"https:\/\/bar-garcon.de\/\">Bar GAR\u00c7ON<\/a> | Bar<\/li>\n<li><a href=\"https:\/\/callsoul-breakingbar.de\/\">Call Soul<\/a> | Bar<\/li>\n<li><a href=\"http:\/\/www.pacific-times.de\/\">Pacific Times<\/a> | Bar<\/li>\n<li><a href=\"https:\/\/www.sushiya.de\/\">sansaro<\/a> | Restaurant<\/li>\n<li><a href=\"https:\/\/schreiberei-muc.de\/restaurant\">Schreiberei<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2 id=\"iceland\">Iceland<\/h2>\n<ul>\n<li>Akureyri\n<ul>\n<li><a href=\"https:\/\/berlinakureyri.is\/\">Cafe Berlin<\/a> | Breakfast<\/li>\n<li><a href=\"https:\/\/www.rub23.is\/en\">Rub23<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<li>Reykjavik\n<ul>\n<li><a href=\"https:\/\/grillmarkadurinn.is\/en\">Grillmarka\u00f0urinn<\/a> | Restaurant<\/li>\n<li><a href=\"https:\/\/www.rokrestaurant.is\/\">ROK<\/a> | Restaurant<\/li>\n<li><a href=\"https:\/\/www.tidesrestaurant.is\/\">Tides<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<li>V\u00edk\n<ul>\n<li><a href=\"https:\/\/www.stayinvik.is\/menus\">Berg Restaurant<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2 id=\"ireland\">Ireland<\/h2>\n<ul>\n<li>Dublin\n<ul>\n<li><a 
href=\"https:\/\/vintagecocktailclub.com\/\">Vintage Cocktail Club<\/a> | Bar<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2 id=\"usa\">USA<\/h2>\n<ul>\n<li>Honolulu\n<ul>\n<li><a href=\"https:\/\/www.deckwaikiki.com\/\">DECK<\/a> | Restaurant<\/li>\n<li><a href=\"https:\/\/eggsnthings.com\/\">Eggs &rsquo;n Things Saratoga<\/a> | Breakfast<\/li>\n<\/ul>\n<\/li>\n<li>Paia, Maui\n<ul>\n<li><a href=\"https:\/\/mamasfishhouse.com\/\">Mama&rsquo;s Fish House<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<li>Wailea, Maui\n<ul>\n<li><a href=\"https:\/\/monkeypodkitchen.com\/\">Monkeypod Kitchen<\/a> | Restaurant<\/li>\n<\/ul>\n<\/li>\n<\/ul>","title":"Food & Drinks"},{"content":"I am tinkering on stuff that runs the internet at Mozilla.\nPreviously I broke things in parallel on a lot of servers in a hosting company. I have both unconventional and practical solutions for every challenge, with a preference for scalability and general applicability. I do temporary things that live forever occasionally too.\nLost my heart to Ruby, love photography, mountain biking and \u201ccatching the latest flicks\u201d - as the cool kids say across the pond.\nFind me on GitHub, LinkedIn and Letterboxd; my photos are self-hosted and on Instagram.\nProjects\nkickr.me: enhanced foosball table\n\ud83d\ude80\ud83c\uddf1\ud83c\udd70\ud83c\udd96: Interpreter in Go\nGames\nUwe: Game developed at the 2019 Hetzner Game Jam\nScrap Race: Game developed for the 35th Godot Wild Jam\nArnold: Game developed for the 37th Godot Wild Jam\nSolar Valley: Game developed for the 38th Godot Wild Jam\nSticktorio: Game developed for the 42nd Godot Wild Jam ","permalink":"https:\/\/auch.cool\/about\/","summary":"<p>I am tinkering on stuff that runs the internet at <a href=\"https:\/\/mozilla.org\">Mozilla<\/a>.<\/p>\n<p>Previously I broke things in parallel on a lot of servers in a\n<a href=\"https:\/\/hetzner.com\">hosting company<\/a>. 
I have both unconventional and practical\nsolutions for every challenge, with a preference for scalability and general applicability.\nI do temporary things that live forever occasionally too.<\/p>\n<p>Lost my heart to Ruby, love photography, mountain biking and \u201ccatching the latest flicks\u201d - as the cool kids say across the pond.<\/p>","title":"Robert"},{"content":"Another day another Shenzhen I\/O puzzle.\nIn order to test some of our new manufacturing equipment, we need a pulse generator with certain requirements (specifications). However, instead of buying one at the market price, I thought we could simply create our own. For this project you will need to make use of conditional execution. Please continue your study of the language reference.\nAs a refresher, check out the Language Reference Card in the first post.\nThis puzzle is more complicated because the input and output differ from the puzzles before:\nMaybe you noticed that input and output are not synchronous. This means, if we have an input signal, we do not necessarily have an output signal and vice versa.\nWith these few lines we pass all the tests easily.\nProduction Cost Power Usage Lines of Code\n3 240 5\n3 142 3 ","permalink":"https:\/\/auch.cool\/posts\/2021\/shenzhen-io-3\/","summary":"<p>Another day another Shenzhen I\/O puzzle.<\/p>\n<blockquote>\n<p>In order to test some of our new manufacturing equipment, we need a pulse generator with certain requirements (specifications).\nHowever, instead of buying one at the market price, I thought we could simply create our own.\nFor this project you will need to make use of <em>conditional execution<\/em>.\nPlease continue your study of the language reference.<\/p><\/blockquote>\n<p>As a refresher, check out the Language Reference Card <a href=\"\/posts\/2021\/shenzhen-io-1\/\">in the first post<\/a>.<\/p>","title":"Shenzhen I\/O #3 - Diagnostic Pulse Generator"},{"content":"Today we&rsquo;re doing another Shenzhen I\/O puzzle.\nAs this is the 
second task, the puzzle is again quite simple. As a reminder, we have the following command documentation:\nThe task is to amplify a signal. As we can see from the verification tab, the output has to be higher than the input - twice as high, to be precise.\nThis seems fairly easy: we just need to multiply the input signal by 2 and output the result. If we take a closer look at our commands we can find a mul operator which can take one argument. It is documented with the following:\nmul R\/I Multiply the value of the first operand by the value of the acc register and store the result in the acc register.\nSo this makes things a bit more complicated. We cannot simply take register p0 and multiply it by the integer 2. The argument for mul always gets multiplied by the acc register. Therefore we need to do a little extra step and save 2 into acc. After that we can multiply register p0 by the 2 saved in acc, which automatically stores the result in acc again. Now we need to move the result from acc to register p1 - our output - and sleep for one second.\nmov 2 acc\nmul p0\nmov acc p1\nslp 1\nWith these four lines we pass all the tests easily.\nProduction Cost Power Usage Lines of Code\n3 240 4 ","permalink":"https:\/\/auch.cool\/posts\/2021\/shenzhen-io-2\/","summary":"<p>Today we&rsquo;re doing another Shenzhen I\/O puzzle.<\/p>\n<p>As this is the second task, the puzzle is again quite simple.\nAs a reminder, we have the following command documentation:<\/p>\n<p><img alt=\"Docs\" loading=\"lazy\" src=\"\/img\/shenzhen\/shenzhen-io-0.png\"><\/p>\n<p>The task is to amplify a signal.\nAs we can see from the verification tab, the output has to be higher than the input - twice as high, to be precise.<\/p>\n<p><img alt=\"Puzzle 2\" loading=\"lazy\" src=\"\/img\/shenzhen\/shenzhen-io-2.png\"><\/p>\n<p>This seems fairly easy: we just need to multiply the input signal by <code>2<\/code> and output the result.\nIf we take a closer look at our commands we can find a 
<code>mul<\/code> operator which can take one argument.\nIt is documented with the following:<\/p>","title":"Shenzhen I\/O #2 - Control Signal Amplifier"},{"content":"\nI recently stumbled over the Catan style boardgame 2.0 from Dakanzla and thought it would be nice to print my own version of it.\nDoing this I noticed how hard it was to collect all the information, find the right models for the version I wanted and finally print it. Therefore I am writing this post to help others create their own version of Catan.\nThis is an ongoing series of posts during the building\/printing of the game and is likely to change. While new blog posts will cover the process, I&rsquo;ll keep the specs in this post up to date when I change a color or similar.\nThe Basics Catan consists of a base game which is extendable from four to six players. In addition there are several game extensions which are made for four players and also have six-player extensions. There will be a separate blog post about the extensions once I have finished them; until then, everything mentioned here is for the base game only.\nThe Parts The biggest difference is that Catan is shipped with a frame, while the printed version uses tiles only. Therefore you need to add more water tiles than you would with the paper frame. All other resources can be taken 1:1 from the original game. The following gives you a quick overview:\nType Base Game 6 Player Extension Total\nBrick 3 2 5\nDesert 1 1 2\nOre 3 2 5\nWheat 4 2 6\nWood 4 2 6\nWool 4 2 6\nWater 9 4 13\nHarbor 9 - 9\nRobber 1 - 1\nRoad 4 x 15 2 x 15 6 x 15\nCity 4 x 4 2 x 4 6 x 4\nSettlement 4 x 5 2 x 5 6 x 5\nCrossing 4 x 10 2 x 10 6 x 10\nThe Colors Let&rsquo;s start with the most difficult part: colors. Generally you need to find the colors that fit your needs best. You can simply print all tiles in the same color and then paint them all the way you want if you are into that. 
Personally I am not that into painting stuff, so I took another approach: I picked base colors where I thought they would fit the resource best and then only painted the highlights.\nBut unfortunately it is not that easy to pick the right colors from pictures on the internet, so this was quite a journey.\nHow it started\nHow it&rsquo;s going\nTiles At the moment I use the following colors for the tiles:\nType Inner Color Outer Color\nBrick EuMakers Orange Brick EuMakers Orange Brick\nDesert Fillamentum Mukha Fillamentum Mukha\nOre Extrudr Anthracite Extrudr Anthracite\nWheat Eumakers Tangerine Orange Eumakers Tangerine Orange\nWood 3dJake Green Extrudr Brown\nWool Eumakers Pastel Green Extrudr White\nWater Eumakers Blue \/ Extrudr Blue Eumakers Blue \/ Extrudr Blue\nPlayer For the players I tried to pick colors which are not similar to the ones I used for the normal game tiles. In a later iteration I switched the outer border of wool to white, so I&rsquo;ll see if I have to switch the white player color.\nPlayer Color\n1 Extrudr White\n2 Extrudr Black\n3 Extrudr Hellfire Red\n4 Extrudr Turquoise\n5 Fillamentum Lilac\n6 Fillamentum Orange Orange ","permalink":"https:\/\/auch.cool\/posts\/2021\/printer-of-catan-1\/","summary":"<p><img alt=\"Filament 1\" loading=\"lazy\" src=\"\/img\/catan\/header.jpg\"><\/p>\n<p>I recently stumbled over the <a href=\"https:\/\/www.thingiverse.com\/thing:2525047\">Catan style boardgame 2.0<\/a> from Dakanzla and thought it would be nice to print my own version of it.<\/p>\n<p>Doing this I noticed how hard it was to collect all the information, find the right models for the version I wanted and finally print it. Therefore I am writing this post to help others create their own version of Catan.<\/p>\n<p>This is an ongoing series of posts during the building\/printing of the game and is likely to change. 
While new blog posts will cover the process, I&rsquo;ll keep the specs in this post up to date when I change a color or similar.<\/p>","title":"Printer of Catan #1"},{"content":"A few days ago I tried out SHENZHEN I\/O. It is some kind of puzzle game in which you need to solve little tasks by programming a microcontroller in assembly.\nThe solution is then ranked in three categories and compared to friends:\nProduct Cost: depends on the hardware you chose\nPower Usage: increases with more complex and expensive commands\nLines of code\nThere is no tutorial. All you have is a 50-page documentation about the chips and the commands. That sounds pretty shitty in the beginning but is a nice, realistic thing I started to like pretty quickly. It&rsquo;s like doing an exam with docs allowed.\nAt the beginning of the manual there is a quick overview of some of the available commands. I&rsquo;ll go through some of them as we use them.\nIn the first puzzle the task is to match a given pattern with two LEDs.\nThe first LED (active) is already done as an example:\nmov 0 p0\nslp 6\nmov 100 p0\nslp 6\nTo solve the puzzle I came up with a rather straightforward solution.\nMost instructions take operands like R\/I and R. R stands for a register and I for an integer. This means that with R\/I you can pass either a register or an integer to the instruction.\nLet&rsquo;s start with mov [R\/I] [R]. It takes two arguments and copies the value of the first operand into the second. The second command is slp which stands for sleep and only takes one argument. 
The operand specifies the number of time units the process will sleep.\nIn the graph, high is 100 and low is 0. To solve it we simply need to match the pattern; with the two commands above this is quite easy:\nmov 0 p0\nslp 4\nmov 100 p0\nslp 2\nmov 0 p0\nslp 1\nmov 100 p0\nslp 1 ","permalink":"https:\/\/auch.cool\/posts\/2021\/shenzhen-io-1\/","summary":"<p>A few days ago I tried out <a href=\"https:\/\/store.steampowered.com\/app\/504210\/SHENZHEN_IO\/\">SHENZHEN I\/O<\/a>. It is some kind of puzzle game in which you need to solve little tasks by programming a microcontroller in assembly.<\/p>\n<p>The solution is then ranked in three categories and compared to friends:<\/p>\n<ul>\n<li>Product Cost: depends on the hardware you chose<\/li>\n<li>Power Usage: increases with more complex and expensive commands<\/li>\n<li>Lines of code<\/li>\n<\/ul>\n<p>There is no tutorial. All you have is a 50-page documentation about the chips and the commands. That sounds pretty shitty in the beginning but is a nice, realistic thing I started to like pretty quickly. It&rsquo;s like doing an exam with docs allowed.<\/p>","title":"Shenzhen I\/O #1 - Security Camera"},{"content":"Day 6 is again pretty nice to solve with Ruby. Today it is all about customs declaration.\nWe again have different groups separated by a newline, containing multiple lines with characters. Each line represents a person and each character represents a different question answered with &ldquo;yes&rdquo;.\nabc\n\na\nb\nc\n\nab\nac\n\na\na\na\na\nIn this example we have four groups of people. The first group only has one person who answered three questions. The second group has three people who answered three (different) questions. The third group has two people; they both answered question a and then two different questions. 
In the fourth group each of the four people answered one (the same) question.\nPart 1 First we need to find the sum of all questions answered and need to remove duplicate answers within a group.\nSince it does not matter who answered within the group, we can simply join every line together. We can do this by using .split(&quot;\\n&quot;).join which results in a single string per group. Then we use #chars to again split the line into an array with one character per element. This prepares for using #uniq to sort out the duplicates, and after that we #count them.\nThe solution for the first part is then simply the sum of all these counts.\nPart 2 In the second part it comes in handy that we already used arrays.\nOur task here is to count only the answers for questions that every person in the group has answered.\nTherefore we again split the group into people and their lines into arrays with only characters in them. Then we can use the &amp; operator in Ruby which will return only the intersection of the arrays.\nFor example ['a', 'b'] &amp; ['a', 'c'] will return ['a']\nAfter that we just count all the elements in all the arrays which are left. In the final solution we have one (pretty understandable) line per part left.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-6\/","summary":"<p>Day 6 is again pretty nice to solve with Ruby. 
Today it is all about customs declaration.<\/p>\n<p>We again have different groups separated by a newline, containing multiple lines with characters.\nEach line represents a person and each character represents a different question answered with &ldquo;yes&rdquo;.<\/p>\n<pre tabindex=\"0\"><code>abc\n\na\nb\nc\n\nab\nac\n\na\na\na\na\n<\/code><\/pre><p>In this example we have four groups of people.\nThe first group only has one person who answered three questions.\nThe second group has three people who answered three (different) questions.\nThe third group has two people; they both answered question <code>a<\/code> and then two different questions.\nIn the fourth group each of the four people answered one (the same) question.<\/p>","title":"Advent of Code Day #6"},{"content":"Day five is all about the airplane seating. One seat is represented by something like this: FBFBBFFRLR. The first seven characters are for the row, the last three for the seat.\nPart 1 We need to find the highest seat id to solve the first part.\nThe seat id is calculated with: row-id * 8 + column.\nFor the rows you start with a range of 0 to 127.\nF means use the first half\nB means use the last half\nWith this in mind you need to iterate over the first seven characters to get your seat.\nHere I completely missed that you could simply convert this into binary. This would have been a much nicer solution.\nI again used #each_slice here, split the range into halves and continued with the first or last one. For the seats it is the very same with a smaller range (0 to 7) and slightly different characters.\nL means use the first half\nR means use the last half\nWhen we have all the ids we can simply select the highest one.\nPart 2 Luckily we store all the ids in one array. We now know that some seats in the front and some in the back are always free. 
We also know that one seat in this range is missing.\nA nice trick to find this seat is to generate a full range from the first to the last seat, sum it and compare it to the sum of our set with the missing seat. With this we easily find the missing id.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-5\/","summary":"<p>Day five is all about the airplane seating.\nOne seat is represented by something like this: <code>FBFBBFFRLR<\/code>.\nThe first seven characters are for the row, the last three for the seat.<\/p>\n<h2 id=\"part-1\">Part 1<\/h2>\n<p>We need to find the highest seat id to solve the first part.<\/p>\n<p>The seat id is calculated with: <code>row-id * 8 + column<\/code>.<\/p>\n<p>For the rows you start with a range of <code>0<\/code> to <code>127<\/code>.<\/p>\n<ul>\n<li><code>F<\/code> means use the first half<\/li>\n<li><code>B<\/code> means use the last half<\/li>\n<\/ul>\n<p>With this in mind you need to iterate over the first seven characters to get your seat.<\/p>","title":"Advent of Code Day #5"},{"content":"In day 4 we need to deal with passports (yay!) and check if they are valid for the given criteria. The passports have multiple fields (key:value) and are divided by a newline.\necl:gry pid:860033327 eyr:2020 hcl:#fffffd\nbyr:1937 iyr:2017 cid:147 hgt:183cm\n\niyr:2013 ecl:amb cid:350 eyr:2023 pid:028048884\nhcl:#cfa07d byr:1929\n\nhcl:#ae17e1 iyr:2013\neyr:2024\necl:brn pid:760753108 byr:1931\nhgt:179cm\nThe fields are given and expected like so:\nbyr (Birth Year)\niyr (Issue Year)\neyr (Expiration Year)\nhgt (Height)\nhcl (Hair Color)\necl (Eye Color)\npid (Passport ID)\ncid (Country ID)\nSplitting them into &ldquo;passports&rdquo; is as easy as it gets.\nPart 1 In part 1 our job is to check only whether all the given fields are there for each passport, ignoring their values. 
Since it is Christmas and the North Pole Credentials do not submit a Country ID for their documents, we are allowed to ignore cid.\nTo solve this I made a list of the required fields, looped over them and checked if at least one field in the passport matches - otherwise it is invalid.\nPart 2 Now it is a bit more complex. We have given criteria for our fields and in addition to the required-fields check we need to check if the criteria match.\nI converted the criteria into Ruby ranges and regular expressions (see lines 14 to 22). Then I looped over every field as in part 1, but now I split them into key\/value pairs. After that, based on my criteria, I either matched them against the regex or the range.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-4\/","summary":"<p>In day 4 we need to deal with passports (yay!) and check if they are valid for the given criteria.\nThe passports have multiple fields (<code>key:value<\/code>) and are divided by a newline.<\/p>\n<pre tabindex=\"0\"><code>ecl:gry pid:860033327 eyr:2020 hcl:#fffffd\nbyr:1937 iyr:2017 cid:147 hgt:183cm\n\niyr:2013 ecl:amb cid:350 eyr:2023 pid:028048884\nhcl:#cfa07d byr:1929\n\nhcl:#ae17e1 iyr:2013\neyr:2024\necl:brn pid:760753108 byr:1931\nhgt:179cm\n<\/code><\/pre><p>The fields are given and expected like so:<\/p>\n<pre tabindex=\"0\"><code>byr (Birth Year)\niyr (Issue Year)\neyr (Expiration Year)\nhgt (Height)\nhcl (Hair Color)\necl (Eye Color)\npid (Passport ID)\ncid (Country ID)\n<\/code><\/pre><p>Splitting them into &ldquo;passports&rdquo; is as easy as it gets.<\/p>","title":"Advent of Code Day #4"},{"content":"In day 3 you have an ever-repeating map with trees and free spaces. You move over this map and have to count how many trees (#) you will hit.\nPart 1 Given is a pattern in which you move right and down. In part 1 we start in the top left corner and always move 3 right and one down until we reach the last line.\n..##.........##.........##.........##.........##.........##....... 
---&gt;\n#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..\n.#....#..#..#....#..#..#....#..#..#....#..#..#....#..#..#....#..#.\n..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#\n.#...##..#..#...##..#..#...##..#..#...##..#..#...##..#..#...##..#.\n..#.##.......#.##.......#.##.......#.##.......#.##.......#.##..... ---&gt;\n.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#\n.#........#.#........#.#........#.#........#.#........#.#........#\n#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...\n#...##....##...##....##...##....##...##....##...##....##...##....#\n.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.# ---&gt;\nSince the map is repeating we need to reset the x position if we reach the end of the line in the given example. Theoretically you could achieve this with a simple modulo, but I did it without because I didn&rsquo;t have it in mind when I was in a hurry. The rest of part 1 is very simple looping without any surprises.\nPart 2 In part 2 we no longer have one given slope but different ones. Therefore we need to move over the map with every given slope, and to get the result we have to multiply the tree counts from every slope.\nThe variable number of steps to the right is easy because we only need to add this number to the current_pos. I found the part with the skipped lines a bit more tricky. 
In order to solve this I divided the lines into multiple slices (using Ruby&rsquo;s #each_slice) matching the number of lines to be skipped; by using the first line of each slice I always land in the right spot.\nTo get the result we now only need to use - again - #reduce, which performs a mathematical operation between every element in the array.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-3\/","summary":"<p>In day 3 you have an ever-repeating map with trees and free spaces.\nYou move over this map and have to count how many trees (<code>#<\/code>) you will hit.<\/p>\n<h2 id=\"part-1\">Part 1<\/h2>\n<p>Given is a pattern in which you move right and down.\nIn part 1 we start in the top left corner and always move 3 right and one down until we reach the last line.<\/p>\n<pre tabindex=\"0\"><code>..##.........##.........##.........##.........##.........##.......  ---&gt;\n#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..#...#...#..\n.#....#..#..#....#..#..#....#..#..#....#..#..#....#..#..#....#..#.\n..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#..#.#...#.#\n.#...##..#..#...##..#..#...##..#..#...##..#..#...##..#..#...##..#.\n..#.##.......#.##.......#.##.......#.##.......#.##.......#.##.....  
---&gt;\n.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#.#.#.#....#\n.#........#.#........#.#........#.#........#.#........#.#........#\n#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...#.##...#...\n#...##....##...##....##...##....##...##....##...##....##...##....#\n.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#.#..#...#.#  ---&gt;\n<\/code><\/pre><p>Since the map is repeating we need to reset the <code>x<\/code> position when we reach the end of the line in the given example.\nTheoretically you could achieve this with a simple modulo, but I did it without one because I didn&rsquo;t have it in mind while I was in a hurry.\nThe rest of part 1 is very simple looping without any surprises.<\/p>","title":"Advent of Code Day #3"},{"content":"In day 2 you were given a set of rules to be parsed first. They looked like this:\n1-3 a: abcde 1-3 b: cdefg 2-9 c: ccccccccc Each line contains two numbers, a character, and a password.\nPart 1 In part 1 you have to check that the given character appears at least as often as the first number and at most as often as the second. Each password that matches this rule is considered valid.\nPart 2 In part 2 you need to check that the given character appears at exactly one of the two positions indicated by the numbers. ","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-2\/","summary":"<p>In day 2 you were given a set of rules to be parsed first. 
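The two day-2 policies described above can be sketched in Ruby; the helper names are my own, not taken from the linked solution:

```ruby
# A rule line looks like "1-3 a: abcde": two numbers, a character, a password.
def parse_rule(line)
  min, max, char, password = line.match(/(\d+)-(\d+) (\w): (\w+)/).captures
  [min.to_i, max.to_i, char, password]
end

# Part 1: the character must occur between min and max times (inclusive).
def valid_count?(line)
  min, max, char, password = parse_rule(line)
  (min..max).cover?(password.count(char))
end

# Part 2: the character must sit at exactly one of the two (1-based) positions.
def valid_position?(line)
  a, b, char, password = parse_rule(line)
  (password[a - 1] == char) ^ (password[b - 1] == char)
end
```

With the example rules, `"1-3 a: abcde"` is valid under both policies, `"1-3 b: cdefg"` under neither, and `"2-9 c: ccccccccc"` only under the count policy.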
They looked like this:<\/p>\n<pre tabindex=\"0\"><code>1-3 a: abcde\n1-3 b: cdefg\n2-9 c: ccccccccc\n<\/code><\/pre><p>Each line contains two numbers, a character, and a password.<\/p>\n<h2 id=\"part-1\">Part 1<\/h2>\n<p>In part 1 you have to check that the given character appears at least as often as the first number and at most as often as the second.\nEach password that matches this rule is considered valid.<\/p>\n<script src=\"https:\/\/emgithub.com\/embed-v2.js?target=https:\/\/github.com\/flipez\/advent-of-code\/blob\/master\/2020%2fday-2%2fpart_1.rb&style=agate&showLineNumbers=off&showFileMeta=on&type=code&showCopy=on\"><\/script>\n<h2 id=\"part-2\">Part 2<\/h2>\n<p>In part 2 you need to check that the given character appears at exactly <em>one<\/em> of the two positions indicated by the numbers.\n<script src=\"https:\/\/emgithub.com\/embed-v2.js?target=https:\/\/github.com\/flipez\/advent-of-code\/blob\/master\/2020%2fday-2%2fpart_2.rb&style=agate&showLineNumbers=off&showFileMeta=on&type=code&showCopy=on\"><\/script><\/p>","title":"Advent of Code Day #2"},{"content":"The first day was fairly simple. You need to find the pair of numbers (part 1) that sums up to 2020 and the triple (part 2).\nThen simply multiply the numbers within the combination. The result is the solution for the puzzle.\nUnfortunately, due to an outage, day 1 will not result in any points on the leaderboards.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/aoc-day-1\/","summary":"<p>The first day was fairly simple. You need to find the pair of numbers (part 1) that sums up to 2020 and the triple (part 2).<\/p>\n<p>Then simply multiply the numbers within the combination. 
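The day-1 approach (find the combination that sums to 2020, then multiply its members) fits in a few lines of Ruby; the method name is my own, not from the linked solution:

```ruby
# Find the combination of `size` entries summing to 2020 and
# multiply its members; returns nil if no combination matches.
def expense_product(numbers, size)
  combo = numbers.combination(size).find { |c| c.sum == 2020 }
  combo&.reduce(:*)
end
```

On the AoC 2020 example input `[1721, 979, 366, 299, 675, 1456]` this yields 514579 for pairs and 241861950 for triples.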
The result is the solution for the puzzle.<\/p>\n<script src=\"https:\/\/emgithub.com\/embed-v2.js?target=https:\/\/github.com\/flipez\/advent-of-code\/blob\/master\/2020%2fday-1%2fcomplete.rb&style=agate&showLineNumbers=off&showFileMeta=on&type=code&showCopy=on\"><\/script>\n<p>Unfortunately, due to an <a href=\"https:\/\/www.reddit.com\/r\/adventofcode\/comments\/k4ejjz\/2020_day_1_unlock_crash_postmortem\/\">outage<\/a>, day 1 will not result in any points on the leaderboards.<\/p>","title":"Advent of Code Day #1"},{"content":"During the upcoming AoC I want to publish some small posts about each little puzzle. In preparation for this I was looking for a way in Hugo to include and highlight source code from GitHub without copying it into each article.\nI discovered emgithub and wrote a little shortcode for it, which you can find here.\n","permalink":"https:\/\/auch.cool\/posts\/2020\/emgithub\/","summary":"<p>During the upcoming AoC I want to publish some small posts about each little puzzle.\nIn preparation for this I was looking for a way in Hugo to include and highlight source code from GitHub without\ncopying it into each article.<\/p>\n<p>I discovered <a href=\"https:\/\/emgithub.com\">emgithub<\/a> and wrote a little shortcode for it, which you can find <a href=\"https:\/\/github.com\/flipez\/hugo-shortcodes\/blob\/master\/emgithub.html\">here<\/a>.<\/p>\n<script src=\"https:\/\/emgithub.com\/embed-v2.js?target=https:\/\/github.com\/flipez\/hugo-shortcodes\/blob\/master\/README.md&style=agate&showLineNumbers=off&showFileMeta=on&type=code&showCopy=on\"><\/script>","title":"Emgithub shortcode"},{"content":"Let&rsquo;s Encrypt has been around for some time now. 
And for a few weeks there has also been a beta phase that will open to the public in early December.\nI will keep it short here and show how you can generate your certificates automatically in just a few steps.\nFirst of all we have to install the Let&rsquo;s Encrypt client - currently (and it will probably stay this way) certificates can only be issued via the API.\npacman -S letsencrypt Now we adjust the nginx settings. This is a bit of effort at first - but it pays off, because you basically never have to change this setup again.\nTo run the process without webserver downtime, the domain must be reachable via a special URL. In my example that looks like this:\nserver { listen 80 default_server; listen [::]:80; server_name www.flipez.net flipez.net; location &#39;\/.well-known\/acme-challenge&#39; { default_type &#34;text\/plain&#34;; root \/tmp\/letsencrypt-auto; } location \/ { return 301 https:\/\/$server_name$request_uri; } } You will of course have to adapt the domain and, if needed, the directory to your liking. In my case the domain is for testing purposes and not heavily configured, so you may have to do the integration somewhat differently. In the end it is important that the URL is reachable accordingly.\nNow for the SSL settings. 
Here, too, you have to adapt this to your configuration:\nssl on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols TLSv1.2; ssl_prefer_server_ciphers on; ssl_stapling on; ssl_stapling_verify on; ssl_dhparam ssl\/dhparam.pem; ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL; add_header Strict-Transport-Security &#34;max-age=31536000; includeSubdomains&#34;; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; ssl_certificate \/etc\/letsencrypt\/live\/flipez.net\/fullchain.pem; ssl_certificate_key \/etc\/letsencrypt\/live\/flipez.net\/privkey.pem; Note that while these settings earn you an A+ rating at SSL Labs, they may not support all of your visitors.\nThe last two lines already show where the certificates are (supposed to be) stored.\nThat is it for nginx. Now to the console:\nFirst make sure that the server&rsquo;s directory exists:\nmkdir -p \/tmp\/letsencrypt-auto Now you can leave the work to the client:\nletsencrypt certonly \\ --server https:\/\/acme-v01.api.letsencrypt.org\/directory \\ -a webroot \\ --webroot-path=\/tmp\/letsencrypt-auto \\ --agree-dev-preview \\ -d www.flipez.net \\ -d flipez.net Here, too, it is obvious that adjustments have to be made. If you only want to renew, append a --renew. 
If everything works, Let&rsquo;s Encrypt confirms with Congratulations [&hellip;] and briefly explains once more how awesome you actually are.\nAfterwards restart nginx with a tool of your choice:\nnginx -s reload systemctl reload nginx So that all of this happens nicely and automatically, we also create a systemd service file and the matching timer.\n# \/etc\/systemd\/system\/letsencrypt.service [Unit] Description=renew certificates for flipez.net [Service] Type=simple ExecStart=\/usr\/bin\/mkdir -p \/tmp\/letsencrypt-auto ExecStart=\/usr\/bin\/letsencrypt --renew certonly \\ --server https:\/\/acme-v01.api.letsencrypt.org\/directory \\ -a webroot --webroot-path=\/tmp\/letsencrypt-auto \\ --agree-dev-preview -d www.flipez.net -d flipez.net ExecStart=\/usr\/bin\/nginx -s reload [Install] WantedBy=multi-user.target And then the matching timer\n# \/etc\/systemd\/system\/letsencrypt.timer [Unit] Description=run cert renew every month [Timer] OnUnitActiveSec=monthly Unit=letsencrypt.service [Install] WantedBy=multi-user.target This way the certificates are rotated once a month. You can of course adjust this as you like. If the nginx part is too complicated for you, you can also simply restart the webserver after each rotation. Konrad has described how that works in his blog.\n","permalink":"https:\/\/auch.cool\/posts\/2015\/11-17-letsencrypt\/","summary":"<p>Let&rsquo;s Encrypt has been around for some time now. 
And for a few weeks there has also been a beta phase that will open to the public in early December.<\/p>\n<p>I will keep it short here and show how you can generate your certificates automatically in just a few steps.<\/p>\n<p>First of all we have to install the Let&rsquo;s Encrypt client - currently (and it will probably stay this way) certificates can only be issued via the API.<\/p>","title":"Let's encrypt - Free, automated and open"},{"content":"Today I tried my hand at making lemonade. For the flavor I mainly wanted pineapple, lime and grenadine. After a while of clicking around the internet I had a rough idea of what I would need. Then there was also a tip from my trusted diner.\nWhat do we need:\n100 ml fresh lime juice (from about 3 limes)\n200 ml pineapple juice\n1200 ml tap water\n200 ml grenadine syrup\ncane sugar\npineapple chunks\nice cubes\nThe rest is really simple. You squeeze the limes and mix in the pineapple juice and the grenadine syrup. Then you top it up with water. It is up to you whether and how much sparkling or still water you use. So far I have achieved a good taste with 50\/50.\nWhen serving, simply put a teaspoon of cane sugar and a few pieces of pineapple at the bottom of each glass, add a few ice cubes and fill up with lemonade.\nSo far I have only managed about 4 tests. If anyone ever reads this and tries it, please let me know how it tasted and what you changed.\n","permalink":"https:\/\/auch.cool\/posts\/2015\/05-03-limonade\/","summary":"<p>Today I tried my hand at making lemonade. For the flavor I mainly wanted pineapple, lime and grenadine. 
After a while of clicking around the internet I had a rough idea of what I would need. Then there was also a tip from my trusted diner.<\/p>\n<p>What do we need:<\/p>\n<ul>\n<li>\n<p>100 ml <em>fresh<\/em> lime juice (from about 3 limes)<\/p>\n<\/li>\n<li>\n<p>200 ml pineapple juice<\/p>\n<\/li>\n<li>\n<p>1200 ml tap water<\/p>\n<\/li>\n<li>\n<p>200 ml grenadine syrup<\/p>","title":"When lemonade goes into beta testing.."},{"content":"If you want to know how fast the network actually is or whether a provider throttles - and when - usually a file is downloaded. The problem, however, is that this file ends at some point - if you want to send &lsquo;infinite&rsquo; amounts of data to keep the load up for a long time, there are certainly many options. The first thought was to symlink \/dev\/zero and serve it via http - that did not quite work. But the solution is closer than you might think: netcat!\nYou simply serve \/dev\/zero via netcat (possibly multiple times if several clients should download) and thereby also practically rule out disk speed as a bottleneck. On the server simply run:\ncat \/dev\/zero | nc -l -p &lt;port&gt; On the client you access the data like this:\nnc &lt;ip&gt; &lt;port&gt; &gt; \/dev\/null This way you can push a lot of data over the network. With &lsquo;pv&rsquo; you can get some more information. That would look like this:\nnc &lt;ip&gt; &lt;port&gt; | pv &gt; \/dev\/null ","permalink":"https:\/\/auch.cool\/posts\/2015\/04-01-nc-nettest\/","summary":"<p>If you want to know how fast the network actually is or whether a provider throttles - and when - usually a file is downloaded. 
The problem, however, is that this file ends at some point - if you want to send &lsquo;infinite&rsquo; amounts of data to keep the load up for a long time, there are certainly many options. The first thought was to symlink \/dev\/zero and serve it via http - that did not quite work. But the solution is closer than you might think: netcat!<\/p>","title":"Moar speed! moar! Testing network speed with netcat"},{"content":"If you only want a rough limit for webserver traffic, it makes sense to use nginx&rsquo;s built-in functions directly.\nYou can very easily set limits in a server\/location block.\nlimit_rate_after 300m; limit_rate 5000k; This sets a limit of roughly 50 MBit once more than 300 MB have been downloaded over a single(!) connection. You will of course have to choose the values yourself; they only serve as an example here.\nIf you instead want to throttle the remote server or IP regardless of the connection, the nginx zones come in handy.\nlimit_conn_zone $binary_remote_addr zone=perip:300m; server { limit_conn perip 5; } This way you can, for example, set a limit for the IP of a server, regardless of how many connections it opens - these can be limited as well.\nMore on this can be found in the nginx docs\n","permalink":"https:\/\/auch.cool\/posts\/2015\/03-29-nginx-bandwith-limit\/","summary":"<p>If you only want a rough limit for webserver traffic, it makes sense to use nginx&rsquo;s built-in functions directly.<\/p>\n<p>You can very easily set limits in a server\/location block.<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-nginx\" data-lang=\"nginx\"><span class=\"line\"><span class=\"cl\"><span class=\"k\">limit_rate_after<\/span> <span class=\"mi\">300m<\/span><span class=\"p\">;<\/span>\n<\/span><\/span><span 
class=\"line\"><span class=\"cl\"><span class=\"k\">limit_rate<\/span> <span class=\"mi\">5000k<\/span><span class=\"p\">;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>Das setzt ein Limit auf etwa 50MBit wenn mehr als 300MB von einer(!) Verbindung aus geladen werden. Die Werte muss nat\u00fcrlich jeder f\u00fcr sich selbst einstellen und dienen hier nur dem Beispiel.<\/p>\n<p>M\u00f6chte man nun aber unabh\u00e4ngig der Verbindung den Server oder die IP des Gegen\u00fcber drosseln so bieten sich die nginx Zonen an.<\/p>","title":"Trafficlimit mit nginx"},{"content":"Ich habe mich mal etwas mit der &lsquo;Sicherheit&rsquo; beim ausliefern von Websiten besch\u00e4ftigt und wie man dort am besten manipulieren kann. Umso mehr Kontrolle man im Netzwerk hat, umso weniger braucht man nat\u00fcrlich beachten. Generell braucht man aber eigentlich gar keine Kontrolle und arbeitet einfach mit vielen verschiedenen Methoden zusammen. Sp\u00e4testens mit arpspoof ist man dann Gateway und hat alles was man so braucht.\nNehmen wir mal an, wir leiten per DNS alle Anfragen an &lsquo;google.de&rsquo; an einen seperaten Server. Am angegebenen Ziel haben wir einen nginx welcher wiederum die Anfragen unterschiedlich bearbeitet. Wir wollen alles, was unter Google gesucht werden soll an eine lokale Flask-App geben. Daf\u00fcr eignen sich die folgenden Locations:\nserver { location \/ { proxy_pass http:\/\/localhost:5000; } location \/search { proxy_pass http:\/\/localhost:5000; } } Nun haben wir aber das Problem, dass dort auch Resourcen die Google lokal abruft (Logo etc) mit ausgeliefert werden m\u00fcssten. Das umgehen wir einfach indem wir alles was mit Bildern o\u00c4 zutun hat einfach wieder an Google abtreten.\nlocation \/images { proxy_pass http:\/\/216.239.32.20; } location ~ favicon.ico { proxy_pass http:\/\/216.239.32.20; } Damit haben wir also schon mal alle Request die wir wirklich zu bearbeiten haben an die korrekte Adresse verwiesen. 
In Python we now only need to take the request, execute it, modify the result and deliver it. That all sounds fairly complex, but it is actually quite simple. Here is the finished app:\nfrom flask import Flask, request import urllib.request import re app = Flask(__name__) app.debug = True @app.route(&#39;\/&#39;) @app.route(&#39;\/search&#39;) def index(): query = request.args.get(&#39;q&#39;) if query: url = &#39;http:\/\/216.239.32.20\/search?q=%s&#39; % (query,) a = urllib.request.urlopen(getRequest(url)) return setFacebookFoobar(a.readall().decode(&#39;utf-8&#39;)) moo = urllib.request.urlopen(&#39;http:\/\/216.239.32.20\/&#39;).readall() return re.sub(&#39;action=&#34;[^&#34;]+&#39;,&#39;action=&#34;\/&#39;, str(moo)).replace(&#34;b&#39;&#34;, &#34;&#34;) def setFacebookFoobar(content): body = re.sub(&#39;[Ff]acebook&#39;, &#39;foobar&#39;, content) return body def getRequest( url ): req = urllib.request.Request( url, data=None, headers={ &#39;User-Agent&#39;: &#39;Mozilla\/5.0 (X11; Linux x86_64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/41.0.2227.0 Safari\/537.36&#39; } ) return req def getimages(): return urllib.request.urlopen(&#39;http:\/\/216.239.32.20&#39; + request.path).readall() if __name__ == &#39;__main__&#39;: app.run(host=&#39;0.0.0.0&#39;) ","permalink":"https:\/\/auch.cool\/posts\/2015\/03-06-websitespoof\/","summary":"<p>I have been looking into the &lsquo;security&rsquo; of delivering websites and how best to manipulate it. The more control you have over the network, the less you have to take care of. In general, though, you do not really need any control at all and can simply combine many different methods. At the latest with arpspoof you are the gateway and have everything you need.<\/p>\n<p>Let&rsquo;s assume we redirect all DNS requests for &lsquo;google.de&rsquo; to a separate server. 
At the given target we run an nginx which in turn handles the requests differently. We want to hand everything that is searched on Google to a local Flask app. The following locations are suitable for this:<\/p>","title":"Foobar instead of Facebook - HTML spoofing with Python"},{"content":"Since 16 September 2014 Netflix has also been available in Germany. Highly praised and hotly anticipated, you can now watch films and series in Germany via the streaming portal. I had already looked at the US version beforehand and, like many others, was worried that the catalogue here in Germany would be much smaller.\nIf you are worried now, rest assured, it is not that bad. Of course the catalogue is - in parts considerably - smaller than the &lsquo;original&rsquo;. Nevertheless, the flagships such as House of Cards or Orange is the new Black are available. In German as well. Unlike other streaming providers, all films and series are also available in the original language.\nSo who is Netflix for? That is unfortunately not easy to say; the best thing is to use the free trial month and decide for yourself. Series fans who watch a lot of US series in the original language might quickly be underwhelmed by the catalogue in Germany; the &lsquo;original&rsquo; is the better fit there. For the average user, however, Netflix offers a solid catalogue of shiny new series and German productions as well. Talking Netflix down across the board would be complaining on a high level. So far so good, I will keep watching for now :)\n","permalink":"https:\/\/auch.cool\/posts\/2014\/09-25-netflix-first-impression\/","summary":"<p>Since 16 September 2014 Netflix has also been available in Germany. Highly praised and hotly anticipated, you can now watch films and series in Germany via the streaming portal. 
I had already looked at the US version beforehand and, like many others, was worried that the catalogue here in Germany would be much smaller.<\/p>\n<p>If you are worried now, rest assured, it is not that bad. Of course the catalogue is - in parts considerably - smaller than the &lsquo;original&rsquo;. Nevertheless, the flagships such as House of Cards or Orange is the new Black are available. In German as well. Unlike other streaming providers, all films and series are also available in the original language.<\/p>","title":"Netflix: First impressions"},{"content":"Today I finally managed to rescue my fully bricked LinkStation. After some back and forth, the support finally provided a useful guide. I will break it down here so you can get started right away.\nWhat exactly was the problem? The LinkStation Duo (LS-WXL) stores its data in some completely broken file system. Even worse, this elegant piece of hardware stores its firmware on the hard drives. That is not a problem as long as you replace no drive or only one. But if you replace both, you end up with a somewhat too bulky, dim disco ball that cheerfully throws error codes around.\nHow does that show? In my case it was a 6-fold blinking with short pauses, followed by a long one, and then all over again. Somewhere in endless, incomprehensible lists these are even documented.\nSo let us get started. We need the following software: Oh, and only Windows is supported&hellip;\nFirmware\nTFTP Image\nNAS Navigator\nSo we are on Windows and have downloaded the three packages above (important!). 
First we navigate to the adapter settings of our primary network card and change them to the following static values:\nIP address: 192.168.11.1 Subnet mask: 255.255.255.0 This is the subnet in which the LinkStation reports when it has no firmware left. Now we connect the LinkStation directly to the PC. By now the internet is gone as well. Take a deep breath, it will be fine!\nNow we start the TFTP-Boot.exe program and wait until it is lying in wait with an &ldquo;accepting requests..&rdquo;. Then we power on the LinkStation and wait until it shows the usual error code. Now we hold the Function button for 2-3 seconds. The LinkStation should now blink blue. Shortly after, the TFTP boot should have transferred 2 sets of blocks to the LinkStation.\nNow it gets really broken. We start the NAS Navigator and answer the question whether we want to adjust the IP with NO! The LinkStation now shows up in the overview, probably with a different IP than we would have expected. So we change our static, local IP to the subnet of the LinkStation. If, for example, the IP address is shown as 169.254.127.15 and the subnet mask as 255.255.0.0, we have to change the IP of the PC to 169.254.127.16 and the subnet mask to 255.255.0.0.\nNow we start LSUpdater.exe, which should detect the LinkStation automatically. There we simply click Update and hope for the best.\n&ndash; I will continue this in the coming days. Among other things, I will cover how to set up completely new hard drives without a partition table or with a wrong one, and a few other small errors will be addressed &ndash;\n","permalink":"https:\/\/auch.cool\/posts\/2014\/09-03-buffalo-linkstation\/","summary":"<p>Today I finally managed to rescue my fully bricked LinkStation. 
After some back and forth, the support finally provided a useful guide. I will break it down here so you can get started right away.<\/p>\n<h6 id=\"was-genau-war-das-problem\">What exactly was the problem?<\/h6>\n<p>The LinkStation Duo (LS-WXL) stores its data in some completely broken file system. Even worse, this elegant piece of hardware stores its firmware on the hard drives. That is not a problem as long as you replace no drive or only one. But if you replace both, you end up with a somewhat too bulky, dim disco ball that cheerfully throws error codes around.<\/p>","title":"Unbricking the Buffalo LinkStation"},{"content":"Hello everyone,\ntoday I want to turn to a different topic. Until now there have only been posts about the Raspberry Pi here; that changes today.\nSince I am also into photography as a hobby, I will in the future also post about my experiences with my DSLR. The start shall be my biggest project so far. Right across from my workplace a new steel mill has been under construction for several months. The perfect opportunity to experiment a bit with timelapse shots.\nI have seen many &ldquo;timelapse&rdquo; recordings online and also experimented with them in the past (see YouTube) and found that it is actually a lot of fun.\nSo I started reading up and finding out how best to do something like this. I would like to share these experiences and this knowledge with you, to give a few tips on how best to start to anyone toying with the idea of trying this.\nHow, where, what? First of all there is of course the question of &ldquo;what&rdquo; should be seen at all. 
For something like this, large squares with many cars and people, or construction sites, are a natural fit. In the end everyone has to find that out for their own surroundings. After all, not everyone has huge construction sites or big cities on their doorstep. In my case I was lucky with the construction site, otherwise nothing happens around here either :)\nOnce you have spotted the object of your dreams, the question of &ldquo;how&rdquo; is close at hand. There are basically two options. You can create timelapses either from many single pictures or directly as a continuous video, which you later just trim and speed up accordingly. Recording as video usually only makes sense for shorter recordings, as these generally use more storage. In my case I chose single pictures, since the recording is supposed to run over several months. Besides DSLRs, many hard drives are unfortunately also quite expensive.\nThis brings us to the part where we need to clarify what is actually required. I would like to illustrate that with two examples. First, a short recording of maybe 1 or 2 hours. In general it is important to use a tripod or to otherwise fix the camera firmly in place. Shakes are more than ugly in such a video. For recording in video mode I always use the &ldquo;NTSC&rdquo; setting and a resolution of 1920x1080 @ 30 fps. This may differ depending on the camera. Once we have positioned the camera so that everything is in view and it sits firmly, we basically only need to switch to video mode and start the recording. 
I will get to post-processing later.\nIf we do not have a camera that can record video, or if the recording is supposed to run for several hours so that storage will not suffice, we have to work with single shots, i.e. individual pictures. To take a great number of pictures continuously, the &ldquo;Magic Lantern&rdquo; firmware is a good option for Canon cameras.\nHow exactly you work with Magic Lantern, you will soon learn here\n","permalink":"https:\/\/auch.cool\/posts\/2013\/06-19-timelapse-location\/","summary":"<p>Hello everyone,<\/p>\n<p>today I want to turn to a different topic. Until now there have only been posts about the Raspberry Pi here; that changes today.<\/p>\n<p>Since I am also into photography as a hobby, I will in the future also post about my experiences with my DSLR. The start shall be my biggest project so far. Right across from my workplace a new steel mill has been under construction for several months. The perfect opportunity to experiment a bit with timelapse shots.<\/p>","title":"DIY timelapse - The location"},{"content":"Today we want to talk about the Raspberry&rsquo;s suitability as a media device. The Apple devotees among you will like it - today we equip the Raspberry with AirPlay.\nAll we need: Raspberry Pi, network cable or USB WiFi stick, 3.5 mm jack sound system (for more quality, USB systems are a good option), an iDevice or iTunes.\nStep 1 - Preparation First we update the package lists:\nsudo apt update Then we set the jack output as the default. 
In most cases the sound is output via the HDMI port.\namixer cset numid=3 1 Here 0 stands for automatic, 1 for headphones - i.e. the jack - and 2 for the HDMI output.\nStep 2 - Package installation For our &ldquo;AirPi&rdquo; we use Shairport, which we can install with a few additional packages directly from GitHub. For that we first have to install Git. We do it like this:\nsudo apt install git libao-dev libssl-dev libcrypt-openssl-rsa-perl libio-socket-inet6-perl libwww-perl avahi-utils Now we download Shairport with this command:\ngit clone https:\/\/github.com\/albertz\/shairport.git shairport * Then we change into the Shairport folder and compile:\ncd shairport sudo make Step 3 - Start Shairport automatically We install it with\nsudo make install And copy the init file into the startup directory\nsudo cp shairport.init.sample \/etc\/init.d\/shairport Then we change into the init.d directory and assign Shairport the required permissions\ncd \/etc\/init.d sudo chmod a+x shairport sudo update-rc.d shairport defaults Now we edit the settings (in \/etc\/init.d)\nOpen with\nsudo nano shairport And change DAEMON_ARGS from\nNAME=shairport DAEMON=&#34;\/usr\/local\/bin\/shairport.pl&#34; PIDFILE=\/var\/run\/$NAME.pid DAEMON_ARGS=&#34;-w $PIDFILE&#34; to\nNAME=shairport DAEMON=&#34;\/usr\/local\/bin\/shairport.pl&#34; PIDFILE=\/var\/run\/$NAME.pid DAEMON_ARGS=&#34;-w $PIDFILE -a NameDesAirPi&#34; Saving is done, as usual, with CTRL+O and exiting with CTRL+X\nStep 4 - Start So. With that we are done and can now start it. 
Quite simply via:\nsudo \/etc\/init.d\/shairport start Have fun with your AirPi ;)\n","permalink":"https:\/\/auch.cool\/posts\/2013\/03-30-raspberry-pi-airplay\/","summary":"<p>Today let's talk about the Raspberry Pi's suitability as a media device. The Apple devotees among you will like this - today we equip the Raspberry with AirPlay.<\/p>\n<p>Everything we need: a Raspberry Pi, a network cable or USB Wi-Fi stick, a 3.5 mm jack sound system (USB systems offer better quality), and an iDevice or iTunes.<\/p>\n<h2 id=\"schritt-1---vorbereiten\">Step 1 - Preparation<\/h2>\n<p>First we update the package lists:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">sudo apt update\n<\/span><\/span><\/code><\/pre><\/div><p>Then we set the 3.5 mm jack output as the default. In most cases audio is output via the HDMI port.<\/p>","title":"Raspberry Pi - AirPlay"},{"content":"In the first post (here) we already installed Raspbian. Since you'll want to take advantage of the Raspberry's small size and not always run it with a monitor, mouse, and keyboard, in the following post we set up remote access.\nFirst we check the packages for new versions and, if any are available, install them simply via\napt update apt upgrade The welcome message (motd - Message of the Day) can easily be changed with a text editor. Save with CTRL + O and exit with CTRL + X.\nnano \/etc\/motd Now we also change the port for remote access:\nnano \/etc\/ssh\/sshd_config You can pick any port you like and simply change it in the corresponding place. Save the change and close the file.
To apply the change we restart SSH:\n\/etc\/init.d\/ssh restart From now on we can access the Raspberry from another PC. To find out its IP we can either check the router or display the network properties via\nip a Under Windows, remote control is possible e.g. with the tool PuTTY.\nWith that, remote access is all set up.\nHappy remoting! ","permalink":"https:\/\/auch.cool\/posts\/2013\/03-18-raspberry-pi-remote\/","summary":"<p>In the first post (<a href=\"\/posts\/2013\/03-17-raspberry-pi-schritte\/\" title=\"Raspberry Pi - Die ersten Schritte\">here<\/a>) we already installed Raspbian.\nSince you'll want to take advantage of the Raspberry's small size and not always run it with a monitor, mouse, and keyboard, in the following post we set up remote access.<\/p>\n<p>First we check the packages for new versions and, if any are available, install them simply via<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" class=\"chroma\"><code class=\"language-bash\" data-lang=\"bash\"><span class=\"line\"><span class=\"cl\">apt update\n<\/span><\/span><span class=\"line\"><span class=\"cl\">apt upgrade\n<\/span><\/span><\/code><\/pre><\/div><p>The welcome message (motd - Message of the Day) can easily be changed with a text editor. Save with <code>CTRL + O<\/code> and exit with <code>CTRL + X<\/code>.<\/p>","title":"Raspberry Pi - Remote Control"},{"content":"Today let's take care of a minimal installation of Raspbian on the Raspberry. For that we need a few things:\nRaspberry Pi SD card + card reader HDMI-capable monitor \/ TV USB keyboard Network cable Step 1 - Preparing the SD card First we format the SD card as FAT32.
Then we download the Raspbian installer (raspbian.org)\nThis installer runs the first steps with files from the SD card and then downloads the remaining required data. So always make sure there is an internet connection.\nWe then extract these files directly onto the SD card. With that the card is already done and we insert it into the Raspberry.\nStep 2 - The Raspbian installation We connect the Raspberry to the internet, a keyboard, and a monitor, and give it some juice via USB.\nNow the installer should start right away and ask us a few questions. The following answers usually apply, but may differ in your particular case.\nSelect language: Deutsch Select your location: Deutschland Configure the keyboard: Deutsch Configure the network: Enter a name for the device here, e.g. Raspberry Configure the network: This item can simply be skipped with Enter Choose a mirror of the Debian archive: mirrordirector.raspbian.org Choose a mirror of the Debian archive: \/raspbian\/ Choose a mirror of the Debian archive: The proxy question can in most cases be skipped with Enter Now we are asked whether we want to continue the installation without a kernel. We answer with &lt;Yes&gt; The installer now downloads further data from the web server. The ideal moment to put the coffee on&hellip; Once we are back from the coffee machine, we can enter the user data.\nRoot password ( the root user will later be needed for installing packages and editing files. ) Full name Username Password for the user Now we set the time zone ( in Germany that should be Berlin ).\nThen the installer suggests how we can partition the memory card.
It might look like this:\n1 primary 78.6 MB B f fat32 \/rpiboot 2 primary 255.9 MB f swap swap 3 primary 3.6 GB f ext3 \/ We confirm this with &lt;Finish partitioning and write changes to disk&gt; and then with &lt;Yes&gt;\nNow the base system is installed and set up. Perfect time to fetch the coffee, which should be ready by now&hellip;\nUsually the security.debian.org repository cannot be reached. Confirm with &lt;Continue&gt; and answer the question about information with &lt;Yes&gt; or &lt;No&gt; - as you like.\nNow we are asked which software we want to install as well. In this case we leave it at what is already selected:\nSSH server Standard system utilities These are now installed. The coffee should be at drinking temperature - enjoy! Afterwards we confirm once more with &lt;Continue&gt; and the Raspberry should now reboot.\nWelcome to the world of Raspbian ;) As a next step, remote access can be set up here.\n","permalink":"https:\/\/auch.cool\/posts\/2013\/03-17-raspberry-pi-schritte\/","summary":"<p>Today let's take care of a minimal installation of Raspbian on the Raspberry. For that we need a few things:<\/p>\n<ul>\n<li>Raspberry Pi<\/li>\n<li>SD card + card reader<\/li>\n<li>HDMI-capable monitor \/ TV<\/li>\n<li>USB keyboard<\/li>\n<li>Network cable<\/li>\n<\/ul>\n<h2 id=\"schritt-1---die-sd-karte-vorbereiten\">Step 1 - Preparing the SD card<\/h2>\n<p>First we format the SD card as FAT32. Then we download the Raspbian installer (<a href=\"http:\/\/www.raspbian.org\/RaspbianInstaller\" title=\"Raspian.org\">raspbian.org<\/a>)<\/p>\n<p>This installer runs the first steps with files from the SD card and then downloads the remaining required data. So always make sure there is an internet connection.<\/p>","title":"Raspberry Pi - First Steps"}]