{"title":"Gianluca Arbezzano - GianArb","link":[{"@attributes":{"href":"https:\/\/gianarb.github.io\/atom.xml","rel":"self"}},{"@attributes":{"href":"https:\/\/gianarb.github.io\/"}}],"updated":"2026-03-17T22:05:51+00:00","id":"https:\/\/gianarb.it","author":{"name":"Gianluca Arbezzano","uri":"https:\/\/gianarb.it","email":"gianarb92@gmail.com"},"entry":[{"title":"I own a business and ShippingBytes got its own VAT number","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/shippingbytes-got-vat-i-got-a-business"}},"description":"From \"ShippingBytes did not work\" to owning a business: why I made the shift, how I work, and what I\u2019m offering.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2026-01-25T06:08:27+00:00","published":"2026-01-25T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/shippingbytes-got-vat-i-got-a-business","content":"<p>A few months ago I wrote a blog post with the title <a href=\"\/blog\/shippingbytes-didnot-work\">\u201cShippingBytes did not\nwork for me\u201d<\/a>. It did not age well at all.\nI now own a business, and ShippingBytes has its own VAT number.<\/p>\n\n<p>I bought this domain when I was looking for a way to decouple myself from my\ncareer because I feel this blog should be for myself, instead it was all about\nGianluca as a developer. As you can see I was looking for a <code>.com<\/code> because\nthat\u2019s what real businesses still need, I thought it was a \u201ctech enough\u201d name and\nit was available so here it is.<\/p>\n\n<p>I tried to blog on it, but the quality of my writing was aligned with my pretty\nlow motivation at the time. When I revisited them, I decided the outcome was\nnot good enough - it did not spark the passion I have for the field.<\/p>\n\n<p>When I started sharing my desire to run a little consulting firm almost\neveryone I talked to was surprised and they thought I already had one.\nSurprisingly enough I worked as a full-time employee for my entire career until\nnow.<\/p>\n\n<p><img src=\"\/img\/shippingbytes.svg\" alt=\"ShippingBytes Logo. It is an &quot;S&quot; surrounded by an RNA since it is how the body exchanges informations\" width=\"80%\" \/><\/p>\n\n<h2 id=\"why-now\">Why now?<\/h2>\n\n<p>Career is a pendulum. For many years open-source contributions, conferences\nwere a big thing but they were always something I did for myself, often\nsupported by my employer but in a very unstructured way.<\/p>\n\n<p>Just before COVID, I decided to take a break from all of that. I joined a couple\nof startups and the pandemic changed many things in the field. Somehow it helped\nme stay true to my intention to avoid \u201cextra work\u201d for some time - probably for\ntoo long. I ended up feeling \u201calone\u201d from a work point of view. Since COVID,\naccording to my CV I changed 4 jobs and I never saw a colleague in person.<\/p>\n\n<p>I ended up pushing aside the soft skills, not purely related to coding that I\nused for the majority of my career for a bit too long.<\/p>\n\n<p>When I turned to the job market it did not spark joy to me, after one year of\nlooking around I could not find a job spec describing what I wanted to do, they\nwere all too AI-sloppy or restrictive.<\/p>\n\n<p>For me, those were all signs that it was time for me to figure out my own sweet spot.\nExperimenting with building not only software but systems, running my own business\nhas already pushed me out of my comfort zone into pretty boring things. 
Figuring out how to open a bank account (bureaucracy is pretty fun here in Italy), reaching out to old friends and managers to find some work. Some of those tasks are more fun than others, but they require some of the skills that I left behind and that make me who I am.<\/p>

<p>In a field that looks like it is living through a revolution, I prefer to feel free to make my own mistakes instead of having to suffer the ones made by somebody else.<\/p>

<h2 id=\"so-now-what\">So now what?<\/h2>

<p>I have a few ongoing product opportunities; those run on their own. ShippingBytes, as I said, is a consulting business. It is a way to reach out to people I collaborate or work with, see what they are up to and how I can help. <a href=\"https:\/\/shippingbytes.com\/self-service\/\">If you are one of those, I am here!<\/a><\/p>

<p>I decided to set it up as a monthly subscription, with a price you can find written on my work site, because I think this is the right way to build a collaboration where we are all involved in the value creation. A subscription means I work with you on outcomes, not hours.<\/p>

<p>Investing time in estimating tasks or billing hourly communicates the wrong things. I am not the one who should figure out how long a task should take. The customer knows how much time it should take. I can help you get the best out of the time you want to allocate, and if things change along the way - fine! We will adapt.<\/p>

<p>This is why I don\u2019t want to make estimates, but I do want to collaborate not only from a code perspective but also in roadmap creation.<\/p>

<h2 id=\"what-is-this-business-all-about\">What is this business all about?<\/h2>

<p>Ok, software development and consulting - but what exactly?<\/p>

<ul>
  <li>Open source maintenance and community: Whether you\u2019re all in with open source and need help maintaining repositories, or you just have a few repos to engage with your community, I can help. I\u2019m happy to review pull requests, triage issues, manage releases, and implement solid community collaboration workflows.<\/li>
  <li>Software development and maintenance: from Proof of Concept to keeping alive programs that serve their purpose.<\/li>
  <li>Kubernetes development: I served as RelEng for K8S and developed operators and other extensions. If you have code that interacts with the Kubernetes API, I am here to help.<\/li>
  <li>Observability and troubleshooting: I\u2019ve been running my own code since my first day as a developer. I was involved early on with OpenCensus, OpenTracing and OpenTelemetry, and I enjoy making systems understandable.<\/li>
  <li>DevOps and automation.<\/li>
  <li>Apache Arrow, DataFusion, workflow engines: I\u2019ve worked for various companies building their own databases and experienced GenAI from the inside. I want to help tame this beast!<\/li>
<\/ul>

<p>I know it sounds like a lot - how can I be helpful across all these fields? The reality is that with almost 15 years of experience, I\u2019ve seen many successful architectures and navigated through failures. I\u2019m here to keep working on software that enables people to efficiently build and run internet-scale systems. I like unknowns and getting my hands dirty.<\/p>

<h2 id=\"what-am-i-looking-for\">What am I looking for?<\/h2>

<p>Luckily for me, I got my first two contracts signed even before starting the business. 
I set three as the limit and I am not looking for more.<\/p>

<p>I worked with early-stage startups for the majority of my career and I like the environment. If you read this article carefully, you\u2019ll see that I\u2019m also looking to expand into product-oriented companies (not just as a developer). Angel investment is something I am actively working on.<\/p>

<p>At this stage I am looking to reconnect with old colleagues and build new connections!<\/p>
"},{"title":"Keep in mind your sustainability","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/keep-in-mind-your-sustainaibility"}},"description":"A story about sustainability that starts from gardening and ends with a call to action to find a moral for it","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2025-10-03T06:08:27+00:00","published":"2025-10-03T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/keep-in-mind-your-sustainaibility","content":"<p>Five years ago we bought our current house.<\/p>

<p>It was a solid but old house from the \u201960s; we had to remodel a few things inside because back then the rooms were many and tiny.<\/p>

<p>Plumbing, electricity, heating, windows, floors: we ripped it all out, replacing it with modern solutions. Two and a half years of hard work from various people.<\/p>

<p>The first thing we did as soon as we bought the house was gardening. We were still living with our parents, making the plan for how to do the renovation, but vegetables were already planted. It feels crazy, but yep. I think this is how excitement and passion feel.<\/p>

<p>I always try to keep my impact on the planet at a minimum. Minimal plastic, watering just enough and in the most efficient way possible. For example, even though near the garden I had access to the water system we use in the house, clean and public water from the city provider, I repurposed an old tank and bought a used pump to get water from a small river nearby.<\/p>

<p><img src=\"\/img\/gianluca-spicy.jpg\" alt=\"\" \/><\/p>

<p>The river and the pump are unsuitable for the purpose. The river is not deep enough, there is not enough water, and the pump is deeply uncomfortable. You need to fill it with water and put the hose in correctly, otherwise it won\u2019t suck enough water and you will need to do the filling process again.<\/p>

<p>All of this to say that I was spending 30 minutes of my life every two or three weeks trying to fill the tank, only to end up soaked in water.<\/p>

<p>After two years Ludovica arrived and I had even less time to play with a garden, but we knew it was important. This is when I realized that I had to accept the compromise of being a bit less green, relying sometimes on public water instead of getting too stressed, tired and burned out over a few liters.<\/p>

<p>I know every liter counts, but you should keep track of the big picture. A conscious shortcut today that keeps you running is the best thing you can do for yourself. 
I try to keep consumption under control and I don\u2019t waste it: I still get water from the river the majority of the time, but for the times I can\u2019t, I am glad I can rely on pressure from the city water system.<\/p>

<p>Next year we\u2019ll renovate the main entrance of the house. We will dig outside, and as part of that we will install a big tank to collect water from the roof; from there I will have another option I can rely on, not only to water the plants (now I have some fruit trees, and the terrace is full of plants and flowers as well) but also to wash cars and other things.<\/p>

<p>The big picture is always the same: keep a low impact and maximize the resources we have. But sustainability is important not only when it comes to the planet but also to yourself, and that is easy to forget. Do your best and love yourself; it is a great starting point.<\/p>

<p>I am sure there is a moral in this story, and something that applies to software as well, but tonight I will leave the connection to you.<\/p>
"},{"title":"ShippingBytes did not work for me (this article did not age well)","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/shippingbytes-didnot-work"}},"description":"ShippingBytes is on hold! (update: not anymore)","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2025-04-02T06:08:27+00:00","published":"2025-04-02T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/shippingbytes-didnot-work","content":"<section>
    <div class=\"row\">
        <p class=\"note\">This article did not age well. I have an update for you here: <a href=\"\/blog\/shippingbytes-got-vat-i-got-a-business\" target=\"_blank\">\"I own a business and ShippingBytes got its own VAT number\"<\/a>.<\/p>
    <\/div>
<\/section>

<p>In June 2024 I started a new website called ShippingBytes. I thought I was looking for a place where I could write tech articles about lessons learned and similar things that felt wrong to write here.<\/p>

<p>This blog is meant to be more like a diary, writing about what I am doing and why more than how. I wrote many how-to articles here, but I also skipped many because I didn\u2019t want to publish them here.<\/p>

<p>Anyway, I tried to figure out a writing schedule, daily, weekly, but it quickly turned into never.<\/p>

<p>My plan was to use ShippingBytes to do a bit of branding for a future where I would find myself doing consulting and freelancing work. I am not sure if it will ever happen or if I will like such a career, but I tend to change companies more often than I would like, and this makes me think that being an employee is probably not what I should do for my entire career.<\/p>

<p>But something I learned along the way is that there are many, many companies doing very cool things that I want to learn from and work with. And many people told me that doing consulting and freelancing has its own complexity and its own skillset, like everything. I also feel like my skillset is very wide and not that deep. I gained a lot of experience with Kubernetes, I joined the Release Team for a few iterations, but I don\u2019t want to work with it for 8 hours a day. For this reason, if you want to pay for the most up-to-date and most skilled Kubernetes expert, I am not that person: I can figure everything out, but I don\u2019t know everything, and I feel the same about everything I know. 
A few people are really into eBPF, they like the topic, they expanded as security experts, and I think that is the good intersection of skills and market that can turn into a sustainable consulting experience. I can\u2019t relate like that to any technology I work with or technology I want to invest in.<\/p>

<p>Yep, there is a bit of mixed feeling here as you can notice, and if you feel the same let me know with an email to <a href=\"mailto:ciao@gianarb.it\">ciao@gianarb.it<\/a>, because I want to know how you feel about it as well.<\/p>

<p>This article may feel a bit negative, but I learned a few things and I started to experiment with alternative models that may work for me; they are long term and I don\u2019t have enough to share just yet. The TLDR is that I started sharing my wide tech skills with a couple of friends who happen to run a successful business and want to expand on the digital side (who does not want to?) in exchange for a piece of the stake in the company itself or some sort of royalties. Will it work? I don\u2019t know, but I will let you know.<\/p>

<p>This may force me to move out of a full-time job, but we will see. Diversifying is something that I think I want to apply to my daily job as well, not only as an investing strategy, because it sounds like a good approach for these times of uncertainty, where a full-time job is not as stable as it should be.<\/p>

<p>Anyway, to get back to the main topic: I am going to keep ShippingBytes as a domain for a bit because I like the name, and I thought about the name for a long time, so I want to see how it ages a bit more. But I removed the blog and I won\u2019t write there. I left the newsletter box because you never know, but I think I know.<\/p>
"},{"title":"LLM raises deep questions","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/llm-raises-deep-questions"}},"description":{},"image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2025-01-31T06:08:27+00:00","published":"2025-01-31T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/llm-raises-deep-questions","content":"<p>I haven\u2019t written here about my relationship with LLMs in software. If you want a hint, I think this article from Armin Ronacher <a href=\"https:\/\/lucumr.pocoo.org\/2025\/1\/30\/how-i-ai\/\">\u201cHow I Use AI: Meet My Promptly Hired Model Intern\u201d<\/a> gives a good picture of how I use them.<\/p>

<p>My toolkit is very different since I\u2019ve been locked into <code>vim<\/code> since 2014 and I haven\u2019t found good integrations for it yet. I use <a href=\"https:\/\/aider.chat\/\">aider<\/a> because it allows me to converse with an agent from my terminal. It also has good integration with git and so on.<\/p>

<p>That said, deep questions arose in my mind after an intense coding session with Claude.<\/p>

<h2 id=\"wow-it-did-a-pretty-good-job\">Wow it did a pretty good job<\/h2>

<p>When it does a good job (and if you know how and when to use it, it usually does), I feel good because it helps me write code quickly.<\/p>

<p>I know the people I work for are happy about it since my performance has increased, so that\u2019s good.<\/p>

<p>It also helped me to save time usually spent googling, reading StackOverflow and so on. This is the way we learned, but apparently LLMs can help with that as well. 
Take a look at <a href=\"https:\/\/newsletter.pragmaticengineer.com\/p\/the-pulse-119\">\u201cThe Pulse #119: Are LLMs making StackOverflow irrelevant?\u201d<\/a> from The Pragmatic Engineer if you want to know more about this topic.<\/p>

<h2 id=\"the-dark-side\">The dark side<\/h2>

<p>On the flip side, things are a lot less peaceful. I am not concerned about the job market because I understand that my value doesn\u2019t stop with the code I write, but I do get paid to write software. There is a relationship there and it would be naive to evangelize against that.<\/p>

<p>Also, I used to be passionate about writing code, solving puzzles and so on. How did I end up being happy when something else writes code for me? How valuable is the outcome of my work if it\u2019s better for something else to take care of it because it can do it quicker than I can? Maybe not better, but good enough by any reasonable measure.<\/p>

<h2 id=\"embrace-this-new-tool\">Embrace this new tool<\/h2>

<p>LLMs look a lot faster when it comes to writing code. But they do not know what to do. Also, they write a specific type of code: the code the majority of people have already written, in the same way.<\/p>

<p>For some people, or in some situations, this is everything they need. You need a common solution to a common problem, so why should you write it yourself or waste time looking for what to copy-paste, or for a library to import that will contain 4% of what you need and 96% garbage?<\/p>

<p>People scared that LLMs will take their jobs should read this sentence twice, because in it lies a strategy to avoid that fate. We need to get better. We need to be good at solving problems in better ways. This is challenging and it makes me feel alive.<\/p>

<p>I am not excited about writing the next navbar with a few tables to display a collection of rows from a database. I may need to do it because I need to get some work done, but now I have another tool in my toolchain that can help with that.<\/p>

<h2 id=\"llms-can-fix-complicated-problems\">LLMs can fix complicated problems<\/h2>

<p>I don\u2019t want to reduce everything to: \u201cif you only work on easy problems you can be replaced by LLMs\u201d. They can solve complicated problems as well, but I think that for the next couple of years in software, people capable of solving problems in different ways will keep thriving! And to develop such skills you need to quickly bootstrap \u201cthe common solution\u201d and add your creativity on top.<\/p>

<p>This was always the fun part of code. Twelve years ago when I started, an HTML page with some PHP code stuck to it was everything I needed to feel challenged. Today I need something more than that, and that\u2019s another thing to look for if you want to stay relevant: the right environment that won\u2019t make you feel no better than an LLM. It can be a team, a manager, yourself, open source. You are a lot more than that!<\/p>
"},{"title":"Hey it's been a while","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/it-as-been-a-while"}},"description":"Classic 2024, 2025 new year post","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2025-01-13T06:08:27+00:00","published":"2025-01-13T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/it-as-been-a-while","content":"<p>Ciao! I just want to let you know that I am all good! 
The last year of writing and working is hard to define.<\/p>

<p>I started <a href=\"https:\/\/shippingbytes.com\">shippingbytes<\/a> with an idea, but I didn\u2019t like it so I am not writing that much there. I may even close it and call it a failing success, because I learned that I actually don\u2019t like to write about tech per se that much.<\/p>

<p>I thought that such content was not a good fit here, but in reality it does not fit there either, because it is not for me. If I end up learning something fun I will write about it here as I usually do; otherwise I am gonna try to embrace the fact that it is good to not have much to say. It is perfectly fine, and I won\u2019t go away only because I don\u2019t have anything to share on bsky!<\/p>

<p>It has been a year of collaborations with friends from my own town with their own ideas. A local entrepreneur who wants to digitalize their driving schools, a friend of mine who does personal training for professional athletes and is trying to figure out a digital product that fits their needs. Collaborating with people around self care sounds like a good idea, since I turn 33 this year and Ludovica may start walking soon! I need to be at my best!<\/p>

<p>I am doing this because I like their mindset, and I am looking for the natural evolution of a developer with almost 15 years of experience. My soft skills are 100% more valuable and luckily way better than my coding skills, and I want to see what\u2019s next. I see those opportunities as investments: I did one year of development for free to get to a product that today will go live with its first 10 students and most likely will turn into its own company. No startup, no VCs, a person with a business who wants to try something new in their area.<\/p>

<p>The work with the professional athletes is at the beginning; I am not sure how it will go and if I will jump on board. I don\u2019t think I have the bandwidth to do development, but it is hard to justify myself outside of an editor! :)<\/p>

<p>What else? I started to invest a small amount of euros in a few ETFs with the same spirit of \u201cI feel like I need to try something new and diversify\u201d. I think that the stock market does not represent the real market and it is a job per se, but I am lucky enough to save a good amount of money at the end of the month and I feel ok with the strategy I adopted so far. I don\u2019t have enough money to do what I think is more valuable and better expresses my values, that is, supporting people to do good things. I am trying to do it with my time, but it does not scale as I want as long as it keeps me in front of my desktop, so this is a first step to learn something new. Anyway, I am mixing things together.<\/p>

<p>This is the mood this website will turn to: random thoughts, something technical.<\/p>

<p>My primary full-time job is going as it should; we are still here! If you don\u2019t know, I joined a startup doing generative AI for tabular data as its first engineer two years ago, already! The GenAI market is still something that does not make me happy, but it is still too early in my opinion, and I am trying to be patient. I joined with my old manager from InfluxData; the work-life balance is something that I won\u2019t find elsewhere. We have enough money in the bank to experiment and build something that we hope will work, in an environment that has a lot to say and that makes me feel stable in this crazy job market. 
Join #lowcarbonsoftware on Libera.Chat if you want to chat about how all of this is devastating the planet, because I am with you.<\/p>

<p>What about 2025?! Feeling relevant in the current job market in a way that better fits my current mood and situation. A few years ago it meant going to conferences at least once a month to share what I was doing, writing posts, being an ambassador for foundations and things like that. Not doing all of this sounds scary if I look at how useful all of that was in getting me where I am today, but I am not the same person!<\/p>

<p>Today I don\u2019t really know what it means, but it is easier to define things when you look at them from far away, and today I am all hands into my life! <a href=\"mailto:ciao@gianarb.it\">Please reach out to say hello, my good old friend<\/a>!<\/p>
"},{"title":"Static Sites limited my ability to have fun","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/static-site-limited-my-ability-to-have-fun"}},"description":"This is my reaction to 'The Static Site Paradox' from Loris Cro and how I think services like GitHub Pages killed my enthusiasm for this field","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-10-09T06:08:27+00:00","published":"2024-10-09T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/static-site-limited-my-ability-to-have-fun","content":"<p>This blog post is a reaction to <a href=\"https:\/\/kristoff.it\/blog\/static-site-paradox\/\">The Static Site Paradox<\/a> from <a href=\"https:\/\/kristoff.it\/\">Loris Cro<\/a>.<\/p>

<p>Loris highlights the paradox where people with competence in software end up with solutions that are cheaper and easier to maintain, compared with people without such skills who end up being locked into WordPress, web servers, caching layers, backends and so on. His intuition is that it should be the other way around.<\/p>

<p>It is a good perspective, but I want to expand it a little bit. I do gardening and I bought a lawn mower this year. My provider is a small one that does a lot of repairs; it is not a shop, they fix tools and then end up selling them as well.<\/p>

<p>I like their approach because they do not start from fancy new tools, easy to use but impossible to repair. If you share their values you end up with a tool that is basic, solid, and easy and cheap to repair, even by yourself. I am not an expert when it comes to engines, carburetors, gas and oils, but I learned that at some point carburetors may get dirty, and if you clean the nozzle you are back in full shape. Sometimes I did it by myself.<\/p>

<p>Same when I visited the house of my electrician, or the reason why mechanics often pick cars that are strong and easy to repair over models that are made of chips they can\u2019t fix.<\/p>

<p>Experts know the cost of maintenance and they can optimize for that. I also know electricians who like domotics and IoT, so they end up with expensive gear and get locked in to sellers that make them pay for every update. That\u2019s fine as well; people usually have both personalities, context is the driver.<\/p>

<p>I like to eat healthy and well-cooked food that I can\u2019t grow on my own and do not have the skill to put together. 
Even though I grow some of my own things.<\/p>

<p>Users with no experience in software can\u2019t optimize for technical simplicity. User experience, cost optimization (agencies that can update WordPress are probably cheaper than agencies doing custom static sites because of economies of scale), avoiding vendor lock-in and so on.<\/p>

<p>Something else that this article made me realize is that I had a lot more fun, and the possibility to experiment, with something like WordPress compared with something like GitHub Pages or a static site.<\/p>

<p>Not because I think WordPress or PHP is superior; I think serving static pages is the solution for the web. It is the most friendly to use, green\u2026 Even with a CMS or a backend, the end goal should be to serve static content as early as possible, compiling and caching. It can be in your FS as a static site does, or in Redis; that\u2019s not the point for me.<\/p>

<p>What I realized killed my enthusiasm for experimenting, building, learning in such a context is in fact GitHub Pages or similar solutions. Because they made my life too easy and I got lazy.<\/p>

<p>Recently I decided to split my blog into a second one dedicated to my experience as a developer, devops and so on, where I am trying to write on a schedule. It is not doing as well as I would like, so I am not sure for how long I will have the energy to go on with it. Do you want to help? Please share <a href=\"https:\/\/shippingbytes.com\">shippingbytes.com<\/a> if you like what I write. But I decided to build it with WordPress, on a VPS, with a low number of plugins (only one), a custom theme and so on.<\/p>

<p>Having a VPS justified a lot more experimentation: I am collaborating with a friend to develop a product for driving schools, and I run a few other services that, ok, are not that useful and I could live without, but is that the point?<\/p>

<p>I realized that easy solutions dull my passion and my energy, so do not feel stupid if you pay a few beers a month for serving yourself a static site, if you can afford that. If you feel alone it is because you are in the wrong bubble, and I can help with that!<\/p>
"},{"title":"Content creator wanna-be","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/content-creator-wanna-be"}},"description":"I am wondering if I finally figured out how to use shippingbytes.com. I often tried to be a content creator by night but I can't stick to it, as I can't stick with anything. I am not sure if this time will be any different but let's try.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-06-03T06:08:27+00:00","published":"2024-06-03T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/content-creator-wanna-be","content":"<p>I am where I am because of open source. Twelve years ago when I started my first full-time job position I didn\u2019t know much. I got hired as a solo developer for an internal CMS for a laboratory. An application on top of Postgres to register and manage tools.<\/p>

<p>Anyway, this is not the point. I didn\u2019t know much, and after a few days writing spaghetti code I realized that I was not going anywhere, so I opened IRC, I joined the PHP channel and I started asking questions. 
Somebody picked them up, and I realized that near my place we had a PHP meetup, where I finally found somebody with a proper team who hired me: my contract ended and I joined a team.<\/p>

<p>TLDR: open source is what made me the developer I am. Not because the code I write is just the tip of the iceberg, not because my code sits on top of 99% open source code written by somebody else, but because of the community I have been surrounded by since day one.<\/p>

<p>I already had a blog; 12 years ago I was 19 and my blog was mainly about tech news, because I was not a developer and I didn\u2019t have much of my own to share.<\/p>

<p>But blogging turned out to be crucial for my career as a developer, a little bit as a give-back and also because I need to <a href=\"https:\/\/registerspill.thorstenball.com\/p\/be-findable\">be findable<\/a>. I live in a country where salaries don\u2019t stay up to date with inflation; figuring my way out of that makes a difference.<\/p>

<p>All of this to say that I enjoy writing tutorials or lessons learned about operations, development, monitoring, but not here on my personal diary, because I don\u2019t think they belong here. So I end up not writing them, not giving back to the community I belong to.<\/p>

<p>Also, the internet is a lot different compared to 12-15 years ago and I feel I need to prepare for something new.<\/p>

<p>I can\u2019t stop thinking about what Chris Ferdinandi is doing with <a href=\"https:\/\/gomakethings.com\/\">GoMakeThings<\/a>. I am not a frontend engineer and, to be honest, I didn\u2019t find an area or a technology that I fell in love with the way frontend looks to Chris (I hope you will find somebody who will look at you the way Chris writes about frontend!), but I like to read what he writes. I am not sure consultancy will be for me, but I like the place he developed for himself, and I feel like it is time for me to figure out where to share what I learn about delivery, cloud, containers, CI and so on. Things that don\u2019t fit this blog.<\/p>

<p>I have had <a href=\"http:\/\/shippingbytes.com\">shippingbytes.com<\/a> sitting around for a while and I am thinking about using it for this purpose. In a world where GenAI steals from knowledge owners to serve anonymous answers, I am thinking about starting my content creation journey.<\/p>

<p>I am not good at all when it comes to timing. This is why I need your support and empathy.<\/p>

<p>I also want to experiment with something that is not a static site generator, because content creators need their own tools. The first version of this blog was Joomla; ShippingBytes right now runs locally as a custom theme for WordPress.<\/p>

<p>I know you didn\u2019t expect that, but what if WYSIWYG sparks joy? And I know ShippingBytes will have a newsletter; a database will be useful anyway, a static site is not enough.<\/p>
"},{"title":"It looks all about AI but it is not. 
Live at VivaTech","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/vivatech"}},"description":"Counting words in flyers and ads AI wins, but you can spot something better","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-05-23T06:08:27+00:00","published":"2024-05-23T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/vivatech","content":"<p>I am spending a few days at VivaTech. My current employer <a href=\"https:\/\/rockfish.ai\">Rockfish Data<\/a> won a challenge organized by <a href=\"https:\/\/orange.fr\">Orange<\/a>, and they kindly hosted us at their booth.<\/p>

<p>We are not the only ones; to me it looks like this conference targets innovation departments, and if you didn\u2019t work at a big enough company, like myself, you probably didn\u2019t even know what it is.<\/p>

<p><img src=\"\/img\/vivatech-2024-entrance.jpg\" alt=\"Picture of the VivaTech entrance. A huge VIVA written with LEDs, readable from both sides\" \/><\/p>

<p>Innovative solutions by definition need to be discovered; they are something that you don\u2019t know or have yet, so there are experts and teams who are dedicated to finding innovative solutions for business needs. Where do you find these solutions? At VivaTech, sometimes.<\/p>

<p>Who innovates? Sometimes the department itself develops those skills; venture capital is another way to contribute to innovative solutions, or buying from startups. Orange and many companies here do all of those things, and here they showcase their innovative solutions and their collaborations with startups.<\/p>

<p>This is why you will see small booths hosted by giants like AWS, Microsoft, or even nations like Italy, Armenia, Germany, China: they are all here with the startups they decided to collaborate with.<\/p>

<p>For a startup it is a great opportunity to be presented by those giants as an interesting solution. Members of the innovation team will bring you decision makers internal to the company as potential customers, or their customers directly. I had the opportunity to speak with a few members of the EU data team; I need to say they went straight to what they were looking for.<\/p>

<p>If you take a walk here you will see AI everywhere. It is by far the most common word in all the advertising. Every company placed AI near its name. Slack AI, AWS Generative AI, Salesforce AI assistance, you name it. I didn\u2019t spot the <a href=\"https:\/\/xeiaso.net\/notes\/2024\/ai-hype\/\">iTerm2 AI<\/a> booth yet, but I bet it is here somewhere.<\/p>

<p>But I am looking for more. There are innovative solutions for smart cities, things you only see on YouTube, a lot of robots of any kind: the ones serving food, arms assembling complicated things and so on.<\/p>

<p>What I like the most is the vibe built around innovation. Italy, my own country, has a booth where it hosts a lot of small startups that I had never heard about; a lot are doing sustainability and green economy. I am glad to know that, even if compared with other countries we are not that risk-prone and our investments are far away from what other countries do, it looks like we care about our planet. I take it as a good sign.<\/p>

<p>I know people reading my blog are technical. VivaTech is not KubeCon; people stop at my booth to ask about value proposition and market fit, not about eBPF or syscalls, but I find this space interesting. Please, if you are here, come say hi and tell me about something extremely nerdy. Do you need ideas? 
NixOS, containerd, eBPF!<\/p>

<p><img src=\"\/img\/vivatech-2024-go2-robot.jpg\" alt=\"GO2 robot that looks like the Boston Dynamics one\" \/><\/p>
"},{"title":"We got fiber at home","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/we-got-fiber"}},"description":"We got fiber at home!","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-05-14T06:10:10+00:00","published":"2024-05-14T06:10:10+00:00","id":"https:\/\/gianarb.it\/blog\/we-got-fiber","content":"<p>I remember six or seven years ago, when I moved back home from my experience in Dublin, dreaming about my house receiving fiber internet! Back then I think my house didn\u2019t even have 100Mb, it was probably 20Mb, and Docker images were not that nice to use. Somehow I managed to be a Docker Captain anyway, so I bet Docker Hub was pretty good.<\/p>

<p>Anyway, a few weeks ago I called my provider to help me out with an outage that we solved by rebooting the router a few times, and then they asked me if I was ok with them coming to replace my cable for free with a fiber one.<\/p>

<p>Just now that I didn\u2019t feel the need for it anymore! The internet was good enough for my use, but how can you say no? And here I am, writing a markdown file and publishing it with 1Gb symmetrical internet. I think I can get even more than 1Gb, but I will need to wait for consumer hardware to catch up.<\/p>

<p>A few months ago I bought a repurposed 16-port PoE Ethernet switch with 1Gb per port because I thought \u201cfiber will never be a thing here\u201d. I have also noticed that the old TP-Link 5-port switch I have on my desk has a 100Mb limit per port.<\/p>

<p>I am not sure if it will end up gathering dust in my e-waste cabinet or not.<\/p>

<p>That said, if you see me moving quickly or looking great on camera, now you know why.<\/p>
"},{"title":"A bunch of RSS feeds, probably the first round of many","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/a-bunch-of-rss-feeds"}},"description":"I am building my own internet again and it is all made of RSS feeds. Today, a few that I am particularly enjoying following these days","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-04-27T06:10:10+00:00","published":"2024-04-27T06:10:10+00:00","id":"https:\/\/gianarb.it\/blog\/a-bunch-of-rss-feeds","content":"<p>Hey! It has been a while, and I feel sorry for that. I needed a break, because life for me spans between a desk and a screen, a garden, a house we have built that takes a lot of my time and joy, and now we got a baby!<\/p>

<p>Ludovica is two and a half months old now, almost three, and she is an awesome and lovely challenge!<\/p>

<p>This is not gonna be a technical or complex article, because I do not have the energy for that, but since I miss you all this is better than nothing, and I hope it will give me the energy to keep writing. A few months ago I deployed a sweet and old-fashioned web-based RSS reader and I started building my internet again! I need to admit it gave me good vibes and energy.<\/p>

<p>I feel back in control after all the Twitter implosion, and I think you should do the same.<\/p>

<p>I decided to use <a href=\"https:\/\/tt-rss.org\/\">Tiny Tiny RSS<\/a>, in short tt-rss, because it is in PHP with just a bit of JavaScript, well documented and easy to deploy: piece of cake. 
Just like the internet that I fell in love with.<\/p>

<p>At the beginning I installed it locally on my desk, pointing my <code>\/etc\/hosts<\/code> to it, and then I moved it to one of the <a href=\"https:\/\/gianarb.it\/blog\/homelab-diy-bmc-intel-nuc\">Intel NUCs<\/a> I have lying around here.<\/p>

<p>I took up the habit, three times a week, of selecting five articles to read, and so far it is going great. It is not something that I forced myself to do. No pomodoro timer or other weird self-building strategy; it just happened to be a good cadence for myself right now. Don\u2019t set goals; enjoy reading on your own, without people liking a post, reacting with an emoji, or sending cold comments about something they didn\u2019t even open.<\/p>

<hr \/>

<p>As I promised, here are six of the websites I am enjoying right now (obviously tech related), and as a bonus an article from each that got my attention.<\/p>

<h2 id=\"vicki-boykis\">Vicki Boykis<\/h2>

<p><a href=\"https:\/\/vickiboykis.com\/\">Vicki<\/a> is a machine learning engineer at Mozilla, and you know, these days AI is kind of hot! I am also in the GenAI space, but not at all as a math expert. I do APIs, I move data around, operations and things like that. So I am trying to learn a bit more about this industry that today looks like a pile of VC money moving around, burning electric power all over the Earth.<\/p>

<p>Vicki is awesome and I like the style and cadence of her writing, moving from the practical vision she has for today\u2019s internet <a href=\"https:\/\/vickiboykis.com\/2024\/04\/25\/how-i-search-in-2024\/\">\u201cHow I search in 2024\u201d<\/a> down to the history and evolution of operations in AI: <a href=\"https:\/\/vickiboykis.com\/2024\/01\/15\/whats-new-with-ml-in-production\/\">\u201cWhat\u2019s new with ML in production?\u201d<\/a>.<\/p>

<h2 id=\"drew-devault\">Drew DeVault<\/h2>

<p>I doubt that <a href=\"https:\/\/drewdevault.com\/\">Drew DeVault<\/a> needs any sort of introduction. A Free Software ambassador, and I enjoy following his journey with his products and projects. A person with strong opinions, all in written format.<\/p>

<p><a href=\"https:\/\/drewdevault.com\/2023\/08\/29\/2023-08-29-AI-crap.html\">\u201cAI crap\u201d<\/a> has beaten me up continuously since I turned into an insider in this industry, and I love how he makes what for me are impossible and complex tasks look approachable, like writing a shell in a programming language he is developing: <a href=\"https:\/\/drewdevault.com\/2023\/07\/31\/The-rc-shell-and-whitespace.html\">\u201cThe rc shell and its excellent handling of whitespace\u201d<\/a>.<\/p>

<h2 id=\"mikodura\">Midokura<\/h2>

<p>Not only personal blogs: I also have a category full of company tech blogs that I am subscribed to, and since I started to hack on hardware and turned into a little bit of a \u201cmaker\u201d I discovered <a href=\"https:\/\/www.midokura.com\/blog\/\">Midokura<\/a>. Their blog is technical and well written; it is worth following. I can\u2019t say the same for companies like Cloudflare, for example: their cadence can\u2019t fit with my reading habit (but I am subscribed to their feed as well; it is muted so I can get to it at my own time).<\/p>

<h2 id=\"zed\">Zed<\/h2>

<p>The new cool editor in town. 
Don\u2019t judge me too quickly, I am not like <a href=\"https:\/\/registerspill.thorstenball.com\/p\/from-vim-to-zed\">Thorsten<\/a> or <a href=\"https:\/\/www.youtube.com\/watch?v=ZRnWmNdf5IE\">ThePrimeagen<\/a>; I still love VIM and I am not gonna betray it. VIM has fully locked me in, probably forever.<\/p>

<p>I subscribed to their blog recently and I enjoyed learning a bit more about the data structures that empower editors. They look so foundational that you tend to forget how technically complex they need to be: <a href=\"https:\/\/zed.dev\/blog\/zed-decoded-rope-sumtree\">\u201cZed Decoded: Rope &amp; SumTree\u201d<\/a>.<\/p>

<p>Be aware that you will get new Zed features straight to your feed, and you may be tempted to leave SublimeText for Zed.<\/p>

<h2 id=\"daniel-stenberg\">Daniel Stenberg<\/h2>

<p>The cURL guy! <a href=\"https:\/\/daniel.haxx.se\/blog\/\">Daniel Stenberg<\/a>\u2019s personal journey to <a href=\"https:\/\/daniel.haxx.se\/blog\/2024\/04\/24\/six-billion-docker-pulls\/\">\u201cSix Billion docker pulls\u201d<\/a>, but the reality is that we all love the blog posts where he shares random emails he gets from his blog: <a href=\"https:\/\/daniel.haxx.se\/blog\/2024\/01\/12\/emails-i-received-the-collection\/\">\u201cEMAILS I RECEIVED, THE COLLECTION\u201d<\/a>.<\/p>

<h2 id=\"surfing-complexity\">Surfing Complexity<\/h2>

<p><a href=\"https:\/\/surfingcomplexity.blog\/\">Surfing Complexity<\/a>, known as Lorin Hochstein and friends, shares their journey in the tech bubble. Their latest article <a href=\"https:\/\/surfingcomplexity.blog\/2024\/03\/26\/the-problem-with-invariants-is-that-they-change-over-time\/\">\u201cThe problem with invariants is that they change over time\u201d<\/a> is a good one, but this blog is a long-running one: get lost in the archive and tell me if you find something I should look at!<\/p>

<hr \/>

<p>That\u2019s it for today. Let me know if you want to get more links from me, and ping the people I quoted here to say hello! Bloggers need support!<\/p>
"},{"title":"Linkerd jumped on the bandwagon","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/linkerd-jumped-on-the-bandwagon"}},"description":"Buoyant is the company behind Linkerd","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2024-02-22T06:10:10+00:00","published":"2024-02-22T06:10:10+00:00","id":"https:\/\/gianarb.it\/blog\/linkerd-jumped-on-the-bandwagon","content":"<p>I am not here to say if a service mesh is useful or not; I am sure it depends. The press <a href=\"https:\/\/www.techtarget.com\/searchitoperations\/news\/366570820\/Linkerd-service-mesh-production-users-will-soon-have-to-pay\">\u201cLinkerd service mesh production users will soon have to pay\u201d<\/a> reports that we have lost another opensource project, or at least that there is a new drama in town.<\/p>

<p>Nothing bad, but it comes at the perfect time with the meme we are all looking at <a href=\"https:\/\/www.reddit.com\/r\/github\/comments\/1at9br4\/i_am_new_to_github_and_i_have_lots_to_say\/\">these days<\/a>. The timing is so perfect that I had to check the calendar to make sure it was not April 1st.<\/p>

<p><img src=\"\/img\/i-am-new-to-github-i-have-alot-to-say.png\" alt=\"I am new to github and I have a lot to say\" \/><\/p>

<p>If you want your exe you will need to pay Buoyant for that!<\/p>

<p>I am being too shallow: only the stable releases will be on Buoyant! 
In practice they do what everyone else does: they have used GitHub as a platform to do category creation, community, and so on. Now is the time for the company to monetize.<\/p>

<p>Nothing to be surprised about; this is the evolution of the opensource ecosystem, or at least that\u2019s what companies founded by VCs relying on GitHub for marketing want us to believe. We have all worked for some of those in the last 10 years.<\/p>

<p>Since I do opensource and I value this ecosystem, I am wondering how such a decision can be taken and communicated so poorly; the definition of opensource per se should prevent those standpoints. How can a company on its own define and change the release management for an opensource project? The company is not even named after the opensource project!<\/p>

<p>Can a single company say something like this? What are the contributors and maintainers doing? At this point, is Linkerd sustainable as an opensource project beyond Buoyant? Probably not.<\/p>

<p>The Cloud Native Computing Foundation vouches for Linkerd, and thankfully they gather and organize stats about their opensource projects. <a href=\"https:\/\/linkerd.devstats.cncf.io\/d\/5\/companies-table?orgId=1\">So let\u2019s have a look<\/a>: in 2023 they counted 128,856 contributions to the project, 112,721 from the same company, Buoyant Inc., followed by 1,400 contributions made by independent contributors, 484 from the CNCF and 313 from Microsoft Inc. I won\u2019t calculate the percentage because it gives an unpleasant definition of opensource in my opinion.<\/p>

<p>Anyway, this can be an opportunity for you if you are one of those independent contributors. I spent a couple of minutes reading the <a href=\"https:\/\/news.ycombinator.com\/item?id=39459102\">HackerNews thread<\/a> coming from this press release, and here are some numbers:<\/p>

<blockquote>
  <p>Their new offering (BEL) would be around $14k\/mo for our org (though they say discounts are available), with 90 days notice. That\u2019s a rather large chunk of change I didn\u2019t request in our 2024 budget, for a cost-category that didn\u2019t exist before.<\/p>
<\/blockquote>

<p>If you want to market yourself as an alternative to Buoyant, helping companies stay on top of their Linkerd installation, you just have to ask for less than 2k USD per cluster, and with that you are helping Linkerd stay opensource, avoiding vendor lock-in.<\/p>

<p>NOTE: this is not legal advice; I am not sure if you can do it, nowadays opensource licenses are madness.<\/p>

<p>Open your editor, smash some HTML and CSS: it is now the time to use that service mesh related domain you bought a few years back!<\/p>
"},{"title":"How I discover new codebases","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-to-contribute-to-new-codebases"}},"description":"Strategies I use when I want to contribute to new codebases","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2023-10-24T06:10:10+00:00","published":"2023-10-24T06:10:10+00:00","id":"https:\/\/gianarb.it\/blog\/how-to-contribute-to-new-codebases","content":"<p><img src=\"\/img\/you-gen-a-new-job-no-docs-read-the-code-meme.jpg\" alt=\"\" \/><\/p>

<p>Today is my time to be honest with you. 
I think this meme describes a lot of places and codebases that I had to deal with or that I contributed to.<\/p>

<p>I don\u2019t want to tell you why, because there are many reasons how you can end up with a blob of undocumented code and I could write an article on that alone, but I want to share the strategy I use to figure it out.<\/p>

<p>Why am I the right person to do so? Because I change jobs frequently (not because of undocumented code, obviously), and I like to contribute to opensource software that I use; sometimes I end up contributing to small undocumented libraries, or to overly documented massive projects.<\/p>

<ol>
  <li>Take a look at the CI\/CD system<\/li>
<\/ol>

<p>Many applications have a basic CI\/CD system to run tests, to check code formatting and sometimes to build the software itself. When I don\u2019t even know the language, because I am contributing to a codebase developed in a language I am not familiar with, the CI\/CD teaches me a lot about the toolchain that I need to have in order to be effective. Moving forward I tend to replace tools I don\u2019t like with alternatives I am more familiar with, but at the beginning CI\/CD, makefiles, npm packages or equivalent files are gold. Worst case, if they don\u2019t have unit tests or they don\u2019t build their code in CI\/CD, I end up knowing what they use to format their code; it does not look like much, but usually it leads to the required tech stack.<\/p>

<ol start=\"2\">
  <li>Dockerfile<\/li>
<\/ol>

<p>Dockerfiles are useful to figure out the dependency tree and the system dependencies, which can teach you a lot about the codebase you are dealing with. They are also useful to figure out if my teammates are familiar with containers, or are more old-style cmake and <code>.\/configure<\/code> kind of people.<\/p>

<ol start=\"3\">
  <li>The entrypoint, look for that!<\/li>
<\/ol>

<p><code>fn main<\/code>, <code>func main<\/code>, <code>index.php<\/code>: look for the entrypoint! If I see more than one entrypoint I am in a monorepo; if there is only one, it is a single application. If I can\u2019t find one, maybe it is a library, but libraries should have an entrypoint as well, so look for <code>Option<\/code> or <code>Configure<\/code> classes or objects. If you find one, the class using it is often the library entrypoint.<\/p>

<ol start=\"4\">
  <li>Run the test suite<\/li>
<\/ol>

<p>I like to run code locally when I can because it makes things a bit more real. It validates that I figured out the right toolchain and that I am starting from a trusted checkpoint.<\/p>

<ol start=\"5\">
  <li>I need an easy win<\/li>
<\/ol>

<p>Why are you looking at such a codebase? Do not miss the why! 
If it is your first day at work and you got assigned an apparently easy bug to fix, this is your goal, so try to figure out the right path: you know how to build the software now, you know how to run the test suite, so leverage that while running the entire application is still an unknown or is not possible.<\/p>

<p>I am not saying that tests should be the end goal for you, because there are codebases with zero or useless tests, but they can be helpful at the beginning as a north star. If they are unreliable or absent, point 5 is still valid, but there is no escape: the entrypoint needs to be discovered and used to validate that your change has the right impact.<\/p>

<p>When I feel brave, I have figured out the part of the code I want to contribute to, and there are no tests, I write a small script that imports that path, so I can run that subset of the code in isolation, quickly and repeatedly, without too much noise. At some point it can even be turned into a unit test.<\/p>
"},{"title":"Kubernetes is finally just a utility","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubernetes-is-finally-just-an-utility"}},"description":"Kubernetes is a tool, not a religion. It can teach you a lot about scalability and resiliency, but if your business is not driven by this specific tool you should not look at it as something important.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2023-08-02T10:08:27+00:00","published":"2023-08-02T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubernetes-is-finally-just-an-utility","content":"<p>Kubernetes and cloud-native are a topic I spent a good part of my professional life contributing to. I wrote some code, operators, articles, talks, libraries, and I have been a member of the awesome Release Team for a few iterations. Kubernetes helped me a lot to grow as a developer and improved my ability to collaborate with people all over the world.<\/p>

<p>I know how to operate it, how it is written, the components it is made of, and so on, but I do not work for a company that makes money out of it anymore.<\/p>

<p>I worked for cloud providers making money building software that integrates with Kubernetes. I worked for companies that made money and took investments saying: \u201cOur solution runs natively on Kubernetes\u201d. They didn\u2019t age well and I am not surprised at all.<\/p>

<p>If you are like me and your company does not profit from Kubernetes, do not look at it as something important. It is good to learn about it, to operate it, but that\u2019s it, like you do with Linux, systemd, git, or everything else, because that\u2019s what it is. I spoke with many people who told me that Kubernetes is complex. I get it, but it can\u2019t be that complicated; it can\u2019t be more complicated than systemd. There are cloud providers or companies that you can pay to get a fully functioning Kubernetes endpoint to interact with, up and running. I read articles about how the EC2 service works, and how and why they built it, but that\u2019s it. I don\u2019t feel bad about using tools or services without knowing all the details about how they are made.<\/p>

<p>I don\u2019t want to discourage you from using Kubernetes or contributing to it. I advocate for the good practices that Kubernetes enforces and teaches, but that\u2019s the best it does. 
Today, after two years of not touching it, I installed kind and kubectl, and I wrote 352 lines of YAML that I successfully applied to a Kubernetes cluster, because I am working with a potential customer that runs on Azure and we picked Kubernetes as the common language. I think this is its superpower. A technology capable of improving collaboration and breaking barriers is a gift that we should protect. And it does not require me to know about CNI, CRI, CTO, and kubelet (can you guess the wrong one?).<\/p>

<p>Last week we expanded our solution from AWS, where I built the infrastructure out of autoscaling groups, Launch Templates, EC2, load balancers, and duct tape, to GCP, where I decided to use GKE Autopilot, all driven by Terraform. Not Flux or Helm, two technologies that I never used, but Terraform, because the \u201cIaC\u201d solution we have is 100% based on that and I didn\u2019t feel the need for something else. GKE because it makes sense: it is quick, and I don\u2019t have the experience on GCP that I have on AWS to operate at a \u201clower level\u201d in a reasonable time.<\/p>

<p>The solution we have on AWS is too expensive because there are a million tiny details when building with simple components like the ones I mentioned that are painful to figure out, at least for me, or at least not interesting from a business point of view. This is why I am probably moving to ECS pretty soon. Not EKS, because EKS does not look as simple as GKE Autopilot.<\/p>

<p>The developer I want to be, and the one I like to work with, puts effort into finding the right solution based on their current context, building all the surroundings that will make a difference in the game we all have to fight: evolution and time. It requires knowledge and skills exceeding a specific technology or trend. There are similarities with woodworking, where complex cuts or repetitive tasks require jigs, and those need to get built with accuracy because they can drive the success of the primary project.<\/p>
"},{"title":"You should avoid Meetup.com","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/avoid-meetup-com"}},"description":"Please avoid Meetup.com. You can do better on your own","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2022-11-24T10:08:27+00:00","published":"2022-11-24T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/avoid-meetup-com","content":"<p>A few years ago I decided to spend some of my time organizing a meetup about cloud computing in Turin, Italy. I got support from the Cloud Native Computing Foundation to get a paid account on Meetup.com, food and drinks. I organized various events in Turin and in Milan as well. We had a good time and I worked with different companies to help them share what they are building or passionate about.<\/p>

<p>I work remotely and it was for me the best way to connect with other people in my area working on similar challenges. The event was in English because a video maker was there recording and editing videos to share on the CNCF blog, or with the companies or communities the speaker was involved with.<\/p>

<p>COVID-19 changed our daily routine drastically, as you know, and I embraced virtual events as many others did. We gained good popularity and we reached 800 people registered in our meetup group, but we missed the locality part of all of this. 
In the meantime I decided to take a break as an active organizer, leaving my spot to another person who supported me a lot during the day-to-day operations. I left my role as CNCF Ambassador, and in the meantime the CNCF decided to move all those communities out of Meetup.com to their internal platform.<\/p>\n\n<p>I don\u2019t want to comment on their internal platform. The migration was left to the organizers; at some point we were maintaining events on both platforms, with the end of 2022 as the deadline to close the Meetup.com group.<\/p>\n\n<p>In the meantime I tried to export the people who trusted me as organizer to import them elsewhere, but the attendees belong to Meetup.com and there is not much you can do about it. Meetup.com locks you in.<\/p>\n\n<p>What about deleting a Meetup group? Well, apparently you can remove all the organizers and leave it in limbo until Meetup.com removes it, or until somebody else claims to be the new organizer.<\/p>\n\n<p>Really! You work a couple of years to build a group of people who trust you and your way of dealing with their time, and when you decide that it is time to move on, the best Meetup.com offers is to leave those people on their own. Obviously, just as happens with DNS, on the last day before termination somebody else claimed to be the new organizer and took over the meetup group to share their own event. Luckily it was a person I had collaborated with in the past, and I was able to become the organizer again. It took me 2 hours to do the right thing: I had to kick all 800 members out of the meetup group one by one, leaving an empty group that nobody has any reason to claim.<\/p>\n\n<p>It is my responsibility as organizer to take care of what happens to the people who trusted me, and I think leaving them on their own to the first person who finds themselves in the right place at the right time is wrong; you should not trust a platform that forces this behavior.<\/p>\n\n<p>Avoid Meetup.com, you can do better! Set up a mailing list, build a static website on GitHub.com, write a few lines of whatever language you want to learn and expose an HTTP server.<\/p>\n\n<p>I like to share what I do and to experiment. In the last 10 years I organized meetups, I tried to self-publish a book, I wrote on my blog, and many of you trusted me, leaving your emails with me and sharing your time. The only reasonable thing I can do is to clean up after myself when I am done. A few years ago, when I realized that the book I was trying to write didn\u2019t make much sense, I deleted the 2000 people who had registered to receive updates about it, because it is the right thing to do. If you find yourself in a similar situation, do the right thing: clean up after yourself.<\/p>\n"},{"title":"From Ubuntu to NixOS the story of a mastodon migration","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/from-ubuntu-to-nixos-history-of-a-mastodon-migration"}},"description":"Twitter is not at its best. Developers, like many others, are looking for an alternative. Mastodon, with its decentralization and feeling of ownership, is rising in popularity. I started with a hand-crafted self-hosted Ubuntu server because I felt the pressure to join as early as possible, but the end goal was to use NixOS for that.
This is the story of how I moved my Mastodon instance to NixOS.","image":"https:\/\/gianarb.it\/img\/1280px-NixOS_logo.png","updated":"2022-11-24T10:08:27+00:00","published":"2022-11-24T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/from-ubuntu-to-nixos-history-of-a-mastodon-migration","content":"<p>Do you know that Elon Musk bought Twitter for a lot of money? As a consequence, many people are trying to figure out what to do. Developers quickly turned to Mastodon.<\/p>\n\n<p>I decided to self-host my server; you can interact with me on Mastodon as <a href=\"https:\/\/m.gianarb.it\/@gianarb\">@gianarb@m.gianarb.it<\/a>.<\/p>\n\n<p>I do not have strong opinions about decentralized systems. I think it is another way to build a distributed system, and experimenting with it is an opportunity, nothing more right now. I never liked the idea of selling my identity for free on social media, but having a presence online proved to be crucial for my career and I don\u2019t want to miss that.<\/p>\n\n<p>Mastodon pushes many people, myself included, to ask: \u201cshould I host my own server?\u201d. In my opinion it is an important question because it forces us to get our hands dirty again. We all know how comfortable GitHub Pages is. You can set up your own static website in a minute, for free, but it lowered my enthusiasm for technology because it makes things too easy. If you answered \u201cyes\u201d and you are now heads down trying to run your own Mastodon, I hope you are having fun and that you are learning something that raises your excitement for how computers work. \u201cHosting more of my own things\u201d was on my bucket list, and Mastodon pushed me down the stairs.<\/p>\n\n<h2 id=\"at-the-beginning-it-was-all-about-ubuntu\">At the beginning it was all about Ubuntu<\/h2>\n\n<p>NixOS has been my go-to for everything for the last two years, but I am not good at it. I tried to run my own Mastodon on it for a few days, but I was not getting anywhere: I got stuck trying to figure out how to properly manage secrets, the machine lifecycle, how to deploy, how to interact with tootctl - everything was a big unknown. Mastodon itself was a big unknown too. So I decided to step back and run my own instance following a random blog post:<a href=\"https:\/\/www.linuxbabe.com\/ubuntu\/how-to-install-mastodon-on-ubuntu\"> \u201cHow to Install Mastodon on Ubuntu 22.04\/20.04 Serves\u201d<\/a>. Not sure if it is the best one out there, but it gave me a Mastodon to play with in 10 minutes. No need to tell me about infrastructure as code, immutability, and so on; this environment taught me how all of this crap works, Mastodon is a bit more familiar, and my end goal is still to figure out how to run it with NixOS.<\/p>\n\n<h2 id=\"build-a-migration-plan\">Build a migration plan<\/h2>\n\n<p><a href=\"https:\/\/page.romeov.me\/posts\/setting-up-mastodon-with-nixos\/\">\u201cSetting up your own Mastodon instance with Hetzner and NixOS\u201d<\/a> by romeov explained how to get Mastodon running on NixOS. A few lines of configuration and the NixOS Mastodon module configures Postgres, Redis, Nginx with TLS, and Mastodon itself for me. It is not the only way to go - the module supports running dedicated pools of those services as well - but for my single-user, single-server configuration it is more than enough.<\/p>
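\n\n<p>Once the module is enabled, a quick sanity check that everything came up is worth doing. A sketch - the unit names below are my assumption based on the services the module manages, so double-check them with <code>systemctl list-units | grep mastodon<\/code>:<\/p>\n\n<pre><code class=\"language-bash\"># The Mastodon processes the module manages\nsystemctl status mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service\n\n# The supporting services it configured for me\nsystemctl status postgresql.service nginx.service\n<\/code><\/pre>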
\n\n<p>So I started planning how to migrate my own server following the official <a href=\"https:\/\/docs.joinmastodon.org\/admin\/migrating\/\">Mastodon documentation<\/a>, and it ended up looking like this:<\/p>\n\n<ol>\n  <li>Provision a very basic NixOS instance (called beetroot from now on)<\/li>\n  <li>Stop the mastodon services (web, sidekiq, streaming) on the Ubuntu box<\/li>\n  <li>Take a backup of Postgres with the suggested command: <code>pg_dump -Fc mastodon_production -f backup.dump<\/code><\/li>\n  <li>Create a tar.gz archive of the system directory in Mastodon<\/li>\n  <li>Send the archive and the SQL backup to beetroot via Tailscale: <code>tailscale file cp public-system.tar.gz beetroot:<\/code><\/li>\n  <li>Receive the two files on beetroot via Tailscale: <code>tailscale file get .<\/code><\/li>\n  <li>Untar the system directory<\/li>\n  <li>Stop the mastodon systemd services, drop the mastodon database on beetroot and replace it with the backup from the Ubuntu server<\/li>\n  <li>Restart the mastodon services via systemd and have fun<\/li>\n<\/ol>\n\n<h2 id=\"how-it-went\">How it went<\/h2>\n\n<p>The plan was solid! <a href=\"https:\/\/pony.social\/@cult\">CULTPONY<\/a> looked at it briefly as well, so we are good!<\/p>\n\n<p>But you know, in reality there are many unknowns. There is only one way to figure them out: time to stop making plans, it is time to break them!<\/p>\n\n<pre><code class=\"language-nix\">services.mastodon = {\n  enable = true;\n  localDomain = \"PUT-YOUR-DOMAIN-HERE e.g. computing.social\";\n  configureNginx = true;\n  smtp.fromAddress = \"\";\n};\n<\/code><\/pre>\n\n<p>First, when I initialized the NixOS Mastodon module it started an Nginx server, because Mastodon requires TLS. It uses Let\u2019s Encrypt for that, and this requires the DNS record to point to the NixOS instance, otherwise Let\u2019s Encrypt won\u2019t be able to close the loop. But I can\u2019t point the DNS to a not-yet-ready instance, because who knows if I am gonna be able to make it today, tomorrow, or never! I decided to tell the Mastodon module to skip the Nginx configuration for now, setting <code>services.mastodon.configureNginx = false;<\/code><\/p>\n\n<p>Technically there is <a href=\"https:\/\/discourse.nixos.org\/t\/nixos-deploy-in-a-vm-how-to-test-https-website-acme-lets-encrypt\/8876\">another way<\/a> to do it, but it did not work for me and I still don\u2019t know why. Let me know if you figure it out, because it would be way more comfortable to get a self-signed certificate so we can test without having to change the DNS.<\/p>\n\n<p>In the process of making the tar archive for the system directory I saw it contained a directory called cache, huge, like multiple GBs. Cache to me means ephemeral, easy to rebuild, safe to wipe. So I wiped it! To be fair, I knew I was doing something stupid. And I knew the dirty way to go requires at least moving the directory, keeping it around until the realization that this cache means something important that should not be lost! Too late! I lost it, and my Mastodon instance was then empty of all the avatars and profile images - not a great start. After some googling and some struggling, <a href=\"https:\/\/github.com\/mastodon\/mastodon\/discussions\/21305#discussioncomment-4218030\">I was able to build it back<\/a> (if you have a more official answer for this issue, let us know there). My 2 cents: move the cache folder around; it is way easier than figuring out how to get it back.<\/p>
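\n\n<p>Something like this - a sketch, where the paths assume a Mastodon checkout in <code>live<\/code> as in the Ubuntu guide above, so adjust them to your layout:<\/p>\n\n<pre><code class=\"language-bash\"># Park the cache somewhere safe instead of wiping it: moving it is cheap,\n# rebuilding avatars and previews is not\nmv live\/public\/system\/cache \/tmp\/mastodon-cache-keep\n\n# Archive the rest of the system directory for the migration\ntar -czf public-system.tar.gz -C live\/public system\n<\/code><\/pre>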
\n\n<p>Here is another 2 cents: remember to check file permissions when you do this (guess why I know).<\/p>\n\n<p>Everything was now set and ready to receive traffic, so I pointed the DNS to the new server, set my hosts file to get routed to it quickly, changed services.mastodon.configureNginx to true, and waited.<\/p>\n\n<p>Ok! This is how it went, after a day of struggling obviously! The last time I used Postgres was probably 6 years ago. pg_dump and pg_restore are easy, but I had to figure out how to authenticate properly. Ubuntu was set up to run over 127.0.0.1; the NixOS Mastodon module by default provisions Postgres with <a href=\"https:\/\/www.postgresql.org\/docs\/current\/auth-trust.html\">auth trust<\/a> and with a socket entrypoint. It means that authentication does not require a password and is based on a UNIX user. For example, the Linux user postgres owns and has access to the databases owned and managed by Postgres. The NixOS Mastodon module creates a mastodon user in Linux with access to the mastodon files (the system directory, for example) and with access to its own mastodon Postgres database. Nothing that looks like rocket science, but still, it took me some time to figure it all out.<\/p>\n\n<p>How to manage passwords in NixOS is a question I don\u2019t feel comfortable answering yet, and it blocked me at the beginning when I was trying to set up my own instance, because I wanted to manage the Tailscale auth key automatically, for example, or when thinking about how to manage the connection between mastodon web and Postgres. Currently my answer is to avoid passwords. It works for now, but I know it won\u2019t be the right answer for the following articles in this series, which will probably be titled \u201cMastodon monitoring, a success story\u201d, where I will share how to configure the monitoring and observability pipeline for my instance with Grafana Cloud - but this is a story for another time.<\/p>\n\n<p>Point 7 of the migration plan was about untarring the system directory, but I realized I didn\u2019t know where to place it. <a href=\"https:\/\/github.com\/NixOS\/nixpkgs\/blob\/master\/nixos\/modules\/services\/web-apps\/mastodon.nix#L32\">Looking at the NixOS module<\/a> there is a path for that:<\/p>\n\n<pre><code class=\"language-terminal\">PAPERCLIP_ROOT_PATH = \"\/var\/lib\/mastodon\/public-system\";\n<\/code><\/pre>\n\n<p>But what does it look like? And what is PAPERCLIP_ROOT_PATH? Is it really what I think it is? It was not clear to me, and only <code>\/var\/lib\/mastodon<\/code> was there on the system, because the public-system folder gets created when Mastodon is actually in use. So I had to take a step back, and I created a vanilla, end-to-end working Mastodon instance to figure it out. In the end it <strong>obviously<\/strong> looks like it should, but who knew that!<\/p>\n\n<pre><code class=\"language-terminal\">[nix-shell:\/var\/lib\/mastodon]# tree -L 2\n.\n\u251c\u2500\u2500 public-system\n\u2502   \u251c\u2500\u2500 accounts\n\u2502   \u251c\u2500\u2500 cache\n\u2502   \u251c\u2500\u2500 custom_emojis\n\u2502   \u2514\u2500\u2500 media_attachments\n\u2514\u2500\u2500 secrets\n<\/code><\/pre>\n\n<h2 id=\"show-me-the-code\">Show me the code<\/h2>\n\n<p>Currently I publish the NixOS configuration for beetroot as part of my <a href=\"https:\/\/github.com\/gianarb\/dotfiles\/tree\/main\/nixos\/machines\/beetroot\">dotfiles<\/a>, along with the other NixOS configurations for my Thelio workstation and for the Asus Zenbook I use at home.
It uses <a href=\"https:\/\/nixos.wiki\/wiki\/Flakes\">flake<\/a> and <a href=\"https:\/\/github.com\/serokell\/deploy-rs\">deploy-rs<\/a>. It targets a Linode shared CPU virtual machine, and that\u2019s why, as you can see in the hardware-configuration, NixOS detected QEMU as the hardware.<\/p>\n\n<pre><code class=\"language-nix\">deploy.nodes.beetroot = {\n  hostname = \"139.162.167.171\";\n  sshUser = \"root\";\n\n  profiles.system = {\n    user = \"root\";\n    path = deploy-rs.lib.x86_64-linux.activate.nixos\n      self.nixosConfigurations.production;\n  };\n};\n<\/code><\/pre>\n\n<p>Do not ask me about my deploy preference when it comes to Nix; deploy-rs is just the one I figured out. I may switch to NixOps because it is a bit more standard. They work similarly from a configuration standpoint, but in theory deploy-rs is designed with profiles in mind, to deploy single users, something that I don\u2019t think I need. It works well enough for now.<\/p>\n\n<p>If you look inside the flake.nix file you see two different nixosConfigurations, production and vm, both importing the same <code>configuration.nix<\/code>. Production is deployed via deploy-rs, and vm is used for testing purposes with: <code>nixos-rebuild build-vm --flake .#vm<\/code><\/p>\n\n<p>I didn\u2019t find a good use for it just yet; I am currently blocked by the ACME certificate, and because I am lazy. I am not sure if it is needed for a Linode shared CPU instance, since it is a VM as well and it detects QEMU as the hypervisor. Time will help me figure it out.<\/p>\n\n<p>At the beginning I developed this configuration outside of my dotfiles, mainly because I didn\u2019t know what to expect from it. Now that Mastodon is up and running and this configuration is in use, I feel more confident. Even if I have a lot I want to do, I decided to move it into my dotfiles to have access to the other NixOS components there. I need to add a secret to authenticate to Grafana Cloud, probably with <a href=\"https:\/\/github.com\/ryantm\/agenix\">agenix<\/a> in its own private repo imported via flake, so I won\u2019t have my password shared with you all (forgive me, it is not you, it is me, or something else, I don\u2019t know). I want to move the cache directory and the Postgres data to a ZFS pool as well, but not now; right now I want to enjoy my running instance.<\/p>\n\n<h2 id=\"now-what\">Now what?<\/h2>\n\n<p>This is everything I have learned so far migrating from Ubuntu to NixOS. I want to be clear: even if the core of this article looks like a bunch of mistakes, I am not frustrated. I think the NixOS Mastodon module is comfortable to use and well written. The challenges I described come from a rusty and inexperienced ops person.
The module lacks documentation around operational experience - how to use it and what it provides - but it is reasonable, and I hope these notes will help improve it and will push me to contribute back to the official documentation.<\/p>\n\n<p>When I mentioned Prometheus and Grafana, I shared that I am thinking of writing a series of posts about this topic. These are the ones I currently have ongoing:<\/p>\n\n<ul>\n  <li>Monitoring success story (probably with a deep dive into password management on its own)<\/li>\n  <li>NixOS configuration, GitOps and machine lifecycle (this is about how I manage my NixOS configuration, how I deploy NixOS and so on)<\/li>\n  <li>Data management with ZFS<\/li>\n  <li>Mastodon update from 3.x to 4.0<\/li>\n<\/ul>\n\n<p>Your support and interest will push me forward in writing all of them, so let me know what you think about this one and the following topics, and whether you would like to read something else, like my journey with Linode, since I decided to try it out by running this Mastodon instance there.<\/p>\n\n<p>I would like to thank all the writers behind the documentation, articles, and GitHub discussions I have linked, and all the GitHub issues, StackOverflow questions, and GitHub repositories I have looked at to resolve my unknowns - sharing is caring! Thanks <a href=\"https:\/\/hachyderm.io\/@hazelweakly\">@hazelweakly<\/a> for your early review!<\/p>\n"},{"title":"My workflow with NixOS. How do I work with it","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/my-workflow-with-nixos"}},"description":"In the last two years I picked up NixOS as a tool I want to use. The learning curve is steep, but I think I have a workflow that I like","image":"https:\/\/gianarb.it\/img\/1280px-NixOS_logo.png","updated":"2022-09-12T10:08:27+00:00","published":"2022-09-12T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/my-workflow-with-nixos","content":"<h2 id=\"some-context\">Some context<\/h2>\n\n<p>Coding is fun when you can figure out the right workflow. There is nothing fun about writing software in a way that is not sustainable or that does not spark joy.<\/p>\n\n<p>I started to use Nix and NixOS almost two years ago, in a previous job, in a totally different context.<\/p>\n\n<p>Back then we had to quickly and often provision operating systems, build software, and so on. Since I moved back to writing software, and to writing Rust, I have to admit that building my code or shipping operating systems is not something I have to do very often, but I decided to keep learning and fighting with NixOS because it fits my mindset.<\/p>\n\n<p>Recently I resumed a few NUCs I keep in a box, because everybody deserves a home lab, and a good home lab deserves some netbooting, so it was time to play with NixOS for something that is not my workstation or my laptop.<\/p>\n\n<h2 id=\"the-workflow\">The workflow<\/h2>\n\n<p>Nix is code, finally. It means that there are libraries; you can import them, run tests, and execute that code. YAML and JSON, in my experience, at some point become a limitation, or they create friction: you end up with an easy-to-break template engine.<\/p>\n\n<p>I decided to invest some time to figure out how to use flake. And this is where I am so far:<\/p>\n\n<pre><code class=\"language-nix\">{\n  description = \"A generic and minimal netbooting OS for my homelab\";\n\n  inputs =\n    {\n      nixpkgs.url = \"github:NixOS\/nixpkgs\/nixos-22.05\";\n    };\n\n  outputs = { self, nixpkgs, ...
}:\n    let\n      system = \"x86_64-linux\";\n    in\n    {\n      nixosConfigurations = {\n        generic = nixpkgs.lib.nixosSystem {\n          inherit system;\n          modules = [\n            .\/configuration.nix\n          ];\n        };\n      };\n      packages.${system}.netboot = nixpkgs.legacyPackages.${system}.symlinkJoin {\n        name = \"netboot\";\n        paths = with self.nixosConfigurations.generic.config.system.build; [\n          netbootRamdisk\n          kernel\n          netbootIpxeScript\n        ];\n        preferLocalBuild = true;\n      };\n    };\n}\n<\/code><\/pre>\n\n<p>I am not the right person to tell you what all of this does, because I am not an expert and it is the outcome of many videos on YouTube, questions on discourse.nixos.org, articles, and beers, a lot of beers.<\/p>\n\n<p>The <code>outputs<\/code> part describes what I want to build, and as you can see there are two outcomes. One is <code>nixosConfigurations<\/code>; potentially it can contain more than one NixOS description, but right now I have a single one called <code>generic<\/code>, and as you can see it imports a module called <code>configuration.nix<\/code>. You can see it as a ready-to-go NixOS provisioned as I want. This is 99% a copy-paste of a traditional <code>configuration.nix<\/code> file as you may know them. The one I use comes from the <a href=\"https:\/\/nixos.wiki\/wiki\/Netboot\">\u201cNetbooting Wiki\u201d on NixOS.org<\/a>.<\/p>\n\n<pre><code class=\"language-nix\">{ config, pkgs, lib, modulesPath, ... }: with lib; {\n  imports = [\n    (modulesPath + \"\/installer\/netboot\/netboot-base.nix\")\n  ];\n  users.users.root.openssh.authorizedKeys.keys = [\n    \"ssh-sfdbsrbs\"\n  ];\n\n  ## Some useful options for setting up a new system\n  services.getty.autologinUser = mkForce \"root\";\n\n  environment.systemPackages = [ pkgs.tailscale ];\n\n  networking.dhcpcd.enable = true;\n\n  services.openssh.enable = true;\n  services.tailscale.enable = true;\n\n  hardware.cpu.intel.updateMicrocode =\n    lib.mkDefault config.hardware.enableRedistributableFirmware;\n\n  systemd.services.tailscale-autoconnect = {\n    description = \"Automatic connection to Tailscale\";\n\n    # make sure tailscale is running before trying to connect to tailscale\n    after = [ \"network-pre.target\" \"tailscale.service\" ];\n    wants = [ \"network-pre.target\" \"tailscale.service\" ];\n    wantedBy = [ \"multi-user.target\" ];\n\n    # set this service as a oneshot job\n    serviceConfig.Type = \"oneshot\";\n\n    # have the job run this shell script\n    script = with pkgs; ''\n      # wait for tailscaled to settle\n      sleep 2\n\n      # check if we are already authenticated to tailscale\n      status=\"$(${tailscale}\/bin\/tailscale status -json | ${jq}\/bin\/jq -r .BackendState)\"\n      if [ \"$status\" = \"Running\" ]; then # if so, then do nothing\n        exit 0\n      fi\n\n      # otherwise authenticate with tailscale\n      ${tailscale}\/bin\/tailscale up -authkey tskey-really\n    '';\n  };\n\n  networking.firewall = {\n    checkReversePath = \"loose\";\n    enable = true;\n    trustedInterfaces = [ \"tailscale0\" ];\n    allowedUDPPorts = [ config.services.tailscale.port ];\n  };\n\n  system.stateVersion = \"22.05\";\n}\n<\/code><\/pre>\n\n<p>The only difference compared with a traditional non-flake configuration is the import:<\/p>\n\n<pre><code>  imports = [\n    (modulesPath + \"\/installer\/netboot\/netboot-base.nix\")\n  ];\n<\/code><\/pre>\n\n<p>Flake provides the utility variable
<code>modulesPath<\/code> as a shortcut for accessing the nixpkgs modules described as a flake input.<\/p>\n\n<p>This OS does a few simple things:<\/p>\n\n<ul>\n  <li>It sets up a public ssh key for the root user that I can use to ssh into the server.<\/li>\n  <li>It registers itself with Tailscale.<\/li>\n<\/ul>\n\n<p>The output <code>nixosConfigurations<\/code> is used via <code>nixos-rebuild<\/code>.\nIt took me some time to figure out that <code>nixos-rebuild<\/code>, used the right way, does not replace my current operating system.\nDo not run <code>nixos-rebuild switch<\/code> if you don\u2019t want to screw up your local NixOS installation! Instead you can build this operating system into the <code>.\/result<\/code> directory via:<\/p>\n\n<pre><code>$ nixos-rebuild build --flake .#generic\n<\/code><\/pre>\n\n<p>A single configuration can describe different NixOS systems; that\u2019s why you have to identify what you want to build with <code>.#generic<\/code>.<\/p>\n\n<p>The second output builds the same OS, but it shapes the content of the <code>.\/result<\/code> directory as I want it (I am not sure if I need it, but this is what the NixOS netbooting wiki does, so far so good).<\/p>\n\n<p>To build it you can use <code>nix build<\/code>:<\/p>\n\n<pre><code>$ nix build .#netboot\n<\/code><\/pre>\n\n<p>Pretty cool! I can tar.gz that and ship it where I want. Straightforward.<\/p>\n\n<h2 id=\"how-to-run-this-vm\">How to run this VM<\/h2>\n\n<p>Do you know how boring and time consuming it is to test a new operating system?<\/p>\n\n<p>If you want to do it on real hardware you have to set it up, and if you want to use QEMU you have a few days in front of you to remember all the flags you need, how to bridge the guest with the host, and who knows what. I tried for a few days and I failed, until I discovered:<\/p>\n\n<pre><code>$ nixos-rebuild build-vm --flake .#generic\nbuilding the system configuration...\n\nDone.  The virtual machine can be started by running \/nix\/store\/dk4i22xmacnxxdmgvjhlyain5spb11yn-nixos-vm\/bin\/run-nixos-vm\n<\/code><\/pre>\n\n<p>Pure gold! If you run the <code>run-nixos-vm<\/code> script a QEMU virtual machine will appear, ready for you to test your operating system. Kind of cool! I can even see it showing up in the Tailscale admin console!<\/p>\n\n<p>A zero-friction experience that boosts my ability to try what I am working on.<\/p>\n\n<h2 id=\"integration-tests\">Integration tests<\/h2>\n\n<p>Nix provides a testing framework, but I only started to use it recently. It spins up one or more virtual machines and asserts that they work as expected. I wrote a test that looks for the tailscale network interface:<\/p>\n\n<pre><code class=\"language-nix\">let\n  nixpkgs = fetchTarball \"https:\/\/github.com\/NixOS\/nixpkgs\/archive\/0f8f64b54ed07966b83db2f20c888d5e035012ef.tar.gz\";\n  pkgs = import nixpkgs { };\nin\npkgs.nixosTest\n  ({\n    system = \"x86_64-linux\";\n\n    nodes.machine = import .\/configuration.nix;\n\n    testScript = ''\n      start_all()\n      machine.succeed(\"sleep 5\")\n      machine.succeed(\n          \"ifconfig | grep tailscale0\",\n      )\n    '';\n  })\n<\/code><\/pre>\n\n<p>This test uses the same <code>configuration.nix<\/code> I used to generate my netbooting NixOS. It starts a node called <code>machine<\/code> and, via a Python script, it runs the bash command <code>ifconfig | grep tailscale0<\/code>.<\/p>
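\n\n<p>Running it is a plain <code>nix-build<\/code> away - a sketch, assuming the test above is saved as <code>test.nix<\/code>:<\/p>\n\n<pre><code class=\"language-bash\"># Builds the test driver, boots the VM, and runs the test script;\n# the build fails loudly if any assertion does not pass\nnix-build test.nix\n<\/code><\/pre>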
\n\n<p>I am sure I can do better than <code>sleep 5<\/code>, but as I said, I am far away from being good at this.<\/p>\n\n<p>You can use this approach to run assertions on multiple nodes; here is an example from nix.dev: <a href=\"https:\/\/nix.dev\/tutorials\/integration-testing-using-virtual-machines\">\u201cIntegration testing using virtual machines (VMs)\u201d<\/a>.<\/p>\n\n<h2 id=\"steep-learning-curve\">Steep learning curve<\/h2>\n\n<p>Everyone agrees that Nix and NixOS are not easy technologies to pick up. And I can confirm it: there are articles, blogs, and dotfiles available everywhere, but they all look different, and it is hard to figure out if they are new or old, or how to apply them to your use case.<\/p>\n\n<p>Flake is an attempt from the community to standardize all of that, and much more. We will see!<\/p>\n\n<p>It is also true that motivation and context can flatten the curve. My plan is to write more about this topic, since I am trying to spin up and automate a home lab.<\/p>\n\n<p>I have to figure out how to do secret management, but as soon as I have it sorted out I will share my homelab configuration, as I share my laptop configurations in my <a href=\"https:\/\/github.com\/gianarb\/dotfiles\/tree\/main\/nixos\">dotfiles<\/a>.<\/p>\n\n<p>Stay tuned.<\/p>\n"},{"title":"Website redesign and goodbye Bootstrap","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/redesign-goodbye-bootstrap"}},"description":"I managed to remove Bootstrap from my website! You should not read this post","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2021-12-17T10:08:27+00:00","published":"2021-12-17T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/redesign-goodbye-bootstrap","content":"<p>I managed to remove Bootstrap from my website! For me, it was a prime example of vendor lock-in.<\/p>\n\n<p>A few years ago the trend was to use a cloud provider, but not enough to feel locked to a particular provider. In practice, compute was the only service allowed to be used. Everything else was an attempt from the devil to keep you down. The cause was a lack of trust. Compute didn\u2019t matter that much; it was not making anything more complicated, it was just somebody else\u2019s virtual machine.<\/p>\n\n<p>Now it is clear that services like object stores, managed databases, queue systems, machine learning, or serverless are the secret of success when it comes to cloud providers, because those services are stable and available for you to use quickly with zero operational effort. It gets harder to move to another vendor, but there is not a lot you can do to avoid that. Motivation avoids vendor lock-in.<\/p>\n\n<p>For me it was the same: I tried many times to remove Bootstrap from my website, but I never cared enough about the outcome, because I am not a designer and\u2026 really, I don\u2019t care.<\/p>\n\n<p>On Friday 17th December something changed! I had time and I was up for something boring, so I did it! I replaced Bootstrap with a couple of CSS classes.<\/p>\n\n<h2 id=\"goodbye-navbar\">Goodbye navbar<\/h2>\n\n<p>I decided to remove the navigation bar at the top of the website. First, I don\u2019t know how to make one on my own. Second, the number of pages was limited to three. Not enough to justify a real menu. Now a post contains the content and nothing more, very clean, and I think it helps stay focused on what matters.<\/p>\n\n<p>I left a small link to get back to the list of the other posts I wrote, and that\u2019s it.
This is not a magazine, and I won\u2019t make any money out of this website, so there is no reason to drive you to other articles. Also, the people who read what I write are good enough with computers to figure out what they want on their own.<\/p>\n\n<h2 id=\"your-browser-is-cooler-than-me\">Your browser is cooler than me<\/h2>\n\n<p>No extra fonts or font sizes; I tried to limit the number of HTML tags, winning in accessibility. I trust your browser\u2019s ability to interpret HTML, and the fonts you have installed should be enough to read what I have to write! Let\u2019s get back to simple things.<\/p>\n\n<h2 id=\"adv-or-not-adv\">ADV or not ADV<\/h2>\n\n<p>I removed Google Analytics months ago. Is it time to remove ads? I didn\u2019t make much money from this website, probably not enough to justify a banner and that javascript. I earned enough to run this website for another 10 years, and I think that is enough! I reached sustainability! And you don\u2019t need to thank me! I am the first one using Brave as a browser, blocking noisy banners. I think the majority of the people reading my posts do the same.<\/p>\n\n<p>The downside is that joining the Carbon network was a goal I had a few years ago, because there is a number you have to reach in order to enter the program. Ads are the only source of vanity numbers for this project, and I kind of like to use $ for this, even if it is a bit more than \u201cnothing a month\u201d. I don\u2019t know; if you have feedback, let me know!<\/p>\n\n<p>My feeling is that 2022 will be all about finding other sources of income that do not come from my spending hours writing code. Not sure if spending time writing articles counts just yet. Probably not what I want.<\/p>\n\n<h2 id=\"whats-next\">What\u2019s next<\/h2>\n\n<p>Not much; my readers are not many, and this year I didn\u2019t have much to write about. A website with fewer components will enable me to experiment a little bit more. An item on my invisible todo list is to write yet another static site generator. The world needs more Rust code\u2026 who knows.<\/p>\n\n<p>Probably it won\u2019t happen. I don\u2019t have special needs, but if it happens, you will notice!<\/p>\n\n<h2 id=\"credit\">Credit<\/h2>\n\n<p>I think I am reading too much of what Drew DeVault writes! The minimalism of this initiative comes from his <a href=\"https:\/\/drewdevault.com\/\">website<\/a>. My friend <a href=\"https:\/\/fntlnz.wtf\">Lorenzo<\/a> is a person I admire when it comes to CSS. He is good with eBPF as well, but not as good!<\/p>\n\n<p>Have a great Christmas!! Relax and do what you like with the people you love!<\/p>\n"},{"title":"How I started with NixOS","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-i-started-with-nixos"}},"description":"I played with NixOS for the last couple of months. This is a story about how I picked it up, or how I should have done it.","image":"https:\/\/gianarb.it\/img\/1280px-NixOS_logo.png","updated":"2021-10-01T10:08:27+00:00","published":"2021-10-01T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/how-i-started-with-nixos","content":"<p>I frequently change operating systems and distributions, moving between macOS and Linux, because I didn\u2019t marry any of them yet.<\/p>\n\n<p>Just before having a MacBook again, I was an ArchLinux user, a happy one. I have to admit it was not that different compared with other distributions, at least as a user.
Yes, fewer packages installed, a few services - please don\u2019t freak out, as I wrote, I enjoyed it.<\/p>\n\n<p>I see value in describing your desires as code, learning from other people sharing their code, importing or copy-pasting it in different places.<\/p>\n\n<p>Developers do that all day. I am a representative: I hope what I write will match my desires. After so many years I am full of hope.<\/p>\n\n<p>With this in mind, Arch, Debian, or Ubuntu do not make a difference. It is all about the package manager. NixOS and Nix looked to me like a step forward in this sense.<\/p>\n\n<p>I decided to end my vacation with macOS early. I picked up my personal Asus Zenbook 3 from its box to install NixOS.<\/p>\n\n<p>Coming from ArchLinux, the NixOS installation process is similar; we are on our own:<\/p>\n\n<ol>\n  <li>Format disks<\/li>\n  <li>Write partition table<\/li>\n  <li>Mount partitions<\/li>\n  <li>And so on<\/li>\n<\/ol>\n\n<p>The main difference comes when you run <code>nixos-generate-config<\/code>:<\/p>\n\n<pre><code># nixos-generate-config --root \/mnt<\/code><\/pre>\n\n<p>The command tries its best to detect kernel modules from your hardware, mount points, and so on. This phase is a great time to start your first of many fights with NixOS.\nThe generated files will be <code>\/mnt\/etc\/nixos\/configuration.nix<\/code> and <code>\/mnt\/etc\/nixos\/hardware-configuration.nix<\/code>. Open them to validate that they make sense. Don\u2019t worry: it is a Linux distribution. If something is missing, it will tell us.\nThe <code>hardware-configuration.nix<\/code> file, as the name suggests, identifies your hardware.<\/p>\n\n<p>Not everything can be detected yet. I use <code>luks<\/code> to encrypt my disks; the generated <code>hardware-configuration<\/code> needs a bit of help to figure it out.<\/p>\n\n<pre><code class=\"language-nix\">  boot.initrd.luks.devices = {\n    root = {\n      device = \"\/dev\/nvme0n1p2\";\n      name = \"root\";\n      preLVM = true;\n      allowDiscards = true;\n    };\n  };\n<\/code><\/pre>\n\n<p>Nix as a programming language takes a bit of practice, but NixOS is different. Many people share their configuration on GitHub, a boost in productivity.\nI keep a list of NixOS configurations and Nix-related repositories that I look at when I don\u2019t know how to solve a particular issue. <a href=\"https:\/\/github.com\/gianarb\/dotfiles\/tree\/master\/nixos#credits\">I really think you should do the same<\/a>, because nobody wants to spend a day fixing their laptop, even worse if it is the one you use at work.<\/p>\n\n<h2 id=\"start-simple\">Start simple<\/h2>\n\n<p>My end goal was to check out my NixOS configuration as part of my dotfiles in a git repository. Too much when you don\u2019t even know how NixOS works.<\/p>\n\n<p>I put this goal aside for a few weeks, and my new goal was to get my laptop working in all its parts. The complicated part, which I didn\u2019t solve completely, is audio: it works, but the volume control is not as good as it should be. You can check the configuration I use in my dotfiles, but the solution does not matter.<\/p>\n\n<h2 id=\"check-it-out\">Check it out<\/h2>\n\n<p>When I was happy with my configuration it was time to finally move it to its final destination. I had joined the \u201cstable era\u201d of my Nix configuration: everything was good enough and it was not changing constantly.
Perfect time for some refactoring.<\/p>\n\n<p>I decided to use my <a href=\"https:\/\/github.com\/gianarb\/dotfiles\">dotfiles repository<\/a> with a <code>nixos<\/code> subdirectory. This is the layout I had when I first moved the configuration from my local environment to Git:<\/p>\n\n<pre><code>$ tree -L 1 .\/nixos\n.\/nixos\n\u2514\u2500\u2500 machines\n    \u2514\u2500\u2500 AsusZenbook\n        \u251c\u2500\u2500 configuration.nix\n        \u2514\u2500\u2500 hardware-configuration.nix\n<\/code><\/pre>\n\n<p>Those <code>*.nix<\/code> files are a copy of the ones I have in <code>\/etc\/nixos<\/code>.<\/p>\n\n<p>Now I had to teach NixOS where the new configuration is. There are various ways; I decided to delete everything inside <code>\/etc\/nixos\/configuration.nix<\/code>, leaving only an <code>import<\/code> of the configuration I moved as part of my dotfiles.<\/p>\n\n<p>NOTE: I clone my dotfiles at <code>\/home\/gianarb\/.dotfiles<\/code>.<\/p>\n\n<p>I didn\u2019t need <code>\/etc\/nixos\/hardware-configuration.nix<\/code>, and this is the content of my <code>\/etc\/nixos\/configuration.nix<\/code>:<\/p>\n\n<pre><code class=\"language-nix\">{ config, ... }:\n\n{\n  imports = [\"\/home\/gianarb\/.dotfiles\/nixos\/machines\/AsusZenbook\/configuration.nix\"];\n}\n<\/code><\/pre>\n\n<h2 id=\"pick-a-second-use-case\">Pick a second use case<\/h2>\n\n<p>I got a new Thelio System76 workstation (thanks to EraDB), and it was the perfect opportunity to re-use my fresh NixOS configuration, and my new skills.<\/p>\n\n<p>At this point I am still working from my Asus Zenbook, but it is time to get a new <code>.\/machines\/thelio<\/code> directory without a <code>hardware-configuration.nix<\/code>, only with the <code>configuration.nix<\/code> in there. The idea is to start extracting what you want to reuse from your first machine into its own files that can be imported everywhere you want.<\/p>\n\n<p>I started with my user, because it is a common desire to reuse the same user across different machines. That\u2019s why I have a <code>users<\/code> subdirectory in my dotfiles.<\/p>\n\n<pre><code>gianarb@huge ~\/.dotfiles  (master=) $ cat nixos\/users\/gianarb\/default.nix\n{ config, inputs, lib, pkgs, ... }:\nwith lib;\n{\n  # Define a user account. Don't forget to set a password with \u2018passwd\u2019.\n  users.users.gianarb = {\n    isNormalUser = true;\n    uid = 1000;\n    createHome = true;\n    extraGroups = [\n      \"root\"\n      \"wheel\"\n      \"networkmanager\"\n      \"video\"\n      \"dbus\"\n      \"audio\"\n      \"sound\"\n      \"pulse\"\n      \"input\"\n      \"lp\"\n      \"docker\"\n    ];\n    openssh.authorizedKeys.keys = [\n      \"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEKy\/Uk6P2qaDtZJByQ+7i31lqUAw9xMDZ5LFEamIe6l\"\n    ];\n  };\n}\n<\/code><\/pre>\n\n<p>I imported it on both machines, as we did previously for the whole configuration, and I split out other applications: <code>i3<\/code>, my audio configuration, vscode, and so on.
You can find all of them inside the <code>applications<\/code> directory:<\/p>\n\n<pre><code>$ tree -L 1 nixos\/applications\/\nnixos\/applications\/\n\u251c\u2500\u2500 i3.nix\n\u251c\u2500\u2500 sound-pipewire.nix\n\u251c\u2500\u2500 sound-pulse.nix\n\u251c\u2500\u2500 steam.nix\n\u251c\u2500\u2500 sway.nix\n\u251c\u2500\u2500 tailscale.nix\n\u2514\u2500\u2500 vscode.nix\n<\/code><\/pre>\n\n<p>Double-checking that the refactoring works is just a matter of re-building NixOS:<\/p>\n\n<pre><code># nixos-rebuild test\n# nixos-rebuild switch\n<\/code><\/pre>\n\n<h2 id=\"time-to-install-nixos-the-second-target\">Time to install NixOS on the second target<\/h2>\n\n<p>I had everything I needed to re-install NixOS with my configuration on another target. It was time to set up a USB stick and boot Thelio from the USB.\nThe system I want is described as Nix configuration. The installation looks the same as the one we have done, or the one described in the documentation, but at this point we do not need the generated configuration. We have our own.\nThe only part we need the first time, if we want it, is the <code>hardware-configuration.nix<\/code>.<\/p>\n\n<ol>\n  <li>When you have booted from USB you can do what you have done previously, and what is explained in the <a href=\"https:\/\/nixos.org\/manual\/nixos\/stable\/#sec-installation\">NixOS installation guide<\/a>: format and partition the disk.<\/li>\n  <li>When you have the disk layout done you can mount it to <code>\/mnt<\/code>, and you can clone or download the git repository with your nix configuration somewhere. I usually clone it where I want it to end up: <code>\/home\/gianarb\/.dotfiles<\/code>.<\/li>\n  <li>Time to run <code>nixos-generate-config<\/code> as you do all the time<\/li>\n  <li>Replace <code>\/etc\/nixos\/configuration.nix<\/code> with the <code>import<\/code>, and copy the generated <code>hardware-configuration.nix<\/code> to your <code>machines<\/code> folder<\/li>\n  <li>The last step is to open the hardware-configuration and figure out if it makes sense for your hardware.<\/li>\n  <li>When you are happy with it you can run <code>nixos-install<\/code> and it will install from the configuration you have just declared.<\/li>\n<\/ol>\n\n<p>If it sounds like a convoluted process, it can be simplified. But I haven\u2019t invested in that yet! I don\u2019t want to reinstall all the time. You can read this article if you want to erase your laptop every day: <a href=\"https:\/\/grahamc.com\/blog\/erase-your-darlings\">\u201cErase your darlings\u201d by Graham Christensen<\/a>.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>You just read about my journey with NixOS. With a centralized repository I can assemble, compile, and ship images to run on AWS, or ISOs I can PXE boot.\nI can build a NixOS derivation that I can use as an installation driver, for example one that clones my dotfiles.<\/p>\n\n<p>As a next project I want to build an ISO that I can flash onto a Raspberry Pi that will act as a media hub for my speakers, playing Bluetooth audio or Spotify playlists via <a href=\"https:\/\/github.com\/dtcooper\/raspotify\">raspotify<\/a>.<\/p>\n"},{"title":"How I tricked the cable mafia with PXE. Install OpenWRT on APU4d","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/home-made-router-msata-netboot"}},"description":"No matter how many cables or dongles you have, they are never enough. The best you can do is to trick the system.
I tried Pixiecore to PXE boot Alpine on my APU4d and install OpenWRT on it.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2021-04-06T10:08:27+00:00","published":"2021-04-06T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/home-made-router-msata-netboot","content":"<p>I am too lazy to buy a cable or another adapter. But not too lazy to buy an APU4d: a board specialized for networking, with an AMD Embedded G-series GX-412TC, widely used for routers.<\/p>\n\n<p>I got it directly from the manufacturer <a href=\"https:\/\/www.pcengines.ch\/apu4d4.htm\">PC Engines<\/a>, with a serial-to-USB cable and a <a href=\"https:\/\/it.aliexpress.com\/item\/32443776508.html\">Huawei LTE miniPCI chip<\/a>.<\/p>\n\n<p>I also got the <a href=\"https:\/\/www.pcengines.ch\/msata16g.htm\">16GB mSATA SSD module<\/a> because you never know: a router with a 16GB SSD and 4GB of RAM sounds like an opportunity to run more tools on it!<\/p>\n\n<p class=\"text-center\"><img src=\"\/img\/apu4d.jpeg\" alt=\"Picture of the APU4D board from PC Engines\" class=\"img-fluid w-75\" \/><\/p>\n\n<p>I assembled all of it nicely at my desk. It was too late when I realized I didn\u2019t know how to flash an mSATA SSD, because I don\u2019t have the proper cabling\u2026<\/p>\n\n<p>No matter how big the box with all my cables and dongles is, I will never own the one I need. It is a mantra nobody can escape. The best you can do is to trick the system.<\/p>\n\n<p>Luckily for me, the APU4d supports PXE booting, and we know how cool that is: the perfect opportunity to try <a href=\"https:\/\/github.com\/danderson\/netboot\/blob\/master\/pixiecore\/README.api.md\">pixiecore<\/a> and have some fun with netbooting.<\/p>\n\n<p>It worked. If all of this sounds unreasonable, you need to remember that most likely you are right. But you know how much I like simple tools. Pixiecore was on my radar.<\/p>\n\n<h2 id=\"get-what-you-need\">Get what you need<\/h2>\n\n<p>First of all, I installed Pixiecore. It is a Go binary: you can run it as a Docker container, or you can compile it with <code>go build<\/code>, but I decided to use a Nix shell:<\/p>\n\n<pre><code class=\"language-bash\">nix-shell -p pixiecore\n<\/code><\/pre>\n\n<p>In practice, it is a program that helps you serve what a piece of hardware needs to PXE boot over the network; it serves iPXE and a TFTP server, for example. It is light and not intrusive. You can keep your DHCP server and, if you like, even implement an API to drive how and what to PXE boot dynamically. Today I have to boot only one server in a very boring network, and my solution is already over-engineered, so I decided to run it in static mode:<\/p>\n\n<pre><code class=\"language-bash\">sudo pixiecore boot .\/vmlinuz-vanilla initramfs-vanilla \\\n    --cmdline='console=ttyS0,115200n8 \\\n    alpine_repo=http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/main\/ \\\n    modloop=http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/releases\/x86\/netboot-3.9.6\/modloop-vanilla'\n<\/code><\/pre>\n\n<p>The first two arguments of the command line are the Alpine kernel and the init ramdisk. I got them directly from the <a href=\"http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/releases\/x86\/netboot-3.9.6\">Alpine repository<\/a>.<\/p>
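\n\n<p>For completeness, grabbing the two artifacts looks roughly like this - a sketch, following the repository layout linked above:<\/p>\n\n<pre><code class=\"language-bash\"># Kernel and init ramdisk that pixiecore will serve to the board\nwget http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/releases\/x86\/netboot-3.9.6\/vmlinuz-vanilla\nwget http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/releases\/x86\/netboot-3.9.6\/initramfs-vanilla\n<\/code><\/pre>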
\n\n<p>The <code>--cmdline<\/code> option can be used to pass configuration to the operating system. The <a href=\"https:\/\/wiki.alpinelinux.org\/wiki\/PXE_boot\">Alpine netboot wiki page<\/a> lists the various options supported by the init script.<\/p>\n\n<p>Now that I had set up the PXE distribution tool, I powered on the APU4d board. By default, it tries to boot from a couple of different devices. The last one is PXE mode.<\/p>\n\n<pre><code class=\"language-console\">sudo pixiecore boot \\\n    .\/vmlinuz-vanilla initramfs-vanilla \\\n    --cmdline='console=ttyS0,115200n8 \\\n        ssh_key=https:\/\/github.com\/gianarb.keys \\\n        alpine_repo=http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/main\/ \\\n        modloop=http:\/\/dl-cdn.alpinelinux.org\/alpine\/v3.9\/releases\/x86\/netboot-3.9.6\/modloop-vanilla'\n\nPassword:\n[DHCP] Offering to boot 00:0d:b9:5a:3e:10\n[DHCP] Offering to boot 00:0d:b9:5a:3e:10\n[TFTP] Sent \"00:0d:b9:5a:3e:10\/4\" to 192.168.1.87:55360\n[DHCP] Offering to boot 00:0d:b9:5a:3e:10\n[HTTP] Sending ipxe boot script to 192.168.1.87:29233\n[HTTP] Sent file \"kernel\" to 192.168.1.87:29233\n[HTTP] Sent file \"initrd-0\" to 192.168.1.87:29233\n<\/code><\/pre>\n\n<p><code>192.168.1.87<\/code> is the IP the APU4 got from my DHCP. Everything is working, and from the serial port I can see Alpine booting. The <code>root<\/code> password is <code>root<\/code>! Classy!<\/p>\n\n<h2 id=\"time-to-install-openwrt\">Time to install OpenWRT<\/h2>\n\n<p>I never used OpenWRT before. It is a Linux distribution for routers. You can even flash it to TP-LINK or Netgear devices, if supported, at your own risk.<\/p>\n\n<p>Anyway, since I am now running Alpine in memory on my APU4d, I have a functional operating system and access to the device. I can use traditional tools like <code>dd<\/code> to write OpenWRT directly to disk, manipulate partitions, and so on\u2026 I followed the blog post <a href=\"https:\/\/teklager.se\/en\/knowledge-base\/openwrt-installation-instructions\/\">\u201cOpenWRT installation instructions for APU2\/APU3\/APU4 boards\u201d<\/a> written by TekLager.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>My router looks to be up and running. I was able to reach the administrative web UI. I haven\u2019t used it yet because I have to relocate it to my new house, so I am sure you will read more about it in future articles.<\/p>\n\n<p>Pixiecore was on my TODO list because these days hardware and datacenter automation take up a good part of my daily work. Its support for an external API makes it a great alternative for providing an installation environment like <a href=\"https:\/\/github.com\/tinkerbell\/hook\">Hook<\/a> (the one we developed with <a href=\"https:\/\/github.com\/tinkerbell\">Tinkerbell<\/a>) without having to onboard the full Tinkerbell stack; in particular, I can avoid <a href=\"https:\/\/docs.tinkerbell.org\/services\/boots\/\">boots<\/a> when it is not needed.<\/p>\n"},{"title":"DIY Board management control for an Intel NUC: power control","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/homelab-diy-bmc-intel-nuc"}},"description":"This is my first experiment with reading and understanding a schematic. I hooked up an Intel NUC to a Raspberry Pi to get control over its power lifecycle. I see it as a very simple board management controller (BMC)","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2021-03-14T10:08:27+00:00","published":"2021-03-14T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/homelab-diy-bmc-intel-nuc","content":"<p>I want to start this article with a disclaimer.
What follows is not a tutorial or a guide. Do what you want, but do not blame me if you fry your Intel NUC (they do not taste good).<\/p>\n\n<p>When it comes to hardware and datacenters, I am not an expert. I was born and raised in the cloud, and recently I joined Equinix Metal (previously PacketHost). That\u2019s why my interest changed, and I now have a disassembled NUC and a multimeter on my desk.<\/p>\n\n<p>If you don\u2019t know the origin of that PCB, I wrote a piece about my homelab for the Equinix Metal blog: <a href=\"https:\/\/metal.equinix.com\/blog\/building-an-ephemeral-homelab\/\">\u201cBuilding an Ephemeral Homelab.\u201d<\/a><\/p>\n\n<p>Long story short, almost one year ago, straight after joining Equinix Metal, I got a couple of NUCs and Nvidia Jetsons to play with, fully cabled in a 1U brick. Cool, but I have to admit it is only helpful for experimentation. It was cheap and the boards themselves are old. But this is everything I need: something I can break without feeling too bad.<\/p>\n\n<p>When it comes to fully fledged servers, you quickly learn that they are made of building blocks, and an essential one is the board management controller (BMC). Think about it as a small, low-consumption PC that manages the big brother, the actual server. When it comes to servers, you know they are loud and consume a lot of power. That\u2019s why you have a BMC whose only responsibility is to manage the expensive server. It can control its power, switching it on and off, and monitor its status with metrics like voltage consumption, temperature, and so on. It can even select the boot device, which is handy if you want to enter PXE mode, for example, to manage your server without touching it.<\/p>\n\n<p>The BMC is wired to the server; you don\u2019t have to power it separately; usually, you only have to hook it up with an RJ45 to a switch. Extremely functional, but because those who have access to the BMC can take control of the actual server, it is an excellent idea to place the NIC in a dedicated VLAN.<\/p>\n\n<h3 id=\"time-to-hack\">Time to hack<\/h3>\n\n<p>My homelab arrived cabled with a relay controllable from an outside board like an Arduino or a Raspberry Pi. Switching the relay on or off cuts the power brutally, almost like directly pulling the board\u2019s power cable.<\/p>\n\n<p>The NUC does not have a BMC and does not consume much, so there is no point in having another computer controlling it - but hey, this is my home lab, and we are after something here.<\/p>\n\n<p>I downloaded my <a href=\"https:\/\/www.intel.com\/content\/dam\/support\/us\/en\/documents\/boardsandkits\/NUC5CPYB_NUC5PPYB_TechProdSpec11.pdf\">board\u2019s schematic<\/a> a few months ago, and from time to time I look at it for inspiration. I studied electronics in high school, and the Arduino was invented in my region, but I don\u2019t think that counts.<\/p>\n\n<p>I want to switch my boards on and off properly without leaving my desk, because I am lazy. I want to use a Raspberry Pi for this job because I can write code in any language I know. Spoiler alert: for this prototype I used ~5 lines of Bash.<\/p>\n\n<p><img src=\"\/img\/bmc_pi_front_panel_spec.png\" alt=\"Picture coming from the NUC schematic. It describes the pinout of the front panel.
It exposes a power switch and a few output pins to get power status from the NUC\" class=\"img-fluid d-block mx-auto\" \/><\/p>\n\n<p>During one of many rounds of randomly reading the table of contents, I saw a Front Panel Header exposed by the NUC that says: \u201cPower\/Sleep LED Header\u201d. It looks like there is a way to connect an LED to the NUC to see its status, fun! Nothing complicated: 1 means the board is on, 0 means the board is off. The LED can be replaced with a GPIO from the Raspberry Pi (I used GPIO22) and hooked to a few Bash lines (as a prototype) to read the actual value from the NUC. I used this guide, <a href=\"https:\/\/raspberrypi-aa.github.io\/session2\/bash.html\">\u201cBash Control of GPIO Ports.\u201d<\/a> I used tmux so I could leave it running in the background:<\/p>\n\n<pre><code class=\"language-sh\">#!\/bin\/bash\n\ntmux new-session -d -s power_status\ntmux send-keys \"watch -n 1 'cat \/sys\/class\/gpio\/gpio22\/value &gt;&gt; \/tmp\/current_power_status'\" C-m\ntmux detach -s power_status\n<\/code><\/pre>\n\n<p>Now that the first mini-circuit was done and I was a bit more confident, I kept reading: \u201cPower Switch Header.\u201d<\/p>\n\n<blockquote>\n  <p>Pins 6 and 8 can be connected to a front panel momentary-contact power switch. The switch must pull the SW_ON# pin to ground for at least 50 ms to signal the power supply to switch on or off.<\/p>\n<\/blockquote>\n\n<p>This sounds easy; I cabled PIN 6 from the NUC to GPIO 17 on the RPi and PIN 8 to ground, and with two bash scripts I figured it all out!<\/p>\n\n<p><img src=\"\/img\/bmc_pi_prototype.jpg\" alt=\"A picture of a Raspberry Pi cabled to an Intel NUC board to control the power lifecycle\" class=\"img-fluid d-block mx-auto\" \/><\/p>\n\n<pre><code class=\"language-terminal\">\nroot@raspberrypi:~\/power_switch# ls\nlog.sh poweroff.sh poweron.sh\n\nroot@raspberrypi:~\/power_switch# cat poweroff.sh\n#!\/bin\/bash\necho \"0\" &gt; \/sys\/class\/gpio\/gpio17\/value\n\nroot@raspberrypi:~\/power_switch# cat poweron.sh\n#!\/bin\/bash\necho \"1\" &gt; \/sys\/class\/gpio\/gpio17\/value\nsleep 0.2\necho \"0\" &gt; \/sys\/class\/gpio\/gpio17\/value\nsleep 0.2\necho \"1\" &gt; \/sys\/class\/gpio\/gpio17\/value\n<\/code><\/pre>\n\n<h3 id=\"power-the-raspberry-pi\">Power the Raspberry PI<\/h3>\n\n<p>Half of me likes the idea of getting a Raspberry Pi or equivalent for each NUC, pretending that it is a BMC (I have other things I want to do with it, more at the end of the article). Either way, I like the idea of powering the Raspberry Pi from the NUC itself to save a power supply and a cable. If you looked carefully at the front panel picture, you probably noticed that PIN 9 is a +5V_DC (2A), just enough to power an RPi via GPIO. But you need to know that the GPIO, unlike the USB port, does not implement any safety protection. If you supply an incorrect voltage, the RPi will burn.<\/p>\n\n<p>Anyway, PIN 9 is not what I am looking for, because it goes up to +5 V only when the NUC is on. We want to get the RPi powered on even when the NUC is off (but plugged into the power supply).<\/p>\n\n<p>The NUC has a header called \u201cAuxiliary power connector\u201d that does just what I need! I hooked it all up, and we have power.<\/p>\n\n<p><img src=\"\/img\/rpi_bmc_auxiliary_power_spec.png\" alt=\"This is an image I took from the NUC schematics. It describes how the &quot;Auxiliary power connector&quot; works.
<h3 id=\"conclusion\">Conclusion<\/h3>\n\n<p>I can\u2019t tell if this is or will ever be a BMC, but I quite like where this is going, and I had fun. Short term, I can hook more NUCs to the same RPI and play with it. I can rewrite the bash scripts you saw in some other language, exposing something like an HTTP API that I can interact with programmatically.<\/p>\n\n<p>But I am after something better that I am not sure I can figure out. There is something else: I want to visualize output from the NUC. With Tinkerbell, I already have some control over the machine lifecycle because the NUC is capable of PXE booting. I can inject an in-memory operating system (Linux) and SSH into a NUC even if it does not have an operating system installed. But I want more; I want to look at the BIOS and things like that. An \u201ceasy\u201d solution is the HDMI dongle: I can get an HDMI video capture, hook it up to the RPI, and forward the NUC output with VNC or something like that; I can do something similar to forward what I type with a keyboard. A better solution is to use a serial console. Unfortunately, my board does not expose it. Joel, one of my colleagues at Equinix Metal, told me that my CPU most likely has one, and he is correct (according to the CPU schematic), but the board does not have a header that I can use. But this is a story for a future article (if we figure it out).<\/p>\n\n<p><a href=\"https:\/\/pixabay.com\/photos\/measuring-equipment-electronic-2622334\/\" class=\"small\">Hero image from Pixabay<\/a><\/p>\n"},{"title":"Nix for developers","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/nix-for-developers-sneaking-in-my-toolchain"}},"description":"Nix is slowly sneaking into my toolchain as a developer, bringing back the joy of provisioning. Not as an exercise of translation between technologies and environments but as the art of building your own environment.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2021-01-25T10:08:27+00:00","published":"2021-01-25T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/nix-for-developers-sneaking-in-my-toolchain","content":"<h2 id=\"nix-is-slowly-sneaking-in-my-toolchain\">Nix is slowly sneaking into my toolchain.<\/h2>\n\n<p>Currently, a lot of my colleagues use Nix. It is a package manager that runs on Linux and macOS. It is versatile. I will show you more about it moving forward, but for now, think about it as a replacement for APT, YUM, and Homebrew that works on both Mac and Linux. It is also a build system, but I haven\u2019t used it much for that just yet.<\/p>\n\n<h3 id=\"not-tight-to-an-operating-system\">Not tied to an operating system<\/h3>\n\n<p>For me, this is already a huge benefit. From time to time, for no reason, I end up switching from Mac to Linux and vice versa. It usually happens because I change the place where I work, and the policy forces me to make some weird decisions.<\/p>\n\n<p>When I finished college, my parents bought me my first laptop; it was a Macbook Pro, but for the first two or three jobs, I used Linux because Mac was too expensive for my employers, and the main reason for owning a Mac was that back then I was not a fully fledged developer. I used to do video editing, playing with Photoshop, and so on.
I quickly learned that I couldn\u2019t match two colors nicely together, and Linux was just enough for me as a developer living in the CLI, VIM, and tools like that. When I was in Dublin, a Macbook Pro was the only available option; at InfluxData, I had a Thinkpad (the best laptop I ever had). Currently, I work on a Macbook Pro again because the available non-Apple option was pretty low in terms of performance.<\/p>\n\n<p>Now that you know my struggle for laptop and operating system consistency, a tool that works on both sounds appealing.<\/p>\n\n<p>Nix has its own Linux distribution called NixOS. I am slowly having a look at it, but it is not a topic for this article.<\/p>\n\n<h3 id=\"declarative-environment\">Declarative environment<\/h3>\n\n<p>The open-source project I have maintained most consistently over the years is my <a href=\"https:\/\/github.com\/gianarb\/dotfiles\">dotfiles<\/a> repository. I am probably the only person who knows how to run it, but it contains the configuration for the various tools I use.<\/p>\n\n<p>I have to admit that I would like to install it consistently and quickly on any of the on-demand servers I spin up, but I am too lazy for it. Anyway, I like that approach because I describe what I want, and I can consistently get it everywhere. Nix gives me the same possibility, and it does not use a specification language like YAML, JSON, or whatever; it uses its own dialect.<\/p>\n\n<p>It is a lazy, pure, and functional language. It is pretty awkward, I have to say, at least for my background. I haven\u2019t fully figured it out yet, but the more I use it, the better it sticks in my mind.<\/p>\n\n<p>I am also not that good when it comes to picking up new languages; it takes me some time, and I have to practice with them.<\/p>\n\n<p>The good thing is that there are plenty of tutorials, each aimed at a different audience. Do you like to be driven by example? There is <a href=\"https:\/\/nixos.wiki\/wiki\/Nix_Expression_Language\">\u201cNix by example.\u201d<\/a> You have time, and you want a more traditional <a href=\"https:\/\/nixos.org\/manual\/nix\/stable\/#ch-expression-language\">reference manual<\/a>; they have you covered.<\/p>\n\n<p>The fact that I don\u2019t have to fight with a template engine makes me happy.<\/p>\n\n<h3 id=\"it-is-all-based-on-git\">It is all based on Git.<\/h3>\n\n<p>I have used Git since my first day at my first job. I was a solo developer, and my remote repository was not GitHub but a USB stick.<\/p>\n\n<p>The Nix package manager is a GitHub repository. You can have your own, or you can use <a href=\"https:\/\/github.com\/NixOS\/nixpkgs\">nixpkgs<\/a>. You can even merge multiple ones.
Or import your own derivations (a derivation is what Nix calls a package).<\/p>\n\n<p>All you need to look at all the packages and their definitions is a text editor and Git to clone a repository, and that gives me a friendly feeling.<\/p>\n\n<p>Based on how you want to define your environment, you can pin all the packages you are installing to a specific commit SHA from the package manager repository:<\/p>\n\n<pre><code class=\"language-nix\">let _pkgs = import &lt;nixpkgs&gt; { };\nin\n{ pkgs ?\n  import\n    (_pkgs.fetchFromGitHub {\n      owner = \"NixOS\";\n      repo = \"nixpkgs\";\n      #branch@date: nixpkgs-unstable@2021-01-25\n      rev = \"ce7b327a52d1b82f82ae061754545b1c54b06c66\";\n      sha256 = \"1rc4if8nmy9lrig0ddihdwpzg2s8y36vf20hfywb8hph5hpsg4vj\";\n    }) { }\n}:\n\nwith pkgs;\n<\/code><\/pre>\n\n<p>Very powerful.<\/p>\n\n<h3 id=\"environment-composition-with-nix\">Environment composition with Nix<\/h3>\n\n<p>I am not sure if \u201cenvironment composition\u201d makes any sense, but it sounds descriptive to me. Nix is user and project aware.<\/p>\n\n<p>With nix-env, you can install packages as a user. With nix-shell, you can manipulate your system at the project level. If you add NixOS to this chain, you get free customization at the operating system layer.<\/p>\n\n<p>Currently, nix-shell is the tool I know best, and I am in love with it.<\/p>\n\n<p>I haven\u2019t experienced all those composition levels yet; I am currently writing my home-manager configuration file to solve my dotfiles repository\u2019s required dependencies. Right now, I don\u2019t have a way to install them automatically. I am not sure if that\u2019s the right layer for such a problem yet, but I will figure it out soon.<\/p>\n\n<h3 id=\"project-level-sandboxing-with-nix-shell\">Project level sandboxing with nix-shell<\/h3>\n\n<p>With a combination of symlinks and who knows what, nix-shell gives you a sandboxed environment with only the dependencies you need for your project. When you run nix-shell, it looks for a file called shell.nix that describes the needed dependencies, environment variables, and so on. By default, you get all the commands and utilities you have in your system plus the ones you declared for that project. If you have Go 1.15 on your system but want 1.13 for a single project, nix-shell can make it happen, as sketched below. Tinkerbell has a <a href=\"https:\/\/github.com\/tinkerbell\/tink\/blob\/master\/shell.nix\">shell.nix<\/a> for almost all the repositories.<\/p>
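<p>To make this concrete, here is a minimal sketch of what such a shell.nix could look like. The derivation names are assumptions and depend on the nixpkgs snapshot you use:<\/p>\n\n<pre><code class=\"language-sh\"># A hypothetical shell.nix asking for a specific Go plus jq\ncat &gt; shell.nix &lt;&lt;'EOF'\n{ pkgs ? import &lt;nixpkgs&gt; { } }:\npkgs.mkShell {\n  buildInputs = [ pkgs.go_1_13 pkgs.jq ];\n}\nEOF\n\n# Enter the sandbox: go 1.13 and jq are on PATH for this project only\nnix-shell\n<\/code><\/pre>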
<p>For some particular scenarios, I use Docker containers in development. But with Nix, I can remove that extra layer. I use containers and images to ship and run my applications on Kubernetes. Removing that layer decreases the need for volume mounting and port forwarding, the debugger works much more comfortably, and performance is the one your hardware provides to you, without virtualization if you are on Mac.<\/p>\n\n<p>Containers in development are my way to go for dependencies that I don\u2019t care about or will never modify and that have state, such as databases. But it is a joy to develop \u201clocally.\u201d<\/p>\n\n<h3 id=\"everything-can-be-nixyfied\">Everything can be \u201cnixyfied\u201d<\/h3>\n\n<p>Passing the flag --pure to nix-shell means it won\u2019t rely on the system-installed packages but only on the ones specified in shell.nix. It is a great way to validate that the declaration you wrote for your project can work everywhere you can run Nix. It makes continuous delivery what it should be: a way to run workflows. Today it is not like that; for me, it is a constant translation exercise between Jenkinsfile, Bash, and YAML for GitHub Actions, Drone, or Travis. With Nix, you declare the environment, and you can run it everywhere. For example, you can set a shebang in your scripts, leaving to nix-shell the responsibility of satisfying the dependencies they need:<\/p>\n\n<pre><code class=\"language-sh\">#!\/usr\/bin\/env nix-shell\n#!nix-shell -i bash ..\/shell.nix\n\nmake deploy\n<\/code><\/pre>\n\n<p>If you don\u2019t want to translate from Nix to GitHub Actions, there is an action that installs Nix; combined with the right shebang, you can reuse the shell.nix description for your project. I do that in <a href=\"https:\/\/github.com\/gianarb\/tinkie\/blob\/master\/.github\/workflows\/ci.yaml\">gianarb\/tinkie<\/a>:<\/p>\n\n<pre><code class=\"language-yaml\">name: For each commit and PR\non:\n  push:\n  pull_request:\n\njobs:\n  validation:\n    runs-on: Ubuntu-20.04\n    env:\n      CGO_ENABLED: 0\n    steps:\n    - name: Checkout code\n      uses: actions\/checkout@v2\n    - uses: cachix\/install-nix-action@v12\n      with:\n        nix_path: nixpkgs=channel:nixos-unstable\n    - run: .\/hack\/build-and-deploy.sh\n<\/code><\/pre>\n\n<p>As you can see, I am not using actions to install the dependencies I need; I use <code>cachix\/install-nix-action@v12<\/code> to get Nix, and everything is managed as I do locally. Something I don\u2019t have to maintain, I suppose.<\/p>\n\n<p>Mitchell Hashimoto uses Nix to provision his virtual machine, enjoying the Linux environment for development while keeping macOS.<\/p>\n\n<p class=\"text-center\"><img src=\"\/img\/mitchellh-tweet-nixos.png\" alt=\"Mitchell Hashimoto tweet: I switched my primary dev environment to a graphical NixOS VM on a macOS host. It has been wonderful. I can keep the great GUI ecosystem of macOS, but all dev tools are in a full screen VM. One person said \u201cit basically replaced your terminal app\u201d which is exactly how it feels.\" class=\"img-fluid\" \/><\/p>\n\n<h3 id=\"conclusion\">Conclusion<\/h3>\n\n<p>I tend to avoid complications, and I am picky when it comes to the number of tools I have in my toolchain, but after a few months of observation, I think Nix deserves a place in my daily workflow. I have just scratched the Nix surface; I haven\u2019t even written my first derivation yet.<\/p>\n\n<p>It brings back the joy I had a few years ago provisioning infrastructure, a joy I had lost in the last few years.<\/p>\n\n<p class=\"small\">Hero image via <a href=\"https:\/\/medium.com\/@robinbb\/what-is-nix-38375ed59484\">Medium.com<\/a><\/p>\n"},{"title":"Evolution of a logline","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/evolution-of-a-logline"}},"description":"This is a story that represents the evolution I had in thinking about and writing logs for an application. It highlights why they are important as a communication mechanism between your application and the outside. It explains what I think are the responsibilities we have as developers when writing logs.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2021-01-15T10:08:27+00:00","published":"2021-01-15T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/evolution-of-a-logline","content":"<p>This story represents the evolution of a logline for myself when I write and interpret it.<\/p>\n\n<p>Late in 2012, I worked as a developer for a software agency in Turin, specialized in software for tour operators.
It was my second \u201creal\u201d job and the first one not as a solo developer. Exciting times! An AJAX application with a PHP and MySQL backend running as a service, developed mainly by a single person, the lead developer. I interpreted the log back then as the equivalent of a save point in a game. tail was the primary tool to figure out what was going on; a logline was helpful to figure out that a lot of customers were reaching a particular line of code. The interpretation of the situation was up to humans.<\/p>\n\n<p>Every developer involved in the project participated in adding loglines to the codebase, directly when developing the code, or indirectly when chatting about a particular feature over lunch or doing a code review. Building context from an unknown log line was a useless exercise because the lead was always there to help you figure out what that logline was supposed to tell you.<\/p>\n\n<p>Even the stream\u2019s speed was a crucial metric to figure out the sanity of the application. Were the logs too fast? The application was under heavy load, and probably it was slow, not fast enough, or not as smooth as usual; well, something was going on, and it was not good!<\/p>\n\n<p>You can judge this story as impractical but not as unusual. This approach does not scale; it has an unmeasurable risk of \u201cbus factor\u201d. But if you don\u2019t have a panel in your Grafana dashboard representing the distribution of loglines, you should look at it. Just for fun.<\/p>\n\n<h1 id=\"bus-factor\">Bus Factor<\/h1>\n\n<p>Bus factor represents the risk of centralizing knowledge and responsibility in a single location. If the lead in my story resigns or gets hit by a bus, nobody will build context from \u201cnot that descriptive\u201d log lines as quickly as he did. And the \u201cspeed of tail\u201d requires you to be very familiar with the stream. Sharing knowledge and responsibility across the company, writing documentation, and doing staff rotations are standard techniques that mitigate such risk.<\/p>\n\n<h2 id=\"automation\">Automation<\/h2>\n\n<p>When your application state\u2019s interpretation requires a human, it is tough to build automation for it. Standardization in the way your application communicates with the outside is another way to spread the knowledge in a team, allowing you to write automation for it.<\/p>\n\n<p>The format of a logline is the protocol to develop.<\/p>\n\n<p>The format has to be parsable and usable by automation. You have to see logs as points in time, as time series, more than as something that you should carefully watch and try to interpret yourself.<\/p>\n\n<p>A logline that looks like this:<\/p>\n\n<pre><code>1610107485 New user inserted in the DB with ID=1234\n<\/code><\/pre>\n\n<p>Will become:<\/p>\n\n<pre><code>time=1610107485 service=\"db\" action=\"insert\" id=1234 resource=\"user\"\n<\/code><\/pre>\n\n<p>You can add a message that can be used to communicate with a person, msg=\u201dnew user registered.\u201d, but I am not sure it is mandatory; you can combine it later.<\/p>\n\n<p>We do this exercise with ElasticSearch, applying full-text algorithms and tokenizing the message. It is expensive, and it hides the developer\u2019s responsibility when it comes to consciously describing the current state of the system with a logline. No, they are not random printfs anymore. You can even see it as JSON if you prefer.<\/p>
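<p>Once a logline is a set of key-value pairs, even a shell one-liner becomes automation. A toy sketch (the file name is hypothetical) that counts how many insert actions each service logged:<\/p>\n\n<pre><code class=\"language-sh\"># Count \"insert\" actions per service from a logfmt stream\ngrep 'action=\"insert\"' app.log \\\n  | grep -o 'service=\"[^\"]*\"' \\\n  | sort | uniq -c | sort -rn\n<\/code><\/pre>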
<p>More generally, see a logline as a description of a particular point in time via key-value pairs that you can aggregate, visualize, and use to drive powerful automation. I work a lot in the cloud field; for me, reconciliation, or a system\u2019s ability to repair itself, is often based on those pieces of information. If you want to go deeper on this topic, structured logging is what you should look for.<\/p>\n\n<h2 id=\"flexibility\">Flexibility<\/h2>\n\n<p>Having a logging library that allows you to do structured logging is a must-have, and there is no single answer about the number of key-value pairs you need. The overall goal is to be able to spot issues and learn the behavior of an application from those points. It is not something you use only when having problems. It is the tool the application exposes to figure out what a piece of code we didn\u2019t write is doing in production. In a highly distributed system, a logline with the right fields, such as hostname, PID, region, Git SHA, or version, can tell you where the application having problems is running without looking across many dashboards, Kubernetes UIs, and CLIs.<\/p>\n\n<p>Parsing and manipulating a structured log is more convenient than random text that has to be parsed and tokenized, but everything has a limit, so you have to find the right balance based on experience. It is another never-ending iterative process that we can call the evolution of the logline!<\/p>\n\n<p><img src=\"\/img\/watch-4638673_1280.jpg\" alt=\"This picture represents the time it takes to learn something new. It is a picture of an open book with an old clock.\" class=\"img-fluid\" \/><\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<ul>\n  <li>A logline is the way you teach your application how to communicate with the outside.<\/li>\n  <li>Communication is useful in many fields. It is an opportunity to learn something new or a way to communicate that we are in trouble. Same with logs: use them as an opportunity to learn how a system works overall.<\/li>\n  <li>As a developer, do not see a logline as a random printf.
The way it is structured and articulated improves the communication quality between your application and the outside world.<\/li>\n  <li>A logline is not fire-and-forget but an entity that evolves over time.<\/li>\n  <li>Logs represent the internal state of your application at some point in time and somewhere in your codebase.<\/li>\n<\/ul>\n\n<p>Recently I spoke with <a href=\"https:\/\/twitter.com\/lizthegrey\">Liz<\/a> and <a href=\"https:\/\/twitter.com\/shelbyspees\">Shelby<\/a> from Honeycomb about observability and monitoring during <a href=\"https:\/\/www.heavybit.com\/library\/podcasts\/o11ycast\/ep-32-managing-hardware-with-gianluca-arbezzano-of-equinix-metal\/?utm_campaign=coschedule&amp;utm_source=twitter&amp;utm_medium=heavybit&amp;utm_content=Ep.%20%2332,%20Managing%20Hardware%20with%20Gianluca%20Arbezzano%20of%20Equinix%20Metal\">o11ycast<\/a>, a podcast about observability, if you want to know more about this topic.<\/p>\n\n<p class=\"small\">Hero image via <a href=\"https:\/\/pixabay.com\/illustrations\/dna-string-biology-3d-1811955\/\">Pixabay<\/a><\/p>\n"},{"title":"Kubernetes v1.20: the Docker deprecation dilemma in practice","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubernetes-1-20-dockershim-in-practice"}},"description":"There are many discussions going on on Twitter about why Kubernetes v1.20 deprecated Docker and dockershim as the default runtime. But it was a well thought out and planned effort. Nothing to really worry about, and here I will go over the process of updating Kubernetes from 1.19 to 1.20","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-12-03T10:08:27+00:00","published":"2020-12-03T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubernetes-1-20-dockershim-in-practice","content":"<p>Kubernetes v1.20 is not yet out, but there is already a lot going on behind the scenes. The main reason is the deprecation of Docker as the default runtime.<\/p>\n\n<p>I won\u2019t go too deep into the theory because at this point I think it is well covered elsewhere. But a few things:<\/p>\n\n<ol>\n  <li>If you run Docker, you run containerd. That\u2019s it. Even if you didn\u2019t know, or you don\u2019t like the idea.<\/li>\n  <li>The container runtime interface has been there for a good amount of time, and its goal was to decouple the orchestrator (Kubernetes) from other business like running containers. Who cares about containers in the end.<\/li>\n  <li>Deprecating dockershim, or at least removing it from the kubelet itself, is the right thing to do.<\/li>\n<\/ol>\n\n<p>More about this topic:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/kubernetes.io\/blog\/2020\/12\/02\/dockershim-faq\/\">Dockershim Deprecation FAQ<\/a><\/li>\n  <li><a href=\"https:\/\/kubernetes.io\/blog\/2020\/12\/02\/dont-panic-kubernetes-and-docker\/\">Don\u2019t Panic: Kubernetes and Docker<\/a><\/li>\n<\/ul>\n\n<p>I want to tell you how it works in practice.
This article contains my experience updating a Kubernetes cluster from v1.19 to v1.20.<\/p>\n\n<p>So I created a two-node cluster on <a href=\"https:\/\/console.equinix.com\/\">Equinix Metal<\/a> running Ubuntu using this simple script and kubeadm.<\/p>\n\n<pre><code class=\"language-bash\">#!\/bin\/bash\n\napt-get update\napt-get install -y vim git\n\napt-get install -y \\\n    apt-transport-https \\\n    ca-certificates \\\n    curl \\\n    gnupg-agent \\\n    vim \\\n    git \\\n    software-properties-common\n\nreleaseName=$(lsb_release -cs)\nif [ \"$releaseName\" == \"groovy\" ]\nthen\n    releaseName=\"focal\"\nfi\n\ncurl -fsSL https:\/\/download.docker.com\/linux\/ubuntu\/gpg | sudo apt-key add -\napt-key fingerprint 0EBFCD88\nadd-apt-repository \\\n   \"deb [arch=amd64] https:\/\/download.docker.com\/linux\/ubuntu \\\n   ${releaseName} \\\n   test\"\n\napt-get update\napt-get install -y docker-ce docker-ce-cli containerd.io\n\napt-get update &amp;&amp; sudo apt-get install -y apt-transport-https curl\ncurl -s https:\/\/packages.cloud.google.com\/apt\/doc\/apt-key.gpg | apt-key add -\ncat &lt;&lt;EOF | sudo tee \/etc\/apt\/sources.list.d\/kubernetes.list\ndeb https:\/\/apt.kubernetes.io\/ kubernetes-xenial main\nEOF\nsudo apt-get update\nsudo apt-get install -y kubelet kubeadm kubectl\nsudo apt-mark hold kubelet kubeadm kubectl\n<\/code><\/pre>\n\n<p>I placed that script as CloudInit for my two nodes and I ran <code>kubeadm init\/join<\/code> to get my cluster.<\/p>
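<p>The init\/join pair is the standard kubeadm flow; roughly something like this, where the token and hash placeholders are illustrative values taken from the init output, and the pod CIDR matches Flannel\u2019s default:<\/p>\n\n<pre><code class=\"language-sh\"># On the first node: bootstrap the control plane\nkubeadm init --pod-network-cidr=10.244.0.0\/16\n\n# On the second node: join the cluster using the token printed by init\nkubeadm join &lt;control-plane-ip&gt;:6443 --token &lt;token&gt; \\\n  --discovery-token-ca-cert-hash sha256:&lt;hash&gt;\n<\/code><\/pre>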
<pre><code class=\"language-terminal\"># kubectl get node\nNAME            STATUS   ROLES    AGE     VERSION\ngianarb-k8s     Ready    master   2m11s   v1.19.4\ngianarb-k8s01   Ready    &lt;none&gt;   91s     v1.19.4\n<\/code><\/pre>\n\n<p>I have installed Flannel and now the nodes are ready. That\u2019s it. That\u2019s how I measure success here.<\/p>\n\n<p>It is now time to update to v1.20, so I downloaded the binaries from the registry.<\/p>\n\n<pre><code class=\"language-terminal\"># wget https:\/\/dl.k8s.io\/v1.20.0-rc.0\/kubernetes-server-linux-amd64.tar.gz\n# tar xzvf .\/kubernetes-server-linux-amd64.tar.gz\n# .\/kubernetes\/server\/bin\/kubeadm version\nkubeadm version: &amp;version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-rc.0\", GitCommit:\"3321f00ed...\n<\/code><\/pre>\n\n<p>I checked the available upgrade plan from <code>kubeadm<\/code> with the flag <code>--allow-experimental-upgrades<\/code>; if you do this process once v1.20 is officially released, you won\u2019t need that flag.<\/p>\n\n<pre><code class=\"language-terminal\">.\/kubernetes\/server\/bin\/kubeadm upgrade plan --allow-experimental-upgrades\n[upgrade\/config] Making sure the configuration is correct:\n[upgrade\/config] Reading configuration from the cluster...\n[upgrade\/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[preflight] Running pre-flight checks.\n[upgrade] Running cluster health checks\n[upgrade] Fetching available versions to upgrade to\n[upgrade\/versions] Cluster version: v1.19.4\n[upgrade\/versions] kubeadm version: v1.20.0-rc.0\n[upgrade\/versions] Latest stable version: v1.19.4\n[upgrade\/versions] Latest stable version: v1.19.4\n[upgrade\/versions] Latest version in the v1.19 series: v1.19.4\n[upgrade\/versions] Latest version in the v1.19 series: v1.19.4\nI1203 21:50:13.860850   59152 version.go:251] remote version is much newer: ....\nW1203 21:50:14.100325   59152 version.go:101] could not fetch a Kubernetes ....\nW1203 21:50:14.100362   59152 version.go:102] falling back to the local client version: v1.20.0-rc.0\n[upgrade\/versions] Latest experimental version: v1.20.0-rc.0\n[upgrade\/versions] Latest experimental version: v1.20.0-rc.0\n[upgrade\/versions] Latest : v1.19.5-rc.0\n[upgrade\/versions] Latest : v1.19.5-rc.0\n\nComponents that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':\nCOMPONENT   CURRENT       AVAILABLE\nkubelet     2 x v1.19.4   v1.20.0-rc.0\n\nUpgrade to the latest experimental version:\n\nCOMPONENT                 CURRENT    AVAILABLE\nkube-apiserver            v1.19.4    v1.20.0-rc.0\nkube-controller-manager   v1.19.4    v1.20.0-rc.0\nkube-scheduler            v1.19.4    v1.20.0-rc.0\nkube-proxy                v1.19.4    v1.20.0-rc.0\nCoreDNS                   1.7.0      1.7.0\netcd                      3.4.13-0   3.4.13-0\n\nYou can now apply the upgrade by executing the following command:\n\n        kubeadm upgrade apply v1.20.0-rc.0 --allow-release-candidate-upgrades\n\n_____________________________________________________________________\n\n\nThe table below shows the current state of component configs as understood by this version of kubeadm.\nConfigs that have a \"yes\" mark in the \"MANUAL UPGRADE REQUIRED\" column require manual config upgrade or\nresetting to kubeadm defaults before a successful upgrade can be performed. 
The version to manually\nupgrade to is denoted in the \"PREFERRED VERSION\" column.\n\nAPI GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED\nkubeproxy.config.k8s.io   v1alpha1          v1alpha1            no\nkubelet.config.k8s.io     v1beta1           v1beta1             no\n_____________________________________________________________________\n<\/code><\/pre>\n\n<p>As you can see from the output there is <code>v1.20.0-rc.0<\/code> available so it is time\nto apply that plan and rollout the upgrade.<\/p>\n\n<pre><code class=\"language-terminal\"># .\/kubernetes\/server\/bin\/kubeadm upgrade apply v1.20.0-rc.0 --allow-release-candidate-upgrades\n[upgrade\/config] Making sure the configuration is correct:\n[upgrade\/config] Reading configuration from the cluster...\n[upgrade\/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[preflight] Running pre-flight checks.\n[upgrade] Running cluster health checks\n[upgrade\/version] You have chosen to change the cluster version to \"v1.20.0-rc.0\"\n[upgrade\/versions] Cluster version: v1.19.4\n[upgrade\/versions] kubeadm version: v1.20.0-rc.0\n[upgrade\/confirm] Are you sure you want to proceed with the upgrade? [y\/N]: y\n...\n<\/code><\/pre>\n\n<p>Time to check kubelet with journalctl<\/p>\n\n<pre><code class=\"language-terminal\">#journalctl -xe -u kubelet\n\nI1203 21:58:49.648539  108749 server.go:416] Version: v1.20.0-rc.0\nI1203 21:58:49.649168  108749 server.go:837] Client rotation is on, will bootstrap in background\nI1203 21:58:49.651975  108749 certificate_store.go:130] Loading cert\/key pair from \"\/var\/lib\/kubelet\/pki\/kubelet-client-current.pem\".\nI1203 21:58:49.653428  108749 dynamic_cafile_content.go:167] Starting client-ca-bundle::\/etc\/kubernetes\/pki\/ca.crt\nI1203 21:58:49.764200  108749 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to \/\nI1203 21:58:49.764717  108749 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []\nI1203 21:58:49.764743  108749 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {Ru...\nI1203 21:58:49.764862  108749 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope\nI1203 21:58:49.764874  108749 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope\nI1203 21:58:49.764881  108749 container_manager_linux.go:315] Creating device plugin manager: true\nW1203 21:58:49.765010  108749 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation\n<\/code><\/pre>\n\n<p>We finally got it <code>Using dockershim is deprecated, please consider using a\nfull-fledged CRI implementation<\/code>. 
But everything still works.<\/p>\n\n<pre><code class=\"language-terminal\"># kubectl get node\nNAME            STATUS   ROLES                  AGE   VERSION\ngianarb-k8s     Ready    control-plane,master   17m   v1.20.0-rc.0\ngianarb-k8s01   Ready    &lt;none&gt;                 16m   v1.19.4\n<\/code><\/pre>\n\n<p>There is something cool as well: now the role is <code>control-plane<\/code> and <code>master<\/code>. I am sure at some point we will deprecate <code>master<\/code> as well, and this is just a transition phase, same as it happened for the dockershim.<\/p>\n\n<p>We don\u2019t want that warning, so it is now time to change the default configuration of <code>containerd<\/code>, because, as you know, it has been sitting behind Docker almost forever.<\/p>\n\n<pre><code class=\"language-terminal\"># cat \/etc\/containerd\/config.toml\n...\n# comment this line because we need the cri plugin enabled\n# disabled_plugins = [\"cri\"]\n...\n<\/code><\/pre>\n\n<p>We need to enable the plugin <code>cri<\/code>; by default, it is disabled when installing docker-ce via the Docker repository because <code>dockerd<\/code> does not need a CRI, but Kubernetes obviously does. Now you can restart the service with <code>systemctl<\/code>, and we have to tell the kubelet that it now has to use <code>containerd<\/code>.<\/p>\n\n<pre><code class=\"language-terminal\"># mkdir -p \/etc\/systemd\/system\/kubelet.service.d\/\ncat &lt;&lt; EOF | sudo tee \/etc\/systemd\/system\/kubelet.service.d\/0-containerd.conf\n[Service]\nEnvironment=\"KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:\/\/\/run\/containerd\/containerd.sock\"\nEOF\n<\/code><\/pre>
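<p>For reference, applying the change is nothing more than the usual systemd dance (as root):<\/p>\n\n<pre><code class=\"language-sh\"># Reload unit files and restart both the runtime and the kubelet\nsystemctl daemon-reload\nsystemctl restart containerd\nsystemctl restart kubelet\n<\/code><\/pre>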
<p>After a systemd daemon reload and a kubelet.service restart, we are back again.<\/p>\n\n<pre><code># kubectl get node\nNAME            STATUS   ROLES                  AGE   VERSION\ngianarb-k8s     Ready    control-plane,master   23m   v1.20.0-rc.0\ngianarb-k8s01   Ready    &lt;none&gt;                 23m   v1.19.4\n<\/code><\/pre>\n\n<p>Exercise for you: have a look at the kubelet logs; the warning is not there anymore, and you are good to go.<\/p>\n"},{"title":"Lessons learned working as site reliability engineer","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/lessons-learned-working-as-sre"}},"description":"I want to share a few lessons I learned working three years as a site reliability engineer. I kept the focus on the ones I think are reusable and that made me a better developer, because reliability, just like everything, is everybody\u2019s business","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-11-26T10:08:27+00:00","published":"2020-11-26T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/lessons-learned-working-as-sre","content":"<p>I worked for three years at InfluxData as a Site Reliability Engineer. When onboarding a new role in this jungle called \u201ccareer\u201d in information technology, you should be ready to learn what that role means.<\/p>\n\n<p>Along the way, I developed new skills and mastered a few that I already had but were not widely used elsewhere.<\/p>\n\n<p>My title is not Site Reliability Engineer anymore, but an IT career is like a roulette, and everything you learn will come back at some point. So I want to share the skills I think are essential when working as a Site Reliability Engineer and that I feel are useful to keep in your toolchain even when your title changes.<\/p>\n\n<h3 id=\"ability-to-develop-a-friendly-environment-for-yourself\">Ability to develop a friendly environment for yourself<\/h3>\n\n<p>One of my goals as a Site Reliability Engineer is to quickly support developers having trouble with their code at scale.<\/p>\n\n<p>Another one is to figure out criticalities when it comes to on-call.<\/p>\n\n<p>I am far from my local environment in both cases, usually interacting with an environment that is much more complicated. Having something you can call familiar helps. It can be whatever:<\/p>\n\n<ul>\n  <li>A few bash scripts that wrap other commands with a UX that is hard to remember<\/li>\n  <li>A CLI tool you wrote with your team<\/li>\n  <li>A directory where you can quickly go and write notes about what is going on for future use<\/li>\n  <li>A set of bullet points or a runbook you know is rock solid and can drive you where you want to go.<\/li>\n<\/ul>\n\n<p>Those are just a few tricks I use. If you have yours and you want to share them as a comment, please do it!<\/p>\n\n<p>This is an essential skill that everyone has to master, but as an SRE, when you have to act quickly, I really learned it, and now I do my best to develop my workflows and a working environment I like. These days I am giving Nix a try and, in particular, nix-shell, because it helps me customize my environment without the overhead of a Docker container.<\/p>\n\n<p>This may sound time-consuming. Many projects have a README describing the tools and requirements to contribute or build the project. Why do I need my own way? Well, I am not saying you should start from zero, but when I glue it together with the flavor I like, I code better and am happier. So for me, it is a big YES!<\/p>\n\n<h3 id=\"troubleshoot-like-a-ninja\">Troubleshoot like a ninja<\/h3>\n\n<p>Starting from the same purpose as before, a Site Reliability Engineer looks at the code when it runs in production, and production is a scary and dangerous place. As a developer, if you are lucky and smart, you try to focus on one application at a time; yes, it probably has many dependencies, but still, code moves one line at a time.<\/p>\n\n<p>In production, with concurrency and thousands of requests happening almost simultaneously, things get pretty messy. Having the ability and the right tools to slice and dice from different points of view and perspectives, from an entire region to a specific application, requires operational experience and training.<\/p>\n\n<p>A good exercise is to have the desire to troubleshoot everything. Does a teammate have a question about a system in production? Go and help them. Visit logs, traces, and dashboards even when everything looks quiet, if you are so lucky to have a definition of it.<\/p>\n\n<p>Another thing I do, but more in general, is to follow the best. There is a lot about the topic in the form of books, talks, and similar. Read them but, even more importantly, follow those who master these topics every day:\n<a href=\"https:\/\/twitter.com\/rakyll\">@rakyll<\/a>,\n<a href=\"https:\/\/twitter.com\/brendangregg\">@brendangregg<\/a>,\n<a href=\"https:\/\/twitter.com\/relix42\">@relix42<\/a>,\n<a href=\"https:\/\/twitter.com\/lizthegrey\">@lizthegrey<\/a>,\n<a href=\"https:\/\/twitter.com\/lauralifts\">@lauralifts<\/a>.
Please do not follow them on Twitter only; look at their GitHub as well. Sometimes a small project that works reliably and well is gold.<\/p>\n\n<h3 id=\"think-about-code-debuggability-in-production\">Think about code debuggability in production<\/h3>\n\n<p>Like everything you read so far, it is always essential because, as I said, Site Reliability Engineer is just a role with a subset of responsibilities and objectives; we are not silos, and everything matters: a feature has to be usable, good looking, functional. Code review for me became a lot more about: \u201cis this code understandable in production?\u201d, \u201cwhat do I want it to tell me when running at scale?\u201d, \u201cwhat does this trace look like?\u201d, \u201cis this log useful, and how does it impact the overall context?\u201d.<\/p>\n\n<p>Those questions strongly shaped my mind when working as an SRE, but they made me a better developer. I still try to answer them when coding or when doing code reviews.<\/p>\n\n<h3 id=\"conclusion\">Conclusion<\/h3>\n\n<p>Titles are just titles; reading them will help to know the set of skills you leveraged most, but that\u2019s it, and it is not always true. You are not married to your title, and if you are curious about various aspects of our work, you will change many of them. The right balance of all those skills will make you unique.<\/p>\n"},{"title":"Vanity URL for Go mod with zero infrastructure","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-mod-vanity-url"}},"description":"A lot of Go modules today are hosted on GitHub. But you can set up a vanity URL using a custom domain. This is good to decouple your library from GitHub. The best part is that it does not require any infrastructure, if you don't want it to.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-11-13T10:08:27+00:00","published":"2020-11-13T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-mod-vanity-url","content":"<p>This post is about how I renamed a Go module from github.com\/something\/somethingelse to go.gianarb.it\/somethingelse. It requires zero infrastructure, just a static site that can run on GitHub.<\/p>\n\n<p>I used one of the many projects I have that nobody cares about: <a href=\"https:\/\/github.com\/gianarb\/go-irc\">gianarb\/go-irc<\/a>.<\/p>\n\n<h3 id=\"why\">Why<\/h3>\n\n<p>You know, it is one of those ideas you have in your mind for ages, but who cares? At least for me, since I don\u2019t have any cool open source project under my name.<\/p>\n\n<p>I am not one of those people who suffers when realizing that something does not scale. But if you end up having a project that gets traction and is tied to github.com because you didn\u2019t think about another way to go, you are stuck. And even if GitHub today is cool, it won\u2019t stay cool forever.<\/p>\n\n<p>Filippo Valsorda <a href=\"https:\/\/twitter.com\/FiloSottile\">@FiloSottile<\/a> today <a href=\"https:\/\/twitter.com\/FiloSottile\/status\/1327240411266641920\">tweeted<\/a> about this topic, and I looked at how he set up filippo.io\/age to solve this little dilemma.<\/p>\n\n<h3 id=\"goals\">Goals<\/h3>\n\n<p>This is not about how to escape from GitHub; it is about setting up a \u201cvanity URL\u201d that won\u2019t lock you or your project to GitHub. It does not require any infrastructure, just a domain that you can point to GitHub Pages.<\/p>\n\n<h3 id=\"prerequisite\">Prerequisite<\/h3>\n\n<ol>\n  <li>Create a DNS record that points as CNAME to <code>&lt;github-handle&gt;.github.io<\/code>. I used go.gianarb.it<\/li>\n  <li>Create a repository; it will be the home for your static site. Mine is <a href=\"https:\/\/github.com\/gianarb\/go-libraries\">gianarb\/go-libraries<\/a><\/li>\n  <li>Set the repository up to be a <a href=\"https:\/\/pages.github.com\/\">GitHub Page<\/a> and enable HTTPS. You can enable it via the repository Settings; we will push HTML files to it directly, so I used the master branch as the GitHub Page\u2019s source.<\/li>\n<\/ol>\n\n<h3 id=\"add-your-first-library\">Add your first library.<\/h3>\n\n<p>If your library is already using go mod, you have to change the module name to the new one. In my case, it was from github.com\/gianarb\/go-irc to go.gianarb.it\/irc. I just searched and replaced it with my editor in the whole project, though the Go tooling can help, as sketched below. Renaming a module is a BC (backward compatibility) break; I am not sure how to avoid or mitigate that; if you know, let me know!<\/p>
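<p>For reference, a sketch of the same rename done from the command line with the Go tooling plus GNU sed, using my module names:<\/p>\n\n<pre><code class=\"language-sh\"># Rewrite the module path in go.mod\ngo mod edit -module go.gianarb.it\/irc\n\n# Update every import path in the source tree\ngrep -rl 'github.com\/gianarb\/go-irc' . \\\n  | xargs sed -i 's|github.com\/gianarb\/go-irc|go.gianarb.it\/irc|g'\n<\/code><\/pre>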
<p>You can push a new file to your static site repository; I called mine irc:<\/p>\n\n<pre><code class=\"language-html\">&lt;html&gt;\n    &lt;head&gt;\n        &lt;meta name=\"go-import\" content=\"go.gianarb.it\/irc git https:\/\/github.com\/gianarb\/go-irc\"&gt;\n        &lt;meta http-equiv=\"refresh\" content=\"0;URL='https:\/\/github.com\/gianarb\/go-irc'\"&gt;\n    &lt;\/head&gt;\n    &lt;body&gt;\n        Redirecting you to the &lt;a href=\"https:\/\/github.com\/gianarb\/go-irc\"&gt;project page&lt;\/a&gt;...\n    &lt;\/body&gt;\n&lt;\/html&gt;\n<\/code><\/pre>\n\n<p>Replace your URLs accordingly; as soon as you push this file and GitHub publishes it to your page, you will be able to import: <code>go.gianarb.it\/irc<\/code>.<\/p>\n\n<h3 id=\"conclusion\">Conclusion<\/h3>\n\n<p>This method works as a safeguard if you decide to move your code out of GitHub. The static site can be deployed to Netlify or S3, or served by Nginx. It does not need to stay on GitHub.<\/p>\n\n<p>The same goes for your code: if you decide to move from GitHub to GitLab, you can do it transparently.<\/p>\n"},{"title":"What is Tinkerbell?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/what-is-tinkerbell"}},"description":"I want to share with you what Tinkerbell is. An open source project I help maintain, developed by Equinix Metal. Tinkerbell helps you manage your hardware and datacenter programmatically via an API.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-11-06T10:08:27+00:00","published":"2020-11-06T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/what-is-tinkerbell","content":"<p>First things first: Tinkerbell is an open-source project mainly written in Go that comes from PacketHost, now Equinix Metal. Equinix Metal is a cloud provider that serves bare metal servers. No virtual machines, no high-level services, I said bare metal! Imagine a colocation that you can rent per hour.<\/p>\n\n<p>Tinkerbell is the software Equinix Metal dreamed about as an internal provisioner for datacenter automation.
They took their internal provisioner, removed any PacketHost references and business-specific code, and pushed it to GitHub for the community to enjoy the same technologies.<\/p>\n\n<p>The project is a number of microservices that provide various functionality to configure hardware and provision both the operating system and additional software through its workflow engine.<\/p>\n\n<h2 id=\"what-the-project-provides\">What the project provides<\/h2>\n\n<ul>\n  <li>The first microservice is called <a href=\"https:\/\/github.com\/tinkerbell\/boots\">boots<\/a>. This Tinkerbell service provides a DHCP and a TFTP server to tell a piece of hardware (a server) what to do when netbooting; it provides this information through the iPXE project.<\/li>\n  <li>Tinkerbell serves a CLI that you can use to interact with a control plane that serves HTTP and gRPC APIs. The service which does all those things is in the <a href=\"https:\/\/github.com\/tinkerbell\/tink\">tink<\/a> repository, and it provides three binaries: tink-server (the control plane), tink-cli (the command line interface), and tink-worker.<\/li>\n  <li>Tinkerbell provides an operating system that runs in memory; it is based on Alpine, and it is called <a href=\"https:\/\/github.com\/tinkerbell\/osie\">Osie<\/a>. This in-memory operating system runs directly on the hardware you want to provision, and it runs tink-worker.<\/li>\n  <li>Once the in-memory Osie has started, it runs tink-worker, which in turn communicates with the control plane (tink-server), asking for any work that has to be done on that server. This unit of work is called a workflow.<\/li>\n  <li><a href=\"https:\/\/github.com\/tinkebell\/hegel\">Hegel<\/a> is a metadata server, comparable to the AWS EC2 metadata or the Equinix Metal one; the majority of cloud vendors provide this type of service, so you should have it as well! It is crucial when running scripts on a particular server because you can get concrete variables from it, such as the operating system it runs, its IPs, location, and so on.<\/li>\n<\/ul>\n\n<h2 id=\"the-end-goal\">The end goal<\/h2>\n\n<p>The Tinkerbell end goal is to bring a piece of hardware to life.<\/p>\n\n<h2 id=\"workflow-and-template\">Workflow and template<\/h2>\n\n<p>A template is a specification file that describes what we want to execute. A workflow starts from a template, and it has a particular target. Templates are reusable; workflows are a single execution and can\u2019t be reused. The single unit of work in a template is called an action. You can put as many actions as you want in a template, and each action runs in its own Docker container.<\/p>\n\n<h2 id=\"action\">Action<\/h2>\n\n<p>As mentioned above, actions are Docker containers, and that means you can build each action in isolation in the language you want. It can use Python, Bash, Go, Rust, or whatever you can run in a container.<\/p>\n\n<p>You may think that Docker sounds like overhead; however, it was a natural decision based on how we could use the container concept in operations. The concept of build, pull, and push has become commonplace within development environments, and we think it can work well in operational environments too. Building containers to hold operational tasks in isolation, enhanced with testing and simplified execution, is a clear benefit. It is an effective way to move code around in a reusable way without having to reinvent the distribution model.
Some of the actions you will see very often in\na Tinkerbell workflow may be:<\/p>\n\n<ul>\n  <li>Disk related actions: mounting a disk, wiping it, or setting up a partition\ntable to boot an operating system<\/li>\n  <li>Downloading an Operating System like Ubuntu, Debian, NixOS, CentOS<\/li>\n  <li>Copy an operating system in a partition<\/li>\n<\/ul>\n\n<p>But you will be able to write actions related to your business:<\/p>\n\n<ul>\n  <li>notify a particular API when provisioning fails<\/li>\n  <li>Attempt a recovery<\/li>\n  <li>Observe and mark the status of your provisioning<\/li>\n  <li>Who knows! There are no limitations here.<\/li>\n<\/ul>\n\n<h2 id=\"how-a-template-and-a-workflow-looks-like\">How a template and a workflow looks like<\/h2>\n\n<p>Unfortunately, there are not many examples, but as maintainers, the next three\nmonths will be all about public workflows and reusable actions.<\/p>\n\n<p>Kinvolk wrote a blog post about <a href=\"https:\/\/kinvolk.io\/blog\/2020\/10\/provisioning-flatcar-container-linux-with-tinkerbell\/\">how to provision\nFlatcar<\/a>\non bare metal with Tinkerbell.<\/p>\n\n<p>The Tinkerbell documentation <a href=\"https:\/\/tinkerbell.org\/examples\/hello-world\/\">has an\nexample<\/a> of a \u201chello world.\u201d\ntemplate.<\/p>\n\n<p><a href=\"https:\/\/www.fransvanberckel.nl\/\">Frans van Berckel<\/a> wrote a workflow for\n<a href=\"https:\/\/github.com\/fransvanberckel\/debian-workflow\">CentOS<\/a> and\n<a href=\"https:\/\/github.com\/fransvanberckel\/debian-workflow\">Debian<\/a>.<\/p>\n\n<p>One of my next projects will be to write a workflow that won\u2019t install an\noperating system. It will start something like k3s or k8s directly on Osie for\nmy ephemeral homelab! I am not sure it has a sense or will ever work, but I\nthink it is an excellent example: \u201cit is not all about having a persisted and\ntraditional operating system those days.\u201d<\/p>\n\n<h2 id=\"how-to-get-started\">How to get started<\/h2>\n\n<p>We put a fair amount of effort into a\n<a href=\"https:\/\/github.com\/tinkerbell\/sandbox\">sandbox<\/a> project and setup guide. You\ncan run it <a href=\"https:\/\/tinkerbell.org\/docs\/setup\/local-with-vagrant\/\">locally with Vagrant\n<\/a>or on <a href=\"https:\/\/tinkerbell.org\/docs\/setup\/terraform\/\">Equinix\nMetal<\/a>.<\/p>\n\n<p>Aaron Ramblings wrote a blog post, <a href=\"https:\/\/geekgonecrazy.com\/2020\/09\/07\/tinkerbell-or-ipxe-boot-on-ovh\/\">\u201cTinkerbell or iPXE boot on\nOVH\u201d<\/a>\nusing the sandbox to run Tinkerbell on OVH! I am still surprised when I read it\nbecause he experimented with the sandbox in a very early stage of the project,\nand in the same way, he was able to run sandbox on OVH; it can run almost\nwherever else (at least for the control plane part).<\/p>\n\n<h2 id=\"next-steps\">Next steps<\/h2>\n\n<p>With the help of our community we recently improved our continuous integration\npipeline to build all the projects for various architecture: <code>linux\/386<\/code>,\n<code>linux\/amd64<\/code>,<code> linux\/arm\/v6<\/code>, <code>linux\/arm\/v7<\/code>, <code>linux\/arm64<\/code> levering Docker\nbuildx, Qemu, and GitHub Actions. My goal was to be able to run the provisioner\nin a Raspberry Pi. Because as I wrote before, my homelab tends to go away, get\nmoved, disconnected, and I think I can keep running reliably only a Raspberry PI\nas it is today. So I want to run the control plane on a RaspberryPI. 
I presume there are smarter things to do with multi-arch, but let\u2019s be honest; we all have a Raspberry Pi leftover somewhere.<\/p>\n\n<p>We use the sandbox project as a way to release Tinkerbell as a whole. We pin all the various dependencies such as Boots, Hegel, Tink-Server, the CLI, and Osie, and when they all pass the integration tests, we tag a new release. The generated artifacts are containers for now; we want to get to binaries as well, so you can run Tinkerbell as you like, even without containers. At some point, we will tag and manage each component independently, but for now, it is a lot of effort.<\/p>\n\n<p>Releasing new workflows is something we are working on already. So stay tuned!<\/p>\n\n<p>Another project is available in the Tinkerbell GitHub organization that I didn\u2019t mention because it is not hooked into the stack yet. After all, we are working on its version two. <a href=\"https:\/\/github.com\/tinkerbell\/pbnj\">PBNJ<\/a> provides a standard API to interact with various BMCs and IPMIs (Intelligent Platform Management Interfaces). Having this kind of ability in a datacenter is essential because we want to pilot things like reboot, restart, and switch off for each server programmatically, even as part of a workflow.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>There already exists huge demand for bare-metal usage, which is only going to grow with things like 5G, dedicated GPUs\/FPGAs, HPC, constant and predictable performance, and security boundaries. A recent report by <a href=\"https:\/\/www.mordorintelligence.com\/industry-reports\/bare-metal-cloud-market\">Mordor Intelligence<\/a> states that \u201cThe bare metal cloud market was valued at USD 1.75 billion in 2019 and expected to reach USD 10.56 billion by 2025\u201d, which clearly shows a growing demand for a modern platform to provision bare-metal infrastructure.<\/p>\n\n<p>Datacenter management is hard, and that\u2019s why the public cloud got so much traction. For companies and products, managing hardware is unnecessary and a distraction, but when it becomes a requirement, or when you think it is strategic to manage your own hardware, Tinkerbell and its community come to your rescue.<\/p>\n\n<p class=\"alert alert-info\">A big thank you goes to <a href=\"https:\/\/twitter.com\/thebsdbox\">Dan<\/a> for his review and support writing this article!<\/p>\n\n<h2 id=\"more-i-want-more\">More, I want more!<\/h2>\n\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=Y04eCSKaQCc\">Dan and Jeremy had a conversation<\/a> about netbooting and bare metal provisioning. It is available on YouTube; you should really have a look at it!<\/p>\n\n<p><a href=\"https:\/\/www.youtube.com\/watch?v=QxpKnMGywTU\">Alex Ellis and Mark Coleman recorded a video<\/a> setting up and using Tinkerbell. The video is a bit out of date, and they did not use the new sandbox project because it was not available at that time. But it is still good and valuable!<\/p>\n"},{"title":"Reactive planning in Golang. Reach a desired number by adding and subtracting random numbers","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/reactive-plan-golang-example"}},"description":"An example about how to write reactive planning in Go. 
Code and step-by-step solution for an exercise I developed to learn planners","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-10-26T10:08:27+00:00","published":"2020-10-26T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/reactive-plan-golang-example","content":"<p>Ciao! A few months ago, probably a year, I wrote a small library called <a href=\"https:\/\/github.com\/gianarb\/planner\">planner<\/a>. It comes from my experience using reactive planning and Kubernetes. I am really in love with this way of writing code because it sounds very reliable to me.<\/p>\n\n<p>Over the last couple of days, I decided to write documentation for it! So now it is presentable; I streamed that on <a href=\"https:\/\/twitch.tv\/gianarb\">Twitch<\/a> if you like to watch people coding!<\/p>\n\n<p>As part of the library\u2019s readme, I wrote a small program, and I left a couple of exercises for the reader. With this article, I want to solve them.<\/p>\n\n<p>You can follow this article and try it yourself, starting from <a href=\"https:\/\/play.golang.com\/p\/0LuIoMtp10f\">play.golang.com<\/a>.<\/p>\n\n<pre><code class=\"language-golang\">package main\n\nimport (\n\t\"context\"\n\t\"time\"\n\n\t\"github.com\/gianarb\/planner\"\n\t\"go.uber.org\/zap\"\n)\n\nfunc main() {\n\tctx, done := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer done()\n\n\tcountPlan := &amp;CountPlan{\n\t\tTarget: 20,\n\t}\n\tscheduler := planner.NewScheduler()\n\tscheduler.WithLogger(initLogger())\n\n\tscheduler.Execute(ctx, countPlan)\n}\n\ntype CountPlan struct {\n\tTarget  int\n\tcurrent int\n}\n\nfunc (p *CountPlan) Create(ctx context.Context) ([]planner.Procedure, error) {\n\tif p.current &lt; p.Target {\n\t\treturn []planner.Procedure{&amp;AddNumber{plan: p}}, nil\n\t}\n\treturn nil, nil\n}\n\nfunc (p *CountPlan) Name() string {\n\treturn \"count_plan\"\n}\n\ntype AddNumber struct {\n\tplan *CountPlan\n}\n\nfunc (a *AddNumber) Name() string {\n\treturn \"add_number\"\n}\n\nfunc (a *AddNumber) Do(ctx context.Context) ([]planner.Procedure, error) {\n\ta.plan.current = a.plan.current + 1\n\treturn nil, nil\n}\n\nfunc initLogger() *zap.Logger {\n\tcfg := zap.NewProductionConfig()\n\tcfg.Encoding = \"console\"\n\tl, _ := cfg.Build()\n\treturn l\n}\n<\/code><\/pre>\n\n<p>This program tries to reach the <code>Target<\/code> (20 in this case) by adding numbers to the current state. If you execute this program as it is, you will get the following logs:<\/p>\n\n<pre><code class=\"language-console\">1.257894e+09\tinfo\tplanner@v0.0.1\/scheduer.go:41\tStarted execution plan count_plan\t{\"execution_id\": \"98d28eed-9b3b-4ad8-bfbd-1b5338d1a649\"}\n1.257894e+09\tinfo\tplanner@v0.0.1\/scheduer.go:59\tPlan executed without errors.\t{\"execution_id\": \"98d28eed-9b3b-4ad8-bfbd-1b5338d1a649\", \"execution_time\": \"0s\", \"step_executed\": 20}\n<\/code><\/pre>\n\n<p>As you can see, the scheduler executed the plan <code>count_plan<\/code> successfully, and it took 20 steps to get there (<code>step_executed: 20<\/code>).<\/p>\n\n<p>Reasonable because, as you can see, the <code>CountPlan.Create<\/code> function returns an <code>AddNumber<\/code> procedure, and that procedure only adds 1 to the current state. It is just a counter; let\u2019s make it a bit more fun. I want to add or subtract random numbers until the target is reached. The program adds when the current state is below the target; when above, it subtracts. If it\u2019s equal, we are done. 
This is a simple way to simulate something that has to adapt; too simple to sound cool, but still something understandable.<\/p>\n\n<h3 id=\"change-the-addnumber-to-use-a-randomly-generated-number\">Change AddNumber to use a randomly generated number.<\/h3>\n\n<p>We need to change AddNumber so that it adds not 1 but a random number. Let\u2019s do it:<\/p>\n\n<pre><code class=\"language-go\">var random *rand.Rand\n\n\/\/ initRandom seeds a dedicated generator; call it from main() before\n\/\/ executing the plan (it needs \"math\/rand\" in the imports).\nfunc initRandom() {\n    s1 := rand.NewSource(time.Now().UnixNano())\n    random = rand.New(s1)\n}\n<\/code><\/pre>\n\n<p>At this point, we can use <code>random<\/code> as part of the <code>AddNumber.Do<\/code> function.<\/p>\n\n<pre><code class=\"language-go\">type AddNumber struct {\n\tplan *CountPlan\n}\n\nfunc (a *AddNumber) Name() string {\n\treturn \"add_number\"\n}\n\nfunc (a *AddNumber) Do(ctx context.Context) ([]planner.Procedure, error) {\n\ta.plan.current = a.plan.current + random.Intn(10)\n\treturn nil, nil\n}\n<\/code><\/pre>\n\n<p>For simplicity, I am taking a random number between 0 and 9. What happens now? The problem is that we can go above the target, so we have to make our <code>CountPlan.Create<\/code> function and our logic a bit more complicated.<\/p>\n\n<h2 id=\"evolve-the-create-function-to-subtract-numbers-from-the-current-state\">Evolve the Create function to subtract numbers from the current state<\/h2>\n\n<pre><code class=\"language-go\">func (p *CountPlan) Create(ctx context.Context) ([]planner.Procedure, error) {\n\tif p.current &lt; p.Target {\n\t\treturn []planner.Procedure{&amp;AddNumber{plan: p}}, nil\n\t} else if p.current &gt; p.Target {\n\t\treturn []planner.Procedure{&amp;SubtractNumber{plan: p}}, nil\n\t}\n\treturn nil, nil\n}\n<\/code><\/pre>\n\n<p>When we go above the target, the Plan subtracts a random number, and it keeps going until we get to it. <code>SubtractNumber<\/code> does the opposite of what <code>AddNumber<\/code> does; it subtracts a random number between 0 and 9.<\/p>\n\n<pre><code class=\"language-go\">type SubtractNumber struct {\n\tplan *CountPlan\n}\n\nfunc (a *SubtractNumber) Name() string {\n\treturn \"subtract_number\"\n}\n\nfunc (a *SubtractNumber) Do(ctx context.Context) ([]planner.Procedure, error) {\n\ta.plan.current = a.plan.current - random.Intn(10)\n\treturn nil, nil\n}\n<\/code><\/pre>\n\n<p>You can run the result <a href=\"https:\/\/play.golang.com\/p\/JDuizzUI86M\">here<\/a>, and you will see that, based on the random numbers it adds or subtracts, the number of executed steps changes.<\/p>\n\n<p>NOTE: the Golang playground always starts from the same time; in my example, I use time as the seed; for this reason, to see a variation in the number of steps, you will have to run the code locally.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>This is probably a too straightforward example, but imagine that your Target is not fixed and varies based on external factors: your house\u2019s temperature, where this program is a thermostat that has to keep your room at the desired temperature; or the number of instances running in your cloud provider that you have to keep balanced. This last use case is the exact problem I solved writing <a href=\"https:\/\/github.com\/gianarb\/keepit\">keepit<\/a>, a replica set for <a href=\"https:\/\/metal.equinix.com\">Equinix Metal<\/a> servers. I used planner, so check it out.<\/p>\n\n<p>One thing I didn\u2019t highlight in this example: this pattern gives you an excellent way to measure how reliable your program is. 
<p>I extracted a highlight from the <a href=\"https:\/\/www.twitch.tv\/videos\/780401570\">Twitch stream<\/a> rambling about this.<\/p>\n\n<p>Have a nice week!<\/p>\n"},{"title":"Your release workflow is code, it is just about time","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/release-workflow-as-code"}},"description":"I think code in some way has to win against specification languages or DSLs, or even against languages that are not easy to move around, like bash. The Kubernetes sig-release is migrating a bunch of scripts from bash to Go and I think it is the right way to go. You will do the same, it is just about time.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-10-21T10:08:27+00:00","published":"2020-10-21T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/release-workflow-as-code","content":"<p>20th October 2020 is the day I released Kubernetes for the first time. To be precise, I piloted, with the help of sig-release, the release of Kubernetes <code>v1.20.0-alpha.3<\/code>. One of the reasons I am happy to work with sig-release is to learn and see how such a significant project gets released reliably and continuously by a group of people coming from different backgrounds, jobs, and locations.<\/p>\n\n<p>The first lesson you notice, as soon as you join the SIG meetings more frequently or as soon as you start contributing, is the general effort to convert what used to be bash scripts to Go.<\/p>\n\n<p>Now, this is not a fight between the languages themselves, but I think the story is reasonable. You start small, and when it comes to releasing code, a lot happens in somebody\u2019s terminal. That\u2019s why many of the release workflows I saw in my life are a mix of Makefiles and bash scripts.<\/p>\n\n<p>I don\u2019t think it scales because it is hard to get error handling, retry logic, and testing made right in bash. Maybe I am just not good enough with bash, and I know there are testing libraries for it like <a href=\"https:\/\/github.com\/sstephenson\/bats\">bats<\/a>, for example.<\/p>\n\n<p>Anyway, I have to admit, I feel good enough with bash, but I code way better in Go, PHP, and probably even JavaScript. Also, I am sure this is a feeling I share with many people, and more in general, the Kubernetes development community is very fluent with Golang.<\/p>\n\n<p>Anyway, let\u2019s treat the code that empowers the release lifecycle as application code, just as the Sig Release is doing with Kubernetes. Documentation, testing, user experience, and so on. Develop useful libraries that can be encapsulated in command-line tools, APIs, or bots.<\/p>\n\n<p>There is a bash script called <a href=\"https:\/\/github.com\/kubernetes\/release\/blob\/master\/testgridshot\">testgridshot<\/a> that takes snapshots from <a href=\"http:\/\/testgrid.k8s.io\/\">Testgrid<\/a>, uploads them to Google Cloud, and outputs a markdown that can be copy-pasted as a comment in <a href=\"https:\/\/github.com\/kubernetes\/sig-release\/issues\/1296\">the issue we use to track every release<\/a>. We run it to take a snapshot of the status of the various testing pipelines at the time of a release.<\/p>\n\n<p>testgridshot is the only bash script I had to interact with so far, and it didn\u2019t work because of some environmental issues with my laptop. Coincidence? It can be solved by running it as a container and shipping a statically compiled binary with all the needed dependencies.<\/p>\n\n
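<p>This fragility is where Go helps. To show what I mean about error handling and retries: a helper like the one below takes a few lines in Go and is trivial to unit test, while it is painful to get right in bash. It is my own sketch, not code from kubernetes\/release:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Retry runs fn up to attempts times, waiting backoff between tries.\n\/\/ It stops early when the context gets cancelled.\nfunc Retry(ctx context.Context, attempts int, backoff time.Duration, fn func() error) error {\n\tvar err error\n\tfor i := 0; i &lt; attempts; i++ {\n\t\tif err = fn(); err == nil {\n\t\t\treturn nil\n\t\t}\n\t\tselect {\n\t\tcase &lt;-ctx.Done():\n\t\t\treturn ctx.Err()\n\t\tcase &lt;-time.After(backoff):\n\t\t}\n\t}\n\treturn fmt.Errorf(\"all %d attempts failed, last error: %w\", attempts, err)\n}\n<\/code><\/pre>\n\n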
<p><a href=\"https:\/\/twitter.com\/comedordexis\">Carlos<\/a> is currently working on rewriting testgridshot in Golang; it will be usable as a command-line interface, and I think it will be even better to encapsulate it as a Prow capability.<\/p>\n\n<p><a href=\"https:\/\/github.com\/kubernetes\/test-infra\/tree\/master\/prow\">Prow<\/a> is the Kubernetes CI\/CD system. It can trigger jobs for particular actions, and almost everything you see happening in GitHub when using <code>\/<\/code> commands like <code>\/open<\/code> <code>\/assign<\/code> and so on is a Prow responsibility.<\/p>\n\n<p>Testgridshot is useful during a release cut. The cut starts from a GitHub issue; as we saw, it sounds very comfortable to have a command available like \/testgridshot and leave Prow the responsibility to comment.<\/p>\n\n<p>Now, the takeaway hidden in the word <strong>encapsulate<\/strong>: it is great to have both a CLI and a Prow command. Go becomes your baseline, where the operational experience lives. All the rest is UX, and you can have as many of those as you want.<\/p>\n\n<p>I am not writing this because you should stop and move all your bash to something else; it is just what I experienced myself. I see this little story from the Kubernetes SIG release as confirmation that it\u2019s easy to block ourselves, as release engineers, because there is a bash script that we don\u2019t want to rewrite. After all, it has been like that since forever. But the project is not the same since day one, the team grew or changed, and it is reasonable for a workflow to follow this evolution.<\/p>\n"},{"title":"How bare metal provisioning works in theory","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-bare-metal-works-in-theory"}},"description":"What I learned about how bare metal provisioning works while developing tinkerbell.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-10-08T10:08:27+00:00","published":"2020-10-08T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/how-bare-metal-works-in-theory","content":"<p>I am sure you heard about bare metal. Clouds are made of bare metal, for example.<\/p>\n\n<p>The art of bringing an inanimate piece of metal like a server to life as something useful is something I have been learning since I joined <a href=\"https:\/\/metal.equinix.com\">Equinix Metal<\/a> in May.<\/p>\n\n<p>Let me make a comparison with something you are probably familiar with. Do you know why Kubernetes is hard? Because there is not one Kubernetes. It is a glue of an unknown number of pieces working together to help you deploy your application.<\/p>\n\n<p>Bare metal is almost the same: hundreds of different providers, server sizes, architectures, chips that in some way you have to bring to life.<\/p>\n\n<p>Luckily, there are some common concepts we can work with. 
When a server boots, it runs a BIOS that looks in different places for something to run:<\/p>\n\n<ol>\n  <li>It looks for a hard drive<\/li>\n  <li>It looks for external storage like a USB stick or a CD-Rom<\/li>\n  <li>It looks for help from your network (netbooting)<\/li>\n<\/ol>\n\n<p>Options one and two are not realistic if the end goal is to get to a hands-free, reliable solution. I am sure cloud providers do not have people running around with a USB stick containing operating systems and firmware.<\/p>\n\n<h2 id=\"netbooting\">Netbooting<\/h2>\n\n<p>I spoke about <a href=\"https:\/\/gianarb.it\/blog\/first-journeys-with-netboot-ipxe\">my first experience netbooting Ubuntu<\/a> on my blog. That article is practical, with reproducible code. Here is the theory.<\/p>\n\n<p>When it comes to netbooting, you have to know what PXE means. Preboot Execution Environment is a standardized client\/server environment that boots when no operating system is found, and it helps an administrator boot an operating system remotely. Don\u2019t think about this OS as the one you have in your laptop; I mean, technically it is, but the one you run there or on a server is persisted, that\u2019s why you have files that survive a reboot.<\/p>\n\n<p>The one you start with PXE runs in memory, and from there, you have to figure out how to get the persisted OS you will run on your machine.<\/p>\n\n<p>When the in-memory operating system is up and running, you can do everything you are capable of with Ubuntu, Alpine, CentOS, or Debian. In practice, what people tend to do is run applications and scripts that format a disk with the right partitions and install the final operating system.<\/p>\n\n<p>Pretty cool. PXE is kind of old, and for that reason, it is burned into a lot of different NICs. You will hear a lot more about iPXE, a \u201cnew\u201d PXE implementation. What is cool about those is the <code>chain<\/code> function. From one PXE\/iPXE environment, you can chain another PXE\/iPXE environment. That\u2019s how you get from PXE (which usually runs by default on a lot of hardware; if you have a NUC, you run it) to iPXE.<\/p>\n\n<pre><code>chain --autofree https:\/\/boot.netboot.xyz\/ipxe\/netboot.xyz.lkrn\n<\/code><\/pre>\n\n<p>iPXE supports a lot more protocols to download an OS from, such as TFTP, FTP, HTTP\/S, NFS\u2026<\/p>\n\n<p>This is an example of an iPXE script:<\/p>\n\n<pre><code>#!ipxe\ndhcp net0\n\nset base-url http:\/\/archive.ubuntu.com\/ubuntu\/dists\/focal\/main\/installer-amd64\/current\/legacy-images\/netboot\/ubuntu-installer\/amd64\/\nkernel ${base-url}\/linux console=ttyS1,115200n8\ninitrd ${base-url}\/initrd.gz\nboot\n<\/code><\/pre>\n\n<p>The first command, <code>dhcp net0<\/code>, gets an IP for your hardware from the DHCP server. 
<code>kernel<\/code> and <code>initrd<\/code> set the kernel and the initial ramdisk to run in memory.<\/p>\n\n<p><code>boot<\/code> starts the <code>kernel<\/code> and the <code>initrd<\/code> you just set.<\/p>\n\n<p>There is more, but this is what I find myself using most often.<\/p>\n\n<h3 id=\"infrastructure\">Infrastructure<\/h3>\n\n<p>To netboot successfully, you need to distribute a couple of things:<\/p>\n\n<ol>\n  <li>An iPXE script<\/li>\n  <li>The operating system you want to run (kernel and initrd)<\/li>\n<\/ol>\n\n<h3 id=\"workflow\">Workflow<\/h3>\n\n<ol>\n  <li>Server starts<\/li>\n  <li>There is nothing to boot in the HD<\/li>\n  <li>It starts netbooting<\/li>\n  <li>It makes a DHCP request to get network configuration, and the DHCP returns the TFTP address with the location of the iPXE binary<\/li>\n  <li>iPXE starts and makes another DHCP request; the response contains the URL of the iPXE script with the commands you saw above<\/li>\n  <li>At this point, iPXE runs the script, downloads the kernel and the initrd with the protocol you specified, and it runs the in-memory operating system.<\/li>\n<\/ol>\n\n<p>Pretty cool!<\/p>\n\n<h2 id=\"the-in-memory-operating-system\">The in-memory operating system<\/h2>\n\n<p>The in-memory operating system can be as smart as you like; you can build your own, for example, starting from Ubuntu or Alpine. Size counts here because it has to fit in memory.<\/p>\n\n<p>When the operating system starts, it runs as PID 1, what is called <code>init<\/code>. It is an executable located in the ramdisk at <code>\/init<\/code>. That program can be as complicated as you like. It can be a full-blown binary that downloads commands to execute from a centralized location, or it can be bash scripts that format the local disk and install the final operating system.<\/p>\n\n<p>What I am trying to say is that you have to make the in-memory operating system useful for your purpose. If you use native Alpine or Ubuntu, the init script will start a bash shell, which is not that useful.<\/p>\n\n<h2 id=\"dhcp\">DHCP<\/h2>\n\n<p>As you saw, the DHCP server plays an important role. It is the first point of contact between inanimate hardware and the world. If you control what the DHCP server does, you can, for example, register and monitor the health of a server.<\/p>\n\n<p>Imagine you are at your laptop, expecting a hundred new servers in one of your datacenters: monitoring the DHCP requests, you will know when they are plugged into the network.<\/p>\n\n
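<p>A toy sketch of the idea, and to be clear not Tinkerbell code: listen on the DHCP server port and log every packet, so you know when a machine shows up on the network. It needs to run as root (or with CAP_NET_BIND_SERVICE) to bind port 67:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"log\"\n\t\"net\"\n)\n\nfunc main() {\n\t\/\/ DHCP requests are broadcast to UDP port 67.\n\tconn, err := net.ListenPacket(\"udp4\", \":67\")\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\tdefer conn.Close()\n\n\tbuf := make([]byte, 1500)\n\tfor {\n\t\tn, addr, err := conn.ReadFrom(buf)\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\t\t\/\/ A real implementation would parse the DHCP message;\n\t\t\/\/ logging the source is enough to spot new hardware.\n\t\tlog.Printf(\"DHCP packet (%d bytes) from %s\", n, addr)\n\t}\n}\n<\/code><\/pre>\n\n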
<h2 id=\"containers-what\">Containers what?<\/h2>\n\n<p>Containers are a comfortable way to distribute and run applications without having to know how to run them. Think about this scenario. Your in-memory operating system at boot runs Docker. The <code>init<\/code> script at this point can pull and run a Docker container with your logic for partitioning the disk and installing an operating system, or it can run some workload and exit, leaving space for the next boot (a bit like serverless, but with servers, way cooler).<\/p>\n\n<p>Or the Docker container can run a more complex application that reaches a centralized server that dispatches a list of actions to execute via a REST or gRPC API. Those actions can be declared and stored by you.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>The chain of tools and interactions to get from a piece of metal to something that runs some workload is not that long. Controlling all the steps and the tools along the way gives you the ability to provision cold servers from zero to something that developers know how to use.<\/p>\n\n<p>Ok, I lied to you. This is not just theory. This is how <a href=\"https:\/\/tinkerbell.org\">Tinkerbell<\/a> works.<\/p>\n\n<p class=\"small\">This post was originally posted on <a href=\"https:\/\/dev.to\/gianarb\/how-bare-metal-provisioning-works-in-theory-1e4e\">dev.to<\/a>.<\/p>\n"},{"title":"Thinking in Systems written by Donella H. Meadows","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/thinking-in-systems-donella-meadows-review"}},"description":"Review of the book Thinking in Systems written by Donella Meadows","image":"https:\/\/gianarb.it\/img\/thinking-in-systems-book.jpg","updated":"2020-09-12T10:08:27+00:00","published":"2020-09-12T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/thinking-in-systems-donella-meadows-review","content":"<p><a href=\"https:\/\/amzn.to\/3khu7k9\">\u201cThinking in Systems\u201d<\/a> acted on me as a <strong>reinforcing loop<\/strong>: the motivation I have to explore thinking of software as loops and systems showed up reading those pages.<\/p>\n\n<blockquote>\n  <p>The second kind of feedback loop is amplifying, reinforcing, self-multiplying, snowballing\u2013 a vicious or virtuous circle that can cause healthy growth or runaway destruction. It is called a reinforcing feedback loop\u2026<\/p>\n\n  <p>Thinking in systems - Donella H. Meadows<\/p>\n<\/blockquote>\n\n<p class=\"text-center\"><img src=\"\/img\/thinking-in-systems-book.jpg\" alt=\"Thinking in Systems's book cover\" class=\"img-fluid mh-25\" \/><\/p>\n\n<p>It is a great way to put words to concepts I am exploring in practice. It is not a book written for developers, but you know, it works, and you can apply what you read everywhere.<\/p>\n\n<p>The words come from <a href=\"https:\/\/en.wikipedia.org\/wiki\/Donella_Meadows\">Donella Meadows<\/a>, the author of this book: a scientist, author, teacher, and systems analyst.<\/p>\n\n<p>If you enjoyed <a href=\"https:\/\/gianarb.it\/blog\/reactive-planning-and-reconciliation-in-go\">\u201cReactive planning and reconciliation in Go\u201d<\/a>, this book is a great way to go deeper into the topic without reading a long book; in the end, it is less than 200 pages.<\/p>\n\n<p>Systems are everywhere: the water cycle, the ability our body has to recover, evolution. This book helps you understand how to describe and spot them, giving you the ability to see systems when writing software or when looking for alternative solutions.<\/p>\n\n<p>From my experience, when you can simplify a problem into a system, you get an entity capable of balancing itself, a more resilient solution, and a repeatable workflow.<\/p>\n\n<p>Self-balancing like a thermostat, resilient like a Kubernetes reconciliation loop, repeatable like the water cycle or an idempotent server provisioning.<\/p>\n\n"},{"title":"Maintainer life, be an enabler","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/maintainer-life-be-enabler"}},"description":"Being an enabler is important in my daily job. 
It is a skill I learned as an open source maintainer.","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2020-08-20T09:08:27+00:00","published":"2020-08-20T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/maintainer-life-be-enabler","content":"<p>This is not something important only in open source, or only as a maintainer. But it is a skill I have personally learned as one.<\/p>\n\n<p>When building a sustainable open source project (and it applies to teams as well), writing the right code or a new feature is often not that useful in the long term. I think you get to something better more quickly when people collaborate in an effective way.<\/p>\n\n<p>The maintainer\u2019s role is to enable other people to contribute successfully.<\/p>\n\n<p>You have to switch from \u201clet\u2019s write documentation\u201d to \u201chow do I create a workflow that enables contributors to write documentation.\u201d I started with documentation because I think it is crucial. It is easier to write documentation while you are writing code or a new feature. And we know a developer prefers to write code; as a maintainer, you need to create a workflow that allows the contributor to write documentation quickly when writing code. In practice: mark a PR with a label <code>needs-doc<\/code> and make it a requirement for the PR to be merged. The maintainer has to design a rock-solid structure for the documentation. In this way, the contributor won\u2019t spend two days trying to figure out where to add the documentation for their feature.<\/p>\n\n<p>You can\u2019t ask a contributor to create an entire test suite or to write documentation if you don\u2019t have one. But from a solid foundation it is reasonable to ask a contributor to keep at least the same quality level.<\/p>\n\n<p>You don\u2019t write all the tests; you create and maintain the continuous delivery pipeline required to help contributors stay compliant. Is your project suffering from low test coverage? Do not waste time writing all the tests yourself; the codebase is significant, and pull requests are flowing continuously. You have to stay focused on developing a system that brings and keeps you where you want: good coverage, in this case.<\/p>\n\n<p>In practice, you can create another label <code>needs-tests<\/code> to notify the contributor that their work won\u2019t be merged until tests are added (the plural is crucial, tests!). You can use something like <a href=\"https:\/\/codecov.io\/\">codecov<\/a> in your CI to evaluate the situation with numbers. Invest time in making sure that tests are easy to write; add a section to the contributing guide highlighting how to write a good test. If a package is too hard to test, you can write a few tests that other people can use as a starting point, or a reusable set of utility functions.<\/p>\n\n<p>Being a facilitator or an enabler is a lot of work. If you feel less effective because you wrote 90% of the codebase at this point, and you think you can write documentation and tests by yourself in a couple of days, you are wrong. 
Or at least, from my experience, you can do it, but the outcome will be worse in quality compared with the one you can build from a solid foundation in a collaborative environment.<\/p>\n"},{"title":"Interface segregation in action with Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/interface-segreation-in-action-with-go"}},"description":"It takes a couple of hours to get a hello world up and running in a new language, but it takes ages to learn it deeply. Even if Go has an affordable learning curve, some concepts take time to stick in mind. Interfaces are everywhere, and this flexibility makes them crucial to writing maintainable Go code.","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2020-08-20T09:08:27+00:00","published":"2020-08-20T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/interface-segreation-in-action-with-go","content":"<p>Everybody should write an article about Golang interfaces! I don\u2019t know why I waited so long for mine!<\/p>\n\n<p>Golang interfaces are your best friends when it comes to mocking an object or to specifying a well scoped set of functionalities a function requires to interact with an object.<\/p>\n\n<p>Yep! That\u2019s how they work: you have an entire object that does a lot of cool things, but when you pass it to a function only a subset of it gets used; that\u2019s when you can replace the structure itself with an interface that only requires what the function actually needs.<\/p>\n\n<p>In this way you will have a smaller piece of code to mock in your tests and to deal with (it is also a good way to hide functions you don\u2019t want other people, or yourself in a rush, to use).<\/p>\n\n<p>Even more when you remember to keep the interface small via composition.<\/p>\n\n<p>For example, let\u2019s suppose you have to build an interface that describes a generic resource that you can Create, Update and Delete. This is useful to standardize something that can be persisted in a database. Let me set it up.<\/p>\n\n<p>You should not use <code>interface{}<\/code> because it is too generic. I used it for simplicity, but Kubernetes for example uses an object called <a href=\"https:\/\/godoc.org\/k8s.io\/apimachinery\/pkg\/runtime\"><code>runtime.Object<\/code><\/a> and it is way better. Go 2 will have generics that will make this situation even easier. Or you can use code generation as well. But the idea of using a serializable object like Kubernetes does is good.<\/p>\n\n<pre><code class=\"language-golang\">type Resource interface {\n    Create(ctx context.Context) error\n    Update(ctx context.Context, updated interface{}) error\n    Delete(ctx context.Context) error\n}\n<\/code><\/pre>\n\n<p>This is a reasonably small interface and it is easy to satisfy, but I do not like the name; it does not help me figure out its purpose. It represents a resource, but I prefer to name interfaces after actions or with an adjective. In this case the structure that implements this interface can be stored in a database. I think a better name for it is <a href=\"https:\/\/en.wiktionary.org\/wiki\/persistable\">\u201cPersistable\u201d<\/a> because it makes its purpose clear.<\/p>\n\n
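<p>Any struct with those three methods satisfies it. For example, a hypothetical <code>User<\/code> could look like this (just a sketch, the persistence details are left out):<\/p>\n\n<pre><code class=\"language-golang\">type User struct {\n    ID   string\n    Name string\n}\n\nfunc (u *User) Create(ctx context.Context) error {\n    \/\/ INSERT INTO users ...\n    return nil\n}\n\nfunc (u *User) Update(ctx context.Context, updated interface{}) error {\n    \/\/ UPDATE users SET ... WHERE id = u.ID\n    return nil\n}\n\nfunc (u *User) Delete(ctx context.Context) error {\n    \/\/ DELETE FROM users WHERE id = u.ID\n    return nil\n}\n<\/code><\/pre>\n\n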
<p>A strategy to make an interface smaller in this case is to break it into actions:<\/p>\n\n<pre><code class=\"language-golang\">type Creatable interface {\n    Create(ctx context.Context) error\n}\n\ntype Updatable interface {\n    Update(ctx context.Context, updated interface{}) error\n}\n\ntype Deletable interface {\n    Delete(ctx context.Context) error\n}\n<\/code><\/pre>\n\n<p>And you can use composition to create an interface that requires all three actions, if you need it:<\/p>\n\n<pre><code class=\"language-golang\">type Persistable interface {\n    Deletable\n    Updatable\n    Creatable\n}\n<\/code><\/pre>\n\n<p>This is useful when a function uses more than one of those actions. If you have an interface that also contains <code>Get<\/code> or <code>View<\/code>, you can think about a different split: <code>ReadOnly<\/code> contains <code>Get<\/code> and <code>View<\/code>, and <code>Modifiable<\/code> requires only the functions <code>Update<\/code>, <code>Create<\/code>, <code>Delete<\/code>.<\/p>\n\n<p>Imagine you are writing a set of http handlers to expose a CRUD API around your resources:<\/p>\n\n<pre><code>Create\nUpdate\nDelete\nList\nGetByID\n<\/code><\/pre>\n\n<p>Usually it looks like this: you create an interface for every function, all your resources implement the functions, and you can write a single \u201cCreate\u201d handler for all the resources:<\/p>\n\n<pre><code class=\"language-golang\">\/\/ Register it with: http.HandleFunc(\"\/resource\", CreateHandle(resource))\nfunc CreateHandle(c Creatable) func(w http.ResponseWriter, r *http.Request) {\n    return func(w http.ResponseWriter, r *http.Request) {\n        if err := c.Create(r.Context()); err != nil {\n            w.WriteHeader(http.StatusInternalServerError)\n            return\n        }\n        w.WriteHeader(http.StatusCreated)\n    }\n}\n<\/code><\/pre>\n\n<p>If you have to write a test for the handler, it does not matter how complicated the resource is: you just have to mock the <code>Creatable<\/code> interface, one single function. This is a very basic example; if you need to add validation, the <code>Creatable<\/code> interface can require a <code>func Valid() error<\/code> that you can add incrementally to all your resources.<\/p>\n\n<pre><code class=\"language-golang\">func CreateHandle(c Creatable) func(w http.ResponseWriter, r *http.Request) {\n    return func(w http.ResponseWriter, r *http.Request) {\n        if err := c.Valid(); err != nil {\n            w.WriteHeader(http.StatusBadRequest)\n            return\n        }\n        if err := c.Create(r.Context()); err != nil {\n            w.WriteHeader(http.StatusInternalServerError)\n            return\n        }\n        w.WriteHeader(http.StatusCreated)\n    }\n}\n<\/code><\/pre>\n"},{"title":"E2E testing Tinkerbell Setup tutorial in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/e2e-test-tinkerbell-vagrant-setup-with-go"}},"description":"My takeaways from having to write an end to end test for the Tinkerbell Vagrant setup tutorial. 
How I wrote it and why, lessons learned, and tips.","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2020-08-03T09:08:27+00:00","published":"2020-08-03T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/e2e-test-tinkerbell-vagrant-setup-with-go","content":"<p><a href=\"https:\/\/tinkerbell.org\">Tinkerbell<\/a> is a tool open sourced recently by <a href=\"https:\/\/packet.com\">Packet, an Equinix company<\/a>, the company I work for.<\/p>\n\n<p>It is a provisioner for bare metal. You can switch servers on and off via API, execute workflows, and install operating systems on a server that does not have one!<\/p>\n\n<p>Tinkerbell is in its early days as an open source project, but the concept is battle tested by 6 years of production use internally at Packet.<\/p>\n\n<p>I am excited to learn a lot of the cool technologies that make datacenters work, but I am not here to write about that<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1<\/a><\/sup>.<\/p>\n\n<p>One of my recent tasks<sup id=\"fnref:2\"><a href=\"#fn:2\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">2<\/a><\/sup> was about end to end testing the Vagrant Setup tutorial<sup id=\"fnref:3\"><a href=\"#fn:3\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">3<\/a><\/sup> we wrote.<\/p>\n\n<p>I like the idea! The Setup tutorial is important for our community because it is the entry point for a lot of people, and having a consistent way to test its accuracy is crucial.<\/p>\n\n<p>It is also a quick way to get a valuable end to end test running that covers the entire project, at a high level.<\/p>\n\n<p>Tinkerbell is under development and it is easy to make mistakes and break things at this point; we have to know when it happens. Tinkerbell requires virtualisation capabilities, and we do not have an end to end testing framework for that yet.<\/p>\n\n<h2 id=\"tell-me-more-about-the-test-itself\">Tell me more about the test itself<\/h2>\n\n<p>It is a long task, but let\u2019s summarize it (have a look at the tutorial; it helps reading this article moving forward):<\/p>\n\n<ol>\n  <li>The script has to start a vagrant machine called provisioner<\/li>\n  <li>When the provisioner is up it has to exec via ssh a docker-compose command that starts a bunch of containers, one of those being the Tink gRPC server<\/li>\n  <li>When Tinkerbell is up and running we have to do a bunch of things like:\n a. Register new hardware\n b. Create a template\n c. Create the workflow that will get executed in the worker from a template<\/li>\n  <li>Start the worker<\/li>\n  <li>Wait and check that the workflow executes as expected.<\/li>\n<\/ol>\n\n<p>NOTE: the test should clean up after itself. Vagrant is not ideal for parallelizing VMs, and we do not support that; as it is today, a dirty environment will break future tests.<\/p>\n\n<h2 id=\"how-to-write-this-test\">How to write this test<\/h2>\n\n<p>There are a million ways to write end to end tests; the ones I evaluated are bash and Go.<\/p>\n\n<p>The project is in Go, and Tinkerbell serves a gRPC server and a client. I thought it was a good idea to write everything in Go, to try the client itself and because it is easier to coordinate long running actions with channels and context compared with bash, for example. Or at least that\u2019s what I think.<\/p>\n\n
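<p>This is a sketch of the kind of coordination I mean. <code>workflowDone<\/code> is a hypothetical helper, not part of the Tink client, but the polling and timeout logic is where channels and context shine:<\/p>\n\n<pre><code class=\"language-go\">\/\/ waitForWorkflow polls until the workflow completes or ctx times out.\nfunc waitForWorkflow(ctx context.Context, id string) error {\n\tticker := time.NewTicker(10 * time.Second)\n\tdefer ticker.Stop()\n\tfor {\n\t\tselect {\n\t\tcase &lt;-ctx.Done():\n\t\t\treturn fmt.Errorf(\"workflow %s did not complete: %w\", id, ctx.Err())\n\t\tcase &lt;-ticker.C:\n\t\t\tdone, err := workflowDone(ctx, id) \/\/ hypothetical helper\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif done {\n\t\t\t\treturn nil\n\t\t\t}\n\t\t}\n\t}\n}\n<\/code><\/pre>\n\n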
<p>I can also keep the code inside the <code>testing<\/code> framework that Go provides, keeping the tests closer to the code and to the developers that contribute to the project, compared with a random <code>scripts.sh<\/code>.<\/p>\n\n<p>I am not sure if this will be useful in the future, but one of my goals was to serve a clean API and a small framework that can be used to write other tests that start from the Vagrant setup. This is the API I designed:<\/p>\n\n<pre><code class=\"language-go\">type Vagrant struct {}\n\nfunc Up(ctx context.Context, opts ...VagrantOpt) (*Vagrant, error) {}\n\nfunc (v *Vagrant) Destroy(ctx context.Context) error {}\n\nfunc (v *Vagrant) Exec(ctx context.Context, args ...string) ([]byte, error) {}\n<\/code><\/pre>\n\n<p>Consistency is important: developers who know Vagrant, or who will have to fix the tests coming from the tutorial, will recognize <code>Up<\/code>, <code>Destroy<\/code> and <code>Exec<\/code> because those verbs are used by Vagrant and in the documentation itself.<\/p>\n\n<p>Even for Go developers <code>Exec<\/code> is not a new function; <code>os\/exec<\/code><sup id=\"fnref:4\"><a href=\"#fn:4\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">4<\/a><\/sup> exists and it does a similar job, the one I wrote works over ssh.<\/p>\n\n<p>This library now has its own repository: <a href=\"https:\/\/github.com\/gianarb\/vagrant-go\">gianarb\/vagrant-go<\/a>.<\/p>\n\n<h2 id=\"go-challenges-and-tips-and-tricks\">Go challenges and tips and tricks<\/h2>\n\n<p>I would like to share some of the challenges I faced when writing the Vagrant framework and some tips useful for this task.<\/p>\n\n<h2 id=\"opt-are-great\">Opts are great!<\/h2>\n\n<p>I have to say options are great! It is a well known pattern in Go and it translates to:<\/p>\n\n<pre><code class=\"language-go\">ctx := context.Background()\n\nmachine, err := vagrant.Up(ctx,\n    vagrant.WithLogger(t.Logf),\n    vagrant.WithMachineName(\"provisioner\"),\n    vagrant.WithWorkdir(\"..\/..\/deploy\/vagrant\"),\n)\nif err != nil {\n    t.Fatal(err)\n}\n<\/code><\/pre>\n\n<p>It allowed me to add new options and to tune the <code>Vagrant<\/code> struct with strong defaults. If you have never used this pattern, do it! 
It is pretty easy: you need a function type like this:<\/p>\n\n<pre><code class=\"language-go\">type VagrantOpt func(*Vagrant)\n<\/code><\/pre>\n\n<p>In this way you can write as many <code>With<\/code> functions as you need:<\/p>\n\n<pre><code class=\"language-go\">func WithStderr(s io.ReadWriter) VagrantOpt {\n\treturn func(v *Vagrant) {\n\t\tv.Stderr = s\n\t}\n}\n\nfunc RunAsync() VagrantOpt {\n\treturn func(v *Vagrant) {\n\t\tv.async = true\n\t}\n}\n<\/code><\/pre>\n\n<p>I execute the opts as part of the <code>Up<\/code> function:<\/p>\n\n<pre><code class=\"language-go\">func Up(ctx context.Context, opts ...VagrantOpt) (*Vagrant, error) {\n\tconst (\n\t\tdefaultVagrantBin = \"vagrant\"\n\t\tdefaultName       = \"vagrant\"\n\t\tdefaultWorkdir    = \".\"\n\t)\n\tv := &amp;Vagrant{\n\t\tVagrantBinPath: defaultVagrantBin,\n\t\tName:           defaultName,\n\t\tWorkdir:        defaultWorkdir,\n\t\tlog: func(format string, args ...interface{}) {\n\t\t\tfmt.Println(fmt.Sprintf(format, args...))\n\t\t},\n\t}\n\tfor _, opt := range opts {\n\t\topt(v)\n\t}\n\n\t\/\/ ...\n}\n<\/code><\/pre>\n\n<h3 id=\"test-segmentation-with-packages\">Test segmentation with packages<\/h3>\n\n<p>I don\u2019t want to run the vagrant end to end tests as part of the default test suite because they take too much time and they require Vagrant installed. They do not even run in CI the same way unit tests do, but I will get to that later.<\/p>\n\n<p>I learned that packages in directories that start with <code>_<\/code> do not get picked up when using something like <code>.\/...<\/code>.<\/p>\n\n<p>I wrote the framework and tests as part of the package:<\/p>\n\n<pre><code class=\"language-console\">.\/test\/_vagrant\/\n    .\/vagrant.go\n    .\/vagrant_test.go\n<\/code><\/pre>\n\n<p>In this way, to run the tests you have to call the package out explicitly:<\/p>\n\n<pre><code class=\"language-console\">$ go test .\/test\/_vagrant\n<\/code><\/pre>\n\n<h3 id=\"observability-or-what-is-going-on\">Observability or \u201cwhat is going on?\u201d<\/h3>\n\n<p>Go has its own way to print logs during the execution of the tests:<\/p>\n\n<pre><code class=\"language-console\">$ go test -v .\/...\n<\/code><\/pre>\n\n<p>It works because <code>testing<\/code> has the functions <code>t.Log<\/code> and <code>t.Logf<\/code>. Those functions honor the <code>-v<\/code> flag. 
To be compliant with that, and to keep the <code>Vagrant<\/code> struct agnostic, I wrote a <code>WithLogger<\/code>:<\/p>\n\n<pre><code class=\"language-go\">func WithLogger(log func(string, ...interface{})) VagrantOpt {\n\treturn func(v *Vagrant) {\n\t\tv.log = log\n\t}\n}\n<\/code><\/pre>\n\n<p>The function it accepts as an argument is <code>t.Logf<\/code>.<\/p>\n\n<p>Continuous Integration runs this task with verbosity enabled because it is long and complicated. The logging prints all the output from the <code>vagrant up<\/code> and <code>destroy<\/code> commands, and the stdout from the <code>exec<\/code> over ssh; it gives a very good overview of what is going on.<\/p>\n\n<h3 id=\"stdout-and-stdin-buffer-and-loggers\">Stdout and Stdin, buffer and loggers<\/h3>\n\n<p>I don\u2019t have a lot to say about this other than: \u201cit was very hard to do!!\u201d. The code that fixed my problems can be summarized in this way:<\/p>\n\n<pre><code class=\"language-go\">stderrPipe, err := cmd.StderrPipe()\nif err != nil {\n    return nil, fmt.Errorf(\"exec error: %v\", err)\n}\nstdoutPipe, err := cmd.StdoutPipe()\nif err != nil {\n    return nil, fmt.Errorf(\"exec error: %v\", err)\n}\n\ngo v.pipeOutput(ctx, fmt.Sprintf(\"%s stderr\", cmd.String()), bufio.NewScanner(stderrPipe))\ngo v.pipeOutput(ctx, fmt.Sprintf(\"%s stdout\", cmd.String()), bufio.NewScanner(stdoutPipe))\n\nerr = cmd.Start()\n<\/code><\/pre>\n\n<pre><code class=\"language-go\">func (v *Vagrant) pipeOutput(ctx context.Context, name string, scanner *bufio.Scanner) {\n\tfor scanner.Scan() {\n\t\tselect {\n\t\tcase &lt;-ctx.Done():\n\t\t\treturn\n\t\tdefault:\n\t\t\tv.log(\"[pipeOutput %s] %s\", name, scanner.Text())\n\t\t}\n\t}\n}\n<\/code><\/pre>\n\n<h3 id=\"kill-process-and-subprocess\">Kill process and subprocess<\/h3>\n\n<p>There are a lot of processes going on when creating or destroying a VM with Vagrant. There is VirtualBox, for example, and we have an edge case for the worker machine because the <code>up<\/code> command technically never ends: it stays pending until you <code>destroy<\/code> the machine. But you can\u2019t run multiple commands against the same machine, because <code>up<\/code> holds a lock and it blocks <code>destroy<\/code> from executing. <code>os\/exec<\/code> helps here, but you have to tune it a little bit:<\/p>\n\n<pre><code class=\"language-go\">cmd := exec.CommandContext(ctx, v.VagrantBinPath, args...)\ncmd.Dir = v.Workdir\ncmd.Stdout = v.Stdout\ncmd.Stderr = v.Stderr\ncmd.SysProcAttr = &amp;syscall.SysProcAttr{Setpgid: true}\n<\/code><\/pre>\n\n<p>Now <code>cmd<\/code> runs in its own process group, and killing that group terminates the subprocesses as well.<\/p>\n\n
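<p>This is my summary of the idea, not the exact vagrant-go code: with <code>Setpgid<\/code> the child gets its own process group, so you can signal the whole group, vagrant plus all its subprocesses, with a negative pid:<\/p>\n\n<pre><code class=\"language-go\">\/\/ killGroup terminates cmd and every subprocess it spawned.\nfunc killGroup(cmd *exec.Cmd) error {\n\tpgid, err := syscall.Getpgid(cmd.Process.Pid)\n\tif err != nil {\n\t\treturn err\n\t}\n\t\/\/ The minus sign targets the whole process group.\n\treturn syscall.Kill(-pgid, syscall.SIGKILL)\n}\n<\/code><\/pre>\n\n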
<h2 id=\"continuous-integration\">Continuous Integration<\/h2>\n\n<p>We decided to go with GitHub Actions with a self-hosted runner; in this way we can use Packet bare metal that supports virtualisation.<\/p>\n\n<p>As I told you, I don\u2019t want this test to run for every commit or for every pull request, because it is time and resource consuming. It is also risky, so I want maintainers to decide when to trigger it.<\/p>\n\n<p>That\u2019s why it gets triggered with a GitHub label:<\/p>\n\n<pre><code class=\"language-yaml\">name: Setup with Vagrant on Packet\non:\n  push:\n  pull_request:\n    types: [labeled]\n\njobs:\n  vagrant-setup:\n    if: contains(github.event.pull_request.labels.*.name, 'ci-check\/vagrant-setup')\n    runs-on: self-hosted\n    steps:\n    - name: Checkout\n      uses: actions\/checkout@v2\n    - name: Vagrant Test\n      run: |\n        export VAGRANT_DEFAULT_PROVIDER=\"virtualbox\"\n        go test -v .\/test\/_vagrant\n<\/code><\/pre>\n\n<p>This is what it takes to make the process work!! And I am still surprised it is so easy! When a contributor labels a PR with <code>ci-check\/vagrant-setup<\/code> the process starts. My idea was to remove the label straight away, but I am <a href=\"https:\/\/github.community\/t\/actions-ecosystem-action-remove-labels-fails-resource-not-accessible-by-integration\/124188\">blocked<\/a>.<\/p>\n\n<p>An alternative that we are evaluating is to run it as a cronjob<sup id=\"fnref:5\"><a href=\"#fn:5\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">5<\/a><\/sup> as well.<\/p>\n\n<h2 id=\"testing-is-the-real-power\">Testing is the real power<\/h2>\n\n<p>E2E tests are fun to write because they bring a lot of challenges in terms of coordination and stability. You have to write good code in order to make them stable. I hope you learned something from my experience, and if you have any questions let me know <a href=\"https:\/\/twitter.com\/gianarb\">here<\/a>. I am happy to go deeper on some of those topics based on your suggestions.<\/p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n  <ol>\n    <li id=\"fn:1\">\n      <p>If you are curious ask me any question on Twitter @gianarb\u00a0<a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">&#8617;<\/a><\/p>\n    <\/li>\n    <li id=\"fn:2\">\n      <p>https:\/\/github.com\/tinkerbell\/sandbox\/pull\/7\u00a0<a href=\"#fnref:2\" class=\"reversefootnote\" role=\"doc-backlink\">&#8617;<\/a><\/p>\n    <\/li>\n    <li id=\"fn:3\">\n      <p>https:\/\/tinkerbell.org\/setup\/local-with-vagrant\/\u00a0<a href=\"#fnref:3\" class=\"reversefootnote\" role=\"doc-backlink\">&#8617;<\/a><\/p>\n    <\/li>\n    <li id=\"fn:4\">\n      <p>https:\/\/golang.org\/pkg\/os\/exec\/#pkg-examples\u00a0<a href=\"#fnref:4\" class=\"reversefootnote\" role=\"doc-backlink\">&#8617;<\/a><\/p>\n    <\/li>\n    <li id=\"fn:5\">\n      <p>https:\/\/docs.github.com\/en\/actions\/reference\/workflow-syntax-for-github-actions#onschedule\u00a0<a href=\"#fnref:5\" class=\"reversefootnote\" role=\"doc-backlink\">&#8617;<\/a><\/p>\n    <\/li>\n  <\/ol>\n<\/div>\n"},{"title":"Show Me Your Code with David McKay (rawkode): Terraform what?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/terraform-what-with-rawkode"}},"description":"I would like to lay out a GitHub repository using Terraform that helps me manage a GitHub organization via code in a collaborative way. I am not that good with Terraform, even less when it comes to Terraform 0.12 and all the sweet things it does. 
My friend David McKay aka @rawkode is way better than me and he will teach me a bunch of things live on Twitch","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-07-29T09:00:27+00:00","published":"2020-07-29T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/terraform-what-with-rawkode","content":"<blockquote class=\"twitter-tweet tw-align-center\"><p lang=\"en\" dir=\"ltr\">\ud83c\udd98Tomorrow is\n&quot;Show me your code&quot; time! \u231b\ufe0fLive on Twitch!<br \/><br \/>Who is the guest?\nThe unique <a href=\"https:\/\/twitter.com\/rawkode?ref_src=twsrc%5Etfw\">@rawkode<\/a> !<br \/>What is\nall about? David will teach <a href=\"https:\/\/twitter.com\/hashtag\/Terraform?src=hash&amp;ref_src=twsrc%5Etfw\">#Terraform<\/a>\nto a newbie (me!) and I hope to learn how to manage a GitHub organization as\ncode! <a href=\"https:\/\/t.co\/jNbq7uoAD3\">https:\/\/t.co\/jNbq7uoAD3<\/a><br \/>See you\nthere! \ud83d\udda5\ufe0f <a href=\"https:\/\/t.co\/b2P4bj8Tki\">pic.twitter.com\/b2P4bj8Tki<\/a><\/p>&mdash; gianarb\n(@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1288511212351901697?ref_src=twsrc%5Etfw\">July\n29, 2020<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Let\u2019s suppose I don\u2019t know a lot about Terraform because infrastructure as code is not something that makes me happy. I know it is useful. There are a lot of cool things in Terraform itself and I should know it better.<\/p>\n\n<p>I happen to know and work with David, who is always on top of the topic! I had an idea, and he will help me validate it live tomorrow afternoon, Central European Time.<\/p>\n\n<p>We will put together a repository of \u201cGitHub as Code\u201d that we can use to manage a GitHub organisation collaboratively:<\/p>\n\n<ol>\n  <li>Adding new members<\/li>\n  <li>Creating new teams<\/li>\n  <li>New repositories<\/li>\n  <li>Importing already created resources from GitHub to Terraform<\/li>\n  <li>If we can, have a look at CI\/CD with GitHub Actions and\/or Terraform Cloud.<\/li>\n<\/ol>\n\n<p>I will try <a href=\"https:\/\/visualstudio.microsoft.com\/services\/live-share\/\">Visual Studio Code with Live Share<\/a> for the first time, and I think David will teach me a lot of things about Terraform and the 0.12 syntax.<\/p>\n\n<p>This is something I hope we can use with <a href=\"https:\/\/github.com\/tinkerbell\">Tinkerbell<\/a>.<\/p>\n\n<h2 id=\"david-mckey-rawkode\">David McKay (rawkode)<\/h2>\n\n<p>David McKay is a technologist from Glasgow, Scotland. He is currently working at Packet as a Senior Tech Evangelist. 
Well known on Twitter as <a href=\"https:\/\/twitter.com\/rawkode\">@rawkode<\/a>, he writes on <a href=\"https:\/\/rawkode.com\/articles\">rawkode.com<\/a>.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li>We will start from a nonsense prototype I put together yesterday: <a href=\"https:\/\/github.com\/gianarb\/terraformy-github-org\">github.com\/gianarb\/terraformy-github-org<\/a><\/li>\n  <li>We will use Terraform 0.12 with the <a href=\"https:\/\/www.terraform.io\/docs\/providers\/github\/index.html\">GitHub provider<\/a><\/li>\n  <li>Have a look at what we are doing at Packet with <a href=\"https:\/\/tinkerbell.org\">Tinkerbell<\/a><\/li>\n<\/ul>\n"},{"title":"First journeys with netboot and ipxe installing Ubuntu","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/first-journeys-with-netboot-ipxe"}},"description":"I have started to experiment with netboot, iPXE and OS automation recently to better understand how bare metal provisioning works. I got to a point where I am able to install Ubuntu automatically via iPXE and preseed. This is an article about how, and a bit of why.","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2020-06-10T09:08:27+00:00","published":"2020-06-10T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/first-journeys-with-netboot-ipxe","content":"<p>I recently joined Packet, a company acquired by Equinix; finally, after 7 years working with cloud computing, I can see what clouds look like from the other side!<\/p>\n\n<blockquote class=\"blockquote text-center\">\n  <p>Spoiler alert: clouds are made of servers<\/p>\n<\/blockquote>\n\n<p>As a first task I had to revamp the kubernetes cluster-api implementation, moving it from v1alpha1 to v1alpha3. Kind of cool, and in a domain I know very well. We tagged the first release and I can\u2019t wait to see how it will be used. I have some meetups and webinars planned about it, so stay in touch on <a href=\"https:\/\/twitter.com\/gianarb\">twitter<\/a> to know more about it.<\/p>\n\n<p>Anyway, one of the topics I am curious about is hardware automation. The idea of getting a piece of inanimate metal, well known as a rack, a switch, or a server, up and running in a repeatable and autonomous way is a topic I never touched, and I would like to know more! Obviously, this is only one of the articles I will write about the topic. Mainly because there is much to learn.<\/p>\n\n<p>As you can imagine, when we buy a server it does not do much: it\u2019s great for keeping your door open, or as a table. It has to be configured; in our case customers can do it via API, which means there is some code involved! I want to know more!!<\/p>\n\n<p>For sure there are a couple of things that you have to do manually: assemble the server, power it on, plug the ethernet cable in, pick the right location, and things like that.<\/p>\n\n<p>But as you can imagine it comes without an operating system, which is even more complicated to install because customers can select the one they like most, or even push their own. 
This is for sure something that has to happen magically; I doubt we have people running around a datacenter with USB sticks installing operating systems.<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"https:\/\/i0.wp.com\/www.anonimacinefili.it\/wp-content\/uploads\/2019\/07\/forrest-gump-25-anni.jpg?fit=1200%2C600\" alt=\"Forrest Gump picture\" class=\"img-fluid\" \/><\/p>\n\n<p>One of the things that runs when booting a laptop or a server is the bootloader. The one that requires master skills to get into, because there is a timeout and I never know what to press! I have to be honest, I thought they were pretty static and not that fun.<\/p>\n\n<p>BUT, there are smart boot-loaders! We know what smart means these days: internet! In practice there is a bootloader capable of booting not from USB, not from disk, but from the internet.<\/p>\n\n<p>Usually over a private network, but it is not mandatory; after all, we spent the last couple of years doing <code>curl something.com | bash<\/code>. The bootloader downloads a <code>kernel<\/code>, the <code>initrd<\/code>, and it will <code>boot<\/code> an operating system. It is like having a USB stick that starts the live installation of Ubuntu; there is a lot more after that, because we have to persist the installation on disk, format partitions, and so on.<\/p>\n\n<p>In this article I will show you at first how to get to something that looks like the installation wizard for Ubuntu, and also how to automate the installation via preseed.<\/p>\n\n<p>The bootloader is called PXE, and the new generation I used is called iPXE.<\/p>\n\n<p>The cool thing about PXE\/iPXE is that you can chain scripts from one to another. <a href=\"https:\/\/twitter.com\/grhmc\">Graham Christensen<\/a> told me that the main goal when you work with PXE is to escape from it and get to iPXE, which is way cooler. Time for a recap:<\/p>\n\n<ol>\n  <li>The machine starts and enters PXE<\/li>\n  <li>PXE downloads and chains iPXE; in this way we are in the iPXE bootloader<\/li>\n  <li>From iPXE you can download the kernel and initrd, and boot the OS in RAM.<\/li>\n<\/ol>\n\n<p>iPXE supports different ways to download what it needs from the internet; the ones I used so far are TFTP and HTTP.<\/p>\n\n<h2 id=\"what-is-this-pxeipxe\">What is this PXE\/iPXE?<\/h2>\n\n<p>Such a nice question; I had the same one a few days ago. 
A couple of links:<\/p>\n\n<ol>\n  <li><a href=\"https:\/\/en.wikipedia.org\/wiki\/Preboot_Execution_Environment\">Wikipedia: Preboot Execution Environment<\/a><\/li>\n  <li><a href=\"https:\/\/ipxe.org\/\">iPXE: open source boot firmware<\/a><\/li>\n<\/ol>\n\n<p>Roughly you can think about iPXE as a shell that has a bunch of commands like:<\/p>\n\n<ol>\n  <li>dhcp: requests an IP from a DHCP server and configures the network interface<\/li>\n  <li>route: figures out if the network interface is configured (if it has an IP already)<\/li>\n  <li>chain: gets an argument (a URL) and executes its content; it is a good way to pass scripts<\/li>\n  <li>set: sets variables, as in <code>set name value<\/code><\/li>\n  <li>kernel: downloads the kernel from a source and loads it<\/li>\n  <li>initrd: downloads the init ramdisk<\/li>\n  <li>boot: triggers the boot<\/li>\n<\/ol>\n\n<p>And <a href=\"https:\/\/ipxe.org\/cmd\">many more<\/a> that I did not use yet, but the docs list them.<\/p>\n\n<p>It also has support for building a menu like this one:<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"https:\/\/netboot.xyz\/images\/netboot.xyz.gif\" alt=\"netboot menu\" class=\"img-fluid\" \/><\/p>\n\n<p>The image comes from <a href=\"https:\/\/netboot.xyz\/\">netboot.xyz<\/a>; as you can see from their website, it is a project that simplifies the process of installing a lot of different operating systems via PXE. I started with it at first for my experiments. Obviously menus and automation do not play nice together, but in the process of learning I took this extra step.<\/p>\n\n<h2 id=\"hello-world\">Hello world<\/h2>\n\n<p>To give you some context, this is the script I used to start the installation wizard for Ubuntu:<\/p>\n\n<pre><code>#!ipxe\ndhcp net0\n\nset base-url http:\/\/archive.ubuntu.com\/ubuntu\/dists\/focal\/main\/installer-amd64\/current\/legacy-images\/netboot\/ubuntu-installer\/amd64\/\nkernel ${base-url}\/linux console=ttyS1,115200n8\ninitrd ${base-url}\/initrd.gz\nboot\n<\/code><\/pre>\n\n<p>I hope it looks familiar, some code at least. In order to reach the internet you need an IP, and the way to get one, if you are lazy like me, is to use DHCP (the alternative is to set one statically). The first command does exactly that: it asks the DHCP server for an IP for the network interface <code>net0<\/code>.<\/p>\n\n<p>When the IP is set, iPXE reaches <code>ubuntu.com<\/code> to get the kernel and the initrd: everything I need to boot an OS in RAM.<\/p>\n\n<h2 id=\"lets-try\">let\u2019s try<\/h2>\n\n<p>I am using <a href=\"https:\/\/packet.com\">Packet<\/a> for my tests because it provides the low level capabilities I need: it supports creating a server without an OS, booting with iPXE. 
You can register and do it yourself; <code>gophernetes<\/code> is a coupon that will give you $30 of credit.<\/p>\n\n<p>When you request a device (a server) on Packet you can select the operating system; we don\u2019t need one, so you can select <code>Custom iPXE<\/code> because we are going to install it ourselves.<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"\/img\/packet-create-device.png\" alt=\"A screenshot from packet.com about how to create an on demand device with Custom iPXE\" class=\"img-fluid w-75\" \/><\/p>\n\n<p>There are two ways we can inject our script into iPXE in order to teach the server what to do when it boots: the first one is giving a URL (I use a gist, raw link), the second is via user data. The script I used is the one pasted above. You can create a gist and paste the link in \u201ciPXE Script URL\u201d, or you can use the user data, as I am doing right now.<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"\/img\/packet-user-data.png\" alt=\"A screenshot from packet.com about how to pass user data to a server\" class=\"img-fluid w-75\" \/><\/p>\n\n<p>As soon as the machine starts you can click on its name to get its details, and you can ssh into the \u201cOut-of-Band Console\u201d:<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"\/img\/packet-out-of-band.png\" alt=\"A screenshot from packet.com that shows where to locate the out-of-band console\" class=\"img-fluid w-75\" \/><\/p>\n\n<p class=\"blockquote text-center\"><img src=\"\/img\/packet-out-of-band-ssh.png\" alt=\"A screenshot from packet.com that shows how to get the ssh command to use the out-of-band console\" class=\"img-fluid w-75\" \/><\/p>\n\n<p>ALERT: if you are doing this activity, remember to enable OpenSSH when you follow the installation wizard, otherwise you won\u2019t be able to SSH into the server at the end!<\/p>\n\n<p>When deploying the server, the code you passed gets chained by the Packet iPXE. And you should see the Ubuntu wizard ready for you:<\/p>\n\n<p class=\"blockquote text-center\"><img src=\"\/img\/packet-ubuntu-install-wizard.png\" alt=\"A screenshot from my terminal that shows the first wizard for ubuntu\" class=\"img-fluid\" \/><\/p>\n\n<p>At the end of the wizard you will get a persisted operating system on the server itself; it will survive a reboot and it will be just like any other server you used in the past, but better, because you know how you installed the OS! Get its IP and ssh in!<\/p>\n\n<h2 id=\"preseed\">Preseed<\/h2>\n\n<p>Debian-like operating systems such as Ubuntu support a technology called <a href=\"https:\/\/help.ubuntu.com\/lts\/installation-guide\/s390x\/apb.html\">preseed<\/a>; in practice it is a text file that contains the answers to all the questions the Ubuntu wizard asks.<\/p>\n\n<p>In this way no point and click is required. 
I put together a file here and I\nuploaded it as a gist:<\/p>\n\n<pre><code>#### Contents of the preconfiguration file (for stretch)\n### Localization\n# Preseeding only locale sets language, country and locale.\nd-i debian-installer\/locale string en_US.UTF-8\nd-i localechooser\/supported-locales multiselect en_US.UTF-8\nd-i console-setup\/ask_detect boolean false\nd-i keyboard-configuration\/xkb-keymap select GB\n\n# Keyboard selection.\n# Disable automatic (interactive) keymap detection.\nd-i console-setup\/ask_detect boolean false\nd-i keyboard-configuration\/xkb-keymap select us\n\n# netcfg will choose an interface that has link if possible. This makes it\n# skip displaying a list if there is more than one interface.\nd-i netcfg\/choose_interface select auto\n\n# Any hostname and domain names assigned from dhcp take precedence over\n# values set here. However, setting the values still prevents the questions\n# from being shown, even if values come from dhcp.\nd-i netcfg\/get_hostname string unassigned-hostname\nd-i netcfg\/get_domain string unassigned-domain\n\n# Disable that annoying WEP key dialog.\nd-i netcfg\/wireless_wep string\n\n### Mirror settings\nd-i mirror\/country string manual\nd-i mirror\/http\/hostname string archive.ubuntu.com\nd-i mirror\/http\/directory string \/ubuntu\nd-i mirror\/http\/proxy string\n\n# Root password, either in clear text\nd-i passwd\/root-password password rootroot\n#d-i passwd\/root-password-again password rootroot\n# or encrypted using a crypt(3)  hash.\n#d-i passwd\/root-password-crypted password [crypt(3) hash]\n\n# To create a normal user account.\nd-i passwd\/user-fullname string yay\nd-i passwd\/username string yay\n# Normal user's password, either in clear text\nd-i passwd\/user-password password norootnoroot\nd-i passwd\/user-password-again password norootnoroot\n\n# Set to true if you want to encrypt the first user's home directory.\nd-i user-setup\/encrypt-home boolean false\n\n### Clock and time zone setup\n# Controls whether or not the hardware clock is set to UTC.\nd-i clock-setup\/utc boolean true\n\n# You may set this to any valid setting for $TZ; see the contents of\n# \/usr\/share\/zoneinfo\/ for valid values.\nd-i time\/zone string US\/Eastern\n\n# Controls whether to use NTP to set the clock during the install\nd-i clock-setup\/ntp boolean true\n# LG provided NTP, should be replaced!\nd-i clock-setup\/ntp-server string ntp.ubuntu.com\n\n### Partitioning\nd-i preseed\/early_command string umount \/media || true\nd-i partman-auto\/method string lvm\nd-i partman-auto-lvm\/guided_size string max\nd-i partman-lvm\/device_remove_lvm boolean true\nd-i partman-lvm\/confirm boolean true\nd-i partman-lvm\/confirm_nooverwrite boolean true\nd-i partman-auto-lvm\/new_vg_name string main\nd-i partman-md\/device_remove_md boolean true\nd-i partman-md\/confirm boolean true\nd-i partman-partitioning\/confirm_write_new_label boolean true\nd-i partman\/choose_partition select finish\nd-i partman\/confirm boolean true\nd-i partman\/confirm_nooverwrite boolean true\nd-i partman-basicmethods\/method_only boolean false\n\n### Partitioning\nd-i partman-auto\/method string lvm\nd-i partman-lvm\/device_remove_lvm boolean true\nd-i partman-lvm\/confirm boolean true\nd-i partman-lvm\/confirm_nooverwrite boolean true\n\n### Package selection\ntasksel tasksel\/first multiselect ubuntu-desktop\n\n# Individual additional packages to install\nd-i pkgsel\/include string openssh-server build-essential\n# Whether to upgrade packages after debootstrap.\n# Allowed 
values: none, safe-upgrade, full-upgrade\nd-i pkgsel\/upgrade select full-upgrade\n\nd-i pkgsel\/update-policy select none\n\n# Individual additional packages to install\nd-i pkgsel\/include string openssh-server \\\n    vim \\\n    git \\\n    tmux \\\n    build-essential \\\n    telnet \\\n    wget \\\n    curl\n\n# This is fairly safe to set, it makes grub install automatically to the MBR\n# if no other operating system is detected on the machine.\nd-i grub-installer\/only_debian boolean true\n\n# This one makes grub-installer install to the MBR if it also finds some other\n# OS, which is less safe as it might not be able to boot that other OS.\nd-i grub-installer\/with_other_os boolean true\n\n# Avoid that last message about the install being complete.\nd-i finish-install\/reboot_in_progress note\n<\/code><\/pre>\n\n<p>It is a bit weird, but if you are familiar with\nthe Ubuntu installation process I am sure you can spot some similarities.<\/p>\n\n<p>At this point we have to pass some <code>cmdline<\/code> arguments to the kernel in order to\nhave it download the preseed file from a raw gist and to tell the kernel that\nthe installation is automatic:<\/p>\n\n<pre><code>#!ipxe\ndhcp net0\n\nset base-url http:\/\/archive.ubuntu.com\/ubuntu\/dists\/focal\/main\/installer-amd64\/current\/legacy-images\/netboot\/ubuntu-installer\/amd64\/\nset preseed-url https:\/\/gist.githubusercontent.com\/gianarb\/acea1ca5b73a318fd74cbb002cae21f3\/raw\/76e5d036ee28c485cc7cf42a317c99e678f08a6c\/ubuntu.preseed\nkernel ${base-url}\/linux console=ttyS1,115200n8 auto=true fb=false priority=critical preseed\/locale=en_GB url=${preseed-url} DEBCONF_DEBUG=5\ninitrd ${base-url}\/initrd.gz\nboot\n<\/code><\/pre>\n\n<p>The mechanism is the same as before: you can create a gist and link it during\nserver creation, or you can paste this as cloud-init.<\/p>\n\n<p>At this point you can connect to the <code>Out of Band<\/code> console via SSH and the\ninstallation wizard will look like a movie! When the process is over the server\nreboots and you will be able to SSH in using username <code>yay<\/code> and password\n<code>norootnoroot<\/code>. If you are looking for the root password have a look at the\npreseed file, the answer is there!<\/p>\n\n<p>Preseed is probably not what you want in the end, but it is an easy enough way\nto get to a persisted OS. It does a lot at runtime; as a consequence it is time\nconsuming and it can be flaky when reaching the network.\n<a href=\"https:\/\/twitter.com\/thebsdbox\">Dan<\/a> pointed me to other ways to do it using\n<code>raw<\/code> images, something I will probably experiment with moving forward.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>That\u2019s it, this is a layer I am not familiar with. Packet has an open\nsource project called <a href=\"https:\/\/tinkerbell.org\/\">Tinkerbell<\/a> that does bare metal\nprovisioning.<\/p>\n\n<p>I want to know what it does under the hood! In practice it is an open source\nversion of the provisioner used internally to set up servers. 
We are moving\ntowards it as well!<\/p>\n\n<p>A lot of the underlying technologies, like preseed and PXE, are 20 years old, and as I\nlike to say: \u201cI have a lot of new things to learn from the 80s\u201d.<\/p>\n\n<p>I don\u2019t know where this will bring me, but I think the next articles will look\nlike:<\/p>\n\n<ol>\n  <li>How to get an iPXE server to serve my own kernel and initrd<\/li>\n  <li>How to get a set of RPIs provisioned<\/li>\n<\/ol>\n\n<p>Point me in the right direction, or reach out if you are curious to know more about this\ntopic.<\/p>\n"},{"title":"Show Me Your Code with Ivan Pedrazas: Application Lifecycle and GitOps","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-application-lifecycle-gitops-with-ivan"}},"description":"We are gonna have a chat with Ivan Pedrazas: we will go over a simple API, how to build it, how to deploy it, the issues around it, and how GitOps helps.","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-05-05T09:00:27+00:00","published":"2020-05-05T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-application-lifecycle-gitops-with-ivan","content":"<p>Everyone speaks about GitOps these days! Everyone in their own way. Some\npeople think that GitOps means pulling git from inside a kubernetes cluster,\nother people think it has to be done in CI.<\/p>\n\n<p>I don\u2019t know how it works or how it should work, but my friend Ivan Pedrazas\n(@ipedrazas) knows, so we are gonna learn from him.<\/p>\n\n<p>The idea is to take a simple application and try to figure out how to apply\nGitOps to it. Let\u2019s see how far we go.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\"><p lang=\"en\" dir=\"ltr\">My plan is to show a\nsimple app (frontend in vue.js and a flask API), how to build the different\ncomponents, how to deploy them using GitOps. Finally, we will modify different\nparts of the app and build and deploy and rinse and repeat :)<\/p>&mdash; Ivan\nPedrazas (@ipedrazas) <a href=\"https:\/\/twitter.com\/ipedrazas\/status\/1266013907057065985?ref_src=twsrc%5Etfw\">May\n28, 2020<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li><a href=\"https:\/\/ismenta.slack.com\/join\/shared_invite\/zt-ex04og6p-i9GsneKysUCHc3g6su7y1Q#\/\">Menta - Slack Channel<\/a><\/li>\n  <li><a href=\"https:\/\/gitops-community.github.io\/kit\/#gitops-days-2020-youtube-playlist\">https:\/\/gitops-community.github.io\/kit\/#gitops-days-2020-youtube-playlist<\/a><\/li>\n<\/ul>\n"},{"title":"Show Me Your Code with Enrique Paredes: Kubernetes Permission Manager","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-kubernetes-permission-manager"}},"description":"Enrique will share tips and code around kubernetes permission-manager, a project that brings sanity to Kubernetes RBAC and Users management, Web UI FTW","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-05-05T09:00:27+00:00","published":"2020-05-05T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-kubernetes-permission-manager","content":"<p>When it comes to authentication and authorization, Kubernetes is extremely\ncomplicated.<\/p>\n\n<p>I think its philosophy is well described in the documentation:<\/p>\n\n<blockquote>\n  <p>Normal users are assumed to be managed by an outside, independent service. 
An\nadmin distributing private keys, a user store like Keystone or Google\nAccounts, even a file with a list of usernames and passwords. In this regard,\nKubernetes does not have objects which represent normal user accounts. Normal\nusers cannot be added to a cluster through an API call.<\/p>\n<\/blockquote>\n\n<p>Kubernetes has users, but they have to come from the outside; it is not its\nbusiness to care about them. For authorization it uses RBAC, and you have a very\nlong list of possibilities and combinations between actions like LIST, WATCH,\nCREATE, DELETE and resources: pods, deployments, ingress, services\u2026<\/p>
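\n\n<p>To make the combinatorics concrete, here is what a minimal read-only Role for pods in a\nsingle namespace looks like, written with the Go types from <code>k8s.io\/api<\/code> since that is\nwhat a tool like permission-manager ends up manipulating (my sketch, not code from the\nproject):<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\n\trbacv1 \"k8s.io\/api\/rbac\/v1\"\n\tmetav1 \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"\n)\n\nfunc main() {\n\t\/\/ One Role (plus a RoleBinding) is needed for every combination of\n\t\/\/ verbs and resources you want to allow: this is where the sanity of\n\t\/\/ a dedicated tool starts to pay off.\n\tpodReader := rbacv1.Role{\n\t\tObjectMeta: metav1.ObjectMeta{Namespace: \"default\", Name: \"pod-reader\"},\n\t\tRules: []rbacv1.PolicyRule{{\n\t\t\tAPIGroups: []string{\"\"}, \/\/ \"\" selects the core API group\n\t\t\tResources: []string{\"pods\"},\n\t\t\tVerbs:     []string{\"get\", \"list\", \"watch\"},\n\t\t}},\n\t}\n\tfmt.Printf(\"%+v\\n\", podReader)\n}\n<\/code><\/pre>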
\n\n<p>Sighup is a company based in Italy and well known for their contributions to the\nCloud Native ecosystem. One of their latest projects is called\n<a href=\"https:\/\/github.com\/sighupio\/permission-manager\">permission-manager<\/a>; it is open\nsource and it can be described as follows: \u201cit is a project that brings sanity to\nKubernetes RBAC and Users management, Web UI FTW\u201d.<\/p>\n\n<p>I will host Enrique (<a href=\"https:\/\/twitter.com\/iknite\">@iknite<\/a>) to talk about the\nchallenges he had when writing such a crucial project, hoping to see some code!<\/p>\n\n<p>Links:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/kubernetes.io\/docs\/reference\/access-authn-authz\/authentication\/\">Kubernetes\nAuthentication<\/a><\/li>\n  <li><a href=\"https:\/\/sighup.io\/\">Sighup website<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/sighupio\/permission-manager\">github.com\/sighupio\/permission-manager<\/a><\/li>\n<\/ul>\n"},{"title":"Show Me Your Code with Philippe and Giacomo: Vault plugin for Wireguard","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-vault-wireguard"}},"description":"Show me your code has two special guests: Giacomo Tirabassi from InfluxData and Philippe Scorsolini from Sighup. We will code a new Hashicorp Vault plugin to generate Wireguard configuration. Vault is a popular secret storage developed in open source. Wireguard is a networking module and VPN that is part of the Linux kernel. Let's have some fun","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-04-30T09:00:27+00:00","published":"2020-04-30T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-vault-wireguard","content":"<p>When: Friday 1st May 10am GMT+2 (4am EDT)<\/p>\n\n<h2 id=\"lets-write-a-vault-plugin-for-wireguard\">Let\u2019s write a Vault plugin for Wireguard<\/h2>\n\n<p>1st of May. We are on vacation, the perfect day to start a side project.<\/p>\n\n<p>Giacomo and Philippe started coding an integration between <a href=\"https:\/\/www.vaultproject.io\/\">HashiCorp\nVault<\/a> and\n<a href=\"https:\/\/www.wireguard.com\/\">Wireguard<\/a>. This is great by itself. Vault is cool,\nWireguard is awesome, what more do you need?<\/p>\n\n<p>The integration is a Vault plugin, and we decided to stream the session on\n<a href=\"https:\/\/twitch.tv\/gianarb\">Twitch<\/a> because that\u2019s what cool kids do these days.<\/p>\n\n<p>We actually made it. As with every side project it is not ready at all, but we got\nthe boilerplate code required by a Vault plugin to work and the project is\navailable on GitHub:\n<a href=\"https:\/\/github.com\/gitirabassi\/vault-plugin-secrets-wireguard\">gitirabassi\/vault-plugin-secrets-wireguard<\/a>.<\/p>\n\n<p>After more than an hour of fun we had to stop, but they promised we will have a\nfollow-up meeting as soon as they have an E2E workflow to show me!<\/p>\n\n<h2 id=\"about-giacomo\">About Giacomo<\/h2>\n\n<p><a href=\"https:\/\/twitter.com\/gitirabassi\">Giacomo<\/a> works as Site Reliability Engineer at\nInfluxData. He is an expert on Kubernetes (\ud83d\udcbc AWS DA, CKA, CKAD), containers,\nTerraform and everything coming from Hashicorp! Traveler and cook, in a\nquest for flavors \ud83e\udd16<\/p>\n\n<h2 id=\"about-philippe\">About Philippe<\/h2>\n\n<p><a href=\"https:\/\/twitter.com\/Phisc0\">Philippe<\/a> works as DevOps engineer at Sighup.\nComputer Science and Engineering M.Sc. Student @ Politecnico di Milano. Linux\nuser, open source lover and explorer of new technologies.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/gitirabassi\/vault-plugin-secrets-wireguard\">GitHub repository for the project<\/a><\/li>\n  <li><a href=\"https:\/\/www.vaultproject.io\/\">HashiCorp Vault<\/a><\/li>\n  <li><a href=\"https:\/\/www.wireguard.com\/\">Wireguard<\/a><\/li>\n  <li><a href=\"https:\/\/www.vaultproject.io\/docs\/internals\/plugins\">Vault: Plugin System<\/a><\/li>\n<\/ul>\n"},{"title":"How to write documentation efficiently","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-to-write-documentation-efficiently"}},"description":"A bunch of experiences and considerations about how to write documentation efficiently. Without wasting too much time or, even more important, without getting too bored or stressed out.","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2020-04-18T09:00:27+00:00","published":"2020-04-18T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/how-to-write-documentation-efficiently","content":"<p>You have to remember two things to effectively read this article:<\/p>\n\n<ol>\n  <li>I have a blog where I create content with a good frequency, and I do it\nfor fun, so I like to write.<\/li>\n  <li>I work remotely; the HQ for InfluxData is in San Francisco, which means I am +9\nhours from a lot of my colleagues. Writing is a solid communication channel I use\nevery day at work because I think it is great, and because I do not have\nmany alternatives.<\/li>\n<\/ol>\n\n<p class=\"text-center text-center\"><img src=\"\/img\/child-writes.jpg\" alt=\"Child writing on paper\" class=\"img-fluid\" \/><\/p>\n\n<h2 id=\"develop-a-workflow\">Develop a workflow<\/h2>\n\n<p>If you do not like cleaning your apartment, one strategy is to keep it as clean\nand tidy as possible day by day; this way you won\u2019t have to\nspend a full weekend cleaning every corner of it. 
Spread a boring task out\nso that it won\u2019t make you too tired.\nAn effective way is to write along the way, side by side with the code you are\ndeveloping.<\/p>\n\n<p>I can highlight a few steps in the process of writing code: analysis, design,\nvalidation, PoC, rollout. Those phases are not unique; they repeat continuously over\nmany iterations. I write during all of those steps, many times. Iterations do not\nonly help your code, they make documentation solid: you can check for typos and\nso on.<\/p>\n\n<p>If you make writing an ongoing process you will find yourself at the end where\nthe only thing left is to organize and move what you wrote into a shape that readers\nwill find familiar.<\/p>\n\n<h2 id=\"find-the-right-place\">Find the right place<\/h2>\n\n<p>There are many types of documentation, because there are a lot of stakeholders\nand many phases to document (some of them were listed previously).<\/p>\n\n<p>If I have to think about my stakeholders they are:<\/p>\n\n<ol>\n  <li>project managers<\/li>\n  <li>documentation team if you are lucky, otherwise let\u2019s say customers or end users.<\/li>\n  <li>VP or tech leads.<\/li>\n  <li>your teammates or reviewers<\/li>\n<\/ol>\n\n<p>All those people will enjoy reading a specific point of view, or phase of work.<\/p>\n\n<p>I think teammates or reviewers are kind of happy to read about the process you followed\nto design and implement what you wrote, and they will really appreciate reading\ninline documentation for your code, doc blocks and so on.<\/p>\n\n<p>Project Managers will enjoy reading considerations on issues and things like\nthat; they are super valuable and I end up copy-pasting a lot from those\ndiscussions.<\/p>\n\n<p>End users obviously need functional documentation that they can follow, and also a\nbit about internal design and monitoring, mainly to get them on board with the work\nyou developed. It really depends on your audience. We are lucky and we have a\nteam that is capable of reading code and figuring out what we did, but it is a\nnice exercise to help them by explaining your work well.<\/p>\n\n<p>VPs and tech leaders are usually focused on the design: why you did something one\nway rather than another, the trade-offs you accepted, the ones you avoided, why\nand how. I like the idea of writing this kind of documentation in the code itself.<\/p>\n\n<p>I am fascinated when I open C codebases where the first thousand lines of code\nare documentation. In Go, packages can have a file called <code>.\/doc.go<\/code> that <code>godoc<\/code>\nwill render as a package introduction. If you work with the kind of tech lead or\nVP who is not used to reading code anymore, you can always copy-paste it into a\nGoogle Doc.<\/p>\n\n<h2 id=\"write-a-lot\">Write a lot<\/h2>\n\n<p>This point explains itself. The more you write during all the phases of your\nwork, the less you will have to do all at once at the end of the code iteration,\nwhich is usually when I end up tired of the code I wrote, even more so when it took\nweeks and was not easy to work on.<\/p>\n\n<h2 id=\"pair-on-documentation\">Pair on documentation<\/h2>\n\n<p>I am not a fan of pair programming but recently I changed my mind a little bit,\nprobably because of all this social isolation. Before jumping straight into writing\ncode, my teammate and I spent two solid hours over two iterations writing the <code>.\/doc.go<\/code>\nfile together. The outcome made me happy, I hope it will work the same for you.<\/p>
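\n\n<p>If you have never seen one, a <code>.\/doc.go<\/code> is nothing special: a regular Go file\nthat holds only the package comment. A minimal sketch, with an invented package name:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Package discount applies promotional rules at checkout.\n\/\/\n\/\/ The package exposes a single entry point, Calculate; everything else is an\n\/\/ implementation detail. Design notes and the trade-offs we accepted live in\n\/\/ this comment, so godoc renders them as the package introduction.\npackage discount\n<\/code><\/pre>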
\n\n<p class=\"text-center text-center\"><img src=\"\/img\/toomany-files.jpg\" alt=\"Too many files\" class=\"img-fluid\" \/><\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>This is my experience when writing documentation, but as I said, I love to do\nit! Do you have anything to share about it? I am particularly curious about how\nand if you READ somebody else\u2019s documentation: when it is written internally by your\nteammates, how do you evaluate it, and do you have any suggestions to make it\nfriendlier? Because it is good to write, but people have to be able to read it and\nget what they need out of it without wasting too much time.<\/p>\n"},{"title":"Show Me Your Code with Dan and Walter: How to contribute to OpenTelemetry JS","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-otel-nodejs"}},"description":"Show me your code has two special guests: Walter, CTO of Corley SRL, the company behind CloudConf in Turin, and Dan, Engineer at Dynatrace and maintainer of OpenTelemetry JS. During this show we will talk about OpenTelemetry and NodeJS. Walter wrote a plugin for instrumenting mongoose with opentelemetry. We are gonna see how he did it, considerations from Dan and so on","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-04-11T09:00:27+00:00","published":"2020-04-11T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-otel-nodejs","content":"<p>When: Thursday 16th 6-7pm GMT+2 (9am PDT)<\/p>\n\n<h2 id=\"opentelemetry-for-js-and-how-to-contribute\">OpenTelemetry for JS and how to contribute<\/h2>\n\n<p>OpenTelemetry is a specification and a set of instrumentation libraries developed\nin open source by multiple companies such as Google, HoneyComb.io, Dynatrace,\nLightStep and many more!<\/p>\n\n<p>OpenTracing and OpenCensus joined forces and started a common project\ncalled OpenTelemetry, which I hope will become the way to go in terms of code\ninstrumentation, because I really think it is something we need.<\/p>\n\n<p>Walter and his team develop in JavaScript, frontend and backend, and back in the\nday we experimented with OpenTracing, but we had some issues and it was not easy to\npick up at that time. When I tried OpenTelemetry I realized it could be a good fit for him.<\/p>\n\n<p>He tried it out and wrote his first opentelemetry instrumentation plugin, for\nmongoose, a popular library he uses that was not instrumented yet.<\/p>\n\n<p>Dan will help us figure out how they designed the opentelemetry-js implementation\nas it is today: the good, the bad and the ugly of this experience. I hope to\nget some feedback about the roadmap and future development as well, now that the\nlibrary reached its first beta release.<\/p>\n\n<h2 id=\"about-dan\">About Dan<\/h2>\n\n<p>When I was working on my observability workshop Dan gave me a huge hand,\ndrastically compensating for my very limited experience with NodeJS. Thank you for that.<\/p>\n\n<p>Dan works as Engineer at Dynatrace, and he maintains the OpenTelemetry JS\nlibrary. 
You can find him on Twitter as <a href=\"https:\/\/twitter.com\/dyladan\">@dyladan<\/a>\nand on <a href=\"https:\/\/gitter.im\/open-telemetry\/opentelemetry-node\">Gitter<\/a> discussing\nopentelemetry.<\/p>\n\n<h2 id=\"about-walter\">About Walter<\/h2>\n\n<p>Walter Dal Mut works as a Solutions Architect <a href=\"https:\/\/corley.it\/\">@Corley SRL<\/a>.\nHe is an electronic engineer who moved to Software Engineering and Cloud\nComputing Infrastructures. Passionate about technology in general and a lover of the\nopen source movement.<\/p>\n\n<p>You can follow him on <a href=\"https:\/\/twitter.com\/walterdalmut\">Twitter<\/a>\nand <a href=\"https:\/\/github.com\/wdalmut\">GitHub<\/a>.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li><a href=\"https:\/\/opentelemetry.io\/\">opentelemetry.io<\/a><\/li>\n  <li><a href=\"https:\/\/www.dynatrace.com\">dynatrace.com<\/a><\/li>\n  <li><a href=\"https:\/\/gianarb.it\/blog\/how-to-start-with-opentelemetry-in-nodejs\">How to start tracing with OpenTelemetry in NodeJS?<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/open-telemetry\/opentelemetry-js\">github.com\/open-telemetry\/opentelemetry-js<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/wdalmut\/opentelemetry-plugin-mongoose\">wdalmut\/opentelemetry-plugin-mongoose<\/a><\/li>\n<\/ul>\n"},{"title":"Show Me Your Code with Carlos and Tibor: Chat about GoReleaser and multiarch support","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-goreleaser-buildkit-multi-arch"}},"description":"Informal chat with Carlos, maintainer of GoReleaser, myself and Tibor from Docker about building Docker images with buildx and BuildKit to add support for multi-architecture images in GoReleaser.","image":"https:\/\/gianarb.it\/img\/show-me-your-code-logo.png","updated":"2020-04-08T09:00:27+00:00","published":"2020-04-08T09:00:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-goreleaser-buildkit-multi-arch","content":"<p>When: Thursday 23rd 6-7pm GMT+2 (9am PDT)<\/p>\n\n<h2 id=\"goreleaser-and-buildkit\">GoReleaser and BuildKit<\/h2>\n\n<p>The main reason why I started \u201cShow me your code\u201d is to chat with a couple\nof friends from the open source space about what they are doing and what I am doing.\nAnd ideally have a drink (beers, water or coffee, depending on the timezone) while discussing the same topic.<\/p>\n\n<p>Me, Carlos and Tibor will meet on Skype for an informal chat about two open\nsource projects I love: GoReleaser and BuildKit. The conversation will be\nstreamed on Twitch.<\/p>\n\n<p>You can follow the event live, or the recording will be available here! Watching\nit live will give you the unique opportunity to share your love for those\nprojects and your feedback about how to support multi-arch docker builds in\nGoReleaser.<\/p>\n\n<p><strong>IMPORTANT:<\/strong> The outcome of this conversation will not in any way force Carlos\nto do anything! I hope the experience you as attendees and users of GoReleaser\ncan share, and the experience Tibor has with buildkit, will lead to a possible\nintegration. 
Because I would love to have multi-arch support for my releases!<\/p>\n\n<p>I had this idea because there is a long-standing issue about this,\n<a href=\"https:\/\/github.com\/goreleaser\/goreleaser\/issues\/530\">\u201cSupport multi-platform docker\nimages #530\u201d<\/a>, and I am sure\nthat a discussion all together will be constructive and nice!<\/p>\n\n<h2 id=\"about-carlos-and-goreleaser\">About Carlos and GoReleaser<\/h2>\n\n<p>I am very excited to have <a href=\"https:\/\/twitter.com\/caarlos0\">Carlos<\/a> with me. I\nrely so much on <a href=\"https:\/\/goreleaser.com\/\">GoReleaser<\/a> and its integration with\nGitHub Actions to make my development life cycle reliable, repeatable and fast,\nand I am happy to have a chat about his project and what he will do next!<\/p>\n\n<h2 id=\"about-tibor-from-docker-and-buildkit\">About Tibor from Docker and BuildKit<\/h2>\n\n<p><a href=\"https:\/\/twitter.com\/tiborvass\">Tibor @tiborvass<\/a> has been a well-known contributor\nand maintainer for Docker since the early days. Active in various open source\ncommunities, he is now involved with BuildKit as a maintainer.<\/p>\n\n<p>We have known each other virtually, and thanks to DockerCon and other events, since I\njoined the Docker Captain program. I am happy to have him around showing\nBuildKit, buildx and the multi-arch feature.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/goreleaser\/goreleaser\">github.com\/goreleaser\/goreleaser<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/moby\/buildkit\">github.com\/moby\/buildkit<\/a><\/li>\n  <li><a href=\"https:\/\/www.youtube.com\/watch?v=5KgaisTEzC8\">BuildKit: A Modern Builder Toolkit on Top of containerd, Tonis Tiigi &amp; Akihiro Suda<\/a><\/li>\n  <li><a href=\"https:\/\/www.infoq.com\/br\/presentations\/goreleaser-lessons-learned-so-far\/\">(INFOQ) GoReleaser: lessons learned so far<\/a><\/li>\n<\/ul>\n"},{"title":"How to start tracing with OpenTelemetry in NodeJS?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-to-start-with-opentelemetry-in-nodejs"}},"description":"I developed an eight hours workshop about application monitoring and code instrumentation two years ago. This year I updated it to use OpenTelemetry, and this is what I learned instrumenting a NodeJS application.","image":"https:\/\/gianarb.it\/img\/logo\/otel-black-stacked.svg","updated":"2020-04-07T09:08:27+00:00","published":"2020-04-07T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/how-to-start-with-opentelemetry-in-nodejs","content":"<p>This post is to celebrate the first beta release of OpenTelemetry for NodeJS\n<i class=\"fas fa-glass-cheers\"><\/i><\/p>\n\n<p>Recently I developed a workshop about code instrumentation and application\nmonitoring. It is an 8 hours full immersion on logs, metrics, tracing and so on.\nI developed it last year and I gave it twice. Let me know if you are looking for\nsomething like that.<\/p>\n\n<p>Almost all of it is opensource, but I have not yet figured out a good way to make it\nusable without my brain. This year I updated it to use OpenTelemetry and\nInfluxDB v2.<\/p>\n\n<p>Anyway, the application is called\n<a href=\"https:\/\/github.com\/gianarb\/shopmany\">ShopMany<\/a>. This application does\nnot return any useful information about its state. It is an e-commerce application made of a\nbunch of services in various languages. 
Obviously one of them is in NodeJS and\nthat\u2019s the one I am gonna show you today.<\/p>\n\n<p><strong>Disclaimer<\/strong>: I can not define myself as a NodeJS developer. I wrote a bunch of\nAngularJS single page applications back in the day, I wrote some Cordova mobile\napplications ages ago. I have not written any production JS code since 2015, more\nor less.<\/p>\n\n<h2 id=\"first-approach\">First approach<\/h2>\n\n<p>I finished instrumenting the application the very day the maintainers\ntagged the first beta release. Overnight I had to update libraries and test\ncode. Lucky me.<\/p>\n\n<p>Learning how to properly instrument\n<a href=\"https:\/\/github.com\/gianarb\/shopmany\/tree\/master\/discount\">discount<\/a> required a\nlot of digging into the actual\n<a href=\"https:\/\/github.com\/open-telemetry\/opentelemetry-js\">opentelemetry-js<\/a> code, but\nluckily for us it has a lot of examples and the library is designed to load a\nbunch of useful modules that are able to instrument the application by themselves.\nThe community is very helpful and you can chat via\n<a href=\"https:\/\/gitter.im\/open-telemetry\/opentelemetry-js\">Gitter<\/a>.<\/p>\n\n<h2 id=\"getting-started\">Getting Started<\/h2>\n\n<p>I am using ExpressJS and OpenTelemetry has a plugin for it that you can load,\nand it instruments the app by itself; same for MongoDB, which is the package I am\nusing.<\/p>\n\n<p>Those are the dependencies I installed in my application; all of them are\nprovided by the repository I linked above:<\/p>\n\n<pre><code>\"@opentelemetry\/api\": \"^0.5.0\",\n\"@opentelemetry\/exporter-jaeger\": \"^0.5.0\",\n\"@opentelemetry\/node\": \"^0.5.0\",\n\"@opentelemetry\/plugin-http\": \"^0.5.0\",\n\"@opentelemetry\/plugin-mongodb\": \"^0.5.0\",\n\"@opentelemetry\/tracing\": \"^0.5.0\",\n\"@opentelemetry\/plugin-express\": \"^0.5.0\"\n<\/code><\/pre>\n\n<p>I created a <code>.\/tracer.js<\/code> file that initializes the tracer; I have added inline\ndocumentation to explain the crucial parts of it:<\/p>\n\n<pre><code class=\"language-js\">'use strict';\n\nconst opentelemetry = require('@opentelemetry\/api');\nconst { NodeTracerProvider } = require('@opentelemetry\/node');\nconst { SimpleSpanProcessor } = require('@opentelemetry\/tracing');\n\/\/ I am using Jaeger as exporter\nconst { JaegerExporter } = require('@opentelemetry\/exporter-jaeger');\n\n\/\/ This is not mandatory, by default HTTP trace context propagation is used,\n\/\/ but it is not well supported by the PHP ecosystem and I have\n\/\/ a PHP service to instrument. I discovered B3 is supported\n\/\/ by all the languages I was instrumenting\nconst { B3Propagator } = require('@opentelemetry\/core');\n\nmodule.exports = (serviceName, jaegerHost, logger) =&gt; {\n  \/\/ A lot of those plugins are automatically loaded when you install them.\n  \/\/ So if you do not use express for example you do not have to enable all\n  \/\/ those plugins manually. 
But Express is not auto enabled so I had to add them\n  \/\/ all\n  const provider = new NodeTracerProvider({\n    plugins: {\n      mongodb: {\n        enabled: true,\n        path: '@opentelemetry\/plugin-mongodb',\n      },\n      http: {\n        enabled: true,\n        path: '@opentelemetry\/plugin-http',\n          \/\/ It is a good idea to ignore the health endpoint or\n          \/\/ others if you do not need to trace them.\n          ignoreIncomingPaths: [\n            '\/',\n            '\/health'\n          ]\n      },\n      express: {\n        enabled: true,\n        path: '@opentelemetry\/plugin-express',\n      },\n    }\n  });\n\n  \/\/ Here is where I configured the exporter, setting the service name\n  \/\/ and the jaeger host. The logger is helpful to track errors from the\n  \/\/ exporter itself\n  let exporter = new JaegerExporter({\n    logger: logger,\n    serviceName: serviceName,\n    host: jaegerHost\n  });\n\n  provider.addSpanProcessor(new SimpleSpanProcessor(exporter));\n  provider.register({\n    propagator: new B3Propagator(),\n  });\n  \/\/ The provider is registered globally; return a tracer so you can\n  \/\/ retrieve it from everywhere else in the app\n  return opentelemetry.trace.getTracer(serviceName);\n};\n<\/code><\/pre>\n\n<p>You will be thinking: that\u2019s too easy! You are right, the nature of NodeJS\nmakes tracing very code agnostic. With this configuration you get a lot \u201cfor\nfree\u201d.<\/p>\n\n<p>You get a bunch of spans for every HTTP request that ExpressJS serves, plus a\nspan for every MongoDB query. All of them with useful information like the\nstatus code, path, user agents, query statements and so on.<\/p>\n\n<p>We have to include it in <code>.\/server.js<\/code>, the entrypoint for our NodeJS\napplication:<\/p>\n\n<pre><code class=\"language-js\">'use strict';\n\nconst url = process.env.DISCOUNT_MONGODB_URL || 'mongodb:\/\/discountdb:27017';\nconst jaegerHost = process.env.JAEGER_HOST || 'jaeger';\n\nconst logger = require('pino')()\n\n\/\/ Import and initialize the tracer\nconst tracer = require('.\/tracer')('discount', jaegerHost, logger);\n\nvar express = require(\"express\");\nvar app = express();\n\nconst MongoClient = require('mongodb').MongoClient;\nconst dbName = 'shopmany';\nconst client = new MongoClient(url, { useNewUrlParser: true });\n\nconst expressPino = require('express-pino-logger')({\n  logger: logger.child({\"service\": \"httpd\"})\n})\n<\/code><\/pre>\n\n<p>As I told you, that\u2019s it! 
With this code you have enough to make your NodeJS\napplication show up in your traces.<\/p>\n\n<p>The instrumented version of the application is available here:\n<a href=\"https:\/\/github.com\/gianarb\/shopmany\/tree\/discount\/opentelemetry\/discount\">github.com\/gianarb\/shopmany\/tree\/discount\/opentelemetry<\/a><\/p>\n\n<h2 id=\"understand-the-project\">Understand the project<\/h2>\n\n<p>I tend to check out projects when in the process of learning how they work.\nDocumentation is useful but always incomplete for such fast-moving projects.<\/p>\n\n<p>I have to say that the scaffolding is clear even for a not-so-fluent NodeJS\ndeveloper like me.<\/p>\n\n<pre><code>$ tree -L 1\n.\n\u251c\u2500\u2500 benchmark\n\u251c\u2500\u2500 CHANGELOG.md\n\u251c\u2500\u2500 codecov.yml\n\u251c\u2500\u2500 CONTRIBUTING.md\n\u251c\u2500\u2500 doc\n\u251c\u2500\u2500 examples\n\u251c\u2500\u2500 getting-started\n\u251c\u2500\u2500 karma.base.js\n\u251c\u2500\u2500 karma.webpack.js\n\u251c\u2500\u2500 lerna.json\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 package.json\n\u251c\u2500\u2500 packages\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 RELEASING.md\n\u251c\u2500\u2500 scripts\n\u251c\u2500\u2500 tslint.base.js\n\u2514\u2500\u2500 webpack.node-polyfills.js\n<\/code><\/pre>\n\n<p>I would define it as a monorepo, and it uses\n<a href=\"https:\/\/github.com\/lerna\/lerna\">lerna<\/a> to deliver multiple packages from the\nsame repository.<\/p>\n\n<p><code>examples<\/code> contains workable examples of how to use the different <code>packages<\/code>.<\/p>\n\n<pre><code>$ tree -L 1 .\/examples\/\n.\/examples\/\n\u251c\u2500\u2500 basic-tracer-node\n\u251c\u2500\u2500 dns\n\u251c\u2500\u2500 express\n\u251c\u2500\u2500 grpc\n\u251c\u2500\u2500 grpc_dynamic_codegen\n\u251c\u2500\u2500 http\n\u251c\u2500\u2500 https\n\u251c\u2500\u2500 ioredis\n\u251c\u2500\u2500 metrics\n\u251c\u2500\u2500 mysql\n\u251c\u2500\u2500 opentracing-shim\n\u251c\u2500\u2500 postgres\n\u251c\u2500\u2500 prometheus\n\u251c\u2500\u2500 redis\n\u2514\u2500\u2500 tracer-web\n\n$ tree -L 1 .\/packages\/\n.\/packages\/\n\u251c\u2500\u2500 opentelemetry-api\n\u251c\u2500\u2500 opentelemetry-base\n\u251c\u2500\u2500 opentelemetry-context-async-hooks\n\u251c\u2500\u2500 opentelemetry-context-base\n\u251c\u2500\u2500 opentelemetry-context-zone\n\u251c\u2500\u2500 opentelemetry-context-zone-peer-dep\n\u251c\u2500\u2500 opentelemetry-core\n\u251c\u2500\u2500 opentelemetry-exporter-collector\n\u251c\u2500\u2500 opentelemetry-exporter-jaeger\n\u251c\u2500\u2500 opentelemetry-exporter-prometheus\n\u251c\u2500\u2500 opentelemetry-exporter-zipkin\n\u251c\u2500\u2500 opentelemetry-metrics\n\u251c\u2500\u2500 opentelemetry-node\n\u251c\u2500\u2500 opentelemetry-plugin-dns\n\u251c\u2500\u2500 opentelemetry-plugin-document-load\n\u251c\u2500\u2500 opentelemetry-plugin-express\n\u251c\u2500\u2500 opentelemetry-plugin-grpc\n\u251c\u2500\u2500 opentelemetry-plugin-http\n\u251c\u2500\u2500 opentelemetry-plugin-https\n\u251c\u2500\u2500 opentelemetry-plugin-ioredis\n\u251c\u2500\u2500 opentelemetry-plugin-mongodb\n\u251c\u2500\u2500 opentelemetry-plugin-mysql\n\u251c\u2500\u2500 opentelemetry-plugin-postgres\n\u251c\u2500\u2500 opentelemetry-plugin-redis\n\u251c\u2500\u2500 opentelemetry-plugin-user-interaction\n\u251c\u2500\u2500 opentelemetry-plugin-xml-http-request\n\u251c\u2500\u2500 opentelemetry-propagator-jaeger\n\u251c\u2500\u2500 opentelemetry-resources\n\u251c\u2500\u2500 opentelemetry-shim-opentracing\n\u251c\u2500\u2500 
opentelemetry-test-utils\n\u251c\u2500\u2500 opentelemetry-tracing\n\u251c\u2500\u2500 opentelemetry-web\n\u2514\u2500\u2500 tsconfig.base.json\n<\/code><\/pre>\n\n<p>The suffix of each package helps you figure out what it is:<\/p>\n\n<ul>\n  <li><code>opentelemetry-plugin-*<\/code> usually contains the code that instruments a specific\nlibrary; you can see here <code>express<\/code>, <code>http<\/code>, <code>https<\/code>, <code>dns<\/code>. Some plugins are\nloaded by the <code>NodeTracerProvider<\/code> by default. Others have to be specified. You\ncan rely on the code or read the documentation to figure it out. For\nexample <code>http<\/code> is loaded by default, but if you need <code>express<\/code> you have to load\nit up yourself, figuring out the right dependencies. At least for now.<\/li>\n  <li><code>opentelemetry-exporter-*<\/code> contains various exporters: for now Jaeger,\nPrometheus, Zipkin and the otel-collector.<\/li>\n<\/ul>\n\n<p>Anyway, what I am trying to say is that it is very intuitive, and looking here it\nis clear what you can get from this project.<\/p>\n\n<h2 id=\"plugin\">Plugin<\/h2>\n\n<p>NodeJS sounds very easy to instrument, and on the right path to get automatic\ninstrumentation right, because you can listen to function calls from the\noutside: you do not need to specifically change your code where you make a request or\nwhere you receive one, you can add tracing in a centralized location. That\u2019s how the\nprovided plugins work.<\/p>\n\n<p><a href=\"https:\/\/github.com\/othiym23\/shimmer\">Shimmer<\/a> is the library that simplifies the\ntrick. I recently had a chat with <a href=\"https:\/\/twitter.com\/walterdalmut\">Walter<\/a>\nbecause I know he works in NodeJS, and during his experiments otel was easy\nenough to fit his use case. He is currently trying it, and he discovered that\n<a href=\"https:\/\/github.com\/Automattic\/mongoose\">mongoose<\/a>, the ORM library he uses, does\nnot use the officially provided <a href=\"https:\/\/mongodb.github.io\/node-mongodb-native\/\">mongodb\ndriver<\/a>, so the\notel-plugin-mongodb was sadly not magically tracing his requests to mongodb.\nBut he is currently writing a <a href=\"https:\/\/github.com\/wdalmut\/opentelemetry-plugin-mongoose\">plugin for\nthat<\/a>, so it won\u2019t be\na problem pretty soon.<\/p>\n"},{"title":"Checklist for a new project","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/new-project-checklist"}},"description":"A personal list I developed over the years that I try to implement across projects I start or contribute to","image":"https:\/\/gianarb.it\/img\/myselfie.jpg-large","updated":"2020-04-02T09:08:27+00:00","published":"2020-04-02T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/new-project-checklist","content":"<p>Back in the day I used to start a lot of projects. From zero on GitHub; some of\nthem are still there, probably unused.<\/p>\n\n<p>Recently I started to take part in other people\u2019s projects like\n<a href=\"https:\/\/github.com\/testcontainers\">testcontainers<\/a> or\n<a href=\"https:\/\/github.com\/profefe\">profefe<\/a>. I wrote about why I do it in the\n<a href=\"\/blog\/year-in-review\">\u201c2019 year in review\u201d<\/a> post.<\/p>\n\n<p>In both cases, whether joining an existing project or starting a new one, I try to\nfollow a checklist.<\/p>\n\n<p>I developed this checklist over the years, moving parts around and extending the\nnumber of checks. 
The main goal is to validate that the project has\ngood answers to a couple of questions, related not to what it does, but to how\nit does it.<\/p>\n\n<ol>\n  <li>is it easy to onboard as a user?<\/li>\n  <li>as a new contributor is the project easy to understand?<\/li>\n  <li>as a maintainer do I have everything under control in order to waste as\nlittle time as possible?<\/li>\n<\/ol>\n\n<p>I follow the checklist when working on open source but also on closed source\nprojects, and what I like about it is that you can propose a change by yourself;\nyou can try to apply this feedback as a solo developer, hoping to make\ncontributors, maintainers and colleagues buy into it, spreading joy.<\/p>\n\n<p>But let\u2019s get to the list now.<\/p>\n\n<h3 id=\"have-a-place-where-you-can-write\">Have a place where you can write<\/h3>\n\n<p>When I start a new project, but also during the onboarding of an existing one into my\ntoolchain, I look for it in written form.<\/p>\n\n<p>I look for a readme, an installation process, a getting started guide, a\ncontribution document. It does not need to be a pretty one; a copy\/paste of the few\nbash scripts that the maintainer runs to set things up is enough.<\/p>\n\n<p>Having a place during the early days of a project where I can write what I\nthink and how I would like to get things done is important in order to design\nsomething usable and to spot misleading assumptions sooner.<\/p>\n\n<p>If you build the place for all this information it will take you one\nsecond to save it forever; it is just a matter of copy\/pasting the commands you\nrun in your terminal to spin up dependencies, build the project and so on.<\/p>\n\n<p>I like to use the README.md, CONTRIBUTOR.md and a <code>.\/docs<\/code> folder to save\neverything I am thinking about or everything I do, hoping it will make my life\neasy in a month when I will be back on that piece of code without even knowing\nit was there. The feeling you get is the same one a new person has when they look at\nyour project for the first time.<\/p>\n\n<p>There is no way you can get it right from the beginning, because there is no\ndefinition of right. On day one everything you write is mainly for yourself; in a\nmonth, with some editing, it will become the first version of the documentation for\nyour project.<\/p>\n\n<h3 id=\"logging-and-instrumentation-library\">Logging and instrumentation library<\/h3>\n\n<p>As I said at the beginning of the article, none of these checks depend on the\nbusiness logic of your application or library. Whatever you build has to speak with the\noutside world, sharing its internal state in a way that is reusable,\ncomprehensible, configurable.<\/p>\n\n<p>There are a lot of people who speak about observability, logging, tracing,\nmonitoring. Everybody has their own opinion, but from a technical point of view\nwhat you write has to be easy to troubleshoot.<\/p>\n\n<p>You do it using the right telemetry libraries. For logging I do not have any\ndoubt: in Go I use <a href=\"https:\/\/github.com\/uber-go\/zap\">zap<\/a>.<\/p>
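\n\n<p>A minimal sketch of the kind of output I am after, with made-up fields:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"time\"\n\n\t\"go.uber.org\/zap\"\n)\n\nfunc main() {\n\tlogger, err := zap.NewProduction()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer logger.Sync()\n\n\t\/\/ Key\/value pairs become JSON fields, so both humans and machines\n\t\/\/ can filter on them without parsing the message itself.\n\tlogger.Info(\"order processed\",\n\t\tzap.String(\"order_id\", \"A-1234\"),\n\t\tzap.Int(\"items\", 3),\n\t\tzap.Duration(\"took\", 42*time.Millisecond),\n\t)\n}\n<\/code><\/pre>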
\n\n<p>For an observability workshop I built, where I had to instrument 4\napplications in different languages, I selected:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/pinojs\/pino\">pino<\/a> for NodeJS<\/li>\n  <li><a href=\"https:\/\/github.com\/Seldaek\/monolog\">monolog<\/a> for PHP<\/li>\n  <li><a href=\"https:\/\/logging.apache.org\/log4j\/2.x\/\">log4j<\/a> for Java<\/li>\n<\/ul>\n\n<p>In general I look for libraries that allow me to do structured logging, meaning\nones that enable me to attach key\/value pairs to a log line. I also look for\nlogging libraries that have the concept of exporters and formats. Nothing unusual.<\/p>\n\n<p>For tracing and events I do not have a favourite one, but I would like to see\n<a href=\"https:\/\/opentelemetry.io\">Opentelemetry<\/a> become the way to go.<\/p>\n\n<h3 id=\"continuous-integration\">Continuous integration<\/h3>\n\n<p>A project without CI can hardly be called a project. Nowadays there are a lot of\nfree services that you can use, so no excuses. When I am on GitHub I go for\nActions now because they are free and embedded in the VCS itself.<\/p>\n\n<p>If you didn\u2019t write any tests, at least get the process up and running. Just run\nthe tests; they usually do not fail if empty. And there are static checkers, linters\nand things like that for every language: set them up!<\/p>\n\n<h3 id=\"continuous-delivery\">Continuous delivery<\/h3>\n\n<p>You made the CI part? You are halfway done. Releasing is important and we have\nthe tools to get it right from day one. It is a pain to do a release; there are\na lot of potentially manual steps to get right:<\/p>\n\n<ol>\n  <li>Bump version<\/li>\n  <li>Changelog<\/li>\n  <li>Compile and push binaries if it is an application<\/li>\n  <li>\u2026<\/li>\n<\/ol>\n\n<p>There are tools that help you do all of that in automation. For my apps I use\n<a href=\"https:\/\/github.com\/goreleaser\/goreleaser\">goreleaser<\/a>, for the libraries I use\n<a href=\"https:\/\/github.com\/marketplace\/actions\/release-drafter\">Release Drafter<\/a>.<\/p>\n\n<h3 id=\"testing-framework\">Testing framework<\/h3>\n\n<p>Write tests, and when you see repeated code extract it into a testing package.\n<code>zap<\/code> has <code>zaptest<\/code>; your project should have <code>yourprojecttest<\/code> as well.<\/p>\n\n<p>It is useful for yourself, because it will make writing more tests effortless,\nand if you document your testing package well, contributors will be able to use it\nwhen opening a PR, because you will have made writing tests easier for everybody.\nAs a bonus, whoever uses your libraries can use the testing package to write their\nown tests for their applications.<\/p>
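\n\n<p>To make the idea concrete, here is a tiny sketch of what such a package could look\nlike; every name in it is made up:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Package mylibtest provides helpers to test code that depends on mylib.\npackage mylibtest\n\nimport (\n\t\"testing\"\n\n\t\"example.com\/mylib\" \/\/ hypothetical library under test\n)\n\n\/\/ NewClient returns a client ready to be used in tests. It fails the test\n\/\/ instead of returning an error, so every test does not have to repeat\n\/\/ the same boilerplate.\nfunc NewClient(t *testing.T) *mylib.Client {\n\tt.Helper()\n\tc, err := mylib.NewClient(\"in-memory\")\n\tif err != nil {\n\t\tt.Fatalf(\"mylibtest: cannot create client: %v\", err)\n\t}\n\treturn c\n}\n<\/code><\/pre>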
\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>This is the list I use, and I will keep it up to date now that I wrote it down,\nadding to it or editing it, so be sure to stay around!<\/p>\n\n<p>I hope this checklist is general enough and useful to be reusable for you in\nsome of its parts.<\/p>\n\n<p>What I like about it is that I do not need to be a CTO, a maintainer or\nsomething like that to drive the adoption of the points that I think are\ncrucial; I drove the adoption of some of them even as a solo contributor.<\/p>\n"},{"title":"Why code instrumentation?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/why-code-instrumentation"}},"description":"I decided to finally create a category about code instrumentation. Because I am a developer. And I think it matters. It is important to write better code and more reliable applications that we can learn from.","image":"https:\/\/gianarb.it\/img\/got-your-back.jpg","updated":"2020-03-29T09:08:27+00:00","published":"2020-03-29T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/why-code-instrumentation","content":"<p>I am writing this blog post as a common introduction for a new category I would\nlike to write about consistently on my blog. If this is the first time you land\nhere: this is my blog, and I write about everything that catches my attention;\nsooner or later I realize I can group my posts into categories, and that\u2019s what I\nam doing now.<\/p>\n\n<p>Some of them are: <a href=\"\/planet\/assemble-kubernetes.html\">Assemble Kubernetes<\/a>,\n<a href=\"\/planet\/docker.html\">Docker<\/a>, <a href=\"\/planet\/mockmania.html\">MockMania<\/a>. This one\nwill be called <code>Code Instrumentation<\/code>.<\/p>\n\n<p>There are a lot of people writing about observability and monitoring, and I did it\nfor the last 3 years as well. I learned a lot along the way, but what I think is\ncrucial is that developers have to write code that is\nunderstandable and easy to debug where it is most valuable: in production. And\nif an application or a system is hard to figure out, we as developers play a\nmajor role in that.<\/p>\n\n<p>That\u2019s why Site Reliability Engineering (SRE) is not related to ops, servers or\nKubernetes, but is something that plays its match in your code.<\/p>\n\n<p>That\u2019s why I think SRE and DevOps are different, not at all connected.<\/p>\n\n<p>The technologies that are leading the landscape are:<\/p>\n\n<ol>\n  <li>Prometheus, but not the time series database: the client libraries and the\nexposition format, now branded by the community and the Cloud Native\nComputing Foundation (CNCF) as OpenMetrics<\/li>\n  <li>OpenTracing, OpenCensus and OpenTelemetry. They are part of the same bullet\npoint because I think about them as the consequence of each other, ending with\nwhat I hope is \u201cTHE LAST ONE\u201d: OpenTelemetry. They are instrumentation libraries\nand specifications to increase interoperability and to avoid vendor lock-in\nfor what concerns distributed tracing and metrics. I hope logs will jump\non board at some point<\/li>\n<\/ol>\n\n<h2 id=\"prometheus-and-openmetrics\">Prometheus and OpenMetrics<\/h2>\n\n<p>I wrote about this topic previously, so have a look there if you do not know\nwhat I am speaking about.<\/p>\n\n<p>I think they are worth mentioning here because that\u2019s how I learned the effect\nof good or bad code instrumentation, and the fact that it has to happen in your\ncode, when you develop it.<\/p>\n\n<p>It has the same weight as writing a good data structure, writing solid unit\ntests, or picking the right design pattern.<\/p>
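\n\n<p>To show what \u201cin your code\u201d means in practice, here is a minimal sketch with the\nGo client library; the metric and the handler are made up:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"net\/http\"\n\n\t\"github.com\/prometheus\/client_golang\/prometheus\"\n\t\"github.com\/prometheus\/client_golang\/prometheus\/promauto\"\n\t\"github.com\/prometheus\/client_golang\/prometheus\/promhttp\"\n)\n\n\/\/ The counter is declared next to the business logic: instrumentation is\n\/\/ part of the code, not something bolted on from the outside.\nvar checkoutTotal = promauto.NewCounter(prometheus.CounterOpts{\n\tName: \"shop_checkout_total\",\n\tHelp: \"Number of checkouts processed.\",\n})\n\nfunc checkout(w http.ResponseWriter, r *http.Request) {\n\tcheckoutTotal.Inc()\n\tw.WriteHeader(http.StatusNoContent)\n}\n\nfunc main() {\n\thttp.HandleFunc(\"\/checkout\", checkout)\n\t\/\/ Prometheus scrapes the exposition format from \/metrics.\n\thttp.Handle(\"\/metrics\", promhttp.Handler())\n\thttp.ListenAndServe(\":8080\", nil)\n}\n<\/code><\/pre>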
\n\n<h2 id=\"opentelemetry-otel\">OpenTelemetry (otel)<\/h2>\n\n<p>As I said, I will refer to otel when I can, not because I think OpenTracing or\nOpenCensus is bad, but because I do not see this as a religion; for me it is a\ntechnical problem, it is well spread, and it has to find a good answer.<\/p>\n\n<p>Those communities decided to merge into otel in the way they are doing it; good or\nbad? We can get a beer at some point and I will tell you. It is out of scope.<\/p>\n\n<h2 id=\"what-i-am-gonna-talk-about\">What I am gonna talk about<\/h2>\n\n<p>This is probably a long introduction for a new category, but that\u2019s it. Over\nthe last two years I tried to share what I experienced around this topic with a\nworkshop called \u201cApplication Monitoring\u201d. A lot of the articles that I will\nwrite come from there, and they are an attempt to share what I think worked or\nfailed.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li><a href=\"\/planet\/code-instrumentation.html\">All about Code Instrumentation<\/a> from my blog<\/li>\n  <li><a href=\"https:\/\/github.com\/gianarb\/shopmany\">ShopMany<\/a> is the application I developed for the workshop<\/li>\n  <li><a href=\"https:\/\/github.com\/gianarb\/workshop-observability\">Workshop notes<\/a> contains notes, exercises and solutions for the lessons I\nproposed in the workshop itself<\/li>\n  <li><a href=\"https:\/\/www.honeycomb.io\/blog\/\">honeycomb<\/a> because when you speak about o11y you have to quote them!<\/li>\n  <li><a href=\"\/tinyletter.html\">My newsletter<\/a> is probably the best way to stay in touch with the content I\ncreate<\/li>\n  <li><a href=\"https:\/\/twitter.com\/gianarb\">Twitter<\/a> is the best way to stay in touch with me<\/li>\n<\/ul>\n"},{"title":"CNCF Webinar: Continuous Profiling Go Application Running in Kubernetes","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/cncf-webinar-kubernetes-pprof-profefe"}},"description":"Slides, videos and links from a webinar I did with the CNCF about Kubernetes, profefe, Golang and pprof.","image":"https:\/\/gianarb.it\/img\/cncf-logo.png","updated":"2020-03-27T09:08:27+00:00","published":"2020-03-27T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/cncf-webinar-kubernetes-pprof-profefe","content":"<div class=\"embed-responsive embed-responsive-16by9\">\n    <iframe class=\"embed-responsive-item\" src=\"https:\/\/www.youtube.com\/embed\/SzhQZQ6VGoY\" allowfullscreen=\"\"><\/iframe>\n<\/div>\n\n<p>Microservices and Kubernetes help our architecture to scale and to be\nindependent, at the price of running many more applications. Golang provides a\npowerful profiling tool called pprof; it is useful to collect information from a\nrunning binary for later investigation. The problem is that you are not always\nthere to take a profile when needed; sometimes you do not even know when you\nneed one. That\u2019s how a continuous profiling strategy helps. Profefe is an\nopen-source project that collects and organizes profiles. Gianluca wrote a\nproject called kube-profefe to integrate Kubernetes with Profefe. Kube-profefe\ncontains a kubectl plugin to capture profiles from running pods in Kubernetes,\nlocally or on profefe. It also provides an operator to discover and continuously\nprofile applications running inside Pods.<\/p>
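\n\n<p>On the Go side everything starts from the standard library: importing\n<code>net\/http\/pprof<\/code> is enough to expose the endpoints that tools like kube-profefe\ncapture from. A minimal sketch:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"log\"\n\t\"net\/http\"\n\t_ \"net\/http\/pprof\" \/\/ registers the \/debug\/pprof\/* handlers on the default mux\n)\n\nfunc main() {\n\t\/\/ With the handlers registered, a CPU profile can be captured on demand:\n\t\/\/   go tool pprof http:\/\/localhost:6060\/debug\/pprof\/profile?seconds=30\n\tlog.Fatal(http.ListenAndServe(\"localhost:6060\", nil))\n}\n<\/code><\/pre>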
\n\n<p>A bunch of links for you:<\/p>\n\n<ul>\n  <li><a href=\"\/blog\/go-continuous-profiling-profefe\">My article: Continuous profiling in Go with Profefe<\/a><\/li>\n  <li><a href=\"\/blog\/continuous-profiling-go-apps-in-kubernetes\">My article: Continuous Profiling Go applications running in Kubernetes<\/a><\/li>\n  <li><a href=\"https:\/\/research.google\/pubs\/pub36575\/\">Google-Wide Profiling: A Continuous Profiling Infrastructure for Data Centers<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/profefe\/profefe\">Profefe on Github<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/profefe\/kube-profefe\">Kube Profefe on Github<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/google\/pprof\">google\/pprof<\/a> library on GitHub<\/li>\n  <li><a href=\"https:\/\/kubernetes.profefe.dev\">Work in progress documentation! help me out!<\/a><\/li>\n<\/ul>\n\n<div class=\"embed-responsive embed-responsive-16by9\">\n    <iframe class=\"embed-responsive-item\" src=\"\/\/speakerdeck.com\/player\/ff55e041659945bca5d31013bd999c28\" allowfullscreen=\"\"><\/iframe>\n<\/div>\n\n"},{"title":"How to do testing with zap, a popular logging library for Golang","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/golang-mockmania-zap-logger"}},"description":"How you can use logging to build assertions when testing. What the popular Golang logging library provided by Uber gives you around unit tests.","image":"https:\/\/gianarb.it\/img\/golang-mockmania.png","updated":"2020-03-24T09:08:27+00:00","published":"2020-03-24T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/golang-mockmania-zap-logger","content":"<div class=\"alert alert-dark\" role=\"alert\">\n   <div class=\"row\">\n       <div class=\"col-md-2 align-self-center\">\n          <a href=\"https:\/\/link.testproject.io\/0ak\" target=\"_blank\">\n              <img class=\"img-fluid\" src=\"\/img\/testproject-logo-small.png\" \/>\n          <\/a>\n       <\/div>\n       <div class=\"col-md-8 text-center\">\n           <a href=\"https:\/\/link.testproject.io\/0ak\" class=\"alert-link\" target=\"_blank\">TestProject<\/a> is a community all\n           about testing and you know how much I love communities! Join us.\n       <\/div>\n   <\/div>\n<\/div>\n\n<p>If you follow me on <a href=\"https:\/\/twitter.com\/gianarb\">twitter<\/a> you know that\nI am passionate about o11y, monitoring and code instrumentation.<\/p>\n\n<p>I do not see logs as random print statements that you use only when something is\nwrong: they have value. Logs are the communication channel our applications\nuse, and as developers it is our job to make them speak in a comprehensible way.<\/p>\n\n<p>Logs should be structured and in some way consistent across functions, http\nhandlers, applications, even languages, to simplify their use by algorithms and\nhuman operators alike.<\/p>\n\n<p>In Go, <a href=\"https:\/\/github.com\/uber-go\/zap\">zap<\/a> is a popular logging library provided by Uber;\nI use it almost by default for all my applications.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport \"go.uber.org\/zap\"\n\nfunc main() {\n\tlogger, _ := zap.NewProduction()\n\tdo(logger)\n}\n\nfunc do(logger *zap.Logger) {\n\tlogger.Error(\"Start doing things\")\n}\n<\/code><\/pre>\n\n<p>So logging and testing? In the same article? 
I must be really drunk!<\/p>\n\n<p class=\"text-center\"><img src=\"\/img\/kermit-frog-drunk.jpg\" alt=\"Kermit the Frog, drunk\" class=\"img-fluid\" \/><\/p>\n\n<p>When I discovered that <code>zap<\/code> comes with a testing utility package called\n<code>zaptest<\/code> I fell in love with this library even more:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"testing\"\n\n\t\"go.uber.org\/zap\/zaptest\"\n)\n\nfunc Test_do(t *testing.T) {\n\tlogger := zaptest.NewLogger(t)\n\tdo(logger)\n}\n<\/code><\/pre>\n\n<p>The <code>go test<\/code> command supports the flag <code>-v<\/code> to improve the verbosity of the test\nexecution. In practice that\u2019s how you forward logs and print\nstatements to <code>stdout<\/code> during a test execution. <code>zaptest<\/code> works with that as well.<\/p>\n\n<p>Very cool, and useful if you write smoke tests, pipeline tests, or however you\ncall them, where the logs can be spammy but helpful to figure out the actual\nissue.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"testing\"\n\n\t\"go.uber.org\/zap\"\n\t\"go.uber.org\/zap\/zapcore\"\n\t\"go.uber.org\/zap\/zaptest\"\n)\n\nfunc Test_do(t *testing.T) {\n\tlogger := zaptest.NewLogger(t, zaptest.WrapOptions(zap.Hooks(func(e zapcore.Entry) error {\n\t\tif e.Level == zap.ErrorLevel {\n\t\t\tt.Fatal(\"Error should never happen!\")\n\t\t}\n\t\treturn nil\n\t})))\n\tdo(logger)\n}\n<\/code><\/pre>\n<p>You can use <code>hooks<\/code> to check for expected or unexpected logs.<\/p>\n\n<p>Hooks are executed for every log line:<\/p>\n\n<pre><code class=\"language-go\">func(e zapcore.Entry) error {\n    if e.Level == zap.ErrorLevel {\n        t.Fatal(\"Error should never happen!\")\n    }\n    return nil\n})\n<\/code><\/pre>\n\n<p>If you do not expect any error-level log line for your execution, because you are\ntesting the happy path, you can do something like this.<\/p>\n\n<p><strong>DISCLAIMER:<\/strong> This is just another way to write assertions. You may use them to enforce\nother checks, or to validate the workflow from a different point of view that\nmay be easier as a first attempt. As I usually say: \u201can easy and\npartial test is better than no test\u201d.<\/p>\n\n<p>Do not test only logs, it won\u2019t age well! Keep writing good tests!<\/p>\n"},{"title":"Show Me Your Code with Walter Dal Mut: Extend Kubernetes in NodeJS","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-walter-dal-mut-kubernetes-nodejs-informers"}},"description":"Let's try to get virtual! This is the first attempt of the CNCF Meetup in Turin to do something online! The series is called Show me your code. Walter dal Mut from Corley will be the guinea pig to test this new format. Live show on YouTube about Kubernetes and how to use shared informers to extend its capabilities in Node.js.","image":"https:\/\/gianarb.it\/img\/show-me-your-code\/ep1-thump.jpg","updated":"2020-03-13T09:08:27+00:00","published":"2020-03-13T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-walter-dal-mut-kubernetes-nodejs-informers","content":"<h2 id=\"about-walter-dal-mut\">About Walter Dal Mut<\/h2>\n\n<p>Walter Dal Mut works as a Solutions Architect <a href=\"https:\/\/corley.it\/\">@Corley SRL<\/a>.\nHe is an electronic engineer who moved to Software Engineering and Cloud\nComputing Infrastructures. 
\n\n<p><strong>DISCLAIMER:<\/strong> This is just another way to write assertions. You may use them to enforce\nother checks, or to validate the workflow from a different point of view that\nmay be easier as a first attempt. As I usually say: \u201can easy and\npartial test is better than no test\u201d.<\/p>\n\n<p>Do not test only logs, it won\u2019t age well! Keep writing good tests!<\/p>\n"},{"title":"Show Me Your Code with Walter Dal Mut: Extend Kubernetes in NodeJS","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/show-me-your-code-walter-dal-mut-kubernetes-nodejs-informers"}},"description":"Let's try to get virtual! This is the first attempt of the CNCF Meetup in Turin to do something online! The series is called Show me your code. Walter dal Mut from Corley will be the guinea pig to test this new format. Live show on YouTube about Kubernetes and how to use shared informers to extend its capabilities in Node.js.","image":"https:\/\/gianarb.it\/img\/show-me-your-code\/ep1-thump.jpg","updated":"2020-03-13T09:08:27+00:00","published":"2020-03-13T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/show-me-your-code-walter-dal-mut-kubernetes-nodejs-informers","content":"<h2 id=\"about-walter-dal-mut\">About Walter Dal Mut<\/h2>\n\n<p>Walter Dal Mut works as a Solutions Architect <a href=\"https:\/\/corley.it\/\">@Corley SRL<\/a>;\nhe is an electronic engineer who moved to Software Engineering and Cloud\nComputing Infrastructures. He is passionate about technology in general and a lover of the open\nsource movement.<\/p>\n\n<p>If you want, you can follow him on <a href=\"https:\/\/twitter.com\/walterdalmut\">Twitter<\/a>\nand <a href=\"https:\/\/github.com\/wdalmut\">GitHub<\/a>.<\/p>\n\n<h2 id=\"kubernetes-extendibility-and-nodejs\">Kubernetes extendibility and NodeJS<\/h2>\n\n<p>Almost everybody currently working on Kubernetes, developing extensions,\ncontrollers, or operators, is doing it in Go. That\u2019s reasonable because Kubernetes is\nwritten in Go and there is a lot of code that you can reuse in that language.<\/p>\n\n<p>What if you are not a Go developer?<\/p>\n\n<p>Walter coded a shared informer in NodeJS that watches and takes actions on Pod\nevents.<\/p>\n\n<h2 id=\"links\">Links<\/h2>\n\n<ul>\n  <li>The code you saw in the video lives here\n<a href=\"https:\/\/github.com\/wdalmut\/k8s-informer-ytlive\">wdalmut\/k8s-informer-ytlive<\/a><\/li>\n  <li><a href=\"https:\/\/get.oreilly.com\/ind_extending-kubernetes.html\">Extend Kubernetes O\u2019Reilly report<\/a><\/li>\n  <li><a href=\"https:\/\/engineering.bitnami.com\/articles\/a-deep-dive-into-kubernetes-controllers.html\">A deep dive into Kubernetes\ncontrollers<\/a><\/li>\n<\/ul>\n"},{"title":"How to test CLI commands made with Go and Cobra","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/golang-mockmania-cli-command-with-cobra"}},"description":"CLI commands are common in Go. Testing them is an effective way to run a big amount of code that is actually very close to the end user. I use Cobra, pflags and Viper and that's what I do when I write unit tests for Cobra commands","image":"https:\/\/gianarb.it\/img\/golang-mockmania.png","updated":"2020-03-09T09:08:27+00:00","published":"2020-03-09T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/golang-mockmania-cli-command-with-cobra","content":"<p>Almost everything is a CLI application when writing Go. At least for me. Even\nwhen I write an HTTP daemon I still have to design a UX for configuration\ninjection, environment variables, flags and things like that.<\/p>\n\n<p>The set of libraries I use is very standard: I use\n<a href=\"https:\/\/github.com\/spf13\/cobra\">Cobra<\/a>,\n<a href=\"https:\/\/github.com\/spf13\/pflag\">pflags<\/a> and occasionally\n<a href=\"https:\/\/github.com\/spf13\/viper\">Viper<\/a>. 
I can say, without a doubt, that <a href=\"https:\/\/twitter.com\/spf13\">Steve\nFrancia<\/a> is awesome!<\/p>\n\n<p>This is what a command looks like, directly from the Cobra documentation:<\/p>\n\n<pre><code>var rootCmd = &amp;cobra.Command{\n  Use:   \"hugo\",\n  Short: \"Hugo is a very fast static site generator\",\n  Long: `A Fast and Flexible Static Site Generator built with\n                love by spf13 and friends in Go.\n                Complete documentation is available at http:\/\/hugo.spf13.com`,\n  Run: func(cmd *cobra.Command, args []string) {\n    \/\/ Do Stuff Here\n  },\n}\n<\/code><\/pre>\n\n<p>I like to write a constructor function that returns a command; in this case it\nwill be something like:<\/p>\n\n<pre><code>func NewRootCmd() *cobra.Command {\n    return &amp;cobra.Command{\n      Use:   \"hugo\",\n      Short: \"Hugo is a very fast static site generator\",\n      Long: `A Fast and Flexible Static Site Generator built with\n                love by spf13 and friends in Go.\n                Complete documentation is available at http:\/\/hugo.spf13.com`,\n      Run: func(cmd *cobra.Command, args []string) {\n        \/\/ Do Stuff Here\n      },\n  }\n}\n<\/code><\/pre>\n\n<p>The reason I like to have this function is that it helps me to clearly\nsee the dependencies my command requires. In this case, none. I also like to use\nthe RunE function instead of Run; it works in the same way but it returns\nan error.<\/p>\n\n<pre><code>func NewRootCmd(in string) *cobra.Command {\n    return &amp;cobra.Command{\n      Use:   \"hugo\",\n      Short: \"Hugo is a very fast static site generator\",\n      Long: `A Fast and Flexible Static Site Generator built with\n                love by spf13 and friends in Go.\n                Complete documentation is available at http:\/\/hugo.spf13.com`,\n      RunE: func(cmd *cobra.Command, args []string) (error) {\n          fmt.Fprintf(cmd.OutOrStdout(), in)\n          return nil\n      },\n  }\n}\n<\/code><\/pre>\n\n<p>In order to execute the command, I use cmd.Execute().<\/p>\n\n<p>Let\u2019s write a test function:<\/p>\n\n<pre><code>func Test_ExecuteCommand(t *testing.T) {\n\tcmd := NewRootCmd(\"hi\")\n\tcmd.Execute()\n}\n<\/code><\/pre>\n\n<pre><code>=== RUN   Test_ExecuteCommand\nhi--- PASS: Test_ExecuteCommand (0.00s)\nPASS\nok      ciao    0.006s\n<\/code><\/pre>\n\n<p>The output with <code>go test -v<\/code> contains \u201chi\u201d because by default Cobra prints to\nstdout, but we can replace it to assert on that automatically.<\/p>\n\n<p>The trick here is to replace the stdout with something that we can read\nprogrammatically, like a bytes.Buffer for example:<\/p>\n\n<pre><code class=\"language-go\">func Test_ExecuteCommand(t *testing.T) {\n\tcmd := NewRootCmd(\"hi\")\n\tb := bytes.NewBufferString(\"\")\n\tcmd.SetOut(b)\n\tcmd.Execute()\n\tout, err := ioutil.ReadAll(b)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif string(out) != \"hi\" {\n\t\tt.Fatalf(\"expected \\\"%s\\\" got \\\"%s\\\"\", \"hi\", string(out))\n\t}\n}\n<\/code><\/pre>\n\n<p>Personally I do not think there is much more to know in order to effectively\ntest CLI commands. They can be very complex, but if you can mock their\ndependencies and check what the execution prints out, you are very flexible!<\/p>\n\n<p>Another thing you have to control when running a command is its arguments and\nits flags, because based on them you will get different behavior that you have to\ntest in order to make sure your commands work with all of 
them.<\/p>\n\n<p>The logic works the same for both, but arguments are very easy: you just have to\nset them on the command with the function\n<code>cmd.SetArgs([]string{\"hi-via-args\"})<\/code>.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"testing\"\n\n\t\"github.com\/spf13\/cobra\"\n)\n\nfunc NewRootCmd() *cobra.Command {\n\treturn &amp;cobra.Command{\n\t\tUse:   \"hugo\",\n\t\tShort: \"Hugo is a very fast static site generator\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tfmt.Fprintf(cmd.OutOrStdout(), args[0])\n\t\t\treturn nil\n\t\t},\n\t}\n}\n\nfunc Test_ExecuteCommand(t *testing.T) {\n\tcmd := NewRootCmd()\n\tb := bytes.NewBufferString(\"\")\n\tcmd.SetOut(b)\n\tcmd.SetArgs([]string{\"hi-via-args\"})\n\tcmd.Execute()\n\tout, err := ioutil.ReadAll(b)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif string(out) != \"hi-via-args\" {\n\t\tt.Fatalf(\"expected \\\"%s\\\" got \\\"%s\\\"\", \"hi-via-args\", string(out))\n\t}\n}\n<\/code><\/pre>\n\n<p>Flags work in the same way:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"testing\"\n\n\t\"github.com\/spf13\/cobra\"\n)\n\nvar in string\n\nfunc NewRootCmd() *cobra.Command {\n\tcmd := &amp;cobra.Command{\n\t\tUse:   \"hugo\",\n\t\tShort: \"Hugo is a very fast static site generator\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tfmt.Fprintf(cmd.OutOrStdout(), in)\n\t\t\treturn nil\n\t\t},\n\t}\n\tcmd.Flags().StringVar(&amp;in, \"in\", \"\", \"This is a very important input.\")\n\treturn cmd\n}\n\nfunc Test_ExecuteCommand(t *testing.T) {\n\tcmd := NewRootCmd()\n\tb := bytes.NewBufferString(\"\")\n\tcmd.SetOut(b)\n\tcmd.SetArgs([]string{\"--in\", \"testisawesome\"})\n\tcmd.Execute()\n\tout, err := ioutil.ReadAll(b)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif string(out) != \"testisawesome\" {\n\t\tt.Fatalf(\"expected \\\"%s\\\" got \\\"%s\\\"\", \"testisawesome\", string(out))\n\t}\n}\n<\/code><\/pre>\n\n<p>This is it! I like writing unit tests for CLI commands a lot because in real\nlife they are way more complex than the ones I used here. It means that they run\na lot more functions, but the command is well scoped in terms of dependencies (if\nyou write a constructor function) and in terms of input and output. So it is\neasy to write assertions and table tests with different inputs.<\/p>
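\n\n<p>For example, here is a minimal table-driven sketch, reusing the flag version of\n<code>NewRootCmd<\/code> from above, that runs the command with different inputs and asserts\non what it prints:<\/p>\n\n<pre><code class=\"language-go\">func Test_ExecuteCommand_Table(t *testing.T) {\n\tcases := []struct {\n\t\tname string\n\t\targs []string\n\t\twant string\n\t}{\n\t\t{name: \"flag set\", args: []string{\"--in\", \"testisawesome\"}, want: \"testisawesome\"},\n\t\t{name: \"flag empty\", args: []string{\"--in\", \"\"}, want: \"\"},\n\t}\n\tfor _, tc := range cases {\n\t\tt.Run(tc.name, func(t *testing.T) {\n\t\t\tcmd := NewRootCmd()\n\t\t\tb := bytes.NewBufferString(\"\")\n\t\t\tcmd.SetOut(b)\n\t\t\tcmd.SetArgs(tc.args)\n\t\t\t\/\/ RunE bubbles its error up through Execute, so we can assert on it too\n\t\t\tif err := cmd.Execute(); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tout, err := ioutil.ReadAll(b)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tif string(out) != tc.want {\n\t\t\t\tt.Fatalf(\"expected %q got %q\", tc.want, string(out))\n\t\t\t}\n\t\t})\n\t}\n}\n<\/code><\/pre>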
\n"},{"title":"Smart working does not need to be remote","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/smart-working-does-not-need-to-be-remote"}},"description":"There is a difference between remote work and smart work. You can have both, or just one. It is on you. I prefer both at the moment!","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2020-03-04T09:08:27+00:00","published":"2020-03-04T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/smart-working-does-not-need-to-be-remote","content":"<p>I have been working remotely for almost 3 years now and I am happy with it. First of all,\nbecause the company I am working for, InfluxData, is not based where I live. I\nam currently in Turin and it is in San Francisco, so the only way for me to do\nthe kind of work I am doing today is to be remote. I moved to Dublin for 2 years\nbecause my English was not what I hoped it would be and I was looking for a quick way\nto make it right. Beers and good friends helped me to succeed! I still have to\nuse Grammarly to write blog posts but hey, at least I can write them.<\/p>\n\n<p>Anyway, I like to work with people from all over, and that\u2019s why I am not even\nthinking about the possibility of going back to work for a small company in Italy\nfor now. Right now the perception is almost like getting paid to work in the\nsame environment I used to work in for free when contributing to open source\ncommunities. People from all over share my conditions in terms of time commitment.\nI just have to work hard and do what it takes to help my team and the\ncompany improve. This is, in essence, the difference between remote work\nand smart working. You can work remotely exactly as you would work in an office: you have\nto be at your desk, at set times, for 8 hours, even when it is raining, even\nwhen you feel unproductive.<\/p>\n\n<p>This rambling is to remind myself that I do not like remote working, I like\nsmart working. I like to be able to organize my time because my boss trusts my\nability to judge the situation. I can work day and night and take a break for an\nentire day if I feel like I have to. Obviously it is a huge responsibility and a\nrisk on both sides: as a company you do not have the common framework that we\nbuilt across years to figure out how productive an employee is, and as a worker,\nyou have to develop the right skills to read the situation you are in. But this\nability helped me to be a conscious developer and not just a code generator that\nsolves self-built challenges.<\/p>\n\n<p class=\"text-center\"><img src=\"\/img\/cruna-ago.jpg\" alt=\"Middle East, Needle, Threads, Sewing Thread\" class=\"img-fluid\" \/><\/p>\n\n<p class=\"small text-center\">Hero image via <a href=\"https:\/\/pixabay.com\/photos\/middle-east-needle-threads-4854847\/\">Pixabay<\/a><\/p>\n\n<p>So I would like to rephrase the title: \u201cSmart working does not need to be\nremote, but it helps\u201d.<\/p>\n\n<p>I realized the difference because I had to get smarter: my company is 9 hours\nbehind my current time. I work alone a lot. I have to read in advance what the\nproduct or my product manager will ask me to work on, because I have to develop a\nbuffer that will keep me busy when the current task I am resolving is blocked\nor I can\u2019t get around it without reaching out to a person that will probably be\noffline, or maybe I can, but it will take so much effort that it is smarter for\nme to just wait. It is the equivalent of procrastinating until you can shake the chair\nof the coworker that wrote the freaking recursive function that you have to\ndebug. The problem is that you never know if the coworker will even show up.<\/p>\n\n<p>Companies, you have to set up your workload to support smart workers, not just remote\nworkers. In tech at least, where this is a reachable goal.<\/p>\n\n<p>I think it is a temporary condition: right now I feel like I need both, remote\nto be able to work where I think I will learn or perform better, or where it is\nmore fun. 
And smart, because I like to develop organization skills and to feel like\nthe master of my clock.<\/p>\n\n<p>Who knows how it will evolve.\nThank you for your time.<\/p>\n"},{"title":"The awesomeness of the httptest package in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/golang-mockmania-httptest"}},"description":"One of the reasons why testing in Go is friendly is driven by the fact that the core team already provides useful testing packages as part of the stdlib that you can use, as they do to test packages that depend on them. This article explains how to use the httptest package to mock HTTP servers and to test SDKs that use the http.Client.","image":"https:\/\/gianarb.it\/img\/golang-mockmania.png","updated":"2020-02-25T09:08:27+00:00","published":"2020-02-25T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/golang-mockmania-httptest","content":"<p>Go has a nice http package. I am able to say that because I am not aware of any\nother implementation of it in Go other than the one provided by the standard\nlibrary. This is, for me, a good sign.<\/p>\n\n<pre><code class=\"language-go\">resp, err := http.Get(\"http:\/\/example.com\/\")\nif err != nil {\n\t\/\/ handle error\n}\ndefer resp.Body.Close()\nbody, err := ioutil.ReadAll(resp.Body)\n<\/code><\/pre>\n\n<p>This example comes from the <a href=\"https:\/\/golang.org\/pkg\/net\/http\/\">documentation<\/a>\nitself.<\/p>\n\n<p>We are here to read about testing, so who cares about the http package itself!\nWhat matters is the <a href=\"https:\/\/golang.org\/pkg\/net\/http\/httptest\/\">httptest<\/a>\npackage! Way cooler.<\/p>\n\n<p>This article is not the first one in the MockMania series; I already wrote\n<a href=\"https:\/\/gianarb.it\/blog\/golang-mockmania-influxdb-v2-client\">\u201cInfluxDB Client\nv2\u201d<\/a>, and it uses the\nhttptest package already! But hey, it deserves its own blog post.<\/p>\n\n<h2 id=\"server-side\">Server Side<\/h2>\n\n<p>The http package provides a client and a server. The server is made of handlers.\nThe handler takes a request and, based on that, it returns a response. This is its\ninterface:<\/p>\n\n<pre><code class=\"language-go\">type Handler interface {\n    ServeHTTP(ResponseWriter, *Request)\n}\n<\/code><\/pre>\n\n<p>As you can see, it gets a ResponseWriter to compose a response based on the\nRequest it receives. This process can be as complicated as you like; it can reach\ndatabases and third-party services, but in the end, it writes a response.<\/p>\n\n<p>It means that, after mocking all the dependencies to set up the right scenario, we use the\nResponseWriter to figure out whether the handler did what we wanted.<\/p>\n\n<p>The httptest package provides a replacement for the ResponseWriter called\nResponseRecorder. We can pass it to the handler and check what it looks like\nafter the execution:<\/p>\n\n<pre><code class=\"language-go\">handler := func(w http.ResponseWriter, r *http.Request) {\n\tio.WriteString(w, \"ping\")\n}\n\nreq := httptest.NewRequest(\"GET\", \"http:\/\/example.com\/foo\", nil)\nw := httptest.NewRecorder()\nhandler(w, req)\n\nresp := w.Result()\nbody, _ := ioutil.ReadAll(resp.Body)\n\nfmt.Println(resp.StatusCode)\nfmt.Println(string(body))\n<\/code><\/pre>\n\n<p>This handler is very simple, it just manipulates the response body. If your\nhandler is more complicated and it has dependencies, you have to be sure to\nreplace them as well, injecting the appropriate ones.<\/p>
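\n\n<p>Wrapped in a real test function, the same example could look like this minimal\nsketch:<\/p>\n\n<pre><code class=\"language-go\">func TestPingHandler(t *testing.T) {\n\thandler := func(w http.ResponseWriter, r *http.Request) {\n\t\tio.WriteString(w, \"ping\")\n\t}\n\n\treq := httptest.NewRequest(\"GET\", \"http:\/\/example.com\/foo\", nil)\n\tw := httptest.NewRecorder()\n\thandler(w, req)\n\n\t\/\/ the recorder now holds the response the handler composed\n\tresp := w.Result()\n\tdefer resp.Body.Close()\n\tbody, _ := ioutil.ReadAll(resp.Body)\n\tif resp.StatusCode != http.StatusOK {\n\t\tt.Fatalf(\"expected %d got %d\", http.StatusOK, resp.StatusCode)\n\t}\n\tif string(body) != \"ping\" {\n\t\tt.Fatalf(\"expected %q got %q\", \"ping\", string(body))\n\t}\n}\n<\/code><\/pre>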
\n\n<h2 id=\"client-side\">Client-Side<\/h2>\n\n<p>Testing handlers directly is only half of the story. The Go http package provides an http\nclient as well that you can use to interact with an http server. An http client\nby itself is useless, but it is the entry point for all the manipulation and\ntransformation you do on the information you get via HTTP. With the\nproliferation of microservices, it is a very common situation.<\/p>\n\n<p>The workflow is well understood: you have an HTTP backend to interact with, you\nfetch data from there and you manipulate it with your business logic. When\ntesting, what you can do is mock the HTTP backend so that it returns what you\nwant, checking that your business logic does what it is supposed to do based on\nthe input you get from the HTTP server.<\/p>\n\n<p>In our first example, the handler was the subject of our testing. This is\nnot the case anymore; we are testing the consumer this time, so we have to mimic\na handler in order to get back what we expect:<\/p>\n\n<pre><code class=\"language-go\">ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\tfmt.Fprintln(w, \"I am a super server\")\n}))\ndefer ts.Close()\n<\/code><\/pre>\n\n<p>As you can see, we are creating a new HTTP server via httptest. It accepts a\nhandler. The goal for this handler is to return what we would like to test our code\nagainst. In theory, it should just use the ResponseWriter to compose the response we\nexpect.<\/p>\n\n<p>The server has a bunch of fields and methods; the one you are looking for is <code>URL<\/code>,\nbecause we can pass it to an http.Client, the one we will use as a mock for our\nfunction:<\/p>\n\n<pre><code class=\"language-go\">res, err := http.Get(ts.URL)\nif err != nil {\n\tlog.Fatal(err)\n}\nbb, err := ioutil.ReadAll(res.Body)\nres.Body.Close()\n<\/code><\/pre>\n\n<p>That\u2019s it: as you can see, <code>ts.URL<\/code> points the http.Client to the mock server we\ncreated.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>I use the httptest package a lot, even when writing SDKs for services that do not\nhave an integration with Go, because I can follow their documentation mocking their\nserver and I do not need to reach them until I am confident with the code I\nwrote.<\/p>\n\n<p>My suggestion is to test your client code for edge cases as well, because the\nhttptest.Server gives you the flexibility to write any response you can think\nof. You can mimic an unauthorized response to see how your code will handle\nit, or an empty body, or a rate limit. The only limit is our laziness.<\/p>
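\n\n<p>For instance, here is a sketch of a fake server that always answers with a 429,\nso you can check how your client code reacts to rate limiting:<\/p>\n\n<pre><code class=\"language-go\">ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n\tw.Header().Set(\"Retry-After\", \"1\")\n\thttp.Error(w, \"rate limit exceeded\", http.StatusTooManyRequests)\n}))\ndefer ts.Close()\n\nres, err := http.Get(ts.URL)\nif err != nil {\n\tlog.Fatal(err)\n}\ndefer res.Body.Close()\nif res.StatusCode == http.StatusTooManyRequests {\n\t\/\/ this is where the business logic under test should back off and retry\n}\n<\/code><\/pre>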
\n"},{"title":"Golang MockMania InfluxDB Client v2","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/golang-mockmania-influxdb-v2-client"}},"description":{},"image":"https:\/\/gianarb.it\/img\/golang-mockmania.png","updated":"2020-02-09T09:08:27+00:00","published":"2020-02-09T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/golang-mockmania-influxdb-v2-client","content":"<p>Recently I had to develop an integration with the <a href=\"https:\/\/github.com\/influxdata\/influxdb-client-go\">InfluxDB Client v2 Golang\nSDK<\/a>.<\/p>\n\n<p>This SDK is useful to interact with InfluxDB v2, create organizations and users,\nwrite new points, and submit queries; it accepts the Golang http.Client.<\/p>\n\n<pre><code class=\"language-golang\">influx, err := influxdb.New(myHTTPInfluxAddress, myToken, influxdb.WithHTTPClient(myHTTPClient))\nif err != nil {\n\tpanic(err)\n}\n<\/code><\/pre>\n\n<p>Having the ability to pass the HTTP client from the outside via\n<code>influxdb.WithHTTPClient(myHTTPClient)<\/code> improves the familiarity Golang\ndevelopers have with the library; they know how to configure Transports or how\nto inject logging, tracing, debugging. For what concerns <code>Golang MockMania<\/code>, it\ngives us the possibility to pass the\n<a href=\"https:\/\/golang.org\/pkg\/net\/http\/httptest\/#example_Server\">httptest<\/a> client.<\/p>\n\n<pre><code class=\"language-golang\">influxDBServer := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {\n\n}))\ninfluxClient, err := influxdb.New(myHTTPInfluxAddress, myToken, influxdb.WithHTTPClient(influxDBServer.Client()))\n<\/code><\/pre>\n\n<p>At this point you can write the response you expect from the InfluxDB server\nusing the <code>http.ResponseWriter<\/code>.<\/p>\n\n<p>Either way, whether you have to check what InfluxDB receives from the SDK or\nyou have to obtain a specific answer from InfluxDB to validate what your\nbusiness logic will do, nothing will stop you from checking the\nhttp.Request or from utilizing the http.ResponseWriter to get what you expect.<\/p>\n"},{"title":"Continuous Profiling Go applications running in Kubernetes","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/continuous-profiling-go-apps-in-kubernetes"}},"description":"Kube-Profefe is an open source project that acts like a bridge between Kubernetes and Profefe. It helps you to implement continuous profiling for Go applications running in Kubernetes.","image":"https:\/\/gianarb.it\/img\/profefe.png","updated":"2020-02-04T09:08:27+00:00","published":"2020-02-04T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/continuous-profiling-go-apps-in-kubernetes","content":"<p>Recently I wrote <a href=\"https:\/\/gianarb.it\/blog\/go-continuous-profiling-profefe\">\u201cContinuous profiling in Go with\nProfefe\u201d<\/a>, an article\nabout the new shiny open source project I am contributing to.<\/p>\n\n<p><strong>TLDR:<\/strong> Profefe is a registry for pprof profiles. You can push them by embedding\nan SDK in your application, or you can write a collector (cronjob) that gathers\nprofiles and pushes the tarball via the Profefe API. 
Side by side with the profile you\nhave to send other information like:<\/p>\n\n<ul>\n  <li>Type: represents the profile type such as mutex, goroutines, CPU and so on<\/li>\n  <li>Service: identifies the source for this profile, for example, the binary name<\/li>\n  <li>InstanceID: identifies where it comes from, for example, pod name or server\nhostname<\/li>\n  <li>Labels: optional key\/value pairs that you can use at query time to filter\nprofiles. If you are building the same service with two different Go versions\nto check for performance degradation you can label the profiles with\n<code>go=1.13.4<\/code> for example.<\/li>\n<\/ul>\n\n<p>The article has way more content but that\u2019s enough. You can keep reading with\nonly this information.<\/p>\n\n<h2 id=\"kubernetes\">Kubernetes<\/h2>\n\n<p>As you know, at InfluxData we use Kubernetes; our services already expose the\n<a href=\"https:\/\/golang.org\/pkg\/net\/http\/pprof\/\">pprof HTTP handler<\/a> and we can not\ninstrument all the services with the Profefe SDK. For those reasons we had to\nwrite our own collectors capable of getting pprof profiles via the Kubernetes\nAPI and of pushing them into Profefe. That\u2019s why we decided to go with a different\napproach. I wrote a project called\n<a href=\"https:\/\/github.com\/profefe\/kube-profefe\">kube-profefe<\/a>. It acts as a bridge\nbetween the Profefe API and Kubernetes. The repository provides two different\nbinaries:<\/p>\n\n<ul>\n  <li>A kubectl plugin that you can install (even via krew) that serves useful\nutilities to interact with the profefe API (profefe at the moment does not\nhave a CLI) and to capture profiles from running pods.<\/li>\n  <li>A collector that can run as a cronjob; it goes pod by pod looking for profiles\nto collect and it pushes them to Profefe.<\/li>\n<\/ul>\n\n<h2 id=\"architecture\">Architecture<\/h2>\n\n<p>In order to configure the collector or to capture profiles from a running\ncontainer, it leverages pod annotations. Only the pods with the annotation\n<code>pprof.com\/enable=true<\/code> will be taken into consideration by kube-profefe.\nOther annotations are optional or have default values. This is the\nonly one that has to be set to make kube-profefe aware of your pod.<\/p>\n\n<p>The example below shows a Pod spec that enables profefe capabilities:<\/p>\n\n<pre><code class=\"language-yaml\">apiVersion: v1\nkind: Pod\nmetadata:\n  name: influxdb-v2\n  annotations:\n    \"profefe.com\/enable\": \"true\"\n    \"profefe.com\/port\": \"9999\"\nspec:\n  containers:\n  - name: influxdb\n    image: quay.io\/influxdb\/influxdb:2.0.0-alpha\n    ports:\n    - containerPort: 9999\n<\/code><\/pre>\n\n<p>As you can see, there are other annotations, such as <code>profefe.com\/port<\/code>, which by default\nis 6060. In this case it is pointed to 9999 because that\u2019s where the pprof HTTP\nhandler runs in InfluxDB v2. A full list of annotations is maintained in the\nproject\u2019s README.md.<\/p>\n\n<p>There is not a lot more to know about the underlying mechanism that empowers\nkube-profefe, so let\u2019s deep dive into both components: the kubectl plugin and\nthe collector.<\/p>\n\n<h2 id=\"kubectl-profefe-the-kubectl-plugin\">Kubectl-profefe: the kubectl plugin<\/h2>\n\n<p>A kubectl plugin is nothing more than a binary located in your $PATH with the\nprefix name \u201ckubectl-\u201d. 
In my case the binary is released with the name\nkubectl-profefe; once it is located in your $PATH you will be able to run a command\nlike:<\/p>\n\n<pre><code class=\"language-bash\">$ kubectl profefe --help\nIt is a kubectl plugin that you can use to retrieve and manage profiles in Go.\n\nUsage:\n  kubectl-profefe [flags]\n  kubectl-profefe [command]\n\nAvailable Commands:\n  capture     Capture gathers profiles for a pod or a set of them. If can filter by namespace and via label selector.\n  get         Display one or many resources\n  help        Help about any command\n  load        Load a profile you have locally to profefe\n\nFlags:\n  -A, --all-namespaces                 If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.\n      --as string                      Username to impersonate for the operation\n      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --cache-dir string               Default HTTP cache directory (default \"\/home\/gianarb\/.kube\/http-cache\")\n      --certificate-authority string   Path to a cert file for the certificate authority\n      --client-certificate string      Path to a client certificate file for TLS\n      --client-key string              Path to a client key file for TLS\n      --cluster string                 The name of the kubeconfig cluster to use\n      --context string                 The name of the kubeconfig context to use\n  -f, --filename strings               identifying the resource.\n  -h, --help                           help for kubectl-profefe\n      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.\n  -n, --namespace string               If present, the namespace scope for this CLI request\n  -R, --recursive                      Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. (default true)\n      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n  -l, --selector string                Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)\n  -s, --server string                  The address and port of the Kubernetes API server\n      --token string                   Bearer token for authentication to the API server\n      --user string                    The name of the kubeconfig user to use\n\nUse \"kubectl-profefe [command] --help\" for more information about a command.\n<\/code><\/pre>\n\n<p>This output should look very familiar to you: there are a lot of options usable\nwith any other kubectl native command. Mainly around authentication: --user,\n--server, --kubeconfig, --client-certificate\u2026 Or around pod selection: -l,\n--selector, -n, --namespace, --all-namespaces. 
If you are curious about how to\nwrite a friendly kubectl plugin, I wrote <a href=\"https:\/\/gianarb.it\/blog\/kubectl-flags-in-your-plugin\">\u201ckubectl flags in your\nplugin\u201d<\/a>; check it out.<\/p>\n\n<p>This plugin, even if it is not native, uses the same authentication mechanism\nused by kubectl, so wherever kubectl works, this plugin should work\nas well.<\/p>\n\n<p>The pod selectors -l and -n, for example, are useful when running the command:<\/p>\n\n<pre><code>$ kubectl profefe capture\n<\/code><\/pre>\n\n<p>Capture, as the name suggests, goes straight to one or more pods and it\ndownloads or pushes to profefe various profiles. It is very flexible; you can\ncapture pprof profiles from a specific pod (or multiple pods) by ID:<\/p>\n\n<pre><code>$ kubectl profefe capture &lt;pod-id&gt;,&lt;pod-id&gt;...\n<\/code><\/pre>\n\n<p><em>NB: just remember to select the namespace where the pods are running with the flag\n-n or --namespace.<\/em><\/p>\n\n<p>You can use the pod selectors to collect multiple profiles:<\/p>\n\n<pre><code>$ kubectl profefe capture -n web\n<\/code><\/pre>\n\n<p>This captures profiles from all the pods with the pprof.com\/enable=true annotation\nrunning in the given namespace and stores them under the <code>\/tmp<\/code> directory.\nYou can change the output directory with <code>--output-dir<\/code>. If you do not want to\nstore them locally, you can push them to profefe, specifying its location via\n<code>--profefe-hostport<\/code>.<\/p>\n\n<p>There are other combinations for the capture command, and you can also get profiles out of\nprofefe; I will leave the rest to you!<\/p>\n\n<p class=\"text-center\"><img src=\"\/img\/stopwatch.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p class=\"small text-center\">Hero image via <a href=\"https:\/\/pixabay.com\/illustrations\/time-time-management-stopwatch-3216244\/\">Pixabay<\/a><\/p>\n\n<h2 id=\"kprofefe-the-collector\">Kprofefe: the collector<\/h2>\n\n<p>The main responsibility of the collector is to make the continuous profiling\nmagic happen! It uses the same mechanism we already saw for the capture\nkubectl plugin, but it is a single binary and it can run as a cronjob.<\/p>\n\n<pre><code>apiVersion: batch\/v1beta1\nkind: CronJob\nmetadata:\n  name: kprofefe-allnamespaces\n  namespace: profefe\nspec:\n  concurrencyPolicy: Replace\n  jobTemplate:\n    metadata:\n    spec:\n      template:\n        spec:\n          containers:\n          - args:\n            - --all-namespaces\n            - --profefe-hostport\n            - http:\/\/profefe-collector:10100\n            image: profefe\/kprofefe:v0.0.8\n            imagePullPolicy: IfNotPresent\n            name: kprofefe\n          restartPolicy: Never\n          serviceAccount: kprofefe-all-namespaces\n          serviceAccountName: kprofefe-all-namespaces\n  schedule: '*\/10 * * * *'\n  successfulJobsHistoryLimit: 3\n<\/code><\/pre>\n\n<p>You can run a single cronjob that will go over all the pods across all the\nnamespaces, or you can deploy multiple cronjobs; playing with the label selector\n(-l) and the namespace selector (-n), you can configure the ownership of every\nrunning cronjob. 
The reasons to split into multiple cronjobs can be:<\/p>\n\n<ul>\n  <li>Scalability: one cronjob is not enough, so you can have one per namespace,\nfor example<\/li>\n  <li>Time segmentation: if you have a single cronjob, it means that all the pod\nprofiles will get captured with the same frequency, but you may want to\nget more frequent profiles for a specific subset of applications and less\ndensity for others.<\/li>\n<\/ul>\n\n<p>Here is the documentation about <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/overview\/working-with-objects\/labels\/\">\u201cLabels and\nSelectors\u201d<\/a>\nfor your reference.<\/p>\n\n<p><em>Note: serviceAccount is required only if you have RBAC enabled (you should),\nbecause the collector needs access to the Kubernetes API to list\/view pods across\nall namespaces in this case.<\/em><\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>There is a lot to do in both the collector and the kubectl plugin. I would like to\nadd logs and monitoring to the collector, for example. The kubectl plugin\u2019s get\nprofiles command needs some love, ideally using the same format that <code>kubectl\nget<\/code> has via\n<a href=\"https:\/\/github.com\/kubernetes\/cli-runtime\/tree\/master\/pkg\/printers\">kubernetes\/cli-runtime\/pkg\/printers<\/a>.\nTry, contribute and <a href=\"https:\/\/twitter.com\/gianarb\">let me know<\/a>!<\/p>\n"},{"title":"Make boring tasks enjoyable with go and colly","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/make-boring-task-enjotable-with-go-colly"}},"description":"Recently I had the idea to update the conference page on my website with the end goal of making it a bit more structured. Where structured means a bit more reusable compared with the static HTML table I used to have. I mixed a bit of hacky Go and colly for scraping, and that's how I did it","image":"https:\/\/gianarb.it\/img\/go.png","updated":"2020-01-23T09:08:27+00:00","published":"2020-01-23T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/make-boring-task-enjotable-with-go-colly","content":"<p>Recently I had the idea to update the <a href=\"\/conferences.html\">conference<\/a> page on my website with the end\ngoal of making it a bit more structured. Where structured means a bit more\nreusable compared with the static HTML table I used to have.<\/p>\n\n<p>In the beginning, I decided to do an HTML table every year, listing each\nconference as a single row. 
It worked, but I think at this point I can do\nsomething even cooler, with a single page for every conference talk, with YouTube\nand slides embedded, the abstract, and a few links to deep dive into the topic.<\/p>\n\n<p>Jekyll has a cool feature called\n<a href=\"https:\/\/jekyllrb.com\/docs\/collections\/\">collections<\/a>: \u201cCollections are a great\nway to group related content like members of a team or talks at a conference.\u201d I\ndecided to do a \u201cmy_talks\u201d collection.<\/p>\n\n<p>I first added the right configuration in the <code>_config.yaml<\/code> and I added my first\nconference in 2020, DevOps Pro in Vilnius (see you there!!).<\/p>\n\n<pre><code>collections:\n  my_talks:\n    output: true\n<\/code><\/pre>\n\n<p>I created my first talk as a markdown file, just as I do for my posts:<\/p>\n\n<pre><code>---\ntitle: Continuous Profiling Go Application Running in Kubernetes\ndate: 2020-03-24\nslide:\nembedSlide:\nvideo:\nembedVideo:\neventName: DevOps Pro Europe\neventLink: https:\/\/devopspro.lt\/\ncity: Vilnius, Lithuania\n---\nMicroservices and Kubernetes help our architecture to scale and to be\nindependent at the price of running many more applications. Golang provides a\npowerful profiling tool called pprof, it is useful to collect information from a\nrunning binary for future investigation. The problem is that you are not always\nthere to take a profile when needed, sometimes you do not even know when you\nneed one, that's how a continuous profiling strategy helps. Profefe is an\nopen-source project that collects and organizes profiles. Gianluca wrote a\nproject called kube-profefe to integrate Kubernetes with Profefe. Kube-profefe\ncontains a kubectl plugin to capture locally or on profefe profiles from running\npods in Kubernetes. It also provides an operator to discover and continuously\nprofile applications running inside Pods.\n<\/code><\/pre>\n\n<p>As you can see, I decided to set a bunch of variables that I hope to re-use\nwhen I build the \u201csingle page\u201d for each talk.<\/p>\n\n<p>That\u2019s it. All done: 2020 looks awesome and I added a for loop in the conference\npage to print out the rows as before:<\/p>\n\n<pre><code>&lt;div class=\"row\"&gt;\n    &lt;h3&gt;&lt;\/h3&gt;\n    &lt;div class=\"col-md-12\"&gt;\n        &lt;table class=\"table table-hover\" id=\"\"&gt;\n          &lt;thead&gt;\n            &lt;tr&gt;\n              &lt;th&gt;Date&lt;\/th&gt;\n              &lt;th&gt;Event&lt;\/th&gt;\n              &lt;th&gt;Talk&lt;\/th&gt;\n              &lt;th&gt;Slide&lt;\/th&gt;\n            &lt;\/tr&gt;\n          &lt;\/thead&gt;\n          &lt;tbody&gt;\n            \n          &lt;\/tbody&gt;\n      &lt;\/table&gt;\n    &lt;\/div&gt;\n&lt;\/div&gt;\n<\/code><\/pre>\n\n<p>In order to make everything a bit more reusable and organized, this piece of code\nuses what Jekyll calls an <a href=\"https:\/\/jekyllrb.com\/docs\/includes\/\">include<\/a>. 
The way I\nuse it inside the conference page looks like:<\/p>\n\n<pre><code>{% assign talks2020 = site.my_talks | where:'date', \"2020\" %}\n{% include talks_per_year.html year=\"2020\" talks=talks2020 %}\n<\/code><\/pre>\n\n<p>Everything is working fine, and I am pretty happy, but I have over 6 years of\ntalks to convert to this new format; it means over 50 conferences to convert one\nby one into the new format made of files and YAML.<\/p>\n\n<p class=\"text-center\"><img src=\"https:\/\/media.giphy.com\/media\/KFz5cubdh5eskezQ6d\/giphy.gif\" alt=\"https:\/\/media.giphy.com\/media\/KFz5cubdh5eskezQ6d\/giphy.gif\" class=\"img-fluid\" \/><\/p>\n\n<h2 id=\"scraping-is-my-superpower\">Scraping is my superpower<\/h2>\n\n<p>I am not a fan of scraping things around and I had never done it before, but hey!\nThis solution looks less boring than me doing it manually. I deep dived looking\nfor scraping libraries in new languages (yes, you always have to learn new\nlanguages when doing a new side project), but in the end I discovered\n<a href=\"https:\/\/github.com\/gocolly\/colly\">colly<\/a>: \u201cElegant Scraper and Crawler\nFramework for Golang\u201d. I decided to be elegant and effective.<\/p>\n\n<h2 id=\"a-bit-about-colly\">A bit about Colly<\/h2>\n\n<p>I have to say that it took me less than 2 hours to hack a script in Go using\nColly that converted all my tables, year by year, from HTML to files with the\nformat you saw above. I also added some sweet sugar like:<\/p>\n\n<ul>\n  <li>Converting YouTube links, when detected, to their embeddable version<\/li>\n  <li>Converting and standardizing the end\/start date for the talks because it\nchanged year by year (I am lazy and inconsistent! Don\u2019t tell anybody)<\/li>\n<\/ul>\n\n<p>It was so easy that I didn\u2019t write any test\u2026 yep, that\u2019s it. 
The file names are a\nbit weird, but in the end it works, so who cares!<\/p>\n\n<pre><code>$ tree .\/_my_talks\/\n.\/_my_talks\/\n\u251c\u2500\u2500 2013-09-12-what-is-vagrant.markdown\n\u251c\u2500\u2500 2014-02-c'\u00e8-un-modulo-zf2-per-tutto!---there-is-a-module-for-all.markdown\n\u251c\u2500\u2500 2014-03-zend-queue.markdown\n\u251c\u2500\u2500 2014-05-getting-start-chromecast-developer.markdown\n\u251c\u2500\u2500 2014-05-vagrant,-riutilizzo-dell'infrastruttura---vagrant,-reuse-architecture.markdown\n\u251c\u2500\u2500 2014-10-sviluppo-di-api-rest-con-zf2-&amp;-mongodb.markdown\n\u251c\u2500\u2500 2014-10-time-series-database,php-&amp;-influx-db.markdown\n\u251c\u2500\u2500 2015-01-angularjs-advanced-startup.markdown\n\u251c\u2500\u2500 2015-06-delorean-made-in-home---reaspberry,-gobot-and-mqtt.markdown\n\u251c\u2500\u2500 2015-07-joomla-and-scalability-with-aws-beanstalk.markdown\n\u251c\u2500\u2500 2015-09-penny-php-middleware-framework.markdown\n\u251c\u2500\u2500 2015-10-angularjs-in-cloud.markdown\n\u251c\u2500\u2500 2015-10-doctrine-orm-cache-layer---it-is-not-a-boomerang.markdown\n\u251c\u2500\u2500 2015-11-wordpress-and-scalability-with-docker.markdown\n\u251c\u2500\u2500 2016-02-slimmer---poc-born-after-a-revolt-instant-vs-jenkins.markdown\n\u251c\u2500\u2500 2016-03-a-zf-story:-parallel-made-easy.markdown\n\u251c\u2500\u2500 2016-04-listen-your-infrastructure-and-please-sleep.markdown\n\u251c\u2500\u2500 2016-05-continuous-delivery-with-jenkins-in-the-real-world.markdown\n\u251c\u2500\u2500 2016-06-aws-under-the-hood.markdown\n\u251c\u2500\u2500 2016-06-listen-your-infrastructure-and-please-sleep.markdown\n\u251c\u2500\u2500 2016-06-parallel-made-easy.markdown\n\u251c\u2500\u2500 2016-07-docker-1.12-and-orchestration-built-in.markdown\n<\/code><\/pre>\n\n<p class=\"text-center\"><img src=\"https:\/\/i.kym-cdn.com\/photos\/images\/newsfeed\/000\/345\/534\/4a2.jpg\" alt=\"https:\/\/i.kym-cdn.com\/photos\/images\/newsfeed\/000\/345\/534\/4a2.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>Anyway, let\u2019s get to some snippets!<\/p>\n\n<pre><code>type Talk struct {\n\tTitle      string            `yaml:\"title\"`\n\tDate       time.Time         `yaml:\"date\"`\n\tSlide      string            `yaml:\"slide\"`\n\tEmbedSlide string            `yaml:\"embedSlide\"`\n\tVideo      string            `yaml:\"video\"`\n\tEmbedVideo string            `yaml:\"embedVideo\"`\n\tEventName  string            `yaml:\"eventName\"`\n\tEventLink  string            `yaml:\"eventLink\"`\n\tCity       string            `yaml:\"city\"`\n\tLinks      map[string]string `yaml:\"links\"`\n}\n\nvar dateLayout = \"_2 Jan 2006\"\nvar year = \"2020\"\nvar outputDir = \"\/tmp\"\n\nvar errorsToCheck = map[string]string{}\n<\/code><\/pre>\n\n<p>Those are the variables and the struct I set. <code>Talk<\/code> represents every single talk;\n<code>dateLayout<\/code> is the layout used to parse the end\/start date into a time.Time\nobject; <code>year<\/code> is a parameter that tells which table to scrape; <code>outputDir<\/code>\ntells where to place the files. 
Those 3 variables can be changed with cli flags:<\/p>\n\n<pre><code>flag.StringVar(&amp;year, \"year\", \"2020\", \"The year used to identify the table to parse\")\nflag.StringVar(&amp;dateLayout, \"date-layout\", \"_2 Jan 2006\", \"The golang format layout to parse the event date column\")\nflag.StringVar(&amp;outputDir, \"output-dir\", \"\/tmp\", \"Where to place the generated files\")\n\nflag.Parse()\n<\/code><\/pre>\n\n<p><code>errorsToCheck<\/code> is an easy way to collect all the errors for every run. I\nprinted them to a file: if an error was easy to fix with a code change, I did\nthat; if it was easier to fix by modifying the current conference page, I did\nthat.<\/p>\n\n<pre><code>\/\/ Instantiate default collector\nc := colly.NewCollector(\n\t\/\/ Visit only domains: gianarb.it, www.gianarb.it\n\tcolly.AllowedDomains(\"gianarb.it\", \"www.gianarb.it\"),\n\n\t\/\/ Cache responses to prevent multiple download of pages\n\t\/\/ even if the collector is restarted\n\tcolly.CacheDir(\".\/gianarb_cache\"),\n)\ntalks := []Talk{}\nc.OnHTML(\"table[id=\\\"\"+year+\"\\\"] tbody\", func(e *colly.HTMLElement) {\n\te.ForEach(\"tr\", func(_ int, row *colly.HTMLElement) {\n\t\ttalk := Talk{}\n\t\t\/\/ for each line \"tr\" do amazing things\n\t\ttalks = append(talks, talk)\n\t})\n})\n\n\/\/ Before making a request print \"Visiting ...\"\nc.OnRequest(func(r *colly.Request) {\n\tlog.Println(\"visiting\", r.URL.String())\n})\nerr := c.Visit(\"https:\/\/gianarb.it\/conferences.html\")\nif err != nil {\n\tprintln(err)\n}\n<\/code><\/pre>\n\n<p>This is how easy colly is to run. You have to configure the collector, and with\nthe function <code>OnHTML<\/code> you can look for whatever you need to scrape. In this case\nI was looking for the table whose <code>id<\/code> equals the year passed via\nthe CLI. For each tr element I create a new talk to append to a slice. The\n<code>talk<\/code> has to be populated with the actual values scraped cell by cell. It\nmeans that for each row we need to look at each td (cell in HTML) and, based on\nits index, we can identify the content. 
In my case it looks like this:<\/p>\n\n<pre><code>c.OnHTML(\"table[id=\\\"\"+year+\"\\\"] tbody\", func(e *colly.HTMLElement) {\n\te.ForEach(\"tr\", func(_ int, row *colly.HTMLElement) {\n\t\ttalk := Talk{}\n\t\trow.ForEach(\"td\", func(_ int, el *colly.HTMLElement) {\n\t\t\tswitch el.Index {\n\t\t\tcase 0:\n\t\t\t\t\/\/ Date\n\t\t\tcase 1:\n\t\t\t\t\/\/ Event Name and conference URL (talk.EventLink)\n\t\t\tcase 3:\n\t\t\t\t\/\/ Video and slides link\n\t\t\t}\n\t\t})\n\t\ttalks = append(talks, talk)\n\t})\n})\n<\/code><\/pre>\n\n<p>I can show you how I coded case 3, the one that looks for the Video or Slides links,\ntakes the link, and, in the case of a YouTube video, also converts the link into an\nembeddable one:<\/p>\n\n<pre><code>links := map[string]string{}\nel.ForEach(\"a\", func(_ int, el *colly.HTMLElement) {\n\tswitch el.Text {\n\tcase \"Video\":\n\t\ttalk.Video = el.Attr(\"href\")\n\t\tif strings.Contains(talk.Video, \"youtube.com\") {\n\t\t\tu, err := url.Parse(talk.Video)\n\t\t\tif err == nil {\n\t\t\t\ttalk.EmbedVideo = \"https:\/\/www.youtube.com\/embed\/\" + u.Query().Get(\"v\")\n\t\t\t} else {\n\t\t\t\terrorsToCheck[row.Text+\"\/youtube_video_without_id\"] = el.Text\n\t\t\t}\n\t\t} else {\n\t\t\terrorsToCheck[row.Text+\"\/no_youtube_video\"] = el.Attr(\"href\")\n\t\t}\n\tcase \"Slides\":\n\t\ttalk.Slide = el.Attr(\"href\")\n\tdefault:\n\t\tlinks[el.Text] = el.Attr(\"href\")\n\t}\n\ttalk.Links = links\n})\n<\/code><\/pre>\n\n<p>This is how I made a boring task enjoyable! And now I have all the talks (minus\ntwo that didn\u2019t get converted, but I will add them manually) converted and ready to be\nrendered as posts.<\/p>
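\n\n<p>The very last step, writing each <code>Talk<\/code> to disk as a Jekyll file, is not shown\nabove. Here is a minimal sketch of how it could look, with a hypothetical <code>writeTalk<\/code>\nhelper, assuming <code>gopkg.in\/yaml.v2<\/code> for the front matter and the usual <code>fmt<\/code>,\n<code>strings<\/code>, <code>io\/ioutil<\/code> and <code>path\/filepath<\/code> imports:<\/p>\n\n<pre><code>\/\/ writeTalk is a hypothetical helper: it serializes a Talk into a\n\/\/ markdown file with the YAML front matter format shown earlier.\nfunc writeTalk(talk Talk, outputDir string) error {\n\tfm, err := yaml.Marshal(talk)\n\tif err != nil {\n\t\treturn err\n\t}\n\tname := fmt.Sprintf(\"%s-%s.markdown\",\n\t\ttalk.Date.Format(\"2006-01\"),\n\t\tstrings.ToLower(strings.ReplaceAll(talk.Title, \" \", \"-\")))\n\tbody := fmt.Sprintf(\"---\\n%s---\\n\", string(fm))\n\treturn ioutil.WriteFile(filepath.Join(outputDir, name), []byte(body), 0644)\n}\n<\/code><\/pre>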
\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>This post should not start a useless war between static site generators,\nWordpress or whatever. If you follow me on\n<a href=\"https:\/\/twitter.com\/gianarb\">Twitter<\/a> you know that I tweeted recently about\nreplacing Jekyll with something else, mainly because I was thinking about how to make\nbetter use of the content I create. Digging deeper into Jekyll I discovered\nthat for now I don\u2019t need more than that, and changing tools would end up being a\nuseless and probably not that fun exercise. I am sure all the other tools like\nWordpress, Hugo, Gatsby have something similar.<\/p>\n"},{"title":"My experience with Krew to manage kubectl plugins","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/my-experiene-with-krew-to-manage-kubectl-plugins"}},"description":"Kubectl plugins are extremely useful to provide a set of friendly utilities to interact with kubernetes in your environment. Krew is a project that helps you manage the plugin lifecycle. I had to add profefe to it and this is what I learned.","image":"https:\/\/gianarb.it\/img\/kubernetes.png","updated":"2020-01-16T09:08:27+00:00","published":"2020-01-16T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/my-experiene-with-krew-to-manage-kubectl-plugins","content":"<p>I have written a good number of kubectl plugins so far, but there is a lot more I can\ndo with them. Every time I write a new one I discover something new, and that is\nwhy I am always excited to see what will happen with the next one.<\/p>\n\n<p><a href=\"https:\/\/gianarb.it\/blog\/unit-testing-kubernetes-client-in-go\">\u201cUnit Testing Kubernetes Client in\nGo\u201d<\/a>\nand <a href=\"https:\/\/gianarb.it\/blog\/kubectl-flags-in-your-plugin\">\u201cKubectl flags in your kubectl\nplugin\u201d<\/a>\nare two of the lessons learned along the way.<\/p>\n\n<p>With\n<a href=\"https:\/\/github.com\/profefe\/kube-profefe\">kubectl-profefe<\/a>\nI decided to have a look at\n<a href=\"https:\/\/github.com\/kubernetes-sigs\/krew\">krew<\/a>.\nIt is a package manager for kubectl plugins. It is a plugin itself, with the end\ngoal of helping you install and manage the lifecycle of your plugins.<\/p>\n\n<pre><code>$ kubectl krew install profefe\n<\/code><\/pre>\n\n<p>It gives you the ability, with a single command, to install, update or delete the\nkubectl-profefe cli command.<\/p>\n\n<p>Twitter got pretty excited recently about\n<a href=\"https:\/\/github.com\/ahmetb\/kubectl-tree\">kubectl-tree<\/a>,\na plugin from\n<a href=\"https:\/\/twitter.com\/ahmetb\">@ahmetb<\/a>, an old\nfriend of mine, an active Kubernetes contributor, and a maintainer of krew as\nwell. It helps you to visualize kubernetes resources as a tree to simplify the\ncomprehension of the hierarchy and the connections between resources.<\/p>\n\n<p>Two other examples that I would like to mention are from @ahmetb too. Kubectl\nplugins don\u2019t need to be extremely complicated, but you always have to keep in\nmind the mantra \u201cusability first.\u201d It doesn\u2019t matter how many lines of code you\nwrite: the end goal should be to develop something usable and well-integrated\nwith kubernetes! <code>kubectl ctx<\/code> and <code>kubectl ns<\/code> are fabulous examples of\nsomething easy but helpful. We switch between contexts and namespaces more than\nonce a day between production clusters, local development, and so on. It is not\na very complicated thing to do natively: for example, changing context with\nkubectl is just a matter of typing:<\/p>\n\n<pre><code>$ kubectl config use-context new-context\n<\/code><\/pre>\n\n<p>Worst case scenario, for the namespace, you have to type the <code>-n<\/code> flag every\ntime you run a kubectl command that is not in the namespace you have set by\ndefault for the context you are using.<\/p>\n\n<p>But <code>kubectl ctx<\/code> and <code>kubectl ns<\/code> simplify this process even more. You only\nhave to type:<\/p>\n\n<pre><code>$ kubectl ctx new-context\n<\/code><\/pre>\n\n<p>Or<\/p>\n\n<pre><code>$ kubectl ns new-namespace\n<\/code><\/pre>\n\n<p>If you are developing an open-source kubectl plugin and you need a friendly and\neasy way to distribute it, you should have a look at krew. 
The publication\nprocess is straightforward: <a href=\"https:\/\/github.com\/kubernetes-sigs\/krew-index\/pull\/415\">this is the\nPR<\/a>\nI had to submit for profefe; you have to type some YAML, as usual.<\/p>\n"},{"title":"Unit test kubernetes client in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/unit-testing-kubernetes-client-in-go"}},"description":"A flexible and easy to use testing framework makes all the difference. Kubernetes provides a fake client in Go that works like a charm.","image":"https:\/\/gianarb.it\/img\/kubernetes.png","updated":"2020-01-10T09:08:27+00:00","published":"2020-01-10T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/unit-testing-kubernetes-client-in-go","content":"<p>I write a lot of operations and integrations with Kubernetes these days. You can\nfollow my journey in its dedicated section on this blog, <a href=\"\/planet\/assemble-kubernetes.html\">\u201cBuilding\nKubernetes\u201d<\/a>.<\/p>\n\n<p>I recently had to write a function capable of filtering pods based on assigned\nannotations.<\/p>\n\n<pre><code class=\"language-go\">\nconst (\n\tProfefeEnabledAnnotation = \"profefe.com\/enable\"\n)\n\n\/\/ GetSelectedPods returns all the pods with the profefe annotation enabled\n\/\/ filtered by the selected labels\nfunc GetSelectedPods(clientset kubernetes.Interface,\n\tnamespace string,\n\tlistOpt metav1.ListOptions) ([]v1.Pod, error) {\n\n\ttarget := []v1.Pod{}\n\tpods, err := clientset.CoreV1().Pods(namespace).List(listOpt)\n\tif err != nil {\n\t\treturn target, err\n\t}\n\tfor _, pod := range pods.Items {\n\t\tenabled, ok := pod.Annotations[ProfefeEnabledAnnotation]\n\t\tif ok &amp;&amp; enabled == \"true\" &amp;&amp; pod.Status.Phase == v1.PodRunning {\n\t\t\ttarget = append(target, pod)\n\t\t}\n\t}\n\treturn target, nil\n}\n<\/code><\/pre>\n<p>This function is pretty easy, but it has a good amount of assertions that we can\ncheck. Even more so: when we have such a well-scoped function, writing tests should be\nalmost mandatory.<\/p>\n\n<ul>\n  <li>The returned list of pods should only contain pods with the\n<code>ProfefeEnabledAnnotation<\/code> set<\/li>\n  <li>The returned list of pods should only contain pods from the specified\n<code>namespace<\/code><\/li>\n  <li>The returned list of pods should observe the filtering and label selection\ncriteria specified by <code>metav1.ListOptions<\/code><\/li>\n<\/ul>\n\n<p>Covering those use cases will give us a solid foundation to avoid regressions\nwhen this function gets more complicated (usually that\u2019s the evolution of a\nsuccessful piece of code).<\/p>\n\n<h2 id=\"kubernetes-client-mock\">Kubernetes Client Mock<\/h2>\n\n<p>Kubernetes offers a simple and powerful <code>fake<\/code> client that has a very efficient\nmechanism to simulate the desired output for a specific request, in our case\n<code>clientset.CoreV1().Pods(namespace).List(listOpt)<\/code>. You have to pass the slice\nof <code>runtime.Object<\/code> you desire when you create a new fake client. 
Awesome and\neasy.<\/p>\n\n<pre><code class=\"language-go\">clientset := fake.NewSimpleClientset(&amp;v1.Pod{\n    ObjectMeta: metav1.ObjectMeta{\n        Name:        \"influxdb-v2\",\n        Namespace:   \"default\",\n        Annotations: map[string]string{},\n    },\n}, &amp;v1.Pod{\n    ObjectMeta: metav1.ObjectMeta{\n        Name:        \"chronograf\",\n        Namespace:   \"default\",\n        Annotations: map[string]string{},\n    },\n})\n<\/code><\/pre>\n<p>For example, this <code>clientset<\/code> will return two pods, one called <code>influxdb-v2<\/code> and\none called <code>chronograf<\/code>, but you can return whatever you need: Services,\nDeployments, Ingresses, Custom Resource Definitions, or even a mix of everything.<\/p>
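\n\n<p>Putting it all together, a minimal sketch of a test for the first assertion could\nlook like this; only the <code>influxdb-v2<\/code> pod carries the annotation and is running, so\nit is the only one we expect back:<\/p>\n\n<pre><code class=\"language-go\">func TestGetSelectedPods(t *testing.T) {\n\tclientset := fake.NewSimpleClientset(&amp;v1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:        \"influxdb-v2\",\n\t\t\tNamespace:   \"default\",\n\t\t\tAnnotations: map[string]string{ProfefeEnabledAnnotation: \"true\"},\n\t\t},\n\t\tStatus: v1.PodStatus{Phase: v1.PodRunning},\n\t}, &amp;v1.Pod{\n\t\tObjectMeta: metav1.ObjectMeta{\n\t\t\tName:      \"chronograf\",\n\t\t\tNamespace: \"default\",\n\t\t},\n\t\tStatus: v1.PodStatus{Phase: v1.PodRunning},\n\t})\n\n\tpods, err := GetSelectedPods(clientset, \"default\", metav1.ListOptions{})\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif len(pods) != 1 || pods[0].Name != \"influxdb-v2\" {\n\t\tt.Fatalf(\"expected only influxdb-v2, got %v\", pods)\n\t}\n}\n<\/code><\/pre>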
\n\n<h2 id=\"in-practice\">In practice<\/h2>\n\n<p>I wrote a bunch of tests for\n<a href=\"https:\/\/github.com\/profefe\/kube-profefe\/blob\/master\/pkg\/kubeutil\/kube_test.go\">kube-profefe<\/a>\nthat are using a fake client. You can get inspiration over there.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>The <code>fake<\/code> client is easy to use, so easy that, since I added it to my toolchain for\nfunctions like the one I described here, I efficiently do <code>TDD<\/code>, because it\nmakes iterating over my code way faster.<\/p>\n"},{"title":"Continuous profiling in Go with Profefe","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-continuous-profiling-profefe"}},"description":"Taking a snapshot at the right time is nearly impossible. A very easy way to fix this issue is to have a continuous profiling infrastructure that gives you enough confidence of having a profile at the time you need it.","image":"https:\/\/gianarb.it\/img\/profefe.png","updated":"2020-01-03T09:08:27+00:00","published":"2020-01-03T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-continuous-profiling-profefe","content":"<p>There are a lot of articles about profiling in Go. Julia Evans, for example,\nwrote <a href=\"https:\/\/jvns.ca\/blog\/2017\/09\/24\/profiling-go-with-pprof\/\">\u201cProfiling Go programs with\npprof\u201d<\/a> and I rely on\nit when I do not remember how to properly use pprof.<\/p>\n\n<p>Rakyll wrote <a href=\"https:\/\/rakyll.org\/custom-profiles\/\">\u201cCustom pprof profiles\u201d<\/a>.<\/p>\n\n<p><code>pprof<\/code> is a powerful tool provided by Go that helps any developer to figure out\nwhat is going on in the Go runtime. When you see a spike in memory in your running\ncontainer, the next question is who is using all that memory. Profiles tell you\nthe answer.<\/p>\n\n<p>But they need to be grabbed at the right time. The only way to have a profile when\nyou need it is to take them continuously. Based on your application you should\nbe able to specify how often you have to gather a profile.<\/p>\n\n<p>This requires a proper infrastructure that we can call \u201ccontinuous profiling\ninfrastructure\u201d. It is made of collectors and repositories, and you need an API to\nstore, retrieve and query those profiles.<\/p>\n\n<p>When we had to set it up at InfluxData we started to craft our own one, until I\nsaw <a href=\"https:\/\/github.com\/profefe\/profefe\"><code>profefe<\/code><\/a> on GitHub. What I love about\nthe project is its clear scope. It is a repository for profiles. You can push\nthem into Profefe and it provides an API to get them out; it serves the profiles in a\nway that makes them easy to visualize directly with <code>go tool pprof<\/code>, you can even\nmerge them together, and so on. It also has a clear interface that helps you\nimplement your own storage.<\/p>\n\n<p>The project\n<a href=\"https:\/\/github.com\/profefe\/profefe\/blob\/master\/README.md\">README.md<\/a> explains well\nhow it works, but I am going to summarize the most important actions in\nthis article.<\/p>\n\n<h2 id=\"getting-started\">Getting Started<\/h2>\n\n<p>There is a docker image that you can run with the command:<\/p>\n\n<pre><code>docker run -d -p 10100:10100 profefe\/profefe\n<\/code><\/pre>\n\n<p>You can push a profile to profefe:<\/p>\n\n<pre><code>$ curl -X POST \\\n    \"http:\/\/localhost:10100\/api\/0\/profiles?service=apid&amp;type=cpu\" \\\n    --data-binary @pprof.profefe.samples.cpu.001.pb.gz\n\n{\"code\":200,\"body\":{\"id\":\"bo51acqs8snb9srq3p10\",\"type\":\"cpu\",\"service\":\"apid\",\"created_at\":\"2019-12-30T15:18:11.361815452Z\"}}\n<\/code><\/pre>\n\n<p>You can retrieve it directly via its ID:<\/p>\n\n<pre><code>$ go tool pprof http:\/\/localhost:10100\/api\/0\/profiles\/bo51acqs8snb9srq3p10\n\nFetching profile over HTTP from http:\/\/localhost:10100\/api\/0\/profiles\/bo51acqs8snb9srq3p10\nSaved profile in \/home\/gianarb\/pprof\/pprof.profefe.samples.cpu.002.pb.gz\nFile: profefe\nType: cpu\nTime: Dec 23, 2019 at 4:06pm (CET)\nDuration: 30s, Total samples = 0\n<\/code><\/pre>\n\n<p>There is a lot more you can do: when pushing a profile you can set key\/value\npairs called <code>labels<\/code>, and they can be used to query a portion of the profiles.<\/p>\n\n<p>You can use <code>env=prod|test|dev<\/code> or <code>region=us|eu<\/code> and so on.<\/p>\n\n<p>Retrieving a profile via its ID is not the only way to visualize it.\nProfefe can also merge together profiles of the same type in a specific time range:<\/p>\n\n<pre><code>GET \/api\/0\/profiles\/merge?service=&lt;service&gt;&amp;type=&lt;type&gt;&amp;from=&lt;created_from&gt;&amp;to=&lt;created_to&gt;&amp;labels=&lt;key=value,key=value&gt;\n<\/code><\/pre>\n\n<p>It returns the raw compressed binary; it is compatible with <code>go tool pprof<\/code>, just\nlike the single profile by ID.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>I didn\u2019t develop profefe; <a href=\"https:\/\/github.com\/narqo\">Vladimir (@narqo)<\/a> is the\nmaintainer. I like it and how it is coded, and I think it solves a very common\nissue. He wrote a detailed post about his project:\n<a href=\"https:\/\/medium.com\/@tvii\/continuous-profiling-and-go-6c0ab4d2504b\">\u201cContinuous Profiling and Go\u201d<\/a><\/p>\n\n<blockquote>\n  <p>Wouldn\u2019t it be great if we could go back in time to the point when the issue\nhappened in production and collect all runtime profiles. Unfortunately, to my\nknowledge, we can\u2019t do that.<\/p>\n<\/blockquote>\n\n<p>One of my colleagues, Chris Goller, wrote a self-contained AWS S3 implementation\nthat is now submitted as a PR. We have been running it for a couple of weeks now. It\nis hard to onboard developers onto a new tool, even more so during Christmas, but the\nAPI layer makes it very comfortable and friendly to use. The next article will be\nabout what we did to get it running in Kubernetes, continuously profiling our\ncontainers.<\/p>\n"},{"title":"Year in review","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/year-in-review"}},"description":"Summary about 2019, a year in review. There is not much more to say other than that! 
\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>I didn\u2019t develop profefe - <a href=\"https:\/\/github.com\/narqo\">Vladimir (@narqo)<\/a> is the\nmaintainer - but I like it and how it is coded. I think it solves a very common\nissue. He wrote a detailed post about his project,\n<a href=\"https:\/\/medium.com\/@tvii\/continuous-profiling-and-go-6c0ab4d2504b\">\u201cContinuous Profiling and Go\u201d<\/a>:<\/p>\n\n<blockquote>\n  <p>Wouldn\u2019t it be great if we could go back in time to the point when the issue\nhappened in production and collect all runtime profiles. Unfortunately, to my\nknowledge, we can\u2019t do that.<\/p>\n<\/blockquote>\n\n<p>One of my colleagues, Chris Goller, wrote a self-contained AWS S3 implementation\nthat is now submitted as a PR. We have been running it for a couple of weeks now. It\nis hard to onboard developers onto a new tool, even more so during Christmas, but the\nAPI layer makes it very comfortable and friendly to use. The next article will be\nabout what we did to get it running in Kubernetes, continuously profiling our\ncontainers.<\/p>\n"},{"title":"Year in review","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/year-in-review"}},"description":"Summary about 2019, a year in review. There is not much more to say other than that! Happy new year!","image":"https:\/\/gianarb.it\/img\/me.jpg","updated":"2019-12-30T06:08:27+00:00","published":"2019-12-30T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/year-in-review","content":"<p>I can\u2019t say 2019 was great from a working point of view. I struggled a lot and I\nprobably had a small and dirty burnout. I learned about myself along the\nway. A few folks who joined me at Influx left; I didn\u2019t really enjoy that\ntime, and the massive growth the company went through didn\u2019t help me. I had some\ndifficulty finding my place, and since then I didn\u2019t really find the right\nmotivation to be as productive as my passion for this job usually pushes me to be.<\/p>\n\n<p>Luckily I am surrounded by friends and colleagues with an open mind: I just had\nto ask and speak up about my feelings, and I always got useful conversations\nback. I am sure every situation is different, but other people have experienced\nsimilar situations and it is great to have them around. I think it is\ngetting better: I work in a different team now, the amount of YAML I have\nto write is decreasing, and I am back to writing bugs and fixing them.<\/p>\n\n<h2 id=\"open-source-is-about-collaboration\">Open Source is about collaboration<\/h2>\n\n<p>Open source is part of my daily job, and the reasons why are well explained in\nmy first podcast ever! You can find it on <a href=\"https:\/\/www.stitcher.com\/podcast\/the-new-stack-makers\/e\/60409328?autoplay=true\">The New\nStack<\/a>.<\/p>\n\n<p>TLDR: since day one I learned how to write code by pinging people on IRC. Different\ncommunities helped me improve in my daily job as nobody else ever did. That\u2019s why\nopen source is part of me and I can\u2019t live without it. Even more now that I\nhave something to give back.<\/p>\n\n<p>This year I stopped doing small projects all alone on my GitHub profile. It was\na natural decision, not one I took on purpose. Even more so when I discovered\nthat my shitty useless code is gonna destroy the Arctic because <a href=\"https:\/\/www.youtube.com\/watch?v=fzI9FNjXQ0o\">GitHub spams it\nthere<\/a>.<\/p>\n\n<p>I had the opportunity to discover a community called\n<a href=\"https:\/\/github.com\/testcontainers\">testcontainers<\/a>. They do cool things, and you may\nknow about them because I wrote <a href=\"\/blog\/testcontainers-go\">\u201ctestcontainer library to programmatically\nprovision integration tests in Go with containers\u201d<\/a>, I\ntweet a lot about it, and I spoke at DockerCon about the same topic: <a href=\"https:\/\/www.youtube.com\/watch?v=RoKlADdiLmU\">\u201cWrite\nMaintainable Integration Tests with\nDocker\u201d<\/a>.<\/p>\n\n<p>Recently at Influx we were looking for a way to set up a continuous profiling\ninfrastructure. Some work is still ongoing, but Vladimir wrote a nice open source\nproject called <a href=\"https:\/\/github.com\/profefe\/profefe\">profefe<\/a>, we deployed it, and\nI wrote a Kubernetes integration called\n<a href=\"https:\/\/github.com\/profefe\/kube-profefe\">kube-profefe<\/a>. It is now part of the\nprofefe organization and I am planning to write a series of posts about it, so\nstay tuned!<\/p>\n\n<p>Joining ongoing communities and projects that you LIKE and USE is way better than\nwriting something alone that probably looks similar to something that already\nexists. 
It is not easy - you have to read more code, and you have to reach out to other\npeople who may be busy - but I will keep doing it!<\/p>\n\n<h2 id=\"meetups--conferences\">Meetups &amp; Conferences<\/h2>\n\n<p>I run the CNCF Meetup in Turin, so let me know if you would like to speak! I do it\nbecause I work from home for a company based in San Francisco. They are far away\nand I spend a lot of my working hours by myself. My local community helps me\ndevelop great connections with people close to me! Ordering pizza, finding\nlocations, speakers and sponsors are unusual tasks that I enjoy. All the videos are\navailable on <a href=\"https:\/\/www.youtube.com\/channel\/UCke-1vle73H9Dy4ojdfLw5A\">YouTube<\/a>;\nsome of them are in English, others are not.<\/p>\n\n<p>This year the plan is to run a nomadic meetup: we will move from office to office in\norder to meet more people and to visit cool companies or startups.<\/p>\n\n<p>Would you like to sponsor, speak, or host us?! Reach out to\n<a href=\"mailto:ciao@gianarb.it\">ciao@gianarb.it<\/a> (Turin locations only).<\/p>\n\n<p>I gave way too many talks during the first part of the year (11, by my count), and the\ndifficulties I had at work convinced me to take a break. I haven\u2019t taken any\nflight since June (almost). I feel recharged now, but I will keep the\nnumber of events low this year. I would like to write more and to do more podcasts.\nDo you host one? Let me know!<\/p>\n\n<h2 id=\"write\">Write<\/h2>\n\n<p>I wrote 26 articles. I am impressed by the number now that I see it. My articles\ncome from what I build, so I need to keep doing fun projects in order to have\nsomething useful to write about. I will probably stay focused on extending Kubernetes\nbecause I like how dynamic the code is. I would like to keep experimenting with\n<a href=\"\/blog\/reactive-planning-and-reconciliation-in-go\">reconciliation loops and reactive\nplanning<\/a> and to study Control\nTheory because <a href=\"\/blog\/control-theory-is-dope\">\u201cit is dope\u201d<\/a>.<\/p>\n\n<p><a href=\"https:\/\/www.cherryservers.com\/?utm_source=garb&amp;utm_medium=ftr&amp;utm_campaign=drs\">CherryServers<\/a>\nis a cloud provider that I met at ContainerDays in Hamburg, and since then we have\nloved each other! I can ping them and have fun on their platform as much as I\nlike, and this is great. They do not have a Kubernetes story yet; let\u2019s see if we\ncan do something about it! If you need to write an operator or a CSI plugin\n(persistent storage), or who knows, even a Cluster API implementation, let me\nknow!<\/p>\n\n<p>My first collaboration with a publisher was good but not excellent. I wrote a\nreport for O\u2019Reilly called <a href=\"https:\/\/get.oreilly.com\/ind_extending-kubernetes.html\">\u201cExtending Kubernetes\u201d<\/a>. I didn\u2019t get any\ninformation from them about how it is going, but from what they told me that is\nnormal practice for \u201ca report\u201d.\nI call it \u201cnot excellent\u201d because it did not feel like a collaboration; it\nwas a one-shot effort. I am happy to see it live because I like to write, but\nit is not my best skill. This collaboration helped me raise the bar.<\/p>\n\n<p>In 2020, as I did for open source, I would like to collaborate with other people,\nmaybe to write another book. Something is already moving, but let me know if you\nhave any ideas.<\/p>\n\n<h2 id=\"2020\">2020!<\/h2>\n\n<p>If I have to pick one word to describe 2019 I will use <code>join<\/code>. I <code>joined<\/code> a lot of\ngreat people and teams, embracing what they care about or were working on. 
I loved\nthat. I hope to keep doing it with the help of the communities I am part of, like\nDocker, observability, Kubernetes, CNCF, testcontainers. I hope to join more\npeople who share my passions, in order to improve and build something together.<\/p>\n\n<p>In order to do that I need you all around! Reach out\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>.<\/p>\n\n<h2 id=\"home-sweet-home\">Home sweet home!<\/h2>\n\n<p>We have a new project! The most important one! We bought a house and there is a\nlot of work to do! Look at how this wall is going down! Made in YAML.<\/p>\n\n<div class=\"row\">\n    <div class=\"col-md-6 offset-md-3\">\n        <video class=\"embed-responsive\" controls=\"\">\n          <source class=\"embed-responsive-item\" src=\"\/img\/destroy-home.mp4\" type=\"video\/mp4\" \/>\n        <\/video>\n    <\/div>\n<\/div>\n"},{"title":"Free PDFs about Docker from a Captain","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/scaledocker"}},"description":"This article contains the ebooks listed in scaledocker.com. That link won't be available forever and I decided to move it here. If you are a beginner and you are happy to read about docker I got you covered. Security? There are 22 pages available for you as well. Enjoy.","image":"https:\/\/gianarb.it\/img\/the-fundamentals.jpg","updated":"2019-12-18T06:08:27+00:00","published":"2019-12-18T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/scaledocker","content":"<p><img src=\"\/img\/mainbg_scaledocker.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>While making some order across all my repositories, side projects and so on,\nmy terminal crossed paths with a project I made in 2017 called scaledocker. The\nidea was a full book. It turned out to be two PDFs: one describing Docker for a\ntotal beginner, the second one about security.<\/p>\n\n<p>Looking back I see why I am so happy to build and share what I do - I am still\nlike that today!<\/p>\n\n<p>I was discussing with Jenny what to do with this project, and we thought about\nthe possibility of refreshing that content as part of a series on docker.com.<\/p>\n\n<p>I had a look at this scenario: the content is still good, but I can\u2019t\nfind the raw version of the PDFs, which makes that work very hard. That\u2019s why I\ndecided to archive them here.<\/p>\n\n<p class=\"lead\">scaledocker.com will stay up and running, redirecting everybody to this article,\nwhere you can find both PDFs. The domain is for sale: just <a href=\"mailto:ciao@gianarb.it\">make an\noffer<\/a> and you can have it starting from Sept 2020.<\/p>\n\n<p>This is not the end at all! There is a whole section of my blog related to\n<a href=\"\/planet\/docker.html\">docker<\/a>, and who knows what I will work on in 2020! 
I just\ndo not like the idea of this project keeping a domain busy; I bet there are\npeople who can make better use of it than I tried to a few years ago!<\/p>\n\n<p>My website is a good way to stay in touch with me, and I ramble a lot on\n<a href=\"https:\/\/twitter.com\/gianarb\">twitter<\/a> as well!<\/p>\n\n<h2 id=\"docker-the-fundamental\">Docker the Fundamental<\/h2>\n\n<p class=\"small\">25 pages<\/p>\n\n<p><img src=\"\/img\/the-fundamental-cover.jpg\" alt=\"\" class=\"img-fluid w-25 p-3 float-right\" \/><\/p>\n\n<p>A getting-started guide to Docker: it covers basic concepts about what a container\nis, and it\u2019s a starting point for understanding containers, Docker and the whole\necosystem.<\/p>\n\n<ol>\n  <li>Introduction<\/li>\n  <li>Install Docker on Ubuntu 16.04<\/li>\n  <li>Install Docker on Mac<\/li>\n  <li>Install Docker on Windows<\/li>\n  <li>Run your first HTTP application<\/li>\n  <li>Docker engine architecture<\/li>\n  <li>Image and Registry<\/li>\n  <li>Docker Command Line Tool<\/li>\n  <li>Volumes and File Systems<\/li>\n  <li>Network and Links<\/li>\n  <li>Conclusion<\/li>\n<\/ol>\n\n<p><a href=\"\/downloads\/the-fundamental.pdf\" target=\"_blank\">Open the pdf<\/a><\/p>\n\n<h2 id=\"docker-security---play-safe\">Docker Security - Play Safe<\/h2>\n\n<p class=\"small\">55 pages<\/p>\n\n<p><img src=\"\/img\/container-security.png\" alt=\"\" class=\"img-fluid w-25 p-3 float-right\" \/><\/p>\n\n<p>When you think about production, everything stops being a joke. Do containers, cloud\ncomputing and scalability fit well with security? I have my own answer to this\nquestion, and with this paper I am going to show what I mean by security and how\ncontainers, docker and Linux can make it real.<\/p>\n\n<ol>\n  <li>Introduction<\/li>\n  <li>Mutual TLS and Security by default<\/li>\n  <li>Content Trust<\/li>\n  <li>Overlay Network<\/li>\n  <li>Docker Bench Security<\/li>\n  <li>Process Restriction and Capabilities<\/li>\n  <li>Open Source<\/li>\n  <li>Linux Kernel Security<\/li>\n  <li>Cilium<\/li>\n  <li>About your images<\/li>\n  <li>Secret Manager<\/li>\n  <li>Immutability<\/li>\n<\/ol>\n\n<p><a href=\"\/downloads\/play-safe.pdf\" target=\"_blank\">Open the pdf<\/a><\/p>\n\n<h2 id=\"lets-move-on\">Let\u2019s move on!<\/h2>\n\n<p>I hope this project will keep helping new folks get on board with Docker, or\nfigure out what they can do to improve security.<\/p>\n\n<p>From what I can tell, more than 2200 people requested those PDFs! I am happy and\nimpressed. I can\u2019t wait to see what we will do next!<\/p>\n\n<p class=\"font-weight-bold\">A big thanks to\n<a href=\"https:\/\/www.cherryservers.com\/?utm_source=garb&amp;utm_medium=ftr&amp;utm_campaign=drs\">CherryServer<\/a>\nfor hosting the reverse proxy for scaledocker.com.<\/p>\n"},{"title":"Programmatically Kubernetes port forward in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/programmatically-kube-port-forward-in-go"}},"description":"Depending on your networking configuration, port forwarding may be the only way for you to reach pods or services running in Kubernetes. 
When you develop a CLI integration that has to interact with pods running inside the cluster, you can programmatically do port forwarding in Go.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-12-05T06:08:27+00:00","published":"2019-12-05T06:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/programmatically-kube-port-forward-in-go","content":"<p>Along the way I have seen at least two different ways to manage Kubernetes clusters\nfrom a networking perspective. Some companies configure a VPN into the\nKubernetes cluster; this way a developer connected to the VPN can reach pods\nand services.<\/p>\n\n<p>It is not mandatory, but good network segmentation is suggested in order\nto manage what a person connected to the VPN can touch and see.\nAchieving this level of control is not easy in Kubernetes - a lot of the open\nsource CNI plugins do not have this feature at all - and I understand why\noperations may not evaluate this as a safe approach. It is very convenient if you\nclose an eye, because pods and services are just IPs that you can reach from your\nlaptop, and if you configure the VPN to push the Kubernetes DNS you can also\nresolve them via DNS lookup.<\/p>\n\n<p>The alternative I saw is to lock everybody out, leaving the command\n<code>kubectl port-forward<\/code> as the only way to interact with a service or a pod.\nThis way, the authentication and authorization mechanisms in Kubernetes allow\nyou to decide who can do port forwarding on what, based on namespace for example.\nOr at least you can use Kubernetes audit logs to figure out who did port\nforwarding if something bad happens.<\/p>\n\n<p>We tried both ways. I was the one pushing for the first, but we never achieved\ngood segmentation, and at some point I got locked out, sad as it sounds. Anyway,\nI like to automate things, and I had to figure out a way to make my scripts\nwork with this new approach.<\/p>\n\n<p><img src=\"\/img\/sub.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>I started to dig into the <code>kubectl<\/code> code, because we all know that it is capable of\ndoing port forwarding. I had some trouble figuring out the right parameters\nand making them work, but in the end I did it! So here we are! If I can do it,\nyou can do it as well!<\/p>\n\n<p>The main repository with the code and an example is\n<a href=\"https:\/\/github.com\/gianarb\/kube-port-forward\">github.com\/gianarb\/kube-port-forward<\/a>;\nyou can run it from there. I am gonna explain it a bit here.<\/p>\n\n<p>It is a simple CLI that mimics what <code>kubectl port-forward<\/code> already does, but I\nextracted the code needed to run and control a port forward. I will write\nhere as soon as the reason why I did that is open source - I am telling you\nright now, STAY TUNED! It will be great!<\/p>\n\n<p>First of all I used the <code>k8s.io\/cli-runtime\/pkg\/genericclioptions<\/code> library to\nconfigure a stream, which we already used in the <a href=\"\/blog\/kubectl-flags-in-your-plugin\">blog post about writing a CLI that\nuses the same flags as kubectl<\/a>. 
A\nstream is a <code>struct<\/code> used by different <code>kubernetes<\/code> services when they need to\nread or print information from a stream. In this case I am using <code>os.Stdout<\/code>,\n<code>os.Stdin<\/code> and <code>os.Stderr<\/code> for simplicity, but where I do not need to print the\noutput I use a <code>bytes.Buffer<\/code> like this:<\/p>\n\n<pre><code class=\"language-go\">var berr, bout bytes.Buffer\nbuffErr := bufio.NewWriter(&amp;berr)\nbuffOut := bufio.NewWriter(&amp;bout)\n<\/code><\/pre>\n\n<p>In order to make this code easy to read I wrote a structure to request the port\nforwarding for a pod:<\/p>\n\n<pre><code class=\"language-go\">type PortForwardAPodRequest struct {\n\t\/\/ RestConfig is the kubernetes config\n\tRestConfig *rest.Config\n\t\/\/ Pod is the selected pod for this port forwarding\n\tPod v1.Pod\n\t\/\/ LocalPort is the local port that will be selected to expose the PodPort\n\tLocalPort int\n\t\/\/ PodPort is the target port for the pod\n\tPodPort int\n\t\/\/ Streams configures where to write output or read input from\n\tStreams genericclioptions.IOStreams\n\t\/\/ StopCh is the channel used to manage the port forward lifecycle\n\tStopCh &lt;-chan struct{}\n\t\/\/ ReadyCh communicates when the tunnel is ready to receive traffic\n\tReadyCh chan struct{}\n}\n<\/code><\/pre>\n\n<p>And I wrote the function that actually does the port forward:<\/p>\n\n<pre><code class=\"language-go\">func PortForwardAPod(req PortForwardAPodRequest) error {\n\tpath := fmt.Sprintf(\"\/api\/v1\/namespaces\/%s\/pods\/%s\/portforward\",\n\t\treq.Pod.Namespace, req.Pod.Name)\n\t\/\/ TrimLeft takes a set of characters, so \"htps:\/\" strips both the\n\t\/\/ \"http:\/\/\" and \"https:\/\/\" prefixes from the host.\n\thostIP := strings.TrimLeft(req.RestConfig.Host, \"htps:\/\")\n\n\ttransport, upgrader, err := spdy.RoundTripperFor(req.RestConfig)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tdialer := spdy.NewDialer(upgrader, &amp;http.Client{Transport: transport}, http.MethodPost, &amp;url.URL{Scheme: \"https\", Path: path, Host: hostIP})\n\tfw, err := portforward.New(dialer, []string{fmt.Sprintf(\"%d:%d\", req.LocalPort, req.PodPort)}, req.StopCh, req.ReadyCh, req.Streams.Out, req.Streams.ErrOut)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn fw.ForwardPorts()\n}\n<\/code><\/pre>\n<p>An exercise I can leave for you is to add Service support to this function;\nyou can open a PR if you like on\n<a href=\"https:\/\/github.com\/gianarb\/kube-port-forward\">github.com\/gianarb\/kube-port-forward<\/a>.<\/p>\n\n<p>The <code>Stop<\/code> and <code>Ready<\/code> channels are crucial to managing the port forward because,\nas you can see in the example, it is a blocking operation, which means it will\nlikely always run inside a goroutine. Those two channels give you what you\nneed: <code>ReadyCh<\/code> tells you when the port forward is ready to receive traffic, and\n<code>StopCh<\/code> gives you the ability to stop it.<\/p>\n\n<p>My example is basic: I close the port forwarding when the <code>SIGTERM<\/code> signal\ngets notified:<\/p>\n\n<pre><code class=\"language-go\">sigs := make(chan os.Signal, 1)\nsignal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)\ngo func() {\n    &lt;-sigs\n    fmt.Println(\"Bye...\")\n    close(stopCh)\n    wg.Done()\n}()\n<\/code><\/pre>\n\n<p>And I just wait until the readyCh tells me that the connection is\nup and running:<\/p>\n\n<pre><code class=\"language-go\">select {\ncase &lt;-readyCh:\n    break\n}\nprintln(\"Port forwarding is ready to get traffic. have fun!\")\n<\/code><\/pre>\n
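\n<p>Putting it all together, a minimal caller could look like the sketch below. This\nis my own wiring of the snippets above, not code from the repository: <code>pod<\/code> and\n<code>streams<\/code> are placeholders for a pod you looked up and the stream configured\nearlier, and error handling is trimmed:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Hypothetical wiring of the pieces above; error handling trimmed.\nstopCh := make(chan struct{}, 1)\nreadyCh := make(chan struct{})\n\n\/\/ Build the rest.Config from a local kubeconfig.\nconfig, err := clientcmd.BuildConfigFromFlags(\"\", \"\/home\/gianarb\/.kube\/config\")\nif err != nil {\n    panic(err)\n}\n\ngo func() {\n    \/\/ PortForwardAPod blocks until stopCh gets closed.\n    err := PortForwardAPod(PortForwardAPodRequest{\n        RestConfig: config,\n        Pod:        pod,     \/\/ a v1.Pod you looked up earlier\n        LocalPort:  8080,\n        PodPort:    8080,\n        Streams:    streams, \/\/ the genericclioptions.IOStreams from above\n        StopCh:     stopCh,\n        ReadyCh:    readyCh,\n    })\n    if err != nil {\n        panic(err)\n    }\n}()\n<\/code><\/pre>\n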
\n<p>As soon as I coded this feature I saw that it was going to be an easy but useful\npost. I wrote a <a href=\"\/blog\/extending-kubernetes-oreilly\">report with O\u2019Reilly<\/a> about\nhow to extend Kubernetes; you can find more about Go and Kube there. It is a\nfree PDF.<\/p>\n\n<p>I hope you enjoyed it, and <a href=\"https:\/\/twitter.com\/gianarb\">let me know<\/a> what cool\nthings you are gonna do port-forwarding the universe!<\/p>\n"},{"title":"OpenTelemetry the instrumentation library, I hope","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/opentelemetry-the-instrumentation-library-i-hope"}},"description":"OpenTelemetry, OpenCensus, OpenTracing, Open your heart","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-11-20T08:08:27+00:00","published":"2019-11-20T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/opentelemetry-the-instrumentation-library-i-hope","content":"<p>Hello! If you follow my rambling here or on\n<a href=\"https:\/\/twitter.com\/gianarb\">twitter<\/a> you know that I like to speak about\nobservability and tracing.<\/p>\n\n<p>If you don\u2019t know what I am talking about, here is an\n<a href=\"\/blog\/faq-distributed-tracing\">FAQ<\/a> about distributed tracing and something\nabout <a href=\"\/blog\/what-is-distributed-tracing-opentracing-opencensus\">OpenTracing and OpenCensus<\/a>.<\/p>\n\n<p>Observability is the ability to figure out what is going on in your application\nfrom the outside. To do that, you need to instrument your applications\nso that they expose the right information.<\/p>\n\n<p>Instrumentation is not easy: there are too many developers, too many\nopinions, too many languages. But in order to observe a system that crosses all the\napplications and services, everything needs to come together in the same way;\notherwise the aggregation becomes a very complicated job.<\/p>\n\n<p>When you instrument an application there is a lot of code to write and inject,\nand you cannot redo or change it for every vendor or service you use to\nstore your telemetry: Zipkin, InfluxDB, NewRelic, HoneyComb and so on.<\/p>\n\n<p>That\u2019s why over the last couple of years big foundations and companies such as\nLightStep, Google, CNCF and Uber have tried to get their hands on the democratization\nof code instrumentation. First with OpenTracing, after that with OpenCensus, and\nnow with OpenTelemetry, which is the merger of OpenCensus and OpenTracing.<\/p>\n\n<p>At the beginning, when this project came out, I was very tired and stressed about\nthe topic. I ran a workshop last year about code instrumentation at <a href=\"https:\/\/cloudconf.it\">the\nCloudConf<\/a>, and I wish it had been easier to prepare and\ndevelop. In the end the attendees were satisfied, btw, so I am happy enough.<\/p>\n\n<p>From the beginning I had a very bad feeling about OpenTracing and OpenCensus:\na project like this is necessary, but the fact that we had two ways of doing it,\nbecause they didn\u2019t want to agree on only one, was unbelievable to me.<\/p>\n\n<p>Anyway, now that I have pushed that feeling back, I will give it another try. I will\ntake my <a href=\"https:\/\/github.com\/gianarb\/workshop-observability\">observability\nworkshop<\/a> and refresh\nit to use OpenTelemetry because, as I said, we need a way to instrument\napplications cross-vendor and cross-language.<\/p>
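\n<p>To give a feeling of what vendor-neutral instrumentation looks like in Go, here is\na minimal sketch using the current <code>go.opentelemetry.io\/otel<\/code> API - which has\nevolved a lot since this post was written, so treat it as illustrative:<\/p>\n\n<pre><code class=\"language-go\">import (\n    \"context\"\n\n    \"go.opentelemetry.io\/otel\"\n)\n\nfunc handleRequest(ctx context.Context) {\n    \/\/ The tracer API is vendor neutral: the configured exporter decides\n    \/\/ whether spans end up in Zipkin, Honeycomb, or anywhere else.\n    ctx, span := otel.Tracer(\"my-service\").Start(ctx, \"handle-request\")\n    defer span.End()\n\n    \/\/ ... business logic instrumented once, regardless of the backend.\n    _ = ctx\n}\n<\/code><\/pre>\n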
\n<p>Here are some links about it:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/opentelemetry.io\/\">opentelemetry.io<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/open-telemetry\">github.com\/open-telemetry<\/a><\/li>\n  <li><a href=\"https:\/\/lists.cncf.io\/g\/cncf-opentelemetry-community\">Mailing List<\/a><\/li>\n<\/ul>\n\n<p>At KubeCon 2019 <a href=\"https:\/\/twitter.com\/lizthegrey\">lizthegrey<\/a> gave a demo about\nOpenTelemetry, and I am confident that my experience will be a bit better this time.<\/p>\n\n<p>It is not easy to democratize something, even less so when you need to change\ndevelopers\u2019 habits across programming languages. But that\u2019s the goal of\nOpenTelemetry, and I think we need to get there and make it a commodity. It is\nnot a joke!<\/p>\n\n<p>If you would like to help me, let me know!<\/p>\n"},{"title":"o11y.guru introduction and first set of iterations","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/o11yguru-introduction"}},"description":"Part of the o11y.guru series, this post is an introduction to this side project and it describes the first architecture designed for the website.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-11-09T08:08:27+00:00","published":"2019-11-09T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/o11yguru-introduction","content":"<div class=\"jumbotron\">\n\n  <h2 id=\"introduction\">Introduction<\/h2>\n\n  <p>This article is part of a series I am writing about a\nside project I made called <a href=\"https:\/\/o11y.guru\">o11y.guru<\/a>. Who knows what this\nseries or the project itself will become. The reason why I started it was to\nhave my <strong>wonderland<\/strong>: a place where I can make my own mistakes without any\nintermediary.<\/p>\n\n  <p>This series is about my journey there. I will keep this <code>Introduction<\/code>\ncommon to all the articles in this series, and I will keep a <code>Table of\nContents<\/code> up to date. The best way to follow my journey here is to subscribe to\nthe RSS feed or to follow me on <a href=\"https:\/\/twitter.com\/gianarb\">@twitter<\/a>.<\/p>\n\n  <h3 id=\"what-is-this-project\">what is this project?<\/h3>\n\n  <p>I had this idea to create a mechanism that enables people who use twitter\nto follow a group of leaders in a particular space. I decided to start with\nobservability (#o11y) because it is the area I am in at the moment.<\/p>\n\n  <p><a href=\"https:\/\/o11y.guru\">o11y.guru<\/a> is pretty simple: a list of faces and a\nbutton; you press it, and if you authorize the twitter application you\nwill get to follow them.<\/p>\n\n  <h3 id=\"table-of-content\">Table of Contents<\/h3>\n\n  <ol>\n    <li><a href=\"\/blog\/o11yguru-introduction\">First day and first set of iterations<\/a><\/li>\n    <li>Build process and automation driven by simplicity<\/li>\n    <li>Monitoring and instrumentation with Honeycomb<\/li>\n    <li><a href=\"\/blog\/o11yguru-history-first-bug\">The history of the first bug<\/a><\/li>\n    <li>OpenTelemetry: it is time to embrace a unicorn standard<\/li>\n    <li>The magic of structured logging<\/li>\n    <li>Infrastructure monitoring with InfluxCloud<\/li>\n    <li>Infrastructure as code with Terraform and CherryServer. 
First deploy.<\/li>\n    <li>FAQ<\/li>\n  <\/ol>\n\n<\/div>\n\n<p><img src=\"\/img\/o11y-guru-series\/index.png\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>o11y.guru is a website that I wrote in Go. It lists a group of people, active on\ntwitter, whom I like to follow around monitoring, reliability and observability. It\nalso allows you to follow them all at once.<\/p>\n\n<p>A few years ago, when I was developing almost exclusively in PHP, somebody from the\ncommunity - I don\u2019t remember who it was, I am getting old - made a similar website\nand I thought it was a great idea.<\/p>\n\n<p>Since then, the project stayed in the back of my mind. I am a lazy person when it\ncomes to writing code. I think there is enough useless code around, and I don\u2019t\nwant to incentivize that practice. That\u2019s why I tend to write as little code as I can.<\/p>\n\n<p>There are a bunch of reasons why I changed my mind and started to do it:<\/p>\n\n<p>A better mood, and I had wanted to try Honeycomb.io but never had the right\nopportunity - I didn\u2019t want to try it with a demo running on my laptop. I\nhave a couple of new friends from CherryServer who support my crazy ideas, and\nI was looking for a reason to glue together a bunch of reliable infrastructure as code\nthat I actually like. Even if it is usually a task I hate, mainly because there is no code\ninvolved.<\/p>\n\n<p>As a Docker Captain and CNCF Ambassador, I have the feeling that a project like\nthis can be re-used.<\/p>\n\n<p>I made the mistake that everyone does; I started to think about cool technologies\nand not the problem I was going to solve or the project I was going to write.<\/p>\n\n<p>I made a react application - just the folder structure - and I quickly realized that I do not\nknow how to React, and I was wasting my time. But for this project, I got lucky\nenough to keep going. I started to think about the problem again, and I decided\nto make it as simple as possible. In practice, almost everything is generated\nfrom an html template, driven by a list of names in a <code>txt<\/code> file. Very easy!<\/p>\n\n<pre><code>.\n\u251c\u2500\u2500 cmd\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 generate\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 www\n\u251c\u2500\u2500 Dockerfile\n\u251c\u2500\u2500 go.mod\n\u251c\u2500\u2500 go.sum\n\u251c\u2500\u2500 index.tmpl\n\u251c\u2500\u2500 Makefile\n\u251c\u2500\u2500 people\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 people.go\n\u251c\u2500\u2500 people.txt\n\u251c\u2500\u2500 style\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 css\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 fonts\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 img\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index.html\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 js\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 node_modules\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 package.json\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 package-lock.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 scss\n\u251c\u2500\u2500 vendor\n\u2514\u2500\u2500 www\n<\/code><\/pre>\n\n<p>You can see the shape of the project; it involves a minimal set of technologies:\nGo, HTML, Bootstrap 4 and a bit of Javascript. I started from the\n<code>style<\/code> directory. I use it for prototyping the HTML and CSS part. I am far from\ngood with colors and CSS, and I am cool with that - we do not like each other.\nSo I do all my tests there, and when I am ready, I port <code>style\/index.html<\/code>\ninto <code>index.tmpl<\/code>. I started from a ready-made Bootstrap 4 layout, as you can\nsee. 
It is in their documentation.<\/p>\n\n<p><img src=\"\/img\/o11y-guru-series\/sheldon.jpeg\" alt=\"\" \/><\/p>\n\n<p><code>index.tmpl<\/code> is the template I use to render the actual homepage. <code>www<\/code> is the\ntarget destination for all the static files and the generated index page. I use\nMake to copy files from <code>style<\/code> into <code>www<\/code>, and I wrote a CLI that generates the\nHTML and populates it with Twitter information. It lives inside <code>.\/cmd\/generate<\/code>.<\/p>\n\n<p><code>.\/people.txt<\/code> is the list of twitter gurus. It is just a list:<\/p>\n\n<pre><code>gianarb\nrakyll\n<\/code><\/pre>\n\n<p>The <code>cmd\/generate<\/code> command reads that file, gets the information it needs from\nthe Twitter API - like user bio and avatar - and renders <code>.\/index.tmpl<\/code>\ninto the actual index inside the <code>www<\/code> folder.<\/p>\n\n<p><code>.\/cmd\/www<\/code> is an HTTP server written in Go that serves the content of the <code>www<\/code>\ndirectory. Plus, it uses:<\/p>\n\n<pre><code>github.com\/dghubble\/go-twitter\ngithub.com\/dghubble\/gologin\/v2\n<\/code><\/pre>\n\n<p>to manage the Twitter authentication flow.<\/p>\n\n<p>I am sure you are wondering, \u201cis he gonna open-source that!?\u201d. I am. Not now.\nThe project needs refactoring, and some code needs to get stronger around\ninstrumentation and logging. As you can see in the introduction, I am using this\nexperience as a use case to write down a bunch of practices I like or that I\nwould like to investigate.\nSo stay tuned! It will be available very soon.<\/p>\n\n<h2 id=\"tldr-lesson-learned\">tldr lesson learned<\/h2>\n\n<p>Some of the lessons I learned come from how I am - but hey, this is my blog, I can\ndo whatever I like!\nIt is refreshing to start a project, but <strong>it is way cooler to have something to\nshow<\/strong>. So be careful when you start it: get it right, so you won\u2019t get tired.<\/p>\n\n<p><strong>Set clear goals<\/strong> - and see point 1 - they need to be very easy to\nachieve, at least at the beginning.<\/p>\n\n<p><strong>Do not type on your terminal, but write bash scripts.<\/strong> I started doing this\nmonths ago at work. Bash scripts are way better than random commands in a\nterminal because you can move them around, composing way more powerful workflows.\nYou won\u2019t lose them. 
That\u2019s how I built my Makefile: just from the terminal\nhistory, or from the shortcuts I made along the way.\nOften a well-done <strong>dotenv file is enough to manage everything you need<\/strong>.<\/p>\n\n<h2 id=\"lets-get-to-some-code\">let\u2019s get to some code<\/h2>\n\n<p>I told you about bash scripts and Makefiles; I will write a post about automation\nfor a small project, but this is part of my Makefile:<\/p>\n\n<pre><code>style\/build:\n    cd .\/style &amp;&amp; npm install\n    cp -r .\/style\/node_modules\/jquery\/dist\/jquery.js .\/style\/js\/jquery.js\n    cp .\/style\/node_modules\/@fortawesome\/fontawesome-free\/js\/all.js .\/style\/js\/\n    cp -r .\/style\/node_modules\/@fortawesome\/fontawesome-free\/webfonts .\/style\/fonts\n    cd .\/style &amp;&amp; npm run scss\n\nstyle\/start: style\/build\n    cd .\/style &amp;&amp; npm start\n\nstyle\/compile: style\/build\n    rm -rf .\/www\n    mkdir .\/www\n    cp -r .\/style\/img .\/www\n    cp -r .\/style\/fonts .\/www\n    cp -r .\/style\/css .\/www\n    cp -r .\/style\/js .\/www\n<\/code><\/pre>\n\n<p>People can do the same with npm and a hundred node packages; I like to keep\nthings simple at this point, to avoid unnecessary blockers that would get me\ntired. This is how I manage the <code>style<\/code> directory and how I build the <code>www<\/code> one.<\/p>\n\n<blockquote>\n  <p>By blockers I mean: googling around for things that should be easy.<\/p>\n<\/blockquote>\n\n<pre><code class=\"language-go\">flag.StringVar(&amp;flags.consumerKey, \"consumer-key\", \"\", \"Twitter Consumer Key\")\nflag.StringVar(&amp;flags.consumerSecret, \"consumer-secret\", \"\", \"Twitter Consumer Secret\")\nflag.StringVar(&amp;flags.accessToken, \"access-token\", \"\", \"Twitter access key\")\nflag.StringVar(&amp;flags.accessSecret, \"access-secret\", \"\", \"Twitter access secret\")\nflag.StringVar(&amp;flags.guruFile, \"guru-file\", \"\", \"File that contains the gurus' names\")\nflag.StringVar(&amp;flags.indexTemplate, \"index-template\", \"\", \"File that contains the index template\")\nflag.Parse()\nflagutil.SetFlagsFromEnv(flag.CommandLine, \"TWITTER\")\n\nconfig := oauth1.NewConfig(flags.consumerKey, flags.consumerSecret)\ntoken := oauth1.NewToken(flags.accessToken, flags.accessSecret)\nhttpClient := config.Client(oauth1.NoContext, token)\n\n\/\/ Twitter client\nclient := twitter.NewClient(httpClient)\n\n\/\/ Verify Credentials\nverifyParams := &amp;twitter.AccountVerifyParams{\n    SkipStatus:   twitter.Bool(true),\n    IncludeEmail: twitter.Bool(true),\n}\n_, _, err := client.Accounts.VerifyCredentials(verifyParams)\nif err != nil {\n    println(err.Error())\n    os.Exit(1)\n}\n\ngurus := []*twitter.User{}\n\nlines, err := people.ReadLineByLine(flags.guruFile)\nif err != nil {\n    println(err.Error())\n    os.Exit(1)\n}\nfor _, eachline := range lines {\n    user, _, err := client.Users.Show(&amp;twitter.UserShowParams{\n        ScreenName: eachline,\n    })\n<\/code><\/pre>\n\n<p>The generate command is straightforward: I go over the <code>people.txt<\/code> file line\nby line, and for every record I get information about the user.<\/p>
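\n<p>The snippet above relies on a small helper, <code>people.ReadLineByLine<\/code>, that is not\nshown in the post. A minimal version of it could look like this - my sketch, not\nnecessarily the real implementation (it needs <code>bufio<\/code>, <code>os<\/code> and <code>strings<\/code>):<\/p>\n\n<pre><code class=\"language-go\">\/\/ ReadLineByLine returns the non-empty lines of a file as a slice of strings.\nfunc ReadLineByLine(path string) ([]string, error) {\n    f, err := os.Open(path)\n    if err != nil {\n        return nil, err\n    }\n    defer f.Close()\n\n    var lines []string\n    scanner := bufio.NewScanner(f)\n    for scanner.Scan() {\n        if line := strings.TrimSpace(scanner.Text()); line != \"\" {\n            lines = append(lines, line)\n        }\n    }\n    return lines, scanner.Err()\n}\n<\/code><\/pre>\n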
\n<p>When I have the slice of gurus populated, I render the template:<\/p>\n\n<pre><code class=\"language-go\">t, err := template.ParseFiles(flags.indexTemplate)\nif err != nil {\n    panic(err)\n}\nerr = t.Execute(os.Stdout, Render{\n    Gurus: gurus,\n})\n<\/code><\/pre>\n<p>I decided to print the HTML to stdout because it is way easier to use <code>&gt;<\/code>\nthan to accept another parameter specifying the target output.<\/p>\n\n<p>LDD: laziness driven development.<\/p>\n\n<p>The <code>cmd\/www<\/code> uses the same people.txt file to know whom to follow when the user\npresses the <code>Follow<\/code> button and authorizes the twitter application:<\/p>\n\n<pre><code class=\"language-go\">for _, eachline := range lines {\n    if strings.EqualFold(eachline, me.ScreenName) {\n        continue\n    }\n    time.Sleep(5 * time.Second)\n    err = newFriendship(ctx, twitterClient, eachline)\n    if err != nil {\n        logger.Warn(err.Error(), zap.String(\"follower_screenname\", eachline), zap.Error(err))\n    }\n}\n<\/code><\/pre>\n\n<h2 id=\"the-project-in-the-project\">The project in the project<\/h2>\n\n<p>This series of posts I am writing is a side project within the side project. As I\nwrote earlier, I like to share what I do and why I do it. I hope to keep having\npractical experiences to write down.<\/p>\n\n<p>The high-level expectations I set are:<\/p>\n\n<ol>\n  <li>Have fun<\/li>\n  <li>Create a good network of followers on twitter who like to speak about\nobservability<\/li>\n  <li>Learn how Honeycomb works and why everybody says that it sounds like magic<\/li>\n  <li>Write down something about code instrumentation, infrastructure as code and\nautomation<\/li>\n  <li>Exercise my experience as a decision maker driven by simplicity and\nefficiency.<\/li>\n  <li>I hope to work with a couple of friends from Docker, HashiCorp,\nCherryServer, InfluxData and HoneyComb to help me out with secret management,\nmonitoring, terraform and automation, in order to build the coolest project\never. You will get an email from me (or reach out if you have suggestions).<\/li>\n<\/ol>\n\n<h2 id=\"thats-it\">That\u2019s it<\/h2>\n\n<p>I am sure this article clearly conveys the friction between the\nexcitement of having an idea and the effort it takes to make it real, even when it is\nas simple as a single html page. I struggle with\nthat all the time, and laziness usually wins. Will this time be different?!\nWell, I have a domain that is not a blank page. I think it is a good starting\npoint.<\/p>\n\n<p>Time matters - and have fun!<\/p>\n\n<p><img src=\"\/img\/o11y-guru-series\/sleep.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n"},{"title":"o11y.guru the history of the first bug","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/o11yguru-history-first-bug"}},"description":"Part of the o11y.guru series, this post is about the first bug I discovered with the help of honeycomb, and how I had to fix it twice in order to make it work.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-11-07T08:08:27+00:00","published":"2019-11-07T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/o11yguru-history-first-bug","content":"<div class=\"jumbotron\">\n\n  <h2 id=\"introduction\">Introduction<\/h2>\n\n  <p>This article is part of a series I am writing about a\nside project I made called <a href=\"https:\/\/o11y.guru\">o11y.guru<\/a>. Who knows what this\nseries or the project itself will become. 
The reason why I started it was to\nhave my <strong>wonderland<\/strong>: a place where I can make my own mistakes without any\nintermediary.<\/p>\n\n  <p>This series is about my journey there. I will keep this <code>Introduction<\/code>\ncommon to all the articles in this series, and I will keep a <code>Table of\nContents<\/code> up to date. The best way to follow my journey here is to subscribe to\nthe RSS feed or to follow me on <a href=\"https:\/\/twitter.com\/gianarb\">@twitter<\/a>.<\/p>\n\n  <h3 id=\"what-is-this-project\">what is this project?<\/h3>\n\n  <p>I had this idea to create a mechanism that enables people who use twitter\nto follow a group of leaders in a particular space. I decided to start with\nobservability (#o11y) because it is the area I am in at the moment.<\/p>\n\n  <p><a href=\"https:\/\/o11y.guru\">o11y.guru<\/a> is pretty simple: a list of faces and a\nbutton; you press it, and if you authorize the twitter application you\nwill get to follow them.<\/p>\n\n  <h3 id=\"table-of-content\">Table of Contents<\/h3>\n\n  <ol>\n    <li><a href=\"\/blog\/o11yguru-introduction\">First day and first set of iterations<\/a><\/li>\n    <li>Build process and automation driven by simplicity<\/li>\n    <li>Monitoring and instrumentation with Honeycomb<\/li>\n    <li><a href=\"\/blog\/o11yguru-history-first-bug\">The history of the first bug<\/a><\/li>\n    <li>OpenTelemetry: it is time to embrace a unicorn standard<\/li>\n    <li>The magic of structured logging<\/li>\n    <li>Infrastructure monitoring with InfluxCloud<\/li>\n    <li>Infrastructure as code with Terraform and CherryServer. First deploy.<\/li>\n    <li>FAQ<\/li>\n  <\/ol>\n\n<\/div>\n\n<h2 id=\"the-history-of-the-first-bug\">The history of the first bug<\/h2>\n\n<p>After the first deploy I used my twitter accounts\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a> and\n<a href=\"https:\/\/twitter.com\/dev_campy\">@devcampy<\/a> to try the application. I also\nasked a friend to try it out.<\/p>\n\n<p>So far so good. The way I coded the following workflow is very basic, and\nit will probably reach its scalability limits quickly. It is a loop with a\n<code>time.Sleep(5 * time.Second)<\/code> pause between each account to avoid the Twitter\nrate limit.<\/p>\n\n<pre><code class=\"language-go\">\nfor _, guru := range gurus {\n    time.Sleep(5 * time.Second)\n    err = newFriendship(ctx, twitterClient, guru)\n    if err != nil {\n        logger.Warn(err.Error(), zap.Error(err))\n    }\n}\n<\/code><\/pre>\n\n<p>No retries or anything like that for now. Very simple. I hope to iterate on it in\nthe future, when it starts to not work well enough anymore.<\/p>\n\n<p>It does not report any error if the Twitter API request to follow a person\nfails; it just goes to the next one. All three tests went well as far as I could\ntell: all three accounts followed the gurus.<\/p>\n\n<p>One of the first benefits of using HoneyComb is that, out of the box, it is\nable to detect errors by looking at the events you send, and it builds the graphs\nfor you. Just clicking around their UI, I ended up with weirdness like this graph:<\/p>\n\n<p><img src=\"\/img\/o11y-guru-series\/first-bug-http-status.png\" alt=\"Requests break down by HTTP Status\" class=\"img-fluid\" \/><\/p>\n\n<p>I noticed some <code>500<\/code> errors, and I do not like that. 
As you can see,\nthere is an <code>Error<\/code> tab, built by Honeycomb again, and this is what it showed me:<\/p>\n\n<p><img src=\"\/img\/o11y-guru-series\/first-bug-span-with-error.png\" alt=\"Span with an error\" class=\"img-fluid\" \/><\/p>\n\n<p>At this point it is clear to me where the problem is: \u201cYou can\u2019t follow\nyourself\u201d. It sounds reasonable.<\/p>\n\n<p>I changed the code and added a simple <code>if<\/code> statement to skip the guru if it is\nthe person actually following all the other people.<\/p>\n\n<pre><code class=\"language-go\">\/\/ me comes from above, where I validate that the token belongs to a user.\n\nif guru == me.ScreenName {\n    continue\n}\n<\/code><\/pre>\n\n<p>It sounds trivial, but when I tried it, the fix didn\u2019t work.<\/p>\n\n<p><img src=\"\/img\/o11y-guru-series\/rambo.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>I decided to face the problem differently. <em>Spoiler alert: I didn\u2019t\nwrite any unit tests yet. Feel free to leave now.<\/em><\/p>\n\n<p>Looking at the trace, I knew that for every <strong>following request<\/strong> I had set the\nguru name to follow, and at the root span I had who requested to follow the\ngurus. In practice, the root span had <code>required_by=me.ScreenName<\/code>, and every\nguru had its own span with their name. The next image shows those two\nspans side by side:<\/p>\n\n<ul>\n  <li>On the left, the span <code>newFriendship<\/code> describes a single following action (a\ntwitter create friendship api request). As you can see it has <code>error=\"you\ncan't follow yourself\"<\/code> and <code>follower_screenname=gianarb<\/code>.<\/li>\n  <li>The one on the right is the root span; it has the <code>required_by=GianArb<\/code> field,\nwhich is the <code>me.ScreenName<\/code> variable.<\/li>\n<\/ul>\n\n<p><img src=\"\/img\/o11y-guru-series\/first-bug-compare-spans.png\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>Looking at these spans the situation is clear: I was comparing <code>GianArb<\/code>, the\n<code>required_by<\/code> variable that you see on the right, with <code>gianarb<\/code>, the\n<code>follower_screenname<\/code> you see in the left span.<\/p>\n\n<p>At the end of the story, the check needs to be case-insensitive. And that\u2019s how\nit is now:<\/p>\n\n<pre><code>if strings.EqualFold(guru, me.ScreenName) {\n    continue\n}\n<\/code><\/pre>\n\n<p>This is the history of the first bug I randomly discovered, and that I had to fix\ntwice, for <code>o11y.guru<\/code>.<\/p>\n"},{"title":"O'Reilly Report Extending Kubernetes","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/extending-kubernetes-oreilly"}},"description":"I wrote a report with O'Reilly called: Extending Kubernetes.","image":"https:\/\/gianarb.it\/img\/hero\/cat-sleep.jpeg","updated":"2019-10-07T08:08:27+00:00","published":"2019-10-07T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/extending-kubernetes-oreilly","content":"<p>A few months ago I wrote a report with O\u2019Reilly called <a href=\"https:\/\/get.oreilly.com\/ind_extending-kubernetes.html\">Extending\nKubernetes<\/a>. I realized I\ndidn\u2019t share this major achievement here with all of you!<\/p>\n\n<p>It comes from my experience - not from ops, but as a developer who works with\nKubernetes.<\/p>\n\n<p>K8S requires a big effort in terms of maintenance and setup; whoever says\notherwise is lying. 
Complexity is not too bad, and sometimes it is a requirement; a\ngood way to justify it is to use kubernetes as much as you can.<\/p>\n\n<p>Operations and teams need a good UX. Kubernetes is extremely extensible, with\ncustom resource definitions, kubectl plugins, controllers, shared informers,\naudit logs and operators.<\/p>\n\n<p>This report is about exactly that. I mainly use Go for the examples, but a lot of them can\nbe rewritten using any SDK provided by the Kubernetes community.<\/p>\n\n<p>It is a practical report, with code examples and ideas about what you can do to\nintegrate your day-to-day operations with Kubernetes, in order to share the pain\nwith the developers.<\/p>\n\n<p><img src=\"\/img\/white-polar.jpeg\" alt=\"Sleepy\" \/><\/p>\n\n<p>Now that I have recovered from the effort of writing it, stay tuned, because I am\nlooking for something else to do!<\/p>\n\n<p>Read it and let me know if you like it via\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>.<\/p>\n\n<p class=\"small\">Hero image via <a href=\"https:\/\/pixabay.com\/en\/fractal-complexity-geometry-1758543\/\">Pixabay<\/a><\/p>\n"},{"title":"kubectl flags in your plugin","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubectl-flags-in-your-plugin"}},"description":"Develop custom kubectl plugins with friendly flags borrowed from kubectl","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-10-07T08:08:27+00:00","published":"2019-10-07T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubectl-flags-in-your-plugin","content":"<p>This is not at all a new topic - no hacking involved - but it is something\neverybody needs to know when designing kubectl plugins.<\/p>\n\n<p>I was recently working on one, and I had to make the user experience as close\nas possible to <code>kubectl<\/code>, because that\u2019s what a good developer does:\ntrick other developers into a comfortable life. If you are used to running:<\/p>\n\n<pre><code class=\"language-bash\">$ kubectl get pod -n your-namespace -l app=http\n<\/code><\/pre>\n\n<p>to get pods from a particular namespace (<code>your-namespace<\/code>) filtered by the label\n<code>app=http<\/code>, and your plugin does something similar, or would benefit from an\ninteraction that recalls the classic <code>get<\/code>, you should re-use those flags.<\/p>\n\n<p>Example: design a <code>kubectl-plugin<\/code> capable of running a <code>pprof<\/code> profile against a\nset of containers.<\/p>\n\n<p>My expectation would be to do something like:<\/p>\n\n<pre><code class=\"language-bash\">$ kubectl pprof -n your-namespace pod-name-go-app\n<\/code><\/pre>\n\n<p>The Kubernetes community writes a lot of their code in Go, which means that there\nare a lot of libraries that you can re-use.<\/p>\n\n<p><a href=\"https:\/\/github.com\/kubernetes\/cli-runtime\/tree\/master\/pkg\/genericclioptions\">kubernetes\/cli-runtime<\/a>\nis a library that provides utilities to create kubectl plugins. 
One of its\npackages is called <code>genericclioptions<\/code>, and as you can tell from the name, its goal\nis obvious.<\/p>\n\n<pre><code class=\"language-go\">\n\/\/ import \"github.com\/spf13\/cobra\"\n\/\/ import \"github.com\/spf13\/pflag\"\n\/\/ import \"k8s.io\/cli-runtime\/pkg\/genericclioptions\"\n\n\/\/ Create the set of flags for your kubectl-plugin\nflags := pflag.NewFlagSet(\"kubectl-plugin\", pflag.ExitOnError)\npflag.CommandLine = flags\n\n\/\/ Configure the genericclioptions\nstreams := genericclioptions.IOStreams{\n    In:     os.Stdin,\n    Out:    os.Stdout,\n    ErrOut: os.Stderr,\n}\n\n\/\/ This set of flags is the one used for the kubectl configuration such as:\n\/\/ cache-dir, cluster-name, namespace, kube-config, insecure, timeout, impersonate,\n\/\/ ca-file and so on\nkubeConfigFlags := genericclioptions.NewConfigFlags(false)\n\n\/\/ This is a set of flags related to a specific resource, such as the label\n\/\/ selector (-l), --all-namespaces, --schema and so on.\nkubeResourceBuilderFlags := genericclioptions.NewResourceBuilderFlags()\n\nvar rootCmd = &amp;cobra.Command{\n    Use:   \"kubectl-plugin\",\n    Short: \"My root command\",\n    Run: func(cmd *cobra.Command, args []string) {\n        cmd.SetOutput(streams.ErrOut)\n    },\n}\n\n\/\/ You can join all these flags to your root command\nflags.AddFlagSet(rootCmd.PersistentFlags())\nkubeConfigFlags.AddFlags(flags)\nkubeResourceBuilderFlags.AddFlags(flags)\n<\/code><\/pre>\n\n<p>This is the output:<\/p>\n\n<pre><code class=\"language-bash\">$ kubectl-plugin --help\nMy root command\n\nUsage:\n  kubectl-plugin [flags]\n\nFlags:\n      --as string                      Username to impersonate for the operation\n      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --cache-dir string               Default HTTP cache directory (default \"\/home\/gianarb\/.kube\/http-cache\")\n      --certificate-authority string   Path to a cert file for the certificate authority\n      --client-certificate string      Path to a client certificate file for TLS\n      --client-key string              Path to a client key file for TLS\n      --cluster string                 The name of the kubeconfig cluster to use\n      --context string                 The name of the kubeconfig context to use\n  -f, --filename strings               identifying the resource.\n  -h, --help                           help for kubectl-plugin\n      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.\n  -n, --namespace string               If present, the namespace scope for this CLI request\n  -R, --recursive                      Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. (default true)\n      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n  -l, --selector string                Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. 
-l key1=value1,key2=value2)\n  -s, --server string                  The address and port of the Kubernetes API server\n      --token string                   Bearer token for authentication to the API server\n      --user string                    The name of the kubeconfig user to use\n<\/code><\/pre>\n"},{"title":"Kubernetes is not for operations","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/as-a-developer-i-dont-care-about-kubernetes"}},"description":"Kubernetes is not for operations. It democratizes resources and workloads. It can be the solution to bring developers closer to ops. But YAML is not the answer.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-09-18T08:08:27+00:00","published":"2019-09-18T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/as-a-developer-i-dont-care-about-kubernetes","content":"<p>I have worked in tech for 8 years. It is not a lot, but it is something.\nI started as a PHP developer building CMSes with MySQL and things like that.<\/p>\n\n<p>When I saw what I was capable of doing with a set of API requests to AWS, I enjoyed\nit, and I moved to what people would probably call DevOps.<\/p>\n\n<p>I like communities and people, Docker was everywhere, and I became a Docker\nCaptain for my passion about delivery, development workflows and containers, but\nalways with developers in mind. That\u2019s what I like to do: write code.<\/p>\n\n<blockquote>\n  <p>The complexity not hidden behind Kubernetes, or not solved by whoever runs\nKubernetes in your company, creates that friction.<\/p>\n<\/blockquote>\n\n<p>Everyone who was or is in the containers space has more or less touched Kubernetes.\nI did, and I enjoyed looking at the patterns it uses, like control theory,\nreconciliation loops and so on.<\/p>\n\n<p>In the last couple of years I saw a lot of companies moving to Kubernetes,\nand I worked on that path at InfluxData as well. Yes, we use Kubernetes, obviously!<\/p>\n\n<p>I have always seen friction from developers forced to onboard onto Kubernetes (no\ndeveloper will do it otherwise). First because everybody uses YAML, and I\nthink yaml is just the wrong answer to your problem - nothing personal with it.<\/p>\n\n<p>What developers are happy to do is <strong>write code<\/strong> that runs in production and\nthat gives them good challenges to debug and fix. <strong>Write code<\/strong> is in bold\nbecause that\u2019s what we like most. At least the majority of us.<\/p>\n\n<p>The complexity not hidden behind Kubernetes, or not solved by whoever runs\nKubernetes in your company, creates that friction.<\/p>\n\n<p>Running Kubernetes is not hard: we have tutorials, companies, contractors and cloud\nproviders that can help us out. It is a set of binaries and a database. We have run\nthose for ages! There is a good amount of them, and they need to be configured and\nconnected, and there are also a lot of different combinations, but that\u2019s fine.\nWe are used to playing with mobile apps, wordpress plugins and so on.<\/p>\n\n<p>When I think about myself as a developer I understand why there is this\nfriction; if I had not been passionate about containers at the right time to try out\nKubernetes, I would probably have felt that friction myself.<\/p>\n\n<p>It does not help me write better code, or do something different compared\nto updating systemd services one by one via <code>ssh<\/code>. 
I bet developers working with\nKubernetes on a system under real load would gladly go back to <code>ssh<\/code>-ing into the\nservers one by one, deploying their new version of the application, to have all\nthe control and visibility they can. That\u2019s what a lot of developers\ntry to achieve when I look at them using Kubernetes.<\/p>\n\n<p>What Kubernetes does very well is democratize ops: it provides a common set of\nconcepts that we can use to run applications, and very good APIs that abstract the\nconcrete implementation of containers, VMs, workloads, ingress, dns and so on.<\/p>\n\n<p>We should not waste our time trying to run it; we should spend time making it\nusable in our company, because that\u2019s what we can get from k8s.<\/p>\n\n<h2 id=\"my-recipe\">my recipe<\/h2>\n\n<p>I do not have a recipe, a product or something ready to go. But I think there\nare two directions I would like to see and to try with a team.<\/p>\n\n<h3 id=\"leave-yaml\">leave yaml<\/h3>\n\n<p>YAML is the wrong answer. It is good for making an impact and writing a\ndocument that everyone can read, but your company is not \u201ceverybody\u201d; you are\npretty unique. You should use the K8S API. I haven\u2019t had time to make a public\nprototype yet, but I will, I promise. You should use the language you know\nbest! I have a lot of experience with go, so my suggestion is to replace yaml\nwith real code, real functions and so on. Since Kubernetes 1.16, <code>kubectl diff<\/code>\nruns server side. Sweet!<\/p>\n\n<h3 id=\"split-spec-file-by-team\">split spec file by team<\/h3>\n\n<p>It is very easy to end up with a single Kubernetes YAML file that is crazy long.\nThat file contains everything you run, across teams, responsibilities and\npeople. Do not do it. Split it into different files or repositories by team or\napplication owner.<\/p>\n\n<p>DevOps, SRE, sysadmin, reliability, penguins or whatever you call the team that\nowns the underlying architecture will have the Yaml related to the foundation of\nthe infrastructure. Its content is not important to the other teams; they\nwill only write and see what matters to them.<\/p>\n\n<p>This approach will decrease complexity for developers, probably making them\nless worried about screwing up parts of the infrastructure not related to their\nwork.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>If you are a developer, please develop good code! If you own Kubernetes in your\ncompany, make it work for your users.<\/p>\n"},{"title":"Reactive planning and reconciliation in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/reactive-planning-and-reconciliation-in-go"}},"description":{},"image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-09-13T08:08:27+00:00","published":"2019-09-13T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/reactive-planning-and-reconciliation-in-go","content":"<p>Reading my recent posts you can spot my attempt to share what I have learned - and\nwhat I am still studying - around distributed systems, control theory and provisioning.<\/p>\n\n<p>I wrote a quick introduction about why I think <a href=\"\/blog\/reactive-planning-is-a-cloud-native-pattern\">reactive planning is a cloud\nnative pattern<\/a> and I\npublished an article about <a href=\"\/blog\/control-theory-is-dope\">control theory<\/a>, but I\nhave obviously just scratched the surface of this topic. 
<h3 id=\"split-spec-file-by-team\">split spec file by team<\/h3>\n\n<p>It is very easy to end up with a single Kubernetes YAML file that is crazy long.\nThat file contains everything you run, across teams, responsibilities and\npeople. Do not do it. Split it into different files or repositories by team or\napplication owner.<\/p>\n\n<p>DevOps, SRE, sysadmin, reliability, penguins or whatever you call the team that\nowns the underlying architecture will have the YAML related to the foundation of\nthe infrastructure. Its content is not important for other teams: they\nwill only write and see what matters to them.<\/p>\n\n<p>This approach decreases complexity for developers, making them probably\nless worried about screwing up parts of the infrastructure that are not related to their\nwork.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>If you are a developer, please develop good code! If you own Kubernetes in your\ncompany, make it work for your users.<\/p>\n"},{"title":"Reactive planning and reconciliation in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/reactive-planning-and-reconciliation-in-go"}},"description":{},"image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-09-13T08:08:27+00:00","published":"2019-09-13T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/reactive-planning-and-reconciliation-in-go","content":"<p>Reading my recent posts you can spot my attempt to share what I have learned, and\nam still studying, around distributed systems, control theory and provisioning.<\/p>\n\n<p>I wrote a quick introduction about why I think <a href=\"\/blog\/reactive-planning-is-a-cloud-native-pattern\">reactive planning is a cloud\nnative pattern<\/a> and I\npublished an article about <a href=\"\/blog\/control-theory-is-dope\">control theory<\/a>, but I\nhave obviously just scratched the surface of this topic. I still have 470 pages of\nthe book <a href=\"https:\/\/www.amazon.it\/Designing-Distributed-Control-Systems-Veli-Pekka\/dp\/B01FIX9LMG\">Designing Distributed Control Systems: A Pattern Language\nApproach<\/a> to read.\nIt will take me forever.<\/p>\n\n<h2 id=\"introduction\">Introduction<\/h2>\n\n<p>It is easier to explain how powerful reactive planning is by looking at an\nexample. I wrote one in Go, and in this article I explain its most\nimportant parts.<\/p>\n\n<p>To summarize: I think resiliency in modern applications is crucial and very\nhard to achieve in practice, mainly because we need to implement and learn a set\nof patterns and rules. When I think about a solid application inside a\nmicroservices environment, or in a highly distributed ecosystem, my mind drives me\nto a different industry. I think about tractors, boilers, and whatever else\ndoes not depend on a static state stored inside a database but on a dynamic\nsource of truth.<\/p>\n\n<p>When I think about an orchestrator it is clear to me that there is no way to\ntrust a cache layer in order to understand how many resources (VMs, containers,\npods) are running. We need to check them live, because you never know what is\nhappening to your workload. Those kinds of applications are sensitive to latency,\nand they require a fast feedback loop.<\/p>\n\n<p>That\u2019s one of the reasons why, when you read about Kubernetes internals, you\nread about reconciliation loops and informers.<\/p>\n\n<h2 id=\"our-use-case\">Our use case<\/h2>\n\n<p>I wrote a small PoC, an application that I called\n<a href=\"https:\/\/github.com\/gianarb\/cucumber\">cucumber<\/a>; it is available on GitHub and\nyou can run it if you like.<\/p>\n\n<p>It is a CLI tool that provisions a set of resources on AWS. The provisioned\narchitecture is very simple. You define a number of EC2 instances, and they will be\ncreated and assigned to a Route53 record; when the record does not exist the\napplication will create it.<\/p>\n\n<p>I had to learn how to think about problems like this. At the beginning of my\ncareer the approach was simple: \u201cI know what to do, I need to write a program\nthat reads the request and does what needs to be done\u201d. So you start configuring\nthe AWS client, parsing the request and making a few API requests.<\/p>\n\n<p>Everything runs perfectly and you succeed at creating 100 clusters.\nThen things start to get more complicated: you have more resources to provision,\nlike load balancers, subnets, security groups, and more business logic related to\nwho can do what. Requests start to be more than 5 per execution, and the logic\nsometimes does not work as linearly as it did before. At this point you\nhave a lot of conditions, and figuring out where the procedure failed and how to\nfix the issue becomes very hard.<\/p>\n\n<p>This is why my current approach is different: when I recognize this kind of\npattern I always start from the current state of the system.<\/p>\n\n<p>You can object that at the first execution it is obvious that nothing\nis there, so you can just create whatever needs to be created. And I agree with\nthat, but assuming that you do not know your starting point drives you to implement\nthe workflow in a way that is idempotent. When you achieve this goal you can\nre-run the same workflow over and over again: if there is nothing to do the\nprogram won\u2019t do anything, otherwise it is smart enough to realize what needs to\nbe done.<\/p>\n
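<p>Here is a minimal sketch of that \u201cmeasure first\u201d mindset (this is not cucumber\u2019s actual code, and the hostname and count are invented): the function decides what to do by looking at what already exists, so re-running it when everything matches is a no-op.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"net\"\n)\n\n\/\/ nextAction measures the current state of a DNS record and returns the\n\/\/ next thing to do. Because it starts from a measurement, the flow stays\n\/\/ idempotent: when everything matches it returns \"nothing-to-do\".\nfunc nextAction(host string, desiredIPs int) string {\n\tips, err := net.LookupIP(host)\n\tif err != nil {\n\t\t\/\/ NXDOMAIN or propagation still in flight: create the record,\n\t\t\/\/ or simply wait for the next run to verify it.\n\t\treturn \"create-record\"\n\t}\n\tif len(ips) != desiredIPs {\n\t\treturn \"reconcile-ips\"\n\t}\n\treturn \"nothing-to-do\"\n}\n\nfunc main() {\n\tfmt.Println(nextAction(\"yeppie.pluto.net\", 3))\n}\n<\/code><\/pre>\n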
<p>This is how you end up with something called a <strong>reconciliation loop<\/strong>.<\/p>\n\n<h2 id=\"reconciliation-loop\">Reconciliation loop<\/h2>\n\n<p>The idea of re-running the procedure over and over, assuming you do not know where\nyou left off, is very powerful. Following our example, if the creation flow does\nnot finish because AWS returned a 500, you won\u2019t be stuck in a situation where you\ndo not know how to end the procedure: you just wait for the next\nre-execution of the flow, and it will figure out what is already created.\nIn my example this pattern works great when provisioning the Route53 DNS record,\nbecause DNS propagation can take a lot of time. In order to realize whether\nthe DNS record already exists, and whether the right number of IPs is attached\nto it, I use\n<a href=\"https:\/\/jameshfisher.com\/2017\/08\/03\/golang-dns-lookup\/\"><code>net.LookupIP<\/code><\/a>; it\nis the perfect example of a check that can take an unknown amount of time to\nconverge.<\/p>\n\n<h2 id=\"reactive-planning\">Reactive planning<\/h2>\n\n<p>At its simplest, a reconciliation loop can be explained as a <code>loop<\/code> that\nexecutes a procedure forever. But how do you write a workflow that is able to\nunderstand the state of the system and autonomously make a plan to close the gap\nbetween current and desired state? This is what reactive planning does, and\nthat\u2019s why control theory is dope!<\/p>\n\n<pre><code class=\"language-go\">\/\/ Procedure describes every single step to be executed. It is the smallest unit\n\/\/ of work in a plan.\ntype Procedure interface {\n\t\/\/ Name identifies a specific procedure.\n\tName() string\n\t\/\/ Do executes the business logic for a specific procedure.\n\tDo(ctx context.Context) ([]Procedure, error)\n}\n\n\/\/ Plan describes a set of procedures and the way to calculate them.\ntype Plan interface {\n\t\/\/ Create returns the list of procedures that need to be executed.\n\tCreate(ctx context.Context) ([]Procedure, error)\n\t\/\/ Name identifies a specific plan.\n\tName() string\n}\n<\/code><\/pre>\n\n<p>Let\u2019s start with a bit of Go. <code>Procedure<\/code> and <code>Plan<\/code> are the fundamental\ninterfaces to get familiar with:<\/p>\n\n<ul>\n  <li>A <code>Plan<\/code> is a collection of <code>Procedures<\/code>. The <code>Create<\/code> function is able to\nfigure out the state of the system, adding procedures dynamically.<\/li>\n  <li>A <code>Procedure<\/code> is the unit of work, and they need to be as small as possible. The\ncool part is that a procedure can return other procedures (and those can\nreturn other procedures as well); this is how you build resilience. If a procedure\nreturns an error the <code>Plan<\/code> is marked as failed.<\/li>\n<\/ul>\n\n<pre><code class=\"language-go\">\/\/ Scheduler takes a plan and executes it.\ntype Scheduler struct {\n\t\/\/ stepCounter keeps track of the number of steps executed by the scheduler.\n\t\/\/ It is used for debugging and logged out at the end of every execution.\n\tstepCounter int\n\t\/\/ logger is an instance of the zap.Logger\n\tlogger *zap.Logger\n}\n\n<\/code><\/pre>\n\n<p><code>Plan<\/code> and <code>Procedure<\/code> are crucial, but we need a way to execute a plan: it is\ncalled the scheduler. The <code>Scheduler<\/code> has an <code>Execute<\/code> function that accepts a\n<code>Plan<\/code> and executes it <strong>until there is nothing left to do<\/strong>. 
Since procedures can\nreturn other procedures, the scheduler needs to execute all the procedures\nrecursively.<\/p>\n\n<p>The way the scheduler understands that the plan is done is via the number of\nsteps returned by the <code>Plan.Create<\/code> function. The scheduler executes every\nplan at least twice: if the second time there are no steps left, it means that\nthe first execution succeeded.<\/p>\n\n<pre><code class=\"language-go\">\/\/ Execute accepts a plan as input and executes it until there are no more\n\/\/ procedures to do\nfunc (s *Scheduler) Execute(ctx context.Context, p Plan) error {\n\tuuidGenerator := uuid.New()\n\tlogger := s.logger.With(zap.String(\"execution_id\", uuidGenerator.String()))\n\tstart := time.Now()\n\tif loggableP, ok := p.(Loggable); ok {\n\t\tloggableP.WithLogger(logger)\n\t}\n\tlogger.Info(\"Started execution plan \" + p.Name())\n\ts.stepCounter = 0\n\tfor {\n\t\tsteps, err := p.Create(ctx)\n\t\tif err != nil {\n\t\t\tlogger.Error(err.Error())\n\t\t\treturn err\n\t\t}\n\t\tif len(steps) == 0 {\n\t\t\tbreak\n\t\t}\n\t\terr = s.react(ctx, steps, logger)\n\t\tif err != nil {\n\t\t\tlogger.Error(err.Error(), zap.String(\"execution_time\", time.Since(start).String()), zap.Int(\"step_executed\", s.stepCounter))\n\t\t\treturn err\n\t\t}\n\t}\n\tlogger.Info(\"Plan executed without errors.\", zap.String(\"execution_time\", time.Since(start).String()), zap.Int(\"step_executed\", s.stepCounter))\n\treturn nil\n}\n<\/code><\/pre>\n\n<p>The <code>react<\/code> function implements the recursion and, as you can see, it is the place\nwhere the procedures get executed via <code>step.Do<\/code>.<\/p>\n\n<pre><code class=\"language-go\">\/\/ react is a recursive function that goes over all the steps, and the ones\n\/\/ returned by previous steps, until the plan does not return any more steps\nfunc (s *Scheduler) react(ctx context.Context, steps []Procedure, logger *zap.Logger) error {\n\tfor _, step := range steps {\n\t\ts.stepCounter = s.stepCounter + 1\n\t\tif loggableS, ok := step.(Loggable); ok {\n\t\t\tloggableS.WithLogger(logger)\n\t\t}\n\t\tinnerSteps, err := step.Do(ctx)\n\t\tif err != nil {\n\t\t\tlogger.Error(\"Step failed.\", zap.String(\"step\", step.Name()), zap.Error(err))\n\t\t\treturn err\n\t\t}\n\t\tif len(innerSteps) &gt; 0 {\n\t\t\tif err := s.react(ctx, innerSteps, logger); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n<\/code><\/pre>\n\n<p>All the primitives described in this section live in their own Go module,\n<a href=\"https:\/\/github.com\/gianarb\/planner\">github.com\/gianarb\/planner<\/a>, that you can\nuse. Beyond what is shown here, the scheduler supports context cancellation and\ndeadlines, so you can set a timeout for every execution.<\/p>\n
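<p>For example, because <code>Execute<\/code> honors the context, a single execution can be bounded with a deadline. This is just a sketch reusing the <code>scheduler<\/code>, <code>p<\/code> and <code>logger<\/code> from the examples in this post; the 30-second value is arbitrary:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Bound a single execution: when the deadline fires the context is\n\/\/ cancelled and Execute returns with an error.\nctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)\ndefer cancel()\n\nif err := scheduler.Execute(ctx, &amp;p); err != nil {\n\tlogger.With(zap.Error(err)).Warn(\"plan did not finish in time\")\n}\n<\/code><\/pre>\n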
<p>One of the next big features I will develop is a reusable reconciliation loop for\nplans. In cucumber it is very raw: just a goroutine and a WaitGroup to keep the main\nprocess up:<\/p>\n\n<pre><code>go func() {\n    logger := logger.With(zap.String(\"from\", \"reconciliation\"))\n    scheduler.WithLogger(logger)\n    for {\n        logger.Info(\"reconciliation loop started\")\n        if err := scheduler.Execute(ctx, &amp;p); err != nil {\n            logger.With(zap.Error(err)).Warn(\"cucumber reconciliation failed.\")\n        }\n        time.Sleep(10 * time.Second)\n        logger.Info(\"reconciliation loop ended\")\n    }\n}()\n<\/code><\/pre>\n<p>But this is too simple, and it does not work in a distributed environment where\nonly one process should run the reconciliation, not all the replicas.<\/p>\n\n<p>I wrote this code to help myself internalize and explain what reactive\nplanning means. And also because I think the Go community has a lot of great tools\nthat make use of this concept, like Terraform and Kubernetes, but there are no low-level\nor simple-to-understand pieces of code. The next section describes how to\nwrite your own control plane using reactive planning.<\/p>\n\n<h2 id=\"theory-applied-to-cucumber\">Theory applied to cucumber\u2026<\/h2>\n\n<p>Let\u2019s start by looking at the <code>main<\/code> function:<\/p>\n\n<pre><code class=\"language-go\">p := plan.CreatePlan{\n    ClusterName:  req.Name,\n    NodesNumber:  req.NodesNumber,\n    DNSRecord:    req.DNSName,\n    HostedZoneID: hostedZoneID,\n    Tags: map[string]string{\n        \"app\":          \"cucumber\",\n        \"cluster-name\": req.Name,\n    },\n}\n\nscheduler := planner.NewScheduler()\nscheduler.WithLogger(logger)\n\nif err := scheduler.Execute(ctx, &amp;p); err != nil {\n    logger.With(zap.Error(err)).Fatal(\"cucumber ended with an error\")\n}\n<\/code><\/pre>\n\n<p>In cucumber there is only one plan, the <code>CreatePlan<\/code>. We create it based on the\nYAML file that contains the requested cluster. For example:<\/p>\n\n<pre><code class=\"language-yaml\">name: yuppie\nnodes_num: 3\ndns_name: yeppie.pluto.net\n<\/code><\/pre>\n\n<p>And it gets executed by the scheduler. As you can see, if the scheduler returns an\nerror we do not exit, we do not kill the process. We do not panic! 
Because we\nknow that things can break, and our code is designed to break just a little,\nand in a way it can recover from.<\/p>\n\n<p>After the first execution the process spins up a goroutine, the one I\ncopied above to explain a raw and simple control loop.<\/p>\n\n<p>The process stays in the loop until we kill it.<\/p>\n\n<p>To test the reconciliation you can try to remove one or more EC2 instances, or the\nDNS record; watching the logs you will see how, inside the loop, the scheduler\nexecutes the plan and reconciles the state of the system in AWS with the one you\ndescribed in the YAML.<\/p>\n\n<pre><code class=\"language-bash\">CUCUMBER_MODE=reconcile AWS_HOSTED_ZONE=&lt;hosted-zone-id&gt; AWS_PROFILE=credentials CUCUMBER_REQUEST=.\/test.yaml go run cmd\/main.go \n<\/code><\/pre>\n\n<p>This is the command I use to start the process.<\/p>\n\n<p>The steps I wrote in cucumber are not many, and you can find them inside\n<code>.\/cucumber\/step<\/code>:<\/p>\n\n<ol>\n  <li>create_dns_record<\/li>\n  <li>reconcile_nodes<\/li>\n  <li>run_instance<\/li>\n  <li>update_dns_record<\/li>\n<\/ol>\n\n<p><code>run_instance<\/code>, for example, is very small: it interacts with AWS via the Go SDK\nand creates an EC2 instance:<\/p>\n\n<pre><code class=\"language-go\">package step\n\nimport (\n\t\"context\"\n\n\t\"github.com\/aws\/aws-sdk-go\/aws\"\n\t\"github.com\/aws\/aws-sdk-go\/service\/ec2\"\n\t\"github.com\/gianarb\/planner\"\n\t\"go.uber.org\/zap\"\n)\n\ntype RunInstance struct {\n\tEC2svc   *ec2.EC2\n\tTags     map[string]string\n\tVpcID    *string\n\tSubnetID *string\n\tlogger   *zap.Logger\n}\n\nfunc (s *RunInstance) Name() string {\n\treturn \"run-instance\"\n}\n\nfunc (s *RunInstance) Do(ctx context.Context) ([]planner.Procedure, error) {\n\ttags := []*ec2.Tag{}\n\tfor k, v := range s.Tags {\n\t\tif k == \"cluster-name\" {\n\t\t\ttags = append(tags, &amp;ec2.Tag{\n\t\t\t\tKey:   aws.String(\"Name\"),\n\t\t\t\tValue: aws.String(v),\n\t\t\t})\n\t\t}\n\t\ttags = append(tags, &amp;ec2.Tag{\n\t\t\tKey:   aws.String(k),\n\t\t\tValue: aws.String(v),\n\t\t})\n\t}\n\tsteps := []planner.Procedure{}\n\tinstanceInput := &amp;ec2.RunInstancesInput{\n\t\tImageId:      aws.String(\"ami-0378588b4ae11ec24\"),\n\t\tInstanceType: aws.String(\"t2.micro\"),\n\t\t\/\/UserData:              &amp;userData,\n\t\tMinCount: aws.Int64(1),\n\t\tMaxCount: aws.Int64(1),\n\t\tSubnetId: s.SubnetID,\n\t\tTagSpecifications: []*ec2.TagSpecification{\n\t\t\t{\n\t\t\t\tResourceType: aws.String(\"instance\"),\n\t\t\t\tTags:         tags,\n\t\t\t},\n\t\t},\n\t}\n\t_, err := s.EC2svc.RunInstances(instanceInput)\n\tif err != nil {\n\t\treturn steps, err\n\t}\n\treturn steps, nil\n}\n<\/code><\/pre>\n\n<p>As you can see, the only situation where I return an error is when\n<code>ec2.RunInstances<\/code> fails, but that is just because this is a simple implementation.\nMoving forward you can replace that error return with other steps: for example\nyou can terminate the cluster and clean up, so you won\u2019t leave broken clusters\naround, or you can try other steps to recover from the error, leaving it to the\nnext executions (of the reconciliation loop) to end the workflow.<\/p>\n
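<p>As a sketch of that idea, the error return could become a recovery step. <code>TerminateCluster<\/code> is hypothetical, it does not exist in cucumber today; it is only here to show the shape of the pattern:<\/p>\n\n<pre><code class=\"language-go\">\/\/ Hypothetical sketch: instead of failing the whole plan when the EC2\n\/\/ API call errors, return a cleanup procedure and let the scheduler\n\/\/ execute it.\n_, err := s.EC2svc.RunInstances(instanceInput)\nif err != nil {\n\tsteps = append(steps, &amp;TerminateCluster{\n\t\tEC2svc: s.EC2svc,\n\t\tTags:   s.Tags,\n\t})\n\treturn steps, nil\n}\nreturn steps, nil\n<\/code><\/pre>\n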
<p>From my experience, reactive planning makes refactoring and development very\nmodular: as you can see, you do not need to make the whole flow rock solid from\nday one, which is very time-consuming, and you always have a clear entrypoint\nfor future work. Everywhere you return or log an error can be replaced at some\npoint with steps, making your flow rock solid based on the observations you make\nfrom previous runs.<\/p>\n\n<p><code>reconcile_nodes<\/code> is another interesting step, because <code>run_instance<\/code> only\ncalls AWS and creates one node, but as you can imagine we need to create or\nterminate a variable number of them depending on the current state of the system:<\/p>\n\n<ol>\n  <li>if you required 3 EC2 instances but there are zero of them, you need to run 3 new nodes<\/li>\n  <li>if there are 2 nodes but you required 3, we need 1 more<\/li>\n  <li>if there are 56 nodes but you required 3 of them, we need to terminate 53 EC2 instances<\/li>\n<\/ol>\n\n<p>The <code>reconcile_nodes<\/code> procedure makes that calculation and returns the right\nsteps:<\/p>\n\n<pre><code class=\"language-go\">package step\n\nimport (\n\t\"context\"\n\n\t\"github.com\/aws\/aws-sdk-go\/service\/ec2\"\n\t\"go.uber.org\/zap\"\n\n\t\"github.com\/gianarb\/planner\"\n)\n\ntype ReconcileNodes struct {\n\tEC2svc        *ec2.EC2\n\tTags          map[string]string\n\tVpcID         *string\n\tSubnetID      *string\n\tCurrentNumber int\n\tDesiredNumber int\n\tlogger        *zap.Logger\n}\n\nfunc (s *ReconcileNodes) Name() string {\n\treturn \"reconcile-node\"\n}\n\nfunc (s *ReconcileNodes) Do(ctx context.Context) ([]planner.Procedure, error) {\n\ts.logger.Info(\"need to reconcile number of running nodes\", zap.Int(\"current\", s.CurrentNumber), zap.Int(\"desired\", s.DesiredNumber))\n\tsteps := []planner.Procedure{}\n\tif s.CurrentNumber &gt; s.DesiredNumber {\n\t\tfor ii := s.DesiredNumber; ii &lt; s.CurrentNumber; ii++ {\n\t\t\t\/\/ TODO: remove instances if they are too many\n\t\t}\n\t} else {\n\t\t\/\/ Return one RunInstance step per missing node, so the scheduler\n\t\t\/\/ creates exactly DesiredNumber - CurrentNumber instances.\n\t\tfor i := s.CurrentNumber; i &lt; s.DesiredNumber; i++ {\n\t\t\tsteps = append(steps, &amp;RunInstance{\n\t\t\t\tEC2svc:   s.EC2svc,\n\t\t\t\tTags:     s.Tags,\n\t\t\t\tVpcID:    s.VpcID,\n\t\t\t\tSubnetID: s.SubnetID,\n\t\t\t})\n\t\t}\n\t}\n\treturn steps, nil\n}\n<\/code><\/pre>\n\n<p>As you can see I have only implemented the <code>RunInstance<\/code> step, and there is a\n<code>TODO<\/code> left in the code: scale down does not work for now.\nIt returns the steps required to match the desired state; if there are 2 nodes\nbut we required 3 of them, this step will return a single <code>RunInstance<\/code>,\nwhich will be executed by the scheduler.<\/p>\n\n<p>The last interesting part of the code is the <code>CreatePlan.Create<\/code> function. This is\nwhere the magic happens. As we saw, the scheduler calls the <code>Create<\/code> function\nat least twice for every execution, and its responsibility is to measure the\ncurrent state and, based on it, calculate the steps required to achieve what\nwe desire. 
It is a long function that you can read in the repo, but here is the idea:<\/p>\n\n<pre><code class=\"language-go\">resp, err := ec2Svc.DescribeInstances(&amp;ec2.DescribeInstancesInput{\n    Filters: []*ec2.Filter{\n        {\n            Name:   aws.String(\"instance-state-name\"),\n            Values: []*string{aws.String(\"pending\"), aws.String(\"running\")},\n        },\n        {\n            Name:   aws.String(\"tag:cluster-name\"),\n            Values: []*string{aws.String(p.ClusterName)},\n        },\n        {\n            Name:   aws.String(\"tag:app\"),\n            Values: []*string{aws.String(\"cucumber\")},\n        },\n    },\n})\n\nif err != nil {\n    return nil, err\n}\n\ncurrentInstances := countInstancesByResp(resp)\nif len(currentInstances) != p.NodesNumber {\n    steps = append(steps, &amp;step.ReconcileNodes{\n        EC2svc:        ec2Svc,\n        Tags:          p.Tags,\n        VpcID:         vpcID,\n        SubnetID:      subnetID,\n        CurrentNumber: len(currentInstances),\n        DesiredNumber: p.NodesNumber,\n    })\n}\n<\/code><\/pre>\n<p>The code checks whether the number of running instances equals the desired\none; if they differ, it appends the <code>ReconcileNodes<\/code> procedure to the steps.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>This is it! It is a long article, but there is code and a repository you can run!\nI am enthusiastic about this pattern and the work shown here, because I think it\nmakes the idea clear, and I tried to keep the context as small as possible to stay\nfocused on the workflow and the design.<\/p>\n\n<p>Let me know if you end up using it! Or, if you already do, let me know how it is going:\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>.<\/p>\n"},{"title":"Control Theory is dope","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/control-theory-is-dope"}},"description":"This is an introductory article about control theory applied to microservices and cloud computing: a very high level overview of control theory, driven by what I loved most about it.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-09-04T08:08:27+00:00","published":"2019-09-04T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/control-theory-is-dope","content":"<p>For the last two years at InfluxData I have worked on the custom orchestrator that\npowers InfluxCloud v1. I gave some talks about it at InfluxDays, but they\nare not recorded, so sadly I can\u2019t really post them here.<\/p>\n\n<p>If you are thinking \u201cwhy would you write your own orchestrator?\u201d, I have a few\nanswers for you.<\/p>\n\n<ol>\n  <li>Back in the day Kubernetes was not so popular; at least it was not 4 years ago,\nwhen InfluxCloud started.<\/li>\n  <li>We had to manage data and state since the beginning; people still say that\nKubernetes is not for them today, imagine how it was 4 years ago.<\/li>\n<\/ol>\n\n<p>By the way, InfluxCloud v2 now leverages Kubernetes.<\/p>\n\n<p>Writing a good orchestrator is super fun! When I started, and still today, big\nparts of it were frustrating and not so good, but the parts we wrote following\nreactive planning and control theory are lovely! This article is an introduction\nto Control Theory. 
<a href=\"https:\/\/twitter.com\/goller\">Chris Goller<\/a> Solution\nArchitect at InfluxData was the first person that told me about how Control\nTheory works in theory, and he pushed me to try reactive planning for our\norchestrator.<\/p>\n\n<p>As Kubernetes contributor I recognized some of those patterns as looking at\nshared informers, controller and so on. So I understood since the beginning that\nthose patterns was everywhere around me!<\/p>\n\n<p><a href=\"https:\/\/twitter.com\/colmmacc\">Colm MacC\u00e1rthaigh<\/a> from Amazon Web Service with\nhis  talks (like the one posted here) helped me to find resources to read, more\npatterns and use cases for it.<\/p>\n\n<div class=\"embed-responsive embed-responsive-16by9 col-xs-12 text-center\">\n    <iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/O8xLxNje30M\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n    picture-in-picture\" allowfullscreen=\"\"><\/iframe>\n<\/div>\n\n<h2 id=\"why-it-works\">Why it works<\/h2>\n\n<p>When I started to work as a Web Developer, designing APIs or websites I had\ndifferent challenges to face. To write a solid CRUD you put all your effort\nwhen a request comes to your API, you validate it, apply transformation to\nsanitize the input and if it is valid you save it\nin your database. You need to build good UX, complex validations systems and so\non. But what lands in the database is right and rock solid.<\/p>\n\n<p>There are other systems where you do not have a database that tells you what is\nright or not. You need to <strong>measure<\/strong> the current state, <strong>calculate<\/strong> what\nneeds to get back to your desired state and you need to <strong>apply<\/strong> what you\ncalculated.<\/p>\n\n<p>Those systems are everywhere:<\/p>\n\n<ul>\n  <li>The boiler you have at home to keep the water warm needs to constantly check if the\ndesired temperature you set is the current one. What it is stored in its\nmemory is what you desire, not the truth.<\/li>\n  <li>The example Colm MacC\u00e1rthaigh used is the Autoscaler. It keeps checking the\nstate of your system based on the scalability rules you set. For example if\nCPU is over 70% spin up 3 nodes. The autoscaler measures the current state of\nyour CPUs and when it is over it calculates what needs to be done and it\nexecutes the scale up or down.<\/li>\n  <li>When you read Kubernetes documentation is will see reference to Controller,\nreconciliation loop, desired state and so on. All of those concepts come from\nControl Theory.<\/li>\n<\/ul>\n\n<p>Orchestrator but more in general big microservices environment do not have the\nconcept of data locality as we used to have in the past. The data you need can\nchange continously, and they need to collected from different sources and\ncombined in order to calculate what needs to be done.<\/p>\n\n<p>I think this is the main reason about why patterns coming from Control Theory\nworks well.<\/p>\n\n<p>If you need to write a program that provisions 3 virtual machines and attach them\nto a random DNS record you can approach this problems in 2 ways. You can write a\nprocedure that:<\/p>\n\n<ol>\n  <li>Creates 3 instances.<\/li>\n  <li>Takes the public IPs.<\/li>\n  <li>Creates the DNS record with the IPs as A record.<\/li>\n<\/ol>\n\n<p>Another way you have to fix this issue is to start from checking what you have,\nmaking a plan to matches what it is not as you desire. 
So it will look like\nthis:<\/p>\n\n<ol>\n  <li>Check how many instances there are and mark what you need to do: if there are\n2 of them you need one more, if there are 5 you need to delete 2, and if there are 0\nof them you need to create all of them.<\/li>\n  <li>Check if the DNS record is already there and how many IPs are assigned to it.<\/li>\n  <li>If it does exist you do not need to create it, but you need to check whether the\nIPs assigned to it are the same as the instances\u2019. If they are not, you need to\nreconcile the DNS record by fixing the IPs.<\/li>\n  <li>The record does not exist? You can create it.<\/li>\n<\/ol>\n\n<p>If you are wondering how all those checks make the system more reliable: it is because\nyou never know what you have already created or what is already there. Let\u2019s\nassume you are on AWS. API requests can fail in the middle of your process, and\nyou need to know where you are. AWS itself can stop or terminate instances, other\nprocedures can do it, or it can happen by manual mistake.<\/p>\n\n<p>Approaching the problem in this way allows you to repeat the flow over and over,\nbecause it is idempotent, and at every retry the process will be able to reconcile\nany divergence between what you asked for (3 VMs and one DNS record) and what is\nactually running. This process is called a reconciliation loop.<\/p>\n\n<h2 id=\"101-architecture\">101 architecture<\/h2>\n\n<p>Colm MacC\u00e1rthaigh highlights three major areas that a successful Control\nTheory implementation is built around:<\/p>\n\n<ol>\n  <li>Measurement process<\/li>\n  <li>Controller<\/li>\n  <li>Actuator<\/li>\n<\/ol>\n\n<h2 id=\"measurement-process\">Measurement process<\/h2>\n\n<p>The way you retrieve the current state of the system is crucial in order to keep\nlatency low. Measurements are crucial to calculate what needs to be done,\nbecause the program makes different decisions depending on the current state.<\/p>\n\n<h2 id=\"controller\">Controller<\/h2>\n\n<p>This is the part I have the most experience with. The desired state is usually stored\nand clear: you know where to go. You get the measurements and, with this\ninformation, you need to write a procedure capable of making a plan starting from\nyour current state to get to the desired one.<\/p>\n\n<p>A few weeks ago I wrote an introduction about <a href=\"https:\/\/gianarb.it\/blog\/reactive-planning-is-a-cloud-native-pattern\">reactive\nplanning<\/a>;\nit is the way I calculate a plan.<\/p>\n\n<p>I am also preparing a PoC in Golang, with actual code you can run and test, to\nshow in practice what reactive planning means.<\/p>\n\n<h2 id=\"actuator\">Actuator<\/h2>\n\n<p>It is the part that takes a calculated plan and executes it. I have worked a lot\nwith schedulers that are able to take a set of steps and execute them one by one\nor in parallel, based on needs.<\/p>\n
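<p>If you squint, the three areas fit in a few lines of Go. This is only a sketch of the shape, with made-up types and numbers, not code from a real system:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"time\"\n)\n\ntype State struct{ Nodes int }\ntype Action struct{ Create int }\n\n\/\/ measure is the measurement process: ask the real world, not a cache.\nfunc measure() State { return State{Nodes: 2} }\n\n\/\/ calculate is the controller: compare current and desired state.\nfunc calculate(current, desired State) Action {\n\treturn Action{Create: desired.Nodes - current.Nodes}\n}\n\n\/\/ apply is the actuator: execute the calculated plan.\nfunc apply(a Action) { fmt.Printf(\"reconciling by %d nodes\\n\", a.Create) }\n\nfunc main() {\n\tdesired := State{Nodes: 3}\n\tfor {\n\t\tif a := calculate(measure(), desired); a.Create != 0 {\n\t\t\tapply(a)\n\t\t}\n\t\ttime.Sleep(10 * time.Second)\n\t}\n}\n<\/code><\/pre>\n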
<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>Take one of the problems you have and try to think about it in a more reactive way,\nstarting from checking where you are and not from doing things. The reliability and\nstability of your code will improve drastically.<\/p>\n"},{"title":"Hack your Google Calendar with gcalcli","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/hack-your-google-calendar-gcalcli"}},"description":"Everybody uses Google Calendar in one way or another, and if you are a Linux user with a light desktop manager such as i3 you lack some conveniences like reminders and notifications for your events. I find gcalcli a very good solution for my pain.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-08-26T08:08:27+00:00","published":"2019-08-26T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/hack-your-google-calendar-gcalcli","content":"<p>I am pretty bad with meetings. I forget about them for a lot of different\nreasons; sometimes I do not show up even if, a few minutes earlier, my mind briefly\nremembered the event.<\/p>\n\n<p>Meetings are not my daily job, and I do not have them with a lot of different\npeople: IPM with my team, one-to-ones with my manager, various stand-ups. I can\nremember the recurring ones pretty well, but it is still an annoying and useless\nexercise.<\/p>\n\n<p>When they are not recurring they usually involve people outside my small circle,\nand it gets even worse, because I do not like to be late or to miss them! I swear I\nam not like that in real life! I am on time and I prefer to be there early.<\/p>\n\n<p>Anyway! Ryan Betts, VP of Engineering at InfluxData, shared a very nice CLI tool called\n<a href=\"https:\/\/github.com\/insanum\/gcalcli\">gcalcli<\/a>. I love CLI tools as much as I\nlove APIs! Probably a bit more, because they are the perfect glue between the server\nside and the best UX ever (also known as <strong>my terminal<\/strong>).<\/p>\n\n<p><img src=\"\/img\/gintonic.jpg\" alt=\"A good gin tonic is great as close as my terminal\" \/><\/p>\n\n<p><strong>gcalcli<\/strong> is a lovely CLI tool that uses the Google Calendar API to help you\nmanage your Google Calendar.<\/p>\n\n<p>You can do a lot of things: list, search, edit, add events and even more.\nThe <a href=\"https:\/\/github.com\/insanum\/gcalcli#login-information\">authentication is well\ndocumented<\/a>: you need to\ncreate a project on the Google developer platform with Calendar API access. After\nthat you get your credentials and you follow the link I just posted! Super easy.<\/p>\n\n<p>Once you are logged in, you can use this systemd unit and timer I wrote to check every\n10 minutes whether there are upcoming events:<\/p>\n\n<pre><code>[Service]\nSyslogIdentifier=gcalcli-notification\nExecStart=\/usr\/bin\/gcalcli remind\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n\n<pre><code>[Unit]\nDescription=\"Send notification for every meetings set for xxxxx@gmail.com\"\n\n[Timer]\nOnBootSec=0min\nOnCalendar=*:0\/10\n\n[Install]\nWantedBy=timers.target\n<\/code><\/pre>\n\n<p>The timer runs the command <code>\/usr\/bin\/gcalcli remind<\/code> every 10 minutes.\n<code>remind<\/code> uses <code>notify-send<\/code> to show a lovely notification.<\/p>\n\n<p>I set it up for my work calendar and, let me tell you, it works great!\nThat is why I was looking for a way to support multiple Google accounts:\nI would like to use it for my personal Google Calendar as well.<\/p>\n\n<p>There is a global flag for <code>gcalcli<\/code> called <code>--config-folder<\/code>; by default it is\nnot set, and the CLI creates a config file with credentials and preferences in your home\ndirectory. If you run <code>gcalcli<\/code> with that parameter set to a different\nlocation:<\/p>\n\n<pre><code class=\"language-bash\">$ gcalcli --config-folder ~\/.gcalclirc-anotheraccount list\n<\/code><\/pre>\n\n<p>The CLI won\u2019t find the configuration file, so it will proceed with a brand-new\nauthentication and create a new file in the specified location. Sweet! 
I\nused that trick to configure my second Google account, created a new unit and timer\nwith the right flags, and now I get notifications from everywhere! So far so good!<\/p>\n\n<p>Ryan allowed me to share a script he hacked together called <code>next<\/code>; I have it in my\n<code>bashrc<\/code>:<\/p>\n\n<pre><code class=\"language-bash\">next() {\n    datetime=$(date \"+%Y-%m-%dT%H:%M\")\n    whatwhere=$(gcalcli --calendar name-your-calendar agenda --tsv --details location $datetime 8pm | head -n 1 | awk 'BEGIN {FS = \"\\t+\"} ; {print $5 \" \" $6}')\n\n    re=\"([[:digit:]]+)\"\n    if [[ $whatwhere =~ $re ]]; then\n       room=\"zoommtg:\/\/zoom.us\/join?confno=${BASH_REMATCH[1]}\"\n    fi\n\n    echo \"What: '$whatwhere'\"\n    echo \"xdg-open $room\"\n    echo \"xdg-open $room\" | clipc\n}\n<\/code><\/pre>\n\n<p>I use Linux, he uses MacOS, so I changed the script a bit.<\/p>\n\n<p>I use <code>xdg-open<\/code> to make it work with <code>X<\/code>. <code>next<\/code> gets the next upcoming meeting you have in one\nparticular calendar (<code>name-your-calendar<\/code> in my case) and stores on my\nclipboard (via <code>clipc<\/code>) the command to join a Zoom channel. It is super handy when you\nare in a hurry: you will join Zoom meetings in a second.<\/p>\n\n<p>If you use <code>gcalcli<\/code> and you have other tricks, let me know via twitter\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>, because I would like to try them as well!<\/p>\n"},{"title":"I am in love with language servers","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/i-am-in-love-with-language-servers"}},"description":"Language servers are a nice way to reuse common features required by editors, such as autocomplete, formatting, go to definition. This article is an open letter to share my love for this project with everybody","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-07-30T08:08:27+00:00","published":"2019-07-30T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/i-am-in-love-with-language-servers","content":"<p>Hello everybody! I am writing this article because I had a chat with a friend of\nmine, <a href=\"https:\/\/twitter.com\/walterdalmut\">@wdalmut<\/a>. He is a busy businessman and\na vimmer like me.<\/p>\n\n<p>This article is a quick and practical way to understand why language servers are\nfantastic! Because they are!<\/p>\n\n<p>When I started to use vim, I was developing almost all the time with PHP. PHP is\na tricky language, and back in the day YouCompleteMe was the way to go to have\nsome autocomplete. However, as I said, PHP was not an excellent language for\nthat, because the number of files is enormous, and loading all of them to suggest\nfunctions and methods is tricky. Probably it is still like that.<\/p>\n\n<p>Compared with a couple of years ago, we have more IDEs and editors: Atom, VSCode,\nSublime, and many more. To be successful, all of them require the same features:<\/p>\n\n<ul>\n  <li>Syntax highlighting<\/li>\n  <li>Autocomplete<\/li>\n  <li>Formatting<\/li>\n<\/ul>\n\n<p>You can see the language server as a protocol to abstract and reuse those\nfeatures, and many more, such as go to definition, find all references, show\ndocumentation. Vim is almost like WordPress: there is a plugin for everything;\nfor example, there is an excellent vim-go plugin that makes vim work smartly with\nGo. 
The problem is that it only works for vim and, as I said, almost all editors\nneed the set of shared features just listed to be usable on a daily basis.<\/p>\n\n<p>The community that builds a language has a lexer and a parser, and it can traverse\nthe AST of the language it develops. It has the knowledge and all the\nbuilding blocks to provide a tool usable by different clients. The way for them\nto build something reusable is a language server. The clients are the different\neditors.<\/p>\n\n<p>This story is real, and the Golang community develops\n<a href=\"https:\/\/github.com\/golang\/go\/wiki\/gopls\">gopls<\/a> (it stands for \u201cgo please\u201d), the\nGolang language server. I use it with vim, and as a client I use\n<a href=\"https:\/\/github.com\/neoclide\/coc.nvim\">vim-coc<\/a>.<\/p>\n\n<p>vim-go &gt;1.20 works with gopls as well; you need to set it explicitly:<\/p>\n\n<pre><code>let g:go_def_mode='gopls'\nlet g:go_info_mode='gopls'\n<\/code><\/pre>\n\n<p>This article expresses my love for language servers, not for Go or vim-go or vim!\nEven if I love all of them!<\/p>\n\n<p>We spend a good amount of time trying to achieve developer happiness and to boost our\nproductivity.<\/p>\n\n<p>There are more and more tools and developers out there. The killer feature of the LSP is\nits ability to create communities and to give us the ability to share reusable\ncode.<\/p>\n\n<p>Other than gopls I also use\n<a href=\"https:\/\/github.com\/sourcegraph\/javascript-typescript-langserver\">sourcegraph\/javascript-typescript-langserver<\/a>\nfor JavaScript and TypeScript and <a href=\"https:\/\/github.com\/rust-lang\/rls-vscode\">rust-lang\/rls-vscode<\/a> for Rust.<\/p>\n\n<p>As you can see, rls-vscode looks, from the name, like a VSCode project, but only\nbecause VSCode also supports the Language Server Protocol!<\/p>\n\n<p>Thanks sourcegraph, microsoft and everybody behind the LSP effort!<\/p>\n"},{"title":"When do you need a Site Reliability Engineer?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/when-do-you-need-a-site-reliability-engineer"}},"description":"Every day I read more job descriptions looking for SREs. In the meantime I hear and live the frustration of those who do not understand what SRE means and hire somebody who won\u2019t fit.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-07-05T08:08:27+00:00","published":"2019-07-05T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/when-do-you-need-a-site-reliability-engineer","content":"<p>I started working as a Site Reliability Engineer more than two years ago, as the\nfirst hired SRE at InfluxData. I survived, and I learned through all the eras that\nevery company onboarding a new position lives through:<\/p>\n\n<ol>\n  <li>Lack of knowledge about what the job role means<\/li>\n  <li>Adjustment<\/li>\n  <li>Growth<\/li>\n  <li>Re-adjustment<\/li>\n  <li>Repeat<\/li>\n<\/ol>\n\n<p>You are an SRE not because you care about reliability (everybody cares about\nreliability) but because the system is too complex to be driven by a person who\nalso does other things.<\/p>\n\n<p>There is no difference from any other \u201cfirst hire\u201d in a company. 
Even the\nfirst project manager gets hired when the person who was doing that job can\u2019t\nmake it anymore, because the company needs somebody 100% focused on the product.<\/p>\n\n<p>The Site Reliability Engineer as a role should improve <strong>service<\/strong> reliability.\nVisibility, observability, logging, scalability, instrumentation are all areas\nwhere they should step in to provide better tooling to troubleshoot and identify issues.\nBecause, as we all know, even not-that-complex distributed systems are difficult\nto debug; this complexity is caused by what is called partial failure: the\nidea that a distributed system will never fail drastically all together, but\nis continuously in a condition of failure, mitigated by retry policies and\/or\nredundancy.<\/p>\n\n<p>The ability to acknowledge a problem before it gets reported by a customer\nimproves reliability.<\/p>\n\n<p>It is not the Site Reliability Engineer\u2019s responsibility to fix the actual\nbug in the service, though they can. For all those reasons the SRE knows how to code,\nshould be able to modify the application, and needs to be close to the team that\nbuilds the service, just as every heterogeneous group has someone who takes care of\ndesign, UI, deployment, management.<\/p>\n\n<h2 id=\"are-they-the-unique-people-on-call\">are they the only people on-call?<\/h2>\n<p>Obviously not. It\u2019s hard to reach a scale where you can manage a sustainable\nrotation only with SREs, and every developer is responsible for the code they\nship. If you manage to have a rotation for every service with different\npeople, all of the teammates should be on-call.<\/p>\n\n<p>The SRE, other than being part of the rotation, is the person responsible for the\nMTTR (mean time to repair) and the number of false positives. The Site\nReliability Engineer needs to make the MTTR as short as possible, and\nthe number of false positives as low as it can be. They should improve how the\nservice is monitored and instrumented, and how easy it is to debug.<\/p>\n\n<h2 id=\"do-i-need-an-sre-in-every-service-team\">do I need an SRE in every service team?<\/h2>\n\n<p>It is hard to quantify a number, but the SREs need a structure that\ngives them time to hang out together and to see each other as a single team, to\nshare knowledge and to avoid the use of too many technologies across the\ncompany. Even more so if the company is not at a gigantic scale in terms of the\nnumber of people. The number of SREs per team depends on how crucial and\ncomplex reliability is for the service, and on how big the service team is. You can\nshare SREs between organizations and services if they are not too big or too\ncomplicated, or if the unit itself has excellent reliability skills embedded in it.<\/p>\n\n<h2 id=\"what-sre-is-not\">What SRE is not<\/h2>\n\n<p>An SRE does not replace your ops team; it is not a person with DevOps skills who\nknows containers and Kubernetes. 
They know cloud, containers, and Kubernetes,\nbut not because it is a pretty new \u201cunicorn\u201d role.<\/p>\n\n<p>It is a side effect of being a coder who loves to see their code running smoothly\nunder real load.<\/p>\n\n"},{"title":"Test in production behind slogans","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/test-in-production-behind-slogans"}},"description":"How fast we are capable of instrumenting an application decreases the amount of time required to understand and fix a bug.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-05-27T08:08:27+00:00","published":"2019-05-27T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/test-in-production-behind-slogans","content":"<blockquote class=\"tw-align-center twitter-tweet\"><p lang=\"en\" dir=\"ltr\">What do we test before\nprod? We do our known unknowns -- does it work? (unit tests). does it fail in\nways I can predict?<br \/><br \/>We need to test our unknown unknowns in production\nwith \u2728observability\u2728. and experiment upon them with chaos engineering! <a href=\"https:\/\/twitter.com\/hashtag\/VelocityConf?src=hash&amp;ref_src=twsrc%5Etfw\">#VelocityConf<\/a><\/p>&mdash;\nLiz Fong-Jones (\u65b9\u79ae\u771f) (@lizthegrey) <a href=\"https:\/\/twitter.com\/lizthegrey\/status\/1139273082412027904?ref_src=twsrc%5Etfw\">June\n13, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>I got inspired by Liz\u2019s tweet recently, and I am writing this post as a reminder\nfor everybody. \u201cTest in prod\u201d is a slogan, a trademark. It doesn\u2019t explain all\nthe concepts behind it, just as a sentence like \u201cthings go better with Coke\u201d hides the why\nand the how. Slogans are great as a quick reminder of more articulated ideas. They\nare useful because with one sentence you can recall more profound content\ninside your brain.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/coke-slogan.jpg\" \/><\/p>\n\n<p>\u201cYou\u201d do not test unknown unknowns in production, mainly because you do not know\nyour unknowns. In production, you as a developer <strong>validate<\/strong> three kinds of\nthings:<\/p>\n\n<ul>\n  <li>That complicated parts of your system, not well covered by tests, are\nworking.<\/li>\n  <li>That something you are working on is working\nfine, even if it has unit tests, integration tests and so on.<\/li>\n  <li>That crucial parts of the system that need to work (or your boss will kick your ass)\nstill work; you are afraid enough that you test them even if you just changed a line of CSS.<\/li>\n<\/ul>\n\n<p>What \u201ctest in prod\u201d really means is that somebody, a random customer, human or\nnot, will randomly trigger an unknown action that will cause an issue. It doesn\u2019t\neven have to be triggered; it can be an environmental issue. For example, what\nTwitch calls <a href=\"https:\/\/blog.twitch.tv\/go-memory-ballast-how-i-learnt-to-stop-worrying-and-love-the-heap-26c2462549a2\">\u201cthe refresh\nstorm\u201d<\/a>\nis an excellent example of an environmental issue. When a broadcaster has a\nconnectivity issue, all the watchers start to refresh the page multiple times,\nthinking it will solve the problem. As a side effect, the Twitch infrastructure can\nsuffer a high number of requests. 
This is a non-Twitch problem that becomes\na Twitch problem.<\/p>\n\n<p>We need to learn and onboard tools and a mindset that will help us improve how\nfast we can track, record, fix, and learn from an issue. All the questions that\nmatter happen in production and, by consequence, we need to stay focused on\nit. I think a lot of people already test in prod in some way.<\/p>\n\n<p>When your laptop starts, but it restarts by itself at some point, you have a\nproblem. You look around, and you notice that your fan doesn\u2019t run anymore. It\nis a pretty simple issue to detect and solve: you hear that the fan doesn\u2019t make\nany noise, so you replace it.<\/p>\n\n<p>I am sorry! Everybody got distracted by distributed systems, containers, cloud.\nIf you know how to design a fault-tolerant application, 90% of your failures are\npartial failures! They are a disaster to figure out, understand, and fix! Only\na subset of your system may break, for a subset of customers, while the same part\nworks correctly for another subgroup, and you need to figure out why! You should\nalso be able to message that subgroup of customers proactively, to say \u201cI am sorry! Shit\nhappens, we are working on it\u201d!<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>\u201cTest in prod\u201d means all the things I wrote and probably way more! It is\nreasonable to say that nobody can do anything to stop \u201ctests in prod\u201d from happening,\nso have fun!<\/p>\n\n"},{"title":"Instrumentation code is a first citizen in a codebase","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/instrumentation-code-is-a-first-citizen-in-a-codebase"}},"description":"How fast we are capable of instrumenting an application decreases the amount of time required to understand and fix a bug.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-05-27T08:08:27+00:00","published":"2019-05-27T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/instrumentation-code-is-a-first-citizen-in-a-codebase","content":"<p>A few years ago a log was very similar to a printf statement: a message that\nin some way tried to communicate to the outside world the current situation of\na specific procedure. The format and composition of the message were not\ncrucial; the main purpose was to make it easy enough to read. A full-text\nsearch engine capable of tokenizing and indexing every message for easy\nlookup and aggregation was enough to bridge the gap between a log a human\nunderstands and something that a program can parse and visualize. Cloud\ncomputing and containers changed the way we architect, visualize and deploy\nsoftware:<\/p>\n\n<ol>\n  <li><strong>The distribution of our applications<\/strong>. Compared with a more traditional\napproach, our applications run in smaller but highly replicated units\n(containers, pods, EC2 instances and so on).<\/li>\n  <li><strong>The size<\/strong> of our applications (microservices) and, by consequence, the\ninteractions between them over an imperfect communication layer (the\nnetwork).<\/li>\n  <li>Applications come and go much more frequently, because we have automation that\ntakes care of the number of replicas running inside a system. They are more\n<strong>dynamic<\/strong> and we do not really have stable identifiers as before:\nhostnames and IPs change more often.<\/li>\n<\/ol>\n\n<p>These points make it more important for us to get application metrics out of\nour code, because that\u2019s the language our applications speak. 
We rely on them in\norder to understand what is going on. We need to realize that logs and metrics\nhave different purposes:<\/p>\n\n<ul>\n  <li>To understand what is going on right now<\/li>\n  <li>To verify what happened in the past (even from a legal perspective)<\/li>\n  <li>To compare<\/li>\n<\/ul>\n\n<p>They are not random printfs. All these purposes require methodologies and tools.\nThis article will stay focused on the first point, \u201cWhat is going on?\u201d, because\nit is a question I ask even myself when I look at the systems I wrote or\nmanage, and the answer is a real pain to retrieve. To troubleshoot a system we\nneed a very dense amount of information \u201calmost in real time\u201d, because a\nsystem is broken \u201cnow\u201d, plus a picture or a sample of older data in order\nto compare the current situation with something that we can define as \u201cworking\u201d.\nWe cannot really use old data, because our codebase changes frequently (because\nsomebody told us that we can break things and develop fast). So there is not a lot of\nvalue in looking at high-density data coming from two weeks ago, when the\ncodebase was different. That\u2019s why time series databases such as InfluxDB have data\nretention features built in to keep themselves clean.\n<a href=\"https:\/\/github.com\/kapacitor\/influxdb\">InfluxDB<\/a> removes the data after a\ncertain amount of time, but with\n<a href=\"https:\/\/github.com\/influxdata\/kapacitor\">Kapacitor<\/a> you can aggregate or sample\nthe data into an older retention policy in order to keep what you need in the\ndatabase. Back in the day I wrote this article about <a href=\"https:\/\/gianarb.it\/blog\/what-is-distributed-tracing-opentracing-opencensus\">OpenTracing and\nOpenCensus<\/a>.\nThis is a follow-up after another year of working around code instrumentation,\nobservability, and monitoring.<\/p>\n\n<p>First of all, both of them are vendor-neutral projects that help you instrument\nyour applications without locking you in with a specific provider. It doesn\u2019t even\nneed to be a bad, evil vendor: if you use the Prometheus client directly in your code,\neverywhere, you will be locked to it forever, or until you find the right\ntime to move your whole codebase over. But it sounds like \u201cchange your logger\u201d:\nsomething you would like to do magically, in one shot, without wasting your time.<\/p>\n\n<p>OpenTracing is 100% for tracing; the problem it solves is how to\ninstrument your application to send traces. OpenCensus does the same, plus it\nalso takes care of metrics.<\/p>\n\n<p>These two projects have a major issue: they are TWO different projects. They\nwere not smart enough to agree on the same format, and it split the dev community without\nany reason. Shame on you! Good for us, they will be\n<a href=\"https:\/\/medium.com\/opentracing\/merging-opentracing-and-opencensus-f0fe9c7ca6f0\">merged<\/a>\ntogether at some point into something called OpenTelemetry. Finally!<\/p>\n\n<p>Another misunderstanding is around how tracers such as Zipkin, Jaeger and X-Ray advertise themselves as\n\u201copentracing compatible\u201d. 
When I think about \u201ccompatible\u201d I think about something like a REST\nAPI that follows some rules and, for that reason, SystemA is compatible with\nSystemB and you can swap them transparently.<\/p>\n\n<p>This is not what happens with tracing infrastructure, because you need to\nremember that OpenTracing and OpenCensus play on the codebase side; it is not\nREST or anything like that.<\/p>\n\n<p>Compatibility, in this case, means that the\ntracers (Zipkin, Jaeger, AWS X-Ray, NewRelic) ship an OpenTracing compatible\nlibrary across many languages, and you can change it in your codebase in order to\npoint your application to a different tracer without changing the\ninstrumentation code you wrote.<\/p>\n\n<p>NB: OpenCensus has the same goal for metrics as well.<\/p>\n\n<pre><code class=\"language-javascript\">function initTracer(serviceName) {\n  var config = {\n    serviceName: serviceName,\n    sampler: {\n      type: \"const\",\n      param: 1,\n    },\n    reporter: {\n      agentHost: \"jaeger-workshop\",\n      logSpans: true,\n    },\n  };\n  var options = {\n    logger: {\n      info: function logInfo(msg) {\n        logger.info(msg, {\n          \"service\": \"tracer\"\n        })\n      },\n      error: function logError(msg) {\n        logger.error(msg, {\n          \"service\": \"tracer\"\n        })\n      },\n    },\n  };\n  return initJaegerTracer(config, options);\n}\n\nconst tracer = initTracer(\"discount\");\nopentracing.initGlobalTracer(tracer);\n<\/code><\/pre>\n<p>This example comes from\n<a href=\"https:\/\/github.com\/gianarb\/shopmany\/blob\/end\/discount\/server.js\">shopmany<\/a>, a\ntest e-commerce application I wrote. In this case the <code>tracer<\/code> is Jaeger, but if you need\nto change to Zipkin you can probably use\n<a href=\"https:\/\/github.com\/DanielMSchmidt\/zipkin-javascript-opentracing\">zipkin-javascript-opentracing<\/a>.<\/p>\n
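<p>The same idea in Go, as a minimal sketch using the vendor-neutral <a href=\"https:\/\/github.com\/opentracing\/opentracing-go\">opentracing-go<\/a> API (the operation name and tag are invented for the example): the instrumented function never imports Jaeger or Zipkin, so swapping the tracer is a one-line change where the global tracer gets registered.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"context\"\n\n\t\"github.com\/opentracing\/opentracing-go\"\n)\n\n\/\/ applyDiscount is instrumented only against the opentracing-go\n\/\/ interfaces; it has no idea which tracer is behind them.\nfunc applyDiscount(ctx context.Context, cartID string) {\n\tspan, ctx := opentracing.StartSpanFromContext(ctx, \"apply-discount\")\n\tdefer span.Finish()\n\tspan.SetTag(\"cart.id\", cartID)\n\t\/\/ ... business logic, passing ctx down so children join the trace.\n\t_ = ctx\n}\n\nfunc main() {\n\t\/\/ Build a Jaeger (or Zipkin) tracer and register it here; this is\n\t\/\/ the only vendor-specific line in the whole program.\n\t\/\/ opentracing.SetGlobalTracer(tracer)\n\tapplyDiscount(context.Background(), \"cart-42\")\n}\n<\/code><\/pre>\n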
<p>It is important to evaluate an instrumentation library like OpenCensus,\nOpenTracing or OpenTelemetry, because there is a community that writes and supports\nlibraries across many languages and tracers. It means that you do not really\nneed to write your own library; that sounds like a bit too much! I was very\nfrustrated by the fact that these two libraries were TWO! I can\u2019t wait to see\nwhat the result will look like. How easy it is to instrument an application is a\nkey value for a company like Honeycomb.io, and this sounds like a good reason for\nthem to have their own instrumentation libraries\n(<a href=\"https:\/\/github.com\/honeycombio\/beeline-go\">go<\/a>,\n<a href=\"https:\/\/github.com\/honeycombio\/beeline-nodejs\">js<\/a>,\n<a href=\"https:\/\/github.com\/honeycombio\/beeline-ruby\">Ruby<\/a>); when they started, the\necosystem was different (it is still a mess today as you read this). But I hope that\nOpenTelemetry will push everybody to just work together, because understanding\nwhat is going on in production right now is a hard, messy and amazing challenge.<\/p>\n\n<blockquote class=\"twitter-tweet\"><p lang=\"en\" dir=\"ltr\">it is so nice to see\nhow two great open source community such as <a href=\"https:\/\/twitter.com\/InfluxDB?ref_src=twsrc%5Etfw\">@InfluxDB<\/a> and <a href=\"https:\/\/twitter.com\/ntop_org?ref_src=twsrc%5Etfw\">@ntop_org<\/a> can do\ntogheter. That&#39;s how we can solve observability\/monitoring challanges all\ntogheter <a href=\"https:\/\/twitter.com\/Chris_Churilo?ref_src=twsrc%5Etfw\">@Chris_Churilo<\/a><\/p>&mdash;\ngianarb (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1126107355895214082?ref_src=twsrc%5Etfw\">May\n8, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<h2 id=\"keep-instrumentation\">Keep instrumentation<\/h2>\n\n<p><img src=\"\/img\/infinite-loop.png\" alt=\"The infinity symbol\" \/><\/p>\n\n<p>The ability to instrument an application quickly and precisely increases your\ntroubleshooting capabilities. The faster you iterate on your instrumentation code,\nthe faster you will understand what is going on. It is not a one-shot exercise;\nit is something you improve every day based on what you learn. But your\nability to learn depends on how well you can read the language that your\napplications expose (let me tell you a secret: it depends on how well you\ninstrument your code).<\/p>\n\n<p>More to read:<\/p>\n<ul>\n  <li><a href=\"https:\/\/medium.com\/jaegertracing\/jaeger-and-opentelemetry-1846f701d9f2\">Jaeger and\nOpenTelemetry<\/a><\/li>\n  <li><a href=\"https:\/\/www.honeycomb.io\/blog\/how-are-structured-logs-different-from-events\/\">Structured\nlogs<\/a><\/li>\n  <li><a href=\"https:\/\/gianarb.it\/blog\/logs-metrics-traces-aggregation\">Logs Metrics Traces are equally\nuseless<\/a><\/li>\n<\/ul>\n"},{"title":"After two years at InfluxData","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/two-years-at-influxdata"}},"description":"Two years at InfluxData. Feelings, sensations, pain points, what I have learned.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-05-23T08:08:27+00:00","published":"2019-05-23T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/two-years-at-influxdata","content":"<p>Hello dudes! I am gonna write down some ramblings today.<\/p>\n\n<p>I am writing this article during my flight back from KubeCon 2019 in Barcelona. I\nhad a great time at our booth speaking with community enthusiasts, customers,\nand developers. I had to leave a day early due to unforeseen circumstances (I am\ncrazy).<\/p>\n\n<blockquote class=\"tw-align-center twitter-tweet\"><p lang=\"en\" dir=\"ltr\">I figure out just in\ntime that I am suppose to take my flight now and not Friday as my brain\nmemorized. I am running to the airport! Bye <a href=\"https:\/\/twitter.com\/hashtag\/KubeCon?src=hash&amp;ref_src=twsrc%5Etfw\">#KubeCon<\/a>\nand sorry for everybody I won&#39;t say \ud83d\udc4b to as planned! \ud83d\ude22 See y\nsoon!<\/p>&mdash; gianarb (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1131468802707939329?ref_src=twsrc%5Etfw\">May\n23, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>A few days ago I realized I have been working at InfluxData for two years; I\nthink it is time for me to share something with you about my experience over\nthere.<\/p>\n\n<h2 id=\"community-matters\">Community matters<\/h2>\n\n<p>I moved to Dublin 4 years ago and stayed for 1.5 years. I was not able to speak English\nand I knew it was an important skill to learn. So I got a job and I moved there.<\/p>\n\n<p>I knew it was not gonna be forever, because I love the people I left in my hometown and\nI like Italy. Luckily for me, I love open source and Dublin has a huge set of\nmeetups to attend. 
The Docker Meetup was my favorite one and also the one where\nI met <a href=\"https:\/\/twitter.com\/tomwillfixit\">@tomwillfixit<\/a> and\n<a href=\"https:\/\/twitter.com\/jpetazzo\">@jpetazzo<\/a>, who pushed me to join the amazing\nCaptain program at Docker that was just about to start at that time.<\/p>\n\n<p>Anyway, I used and shared my InfluxDB love with the world even before moving to\nDublin, when I worked at <a href=\"https:\/\/twitter.com\/corleycloud\">@corleycloud<\/a> with\nWalter and we wrote the <a href=\"https:\/\/github.com\/corley\/influxdb-php-sdk\">PHP\ninfluxdb-sdk<\/a>.<\/p>\n\n<p>At some point, as you can hear from this podcast from <a href=\"https:\/\/www.stitcher.com\/podcast\/the-new-stack-makers\/e\/60409328?autoplay=true\">The New\nStack<\/a>,\nI pestered <a href=\"https:\/\/twitter.com\/Chris_Churilo\">@Chris_Churilo<\/a> from Influx so\nmuch that she referred me there as an SRE. I was so excited; my practical\ninterview took three hours of troubleshooting a Go application running in\nDocker. I still remember the test, it was fun and friendly.<\/p>\n\n<p>Anyway, that\u2019s how I got here. Open Source, community, new friends and some luck!<\/p>\n\n<h2 id=\"timezone-\">Timezone ???<\/h2>\n\n<p>InfluxData is a remote friendly company and I was ready to get back to Italy.\nEverybody warned me about the complexity hidden behind remote working, but nobody\nreally told me anything about the fact that all my colleagues would start to work\nalmost when I was ready to leave! InfluxData is very respectful and I can say\nthat in two years I can count on one (maybe two) hands the number of times I\nhad to open my laptop at some weird time.<\/p>\n\n<p><em>I love my laptop, I have it open by myself at weird times as well!<\/em><\/p>\n\n<p>But after two years I need to admit that it requires a great effort from both\nsides to work with +9 hours folks for so long. You need to be good at reaching\nout to them, and they need to remember that it is late on my side if they see me\nfrustrated or tired. But it is fun and you learn a lot about yourself and from\nyour teammates all around the globe; I think the effort pays back. Everyone\nshould have the chance to open their mind to cultures that are that different.<\/p>\n\n<p>When I joined, the people in Europe were not that many, probably 3-4. Now we are\nmore: my team is not made of just myself anymore, there are 6 other people, and\n<a href=\"https:\/\/twitter.com\/gitirabassi\">@gitirabassi<\/a> is in my timezone. It makes\neverything way easier.<\/p>\n\n<h2 id=\"unicorn-sf-start-up\">Unicorn SF start-up<\/h2>\n\n<p>This is also my first experience in a \u201cunicorn\u201d startup in US\/SF. I do not know\nif I can define InfluxData as a unicorn startup, but at every conference I go to, even\nin South Africa, there are people that use or know InfluxDB. So I bet we\nare <strong>pretty unicorn<\/strong>. Since I joined I think we are at least 4x more\npeople and we are still growing <a href=\"https:\/\/grnh.se\/97725b851\">(We are hiring)<\/a>. 
It is exciting and stressful.<\/p>\n\n<p>There are a lot of roles and teams that I had never heard of before in my career\nbecause I worked a lot with small companies, and I am very happy to hang out and\nchat with them when we are face to face to understand how salespeople follow\ncustomers or how the outbound sales team can make thousands of calls a day to\nfigure out the right person that should hear about what we do.<\/p>\n\n<p>Almost all the time people are the bottleneck, because it is hard to collaborate\nin a good way when your work environment keeps changing under your feet. But\nthat\u2019s how this business works and there is a lot to learn about how to survive\nand how it works.<\/p>\n\n<h2 id=\"i-am-whatever-i-want-and-thats-awesome\">I am whatever I want! And that\u2019s awesome<\/h2>\n\n<p>I started as a web developer 7 years ago; I moved to automation and devops\nbecause I liked to make people comfortable and confident in deploying their code\nin production and, as a developer, I knew the pains and I was happy to solve them.<\/p>\n\n<p>As an SRE I helped to develop a custom orchestrator for our SaaS with stateful\nworkloads and databases. I also enjoyed all the tracing and instrumentation\nrevolution that <strong>observability<\/strong> pushes.<\/p>\n\n<p>I love the people, and working from home sometimes brings some loneliness to the\ntable; that\u2019s why we have tech communities. That\u2019s why I organize the CNCF Meetup\nin Turin, I do open source and I go to conferences. There are millions of ways\nto feel less alone, online and offline obviously! Meetups, co-working, friends,\nbeers, and BBQ.<\/p>\n\n<p>I think I am a bit tired of working so close to infra, ops, and automation. My\n\u201cdeveloper side\u201d is pushing me back to where I started. The code (no, not\nPHP, I am sorry).<\/p>\n\n<p>At InfluxData there are a lot of Golang rock stars, so I will look around to\nunderstand what I am happy to hack on!<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>I have no idea where I am trying to go with these ramblings. Re-reading what I\nwrote, it looks like my way to thank all the people that over these two years helped\nme to grow and get better. I like to think that MAYBE somebody in the same\nsituation having a hard time will stumble upon this article and she\/he will\nrealize that <em>everything will be all right!<\/em><\/p>\n\n<p>Keep rocking!<\/p>\n"},{"title":"Workshop Design","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/workshop-design"}},"description":"I recently developed a workshop about application instrumentation. I ran it at the CloudConf in Turin. I developed it in open source and I thought it was a nice idea to share more about why I did it and how.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-04-19T08:08:27+00:00","published":"2019-04-19T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/workshop-design","content":"<p>Hello sweet internet! At this point, you should know that I am far from being a\nlovely happy coder. I like to share what I learn and to have a chat about what\nyou are doing. That\u2019s how it is! Feel free to follow my ramblings on\n<a href=\"https:\/\/twitter.com\/gianarb\">twitter\/gianarb<\/a>.<\/p>\n\n<p>If you don\u2019t know what I am gonna speak about, I can tell you this is another\nway to enjoy coding!<\/p>\n\n<p>Recently a friend of mine who organizes the <a href=\"https:\/\/cloudconf.it\">CloudConf<\/a>\nin Turin, Italy asked me if I was able to deliver a workshop. 
Let me say THE\nworkshop, 8 hours of chatting with exercises and questions.\nI did something like that years ago about AngularJS but hey, this sounds like a\nchallenge, and I love challenges! So I took it.\n<img src=\"\/img\/got-your-back.jpg\" alt=\"\" \/>\nIf you read my recent posts you know I have a passion nowadays:<\/p>\n\n<ul>\n  <li><a href=\"\/blog\/go-observability-is-for-troubleshooting\">Observability is for troubleshooting<\/a><\/li>\n  <li><a href=\"\/blog\/high-cardinality-database\">You need a high cardinality database<\/a><\/li>\n  <li><a href=\"\/blog\/logs-metrics-traces-aggregation\">Logs, metrics and traces are equally useless<\/a><\/li>\n<\/ul>\n\n<p>The topic was clear, and I called it \u201cApplication instrumentation\u201d. Lovely!<\/p>\n\n<p>I am driven by passion and purpose. My passion for troubleshooting and the\npurpose of figuring out what the f happens in production. I was ready to work on\nit!<\/p>\n\n<p><img src=\"\/img\/passion-fruit.jpg\" alt=\"\" \/><\/p>\n\n<h2 id=\"workshop\">Workshop?<\/h2>\n\n<p>This article is about how I prepared the workshop, and I hope it can help\nsomebody to avoid the same mistakes and also to use some of the material I\ndeveloped.<\/p>\n\n<p>I made everything in open source. There are two new repositories on my GitHub, one with a fake\ne-commerce I made using 4 different programming languages:<\/p>\n\n<ul>\n  <li>Golang as frontend proxy with a UI in HTML\/jQuery.<\/li>\n  <li>Java to do the most secure part of the e-commerce, obviously the payment\nservice.<\/li>\n  <li>NodeJS to get discounts for the items.<\/li>\n  <li>PHP to get the list of items currently available.<\/li>\n<\/ul>\n\n<p>You can find the code on\n<a href=\"https:\/\/github.com\/gianarb\/shopmany\">github.com\/gianarb\/shopmany<\/a>.<\/p>\n\n<p>I decided to develop a minimal version of the application in order to have it\nreusable for other purposes. It can be used to build a use case for a Kubernetes\ndeployment for example, or a CI lesson.<\/p>\n\n<p>The branch <code>master<\/code> contains the minimum set of features that I need to have an\napplication that makes some sense. But, for example, the services are without logs,\nmetrics and tracing because they will be added as exercises by the attendees.<\/p>\n\n<p>If you check out the workshop you will be able to see in the history a commit for\nevery exercise and application.<\/p>\n\n<p>The lessons are available on\n<a href=\"https:\/\/github.com\/gianarb\/workshop-observability\">github.com\/gianarb\/workshop-observability<\/a>;\nevery directory is a lesson. The README contains some information about\nwhere we are, why we should care and one or more exercises to do in practice\nin order to familiarize yourself with the concepts.<\/p>\n\n<p>The lessons I developed for the purpose of the CloudConf workshop are:<\/p>\n\n<ol>\n  <li>lesson1 is about designing a health check endpoint. Adding a single endpoint is a good\nway to familiarize yourself with a new application and there is so much to learn about\nhow to design a good health check endpoint (see the sketch after this list)!<\/li>\n  <li>lesson2 is about logging and <a href=\"https:\/\/charity.wtf\/2019\/02\/05\/logs-vs-structured-events\/\">structured\nlogging<\/a>. I tried\nto pick the most popular logging libraries for the languages. Logging using\nthe JSON format opens the door for future serialization as events.<\/li>\n  <li>lesson3 is about InfluxDB v1 and the TICK stack. The goal was to serve a\nmonitoring stack that can work with different structures such as events and\ntraces.<\/li>\n  <li>lesson4 is about tracing. Using Jaeger we instrumented and built a trace for\nthe application.<\/li>\n<\/ol>
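<p>To give you an idea of the kind of exercise in lesson1, this is a minimal sketch\nof a health check endpoint in Go. It is not the workshop\u2019s solution, just an\nillustration; the handler name and the database dependency are mine:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n    \"database\/sql\"\n    \"encoding\/json\"\n    \"net\/http\"\n)\n\n\/\/ healthHandler reports the status of the service and of its dependencies.\n\/\/ A good health check verifies the dependencies too, here a database.\nfunc healthHandler(db *sql.DB) http.HandlerFunc {\n    return func(w http.ResponseWriter, r *http.Request) {\n        status := map[string]string{\"service\": \"ok\", \"database\": \"ok\"}\n        code := http.StatusOK\n        if err := db.Ping(); err != nil {\n            status[\"database\"] = err.Error()\n            code = http.StatusServiceUnavailable\n        }\n        w.Header().Set(\"Content-Type\", \"application\/json\")\n        w.WriteHeader(code)\n        json.NewEncoder(w).Encode(status)\n    }\n}\n<\/code><\/pre>\n<p>You would mount it with something like <code>http.Handle(\"\/health\", healthHandler(db))<\/code>,\nand there is already plenty to discuss: which dependencies to check, which status\ncodes to return, how expensive the check is allowed to be.<\/p>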
<p>I have also reported an idea of a possible timeline (the one I used at the\nCloudConf):<\/p>\n\n<p>09.00 Registration and presentation\n09.30 - 13.00 Theory<\/p>\n\n<ul>\n  <li>Observability vs monitoring<\/li>\n  <li>Logs, events, and traces<\/li>\n  <li>What a monitoring infrastructure looks like: InfluxDB, Prometheus, Jaeger,\nZipkin, Kapacitor, Telegraf\u2026<\/li>\n  <li>Deep dive on InfluxDB and the TICK Stack<\/li>\n  <li>Deep dive on Distributed Tracing<\/li>\n<\/ul>\n\n<p>13.00 - 14.00 Lunch\n14.00 - 17.00 Let\u2019s make our hands dirty\n17.30 - 18.00 Recap, questions and so on<\/p>\n\n<h2 id=\"learning-during-the-development\">Learning during the development<\/h2>\n\n<p>I like to prepare slides, posts, and workshops because I learn a lot along the\nway about concepts that I usually develop during a long and frustrating set of\nattempts. Or reading a lot of blog posts, books, code. Writing about it helps me\nto put together what I learned, developing easy to understand materials.<\/p>\n\n<p>This workshop was not a special case. It became clear to me that even if there\nis a lot going on with OpenCensus, OpenTracing, and other instrumentation\nlibraries, there is still room for improvement.<\/p>\n\n<p>Instrumenting an application is not just a matter of adding <code>printf<\/code>\naround the execution of the code anymore. It is the way we write an\napplication capable of being debugged, one that speaks with the outside in an\nunderstandable way.<\/p>\n\n<p>The course has two different sections: theory and practice.<\/p>\n\n<p>The theory went well. I do not have a lot to say about it and, for me, it is where\nI am most comfortable because it looks like a long talk.<\/p>\n\n<p>The practical part was for sure a bit too long and I didn\u2019t have time to walk\nall the people through it, but the fact that all the solutions and their\npurpose are written down helped them to feel less lonely, and everyone can follow\nthe resolution even if they cannot do the exercise in practice.<\/p>\n\n<p>This usually happens because of different skill sets or trouble configuring\nthe environment.<\/p>\n\n<p><code>git<\/code> helped me a lot: every commit has a diff that I used to explain the\nsolution of the lessons. People that were not confident writing the solution in a\nparticular language could just <code>cherry-pick<\/code> the commit in the language they\ndidn\u2019t know.<\/p>\n\n<h2 id=\"collaboration\">Collaboration<\/h2>\n\n<p>The practical part was designed to be a collaboration between people. IMHO it\nhelps to feel less \u201cat school\u201d and more like a team, which is something we should\nfeel more comfortable with at work.<\/p>\n\n<p>I think it worked, but not that well. People were supporting and helping each\nother. But I probably need to cut the lessons in a different way. I think I will\nremove the <code>influxdb<\/code> lessons, injecting only what\nmatters for the course along the other lessons. Next time I will develop a\nnew lesson about how to parse the logs and push them to InfluxDB, for example\n(let me know if you would like to help me!).<\/p>\n\n<h2 id=\"feedback\">Feedback<\/h2>\n\n<p>I asked the attendees to fill out a survey before the end of the course in order to help me\nget their feeling. 
There is a lot to do and some of their feedback is part\nof this article. But in general, I am happy because I have all the material in\norder and this for me was just a first iteration. I hope to make it better, to\ngather more feedback from the open source community, and to run it again! So let me know if\nyou would like to have me on board!<\/p>\n\n<h2 id=\"next\">Next<\/h2>\n\n<p>As I said, instrumentation is hard and I am still hoping to get an easier solution\nacross languages. I tried OpenCensus but I didn\u2019t manage to have it running and I\nwas in a rush, so I used Jaeger.<\/p>\n\n<p>I will develop something about structured logging, as I said, for sure.<\/p>\n\n<p>I hope to get a lesson from some as-a-service provider like Honeycomb, for\nexample.<\/p>\n\n<h2 id=\"fun-fact\">Fun Fact<\/h2>\n<p>The youngest person in the room was a student in high school! Wow!<\/p>\n"},{"title":"Observability is for troubleshooting","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-observability-is-for-troubleshooting"}},"description":"The difference between monitoring and observability is the fact that observability is for troubleshooting. And you troubleshoot in any environment, not only in production. This article contains how I do observability in one of my applications in Go.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-02-28T08:08:27+00:00","published":"2019-02-28T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-observability-is-for-troubleshooting","content":"<p>Monitoring notifies you when something does not work. You get an alert, a slap in\nthe face based on the priority of the issue.  Observability is about\ntroubleshooting, debugging, \u201clooking around.\u201d You don\u2019t use observability\ntechniques only when something doesn\u2019t work.<\/p>\n\n<p>Mainly because you don\u2019t know where it happens, it can be anytime.\nYou observe during development, locally or in production, anytime.<\/p>\n\n<p>The ability to use the same observability tools and techniques such as tracing,\nlog analysis and metrics is a tremendous value. You get used to them day by day\nand not only under pressure, during an outage.\nA practical tip that I can give you when you are instrumenting an application\nis about interconnection. You need a way to connect logs with traces and with\nmetrics.<\/p>\n\n<p>There is nothing too complicated to understand. Every HTTP request has its own\ngenerated ID.<\/p>\n\n<p>This ID will become the trace ID, and it will be attached to all the logs\ngenerated by that request.\nOne of the applications I instrumented uses\n<a href=\"https:\/\/github.com\/opentracing\/opentracing-go\">opentracing\/opentracing-go<\/a> and\n<a href=\"https:\/\/github.com\/uber-go\/zap\">uber-go\/zap<\/a> as the logger. I use a middleware\nsimilar to the one provided by\n<a href=\"https:\/\/github.com\/opentracing-contrib\/go-stdlib\/blob\/master\/nethttp\/server.go\">opentracing-contrib\/go-stdlib<\/a>.<\/p>
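<p>For reference, this is a minimal sketch of what such a middleware can look like,\nassuming the plain opentracing-go API; the function name is mine and the real\ngo-stdlib version does quite a bit more:<\/p>\n\n<pre><code class=\"language-go\">\/\/ import \"net\/http\"\n\/\/ import opentracing \"github.com\/opentracing\/opentracing-go\"\n\/\/ import \"github.com\/opentracing\/opentracing-go\/ext\"\n\nfunc tracingMiddleware(next http.Handler) http.Handler {\n    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n        tracer := opentracing.GlobalTracer()\n        \/\/ Join a trace started by the client, if the request carries one.\n        parent, _ := tracer.Extract(opentracing.HTTPHeaders,\n            opentracing.HTTPHeadersCarrier(r.Header))\n        span := tracer.StartSpan(\"http.request\", ext.RPCServerOption(parent))\n        defer span.Finish()\n        \/\/ Make the span available to the handlers via the request context.\n        next.ServeHTTP(w, r.WithContext(\n            opentracing.ContextWithSpan(r.Context(), span)))\n    })\n}\n<\/code><\/pre>\n<p>Every handler wrapped by it can then retrieve the span with\n<code>opentracing.SpanFromContext(r.Context())<\/code>.<\/p>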
<p>Inside an HTTP handler, I configure the logger to add the <code>trace_id<\/code> for every\nlog:<\/p>\n\n<pre><code class=\"language-go\">\/\/ GetLogger is an application helper returning the configured *zap.Logger.\nlogger := GetLogger().With(zap.String(\"api.handler\", \"ping\"))\nif intTraceId := req.Context().Value(\"internal_trace_id\"); intTraceId != nil {\n    logger = logger.With(zap.String(\"trace_id\", intTraceId.(string)))\n}\n<\/code><\/pre>\n<p>In this way, from this point on, the <code>logger<\/code> will add the trace_id to every line of log.<\/p>\n\n<p>With this code <code>req.Context().Value(\"internal_trace_id\")<\/code> I am retrieving the\n\u201ctrace_id\u201d from the context. In Go every HTTP request has a context attached, and\nthis works because inside the middleware I set the trace_id in the context of the\nrequest and also as an HTTP header:<\/p>\n\n<pre><code class=\"language-go\">\/\/ This is a temporary fix until this issue will be addressed\n\/\/ https:\/\/github.com\/opentracing\/opentracing-go\/issues\/188\n\/\/ This works only with Zipkin.\nzipkinSpan, ok := sp.Context().(zipkin.SpanContext)\nif ok == true &amp;&amp; zipkinSpan.TraceID.Empty() == false {\n  w.Header().Add(\"X-Trace-ID\", zipkinSpan.TraceID.ToHex())\n  r = r.WithContext(context.WithValue(r.Context(), \"internal_trace_id\", zipkinSpan.TraceID.ToHex()))\n}\n<\/code><\/pre>\n<p>Having the <code>trace_id<\/code> exposed as a header is nice because I can ask and train\neveryone to just grab that parameter when they have issues. Or we can code the\nAPI consumer in a way that takes care of this value when something doesn\u2019t go\nas expected.<\/p>\n\n<p>All these connections are useful to build a context from different sources. This\nis the secret of happiness. Welcome to my Wonderland!<\/p>\n\n<p><img src=\"\/img\/alice-observability.jpg\" alt=\"\" \/><\/p>\n"},{"title":"From sequential to parallel with Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-parallelization-trick"}},"description":"From a sequence of actions to parallelization in Go. Using channels and wait groups from the sync package.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-02-21T08:08:27+00:00","published":"2019-02-21T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-parallelization-trick","content":"<p>Everything starts as a sequence of events. You have a bunch of things to do and\nyou are not sure how long they will take or how hard they will be to manage.<\/p>\n\n<p>As a pragmatic developer, you go over the list of things, and you make them one\nby one. 
The script runs, it works, and everyone is happy.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n    \"fmt\"\n    \"log\"\n    \"time\"\n)\n\nfunc main() {\n    list := []string{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"l\"}\n    for _, v := range list {\n        v, err := do(v)\n        if err != nil {\n            log.Printf(\"nop\")\n        }\n        fmt.Println(v)\n    }\n}\n\nfunc do(s string) (string, error) {\n    time.Sleep(1 * time.Second)\n    return fmt.Sprintf(\"%s-%d\", s, time.Now().UnixNano()), nil\n}\n<\/code><\/pre>\n\n<p>Let\u2019s execute it:<\/p>\n\n<pre><code>$ time go run c.go\na-1550742371537033061\nb-1550742372537419148\nc-1550742373537846015\nd-1550742374538086031\ne-1550742375538488129\nf-1550742376538746707\ng-1550742377539047837\nh-1550742378539540979\ni-1550742379539938404\nl-1550742380540339887\n\nreal    0m10.174s\nuser    0m0.149s\nsys     0m0.074s\n<\/code><\/pre>\n\n<p>Until something changes from the outside, the outside world is a terrible place.<\/p>\n\n<p><img src=\"https:\/\/media.giphy.com\/media\/124pc9nFq7ZScU\/giphy.gif\" alt=\"\" \/><\/p>\n\n<p>The list of things to do grows too much, and your program runs too slow to be\ncompetitive, so you start to think about parallelization.<\/p>\n\n<p>Luckily for you, every action doesn\u2019t depend on anything else, so you don\u2019t need\nto stop if one of them fails or, even worse, to do anything weird:\nyou skip it, and you log the failure.<\/p>\n\n<p>There is an easy way to migrate the code above to something that safely runs\nin parallel just using Go primitives like channels and\nWaitGroups from the sync package.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n    \"fmt\"\n    \"log\"\n    \"sync\"\n    \"time\"\n)\n\nfunc main() {\n    fmt.Println(\"Start\")\n    parallelization := 2\n    list := []string{\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"l\"}\n    c := make(chan string)\n\n    var wg sync.WaitGroup\n    wg.Add(parallelization)\n    for ii := 0; ii &lt; parallelization; ii++ {\n        go func(c chan string) {\n            for {\n                v, more := &lt;-c\n                if more == false {\n                    wg.Done()\n                    return\n                }\n\n                v, err := do(v)\n                if err != nil {\n                    log.Printf(\"nop\")\n                }\n                fmt.Println(v)\n            }\n        }(c)\n    }\n    for _, a := range list {\n        c &lt;- a\n    }\n    close(c)\n    wg.Wait()\n    fmt.Println(\"End\")\n}\n\nfunc do(s string) (string, error) {\n    time.Sleep(1 * time.Second)\n    return fmt.Sprintf(\"%s-%d\", s, time.Now().UnixNano()), nil\n}\n<\/code><\/pre>\n\n<p><code>parallelization<\/code> should be an external parameter that you can change to\nparallelize more or less. With a parallelization factor of 2 the benchmark looks\nlike:<\/p>\n\n<pre><code class=\"language-bash\">$ time go run c.go\nStart\na-1550742531701829912\nb-1550742531701820924\nd-1550742532702088077\nc-1550742532702180981\ne-1550742533702473002\nf-1550742533703389899\ng-1550742534702714251\nh-1550742534703981070\ni-1550742535702992582\nl-1550742535704308486\nEnd\n\nreal    0m5.269s\nuser    0m0.249s\nsys     0m0.078s\n<\/code><\/pre>\n\n<p>Almost half of the time. 
Let\u2019s try with 5.<\/p>\n\n<pre><code class=\"language-bash\">$ time go run c.go\nStart\ne-1550742633337320607\nb-1550742633337280491\nc-1550742633337474112\nd-1550742633337280481\na-1550742633337298154\nh-1550742634338002235\ni-1550742634338073772\nf-1550742634338033897\ng-1550742634338019639\nl-1550742634338231670\nEnd\n\nreal    0m2.145s\nuser    0m0.144s\nsys     0m0.058s\n<\/code><\/pre>\n\n<p>I wrote this article because I like how easy it was for this use case to run in\nparallel. Based on how complicated your <code>do<\/code> function is, you need to be more\ncareful.<\/p>\n\n<p>If your <code>do<\/code> function calls an external service it can fail, or it can rate\nlimit you because you are parallelizing too much. But these are all problems that\nyou can solve by increasing the number of safeguards in your code.<\/p>\n\n<p>Something I learned using this and calling AWS intensively to take snapshots is\nthat EC2 snapshots happen in the background on AWS, so if you have\nthousands of nodes and you call AWS it will rate limit you, or you won\u2019t have a\ngood picture of what happens on the AWS side in reality.<\/p>\n\n<p>A basic trick is to place a <code>batch delay<\/code> parameter that sleeps before every\nexecution:<\/p>\n\n<pre><code class=\"language-go\">v, more := &lt;-c\nif more == false {\n    wg.Done()\n    return\n}\n\n\/\/ Sleep here; batchDelay is the external parameter,\n\/\/ for example 100 * time.Millisecond\ntime.Sleep(batchDelay)\n\nv, err := do(v)\nif err != nil {\n    log.Printf(\"nop\")\n}\nfmt.Println(v)\n<\/code><\/pre>\n\n<p>This is a very crafty fix, but if you catch this problem like me, when everything\nis failing, this is a safe bullet you should try.<\/p>\n\n<p>Parallelization is fun, but in reality, it increases complexity. Go serves\nprimitives that are solid foundations, but it is on you to instrument your code\nwell enough to be confident about how it works.<\/p>\n\n<p>I will write the next chapter about this where I will use opencensus or\nopentracing to trace what is going on here!<\/p>\n"},{"title":"Short TTL vs Long TTL infrastructure resource","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/infra-as-code-short-long-ttl-resource"}},"description":"I called this framework \"short vs long ttl\". GitOps and Infrastructure as code are a hot topic today, where the infrastructure is more dynamic and YAML doesn't look like a great solution anymore. In this article I explain a framework I am trying to use to understand when a resource is good to be managed in the old way or not.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-02-14T08:08:27+00:00","published":"2019-02-14T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/infra-as-code-short-long-ttl-resource","content":"<blockquote class=\"tw-align-center twitter-tweet\"><p lang=\"en\" dir=\"ltr\">I am excited to listen\nto a lot of ideas and pains about infra as code and yaml. Everyone is more or\nless walking in the same direction. This is what I have in my mind atm. More\nwill come. 
Short TTL vs Long TTL resources <a href=\"https:\/\/t.co\/XRCOgbB3Rg\">https:\/\/t.co\/XRCOgbB3Rg<\/a><\/p>&mdash;\npilesOfAbstractions (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1095960644195680257?ref_src=twsrc%5Etfw\">February\n14, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Recently I gave a talk at the ConfigManagementCamp about \u201cInfrastructure as\ncode\u201d\n<a href=\"https:\/\/speakerdeck.com\/gianarb\/cfgmgmtcamp-infrastructure-as-code-should-contain-code\">(slides)<\/a>\nand I wrote an article about <a href=\"\/blog\/infrastructure-as-real-code\">infrastructure as {real}\ncode<\/a>.<\/p>\n\n<p>This post is a follow-up focused on how I identify YAML-friendly resources vs.\nsomething else.<\/p>\n\n<p>I don\u2019t hate YAML; I think it is a functional specification language, well\nsupported by a lot of different languages. It works, and I use it when I need to\nwrite parsable and human-friendly files.<\/p>\n\n<p><img src=\"https:\/\/media.giphy.com\/media\/1Mng0gXC5Tpcs\/giphy.gif\" alt=\"\" \/><\/p>\n\n<p>In infrastructure as code, resources mean almost everything: a subnet, an EC2 instance, a\nvirtual machine, a DNS record, or a pod.<\/p>\n\n<p>I reference a single unit you can describe as a <strong>resource<\/strong>. The name probably\ncomes from too many CloudFormation specifications that I wrote over these years.<\/p>\n\n<p><strong>Short TTL vs. Long TTL<\/strong> are two different categories that I use to identify\nthem. Resources can move between the two\ngroups during the evolution of your infrastructure.<\/p>\n\n<p><strong>Long TTL<\/strong> resources are the ones that don\u2019t change much. For example, an AWS\nVPC currently doesn\u2019t change. It gets deleted or replaced, but you cannot\nchange the cidr. A Route53 Hosted Zone doesn\u2019t change that often. I am more\nconfident about using specification languages and traditional tools like\nTerraform, CloudFormation or kubectl and YAML for these resources.<\/p>\n\n<p><strong>Short TTL<\/strong> resources change often: Kubernetes Deployments and StatefulSets,\nRoute53 DNS records in my case, or Autoscaling Groups. Managing the lifecycle of\nthese kinds of resources via YAML requires a lot of automation and file\nmanipulation that I don\u2019t think is safe to do. I much prefer to interact\nwith the API of my provider, e.g. AWS or Kubernetes, for them. To avoid programs\nthat parse and modify YAML or JSON to deploy a slightly different version of a\ntemplate, I prefer to manipulate actual code. It is what I do every day. I have\ntesting frameworks, libraries and a lot more patterns to use.<\/p>\n\n<p><img src=\"\/img\/shortlongttl.png\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>The location of a resource is dynamic; it can jump from one category to another\nbased on architectural decisions. One example I have is with AWS AutoScaling\nGroups. I like to use them to manage Kubernetes Nodes (workers). At the\nbeginning, when you need a k8s cluster to play with, I usually create one\nautoscaling group with n replicas of the node. The node, as its last command,\njoins the cluster via kubeadm. Easy as it sounds. In this case, the\nautoscaling group is one. It doesn\u2019t change that often. When your use case\nbecomes more realistic, you need more complicated topologies. You need pods to\ngo on different nodes with more RAM or more CPU, or at least you need to add labels\nor taints to your cluster to have pods far from or closer to others. This means\nthat you end up having more AutoScaling Groups with different configurations and\nusually they go away and get replaced very often, with varying versions of\nKubernetes and so on. This dynamicity brought as a side effect the request for a\nmore friendly UX for ops, in our case integrated with kubectl for example.\nThat\u2019s when we promoted AutoScaling Groups from a long TTL to a short TTL\nresource. We developed a K8S CRD to create autoscaling groups and so on.<\/p>
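<p>To make that last step concrete, this is a hypothetical sketch of what the Go\ntypes behind such a CRD could look like, assuming kubebuilder-style conventions;\nthe real CRD is not open source and the field names here are invented for\nillustration:<\/p>\n\n<pre><code class=\"language-go\">package v1alpha1\n\nimport (\n    corev1 \"k8s.io\/api\/core\/v1\"\n    metav1 \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"\n)\n\n\/\/ AutoscalingGroupSpec describes the desired state of an AWS AutoScaling\n\/\/ Group managed from inside the cluster.\ntype AutoscalingGroupSpec struct {\n    \/\/ Replicas is the desired number of Kubernetes nodes.\n    Replicas int32 `json:\"replicas\"`\n    \/\/ InstanceType is the EC2 instance type, e.g. \"m5.large\".\n    InstanceType string `json:\"instanceType\"`\n    \/\/ Labels are applied to every node that joins the cluster.\n    Labels map[string]string `json:\"labels,omitempty\"`\n    \/\/ Taints keep pods away from these nodes unless they tolerate them.\n    Taints []corev1.Taint `json:\"taints,omitempty\"`\n}\n\n\/\/ AutoscalingGroup is the resource ops interact with via kubectl.\ntype AutoscalingGroup struct {\n    metav1.TypeMeta   `json:\",inline\"`\n    metav1.ObjectMeta `json:\"metadata,omitempty\"`\n\n    Spec AutoscalingGroupSpec `json:\"spec\"`\n}\n<\/code><\/pre>\n<p>With something like this, creating or replacing a node group is a\n<code>kubectl apply<\/code> away, which is exactly the short TTL workflow described above.<\/p>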
<p>The missing part is the <strong>reconciliation<\/strong> between long TTL and short TTL. As\nyou can see, you end up having YAML or JSON in a repository for the long TTL ones\nand API requests for the short TTL ones. It means that you cannot tell what\u2019s the\nsituation of your short TTL resources by looking at your repository. You can see\nwhat you run via the kubernetes API, but that\u2019s not what I am looking for. I\nthink GitOps can fix the issue, but I will write more after more tests.<\/p>\n\n<p>I tried to make these concepts as clear as possible but let me know what you\nthink via twitter <a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a><\/p>\n"},{"title":"Extend Kubernetes via a Shared Informer","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubernetes-shared-informer"}},"description":"Kubernetes is designed to be extended. There are a lot of ways to do it, via Custom Resource Definitions for example. Kubernetes is an event-based architecture and you can use a primitive called Shared Informer to listen to the events triggered by k8s itself.","image":"https:\/\/gianarb.it\/img\/k8s.png","updated":"2019-02-07T08:08:27+00:00","published":"2019-02-07T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubernetes-shared-informer","content":"<p>Kubernetes runs a set of controllers to keep matching the current state of a\nresource with its desired state. It can be a Pod, a Service or whatever is\npossible to control via Kubernetes.\nK8S has <em>extendibility<\/em> as a core value, to empower operators and applications to\nexpand its set of capabilities. It is an event-based architecture where everything\nthat matters gets converted to an event that can trigger custom code.<\/p>\n\n<p>When I think about a problem I have that requires taking action when Kubernetes\ndoes something, my first target is one of the events that it triggers, for example:<\/p>\n\n<ul>\n  <li>New Pod Created<\/li>\n  <li>New Node Joined<\/li>\n  <li>Service Removed, and many, many more.<\/li>\n<\/ul>\n\n<p>To stay informed about when these events get triggered you can use a primitive\nexposed by Kubernetes and the\n<a href=\"https:\/\/github.com\/kubernetes\/client-go\">client-go<\/a> library called SharedInformer,\ninside the cache package. 
Let\u2019s see how it works in practice.<\/p>\n\n<p>First of all, as with every application that interacts with Kubernetes, you need to\nbuild a client:<\/p>\n\n<pre><code class=\"language-go\">\/\/ import \"log\"\n\/\/ import \"os\"\n\/\/ import \"k8s.io\/client-go\/kubernetes\"\n\/\/ import \"k8s.io\/client-go\/tools\/clientcmd\"\n\n\n\/\/ Read the kubernetes config file path from an environment variable\nkubeconfig := os.Getenv(\"KUBECONFIG\")\n\n\/\/ Create the client configuration\nconfig, err := clientcmd.BuildConfigFromFlags(\"\", kubeconfig)\nif err != nil {\n    log.Panic(err.Error())\n}\n\n\/\/ Create the client\nclientset, err := kubernetes.NewForConfig(config)\nif err != nil {\n    log.Panic(err.Error())\n}\n<\/code><\/pre>\n\n<p>As you can see, I am commenting the code almost line by line to give you a good\nunderstanding of what is going on. Now that you have the client, we can create\nthe SharedInformerFactory. A shared informer listens to a specific resource; the\nfactory helps you to create the one you need. For this example it looks up the Pod\nSharedInformer:<\/p>\n\n<pre><code class=\"language-go\">\/\/ import \"k8s.io\/client-go\/informers\"\n\/\/ import \"k8s.io\/client-go\/tools\/cache\"\n\/\/ import \"k8s.io\/apimachinery\/pkg\/util\/runtime\"\n\n\/\/ Create the shared informer factory and use the client to connect to\n\/\/ Kubernetes\nfactory := informers.NewSharedInformerFactory(clientset, 0)\n\n\/\/ Get the informer for the right resource, in this case a Pod\ninformer := factory.Core().V1().Pods().Informer()\n\n\/\/ Create a channel to stop the shared informer gracefully\nstopper := make(chan struct{})\ndefer close(stopper)\n\n\/\/ Kubernetes serves a utility to handle API crashes\ndefer runtime.HandleCrash()\n\n\/\/ This is the part where your custom code gets triggered based on the\n\/\/ event that the shared informer catches\ninformer.AddEventHandler(cache.ResourceEventHandlerFuncs{\n    \/\/ When a new pod gets created\n    AddFunc:    func(obj interface{}) { panic(\"not implemented\") },\n    \/\/ When a pod gets updated\n    UpdateFunc: func(interface{}, interface{}) { panic(\"not implemented\") },\n    \/\/ When a pod gets deleted\n    DeleteFunc: func(interface{}) { panic(\"not implemented\") },\n})\n\n\/\/ You need to start the informer; in my case, it runs in the background\ngo informer.Run(stopper)\n<\/code><\/pre>\n\n<p>Knowing about Shared Informers gives you the ability to extend Kubernetes\nquickly. As you can see, it is not a significant amount of code and the interfaces\nare pretty clear.<\/p>\n\n<h2 id=\"use-cases\">Use cases<\/h2>\n\n<p>I used them a lot to write dirty hacks but also to fill automation gaps in a system, for example:<\/p>\n\n<ol>\n  <li>We used to have a very annoying error during the creation of a Pod with a\npersistent volume. It was not a high rate error; a restart made everything\nwork as expected. The dirty hack is pretty clear: I automated the manual\nprocess of restarting the pod with that error using a Shared Informer just\nlike the one I showed you.<\/li>\n  <li>I am using AWS, and I would like to push some EC2 tags down as kubelet\nlabels. I use a shared informer, but this time to watch when a new node joins\nthe cluster. From the new node I can get its AWS instanceID (it is a label\nitself), and with the AWS API I can retrieve its tags to identify how to\nedit the node itself via the Kubernetes API. 
Everything is part of the <code>AddFunc<\/code>\nin the shared informer itself.<\/li>\n<\/ol>\n\n<h2 id=\"complete-example\">Complete Example<\/h2>\n<p>This example is a functioning Go program that logs when a new node that contains a\nparticular label joins the cluster:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n    \"fmt\"\n    \"log\"\n    \"os\"\n\n    corev1 \"k8s.io\/api\/core\/v1\"\n    \"k8s.io\/apimachinery\/pkg\/util\/runtime\"\n\n    \"k8s.io\/client-go\/informers\"\n    \"k8s.io\/client-go\/kubernetes\"\n    \"k8s.io\/client-go\/tools\/cache\"\n    \"k8s.io\/client-go\/tools\/clientcmd\"\n)\n\nconst (\n    \/\/ K8S_LABEL_AWS_REGION is the key name to retrieve the region from a\n    \/\/ Node that runs on AWS.\n    K8S_LABEL_AWS_REGION = \"failure-domain.beta.kubernetes.io\/region\"\n)\n\nfunc main() {\n    log.Print(\"Shared Informer app started\")\n    kubeconfig := os.Getenv(\"KUBECONFIG\")\n    config, err := clientcmd.BuildConfigFromFlags(\"\", kubeconfig)\n    if err != nil {\n        log.Panic(err.Error())\n    }\n    clientset, err := kubernetes.NewForConfig(config)\n    if err != nil {\n        log.Panic(err.Error())\n    }\n\n    factory := informers.NewSharedInformerFactory(clientset, 0)\n    informer := factory.Core().V1().Nodes().Informer()\n    stopper := make(chan struct{})\n    defer close(stopper)\n    defer runtime.HandleCrash()\n    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{\n        AddFunc: onAdd,\n    })\n    go informer.Run(stopper)\n    if !cache.WaitForCacheSync(stopper, informer.HasSynced) {\n        runtime.HandleError(fmt.Errorf(\"Timed out waiting for caches to sync\"))\n        return\n    }\n    &lt;-stopper\n}\n\n\/\/ onAdd is the function executed when the kubernetes informer notifies the\n\/\/ presence of a new kubernetes node in the cluster\nfunc onAdd(obj interface{}) {\n    \/\/ Cast the obj as node\n    node := obj.(*corev1.Node)\n    _, ok := node.GetLabels()[K8S_LABEL_AWS_REGION]\n    if ok {\n        fmt.Printf(\"It has the label!\")\n    }\n}\n<\/code><\/pre>\n"},{"title":"Serverless means extendibility","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/serverless-means-extendibility"}},"description":"Looking at the GitHub Actions design and connecting the docs I think I got why serverless is useful. It is a great mechanism to extend platforms and SaaS.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-01-22T08:08:27+00:00","published":"2019-01-22T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/serverless-means-extendibility","content":"<p>I wrote an article about a <a href=\"\/blog\/kubernetes-GitHub-action\">GitHub Action<\/a> I\nrecently created to deploy my code to kubernetes. Very nice.  Writing the action\nand the post, I realized what serverless is all about.  I wrote it in the\nincipit of the article, but I think this topic deserves its own dedicated post.\nServerless is not yet for web applications. I know some of you will probably\ndisagree, but this is my blog, and that\u2019s why I have one: to write whatever I\nlike!<\/p>\n\n<p><img src=\"\/img\/brave_dad.png\" alt=\"\" \/><\/p>\n\n<p>I used Lambda and API Gateway to distribute two PDFs I wrote about <a href=\"https:\/\/scaledocker.com\">\u201chow to scale\nDocker\u201d<\/a>, and it looks to me way more complicated than a Go\ndaemon. I did it that way because I got the free tier and because I like to try\nnew things.  
There are excellent applications written in that way, for example\n<a href=\"https:\/\/acloud.guru\/\">acloud.guru<\/a>, but I am probably not ready for that! My bad.<\/p>\n\n<p>Anyway, I know what I am ready for: we should use serverless to offer\nextendibility for our as-a-service platforms.<\/p>\n\n<p>Good for us, distributed systems and hipster applications are all based on\nevents, Kafka and so on. Plus now we have\n<a href=\"https:\/\/github.com\/opencontainers\/runc\">runC<\/a>,\n<a href=\"https:\/\/github.com\/moby\/buildkit\">buildkit<\/a> and a lot of the building blocks\nuseful to implement a solid serverless platform.<\/p>\n\n<p>It is not easy; at scale this is a complicated problem, but we are in a better\nsituation now, and it is a massive improvement from a product perspective:<\/p>\n\n<ol>\n  <li>Using containers, we can offer total isolation, and we can take a very\ncareful and self-defensive approach.<\/li>\n  <li>An API already provides extendibility but you still need to have your own server\nand run your application by yourself to enjoy it. With a serverless\napproach, it will be much easier for the customer to implement their workflow.<\/li>\n  <li>You can ask your customers to share their implementations, creating a vibrant\nand virtuous ecosystem.<\/li>\n<\/ol>\n\n<p>You can use a subset of the events that you write in Kafka as triggers for the\nfunctions, Vault to store secrets that will be injected inside the service and\nso on.<\/p>\n\n<p><img src=\"\/img\/heart.jpg\" alt=\"\" \/><\/p>\n\n<p>There is a lot more, but I am excited! Is somebody doing something like that? If\nso, let me know <a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>, I would like to chat!<\/p>\n"},{"title":"GitHub actions to deliver on kubernetes","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubernetes-github-action"}},"description":"GitHub recently released a new feature called GitHub Actions. They are a serverless approach to allow developers to run their own code based on what happens to a particular repository. They are amazing for continuous integration and delivery. I used them to deploy and validate kubernetes code.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-01-22T08:08:27+00:00","published":"2019-01-22T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubernetes-github-action","content":"<p>Recently GitHub released a new feature called Actions. To me, it looks like the\nbest implementation I can think of for serverless. I used AWS Lambda and API\nGateway for some basic APIs, and I wrote a prototype of an application capable of\nrunning functions using containers called\n<a href=\"https:\/\/github.com\/gianarb\/gourmet\">gourmet<\/a>. I don\u2019t buy the fact that it will\nmake my code easier to manage, at least not to write APIs or web applications.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\"><p lang=\"en\" dir=\"ltr\">I used the <a href=\"https:\/\/twitter.com\/hashtag\/GitHubActions?src=hash&amp;ref_src=twsrc%5Etfw\">#GitHubActions<\/a>\nto verify and deploy code to a <a href=\"https:\/\/twitter.com\/hashtag\/kubernetes?src=hash&amp;ref_src=twsrc%5Etfw\">#kubernetes<\/a>\ncluster <a href=\"https:\/\/t.co\/nfkjmYKPKs\">https:\/\/t.co\/nfkjmYKPKs<\/a> I am\nimpressed about how wonderful this feature is designed and implemented! 
<a href=\"https:\/\/twitter.com\/github?ref_src=twsrc%5Etfw\">@Github<\/a> you\n\ud83e\udd18!<\/p>&mdash; :w !sudo tee % (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1087640589838008321?ref_src=twsrc%5Etfw\">January\n22, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>That\u2019s why I like what GitHub did because they used serverless for what I think\nit is designed for, extensibility.<\/p>\n\n<p>GitHub Actions just like Lambda functions on AWS are a powerful and managed way\nto extend their product straightforwardly.<\/p>\n\n<p>With AWS Lambda you can hook your code to almost whatever event happens: EC2\ncreations, termination, route53 DNS record change and a lot more. You don\u2019t need\nto run a server, you load your code, and it just works.<\/p>\n\n<p>Jess Frazelle wrote a blog post about <a href=\"https:\/\/blog.jessfraz.com\/post\/the-life-of-a-github-action\/\">\u201cThe Life of a GitHub\nAction<\/a>, and I\ndecided to try something I had my mind since a couple of weeks but it required a\nCI server, and it was already too much for me.<\/p>\n\n<p>Time to time I like the idea to have a kubernetes cluster that I can use for the\ntesting purpose, so I created a private repository that it is not ready to be\nopen source because it is a mess with secrets inside and so on.<\/p>\n\n<p><img src=\"\/img\/sorry.jpg\" alt=\"\" \/><\/p>\n\n<p>In any case, to give you an idea, this is the project\u2019s folder:<\/p>\n\n<pre><code>\u251c\u2500\u2500 .github\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 actions\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 deploy\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 deploy\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 Dockerfile\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 dryrun\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 Dockerfile\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u2514\u2500\u2500 dryrun\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 main.workflow\n\u2514\u2500\u2500 kubernetes\n    \u251c\u2500\u2500 digitalocean.yaml\n    \u251c\u2500\u2500 external-dns.yaml\n    \u251c\u2500\u2500 micro.yaml\n    \u251c\u2500\u2500 namespaces.yaml\n    \u251c\u2500\u2500 nginx.yaml\n    \u2514\u2500\u2500 openvpn.yaml\n<\/code><\/pre>\n<p>The <code>kubernetes<\/code> directory contains all the things I would like to install in my\ncluster.  For every new push on this repository, I would like to check if it can\nbe applied to the kubernetes cluster with the command <code>kubectl apply -f\n.\/kubernetes --dryrun<\/code> and when the PR is merged the changes should get applied.<\/p>\n\n<p>So I created my workflow in <code>.github\/main.workflow<\/code>: ( I left some comment to\nmake it understandable)<\/p>\n\n<pre><code>## Workflow defines what we want to call a set of actions.\n\n## For every new push check if the changes can be applied to kubernetes ## using the action called: kubectl dryrun\nworkflow \"after a push check if they apply to kubernetes\" {\n  on = \"push\"\n  resolves = [\"kubectl dryrun\"]\n}\n\n## When a PR is merged trigger the action: kubectl deploy. 
## To apply the new code to master.\nworkflow \"on merge to master deploy on kubernetes\" {\n  on = \"pull_request\"\n  resolves = [\"kubectl deploy\"]\n}\n\n## This is the action that checks if the push can be applied to kubernetes\naction \"kubectl dryrun\" {\n  uses = \".\/.github\/actions\/dryrun\"\n  secrets = [\"KUBECONFIG\"]\n}\n\n## This is the action that applies the change to kubernetes\naction \"kubectl deploy\" {\n  uses = \".\/.github\/actions\/deploy\"\n  secrets = [\"KUBECONFIG\"]\n}\n<\/code><\/pre>\n<p>The <code>secrets<\/code> are an array of environment variables that you can use to set\nvalues from the outside. If your account has GitHub Actions enabled there is a\nnew tab inside the Settings of every repository called \u201cSecrets.\u201d<\/p>\n\n<p>You can set key-value pairs usable as you see in my workflow. For this example,\nI set <code>KUBECONFIG<\/code> to the base64 of a kubeconfig file that allows the GitHub\nAction to authenticate itself against my Kubernetes cluster.<\/p>\n\n<p>Both actions are similar; the first one is in the directory\n<code>.github\/actions\/dryrun<\/code>:<\/p>\n\n<pre><code>\u251c\u2500\u2500 .github\n \u00a0\u00a0 \u251c\u2500\u2500 actions\n \u00a0\u00a0  \u00a0\u00a0 \u2514\u2500\u2500 dryrun\n \u00a0\u00a0  \u00a0\u00a0     \u251c\u2500\u2500 Dockerfile\n \u00a0\u00a0  \u00a0\u00a0     \u2514\u2500\u2500 dryrun\n<\/code><\/pre>\n<p>It contains a Dockerfile:<\/p>\n\n<pre><code>FROM alpine:latest\n\n## The action name displayed by GitHub\nLABEL \"com.github.actions.name\"=\"kubectl dryrun\"\n## The description for the action\nLABEL \"com.github.actions.description\"=\"Check the kubernetes change to apply.\"\n## https:\/\/developer.github.com\/actions\/creating-github-actions\/creating-a-docker-container\/#supported-feather-icons\nLABEL \"com.github.actions.icon\"=\"check\"\n## The color of the action icon\nLABEL \"com.github.actions.color\"=\"blue\"\n\nRUN     apk add --no-cache \\\n        bash \\\n        ca-certificates \\\n        curl \\\n        git \\\n        jq\n\nRUN curl -L -o \/usr\/bin\/kubectl https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kubectl &amp;&amp; \\\n  chmod +x \/usr\/bin\/kubectl &amp;&amp; \\\n  kubectl version --client\n\nCOPY dryrun \/usr\/bin\/dryrun\nCMD [\"dryrun\"]\n<\/code><\/pre>\n\n<p>As you can see, to describe an action you need just a Dockerfile, and it works\nthe same as in Docker. The CMD <code>dryrun<\/code> is the bash script I copied here:<\/p>\n\n<pre><code class=\"language-bash\">#!\/bin\/bash\n\nmain(){\n    echo \"&gt;&gt;&gt;&gt; Action started\"\n    # Decode the secret passed by the action and paste the config in a file.\n    echo $KUBECONFIG | base64 -d &gt; .\/kubeconfig.yaml\n    echo \"&gt;&gt;&gt;&gt; kubeconfig created\"\n    # Check if the kubernetes directory has changed\n    diff=$(git diff --exit-code HEAD~1 HEAD .\/kubernetes)\n    if [ $? -eq 1 ]; then\n        echo \"&gt;&gt;&gt;&gt; Detected a change inside the kubernetes directory\"\n        # Apply the changes with --dry-run just to validate them\n        kubectl apply --kubeconfig .\/kubeconfig.yaml --dry-run -f .\/kubernetes\n    else\n        echo \"&gt;&gt;&gt;&gt; No changes detected inside the .\/kubernetes folder. 
Nothing to do.\"\n    fi\n}\n\nmain \"$@\"\n<\/code><\/pre>\n<p>The second action is almost the same as this one, the Dockerfile is THE same, so\nI am not posting it here, but the CMD looks like this:<\/p>\n\n<pre><code class=\"language-bash\">#!\/bin\/bash\n\nmain(){\n    # Decode the secret passed by the action and paste the config in a file.\n    echo $KUBECONFIG | base64 -d &gt; .\/kubeconfig.yaml\n     # Check if it is an event generated by the PR is a merge\n    merged=$(jq --raw-output .pull_request.merged \"$GITHUB_EVENT_PATH\")\n    # Retrieve the base branch for the PR because I would like to apply only PR merged to master\n    baseRef=$(jq --raw-output .pull_request.base.ref \"$GITHUB_EVENT_PATH\")\n\n    if [[ \"$merged\" == \"true\" ]] &amp;&amp; [[ \"$baseRef\" == \"master\" ]]; then\n        echo \"&gt;&gt;&gt;&gt; PR merged into master. Shipping to k8s!\"\n        kubectl apply --kubeconfig .\/kubeconfig.yaml -f .\/kubernetes\n    else\n        echo \"&gt;&gt;&gt;&gt; Nothing to do here!\"\n    fi\n}\n\nmain \"$@\"\n<\/code><\/pre>\n<p>That\u2019s everything, and I am thrilled!<\/p>\n\n<p><img src=\"\/img\/party.jpg\" alt=\"\" \/><\/p>\n\n<p>There is nothing more to say other than \u201cGitHub actions are amazing!\u201d. They look\nwell designed since day! The workflow file has a generator that even if I didn\u2019t\nuse it because I don\u2019t like colors, it seems amazing. The secrets allow us to do\nintegration with third-party services out of the box and you can use bash to do\nwhatever you like! Let me know what you use them for on\n<a href=\"https:\/\/twitter.com\/gianarb\">Twitter<\/a>.<\/p>\n"},{"title":"Why I speak at conferences","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/why-I-speak-at-conferences"}},"description":"I am over 50 talks! To celebrate this small, personal achievement I decided to write a post about why I speak at conferences even if I am not an evangelist or a proper DevRel.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-01-15T08:08:27+00:00","published":"2019-01-15T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/why-I-speak-at-conferences","content":"<p>I have recently counted the number of conferences listed <a href=\"\/conferences.html\">here on my\nblog<\/a>, and I realized that I am over 50 talks! I decided to\nwrite this post about why I have started and why I speak at conferences as\ncelebration.<\/p>\n\n<h2 id=\"community\">Community<\/h2>\n\n<p>Everything started when I learned how to develop. When I left university (it was\nnot the best accomplishment) I began to work in one of the Lab to build a CMS in\nPHP to manage their instruments. It was my first experience ever and the worst\nscenario I was a solo-man in that company. Kind of a dangerous first job to me I\nthat\u2019s why I call my experience a community-driven success. I wrote the\napplication three times:<\/p>\n\n<ol>\n  <li>Spaghetti code until I reached the limit for the codebase.<\/li>\n  <li>partially re-wrote it with Classes.<\/li>\n  <li>I jumped into IRC, and I discovered the community behind Zend Framework.<\/li>\n<\/ol>\n\n<p>They helped me to figure out how to write the proper version of it. I am in love\nwith all this open source people that were there to help a newbie like me, and I\ndiscovered the PHP Meetup in my city, Turin. 
Thanks to the people I met during an\nevent, I got my second job in a proper company, with other developers and servers\nin the basement!<\/p>\n\n<p>To summarize, the community gave me a lot since my first day: things to learn,\nnew friends and mentors, and a job. It is natural to give back everything I can.<\/p>\n\n<p>I gave my first talk at one of the local Meetups, about Vagrant, in 2013. I heard\nabout it on some IRC channel; it was not popular in Italy yet. So it was the perfect\nchance to give something back to all the people that helped me.<\/p>\n\n<p>Today, after a couple of years, technologies and motivations have changed, but this is\nhow I started. I like to be part of a community; that\u2019s why open source is so\nimportant to me. And I want to share what I do and to learn from other people.<\/p>\n\n<h2 id=\"italy-is-too-small\">Italy is too small<\/h2>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">we are\nprivileged, AND we are hustlers.  both true &lt;3<\/p>&mdash; Charity Majors\n(@mipsytipsy) <a href=\"https:\/\/twitter.com\/mipsytipsy\/status\/1082010778381635584?ref_src=twsrc%5Etfw\">January\n6, 2019<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>The side effect of being part of an open source community is <strong>globalization<\/strong>.\nYou have teammates from everywhere, and you discover great use cases every day.\nThe way I look at Computer Science requires new challenges and issues to solve,\nand I am not ready to take a nap solving too-easy problems. This means that I\nneed to take risks, I changed a lot of companies, and to do that, in some way\nyou need to share what you are capable of, you need to put your face out there.<\/p>\n\n<p>This is a paraphrase of a recent <a href=\"https:\/\/twitter.com\/rakyll\/status\/1084968619505680387\">tweet from\n@rakyll<\/a>, or at least how\nI interpreted it.<\/p>\n\n<p>Speaking at conferences is an excellent way to discover what other teams are\ndoing and to meet smart dudes that can turn out to be great mentors.<\/p>\n\n<h2 id=\"remote-work\">Remote Work<\/h2>\n\n<p>Two years ago I came back from Dublin, and I decided to try remote working. I\nenjoyed it, and it is hard for me to go back at the moment. Working at home\nmeans that I don\u2019t have a lot of social interaction. I am alone for about 8\nhours a day; you can fix it by moving to a coworking space, but conferences or meetups\nare a great way to get out! <strong>You don\u2019t need to go far away<\/strong>; that\u2019s why I run a\n<a href=\"https:\/\/www.meetup.com\/CNCF-Italy\/\">meetup in Turin about cloud computing<\/a>. Feel\nfree to let me know when you jump in if you would like to speak.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>I am not a developer advocate, and I don\u2019t work in the marketing or sales team;\nfor this reason you need to have support from your company. This is not easy: a\nlot of people think that if you have social skills and you are not similar\nto a robot, you are not a good coder.<\/p>\n\n<p>I always had managers that helped me to keep going, and I appreciate it. I work\nin a startup and, from time to time, it is InfluxData that needs technical people at the\nconferences that they sponsor, and luckily I love to speak about topics that are\nrelated to what I do at work or to what my company does, like monitoring,\nobservability, distributed systems, and clouds, so I am always happy to go!<\/p>\n\n<p>That\u2019s it! 
Let me know why you speak at conferences via\n<a href=\"https:\/\/twitter.com\/gianarb\">@twitter<\/a>, and I hope my experience will help more\npeople to share their experiences; you are great! I will probably follow up with\nother articles about how I approach a conference or an abstract, so let me know\nif you would like to read them as well!<\/p>\n\n<h2 id=\"not-really\">not really!<\/h2>\n\n<p>During the process of writing and digesting this post I realized how important\nconferences are for me as a person and how sad it is that not everyone can enjoy them\nas I do even if they would like to. There are plenty of reasons, but I am\nnot speaking about laziness here. I am speaking about underrepresented people,\npeople who cannot afford to pay, or people whose company does not support them.<\/p>\n\n<p>Luckily there are a lot of groups that we can support to mitigate this problem\nand to figure out new ways to bring more people in and to build a comfortable\nand friendly environment for everyone. This is a win for everyone!\n<a href=\"https:\/\/twitter.com\/ProjAlloy\">ProjAlloy<\/a> and\n<a href=\"https:\/\/www.womenwhocode.com\/about\">WomenWhoCode<\/a> accept donations, but even\nif you cannot give money, you can look around when you attend a conference,\nbe nice, and make everyone around you feel good!<\/p>\n\n<p><img src=\"\/img\/share.jpg\" width=\"20%\" style=\"display:initial;\" \/><\/p>\n\n<p>Best Regards,\nGianluca<\/p>\n\n<p>[1] if you don\u2019t know where to start you can pick a Meetup close to your place!\nThey are always looking for speakers, and a smaller community can help you\ngive your first talk! I usually try my new talks at a meetup too!<\/p>\n\n<p>[2] Be open during interviews: if you like to speak at conferences you need to\nconvince the new company that it is a valuable skill for them too!<\/p>\n"},{"title":"testcontainer library to programmatically provision integration tests in Go with containers","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/testcontainers-go"}},"description":"Provisioning the environment for integration tests is not easy. You need a flexible strategy to build isolated environments per test and to inject the data you need to verify your assertions. I have ported a popular library from Java to Golang called testcontainers. It wraps the Docker API in order to provide a simple test friendly library that you can use to run containers in test cases.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2019-01-08T08:08:27+00:00","published":"2019-01-08T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/testcontainers-go","content":"<p>There is a lot of information in the title, I know, but I am not good enough to\nmake it simpler.<\/p>\n\n<p>Back in the day, I tried to make some contributions to <a href=\"https:\/\/github.com\/openzipkin\/zipkin\">OpenZipkin<\/a>, an open\nsource tracing infrastructure in Java. 
I never really worked in that language, and apparently I failed, but it wasn\u2019t all a waste of time.<\/p>\n\n<p>OpenZipkin has an excellent integration test suite, and I liked the approach it took to write integration tests for all the backends it supports: MySQL, Elasticsearch, Cassandra.<\/p>\n\n<p>Provisioning the integration test environment is complicated even when you do it wrong:<\/p>\n\n<ol>\n  <li>Without per-test isolation.<\/li>\n  <li>Without a cleanup process.<\/li>\n  <li>Without putting the right effort into having isolated tests.<\/li>\n<\/ol>\n\n<p>If you try to do integration tests the right way, you will have a very hard time, but Zipkin uses a project called\n<a href=\"https:\/\/github.com\/testcontainers\/testcontainers-java\">testcontainers-java<\/a>. It is a library that wraps the Docker SDK to offer a friendly API for writing integration tests using containers.<\/p>\n\n<h2 id=\"why-containers\">Why containers<\/h2>\n\n<p>In 2019 everyone knows the answer: containers are great for integration testing because they are a lightweight and flexible technology, and Docker provides the architecture that simplifies how you can turn them on and off.<\/p>\n\n<p>You can spin up a bunch of containers for every integration test; they will be fresh and new, and you can terminate them at the end of the test. This increases isolation a lot, and it makes your tests more stable and easy to reproduce.<\/p>\n\n<h2 id=\"golang\">Golang<\/h2>\n\n<p>I develop in Go every day and I loved the approach, so I decided to port that library to Golang, and it eventually got moved to the\n<a href=\"https:\/\/github.com\/testcontainers\">testcontainers<\/a> GitHub organization under the repository <a href=\"https:\/\/github.com\/testcontainers\/testcontainers-go\">testcontainers\/testcontainers-go<\/a>.<\/p>\n\n<p>There is a lot to do, but I think at this point the API is stable and we have everything we need to use it. All the rest will be driven by you asking for new features, or by contributors porting more things from the Java project.<\/p>\n\n<p>This is our \u201cHello World.\u201d<\/p>\n\n<pre><code class=\"language-golang\">package main\n\nimport (\n    \"context\"\n    \"fmt\"\n    \"net\/http\"\n    \"testing\"\n\n    testcontainers \"github.com\/testcontainers\/testcontainers-go\"\n)\n\n\/\/ TestNginxLatestReturn verifies that a request to root returns 200 as status\n\/\/ code\nfunc TestNginxLatestReturn(t *testing.T) {\n    ctx := context.Background()\n    \/\/ Request an nginx container that exposes port 80\n    req := testcontainers.ContainerRequest{\n        Image:        \"nginx\",\n        ExposedPorts: []string{\"80\/tcp\"},\n    }\n    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{\n        ContainerRequest: req,\n        Started:          true,\n    })\n    if err != nil {\n        t.Fatal(err)\n    }\n    \/\/ At the end of the test remove the container\n    defer nginxC.Terminate(ctx)\n    \/\/ Retrieve the container IP\n    ip, err := nginxC.Host(ctx)\n    if err != nil {\n        t.Error(err)\n    }\n    \/\/ Retrieve the port mapped to port 80\n    port, err := nginxC.MappedPort(ctx, \"80\")\n    if err != nil {\n        t.Error(err)\n    }\n    resp, err := http.Get(fmt.Sprintf(\"http:\/\/%s:%s\", ip, port.Port()))\n    if err != nil {\n        t.Fatal(err)\n    }\n    if resp.StatusCode != http.StatusOK {\n        t.Errorf(\"Expected status code %d. Got %d.\", http.StatusOK, resp.StatusCode)\n    }\n}\n<\/code><\/pre>\n\n
Got %d.\", http.StatusOK, resp.StatusCode)\n    }\n}\n<\/code><\/pre>\n\n<p>This is a straightforward test, but you can imagine a lot of other use cases. Let\u2019s say that you need to test how your <code>application A<\/code> interact with an <code>application B<\/code> that\ndepends on Redis. You can programmatically build the environment you need in the tests:<\/p>\n\n<pre><code>\/\/ You spin up the Redis container\nreq := testcontainers.ContainerRequest{\n    Image:        \"redis\",\n    ExposedPorts: []string{\"6379\/tcp\"},\n}\nredisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{\n    ContainerRequest: req,\n    Started:          true,\n})\nif err != nil {\n    t.Error(err)\n}\ndefer redisC.Terminate(ctx)\nip, err := redisC.Host(ctx)\nif err != nil {\n    t.Error(err)\n}\nredisPort, err := redisC.MappedPort(ctx, \"6479\/tcp\")\nif err != nil {\n    t.Error(err)\n}\n\n\/\/ Spin up Application B\nappB, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{\n    ContainerRequest: req,\n    Started:          true,\n    Env: map[string]string{\n        \"REDIS_HOST\": fmt.Sprintf(\"http:\/\/%s:%s\", ip, redisPort.Port()),\n    },\n})\nif err != nil {\n    t.Error(err)\n}\nipB, err := redisC.Host(ctx)\nif err != nil {\n    t.Error(err)\n}\nportB, err := redisC.MappedPort(ctx, \"8081\/tcp\")\nif err != nil {\n    t.Error(err)\n}\n\ndefer appB.Terminate(ctx)\ndefer redis.Terminate(ctx)\n\n\/\/ Now you can use the go function from your application A that interact with\n\/\/ application B\nbclient := appA.NewServiceBClient(ipB, portB)\ncontent, err := bclient.GetKey(\"my-key\")\n\n\/\/ Check what you need to check\n<\/code><\/pre>\n\n<h2 id=\"programmable-environment-is-the-key\">Programmable environment is the key<\/h2>\n\n<p>I wrote about my relationship with <a href=\"\/blog\/infrastructure-as-real-code\">infrastructure as\ncode<\/a> in a previous article but once again\nthe fact that you can programmatically build your infrastructure\nusing real code is the key for all this flexibility.<\/p>\n\n<p>As plus for integration tests, you can build the environment you need from inside the test case itself, this ability provides significant control over it.<\/p>\n\n<p>If you need to worm up etcd with some data, you spin up the etcd container and\nyou push your data using the traditional Go <a href=\"https:\/\/github.com\/etcd-io\/etcd\/tree\/master\/client\">etcd client<\/a>:<\/p>\n\n<pre><code>\/\/ Spin up Etcd\nreq := testcontainers.ContainerRequest{\n    Image:        \"quay.io\/coreos\/etcd:latest\",\n    ExposedPorts: []string{\"2379\/tcp\"},\n}\netcdC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{\n    ContainerRequest: req,\n    Started:          true,\n})\nif err != nil {\n    t.Error(err)\n}\ndefer etcdC.Terminate(ctx)\nip, err := etcdC.Host(ctx)\nif err != nil {\n    t.Error(err)\n}\netcdPort, err := redisC.MappedPort(ctx, \"2379\/tcp\")\nif err != nil {\n    t.Error(err)\n}\n\n\/\/ Configure the etcd client\ncfg := client.Config{\n    Endpoints:               []string{\"http:\/\/\" + ip + \":\" + etcdPort},\n    Transport:               client.DefaultTransport,\n    \/\/ set timeout per request to fail fast when the target endpoint is unavailable\n    HeaderTimeoutPerRequest: time.Second,\n}\nc, err := client.New(cfg)\nif err != nil {\n    log.Fatal(err)\n}\nkapi := client.NewKeysAPI(c)\n\n\/\/ Set the key foo\nresp, err := kapi.Set(context.Background(), \"\/foo\", \"bar\", 
<p>I wrote this article because, after a few weeks of coding and revisions, I have finally tagged\n<a href=\"https:\/\/github.com\/testcontainers\/testcontainers-go\/releases\/tag\/0.0.1\"><code>v0.0.1<\/code><\/a>\nand the library is ready to be tried. We need feedback and feature requests to prioritize the work to do. So feel free to try it and to open GitHub\n<a href=\"https:\/\/github.com\/testcontainers\/testcontainer-go\/issues\">issues<\/a>.<\/p>\n"},{"title":"Infrastructure as (real) code","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/infrastructure-as-real-code"}},"description":"Infrastructure as code today is wrong. Tools like Chef, Helm, Salt, Ansible use a template engine to make YAML or JSON smarter, but comparing this solution with a proper coding language you are always missing something. GitOps forces you to stick your infrastructure code in a git repository, and this is good. But infrastructure as code is way more.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-12-31T08:08:27+00:00","published":"2018-12-31T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/infrastructure-as-real-code","content":"<p>I got different signals from the internet around the topic of infrastructure as code. I have worked with a lot of configuration management tools: Chef, Ansible, Salt. All of them are good and bad in almost the same way; for me it is mainly a boring syntax switch between them. That\u2019s one of the reasons I have a repulsion for these kinds of tools. This year at InfluxData we moved to Kubernetes, and I had the chance to see how a migration like that works, and the unique privilege of working with my colleagues to design what the end result looks like, even if it is a never-ending work in progress based on the feedback that we get from ourselves and other teams. So I think at this point I can try to explain why I think infrastructure as code today doesn\u2019t work.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">I\u2019m\nstarting to think the industry didn\u2019t get the point of \u201cinfrastructure as code\u201d.\nThat people believe codified infrastructure is checking YAMLs into a git repo is\ntroubling.<\/p>&mdash; Dan Woods (@danveloper) <a href=\"https:\/\/twitter.com\/danveloper\/status\/1078870433246662656?ref_src=twsrc%5Etfw\">December\n29, 2018<\/a><\/blockquote>\n\n<p>Configuration management tools are not entirely useless, but using one is like <a href=\"https:\/\/sizovs.net\/2018\/12\/17\/stop-learning-frameworks\/\">learning a\nnew framework<\/a>: there is always something good to learn, but it is just a framework. If you pick the coolest Javascript one, you will probably get a well-paid job in a startup with candies and a flexible workplace, but I am always more interested in learning the underlying architecture and patterns. The reconciliation loop that ReactJS built to interact with the DOM is pretty nice, and so is the one that Kubernetes has to manage all the resources. 
Architecture and design patterns are far more useful than the syntactic sugar you can get from the framework itself, even more when the \u201csugar\u201d looks like this:<\/p>\n\n<pre><code class=\"language-yaml\">- name: \"(Install: All OSs) Install NGINX Open Source Perl Module\"\n  package:\n    name: nginx-module-perl\n    state: present\n  when: nginx_type == \"opensource\"\n- name: \"(Install: All OSs) Install NGINX Plus Perl Module\"\n  package:\n    name: nginx-plus-module-perl\n    state: present\n  when: nginx_type == \"plus\"\n- name: \"(Setup: All NGINX) Load NGINX Perl Module\"\n  lineinfile:\n    path: \/etc\/nginx\/nginx.conf\n    insertbefore: BOF\n    line: load_module modules\/ngx_http_perl.so;\n  notify: \"(Handler: All OSs) Reload NGINX\"\n<\/code><\/pre>\n\n<p>The above code is an Ansible script that I took at random from the <a href=\"https:\/\/github.com\/nginxinc\/ansible-role-nginx\/blob\/master\/tasks\/modules\/install-perl.yml\">nginx\nrole<\/a>.<\/p>\n\n<pre><code class=\"language-yaml\">apiVersion: extensions\/v1beta1\nkind: Deployment\nmetadata:\n  name: {{ template \"drone.fullname\" . }}-agent\n  labels:\n    app: {{ template \"drone.name\" . }}\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version }}\"\n    release: \"{{ .Release.Name }}\"\n    heritage: \"{{ .Release.Service }}\"\n    component: agent\nspec:\n  replicas: {{ .Values.agent.replicas }}\n  template:\n    metadata:\n      annotations:\n        checksum\/secrets: {{ include (print $.Template.BasePath \"\/secrets.yaml\") . | sha256sum }}\n{{- if .Values.agent.annotations }}\n{{ toYaml .Values.agent.annotations | indent 8 }}\n{{- end }}\n      labels:\n        app: {{ template \"drone.name\" . }}\n        release: \"{{ .Release.Name }}\"\n        component: agent\n<\/code><\/pre>\n\n<p>This is a Helm chart I took from the <a href=\"https:\/\/github.com\/helm\/charts\/blob\/master\/stable\/drone\/templates\/deployment-agent.yaml\">official GitHub\nrepository<\/a>.<\/p>\n\n<p>To be clear, when I imagine a sweet dessert full of sugar, it is way different from what I have pasted above.<\/p>\n\n<p>Both of them work with a template engine that is capable of rendering a template that looks like YAML. I will never buy that infrastructure as code doesn\u2019t use real code but a serialization language.<\/p>\n\n<p>If you ask why YAML or JSON or HCL, this is the set of reasons that you will hear:<\/p>\n\n<ul>\n  <li>The learning curve of YAML, JSON, HCL is way more friendly than that of a proper language like Go, Javascript, PHP or whatever.<\/li>\n  <li>You don\u2019t have all the utilities that a language provides but only what the template engine exposes. This should help you and your team avoid terrible mistakes.<\/li>\n<\/ul>\n\n<p>These concerns were reasonable at the beginning, when the DevOps culture started, but now everyone has a good sense of how to code. 
We do code review, and we have a lot more experience around patterns and APIs to handle infrastructure provisioning.<\/p>\n\n<ol>\n  <li>If you know Kubernetes, it has powerful APIs that you can leverage to write automation code; the same goes for cloud providers like AWS, GCP or OpenStack.<\/li>\n  <li>Reconciliation loops, informers, workqueues, controllers and CRDs are concepts from Kubernetes that you can reuse.<\/li>\n  <li>I wrote about <a href=\"https:\/\/gianarb.it\/blog\/reactive-planning-is-a-cloud-native-pattern\">reactive\nplanning<\/a>\nand its application in the cloud.<\/li>\n<\/ol>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">if people refuse to learn things, fire them.<br \/>if your management won&#39;t fire people for not pulling their weight, quit.<br \/><br \/>ENGINEERS: we live in a golden age of opportunity.  please use it while it lasts. <a href=\"https:\/\/t.co\/OdB24UNl9X\">https:\/\/t.co\/OdB24UNl9X<\/a><\/p>&mdash; Charity Majors (@mipsytipsy) <a href=\"https:\/\/twitter.com\/mipsytipsy\/status\/1078799382009470979?ref_src=twsrc%5Etfw\">December 28, 2018<\/a><\/blockquote>\n\n<p>All the concerns I raised in favor of <code>YAML, JSON vs. code<\/code> come down to the risk of writing bad code, but I think there is no way to \u201cremove bad code.\u201d Even code that looks good today will look bad tomorrow. Finding a way to mitigate the risk is admirable, but I don\u2019t think YAML is the right solution; a code architecture, the right patterns, testing, documentation and code review are the way to go.<\/p>\n\n<p>Today there are people with the right skills to write good code even around infrastructure, and if you use real code you will have:<\/p>\n\n<ul>\n  <li>A richer set of libraries and tools, based on the language that you pick.<\/li>\n  <li>Unit and integration test frameworks.<\/li>\n  <li>Compiling or interpreting an actual language will highlight more syntax errors than any template engine.<\/li>\n  <li>Code is way more fun!<\/li>\n  <li>You can import your code, and you don\u2019t need to do tricky things to join Kubernetes templates together.<\/li>\n  <li>You can instantiate new objects and apply transformations to them, reusing the code that describes your resources (AWS autoscaling group, Kubernetes ingress or whatever).<\/li>\n<\/ul>\n\n<p>Let\u2019s apply this discussion to a real-world situation, with Kubernetes used not via YAML but with the Go structs provided by\n<a href=\"https:\/\/github.com\/kubernetes\/client-go\/tree\/master\/kubernetes\/typed\/core\/v1\">kubernetes\/client-go<\/a>.<\/p>\n\n<pre><code class=\"language-yaml\">apiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: micro\n  namespace: micro\n  labels:\n    app: micro\n    component: micro\nspec:\n  replicas: 12\n  selector:\n    matchLabels:\n      app: micro\n  template:\n    metadata:\n      labels:\n        app: micro\n    spec:\n      containers:\n      - name: microapp\n        image: gianarb\/micro\n        ports:\n        - containerPort: 8080\n        env:\n        - name: SLACK_TOKEN\n          valueFrom:\n            secretKeyRef:\n              name: slack\n              key: token\n        - name: SLACK_USERNAME\n          value: \"myuser\"\n        resources:\n          limits:\n            memory: 128Mi\n          requests:\n            memory: 100Mi\n<\/code><\/pre>\n\n<p>This YAML translated to Go looks like this:<\/p>\n\n
<pre><code class=\"language-golang\">func newMicroDeployment() *appsv1.Deployment {\n    return &amp;appsv1.Deployment{\n        TypeMeta: metav1.TypeMeta{\n            Kind:       \"Deployment\",\n            APIVersion: \"apps\/v1\",\n        },\n        ObjectMeta: metav1.ObjectMeta{\n            Name:      \"micro\",\n            Namespace: \"micro\",\n            Labels: map[string]string{\n                \"app\":       \"micro\",\n                \"component\": \"micro\",\n            },\n        },\n        Spec: appsv1.DeploymentSpec{\n            Replicas: pointer.Int32Ptr(12),\n            Selector: &amp;metav1.LabelSelector{\n                MatchLabels: map[string]string{\n                    \"app\": \"micro\",\n                },\n            },\n            Template: corev1.PodTemplateSpec{\n                ObjectMeta: metav1.ObjectMeta{\n                    Labels: map[string]string{\n                        \"app\": \"micro\",\n                    },\n                },\n                Spec: corev1.PodSpec{\n                    Containers: []corev1.Container{\n                        {\n                            Name:  \"microapp\",\n                            Image: \"gianarb\/micro\",\n                            Ports: []corev1.ContainerPort{\n                                {\n                                    ContainerPort: 8080,\n                                },\n                            },\n                            Env: []corev1.EnvVar{\n                                {\n                                    Name: \"SLACK_TOKEN\",\n                                    ValueFrom: &amp;corev1.EnvVarSource{\n                                        SecretKeyRef: &amp;corev1.SecretKeySelector{\n                                            LocalObjectReference: corev1.LocalObjectReference{\n                                                Name: \"slack\",\n                                            },\n                                            Key: \"token\",\n                                        },\n                                    },\n                                },\n                                {\n                                    Name:  \"SLACK_USERNAME\",\n                                    Value: \"myuser\",\n                                },\n                            },\n                        },\n                    },\n                },\n            },\n        },\n    }\n}\n<\/code><\/pre>\n\n<p>You can make the function more flexible by passing variables, like the number of replicas, or you can write transformation functions that look like <code>WithDifferentMemoryLimit<\/code> to apply transformations to your <code>runtime.Object<\/code>.<\/p>\n\n<pre><code class=\"language-golang\">deployment := newMicroDeployment()\n\n\/\/ You can transform them with utils like:\nWithDifferentMemoryLimit(\"200Mi\", deployment)\n<\/code><\/pre>\n\n
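<p>The post doesn\u2019t show how <code>WithDifferentMemoryLimit<\/code> is implemented, so here is a minimal sketch of how such a transformation function could look, assuming it walks every container in the deployment and swaps in the new memory limit:<\/p>\n\n<pre><code class=\"language-golang\">\/\/ WithDifferentMemoryLimit is a hypothetical transformation function: it\n\/\/ parses the given quantity and sets it as the memory limit on every\n\/\/ container in the deployment.\nfunc WithDifferentMemoryLimit(limit string, d *appsv1.Deployment) {\n    q := resource.MustParse(limit) \/\/ k8s.io\/apimachinery\/pkg\/api\/resource\n    for i := range d.Spec.Template.Spec.Containers {\n        c := &amp;d.Spec.Template.Spec.Containers[i]\n        if c.Resources.Limits == nil {\n            c.Resources.Limits = corev1.ResourceList{}\n        }\n        c.Resources.Limits[corev1.ResourceMemory] = q\n    }\n}\n<\/code><\/pre>\n\n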
<p>If you play well with Go packages, and if you structure your code, you can have something like:<\/p>\n\n<pre><code class=\"language-golang\">apps := []runtime.Object{}\nservice := micro.NewKubernetesService()\ndeployment := micro.NewDeployment()\napps = append(apps, service)\napps = append(apps, deployment)\n\/\/ Deploy via kubernetes api\n<\/code><\/pre>\n<p>I mean, you have the code now! So you can make all the mistakes you usually make during your daily job!<\/p>\n\n<p class=\"small\">Hero image via <a href=\"https:\/\/pixabay.com\/en\/fractal-complexity-geometry-1758543\/\">Pixabay<\/a><\/p>\n"},{"title":"You need a high cardinality database","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/high-cardinality-database"}},"description":"Monitoring and observability in a dynamic environment on Cloud or Kubernetes is a new challenge we are facing, and I think the tool that plays a big role here is a high cardinality database.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-11-28T08:08:27+00:00","published":"2018-11-28T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/high-cardinality-database","content":"<p>In order to understand how an application performs you need data. Logs, events, metrics, traces.<\/p>\n\n<p>Observability and monitoring are expensive because you need to retrieve all this data across your system. An architecture these days is not a static rock where nothing happens and everything stays the same. You don\u2019t have your 10 VPCs, with always the same hostnames that you can filter for.<\/p>\n\n<p>Today you are on cloud; your instances are going up and down based on your load, and it is easier for you to replace an EC2 instance than to troubleshoot a failure.<\/p>\n\n<p>Containers wrap your application and they make it easy to deploy; as a side effect you release more often, which means more data.<\/p>\n\n<p>But the data is useless if you can not get anything good out of it, so it can be your silver bullet or a big pain. The difference is all made by your ability to use it to answer your questions, or by your team\u2019s ability to aggregate it in order to build automation with it.<\/p>\n\n<p>To do all of this you need to manage high cardinality. This is a term that sales teams in tech are scared of, because nobody will ever sell an infinitely high cardinality database; everything has a limit, and the real solution is not a product itself but more like a mindset developers should have.<\/p>\n\n<ul>\n  <li>You need to store the raw data for just the right time; forever is not an option.<\/li>\n  <li>You need to give access to this data across the company in order to build better aggregations. Build engineers will probably need data not just from the CI pipeline but also from your VCS. SREs, to understand how a code change behaves in prod, need metrics from servers but also from the CI. 
Spread the knowledge.<\/li>\n<\/ul>\n\n<p>The technologies that give you the ability to interact with a big set of unstructured data should support a high write throughput and smart indexes that will allow your query engine to look up what you need fast enough!<\/p>\n\n<p>So that\u2019s what I have in mind when I think about a database that can support monitoring data.<\/p>\n\n<p>I am not selling anything, mainly because I think a final solution doesn\u2019t exist yet. I can not really tell you what to buy, but you should look around at other companies at your same scale, because everyone has this problem:<\/p>\n\n<ul>\n  <li>Facebook has Scuba.<\/li>\n  <li>A lot of people use Cassandra and they look happy, at least with its writing capabilities.<\/li>\n  <li>There are new time series databases released on a daily basis.<\/li>\n  <li>At InfluxData we obviously use InfluxDB for this purpose.<\/li>\n<\/ul>\n\n<p>The general idea here is that the goal should be to group data that is now in different sources: NewRelic, InfluxDB, ElasticSearch, Papertrail, in the same place, because it is rare to get the answer to your question just looking at logs or metrics; you need an aggregation or a sample of different data.<\/p>\n\n<p>This will bring the debugging and troubleshooting capabilities of your team to the next level, and listen to me: if you are working with a microservices architecture or with a highly distributed environment, you need help from everything!<\/p>\n"},{"title":"Reactive planning is a cloud native pattern","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/reactive-planning-is-a-cloud-native-pattern"}},"description":"I discovered how a reactive plan works recently, during a major refactoring of a custom orchestrator that we write at InfluxData to serve our SaaS offering. In this article I explain why I think reactive planning is perfect for building cloud native applications like container orchestrators and provisioning tools.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-11-28T08:08:27+00:00","published":"2018-11-28T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/reactive-planning-is-a-cloud-native-pattern","content":"<p>Probably this title sounds a bit weird to anyone who already knows what reactive planning is and how far it can look from all the cloud-native and distributed system hipster movement, but recently one of my colleagues <a href=\"https:\/\/twitter.com\/goller\">Chris\nGoller<\/a> pushed this pattern into one of the projects that we have at <a href=\"https:\/\/influxdata.com\">InfluxData<\/a> and I find it glorious!<\/p>\n\n<p>\u201cIn artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context.\u201d\n(<a href=\"https:\/\/en.wikipedia.org\/wiki\/Reactive_planning\">Wikipedia<\/a>)<\/p>\n\n<p>The Wikipedia definition of reactive planning, as you can see, is perfect for handling a system where the current status can change very frequently based on external and unpredictable events.<\/p>\n\n<p>This is a perfect approach for provisioning\/orchestration tools like Mesos, Cloud Formation, Kubernetes, Swarm, Terraform. 
Some of them work like this already.<\/p>\n\n<p>The general idea is that before any action you need a plan, because for these tools an action means cloud interaction, spinning up resources that cost money. You need to be proactive, avoiding useless executions.<\/p>\n\n<p>A plan is made of a series of steps, and every step can return other steps if it needs to. The plan is complete when there are no steps anymore. The plan gets executed at least twice; the second time it should return zero steps, because the first attempt built everything you need, and this is the signal that determines its conclusion. If it keeps returning steps, it means that there is something to do and it tries again.<\/p>\n\n<p>Let\u2019s start with an example. Think about what Cloud Formation does. You can declare a set of resources, and before taking action it needs to understand what to do. It makes a plan by checking the current state of the system. This first part makes the flow idempotent and solid, because you always start from the current state of the system. It doesn\u2019t matter if it changes over time because somebody removed one of the resources: if something doesn\u2019t exist, it creates or modifies it. Very solid.<\/p>\n\n<p>Every single step is very small. Let\u2019s take another example, like creating a pod in Kubernetes. When you create a pod there are a lot of actions to do:<\/p>\n\n<ul>\n  <li>Validation<\/li>\n  <li>Generate the pod id, the pod name<\/li>\n  <li>Register the pod with the DNS<\/li>\n  <li>Store it to etcd<\/li>\n  <li>Reach out to CNI to configure the network<\/li>\n  <li>Reach out to docker, containerd or whatever you use to get the container<\/li>\n  <li>Maybe reach out to AWS to create a persistent volume<\/li>\n  <li>Attach the PV<\/li>\n<\/ul>\n\n<p>If you try to design all these interactions in a single \u201ccontroller\u201d you will end up with a lot of if\/else, error handling and so on, mainly because, as you can see, almost every step interacts over the network with something: database, DNS, CNI, docker and so on. So it can fail, and it needs circuit breaking, retry policies and much more complexity.<\/p>\n\n<p>It is a lot better to design the code where every point is a small step: if the step that reaches docker fails, it can return itself as a \u201cretry\u201d, or it can return other steps to abort everything and clean up. You will end up with small reusable (or not that reusable) steps.<\/p>\n\n<p>All the steps are combined within a plan, the \u201cPodCreation\u201d plan. There is a scheduler that takes and executes every step in the plan recursively.<\/p>\n\n<blockquote>\n  <p>This freedom allows you to use an incremental approach<\/p>\n<\/blockquote>\n\n<p>The scheduler first calls a create method on the plan; the create method checks what to do based on the current state of the system, and it is the responsibility of this function to return no steps when there is nothing to do.<\/p>\n\n
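<p>To make the pattern concrete, here is a minimal sketch of how the plan, the steps and the scheduler could look in Go. The <code>Step<\/code> and <code>Plan<\/code> interfaces and the <code>Execute<\/code> function are hypothetical names, not the real InfluxData code:<\/p>\n\n<pre><code class=\"language-golang\">package plan\n\nimport \"context\"\n\n\/\/ Step is a single small action. It returns follow-up steps: zero steps\n\/\/ means it is done, returning itself means \"retry\".\ntype Step interface {\n    Run(ctx context.Context) ([]Step, error)\n}\n\n\/\/ Plan produces the steps needed to converge the system, based on its\n\/\/ current state. Returning no steps means there is nothing to do.\ntype Plan interface {\n    Create(ctx context.Context) ([]Step, error)\n}\n\n\/\/ Execute is the scheduler: it keeps asking the plan for steps and runs\n\/\/ them until Create returns zero steps, the signal that the plan is done.\nfunc Execute(ctx context.Context, p Plan) error {\n    for {\n        steps, err := p.Create(ctx)\n        if err != nil {\n            return err\n        }\n        if len(steps) == 0 {\n            return nil \/\/ nothing left to do, the plan is complete\n        }\n        if err := run(ctx, steps); err != nil {\n            return err\n        }\n    }\n}\n\n\/\/ run executes every step recursively, scheduling the follow-up steps\n\/\/ each one returns.\nfunc run(ctx context.Context, steps []Step) error {\n    for _, s := range steps {\n        next, err := s.Run(ctx)\n        if err != nil {\n            return err\n        }\n        if err := run(ctx, next); err != nil {\n            return err\n        }\n    }\n    return nil\n}\n<\/code><\/pre>\n\n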
<p>I think Reactive Planning is one of the best ways to organize code in a cloud-native ecosystem, for its reactive nature as I said, and because it forces you to check the state of the system; if you don\u2019t do that, the plan will keep executing forever. Obviously, you can use a high-level check to skip a lot of steps. This requires balance: if the plan you are executing is critical and frequently used, you should check every step; if that requires an effort that won\u2019t pay back, you can implement deeper and more precise checks later. You can check the PodStatus: if it is running, we are good, nothing to do. Or you can check if Docker has a container running and if it has the right network configuration; if it is running but with no network, you can return the step that interacts with CNI to set the right interface. This freedom allows you to use an incremental approach: you start with an easy creation method with checks only for critical, high-level signals, deferring a more solid and sophisticated set of checks for later, when you have better knowledge about where the system fails.<\/p>\n\n<p class=\"small\">Hero image via\n<a href=\"https:\/\/pixabay.com\/en\/time-time-management-stopwatch-3222267\/\">Pixabay<\/a><\/p>\n"},{"title":"You will pay the price of a poor design","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/price-of-poor-software-design"}},"description":"I read the book A Philosophy of Software Design by John Ousterhout. It opened my eyes, giving me more confidence about how to explain and apply solid design concepts in software.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-11-10T08:08:27+00:00","published":"2018-11-10T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/price-of-poor-software-design","content":"<p>The only way to avoid paying the price of a poorly designed project is to replace it in time. But we all know that replacing software is not a great idea.<\/p>\n\n<p>Software should be improved via multiple iterations, and that\u2019s why you and your teammates will read a lot more code than you write, end of story.<\/p>\n\n<p>In the rush of writing a new project this sentence will sound wrong, but writing is just one of the phases. After that, it will be all about reading and replacing lines of code one by one, and if you poorly design the software, somebody will pay the price of it. You or somebody else; probably somebody else, looking at how quickly developers change jobs.<\/p>\n\n<blockquote>\n  <p>it is in your hands as a developer to think about design, at\nleast to save yourself from the darkness<\/p>\n<\/blockquote>\n\n<p>In a fancy unicorn startup you will hear that there is no time to think about design; if you are in tech, probably that\u2019s not true, but there is always a project that nobody cares about and that needs to be fixed.<\/p>\n\n<p>Some other companies don\u2019t have time to think about design because they are always running against something.<\/p>\n\n<p><img src=\"\/img\/hero\/poverty.jpg\" alt=\"\" class=\"img-fluid\" \/><\/p>\n\n<p>So, as you can see, it is in your hands as a developer to think about design, at least to save yourself from the darkness.<\/p>\n\n<p>Recently I read <a href=\"https:\/\/www.amazon.com\/Philosophy-Software-Design-John-Ousterhout\/dp\/1732102201\/\">\u201cA Philosophy of Software\nDesign\u201d<\/a>\nby John Ousterhout. This book turned out to be <strong>not<\/strong> a breath of fresh air <strong>for\nme<\/strong>; it was more like a \u201cbreath of consolidation\u201d. Professor John Ousterhout fixed on paper, in an excellent way, what I try to do every day but was not always able to express.<\/p>\n\n<p>Why comments are essential, deep APIs vs shallow classes, information hiding. You should read it!<\/p>\n\n<p><strong>Design it twice<\/strong>. It looks expensive, but it is a takeaway from the book that I think is the practical key to unlock the door that makes our work fun and healthy.<\/p>\n\n<p>The first solution can\u2019t be the best one. 
Even if you are smart enough to design something that won\u2019t crash, we should make an effort to think about other solutions, to ask for a review, just as we do when writing code.<\/p>\n\n<p>In theory, we will find the great design somewhere in between all the other attempts.<\/p>\n"},{"title":"Chaos Engineering","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/chaos-engineer"}},"description":"I took part in a panel at the Jazoon conference in Switzerland called Chaos Engineering, where I had a chance to learn about techniques and practices around this topic that, even if I knew about it, I never had the chance to put my head into. In this article I summarize my ideas and what I got, mainly around the definition of Chaos Engineering.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-08-23T08:08:27+00:00","published":"2018-08-23T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/chaos-engineer","content":"<p>At the <a href=\"https:\/\/jazoon.com\/\">Jazoon<\/a> conference in Switzerland, I had the chance to speak at the Chaos Engineering panel with <a href=\"https:\/\/twitter.com\/russmiles\">Russ\nMiles<\/a> from <a href=\"https:\/\/chaosiq.io\">ChaosIQ<\/a> and\n<a href=\"https:\/\/twitter.com\/aaronblohowiak\">Aaron P Blohowiak<\/a> from Netflix.<\/p>\n\n<p>The organizers put me in the panel probably because \u201cchaos\u201d was part of the title of the talk I had just given in the morning. I was too curious to mention that I had never done it before, at least on purpose!<\/p>\n\n<p>So I was really out of my comfort zone dealing with these two folks who know their shit so well!<\/p>\n\n<p>I am sure that as Engineers we are part of the Chaos: we create entropy inside the system during every deploy, and even if we have all the tests in the world, the first time it is tough to make it work. But I indeed never associated the word engineering with chaos. And that\u2019s the real challenge.<\/p>\n\n<p>So, let\u2019s define Chaos and Engineering altogether.<\/p>\n\n<p>Let\u2019s start with <code>Chaos<\/code> because it is the easy one. As I said, we as developers create chaos, distribution creates chaos, and customers create chaos. If somebody tells you that his production environment is excellent, you should not listen to him. Production is a nightmare, a complicated and painful place, at least if somebody uses it.<\/p>\n\n<p>And if it is just a bit more complicated than a static site, it never works 100%; the chaos governs it, and that\u2019s where the word Engineering becomes essential.<\/p>\n\n<p><code>Engineering<\/code>, at least for what I can understand, means to be driven by data and not feelings. So associating these two concepts together, you have a powerful way to measure the chaos.<\/p>\n\n<p>I think you can\u2019t avoid chaos, so the best way to handle it is to learn from what it generates in your system to anticipate unpredictable situations.<\/p>\n\n<p>As developers, ops or devops we are pessimistic about our systems, and we know they will fail: servers crash, CoreOS auto-updates itself, third party services stop working. 
Usually the answer is to wait for it to happen, typically on a Friday night.<\/p>\n\n<p>Chaos Engineering is an exercise, a practice to leverage \u201cunusual but possible\u201d situations as a teaching vector for our system.<\/p>\n\n<p>It is another tool to achieve resiliency and to test scalability.<\/p>\n\n<p>Chaos Engineering doesn\u2019t bring down all your production systems in an unrecoverable way. It designs exercises that you and your team will use to increase your operational experience and confidence.<\/p>\n\n<p>Observability is a sort of requirement to understand how a chaotic event changes the \u201cnormal\u201d state of your system. But from another point of view, a chaotic event sheds some light on a particular part of your system, showing up a lack of monitoring and instrumentation.<\/p>\n\n<p>There are open source frameworks like <a href=\"https:\/\/github.com\/chaostoolkit\">Chaos Toolkit<\/a> and famous tools like <a href=\"https:\/\/github.com\/Netflix\/chaosmonkey\">Chaos Monkey<\/a>.<\/p>\n\n<p>I will try to start with some very simple examples without writing too much code. I will get these metrics out of my system:<\/p>\n\n<ol>\n  <li>Number of requests (probably from ingress\/nginx)<\/li>\n  <li>The number of requests with status code &gt; 499<\/li>\n  <li>HTTP request latency<\/li>\n<\/ol>\n\n<p>After that, I will try to simulate an outage by removing or scaling down particular pods (the ones that get all the traffic), and I will look at how the metrics change and how long it takes to recover.<\/p>\n
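<p>As a concrete starting point, this is a minimal sketch of that \u201cscale down\u201d experiment using client-go; the namespace <code>default<\/code> and the deployment name <code>frontend<\/code> are placeholders, and it assumes a recent client-go where the scale API takes a context:<\/p>\n\n<pre><code class=\"language-golang\">package main\n\nimport (\n    \"context\"\n    \"log\"\n\n    metav1 \"k8s.io\/apimachinery\/pkg\/apis\/meta\/v1\"\n    \"k8s.io\/client-go\/kubernetes\"\n    \"k8s.io\/client-go\/tools\/clientcmd\"\n)\n\nfunc main() {\n    \/\/ Load the kubeconfig of the cluster you want to experiment on.\n    config, err := clientcmd.BuildConfigFromFlags(\"\", clientcmd.RecommendedHomeFile)\n    if err != nil {\n        log.Fatal(err)\n    }\n    clientset, err := kubernetes.NewForConfig(config)\n    if err != nil {\n        log.Fatal(err)\n    }\n    ctx := context.Background()\n    deployments := clientset.AppsV1().Deployments(\"default\")\n    \/\/ Scale the deployment that gets all the traffic down to zero\n    \/\/ replicas, then watch how the three metrics above react.\n    scale, err := deployments.GetScale(ctx, \"frontend\", metav1.GetOptions{})\n    if err != nil {\n        log.Fatal(err)\n    }\n    scale.Spec.Replicas = 0\n    if _, err := deployments.UpdateScale(ctx, \"frontend\", scale, metav1.UpdateOptions{}); err != nil {\n        log.Fatal(err)\n    }\n    log.Println(\"frontend scaled to zero, watch your dashboards\")\n}\n<\/code><\/pre>\n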
"},{"title":"OpenMetrics and the future of the prometheus exposition format","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/prometheus-openmetrics-expositon-format"}},"description":"This post explains my point of view on the prometheus exposition format, and it summarises the next steps for OpenMetrics, which is supported by the CNCF and other big companies.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-08-23T08:08:27+00:00","published":"2018-08-23T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/prometheus-openmetrics-expositon-format","content":"<p>Who am I to tell you the future of the prometheus exposition format? Nobody!<\/p>\n\n<p>I was at PromCon in Munich in August 2018 and I found the conference great! A lot of use cases about metrics, monitoring and prometheus itself. I work at InfluxData and we were there as a sponsor, but I followed a lot of talks and I had the chance to attend the developer summit the next day with a lot of prometheus maintainers. Really good conversations!<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">I just\nrealized how lucky I was these days having chance to be so welcomed by the <a href=\"https:\/\/twitter.com\/hashtag\/prometheus?src=hash&amp;ref_src=twsrc%5Etfw\">#prometheus<\/a>\ncommunity. I love my work. Thanks <a href=\"https:\/\/twitter.com\/juliusvolz?ref_src=twsrc%5Etfw\">@juliusvolz<\/a> <a href=\"https:\/\/twitter.com\/TwitchiH?ref_src=twsrc%5Etfw\">@TwitchiH<\/a> <a href=\"https:\/\/twitter.com\/tom_wilkie?ref_src=twsrc%5Etfw\">@tom_wilkie<\/a> and\neveryone.. I feel regenerated<\/p>&mdash; :w !sudo tee % (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/1028414240535793664?ref_src=twsrc%5Etfw\">August\n11, 2018<\/a><\/blockquote>\n\n<p>To be honest, my scope a few years ago was very different: I was working in PHP, writing web applications that, yes, I was deploying, but I wasn\u2019t digging too much around them, and I was not smart enough to understand that the whole pull vs push debate was just garbage. Smoke in the eyes that luckily I left behind pretty soon, because I had the chance to meet smart people who drove me out of it.<\/p>\n\n<p>Providing a comfortable way for me to expose and store metrics is a vital requirement, and the library needs to expose the RIGHT data; it doesn\u2019t matter if it is pushed or pulled.<\/p>\n\n<p>RIGHT means the best I can get to have more observability from an ops point of view, but also from a business intelligence perspective, probably just manipulating the same data again.<\/p>\n\n<p>It is safe to say that a pull-based exposition format is easy to pack together because it works even if the server that should grab the exposed endpoint is unavailable, or even if nothing grabs it at all. A push-based service will always create some network noise, even if nobody is interested in getting the metrics.<\/p>\n\n<p>Back in the day we had SNMP, but other than it being an internet standard, its adoption is not comparable with the prometheus one; if we add how old it is and how fast prometheus grew, the situation gets even worse.<\/p>\n\n<pre><code>.1.0.0.0.1.1.0 octet_str \"foo\"\n.1.0.0.0.1.1.1 octet_str \"bar\"\n.1.0.0.0.1.102 octet_str \"bad\"\n.1.0.0.0.1.2.0 integer 1\n.1.0.0.0.1.2.1 integer 2\n.1.0.0.0.1.3.0 octet_str \"0.123\"\n.1.0.0.0.1.3.1 octet_str \"0.456\"\n.1.0.0.0.1.3.2 octet_str \"9.999\"\n.1.0.0.1.1 octet_str \"baz\"\n.1.0.0.1.2 uinteger 54321\n.1.0.0.1.3 uinteger 234\n<\/code><\/pre>\n\n<p>It also started as a network exposition format, so it doesn\u2019t express other kinds of metrics really well.<\/p>\n\n<p>The <a href=\"https:\/\/github.com\/prometheus\/docs\/blob\/master\/content\/docs\/instrumenting\/exposition_formats.md\">prometheus exposition\nformat<\/a>\nis extremely valuable, and I recently instrumented a legacy application using the prometheus SDK; my code looks a lot cleaner and more readable.<\/p>\n\n<p>At the beginning I was using logs as the transport layer for my metrics and time series, but I ended up having a lot of spam in the logs themselves, because I was also streaming a lot of \u201cnot logs but metrics\u201d garbage.<\/p>\n\n<p>The link to the prometheus doc above is the best place to start; here I am just copy-pasting something from there:<\/p>\n\n<pre><code># HELP http_requests_total The total number of HTTP requests.\n# TYPE http_requests_total counter\nhttp_requests_total{method=\"post\",code=\"200\"} 1027 1395066363000\nhttp_requests_total{method=\"post\",code=\"400\"}    3 1395066363000\n\n# Escaping in label values:\nmsdos_file_access_time_seconds{path=\"C:\\\\DIR\\\\FILE.TXT\",error=\"Cannot find file:\\n\\\"FILE.TXT\\\"\"} 1.458255915e9\n\n# Minimalistic line:\nmetric_without_timestamp_and_labels 12.47\n\n# A weird metric from before the epoch:\nsomething_weird{problem=\"division by zero\"} +Inf -3982045\n\n# A histogram, which has a pretty complex representation in the text format:\n# HELP http_request_duration_seconds A histogram of the request duration.\n# TYPE 
http_request_duration_seconds histogram\nhttp_request_duration_seconds_bucket{le=\"0.05\"} 24054\nhttp_request_duration_seconds_bucket{le=\"0.1\"} 33444\nhttp_request_duration_seconds_bucket{le=\"0.2\"} 100392\nhttp_request_duration_seconds_bucket{le=\"0.5\"} 129389\nhttp_request_duration_seconds_bucket{le=\"1\"} 133988\nhttp_request_duration_seconds_bucket{le=\"+Inf\"} 144320\nhttp_request_duration_seconds_sum 53423\nhttp_request_duration_seconds_count 144320\n\n# Finally a summary, which has a complex representation, too:\n# HELP rpc_duration_seconds A summary of the RPC duration in seconds.\n# TYPE rpc_duration_seconds summary\nrpc_duration_seconds{quantile=\"0.01\"} 3102\nrpc_duration_seconds{quantile=\"0.05\"} 3272\nrpc_duration_seconds{quantile=\"0.5\"} 4773\nrpc_duration_seconds{quantile=\"0.9\"} 9001\nrpc_duration_seconds{quantile=\"0.99\"} 76656\nrpc_duration_seconds_sum 1.7560473e+07\nrpc_duration_seconds_count 2693\n<\/code><\/pre>\n\n<p>Think about that not as the prometheus way to grab metrics, but as the language that your application uses to teach the outside world how it feels.<\/p>\n\n<p>It is just a plain text endpoint over HTTP that everyone can parse and re-use.<\/p>\n\n<p>For example,\n<a href=\"https:\/\/www.influxdata.com\/time-series-platform\/kapacitor\/\">kapacitor<\/a> and\n<a href=\"https:\/\/www.influxdata.com\/time-series-platform\/telegraf\/\">telegraf<\/a> have specific ways to parse and extract metrics from that URL.<\/p>\n\n<p>If you don\u2019t have time to write a parser for that, you can use\n<a href=\"https:\/\/github.com\/prometheus\/prom2json\">prom2json<\/a> to get a JSON version of it.<\/p>\n\n<p>In Go you can dig a bit more inside that code and reuse some of the functions, for example:<\/p>\n\n<pre><code class=\"language-go\">\/\/ FetchMetricFamilies retrieves metrics from the provided URL, decodes them\n\/\/ into MetricFamily proto messages, and sends them to the provided channel. It\n\/\/ returns after all MetricFamilies have been sent.\nfunc FetchMetricFamilies(\n\turl string, ch chan&lt;- *dto.MetricFamily,\n\tcertificate string, key string,\n\tskipServerCertCheck bool,\n) error {\n\tdefer close(ch)\n\tvar transport *http.Transport\n\tif certificate != \"\" &amp;&amp; key != \"\" {\n\t\tcert, err := tls.LoadX509KeyPair(certificate, key)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttlsConfig := &amp;tls.Config{\n\t\t\tCertificates:       []tls.Certificate{cert},\n\t\t\tInsecureSkipVerify: skipServerCertCheck,\n\t\t}\n\t\ttlsConfig.BuildNameToCertificate()\n\t\ttransport = &amp;http.Transport{TLSClientConfig: tlsConfig}\n\t} else {\n\t\ttransport = &amp;http.Transport{\n\t\t\tTLSClientConfig: &amp;tls.Config{InsecureSkipVerify: skipServerCertCheck},\n\t\t}\n\t}\n\tclient := &amp;http.Client{Transport: transport}\n\treturn decodeContent(client, url, ch)\n}\n<\/code><\/pre>\n<p><a href=\"https:\/\/github.com\/prometheus\/prom2json\/blob\/master\/prom2json.go#L123\">FetchMetricFamilies<\/a> can be used to get a channel with all the fetched metrics. 
When you have the channel, you can do whatever you desire with it:<\/p>\n\n<pre><code class=\"language-go\">mfChan := make(chan *dto.MetricFamily, 1024)\n\ngo func() {\n    err := prom2json.FetchMetricFamilies(flag.Args()[0], mfChan, *cert, *key, *skipServerCertCheck)\n    if err != nil {\n        log.Fatal(err)\n    }\n}()\n\nresult := []*prom2json.Family{}\nfor mf := range mfChan {\n    result = append(result, prom2json.NewFamily(mf))\n}\n<\/code><\/pre>\n\n<p>As you can see,\n<a href=\"https:\/\/github.com\/prometheus\/prom2json\/blob\/master\/cmd\/prom2json\/main.go#L42\"><code>prom2json<\/code><\/a>\nconverts the result to JSON.<\/p>\n\n<p>It is pretty flexible! And it is a common API to read application status. A common API, as we all know, means automation! Dope automation!<\/p>\n\n<h2 id=\"future\">Future<\/h2>\n<p>The prometheus exposition format grew in adoption across the board, and a couple of people led by <a href=\"https:\/\/twitter.com\/TwitchiH\">Richard<\/a> are now pushing to have this format become a new Internet Standard!<\/p>\n\n<p>The project is called <a href=\"https:\/\/openmetrics.io\/\">OpenMetrics<\/a> and it is a Sandbox project under the CNCF.<\/p>\n\n<p>If you are looking to follow the project, here is the official repository on\n<a href=\"https:\/\/github.com\/OpenObservability\/OpenMetric\">GitHub<\/a>.<\/p>\n\n<p>Probably it looks like just a political step with no value at all from a tech point of view, but I bet that when it becomes a standard, and not just \u201cthe prometheus exposition\u201d, we will start to have routers exposing stats over\n<code>http:\/\/192.168.1.1\/metrics<\/code> and it will be a lot of fun!<\/p>\n\n<p>It will be obvious that it is not an <code>only-prometheus<\/code> feature, and this new group has people from different companies and backgrounds. So the exposition format will probably be not just for operational metrics but more generic.<\/p>\n"},{"title":"Apps I used during my nomad working","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/nomad-working-apps"}},"description":"When I travel for conferences, or now that I work remotely and I am a bit more like a nomad, I discovered and learned some good apps that help me plan and better combine work and travel. Some of them are WorkFrom, oBike, Yelp, Adobe Scan. Let me know yours.","image":"https:\/\/gianarb.it\/img\/dna.jpg","updated":"2018-08-11T10:38:27+00:00","published":"2018-08-11T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/nomad-working-apps","content":"<p>It has been a couple of years since my first conference, and now that I am working remotely it is even harder to combine traveling and work.<\/p>\n\n<p>Mainly because now that I don\u2019t need to go to an office I travel more often, and usually conferences are farther away too. It means that I spend more time away from my usual workplace.<\/p>\n\n<p>It is a very challenging and exciting opportunity, and I am glad to live it. This is the first important thing, I think: if you are not happy about what you do, even if it is challenging, you are going to give up.<\/p>\n\n<p>But other things help me a lot. In general, they come back to <code>planning.<\/code> I feel better when I have the time to look around and to be prepared for the city I am going to visit. There are a couple of apps that help me with that:<\/p>\n\n<h2 id=\"workfrom\"><a href=\"https:\/\/workfrom.co\/\">WorkFrom<\/a><\/h2>\n<p>It is a community of digital nomads and remote workers. 
It keeps an up-to-date database of pubs, libraries, restaurants and bars where you can work from. It is very nice, and it has some nice features like:<\/p>\n\n<ol>\n  <li>A map, so you can see what is around you.<\/li>\n  <li>Net speed measurement inside the app. Other than a detailed description of the place, it also shares (if the person who reviewed the place measured it) how fast the internet connection is, and sometimes even the WiFi password, so you don\u2019t need to ask.<\/li>\n  <li>As I mentioned, it is a community, so there is also a Slack channel that you can use to speak with other remote workers.<\/li>\n<\/ol>\n\n<p>I used it in Berlin, Munich, Copenhagen and Amsterdam, and it worked pretty well.<\/p>\n\n<h2 id=\"bike-and-other-local-transport-applications-i-am-using\">oBike and other local transport applications I am using<\/h2>\n<p>I mention <a href=\"https:\/\/www.o.bike\/it\/\">oBike<\/a> as an example just because I used it recently in Munich, but I think you should always have a look at what the city uses for bike sharing, for instance, because at least for me, even if I love to walk around, taking a ride from time to time is faster and helpful.<\/p>\n\n<p>Bonus point: a lot of these apps have a free tier, which means that you can even use them for free the first time you visit that city!<\/p>\n\n<h2 id=\"yelp\">Yelp<\/h2>\n<p>In general, I find <a href=\"https:\/\/www.yelp.com\/\">Yelp<\/a> better than TripAdvisor for what concerns restaurants and places to eat. So when I am not able to spot anything good by myself during my walks around the city, or when I am looking for a specific kind of food, I use Yelp.<\/p>\n\n<h2 id=\"adobe-scan\">Adobe Scan<\/h2>\n<p>For this <a href=\"https:\/\/acrobat.adobe.com\/us\/en\/mobile\/scanner-app.html\">app<\/a> I need to give credit to <a href=\"https:\/\/twitter.com\/fntlnz\">Lorenzo<\/a>, because he is the one who showed it to me first. I use it a lot after a trip, when I need to submit the expenses. It is always very annoying work to do, but at least with this app I can take a set of pictures and it will generate a single pdf ready to be submitted!<\/p>\n\n<h2 id=\"revolut\">Revolut<\/h2>\n<p>A few years ago, when I was working at CurrencyFair, I started to test and play with online banks, and <a href=\"https:\/\/revolut.com\/r\/gianlu1b2\">Revolut<\/a> is very good when you are traveling around and you need to manage different currencies. First of all, I like the idea of using a different card when I buy on Amazon or when I travel, because in case of any trouble I will have a limited amount of money there. For example, in Cuba I had my card cloned, but I only had 10 euros on it, so it was not my main account. (The bank gave me the money back in any case, btw.)<\/p>\n\n<p>Plus, Revolut has some excellent features to track where and what you spent. You can label your transfers to easily look them up when you need to expense them or calculate how much you spent. 
The exchange commission is low compared with more traditional banks, so this is a free win!<\/p>\n\n<p>Let me know on <a href=\"https:\/\/twitter.com\/gianarb\">Twitter<\/a> if you have any other applications to suggest; I will be happy to try them next time and maybe to add them here!<\/p>\n\n<p><small><a href=\"https:\/\/www.newhdwallpapers.in\/natural-hd-wallpapers\/himalayas-mountain-series-tibet\/\" target=\"_blank\">hero img credits<\/a> <\/small><\/p>\n"},{"title":"FAQ: Distributed tracing","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/faq-distributed-tracing"}},"description":"Tracing is a well known concept in programming, but distributed tracing is a revisitation that adapts the concept to distributed systems. This article is an FAQ where I answer common questions I received, or saw around the net, about monitoring and distributed tracing.","image":"https:\/\/gianarb.it\/img\/dna.jpg","updated":"2018-07-06T10:38:27+00:00","published":"2018-07-06T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/faq-distributed-tracing","content":"<p>This article is a write-up of a talk that I will give at the\n<a href=\"https:\/\/osmc.de\">osmc<\/a> in Germany in November about distributed tracing. It is a sequence of questions I got about distributed systems and distributed monitoring.<\/p>\n\n<h2 id=\"why-do-i-need-distributed-tracing\">Why do I need distributed tracing?<\/h2>\n<p>It always depends. I find distributed tracing useful in a microservices environment, or more in general when there is a request that flows through a system crossing different applications, queues or processes. If you have a problem understanding where a request fails, you need to <em>follow it<\/em> in some way, and tracing does just that.<\/p>\n\n<h2 id=\"how-do-you-follow-a-request\">how do you follow a request?<\/h2>\n<p>First of all, we should probably change the name <em>request<\/em>; it looks too HTTP oriented, and it is not really what we look for now. In modern applications, you are interested in <em>events<\/em>. You need to monitor an event:<\/p>\n\n<ul>\n  <li>user registration<\/li>\n  <li>payment<\/li>\n  <li>a bank transaction<\/li>\n  <li>send an email<\/li>\n  <li>generate an invoice<\/li>\n<\/ul>\n\n<p>These are all events, and probably in your system they are distributed not via HTTP; maybe they go in a queue, or they are broadcast using Kafka or Redis. Distributed tracing is all about tracking events. The way to go is to create an id. Usually it is called <code>request_id<\/code> or <code>trace_id<\/code>, and you need a way to propagate it in your system.<\/p>\n\n<p>For example, in a queue you can put the <code>trace_id<\/code> as part of the payload. Via HTTP or gRPC you can use headers.<\/p>\n\n<p>Your application can take that id, and it can create the span to trace a particular section.<\/p>\n\n
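<p>As a minimal sketch of the HTTP case (the header name <code>X-Trace-Id<\/code> and the helper names here are made up; real instrumentation libraries give you an injector for this):<\/p>\n\n<pre><code class=\"language-go\">package propagation\n\nimport (\n    \"crypto\/rand\"\n    \"encoding\/hex\"\n    \"net\/http\"\n)\n\nconst traceHeader = \"X-Trace-Id\"\n\n\/\/ newTraceID generates a random id for a brand new trace.\nfunc newTraceID() string {\n    b := make([]byte, 8)\n    if _, err := rand.Read(b); err != nil {\n        panic(err)\n    }\n    return hex.EncodeToString(b)\n}\n\n\/\/ Propagate copies the incoming trace id (or creates one) into the\n\/\/ outgoing request, so the next service can join the same trace.\nfunc Propagate(in *http.Request, out *http.Request) {\n    traceID := in.Header.Get(traceHeader)\n    if traceID == \"\" {\n        traceID = newTraceID()\n    }\n    out.Header.Set(traceHeader, traceID)\n}\n<\/code><\/pre>\n\n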
<h2 id=\"how-a-trace-looks-like\">what does a trace look like?<\/h2>\n\n<p><img src=\"\/img\/trace.jpg\" alt=\"How I image a trace for a distributed tracing app\" class=\"img-fluid\" \/><\/p>\n\n<p>In my mind, this is the picture of one trace. Every segment is a span. So, every span has a trace id, and every span has its own <code>span_id<\/code>. You can attach information to every span as a key-value store. Let\u2019s suppose a span represents a query in MySQL: you can put the query as metadata in the span itself. In this way you will have a bit more context.<\/p>\n\n<h2 id=\"do-we-need-a-standard-for-tracing\">do we need a standard for tracing?<\/h2>\n\n<p>I can\u2019t convince you that interoperability is essential if you already analyzed the problem and answered \u201cNo\u201d to yourself. To build a trace you need to agree on something across languages and applications. That\u2019s why I think a standard is something you can not avoid; in the end you will end up having one just for your company.<\/p>\n\n<h2 id=\"how-a-tracing-infrastructure-looks\">what does a tracing infrastructure look like?<\/h2>\n\n<p><img src=\"\/img\/tracing_infra.png\" style=\"width:70%\" alt=\"Sketch of tracing infrastructure.\" class=\"img-fluid\" \/><\/p>\n\n<p>Which applications are writing traces is not important: traces are cross-platform and cross-language. Usually, you point an app to a tracer. It can be Zipkin, Jaeger or others.<\/p>\n\n<p>The tracer takes all the traces and stores them in a storage backend. The databases are usually ElasticSearch, Cassandra, InfluxDB; it depends on which tracer you are using, as they support different databases.<\/p>\n\n<p>In general, traces are high-cardinality-oriented data, and you can write a lot of them in a short amount of time. So it is a write-intensive application.<\/p>\n\n<p>There are a couple of other pieces that you can add to your tracing infrastructure:<\/p>\n\n<ul>\n  <li>You can add a <em>downsampler<\/em> to select what to store. If an API request generates too many traces, probably you are interested in storing only a percentage of them to decrease pressure on your database. You can use a simple deterministic hash on the trace_id to decide what to save or not; a <code>mod<\/code> on the <code>trace_id<\/code> is enough, for example (see the sketch after this list).<\/li>\n  <li>You can add a <em>collector<\/em> in front of the tracer. Zipkin supports Kafka, for example; at InfluxData we use telegraf. A collector is usually a stateless application: it gets all the traces from the applications, batches them and sends them to the tracer. A collector decreases the pressure on the tracer itself, because tracers usually work better with bulks of data. Second, if a tracer goes down or you need to update it, the collector is a layer that can keep the traces for a little while, giving you time to restore the tracer.<\/li>\n<\/ul>\n\n
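<p>A minimal sketch of that <code>mod<\/code>-based downsampler; the function name is made up, and it hashes the trace id so the same decision is made for a trace no matter which collector sees it (here keeping roughly one trace out of <code>rate<\/code>):<\/p>\n\n<pre><code class=\"language-go\">package sampler\n\nimport \"hash\/fnv\"\n\n\/\/ Keep decides whether a trace should be stored. Hashing the trace_id\n\/\/ makes the decision deterministic: every collector keeps or drops the\n\/\/ same traces, so a kept trace is always complete.\nfunc Keep(traceID string, rate uint64) bool {\n    h := fnv.New64a()\n    h.Write([]byte(traceID))\n    return h.Sum64()%rate == 0\n}\n<\/code><\/pre>\n\n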
<h2 id=\"why-did-i-pick-opentracing\">why did I pick opentracing?<\/h2>\n\n<p>I am an interoperability-oriented developer; I think it is essential to avoid vendor lock-in, and by embracing a big community like the opentracing one you get a lot of tools and services already instrumented with this protocol. It makes my life easy.<\/p>\n\n<h2 id=\"can-i-have-a-tracing-infrastructure-on-prem\">can I have a tracing infrastructure on-prem?<\/h2>\n\n<p>You can; there are a couple of open source tracers.<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/zipkin.io\/\">Zipkin<\/a> is an open source project in Java started by Twitter.<\/li>\n  <li><a href=\"https:\/\/github.com\/jaegertracing\/jaeger\">Jaeger<\/a> looks a lot like a port of it to Golang, and Uber makes it.<\/li>\n<\/ul>\n\n<p>Both of them are open source, and they support different backends like ElasticSearch, Cassandra and so on.<\/p>\n\n<h2 id=\"there-are-as-a-service-tracing-infrastructure\">are there as-a-service tracing infrastructures?<\/h2>\n\n<p>There are: NewRelic has an opentracing-compatible API, or <a href=\"https:\/\/lightstep.com\/\">Lightstep<\/a> for example. A lot of cloud providers offer a tracing service too: AWS X-Ray or Google Stackdriver.<\/p>\n\n<h2 id=\"can-i-store-traces-everywhere\">can I store traces everywhere?<\/h2>\n\n<p>You can, but they are high-cardinality data. The <code>trace_id<\/code> is usually the lookup parameter for your queries. It means that it should be indexed, but it changes for every request. The consequence is a big index. You need to keep that in mind.<\/p>\n\n<h2 id=\"once-you-do-the-tracethen-what\">Once you do the trace\u2026then what?<\/h2>\n\n<p>I left this question as the last one because I read it in the opentracing mailing list and I think it is a hilarious question.<\/p>\n\n<p>First of all, you don\u2019t buy a pen and only afterwards start asking yourself why you have it.<\/p>\n\n<p>Probably you need to write something, and for that reason, you buy a pen.<\/p>\n\n<p>Anyway, I trace my applications because it helps me to understand my environment despite the \u201cdistribution complexity.\u201d I can detect what is taking too long, and a trace helps me to understand what to optimize.<\/p>\n\n<p>Opentracing has a set of standard annotations that are very useful to detect network latency between services. You can mark a span as \u201cclient send\u201d, for example, and when the server gets the request, you can mark another span as \u201cserver receive.\u201d These two pieces of information tell you how much time your request spends going from the client to the server, and you can usually optimize that time by working on the proximity between the two applications.<\/p>\n\n
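<p>As a rough sketch of how that looks with opentracing-go (my example; I am using the standard <code>ext<\/code> span-kind tags, which is how opentracing expresses the client and server sides of a call):<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\topentracing \"github.com\/opentracing\/opentracing-go\"\n\t\"github.com\/opentracing\/opentracing-go\/ext\"\n)\n\nfunc main() {\n\t\/\/ The client marks its span as the RPC client side...\n\tclientSpan := opentracing.StartSpan(\"get_report\")\n\text.SpanKindRPCClient.Set(clientSpan)\n\n\t\/\/ ...and the server opens a child span marked as the server side.\n\tserverSpan := opentracing.StartSpan(\n\t\t\"get_report\",\n\t\topentracing.ChildOf(clientSpan.Context()),\n\t)\n\text.SpanKindRPCServer.Set(serverSpan)\n\n\t\/\/ The gap between the two spans' timestamps is your network latency.\n\tserverSpan.Finish()\n\tclientSpan.Finish()\n}\n<\/code><\/pre>\n\n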
<p>More in general, you can parse a trace to get whatever you need, as with normal logs or events; the powerful thing is downsampling and analysis. If you are tracing a queue system, you can get the average time a worker takes to process a message.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>Let me know if you have more questions on twitter <a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>. I am happy to answer them here.<\/p>\n"},{"title":"Logs, metrics and traces are equally useless","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/logs-metrics-traces-aggregation"}},"description":"The key to monitoring a distributed system is not logs, metrics or traces but how you are able to aggregate them. You cannot observe and monitor a complex system by looking at single signals.","image":"https:\/\/gianarb.it\/img\/dna.jpg","updated":"2018-06-18T10:38:27+00:00","published":"2018-06-18T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/logs-metrics-traces-aggregation","content":"<p>Every signal from applications or infrastructure is useless on its own; in the distributed system era, aggregation matters.<\/p>\n\n<p>The ability to combine logs, metrics and traces together is the key takeaway here.<\/p>\n\n<p>Kubernetes spins up too many containers to allow us to stream or tail a log file.<\/p>\n\n<p>Even cloud providers offer too many virtual machines to enable us to tail logs.<\/p>\n\n<p>A centralized place to store all of them is a great start, but you need to experiment and learn how to combine the metrics you are ingesting to increase the visibility over your system.<\/p>\n\n<p>If you instrument your code with opentracing, for example, you can get the <code>trace_id<\/code> and attach it to your log to associate it with the trace itself. It can also work as the lookup key for troubleshooting.<\/p>\n\n<p>If you get some weird logs, you will know where they come from. With opentracing, this is still a bit of a mess: the specification recently <a href=\"https:\/\/github.com\/opentracing\/specification\/blob\/master\/rfc\/trace_identifiers.md\">added explicit support to extract TraceId and SpanId from the SpanContext<\/a>, but it is not yet implemented in a lot of implementations. I recently started a conversation in the <a href=\"https:\/\/github.com\/opentracing\/opentracing-go\/issues\/188\">opentracing-go<\/a> project to figure out how to apply it, because currently it depends on which tracer you are using, and that is a real regression for the specification itself, which should hide this by design.<\/p>\n\n<p>Using Jaeger, this is the way to do it:<\/p>\n<pre><code>if sc, ok := span.Context().(jaeger.SpanContext); ok {\n  sc.TraceID()\n}\n<\/code><\/pre>\n<p>Using Zipkin:<\/p>\n<pre><code>zipkinSpan, ok := sp.Context().(zipkin.SpanContext)\nif ok &amp;&amp; !zipkinSpan.TraceID.Empty() {\n  w.Header().Add(\"X-Trace-ID\", zipkinSpan.TraceID.ToHex())\n}\n<\/code><\/pre>\n\n<p>To get back on track: I wrote this article because I saw this problem and this inclination while speaking with friends, colleagues and other devs. We are now good (or just better) at storing high-cardinality values, but saving them inside a database doesn\u2019t give us any value by itself; it is all about how we use them.<\/p>\n\n<p>Correlation brings your alerting to a different level. You probably have an alarm to measure how much disk space you still have.<\/p>\n\n<p>An alert on CPU usage alone can be very frustrating, even more so if it fires too often, and a lot of the time you restart a container or a node to make it work, because at 2 am you can\u2019t fix the cause. You can investigate what matters and file an issue on GitHub later.<\/p>\n\n<p>Automation tools can take over that work, leaving you free to sleep. They can probably even file the issue.<\/p>\n\n<p>Combining the CPU with the time the system takes to recover from a node restart can make your alert smart enough to wake you up only when the system is not able to fix itself, leaving you rested for the more acute, non-trivial problems.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>It is a pretty straightforward concept, but yes, everything is useless if you store data without getting value out of it, and it doesn\u2019t matter whether they are logs, metrics or traces. The real value is not in any single one of them; it is in how you aggregate them together, because a complex system doesn\u2019t explain itself through one signal.<\/p>\n"},{"title":"Cloud Native Intranet with Kubernetes, CoreDNS and OpenVPN","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/cloud-native-intranet"}},"description":"When designing an architecture, the network should be a top priority because it is very hard to change later. Even in a cloud environment running on Kubernetes the situation doesn't change. Security and networking are patterns that are hard to retrofit into old projects. In this talk I will share a practical idea about how to start in the best way with OpenVPN and a private DNS in a Kubernetes cluster in order to build your own intranet.","image":"https:\/\/gianarb.it\/img\/kubernetes.png","updated":"2018-05-29T10:38:27+00:00","published":"2018-05-29T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/cloud-native-intranet","content":"<p>This article has a marketing and buzzword-oriented title. 
I know.<\/p>\n\n<p>Let me introduce what I am going to talk about here with better words: VPN, private DNS, kubernetes, security.<\/p>\n\n<p>I hope we all agree that a VPN should be a must-have when you set up an infrastructure. It doesn\u2019t matter what you are doing or how many people are working with you.<\/p>\n\n<p>When you design a new system, usually you only need to expose some services to the public over HTTP and HTTPS; all the rest (Jenkins, monitoring tools, dashboards, log management) should be locked down and accessible only from a private network. An intranet.<\/p>\n\n<blockquote>\n  <p>An intranet is a private network accessible only to an organization\u2019s staff. Often, a wide range of information and services are available on an organization\u2019s internal intranet that is unavailable to the public, unlike the Internet.<\/p>\n<\/blockquote>\n\n<p>All these concepts apply to the \u201cCloud Native\u201d ecosystem as well.<\/p>\n\n<p>Kubernetes has a powerful dashboard and a CLI that you can use to interact with the API. That API doesn\u2019t need to be publicly exposed, and to use the CLI from your laptop, you should set up a VPN.<\/p>\n\n<h2 id=\"openvpn\">OpenVPN<\/h2>\n<p>Usually, I configure OpenVPN using the image <a href=\"https:\/\/hub.docker.com\/r\/kylemanna\/openvpn\/\">kylemanna\/openvpn<\/a> available on Docker Hub. It is straightforward to deploy, and it offers a set of utilities around user creation and certificate management.<\/p>\n\n<pre><code class=\"language-yml\">apiVersion: v1\nkind: Service\nmetadata:\n  name: openvpn\n  namespace: openvpn\n  labels:\n    app: openvpn\nspec:\n  ports:\n  - name: openvpn\n    nodePort: 1194\n    port: 1194\n    protocol: UDP\n    targetPort: 1194\n  selector:\n    app: openvpn\n  type: NodePort\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: openvpn\n  namespace: \"openvpn\"\n  labels:\n    app: openvpn\nspec:\n  replicas: 1\n  strategy:\n    type: Recreate\n  selector:\n    matchLabels:\n      app: openvpn\n  template:\n    metadata:\n      labels:\n        app: openvpn\n    spec:\n      nodeSelector:\n        role: vpn\n      containers:\n        - name: openvpn\n          image: docker.io\/kylemanna\/openvpn\n          command: [\"\/etc\/openvpn\/setup\/configure.sh\"]\n          env:\n            - name: VPN_HOSTNAME\n              valueFrom:\n                configMapKeyRef:\n                  name: vpn-hostname\n                  key: hostname\n            - name: VPN_DNS\n              valueFrom:\n                configMapKeyRef:\n                  name: vpn-hostname\n                  key: dns\n          ports:\n            - containerPort: 1194\n              name: openvpn\n          securityContext:\n            capabilities:\n              add:\n                - NET_ADMIN\n          volumeMounts:\n            - mountPath: \/etc\/openvpn\/setup\n              name: openvpn\n              readOnly: false\n            - mountPath: \/etc\/openvpn\/certs\n              name: certs\n              readOnly: false\n      volumes:\n        - name: openvpn\n          configMap:\n            name: openvpn\n            defaultMode: 0755\n        - name: certs\n          persistentVolumeClaim:\n            claimName: openvpncerts\n<\/code><\/pre>\n<p>I put in the persistentVolumeClaim to remind you to store the certificates used and generated by the VPN (<code>\/etc\/openvpn\/certs<\/code>) in a persistent and safe place, and you should back them up too.<\/p>\n\n
<p>I won\u2019t write more about this topic; we are all excellent yaml developers!<\/p>\n\n<p>How to create users, configuration and so on is a well-known topic that you can easily <a href=\"https:\/\/openvpn.net\/index.php\/open-source\/documentation.html\">find in OpenVPN\u2019s documentation<\/a>.<\/p>\n\n<p>I don\u2019t know if you realized it, but this VPN runs inside a Kubernetes cluster, so, well configured, it allows us to reach pods via a private network and, as a bonus point, also via kubedns, to ping services, pods and all the other resources registered to it.<\/p>\n\n<p>To do that, the OpenVPN server can be configured to push kubedns to the client:<\/p>\n<pre><code>dhcp-option DNS &lt;kube-dns-ip&gt;\n<\/code><\/pre>\n<p>Something I learned is that if you are using Linux, the NetworkManager-OpenVPN plugin pushes the DNS correctly, but the OpenVPN cli tool doesn\u2019t; if you are using the latter, you need to set it up in another way.<\/p>\n\n<p>Tip: you can get the <code>&lt;kube-dns-ip&gt;<\/code> by running <code>cat \/etc\/resolv.conf<\/code> from inside a pod.<\/p>\n\n<h2 id=\"dns\">DNS<\/h2>\n<p>Pushing KubeDNS, or whatever DNS Kubernetes uses, is not enough to have a complete intranet. You should be able to set up a custom domain to have friendly, short URLs.<\/p>\n\n<p>You can take two different directions. KubeDNS can have static records configured, but some people are not happy to touch or customize KubeDNS too much, because Kubernetes itself uses it, and if you mess it up it can be a problem for everything.<\/p>\n\n<p>A possible solution is to deploy another DNS, like CoreDNS, and configure it to use KubeDNS as a fallback. In this way, you will be free to register custom TLDs and records. Kubernetes is going to use KubeDNS as usual, and if you mess up CoreDNS, only a fraction of your system will blow up.<\/p>\n\n<p>Naturally, to resolve your custom domains from the VPN you need to push the CoreDNS IP and not the one used by Kubernetes.<\/p>\n\n<p>If two DNSs are too much, take option one; or, from Kubernetes 1.10, you can use CoreDNS as the Kubernetes DNS, which is a bit more flexible, and you can use only that one if you are brave enough.<\/p>\n\n<p>I suggested CoreDNS because it supports record configuration via <a href=\"https:\/\/github.com\/coredns\/coredns\/tree\/master\/plugin\/etcd\">etcd<\/a>. Here is an example Corefile (note the <code>endpoint<\/code> directive pointing at the etcd cluster):<\/p>\n\n<pre><code>. {\n      errors\n      etcd *.myinternal {\n          stubzones\n          path \/skydns\n          endpoint  http:\/\/etcd-1:2379,http:\/\/etcd-2:2379,http:\/\/etcd-3:2379\n          upstream \/etc\/resolv.conf\n      }\n      proxy . \/etc\/resolv.conf\n}\n<\/code><\/pre>\n<p>Running this configuration inside a pod, lookups automatically fall back to kubedns (which in turn falls back to the resolver configured to reach the internet), because <code>upstream<\/code> points to <code>resolv.conf<\/code>, which inside a pod contains kubedns.<\/p>\n\n<h2 id=\"benefits\">Benefits<\/h2>\n<p>Resolving Kubernetes DNS records from your local environment is very convenient for building a shared or dynamic development environment for you and your colleagues.<\/p>\n\n<p>You can set up per-developer namespaces that developers can use to deploy services reachable from the program that they are writing. 
Or you can\ndeploy your application, and another person connected to the VPN will be able\nto use it.<\/p>\n"},{"title":"Server time vs Response time","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/server-time-vs-response-time"}},"description":"How do you dimension an infrastructure? How can you calculate container limits or how many nodes your application requires to support a specific load? Response time and server time are two key measurements to monitor saturation.","image":"https:\/\/gianarb.it\/img\/pastrami-sf.jpg","updated":"2018-05-18T10:38:27+00:00","published":"2018-05-18T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/server-time-vs-response-time","content":"<p>If you find yourself in San Francisco walking nearby Market Street, you should\nconsider stopping at the Jewish Museum. There is a charming Pastrami place just\nnext to it. It is a sandwich place with good lemonade. It only takes 3-4 minutes\nto get your meal, and from there it takes no more than 15 minutes walking to be\nin front of the Ocean. Very nice!\nNow, let\u2019s consider this other scenario.\nIt is lunchtime, and you are starving. You rush outside your office, and you run\nto the Pastrami place close to the Jewish Museum. After 35 minutes of wait, you\nget your sandwich and start eating it asking yourself: why it took so long this\ntime? Shall I probably have walked to the next place to get a faster meal?<\/p>\n\n<p>Something similar can happen to your Services as well! And that\u2019s precisely the\nphenomena in computer science we try to capture using the concepts of server\ntime and response time.\nServer time aims to measures how much a server takes to run a specific action.\nLet\u2019s say consider an example operation the generation of a monthly report: it\nusually takes 2ms, but if a lot of customers require the same kind of report at\nthe same time and your system saturates? This situation might very quickly end\nup in having a subset of them getting the report in more than 1 minute or\nactually in the timeout of the operation. The time it takes for a customer to\nget his report is what is typically called response time.<\/p>\n\n<h2 id=\"how-can-we-measure-these-metrics\">How can we measure these metrics?<\/h2>\n<p>The answer to this question is not easy: it depends on your architecture and\nsystem. The starting point is instrumenting your application to determine how\nmuch time it gets to produce the report. Stress testing is the other important\naspect: generating some load on your application and sampling the average\nresponse time will let you estimate the application\u2019s service time. Notice that\nto make this measurement the app should NOT soak during this test!<\/p>\n\n<p>If you control all the chain (from the HTTP app that sends the request to the\nserver), you can trace the request and simulate the same behavior of your\ncustomers. If you can\u2019t do this, you can consider using the frontend edge,\nprobably a load balancer.<\/p>\n\n<blockquote>\n  <p>I would rather have questions that can\u2019t be answered than answers that can\u2019t\nbe questioned. Richard Feynman<\/p>\n<\/blockquote>\n\n<h2 id=\"why-does-it-matter\">Why does it matter<\/h2>\n<p>How many nodes do I need to deploy to accommodate x number of requests per\nsecond? When should I consider scaling out my application? How does scale-out\naffect the customer experience?\nThis is precisely why server time and response time matters! 
<blockquote>\n  <p>I would rather have questions that can\u2019t be answered than answers that can\u2019t be questioned. Richard Feynman<\/p>\n<\/blockquote>\n\n<h2 id=\"why-does-it-matter\">Why does it matter<\/h2>\n<p>How many nodes do I need to deploy to accommodate x requests per second? When should I consider scaling out my application? How does scaling out affect the customer experience? This is precisely why server time and response time matter! Having an average response time close to the defined service time is a signal of proper utilization and health of an application, because it indicates that the response latency is under control and the application is far from saturation. Pushing these two signals to their limit, in addition, is a key way to estimate the correct sizing of your application instances and infrastructure.<\/p>\n\n<p><img alt=\"Market Street San Francisco, Pastrami Restaurant Jewish Museum\" src=\"\/img\/pastrami-sf.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>Btw the Pastrami place exists! You should try it! I will be in SF in 2 weeks. So let me know about other places <a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>. Picture from GMaps. I will take a better one!<\/p>\n"},{"title":"Go: how to clean up when an HTTP request is terminated","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-http-cleanup-http-connection-terminated"}},"description":"Cleaning up after terminated HTTP requests, especially the expensive ones, can be a huge performance improvement for your application. This short article shows how to handle HTTP request termination in Go.","image":"https:\/\/gianarb.it\/img\/gopher.png","updated":"2018-04-25T10:38:27+00:00","published":"2018-04-25T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-http-cleanup-http-connection-terminated","content":"<p>Expensive HTTP handlers are everywhere, no matter how good you are as a developer. Business logic is what matters in our applications, and it can be pretty complicated. It can create large files or resources on AWS, or start thousands of containers on Kubernetes.<\/p>\n\n<p>What these kinds of procedures have in common is that they can be very slow, and they produce a lot of garbage if the system or person that requested them stops prematurely, by mistake or not.<\/p>\n\n<p>If your API request creates AWS resources and the client terminates the call, you should clean up what you created.<\/p>\n\n<p>If you are generating a report and the customer changes their mind and refreshes, you should stop the procedure.<\/p>\n\n<p>You bet! Queues and background processes probably fit better, but coming back to the previous example: if you are computing something and whoever is waiting for the result changes their mind, stopping and releasing resources can be a massive optimization.<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n    \"fmt\"\n    \"io\/ioutil\"\n    \"net\/http\"\n    \"os\"\n    \"time\"\n)\n\nfunc main() {\n    http.HandleFunc(\"\/a\", func(w http.ResponseWriter, r *http.Request) {\n        err := ioutil.WriteFile(os.TempDir()+\"\/txt\", []byte(\"hello\"), 0644)\n        if err != nil {\n            panic(err)\n        }\n        println(\"new file \" + os.TempDir() + \"\/txt\")\n        notify := w.(http.CloseNotifier).CloseNotify()\n        go func() {\n            &lt;-notify\n            println(\"The client closed the connection prematurely. Cleaning up.\")\n            os.Remove(os.TempDir() + \"\/txt\")\n        }()\n        time.Sleep(4 * time.Second)\n        fmt.Fprintln(w, \"File persisted.\")\n    })\n    http.ListenAndServe(\":8080\", nil)\n}\n<\/code><\/pre>\n\n<p>When you are building an HTTP server in Go, you can use a channel provided by the <code>http.ResponseWriter<\/code> to wait for the connection to be closed. And if it happens, you can take action. 
The prototype above is very simple: every request stores a file, but I would like to remove the file if the client closes the connection.<\/p>\n\n<pre><code class=\"language-bash\">$ go run main.go\n<\/code><\/pre>\n\n<p>You can start the server, and from another terminal you can start a <code>curl<\/code>; you will see that after almost 4 seconds your request succeeds and the file is persisted on disk. Check it!<\/p>\n\n<pre><code>$ time curl http:\/\/localhost:8080\/a\nFile persisted.\n\nreal    0m4.018s\nuser    0m0.008s\nsys     0m0.006s\n$ cat \/tmp\/txt\n<\/code><\/pre>\n\n<p>Now let\u2019s suppose that the client terminates the connection because it is too slow, or the person who made the request doesn\u2019t care anymore. Are you going to leave that request running, even if nobody cares and it is just consuming resources?<\/p>\n\n<p>As you can see, I am using the notifier to remove the file if the client terminates the connection:<\/p>\n\n<pre><code class=\"language-go\">notify := w.(http.CloseNotifier).CloseNotify()\ngo func() {\n    &lt;-notify\n    println(\"The client closed the connection prematurely. Cleaning up.\")\n    os.Remove(os.TempDir() + \"\/txt\")\n}()\n<\/code><\/pre>\n\n<p>You can check it by stopping a <code>curl<\/code> just after starting it:<\/p>\n\n<pre><code>$ time curl http:\/\/localhost:8080\/a\n^C\n\nreal    0m1.016s\nuser    0m0.008s\nsys     0m0.005s\n<\/code><\/pre>\n<p>And the server reports:<\/p>\n\n<pre><code>$ go run main.go\nnew file \/tmp\/txt\nThe client closed the connection prematurely. Cleaning up.\n<\/code><\/pre>\n\n<p>That\u2019s it! Build and clean up after yourself!<\/p>\n\n
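<p>A side note: since Go 1.8 the request context carries the same signal, so the <code>CloseNotifier<\/code> pattern above can also be written like this (my sketch of the same hypothetical file cleanup):<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"net\/http\"\n\t\"os\"\n\t\"time\"\n)\n\nfunc main() {\n\thttp.HandleFunc(\"\/a\", func(w http.ResponseWriter, r *http.Request) {\n\t\tpath := os.TempDir() + \"\/txt\"\n\t\tif err := ioutil.WriteFile(path, []byte(\"hello\"), 0644); err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tselect {\n\t\tcase &lt;-r.Context().Done():\n\t\t\t\/\/ The server cancels the request context when the client\n\t\t\t\/\/ goes away: clean up and stop the work.\n\t\t\tprintln(\"The client closed the connection prematurely. Cleaning up.\")\n\t\t\tos.Remove(path)\n\t\t\treturn\n\t\tcase &lt;-time.After(4 * time.Second):\n\t\t\t\/\/ The simulated slow work finished normally.\n\t\t}\n\t\tfmt.Fprintln(w, \"File persisted.\")\n\t})\n\thttp.ListenAndServe(\":8080\", nil)\n}\n<\/code><\/pre>\n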
"},{"title":"Go testing tricks","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-testing-tricks"}},"description":"This post contains some feedback about how to write tests in Go.","image":"https:\/\/gianarb.it\/img\/gopher.png","updated":"2018-04-17T10:38:27+00:00","published":"2018-04-17T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-testing-tricks","content":"<p>I recently wrote a blog post with my <a href=\"\/blog\/testing-shit\">point of view about testing<\/a>. I used Go as the language to concretize it. I had good feedback about that article, and this one is all about how I write tests in Go.<\/p>\n\n<h2 id=\"fixtures\">Fixtures<\/h2>\n<p>I wrote that I don\u2019t like them, but I think they are useful. You can use them to verify the same function, checking the same assertion with different inputs. So let\u2019s say you are testing a function that returns the multiplication of two numbers if the first is even, and the division if not.<\/p>\n\n<p>I will write two tests, one to test even numbers and one to test the other case, and I will set up two fixtures, one for each test. I won\u2019t write just one test with elaborate fixtures, because they are hard to read, and the name of the test function helps a lot to understand the assertion. It is a small example, good for blogging purposes, but I hope you get the idea.<\/p>\n\n<pre><code class=\"language-golang\">package test\n\nimport \"testing\"\n\nfunc MagicFunction(f int, s int) int {\n    if f%2 == 0 {\n        return f * s\n    }\n    return f \/ s\n}\n\nfunc TestEvenInputsShouldReturnMultiplication(t *testing.T) {\n    table := []struct {\n        first  int\n        second int\n        result int\n    }{\n        {2, 1, 2},\n        {4, 10, 40},\n    }\n    for _, s := range table {\n        if r := MagicFunction(s.first, s.second); r != s.result {\n            t.Errorf(\"Got %d, expected %d. They should be the same.\", r, s.result)\n        }\n    }\n}\n\nfunc TestOddInputsShouldReturnDivision(t *testing.T) {\n    table := []struct {\n        first  int\n        second int\n        result int\n    }{\n        {15, 3, 5},\n        {21, 7, 3},\n    }\n    for _, s := range table {\n        if r := MagicFunction(s.first, s.second); r != s.result {\n            t.Errorf(\"Got %d, expected %d. They should be the same.\", r, s.result)\n        }\n    }\n}\n<\/code><\/pre>\n\n<h2 id=\"sub-test\">sub-test<\/h2>\n\n<p>To make the fixtures a bit better I use the <code>t.Run<\/code> function a lot. It is a feature introduced in Go 1.7 as part of the <code>testing<\/code> package.<\/p>\n\n<pre><code class=\"language-go\">package test\n\nimport (\n    \"fmt\"\n    \"testing\"\n)\n\nfunc MagicFunction(f int, s int) int {\n    if f%2 == 0 {\n        return f * s\n    }\n    return f \/ s\n}\n\nfunc TestEvenInputsShouldReturnMultiplication(t *testing.T) {\n    table := []struct {\n        first  int\n        second int\n        result int\n    }{\n        {2, 1, 2},\n        {4, 10, 40},\n    }\n    for _, s := range table {\n        t.Run(fmt.Sprintf(\"%d * %d\", s.first, s.second), func(t *testing.T) {\n            if r := MagicFunction(s.first, s.second); r != s.result {\n                t.Errorf(\"Got %d, expected %d. They should be the same.\", r, s.result)\n            }\n        })\n    }\n}\n<\/code><\/pre>\n\n<p><code>vim-go<\/code> has an option, <code>let g:go_test_show_name=1<\/code>, to show the name of the test as part of the output of :GoTest or :GoTestFunc. It helps a lot to enjoy this feature.<\/p>\n\n<h2 id=\"golden-files\">Golden files<\/h2>\n\n<p>Golden files are something used in different packages in the Go standard library, and Mitchell Hashimoto spoke about them during his brilliant talk about testing at <a href=\"https:\/\/www.youtube.com\/watch?v=8hQG7QlcLBk\">GopherCon 2017<\/a>. In case of complex output, you can verify the result of the tests against the content of a file. It improves order and readability. When you declare a global flag in your test file, it becomes available inside <code>go test<\/code>, so if you run the tests with the update flag they will all pass, but they will rewrite all the golden files instead of comparing against them. This is very useful if you need to compare a lot of bytes.<\/p>\n\n<pre><code class=\"language-go\">update := flag.Bool(\"update-golden-files\", false, \"Update golden files.\")\n<\/code><\/pre>\n\n<pre><code class=\"language-sh\">go test -update-golden-files\n<\/code><\/pre>\n<p>I was using this trick a lot when I was writing PHP code and testing HTTP responses.<\/p>\n\n
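<p>A minimal sketch of the whole pattern (my illustration; <code>renderReport<\/code> is a hypothetical function under test, and the golden file lives in <code>testdata\/<\/code>):<\/p>\n\n<pre><code class=\"language-go\">package report\n\nimport (\n\t\"bytes\"\n\t\"flag\"\n\t\"io\/ioutil\"\n\t\"path\/filepath\"\n\t\"testing\"\n)\n\nvar update = flag.Bool(\"update-golden-files\", false, \"Update golden files.\")\n\n\/\/ renderReport is a hypothetical function under test.\nfunc renderReport() []byte { return []byte(\"monthly report\") }\n\nfunc TestReportOutput(t *testing.T) {\n\tgolden := filepath.Join(\"testdata\", \"report.golden\")\n\tgot := renderReport()\n\tif *update {\n\t\t\/\/ go test -update-golden-files rewrites the expectation.\n\t\tif err := ioutil.WriteFile(golden, got, 0644); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\twant, err := ioutil.ReadFile(golden)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif !bytes.Equal(got, want) {\n\t\tt.Errorf(\"got %q, want the content of %s\", got, golden)\n\t}\n}\n<\/code><\/pre>\n\n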
<h2 id=\"test-helper-and-return-function\">Test helper and return function<\/h2>\n<p>When you have repeatable code across tests, you can create a helper function and use it in your tests. There are two general rules about this approach:<\/p>\n\n<ol>\n  <li>The helper function should have access to the <code>*testing.T<\/code> variable.<\/li>\n  <li>Your helper never returns an error; it marks the test as failed. That\u2019s why it needs access to <code>*testing.T<\/code>.<\/li>\n<\/ol>\n\n<p>Another good trick is to return a function from the helper to clean up what you did in the helper. So let\u2019s say that your helper starts an HTTP server. You can return the HTTP Close function as a callback.<\/p>\n\n<pre><code class=\"language-go\">func testHelperStartHTTPServer(t *testing.T) func() {\n    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n        \/\/ long and complex mock, maybe with a golden file and so on\n    }))\n    return func() { ts.Close() }\n}\n\nfunc TestYourTest(t *testing.T) {\n    hclose := testHelperStartHTTPServer(t)\n    defer hclose()\n    \/\/ All your logic and checks\n}\n<\/code><\/pre>\n<p>I used the same practice when I was writing integration tests using bash and <a href=\"https:\/\/github.com\/sstephenson\/bats\">bats<\/a>. It is a very clean and easy-to-read approach.<\/p>\n\n<h2 id=\"parallel\">parallel<\/h2>\n<p>You can use the function <code>t.Parallel()<\/code> to notify the test runner that your case can run in parallel with other tests marked as parallel. When you write unit tests, you can almost always run them in parallel because they should be completely isolated.<\/p>\n\n<h2 id=\"short-and-verbose\">Short and verbose<\/h2>\n<p><code>-short<\/code> and <code>-v<\/code> are two flags available when you run <code>go test<\/code>. You can use them in your tests:<\/p>\n\n<pre><code>import \"testing\"\n\nfunc TestVeryLongAndExpensiveCapability(t *testing.T) {\n  if testing.Short() {\n    t.Skip(\"skipping: this test is too long and expensive\")\n  }\n  \/\/ ... other code\n}\n<\/code><\/pre>\n<p><code>-short<\/code> describes itself pretty well: you can skip tests that are too long and expensive.<\/p>\n\n<p><code>-v<\/code> allows you to print more:<\/p>\n<pre><code>import \"testing\"\n\nfunc TestVeryLongAndExpensiveCapability(t *testing.T) {\n  if testing.Verbose() {\n    t.Log(\"extra diagnostics, printed only with -v\")\n  }\n  \/\/ ... other code\n}\n<\/code><\/pre>\n\n<h2 id=\"testingquick\">testing\/quick<\/h2>\n<p><a href=\"https:\/\/golang.org\/pkg\/testing\/quick\/\">testing\/quick<\/a> is a nice package that offers a set of utilities to write tests quickly. Go does not have an assertion library in the stdlib, but this can help if you are like me and you are happy not to vendor assertion libraries, because <code>if { }<\/code> with some sugar is what I need.<\/p>\n\n
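<p>A minimal property-based sketch of what the package offers (my example, not from the original post): <code>quick.Check<\/code> generates random inputs and fails on the first counter-example it finds.<\/p>\n\n<pre><code class=\"language-go\">package test\n\nimport (\n\t\"testing\"\n\t\"testing\/quick\"\n)\n\nfunc Double(x int) int { return x * 2 }\n\n\/\/ The property: Double always returns an even number, whatever the input.\nfunc TestDoubleIsAlwaysEven(t *testing.T) {\n\tproperty := func(x int) bool {\n\t\treturn Double(x)%2 == 0\n\t}\n\tif err := quick.Check(property, nil); err != nil {\n\t\tt.Error(err)\n\t}\n}\n<\/code><\/pre>\n\n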
<p>So that\u2019s it, have fun and write tests!<\/p>\n"},{"title":"The Go awesomeness","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/go-awesomeness"}},"description":"After 1 year writing Go every day at work, this is why I like to work with it.","image":"https:\/\/gianarb.it\/img\/fight-club.jpg","updated":"2018-04-09T10:38:27+00:00","published":"2018-04-09T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/go-awesomeness","content":"<p>It\u2019s been one year since I started to use Go every day at work. I was using it before, but for fun or OSS projects. I was looking for my next challenge; I was mainly working with PHP and JavaScript previously, and I knew that a compiled, statically typed language was my next step.<\/p>\n\n<p>At my previous job at CurrencyFair, the environment was pretty standard for a financial tech company: backend in Java, frontend in PHP. But my experience with all the interfaces and abstract classes that I created in Java at that time made me hate that language. So I was looking for something different.<\/p>\n\n<p>I was, as I am now, involved in automation, cloud and operations as well as development, so the fact that all the tools like Docker, InfluxDB, Kubernetes, Consul and Vault were in Golang made it, for me as an OSS addict, the natural choice. Now, after all this time, I am ready to write why I think Go is the right choice for me.<\/p>\n\n<h2 id=\"1-abstraction-and-maintainability\">1. abstraction and maintainability<\/h2>\n<p>I wrote a lot about <a href=\"\/blog\/the-abstract-manifesto\">this topic<\/a>, so I am not going to repeat myself. But I think maintainability is tied together with abstraction. Previously, when I was working with PHP, we always had services, injection and so on. In that environment it was good, but all that abstraction, like in Java, doesn\u2019t make your code more flexible. It makes it hard to understand in the long run, and code needs to be written with history in mind, because deleting code is very hard. Go, with its interface implementation and the way it forces you to structure the project, helps the codebase grow in a better way.<\/p>\n\n<h2 id=\"2-stdlib\">2. Stdlib<\/h2>\n<p>Communities have wasted time, across languages, identifying the right way to indent code. Go comes with that decision made. Same for testing: how to write automated tests and benchmarks is inside the language. No libraries, it is there. More in general, os, net, net\/http, image and so on: a lot of stuff is provided by the language itself. It is great because you don\u2019t need anything to start, other than Go. Compared with other languages you can do a lot more things out of the box. Having all these features inside Go guarantees compatibility over time: they won\u2019t break compatibility for the next years, and the code is developed and reviewed by a large number of people.<\/p>\n\n<h2 id=\"3-pprof\">3. pprof<\/h2>\n<p>pprof is a profiler, and it is shipped as part of Go. You can use it via the CLI, and it also has an excellent HTTP package under <a href=\"https:\/\/golang.org\/pkg\/net\/http\/pprof\/\">net\/http\/pprof<\/a>.<\/p>\n\n
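<p>Enabling it over HTTP is a blank import away; this is the canonical usage from the package documentation, which registers the <code>\/debug\/pprof\/*<\/code> handlers on the default mux:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"log\"\n\t\"net\/http\"\n\t_ \"net\/http\/pprof\" \/\/ registers \/debug\/pprof\/* on http.DefaultServeMux\n)\n\nfunc main() {\n\t\/\/ Then: go tool pprof http:\/\/localhost:6060\/debug\/pprof\/profile\n\tlog.Println(http.ListenAndServe(\"localhost:6060\", nil))\n}\n<\/code><\/pre>\n\n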
<p>Just to show you how powerful it can be: InfluxDB extends it to export a zip archive with all the information we need to troubleshoot the database\u2019s behavior:<\/p>\n\n<pre><code class=\"language-go\">func (h *Handler) handleProfiles(w http.ResponseWriter, r *http.Request) {\n    switch r.URL.Path {\n        case \"\/debug\/pprof\/cmdline\":\n            httppprof.Cmdline(w, r)\n        case \"\/debug\/pprof\/profile\":\n            httppprof.Profile(w, r)\n        case \"\/debug\/pprof\/symbol\":\n            httppprof.Symbol(w, r)\n        case \"\/debug\/pprof\/all\":\n            h.archiveProfilesAndQueries(w, r)\n        default:\n            httppprof.Index(w, r)\n    }\n}\n<\/code><\/pre>\n<p>Here is all the code: <a href=\"https:\/\/github.com\/influxdata\/influxdb\/blob\/442581d299b7d642e073bbe42112fa9b58fb071a\/services\/httpd\/pprof.go#L21\">influxdata\/influxdb<\/a>. This is super useful because we can ask customers or developers in the OSS community to export and upload the archive, so we can see what is going on. Having a standard way to troubleshoot and export a profile allows us to build visualization or static analysis on top of it for common calculations.<\/p>\n\n<h2 id=\"4-delve\">4. delve<\/h2>\n<p>A good debugging session is the best way to approach a new application, or to go deeper when learning a language or a piece of software. <a href=\"https:\/\/github.com\/derekparker\/delve\">delve<\/a> is easy to set up and to use. Even if you are not a gdb\/debugger superhero, and I am not, you will be able to make your first steps with delve. So it is a nice starting point too.<\/p>\n\n<h2 id=\"5-godoc\">5. godoc<\/h2>\n<p>Other than being an excellent way to generate documentation from source code, I use it a lot even when I am not designing libraries, just to double-check that my package exposes a comprehensible set of public methods. I always think about what I am exposing to the outside when I write code. APIs are not just a JSON or HTTP thing: every object exposes its API, and you need to be aware of how you are building the interaction between the internal state and the outside. Avoiding misuse of your structs is your responsibility as a developer, and godoc helps me identify poor decisions.<\/p>\n\n<h2 id=\"6-vim-go\">6. vim-go<\/h2>\n<p>I would like to stay in my terminal all day, and vim-go allows me to write good code in my comfort zone. In the past I wrote a lot of vim scripts and plugins, and following how fatih and all the other maintainers are developing <a href=\"https:\/\/github.com\/fatih\/vim-go\">vim-go<\/a> is great. Bonus point: they recently added support for delve, so you can now debug Golang applications in vim!<\/p>\n\n<h2 id=\"7-dep\">7. dep<\/h2>\n<p>Dependency management is probably the worst thing that Go has. The good thing is that now we have dep, and it should become the standard way to manage dependencies. Right now the situation looks a lot like this:<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/fight-club.jpg\" \/><\/p>\n\n<p>Govendor, go get, glide: currently there are a lot of different ways to manage dependencies, and it generates a lot of confusion, but I hope in the end we will converge on just one. Probably dep.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>More in general, with Go I am learning that the language is only one aspect of becoming a good developer. 
A good developer needs to know the language, but the best way to go deeper into it is writing tests and benchmarks, profiling applications and using the debugger. All these tools make my life as a developer easy. An easy life for me means that I can go deeper into solving problems, and indirectly that will make me a better developer.<\/p>\n\n<p>Go is fun!<\/p>\n"},{"title":"Observability according to me","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/observability"}},"description":"Prometheus, InfluxDB, TimescaleDB, Cassandra and all the time series databases that show up every week are a clear sign that now we need more than just a way to store metrics. I think it is now clear that collecting more metrics is not the point. More data is not directly related to a deeper understanding of our system.","image":"https:\/\/gianarb.it\/img\/mountain-garbage.jpg","updated":"2018-04-04T10:38:27+00:00","published":"2018-04-04T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/observability","content":"<p>I started to read about observability almost one year ago. <a href=\"https:\/\/twitter.com\/mipsytipsy\">Charity Majors<\/a> comes to my mind when I think about this topic, and she is the person who is pushing very hard on it.<\/p>\n\n<p>This is probably the natural evolution of how we approach monitoring.<\/p>\n\n<p>Distributed systems require a different way to approach the three monitoring pillars: collection, storage and analytics.<\/p>\n\n<p>Understanding a microservices environment brings a new layer of complexity, and the most obvious consequence is the amount of data that we are storing compared to before; I think it is way more.<\/p>\n\n<p>Prometheus, InfluxDB, TimescaleDB, Cassandra and all the time series databases that show up every week are a clear sign that now we need more than just a way to store metrics. I think it is now clear that collecting more metrics is not the point. More data is not directly related to a deeper understanding of our system.<\/p>\n\n<p>Observability, for a lot of companies, looks like a new way to sell analytics platforms, but according to me it\u2019s a scream to bring us back to the problem: \u201cHow can we understand what is happening?\u201d or, even better, \u201cHow should we use the data we have to understand what\u2019s going on?\u201d.<\/p>\n\n<p>All the data should be organized, reliable and usable. Logs, metrics and traces are part of the resolution; the brain that analyzes them and gets value out of them is what observability means to me.<\/p>\n\n<p>Visualization is one aspect; proactive monitoring, correlation and hierarchy are other steps. Looking at our old graphs, all of them are driven by the hostname, for example. But now we have containers, we have virtual machines, and immutable infrastructure makes rebuilding less costly and more secure than an incremental update. The name of the server should not be the keyword for our queries; the focus should move to the role of services.<\/p>\n\n<p>Think about your Kubernetes cluster: you label servers based on what they will run, and if something unusual happens, the first thing to do is to move the node out of the production pool. The autoscaler will replace it, and you will troubleshoot it later.<\/p>\n\n<p>Before, we were looking at processes, and we were keeping them alive like the Olympic flame, but containers are making them volatile. We spin them up and down for every request in some serverless environments. 
What we care about and what we should monitor are the events that flow across our services; that\u2019s the new gold. We can lose 1000 containers, but we can\u2019t miss the purchase order made by a customer. All our effort should move to that side.<\/p>\n\n<p>I love this point of view because it brings us back to what really matters: our applications.<\/p>\n\n<p><img src=\"\/img\/mountain-garbage.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>According to me, the mountain of waste shown in the picture explains really well our current situation: we collected what ended up being a lot of garbage, and now we need to climb it looking for a better point of view. I think the data in our time series databases is not garbage but gold; it\u2019s just not as simple to use as it should be.<\/p>\n\n<p>That\u2019s why it is great that companies are building tools to fill the gap: <a href=\"https:\/\/github.com\/influxdata\/ifql\">IFQL<\/a> is an example. The idea behind the project is to build a language to query and manipulate data in an easy way. The same goes for companies like Honeycomb, or open source projects like Grafana and Chronograf, that are trying to make this data easy to use.<\/p>\n\n<p>We spoke about tools, but there is another big aspect, and it is all cultural: distributed teams need different tools to collaborate and troubleshoot problems, different UIs and ways to interact with graphs and metrics.<\/p>\n"},{"title":"I don't give a shit about testing","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/testing-shit"}},"description":"This is all about how I approach testing in development. TDD, DDD, unit test, integration test. It should make my development faster and my code easy to maintain. We have a lot of different techniques because we need to be good at picking the right one.","image":"https:\/\/gianarb.it\/img\/laziness.jpg","updated":"2018-03-29T10:38:27+00:00","published":"2018-03-29T10:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/testing-shit","content":"<p>That\u2019s what I learned during my experience as a developer. It doesn\u2019t matter which language you end up working with: if you are making HTTP APIs or things like that, you don\u2019t have an excuse. Writing tests makes your development faster, and it will drastically improve the maintainability of your project.<\/p>\n\n<p>In this post, I would like to tell you how I approach testing, particularly in Go, obviously.<\/p>\n\n<p>First of all, when you create a new file, you should write its <code>_test.go<\/code> child. It\u2019s hard to tell you who should be the child of whom. Sometimes I start by writing everything inside a test function, just because running the actual test is faster than compiling, running the binary, triggering the right entry point and so on. When I am satisfied, I move the code to a function, and I leave the assertions I wrote as a new test. <strong>Pretty good<\/strong>.<\/p>\n\n<blockquote>\n  <p>I don\u2019t give a shit about automated testing. 
I write tests.<\/p>\n<\/blockquote>\n\n<p>I use <a href=\"https:\/\/github.com\/fatih\/vim-go\"><code>vim-go<\/code><\/a>, and <code>:GoTestFunc<\/code> is probably my most used shortcut during my day-to-day job.<\/p>\n\n<p>When I can choose, I don\u2019t use assertion libraries; the <code>testing<\/code> package is enough for me, and dependency management in Go is a pain, so the fewer things I vendor, the better I feel about myself.<\/p>\n\n<p><img src=\"\/img\/laziness.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>I use fixtures, but I don\u2019t like them. I prefer to write more small tests than complicated fixtures.<\/p>\n\n<p>A single test for me is more descriptive, and I don\u2019t mind writing redundant code; I can always refactor it later or move it to some helper function. A complicated fixture will be hard to maintain. The name of the function is an excellent way to describe what you are covering in your test, and the function itself creates a beautiful block that improves readability.<\/p>\n\n<pre><code class=\"language-go\">func TestCarComposition(t *testing.T) {\n    fixtures := []car.Composition{\n        {\"blue\", \"europe\", 1, false, \"2011-12-05\", \"ford\"},\n        {\"\", \"\", 35, true, \"\", \"ford\"},\n        {\"red\", \"usa\", 0, true, \"\", \"fiat\"},\n        {\"white\", \"\", 35, true, \"\", \"kia\"},\n        {\"orange\", \"\", 1, true, \"2010-05-12\", \"\"},\n        {\"\", \"\", 0, true, \"\", \"ford\"},\n    }\n}\n<\/code><\/pre>\n\n<p>Bonus point: as you can see, fixtures are sad to read!<\/p>\n\n<p>Even unit vs. integration vs. functional is a very annoying discussion. Don\u2019t tell me about TDD, BDD, CCC, DDD things. I don\u2019t care: they are all amazing as long as they make my development simple.<\/p>\n\n<p>So, CDD is probably my best test methodology: <strong>Comfort driven development<\/strong>.<\/p>\n\n<p>Usually, when I am writing a computing function that elaborates maps, strings or files without using too many external resources, I start from unit tests, because it makes iteration faster, as I said before. And it won\u2019t require too many mocks. I don\u2019t like mocks.<\/p>\n\n<h2 id=\"lets-discuss-mocks\">Let\u2019s discuss mocks<\/h2>\n<p>Mocks are a pain; you end up bored when you write them, they won\u2019t fail when it\u2019s useful for you to see an error, and they will fail when you don\u2019t care. So comfort looks very far from mocks!<\/p>\n\n
<p>When mocks become too complicated and I can write another kind of test, I go with that solution: maybe an integration test, or I will try to write the simplest mock possible. Sometimes even an entire web server can be a valuable solution:<\/p>\n\n<pre><code class=\"language-go\">func TestInfluxDBSdkGetTheRightValues(t *testing.T) {\n    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {\n        data := influxdb.Response{\n            Results: []influxdb.Result{\n                {\n                    Series: []influxdbModels.Row{},\n                },\n            },\n        }\n        w.Header().Add(\"Content-Type\", \"application\/json\")\n        w.WriteHeader(http.StatusOK)\n        _ = json.NewEncoder(w).Encode(data)\n    }))\n    defer ts.Close()\n\n    config := influxdb.HTTPConfig{Addr: ts.URL}\n    client, _ := influxdb.NewHTTPClient(config)\n    defer client.Close()\n\n    \/\/ Whatever you need to check\n}\n<\/code><\/pre>\n<p>You need to play carefully; these tests are slower and more expensive in resources. But I like the idea of taking the faster solution when I am developing; you can come back to your tests later, when the feature is more stable and better designed. Writing tests should not slow me down too much: I am looking for a way to write the implementation and the tests fast, to iterate on both of them, rather than wasting time making everything perfect. Nothing lasts forever and nothing is ever complete in programming, so design your environment to be easy to change.<\/p>\n\n<h2 id=\"integration-tests\">Integration tests<\/h2>\n<p>I am a CLI kind of person, so I often send HTTP requests via cURL. Docker makes it very easy, from day one, to start and stop your application, clean databases and so on.<\/p>\n\n<p><a href=\"https:\/\/github.com\/sstephenson\/bats\"><code>bats<\/code><\/a> combines these two sentences. It is an automation test framework for bash. It is very simple to set up, it allows me to copy-paste some cURL, and with jq you can make the checks you need over your JSON response.<\/p>\n\n<p>An integration test suite made with bats looks like this:<\/p>\n\n<ol>\n  <li>An \u201cinit\u201d file in bash where you can run setup and teardown functions before and after every test. Usually, you can use those functions to spin up and down the containers that you need for your tests. This is the one that I wrote for this example:<\/li>\n<\/ol>\n\n<pre><code class=\"language-bash\">#!\/bin\/bash\n\nfunction setup() {\n  teardownCallback=$(init)\n}\n\nfunction teardown() {\n  eval $teardownCallback\n}\n\nfunction getHost() {\n  echo \"http:\/\/localhost\"\n}\n\nfunction init {\n  executionID=$(cat \/dev\/urandom | tr -dc 'a-zA-Z0-9' | fold -w 7 | head -n 1)\n  containerLabels=\"exec=${executionID}\"\n  # stdout of init becomes the teardown command, so keep docker quiet\n  docker run -d -l $containerLabels -p 80:80 nginx &gt; \/dev\/null\n  echo \"docker ps -aq -f 'label=${containerLabels}' | xargs docker rm -f\"\n}\n<\/code><\/pre>\n<ol>\n  <li>You have a set of <code>.bats<\/code> files with the various scenarios. I wrote one to check that the status code is 200 for the nginx home:<\/li>\n<\/ol>\n\n<pre><code class=\"language-bash\">#!\/usr\/bin\/env bats\n\nload utils\n\n@test \"Nginx home return 200\" {\n  statusCode=$(curl -I -X GET \"$(getHost)\" 2&gt;\/dev\/null | head -n 1 | cut -d' ' -f2)\n  [ $statusCode -eq 200 ]\n}\n<\/code><\/pre>\n\n<p>What you are running is a <code>bats<\/code> test to check that <code>nginx:latest<\/code> is serving the right page. Your use case will be ten times more complicated.<\/p>\n\n<p>Another reason to take this approach is bash itself. 
If you are not a bash expert, you will probably end up writing straightforward tests: cURL, grep, regex and some pipes. Nothing more.<\/p>\n\n<p>And you won\u2019t use any of the code that runs your application. That is important to avoid weird buggy tests.<\/p>\n\n<h2 id=\"developer-happiness\">developer happiness<\/h2>\n<p>Tests are a methodology to decrease the cost of maintenance and to improve your ability to write code.<\/p>\n\n<p>It should not be a fashionable way to show how good you are as a developer. You will be a good developer as a side effect.<\/p>\n\n<p>I look at all the different ways to test my code as a toolset. AI is becoming very smart, so we need to be less \u201cserver\u201d and more human being. 100% coverage for unit tests looks a lot like something that a server can do. Pick the right method based on your feeling.<\/p>\n\n<script>\n$(document).ready(function() {\n\t$('body').css(\"background\", \"#F5F3E6\");\n});\n<\/script>\n\n"},{"title":"Review book Database Reliability Engineering","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/database-reliability-engineer-review"}},"description":"Review of the book Database Reliability Engineering, by Laine Campbell and Charity Majors. Published by O'Reilly","image":"https:\/\/gianarb.it\/img\/dbre-book.jpg","updated":"2018-03-27T09:08:27+00:00","published":"2018-03-27T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/database-reliability-engineer-review","content":"<p>The authors of <a href=\"https:\/\/amzn.to\/2FE5z4V\">Database Reliability Engineering<\/a> are Laine Campbell and Charity Majors. It is published by O\u2019Reilly. You can buy it on Amazon.<\/p>\n\n<p>You probably know the book <a href=\"https:\/\/gianarb.it\/blog\/site-reliability-engineering-review\">Site Reliability Engineering<\/a>; if you don\u2019t, I reviewed it a few days ago.<\/p>\n\n<p><img src=\"\/img\/dbre-book.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>This book walks the same path but focuses on databases. I work for InfluxData as an SRE, and I deal every day with databases running on our cloud product, so I am into this topic: not because I am good as a DBA, but as a developer with a background in distributed systems and cloud computing.<\/p>\n\n<p>It doesn\u2019t really matter if you manage databases as a DBA or if you use them as a developer: this book contains useful content for both categories.<\/p>\n\n<p>It is a dense book: I started reading it a few months ago and at some point I stopped, just because it contains so many notions and experiences that it requires some time to metabolize them.<\/p>\n\n<p>I particularly enjoyed Chapter 7 about <strong>Backup and Recovery<\/strong>. But Chapter 3 about <strong>Risk Management<\/strong> is also great, because it goes deeper into how metrics should drive the way you look at risks and outages.<\/p>\n\n<p>My daily job is all around database orchestration, containers and so on. So I found this book very useful for my day-to-day job, and it reinforced the expectations I had about the application that I am building.<\/p>\n\n<p>I will try to go deeper into backup management, to make recovery part of a more structured pipeline and to be sure that it is always usable, for example.<\/p>\n\n<p>It is a very practical book, and you can feel all the enthusiasm and the experience of the two authors, Charity and Laine. If you are happy to learn from people who have their hands dirty, this book is your book. 
It drives you an\nstory and less learned.<\/p>\n\n<p>How to manage migration and how to fill the gap between developer and DBA\nbecause they are both the same goals the success of the company and the project.<\/p>\n\n<p>If we can keep both on the same loop avoiding backstabbing we will increase our\nchance of success. If you are a manages and you see some friction in your teams,\nthis book can give you some good feedback.<\/p>\n"},{"title":"How to use a Forwarding Proxy with golang","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/golang-forwarding-proxy"}},"description":"Cloud, Docker, Kubernetes make your environment extremely dynamic, it has a lot of advantages but it adds another layer of complexity. This article is about forward proxy and golang. How to configure your http Client to use an http, https forward proxy for your golang application to increase security, scalability and to have a set of public ips for outbound traffic.","image":"https:\/\/gianarb.it\/img\/gopher.png","updated":"2018-03-21T09:38:27+00:00","published":"2018-03-21T09:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/golang-forwarding-proxy","content":"<p>A forwarding proxy is a proxy configuration that handle requests from a set of\ninternal clients that are trying to create a connection to the outside.<\/p>\n\n<p>In practice is a man in the middle between your application and the server that you are\ntrying to connect. It works over the HTTP(S) protocol and it is implemented at the\nedge of your infrastructure.<\/p>\n\n<p>Usually, you can find it in large organizations or universities and it is used as\nadditional control mechanism for authorization and security.<\/p>\n\n<p>I find it useful when you work with containers or in a dynamic cloud environment\nbecause you will have a set of servers for all the outbound network\ncommunication.<\/p>\n\n<p>If you work in a dynamic environment as AWS, Azure and so on you will end up\nhaving a variable number of servers and also a dynamic number of public IPs.\nSame if your application runs on a Kubernetes cluster. Your container can be\neverywhere.<\/p>\n\n<p>Now let\u2019s suppose that a customer asks you to provide a range of public IPs\nbecause he needs to set up a firewall\u2026 How can\nyou provide this feature?  In some environments can be very simple, in others\nvery complicated.<\/p>\n\n<p>1st December 2015 a users asked this question on the <a href=\"https:\/\/discuss.circleci.com\/t\/circleci-source-ip\/1202\">CircleCI\nforum<\/a> this request is\nstill open. This is just an example, CircleCi is great. I am not complaining\nabout them.<\/p>\n\n<p>One of the possible ways to fix this problem is via the forwarding proxy. 
You can spin up a set of nodes with static IPs and you can offer that list to the customer.<\/p>\n\n<p>Almost all cloud providers have a way to do that: floating IPs on DigitalOcean or Elastic IPs on AWS.<\/p>\n\n<p>You can configure your applications to forward requests to that pool, and the end services will see the IPs of the forward proxy nodes and not the internal ones.<\/p>\n\n<p>This can be another security layer for your infrastructure, because you will be able to control and scan the packets that leave your network in a really simple way and in a centralized place.<\/p>\n\n<p>This is not a single point of failure, because you can spin up more than one forward proxy, and they scale really well.<\/p>\n\n<p>Under the hood, a forward proxy relies on the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/HTTP\/Methods\/CONNECT\">HTTP method <code>CONNECT<\/code><\/a>.<\/p>\n\n<blockquote>\n  <p>The CONNECT method converts the request connection to a transparent TCP\/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.<\/p>\n<\/blockquote>\n\n<p>A lot of HTTP clients across languages already support this in a very transparent way. I built a very small example using golang and <a href=\"https:\/\/www.privoxy.org\/\">privoxy<\/a> to show you how simple it is.<\/p>\n\n<p>First of all, let\u2019s build an application called <code>whoyare<\/code>. It is an HTTP server that returns your remote address:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"encoding\/json\"\n\t\"net\/http\"\n)\n\nfunc main() {\n\thttp.HandleFunc(\"\/whoyare\", func(w http.ResponseWriter, r *http.Request) {\n\t\tw.Header().Set(\"Content-Type\", \"application\/json\")\n\t\tbody, _ := json.Marshal(map[string]string{\n\t\t\t\"addr\": r.RemoteAddr,\n\t\t})\n\t\tw.Write(body)\n\t})\n\thttp.ListenAndServe(\":8080\", nil)\n}\n<\/code><\/pre>\n\n<p>You can <code>GET<\/code> the route <code>\/whoyare<\/code> and you will receive a JSON like <code>{\"addr\": \"34.35.23.54\"}<\/code>, where <code>34.35.23.54<\/code> is your public address. If you run <code>whoyare<\/code> on your laptop and make a request from your terminal, you should get <code>localhost<\/code> as the remote address. You can use curl to try it:<\/p>\n\n<pre><code class=\"language-bash\">18:36 $ curl -v http:\/\/localhost:8080\/whoyare\n* TCP_NODELAY set\n&gt; GET \/whoyare HTTP\/1.1\n&gt; User-Agent: curl\/7.58.0\n&gt; Accept: *\/*\n&gt;\n&lt; HTTP\/1.1 200 OK\n&lt; Content-Type: application\/json\n&lt; Date: Sun, 18 Mar 2018 17:36:40 GMT\n&lt; Content-Length: 31\n&lt;\n* Connection #0 to host localhost left intact\n{\"addr\":\"localhost:38606\"}\n<\/code><\/pre>\n\n<p>I wrote another application; it uses <code>http.Client<\/code> to print the response on stdout. 
If you have the server running you can run it:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"io\/ioutil\"\n\t\"log\"\n\t\"net\/http\"\n\t\"os\"\n)\n\ntype whoiam struct {\n\tAddr string\n}\n\nfunc main() {\n\t\/\/ The target URL can be overridden via the URL environment variable.\n\turl := \"http:\/\/localhost:8080\"\n\tif \"\" != os.Getenv(\"URL\") {\n\t\turl = os.Getenv(\"URL\")\n\t}\n\tlog.Printf(\"Target %s.\", url)\n\tresp, err := http.Get(url + \"\/whoyare\")\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\tdefer resp.Body.Close()\n\tbody, err := ioutil.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\tprintln(\"You are \" + string(body))\n}\n<\/code><\/pre>\n\n<p>This is a very simple example, but you can apply it to more\ncomplex environments.<\/p>\n\n<p>To make this example a bit clearer I created two virtual machines on\nDigitalOcean, one running privoxy and the other one running <code>whoyare<\/code>.<\/p>\n\n<ul>\n  <li><strong>whoyare<\/strong>: public ip 188.166.17.88<\/li>\n  <li><strong>privoxy<\/strong>: public ip 167.99.41.79<\/li>\n<\/ul>\n\n<p>Privoxy is a forward proxy that is very simple to set up; nginx and haproxy don\u2019t fit very\nwell for this use case because they do not support the CONNECT method.<\/p>\n\n<p>I built a docker image\n<a href=\"https:\/\/hub.docker.com\/r\/gianarb\/privoxy\/\"><code>gianarb\/privoxy<\/code><\/a>; it\u2019s on Docker\nHub. You can run it and by default it listens on port 8118.<\/p>\n\n<pre><code class=\"language-bash\">core@coreos-s-1vcpu-1gb-ams3-01 ~ $ docker run -it --rm -p 8118:8118\ngianarb\/privoxy:latest\n2018-03-18 17:28:05.589 7fbbf41dab88 Info: Privoxy version 3.0.26\n2018-03-18 17:28:05.589 7fbbf41dab88 Info: Program name: privoxy\n2018-03-18 17:28:05.591 7fbbf41dab88 Info: Loading filter file:\n\/etc\/privoxy\/default.filter\n2018-03-18 17:28:05.599 7fbbf41dab88 Info: Loading filter file:\n\/etc\/privoxy\/user.filter\n2018-03-18 17:28:05.599 7fbbf41dab88 Info: Loading actions file:\n\/etc\/privoxy\/match-all.action\n2018-03-18 17:28:05.600 7fbbf41dab88 Info: Loading actions file:\n\/etc\/privoxy\/default.action\n2018-03-18 17:28:05.607 7fbbf41dab88 Info: Loading actions file:\n\/etc\/privoxy\/user.action\n2018-03-18 17:28:05.611 7fbbf41dab88 Info: Listening on port 8118 on IP address\n0.0.0.0\n<\/code><\/pre>\n\n<p>The second step is to build <code>whoyare<\/code> and scp it to your server.
You can\nbuild it using the command:<\/p>\n\n<pre><code>$ CGO_ENABLED=0 GOOS=linux go build -o bin\/server_linux -a .\/whoyare\n<\/code><\/pre>\n<p>Now that we have the application up and running, we can use cURL to query it\ndirectly and via privoxy.<\/p>\n\n<p>Let\u2019s try directly as we did previously:<\/p>\n\n<pre><code>$ curl -v http:\/\/your-ip:8080\/whoyare\n<\/code><\/pre>\n\n<p><code>cURL<\/code> uses an environment variable <code>http_proxy<\/code> to forward the requests through\nthe proxy:<\/p>\n\n<pre><code>$ http_proxy=http:\/\/167.99.41.79:8118 curl -v http:\/\/188.166.17.88:8080\/whoyare\n*   Trying 167.99.41.79...\n* TCP_NODELAY set\n* Connected to 167.99.41.79 (167.99.41.79) port 8118 (#0)\n&gt; GET http:\/\/188.166.17.88:8080\/whoyare HTTP\/1.1\n&gt; Host: 188.166.17.88:8080\n&gt; User-Agent: curl\/7.58.0\n&gt; Accept: *\/*\n&gt; Proxy-Connection: Keep-Alive\n&gt;\n&lt; HTTP\/1.1 200 OK\n&lt; Content-Type: application\/json\n&lt; Date: Sun, 18 Mar 2018 17:37:02 GMT\n&lt; Content-Length: 29\n&lt; Proxy-Connection: keep-alive\n&lt;\n* Connection #0 to host 167.99.41.79 left intact\n{\"addr\":\"167.99.41.79:58920\"}\n<\/code><\/pre>\n<p>As you can see I have set <code>http_proxy=http:\/\/167.99.41.79:8118<\/code> and the response\ndoesn\u2019t contain my public IP but the proxy\u2019s one.<\/p>\n\n<p><img src=\"\/img\/frankenstain-jr.jpg\" alt=\"\" \/><\/p>\n\n<p>These are the logs that you should expect from privoxy for the requests crossing it:<\/p>\n\n<pre><code>2018-03-18 17:28:22.886 7fbbf41d5ae8 Request: 188.166.17.88:8080\/whoyare\n2018-03-18 17:32:29.495 7fbbf41d5ae8 Request: 188.166.17.88:8080\/whoyare\n<\/code><\/pre>\n\n<p>The client that you ran previously connects by default to <code>localhost:8080<\/code>,\nbut you can override the target URL via the env var <code>URL=http:\/\/188.166.17.88:8080<\/code>.\nRunning the following command I reached <code>whoyare<\/code> directly:<\/p>\n\n<pre><code>$ URL=http:\/\/188.166.17.88:8080 .\/bin\/client_linux\n2018\/03\/18 18:37:59 Target http:\/\/188.166.17.88:8080.\nYou are {\"addr\":\"95.248.202.252:38620\"}\n<\/code><\/pre>\n\n<p>The golang <code>http.Client<\/code> supports a set of environment\nvariables to configure the proxy. It makes everything very flexible because\nyou can pass\nthese variables to any service already running and it will just work.<\/p>\n\n<pre><code>export HTTP_PROXY=http:\/\/http_proxy:port\/\nexport HTTPS_PROXY=http:\/\/https_proxy:port\/\nexport NO_PROXY=127.0.0.1, localhost\n<\/code><\/pre>\n<p>The first two are very simple: one is the proxy for HTTP requests, the\nsecond for HTTPS. <code>NO_PROXY<\/code> excludes a set of hostnames; the hostnames listed\nthere won\u2019t cross the proxy. In my case localhost and 127.0.0.1.<\/p>
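\n\n<p>If you prefer not to rely on environment variables, you can also set the proxy\nexplicitly on the transport. This is a minimal sketch I put together, assuming the same\nprivoxy and <code>whoyare<\/code> addresses used above; <code>http.ProxyURL<\/code> comes from the standard\nlibrary, and you could swap in <code>http.ProxyFromEnvironment<\/code> to honor the variables instead:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"io\/ioutil\"\n\t\"log\"\n\t\"net\/http\"\n\t\"net\/url\"\n)\n\nfunc main() {\n\t\/\/ Address of the forward proxy (privoxy in this example).\n\tproxyURL, err := url.Parse(\"http:\/\/167.99.41.79:8118\")\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\t\/\/ http.ProxyURL forces every request of this client through the proxy,\n\t\/\/ regardless of HTTP_PROXY and NO_PROXY.\n\tclient := &amp;http.Client{\n\t\tTransport: &amp;http.Transport{Proxy: http.ProxyURL(proxyURL)},\n\t}\n\tresp, err := client.Get(\"http:\/\/188.166.17.88:8080\/whoyare\")\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\tdefer resp.Body.Close()\n\tbody, err := ioutil.ReadAll(resp.Body)\n\tif err != nil {\n\t\tlog.Fatal(err.Error())\n\t}\n\tprintln(\"You are \" + string(body))\n}\n<\/code><\/pre>\n\n<p>This gives you a per-client configuration instead of a per-process one.<\/p>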
\n\n<pre><code>HTTP_PROXY=http:\/\/forwardproxy:8118\n     +--------------+           +----------------+         +----------------+\n     |              |           |                |         |                |\n     |   client     +----------^+ forward proxy  +--------^+    whoyare     |\n     |              |           |                |         |                |\n     +--------------+           +----------------+         +----^-----------+\n                                                                |\n                                                                |\n    +---------------+                                           |\n    |               |                                           |\n    |   client      +-------------------------------------------+\n    |               |\n    +---------------+\n   HTTP_PROXY not configured\n<\/code><\/pre>\n<p>The client with the environment variables configured will cross the forward\nproxy. The other client will reach it directly.<\/p>\n\n<p>This granularity is very important. It\u2019s very flexible because other than a\n\u201cper-process\u201d configuration you can also select which requests to forward and which to exclude.<\/p>\n\n<pre><code>$ HTTP_PROXY=http:\/\/167.99.41.79:8118 URL=http:\/\/188.166.17.88:8080\n.\/bin\/client_linux\n2018\/03\/18 18:39:18 Target http:\/\/188.166.17.88:8080.\nYou are {\"addr\":\"167.99.41.79:58922\"}\n<\/code><\/pre>\n<p>As you can see we just reached <code>whoyare<\/code> via the proxy, and the <code>addr<\/code> in the response is\nnot ours but the proxy\u2019s one.<\/p>\n\n<p>The last command is a bit weird but it is just to show how <code>NO_PROXY<\/code> works.\nWe set the proxy but exclude the <code>whoyare<\/code> host, and as expected the request doesn\u2019t\ncross the proxy:<\/p>\n\n<pre><code>$ HTTP_PROXY=http:\/\/167.99.41.79:8118 URL=http:\/\/188.166.17.88:8080 NO_PROXY=188.166.17.88 .\/bin\/client_linux\n2018\/03\/18 18:42:03 Target http:\/\/188.166.17.88:8080.\nYou are {\"addr\":\"95.248.202.252:38712\"}\n<\/code><\/pre>\n<p>Take this article as a practical introduction to golang and forward proxies. You can\nsubscribe to my <a href=\"\/atom.xml\">rss feed<\/a> or you can follow me on\n<a href=\"https:\/\/twitter.com\/gianarb\">@twitter<\/a>. Probably I will write about how to\nreplace <code>privoxy<\/code> with golang and about how to set up and deploy this solution on\nKubernetes. So let me know what to write first!<\/p>\n"},{"title":"The abstract manifesto","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/the-abstract-manifesto"}},"description":"Often, looking at code, I spot a lot of places where it looks too complicated. Disappointment is the feeling I get reading classes with weird names, chains of abstractions or interfaces used only once. Abstraction is often the reason for all my sadness.","image":"https:\/\/gianarb.it\/img\/shit-pretty.png","updated":"2018-03-17T10:08:27+00:00","published":"2018-03-17T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/the-abstract-manifesto","content":"<p>This is a personal outburst. Stop abstracting by default.<\/p>\n\n<p>I have worked on too many applications abstracted by default. And by abstracted I mean\ncomplicated.<\/p>\n\n<p>Abstraction is easy to justify when you actually need it.
If you need to think too\nmuch about why that crappy code should have an interface, or why that method should\naccept an interface and not an object, you are off track.<\/p>\n\n<p>Abstraction is not the answer; code architecture is. Unit testing helps, and\nintegration tests are the key in a modern microservices environment.<\/p>\n\n<p>Don\u2019t waste your time creating interfaces that nothing will reuse. If you don\u2019t\nknow what to do, run.<\/p>\n\n<p>There are languages and design patterns that probably train your brain to look for\nabstraction everywhere. I worked with a Java developer who wasn\u2019t able to write a\nclass without an interface, or without its abstract counterpart. My question was: \u201cWhy are\nwe doing that?\u201d. Compliance.<\/p>\n\n<blockquote>\n  <p>Dude, your world is a very boring one, and you are the root cause.<\/p>\n<\/blockquote>\n\n<p>If you are working in a service-oriented environment, with services small enough\nto be rewritten easily, abstraction is even more useless.<\/p>\n\n<p>We are developers; we often don\u2019t build rockets. That\u2019s life. There are a good\nnumber of companies that do make rockets: apply there. Otherwise you will put your company\nin the position of paying technical debt for you, hiring smart\ncontractors to figure out what you did now that you are not working there,\nbecause after probably just one year you locked yourself into that boring project\nfull of complicated concepts.<\/p>\n\n<p>By the way, I don\u2019t think the software that controls rockets has a lot of abstractions,\nsorry.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Y&#39;all are all about passionate programmers, but honestly I&#39;d\nrather programmers than care _just enough_. I could do with less pedantic\narguments about code.<\/p>&mdash; \uff61 \ud835\udd77\ud835\udd8e\ud835\udd93\ud835\udd89\ud835\udd98\ud835\udd8a\ud835\udd9e \ud835\udd6d\ud835\udd8e\ud835\udd8a\ud835\udd89\ud835\udd86 \uff61 (@lindseybieda) <a href=\"https:\/\/twitter.com\/lindseybieda\/status\/969296749985779712?ref_src=twsrc%5Etfw\">March\n1, 2018<\/a><\/blockquote>\n<script async=\"\" src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>I saw this tweet on my timeline yesterday and I think it really describes my\ncurrent mood. Code changes over time and I should spend my time making it\nflexible enough to support this continuous growth. Abstraction is not the right\nway.<\/p>\n\n<p>So, do passionate engineers always abstract? That\u2019s not the takeaway. Do Java\nengineers always abstract? Maybe.<\/p>\n"},{"title":"Review book Site Reliability Engineering","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/site-reliability-engineering-review"}},"description":"A review of Site Reliability Engineering, a book published by O'Reilly about Google and its massive scale from the point of view of the engineers that made that scale possible. Distributed systems, microservices and data-driven development.","image":"https:\/\/gianarb.it\/img\/sre-book.jpeg","updated":"2018-03-15T10:08:27+00:00","published":"2018-03-15T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/site-reliability-engineering-review","content":"<p>I bought <a href=\"https:\/\/amzn.to\/2pfeHBU\">Site Reliability Engineering<\/a> many months\nago.
I read the ebook first, but I am the kind of person who also buys the paper\nversion when a book is good. If you are working on a distributed and scalable\nenvironment it\u2019s something that you should read.<\/p>\n\n<p>Published by O\u2019Reilly and edited by Betsy Beyer, Chris Jones, Jennifer Petoff\nand Niall Richard Murphy, it is written by many Google engineers and it\u2019s about\nthe experience they gained scaling services like Google Maps, Calendar, YouTube\nand all the other products.<\/p>\n\n<p>I spoke with different people about this book and a lot of them told me that\nthere is nothing new in it. It\u2019s just cool because Google made it cool.<\/p>\n\n<p>I have a different opinion. It\u2019s a nice book because it is a complete source of\ninformation about design and processes in a highly scalable environment.\nProbably\nsome of the topics are well known, but it\u2019s hard to find all this information in\na single place.<\/p>\n\n<p><img alt=\"Site Reliability Engineering book\" src=\"\/img\/sre-book.jpeg\" class=\"img-fluid\" \/><\/p>\n\n<p>To be fair, it has 524 pages so it\u2019s not a fast read. It took me a few\nmonths, but I keep it around for when I need to explain concepts like how to\ndimension and measure load in a services environment. SLAs, SLOs and how to use\nthem properly to manage and measure risk are\nwell explained, along with circuit breaking and, in general, a lot of good practices\nabout\nresiliency, teamwork and delivery.<\/p>\n\n<p>There is a nice chapter about how to use metrics to set up a functional and\nsmart alerting system that keeps on-call engineers in a safe and comfortable\nenvironment.<\/p>\n\n<p>Another one covers how Google designs resilient applications and how they\ndimension services. How much and how deeply they know their services impressed me\na lot.<\/p>\n\n<p><strong>Site Reliability Engineering<\/strong> is a good mix of concepts that you can apply\nin your day-to-day not-at-Google job, plus all the\nGoogle-scale \u201cfreaky fun\u201d.<\/p>\n\n<p>So, in the end, I would define it as the bible for an engineer who wishes to work\nin a highly scalable environment. It doesn\u2019t matter if you are not there yet or if\nyou\nwon\u2019t serve millions of requests per second. It\u2019s good to read and to keep\naround.<\/p>\n\n<p>The <a href=\"https:\/\/landing.google.com\/sre\/book\/index.html\">HTML version<\/a> of this book\nis now available online for free.<\/p>\n"},{"title":"What is distributed tracing. Zoom on opencensus and opentracing","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/what-is-distributed-tracing-opentracing-opencensus"}},"description":"Distributed tracing is a fast growing concept. We increased the distribution of our applications and the consequence is a new kind of complexity in monitoring and understanding what is going on across regions and applications (microservices).
With this article I share something about what tracing is and my experience with opentracing and opencensus.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2018-02-18T10:08:27+00:00","published":"2018-02-18T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/what-is-distributed-tracing-opentracing-opencensus","content":"<p>A few months ago I started to actively study, support and use opentracing and,\nmore in general, the distributed tracing topic.<\/p>\n\n<p>In this article I will share something about what I have learned, starting from\nthe basics. I hope to get your thoughts and questions via Twitter\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a>.<\/p>\n\n<p>We all know the trend of the last couple of years. Spread applications across\ncontainers and cloud providers and split them into small units called services\nor microservices, pets\u2026<\/p>\n\n<p>This approach brings a lot of advantages:<\/p>\n\n<ul>\n  <li>You can manage people in a better way and spread them across these small units.<\/li>\n  <li>Small units are easy to understand for new people, or for yourself after a couple of months.\nIn a field like ours, where there is high turnover, the ability to\nrewrite a service in a couple of days when nobody knows it anymore is great.<\/li>\n  <li>You can monitor these units in a better way, and if you detect scalability\nproblems or a bottleneck you can stay focused on the specific problem without\nhaving other functions around. It enforces single responsibility in some\nway.<\/li>\n<\/ul>\n\n<p>There are other points for sure, but the last one is very important and I\nthink it helps us understand why tracing is so important now.<\/p>\n\n<p>We discovered that monitoring these pets is very hard and it\u2019s different compared to\nthe previous situation. A lot of teams discovered this complexity moving forward\nwith services making noise in production.<\/p>\n\n<p>Our focus is not on the virtual machine, on the hostname or even on the\ncontainer. I don\u2019t care about the status of the server. I care about the status\nof the service and, even deeper, I care about the status of a single event in my\nsystem. This is also one of the reasons why tools like Kafka are so powerful and\npopular. Replaying a section of your history and collecting events like a user\nregistration, a new invoice, a new attendee registered at your event, a new flight\nbooked or a new article published is the most interesting part here.<\/p>\n\n<p>Servers and containers should be replaceable things and they shouldn\u2019t be a\nproblem. The core here is the event. And you need to be 100% sure it is stored\nsomewhere.<\/p>\n\n<p>Same for monitoring: if servers and containers are not important but events are,\nyou should monitor the event and not the server.<\/p>\n\n<p>Oh, don\u2019t forget about distribution. It makes everything worse and more\ncomplicated, my dear. Events move faster than everything else. They move across\nservices, containers, data centers.<\/p>\n\n<p>Where is the event? Where did it fail? How does a spike of particular events behave on\nyour system? If you have too many new registrations, are you still able to serve\nyour applications?<\/p>\n\n<p>In a big distributed environment, what is a particular service calling? Is it even\nused? Maybe nobody is using it anymore. These questions need an answer.<\/p>\n\n<p>Distributed tracing is one of the ways.
It doesn\u2019t solve all the problems but it\nprovides a new point of view.<\/p>\n\n<p>In practice, speaking in HTTP terms, tracing translates to following a specific\nrequest from its start (mobile app, web app, cronjobs, other apps) all the way to\nits end.<\/p>\n\n<p>You register how many applications it crosses and for how long. Labeling these\nmetrics you can even understand the latency between services.<\/p>\n\n<p><img src=\"https:\/\/www.hawkular.org\/img\/blog\/2017\/2017-04-19-jaeger-trace.png\" class=\"img-fluid\" \/>\n<small>from https:\/\/www.hawkular.org\/<\/small><\/p>\n\n<p>Speaking in the right language, this image describes a trace. It\u2019s an HTTP request to\nthe <code>frontend<\/code> service, a <code>GET<\/code> on the <code>\/dispatch<\/code> route. You can see how\ndeep you can go. A trace is a collection of spans.<\/p>\n\n<p>Every span has its own id and an optional parent id to create the hierarchy.\nSpans support what are called Span Tags: a key-value store where the key is\nalways a string and some keys are \u201creserved\u201d to describe specific behaviors.\nYou can look at them <a href=\"https:\/\/github.com\/opentracing\/specification\/blob\/master\/semantic_conventions.md#standard-span-tags-and-log-fields\">inside the specification\nitself<\/a>.\nUsually UIs use these standard tags to build a nice visualization. For\nexample, if a span contains the tag <code>error<\/code> a lot of tracers color it red.<\/p>\n\n<p>I suggest you read the standard tags because they will give you an idea of\nhow descriptive a span can be.<\/p>
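\n\n<p>To make this more concrete, here is a minimal sketch of how tags look with the\nopentracing go-sdk. The operation name and the cache lookup are hypothetical; the\n<code>ext<\/code> package ships with opentracing-go and wraps the reserved tags:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\topentracing \"github.com\/opentracing\/opentracing-go\"\n\t\"github.com\/opentracing\/opentracing-go\/ext\"\n)\n\n\/\/ lookup is a hypothetical cache lookup used only for this example.\nfunc lookup(key string) error { return nil }\n\nfunc main() {\n\t\/\/ StartSpan uses the tracer registered globally (Zipkin, Jaeger, ...).\n\tspan := opentracing.StartSpan(\"cache_lookup\")\n\tdefer span.Finish()\n\n\t\/\/ Free-form tag: any string key; values can be strings, bools or numbers.\n\tspan.SetTag(\"cache.key\", \"user:42\")\n\n\tif err := lookup(\"user:42\"); err != nil {\n\t\t\/\/ \"error\" is one of the reserved standard tags; most tracer UIs\n\t\t\/\/ will color this span red.\n\t\text.Error.Set(span, true)\n\t}\n}\n<\/code><\/pre>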
<p>The architecture looks like this:<\/p>\n\n<pre><code>   +-------------+  +---------+  +----------+  +------------+\n   | Application |  | Library |  |   OSS    |  |  RPC\/IPC   |\n   |    Code     |  |  Code   |  | Services |  | Frameworks |\n   +-------------+  +---------+  +----------+  +------------+\n          |              |             |             |\n          |              |             |             |\n          v              v             v             v\n     +-----------------------------------------------------+\n     | \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 OpenTracing \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 |\n     +-----------------------------------------------------+\n       |               |                |               |\n       |               |                |               |\n       v               v                v               v\n +-----------+  +-------------+  +-------------+  +-----------+\n |  Tracing  |  |   Logging   |  |   Metrics   |  |  Tracing  |\n | System A  |  | Framework B |  | Framework C |  | System D  |\n +-----------+  +-------------+  +-------------+  +-----------+\n<\/code><\/pre>\n<p><small>from <a href=\"https:\/\/opentracing.io\/documentation\/pages\/instrumentation\/common-use-cases.html\" target=\"_blank\">opentracing.org<\/a><small><\/small><\/small><\/p>\n\n<p>There are different instrumentation libraries across multiple languages and you\nneed to embed one of them in your application. It usually provides a global\nvariable where you can add spans. From time to time they are flushed to the\ntracer that you select. If you are using Zipkin as a tracer you can select\ndifferent backends like Elasticsearch and Cassandra.\nTracers provide an API and a UI to store and visualize traces.<\/p>\n\n<p>As you can see from the graph above, OpenTracing \u201cis able\u201d to push to tracers,\nlogging systems, metrics and so on. In my experience with opentracing, I don\u2019t\nknow how this can be done.<\/p>\n\n<p>I always used it with a tracer like Zipkin or Jaeger to store spans. Logs are\ncovered by the spec because you can attach one or multiple <code>Span\nLogs<\/code> to every span.<\/p>\n\n<blockquote>\n  <p>each of which is itself a key:value map paired with a timestamp. The keys must\nbe strings, though the values may be of any type. Not all OpenTracing\nimplementations must support every value type.<\/p>\n<\/blockquote>\n\n<p><small>from <a href=\"https:\/\/github.com\/opentracing\/specification\/blob\/master\/specification.md\" target=\"_blank\">opentracing.org<\/a><small><\/small><\/small><\/p>\n\n<p>The idea behind this feature is clear. There are too many buzzwords: metrics,\nlogs, events, time series and now traces.<\/p>\n\n<p>It\u2019s easy to end up with more\ninstrumentation libraries than business code. That\u2019s probably why opentracing\ncovers this use case. Logs and traces are time series. That\u2019s probably why\nmetrics are there.<\/p>\n\n<p>Using the go-sdk it looks like this:<\/p>\n<pre><code class=\"language-go\">span, ctx := opentracing.StartSpanFromContext(ctx, \"operation_name\")\ndefer span.Finish()\nspan.LogFields(\n\tlog.String(\"event\", \"soft error\"),\n\tlog.String(\"type\", \"cache timeout\"),\n\tlog.Int(\"waited.millis\", 1500))\n<\/code><\/pre>\n\n<p>But I am not able to find a way to say: \u201cForward all these logs to \u2026.elastic\nand these traces to Zipkin\u201d. And I don\u2019t know if the expectation is to have\ntracers smart enough to do that. From my experience trying to extend Zipkin,\nthis looks hard to achieve, first of all because tracers are out of\nOpenTracing\u2019s scope.<\/p>\n\n<p>If the goal is to wrap everything together, logs have had a precise use case for ages.\nThey work pretty well and you can\u2019t change the expectations. They can be a\nreal-time stream on stdout, stderr and\/or thousands of other exporters. I can\u2019t\nfind this kind of work there. So, looking at the code, it\u2019s not clear who is in\ncharge of what. But the graph is pretty.<\/p>\n\n<p>I like the idea and I started looking at <a href=\"https:\/\/opencensus.io\/\">OpenCensus<\/a>, a\nlibrary open sourced by Google from its experience with StackDriver and\nGoogle\u2019s scale. It has its own\n<a href=\"https:\/\/github.com\/census-instrumentation\/opencensus-specs\">specification<\/a> and\nit provides a set of <a href=\"https:\/\/github.com\/census-instrumentation\/\">libraries<\/a>\nthat you can add to your application to get what they call stats and traces out\nof your app. Stat stands for metrics and events. It\u2019s another buzzword probably!<\/p>\n\n<p>The concept looks similar to OpenTracing; obviously, the specs are different.<\/p>\n\n<p>Looking at the code, the go-SDK looks a lot clearer. I can clearly see stats\nand tracing objects; they both accept exporters, which can be Prometheus,\nZipkin, Jaeger, StackDriver and so on.
I like the idea that the exporter is part\nof the project: you don\u2019t need a tracing application like Zipkin, you can write\nyour own exporter to store data in your custom database and you are ready to go.<\/p>\n\n<pre><code>.\n\u251c\u2500\u2500 appveyor.yml\n\u251c\u2500\u2500 exporter\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 jaeger\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 prometheus\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 stackdriver\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 zipkin\n\u251c\u2500\u2500 internal\n\u251c\u2500\u2500 plugin\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 stats\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 internal\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 ...\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 view\n\u251c\u2500\u2500 tag\n\u251c\u2500\u2500 trace\n<\/code><\/pre>\n\n<p>You can probably do the same with OpenTracing by writing your own tracer that stores\nthings in your custom database, skipping Zipkin and Jaeger, but it looks a bit more\ncomplicated judging from the interface:<\/p>\n\n<pre><code class=\"language-go\">\/\/ opencensus-go\/trace\/export.go\n\n\/\/ Exporter is a type for functions that receive sampled trace spans.\n\/\/\n\/\/ The ExportSpan method should be safe for concurrent use and should return\n\/\/ quickly; if an Exporter takes a significant amount of time to process a\n\/\/ SpanData, that work should be done on another goroutine.\n\/\/\n\/\/ The SpanData should not be modified, but a pointer to it can be kept.\ntype Exporter interface {\n\tExportSpan(s *SpanData)\n}\n<\/code><\/pre>\n\n<pre><code>\/\/ opentracing tracer\n\ntype Tracer interface {\n\t\/\/ Create, start, and return a new Span with the given `operationName` and\n\t\/\/ incorporate the given StartSpanOption `opts`. (Note that `opts` borrows\n\t\/\/ from the \"functional options\" pattern, per\n\t\/\/ https:\/\/dave.cheney.net\/2014\/10\/17\/functional-options-for-friendly-apis)\n\t\/\/\n\t\/\/ A Span with no SpanReference options (e.g., opentracing.ChildOf() or\n\t\/\/ opentracing.FollowsFrom()) becomes the root of its own trace.\n\t\/\/\n\tStartSpan(operationName string, opts ...StartSpanOption) Span\n\n\t\/\/ Inject() takes the `sm` SpanContext instance and injects it for\n\t\/\/ propagation within `carrier`. The actual type of `carrier` depends on\n\t\/\/ the value of `format`.\n\t\/\/\n\t\/\/ OpenTracing defines a common set of `format` values (see BuiltinFormat),\n\t\/\/ and each has an expected carrier type.\n\t\/\/\n\t\/\/ Other packages may declare their own `format` values, much like the keys\n\t\/\/ used by `context.Context` (see\n\t\/\/ https:\/\/godoc.org\/golang.org\/x\/net\/context#WithValue).\n\t\/\/\n\tInject(sm SpanContext, format interface{}, carrier interface{}) error\n\n\t\/\/ Extract() returns a SpanContext instance given `format` and `carrier`.\n\t\/\/\n\t\/\/ OpenTracing defines a common set of `format` values (see BuiltinFormat),\n\t\/\/ and each has an expected carrier type.\n\t\/\/\n\t\/\/ Other packages may declare their own `format` values, much like the keys\n\t\/\/ used by `context.Context` (see\n\t\/\/ https:\/\/godoc.org\/golang.org\/x\/net\/context#WithValue).\n\t\/\/\n\tExtract(format interface{}, carrier interface{}) (SpanContext, error)\n}\n<\/code><\/pre>\n<p>OpenTracing doesn\u2019t care about exporters and tracers; something else handles that\ncomplexity (the user, me\u2026 bored). The standard only offers interfaces. I don\u2019t\nknow if this is good. It really looks a lot more like a common interface\nbetween tracers. I like the idea, but I need a lot more.<\/p>
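\n\n<p>To give you an idea of how small that surface is, here is a minimal sketch of a\ncustom OpenCensus exporter, assuming the opencensus-go layout shown above; the log output is\njust my placeholder for \u201cyour custom database\u201d:<\/p>\n\n<pre><code class=\"language-go\">package main\n\nimport (\n\t\"context\"\n\t\"log\"\n\n\t\"go.opencensus.io\/trace\"\n)\n\n\/\/ logExporter satisfies the trace.Exporter interface shown above.\ntype logExporter struct{}\n\n\/\/ ExportSpan receives every sampled span; this is where you would write\n\/\/ to your custom database.\nfunc (e *logExporter) ExportSpan(sd *trace.SpanData) {\n\tlog.Printf(\"span %q traceID=%s duration=%s\",\n\t\tsd.Name, sd.TraceID.String(), sd.EndTime.Sub(sd.StartTime))\n}\n\nfunc main() {\n\t\/\/ Register the exporter: no Zipkin or Jaeger needed.\n\ttrace.RegisterExporter(&amp;logExporter{})\n\ttrace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})\n\n\t_, span := trace.StartSpan(context.Background(), \"operation_name\")\n\tspan.End()\n}\n<\/code><\/pre>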
<p>Now, writing this article, I understood that I have a lot more to figure out\nabout these projects. Sadly I realized that in practice they are even more\nsimilar than I felt before writing all this down.<\/p>\n\n<p>Tracing, metrics and instrumentation libraries remain crucial from my point of\nview. You can write everything you want, but if you are not able to understand\nwhat\u2019s happening you are not doing a good job. You look like a monkey.<\/p>\n\n<p>Personally, I would like to find a good common library to wrap together all\nthe buzzwords (stats, spans, traces, metrics, time series, logs) because they are\nall the same concept, just from a different point of view.<\/p>\n\n<p>Everything is a point in time, grouped, ordered or with a specific hierarchy.\nYou can use them as aggregates, to compare, to alert and so on. A powerful\nimplementation should be able to combine both needs: easy ingestion and a\nclear output.<\/p>\n\n<p>I think that OpenTracing has a lot to do on both sides, in and out. OpenCensus\nlooks good from an ingestion point of view. There is nothing about logs in OpenCensus,\nmaybe because they are good enough as they are, but we need to be able to cross-reference\nlogs, traces, metrics, infrastructure and application events from\ndashboards and automatic tools.<\/p>\n\n<p>It looks like, with both setups, you still need a platform capable of serving\nand using this data. A lot of people will answer that it\u2019s out of scope for\nthese projects, but I am pretty sure we have all learned that just storing events is\nnot enough.<\/p>\n\n"},{"title":"The right balance","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/the-development-balance"}},"description":"My daily job as a developer is to find the right balance in everything. I would like to share what I think about this topic because the decisions that you take writing a system shape the software itself. So you should care about them.","image":"https:\/\/gianarb.it\/img\/k8s-up-and-running.jpg","updated":"2018-02-09T10:08:27+00:00","published":"2018-02-09T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/the-development-balance","content":"<p>My daily job as a developer, other than programming, is about finding the right\nbalance between different things:<\/p>\n\n<ol>\n  <li>Buzzword-driven development vs rock-solid boring things.<\/li>\n  <li>New technologies vs something already adopted inside the company.<\/li>\n  <li>State-of-the-art implementation vs something that can work in an hour.<\/li>\n<\/ol>\n\n<p>And many more. I am sure you will have your own list. What I noticed during\nthese years working for different companies, with different teams and in\ndifferent countries, is that there are many kinds of developers and plenty of\ncompanies and projects to develop.<\/p>\n\n<p>You should really look for the right place for you, but you need to know what to\nlook for.<\/p>\n\n<p>What I am trying to say is that people, colleagues and companies can help you find\nyour balance: be proactive in this research. Speak with your manager or with your\ncolleagues about what you are happy to do or not; you will find interesting\nanswers if you are working with people with your same mindset.
You will probably\nend up selling and buying tasks and issues from members of the team because\nthey like what you are doing more, and vice versa.<\/p>\n\n<p>Sometimes you will even have the sensibility to grab a boring task just to get it\ndone and leave your colleagues free to do something more fun.<\/p>\n\n<p>This is the kind of work that I like; I am almost sure about that now. A place\nwhere stuff needs to be done and you have an active say in how. It doesn\u2019t need\nto be your decision; sometimes I don\u2019t know what\u2019s better for the company or for the\nproject, but it can\u2019t always be a black box that comes from above and needs to\nbe done. I am not a checklist implementer, and dealing with my colleagues and\nother people is an active and very good part of my day.<\/p>\n\n<p>This holds when it\u2019s time to have fun because things are going well, and even when\nyours is the only dissenting voice coming out of a meeting. Find the right place where you\nfeel free to state your opinion, and be wise and mature enough to know that your\nopinion can be good or bad and can\u2019t always be the one to follow.<\/p>\n\n<p>The right balance between all of these things, across teams and companies, gives\nme the feeling of being in the right place.<\/p>\n"},{"title":"Kubernetes up and running","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/kubernetes-up-and-running"}},"description":"Kubernetes up and running review.","image":"https:\/\/gianarb.it\/img\/k8s-up-and-running.jpg","updated":"2017-12-19T10:08:27+00:00","published":"2017-12-19T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/kubernetes-up-and-running","content":"<p>I read <a href=\"https:\/\/amzn.to\/2zflChj\">\u201cKubernetes up and running\u201d<\/a>, an O\u2019Reilly book\nwritten by Kelsey Hightower, Brendan Burns and Joe Beda.<\/p>\n\n<p>It is like the instruction manual you look for when you buy something:\nbased on your knowledge of the product, you read it or you don\u2019t.<\/p>\n\n<p>I have a good knowledge of containers, orchestrators and cloud computing, but I\nnever worked with Kubernetes until 2 weeks ago, when I started a co-managed k8s\ncluster on Scaleway with <a href=\"https:\/\/twitter.com\/fntlnz\">Lorenzo<\/a>.<\/p>\n\n<p>The book is well written; I read it in less than one week. The chapters are well\nsplit: I was able to skip \u201cBuilding a Raspberry Pi Kubernetes cluster\u201d,\nwhich I am not really interested in, without any pain.<\/p>\n\n<p>Chapters like \u201cDeploying real world applications\u201d and \u201cService Discovery\u201d are good,\nand the book covers all the basic concepts that you need to know about\nKubernetes. You can feel all the experience that the three authors have on the\ntopic. There is golden feedback about what they learned using and building what\nis now the orchestration standard.<\/p>\n\n<p>Just to summarize: if you are using Kubernetes and you like paper, this book is\na good way to have documentation on paper. If you are new to Kubernetes it is the\nbest way to start.<\/p>\n\n<p>Thanks to all the authors! If you have any questions let me know\n<a href=\"https:\/\/twitter.com\/gianarb\">@gianarb<\/a><\/p>\n"},{"title":"Desk setup","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/desk-setup"}},"description":"It is now a couple of months since I left CurrencyFair to start working at InfluxData. A lot of new things, but working from home for a US-based company is very hard.
Dealing with such a big timezone difference requires a big effort. But I am very excited about how I am feeling working from home. That's why I decided to share my current office setup. Desktop, Zenbook and a lot of Ikea things!","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2017-12-17T10:08:27+00:00","published":"2017-12-17T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/desk-setup","content":"<p>As you probably know, in April 2017 I moved back home after a year and a half in Dublin\nand I started to work from home as an SRE at InfluxData.<\/p>\n\n<p>I am ready to write a small post about my current setup.<\/p>\n\n<h2 id=\"ikea-markus\">Ikea Markus<\/h2>\n\n<p>The first thing I bought was an <a href=\"https:\/\/www.ikea.com\/gb\/en\/products\/chairs-stools-benches\/office-chairs\/markus-swivel-chair-glose-black-art-20103101\/\">Ikea\nMarkus<\/a>.\nIt is comfortable and it has a competitive price. It\u2019s flexible and well designed.<\/p>\n\n<p>I don\u2019t have a lot to say about it. If you are not passionate about expensive\nand weird chairs you can go for this one. It will work!<\/p>\n\n<h2 id=\"stand-mount\">Stand Mount<\/h2>\n\n<p>My setup counts two boring Asus monitors, one horizontal and one vertical;\nLorenzo suggested this <a href=\"https:\/\/amzn.to\/2yMe59C\">stand mount for two monitors<\/a>.<\/p>\n\n<p>Day by day I discover how bad I am at using more than one monitor. Changing focus so\noften is not for me, but I like the vertical monitor when I am debugging some\nweird application.<\/p>\n\n<h2 id=\"asus-zenbook-3\">Asus Zenbook 3<\/h2>\n\n<p>I have an <a href=\"https:\/\/amzn.to\/2AHAy9N\">Asus Zenbook 3<\/a>; the single USB-C port is kind of a\npain. I am a traveler and a speaker, and I don\u2019t get to enjoy its low weight (900g) that\noften because I always need some adapter.<\/p>\n\n<p>For traveling, the <a href=\"https:\/\/amzn.to\/2CKIMPG\">Asus Universal Dock<\/a> adapter is good. It\nembeds a charger, which means that you need a power supply to use\nit. I wrote an article about it and I was very disappointed with the product.\nBut now that I am using it only for traveling purposes it\u2019s not too bad.<\/p>\n\n<p>If you are a multi-monitor user you need to remember that it doesn\u2019t have an\nexternal video card; it has VGA and HDMI but you can use only one of them at a\ntime.<\/p>\n\n<p>I used Ubuntu 17.04 and 17.10. Now I am using ArchLinux; both the laptop and the\nUniversal Dock need some drivers and a bit of work on audio\nconfiguration and so on. But it\u2019s a good challenge and almost everything works\nout of the box.<\/p>\n\n<h2 id=\"logitech-c922\">Logitech C922<\/h2>\n\n<p>The embedded Asus webcam is not great.
If you are looking for high definition or\nacceptable quality, you need an external webcam.<\/p>\n\n<p>I work from home, and when I have a meeting with colleagues and friends I would like\nto offer them a good experience.<\/p>\n\n<p>The <a href=\"https:\/\/amzn.to\/2kEnJ9o\">Logitech C922<\/a> is not powerful enough to make me\nbeautiful, but it does an amazing job and it\u2019s very good.<\/p>\n\n<p>If you record tutorials or workshops you should think about getting one of these.\nIt comes with a small tripod to set it up wherever you like.<\/p>\n\n<h2 id=\"usb-c-adapter\">USB-C adapter<\/h2>\n\n<p>As I told you, the world is not ready for USB-C, but I am!\n<a href=\"https:\/\/amzn.to\/2zhPbSQ\">Plugable<\/a> makes my life very simple.<\/p>\n\n<p>Webcam, two monitors and Ethernet cable are always attached to it; I just need\nto plug my laptop in via USB-C and everything works.<\/p>\n\n<p>It\u2019s an expensive toy but I am using it on Linux and it\u2019s working. The company\ndoesn\u2019t officially support Linux, but there is an open source\n<a href=\"https:\/\/github.com\/displaylink\/evdi\">DisplayLink<\/a> driver on GitHub\nthat you can use.<\/p>\n\n<h2 id=\"desk\">Desk<\/h2>\n<p>Last but not least, I have a standing desk.<\/p>\n\n<p>I think a good chair, the gym and swimming are better solutions to keep you healthy,\nbut I changed my point of view: changing position helps me stay focused.<\/p>\n\n<p>Every time I have a boring or complex task, at some point just toggling my\ncurrent position from down to up or vice versa gives me some fresh energy to\nfinish it well.<\/p>\n\n<p>I monitored the <a href=\"https:\/\/www.ikea.com\/gb\/en\/products\/desks\/office-desks\/bekant-desk-sit-stand-oak-veneer-black-spr-29061187\/\">Ikea\nBekant<\/a>\nfor many months, but I was not sure about investing money in a standing desk.<\/p>\n\n<p>I looked at them for so long that Ikea started a very good discount campaign and\nI just bought it. I took only the mechanical legs because I like the feeling of\nreal wood, so I bought the table top separately.<\/p>\n\n<p>That\u2019s it! Bye!<\/p>\n"},{"title":"From Docker to Moby","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/from-docker-to-moby"}},"description":"Docker announced during DockerCon a new project called Moby. Moby will be the new home for Docker and all the other open source projects like containerd, linuxkit, vpnkit and so on. Moby is the glue for all that open source code. It will look like an entire platform to ship, build and run containers at scale.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2017-10-20T10:08:27+00:00","published":"2017-10-20T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/from-docker-to-moby","content":"<p>At DockerCon 2017 in Austin,\n<a href=\"https:\/\/blog.docker.com\/2017\/04\/introducing-the-moby-project\/\">Moby<\/a> was the\nbig announcement.<\/p>\n\n<p>It created confusion and some communities are still trying to understand what is\ngoing on. I think it\u2019s time to step back and see where we are seven months\nafter the announcement.<\/p>\n\n<ol>\n  <li><code>containerd<\/code> is living a new life; the first stable release will happen soon.\nIt has been donated to the CNCF.<\/li>\n  <li><code>notary<\/code> is the project behind <code>docker trust<\/code>. I wrote a full e-book about\n<a href=\"https:\/\/scaledocker.com\">Docker Security<\/a> if you need to know more.
This\nalso has been donated to the CNCF.<\/li>\n  <li>github.com\/docker\/docker doesn\u2019t exist anymore; there is a new repository\ncalled github.com\/moby\/moby.<\/li>\n  <li>The <a href=\"https:\/\/github.com\/docker\/cli\">CLI<\/a> has a separate home.<\/li>\n  <li>docker-ce is the first example of assembling Moby. It is made by Docker Inc.<\/li>\n<\/ol>\n\n<p>Containers are not a first-class citizen in Linux.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/container-is-not-real.jpeg\" \/><\/p>\n\n<p>They are a combination of cgroups, namespaces and other kernel features, and they\nhave been there for a lot of years. LXD is one of the first projects that mentioned\ncontainers, but the API wasn\u2019t really friendly and only a few people were using it.<\/p>\n\n<p>Docker created a clean and usable API that human beings are happy to use. It\ncreated an ecosystem with an amazing and complete UX: distribution, Dockerfile,\n<code>docker run<\/code>, <code>docker image<\/code> and so on.<\/p>\n\n<p>That\u2019s what Docker is, in my opinion, other than a great community and a fast-growing\ncompany.<\/p>\n\n<p>What Docker is doing with Moby is giving competitors, startups and new\nprojects the ability to join the ecosystem that we built over these 4 years.<\/p>\n\n<p>Moby, on the other hand, gives Docker the ability to take ownership of the\nclean and usable experience. The <code>Docker CLI<\/code> that we know and use every day\nwill stay open source, but it won\u2019t be part of the Moby project. It will be owned by\nDocker. As I wrote above, the code has already moved out.<\/p>\n\n<p>Moby allows other companies and organisations to build their own\nuser interface based on what they need, or to build their product on top of an\nopen source project designed to be modular.<\/p>\n\n<p>Cloud and containers move fast: Amazon with ECS, RedHat with OpenShift,\nPivotal with Cloud Foundry, Mesos with Mesosphere, Microsoft with Azure\nContainer Service, Docker with Docker. They are all pushing hard to build\nprojects around containers and sell them to big and small corporations to make\nlegacy projects less boring.<\/p>\n\n<blockquote>\n  <p>Legacy is the new buzzword<\/p>\n<\/blockquote>\n\n<p>Docker will continue to assemble and ship docker as we know it. The project is\ncalled <code>docker-ce<\/code>:<\/p>\n\n<pre><code>apt-get install docker-ce\ndocker run -p 80:80 nginx:latest\n<\/code><\/pre>\n\n<p>Everything happens down the street, in the open source ecosystem. Moby won\u2019t\ncontain the CLI that we know.<\/p>\n\n<p>Moby won\u2019t have the swarmkit integration as we know it. It was something that\nDocker as a company wanted, mainly to inject an orchestrator into\nmillions of laptops. Other companies and projects that are not using Swarm don\u2019t\nneed it, and they will be able to remove it in some way.<\/p>\n\n<p>Companies like Pivotal and AWS are working on\n<code>containerd<\/code> because, other than being the runtime behind Docker, it\u2019s what matters for a lot\nof projects that just want to run containers without all the layers on\ntop that make it friendly. ECS and Cloud Foundry are the actual layers on top\nof \u201cwhat runs a container\u201d.<\/p>\n\n<p>Container orchestrators don\u2019t really care about how or who spins up a container;\nthey just need to know that there is something able to do that.<\/p>\n\n<p>That is what Kubernetes does with CRI. It doesn\u2019t care about Docker, CRI-O or\ncontainerd. It\u2019s out of scope; it just needs a common interface.
In this case it is\na gRPC interface that every runtime should implement. Here is a list of them:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/kubernetes-incubator\/cri-o\">cri-o<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/kubernetes-incubator\/cri-containerd\">cri-containerd<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/kubernetes-incubator\/rktlet\">rktlet<\/a><\/li>\n<\/ul>\n\n<p>Here is a subset of the reasons why all of this is happening:<\/p>\n\n<ul>\n  <li>Docker Inc. will be free to iterate on their business services and projects\nwithout breaking every application in the world. And they will have more\nflexibility in what they can do as a company.<\/li>\n  <li>The transition from Docker to Moby is the perfect chance to split the\nproject into different repositories; we already spoke about docker-cli, containerd\nand so on.<\/li>\n  <li>Separation of concerns is a popular design pattern. Splitting\nprojects into smaller libraries allows us to be focused on one specific scope of the\nproject at a time.\n<a href=\"https:\/\/github.com\/moby\/buildkit\">buildkit<\/a> is the perfect example. It\u2019s the\nevolution of the <code>docker build<\/code> command. We had a demo at the MobySummit and\nit looks amazing!<\/li>\n<\/ul>\n\n<p>That\u2019s almost it. Let\u2019s summarise:<\/p>\n\n<p><strong>Are you a company in the container movement?<\/strong>\nIf you are competing with Docker building container things and you were complaining\nabout them breaking compatibility or things like that, now you should blame the\nMoby community.<\/p>\n\n<p><strong>Are you using docker run?<\/strong>\nYou are fine! You will be able to do what you were doing before.<\/p>\n\n<p><strong>Are you an open source guru?<\/strong>\nMaybe you will be a bit disappointed if you worked hard on docker-cli and now\nDocker is moving your code out, but you signed a CLA and the CLI will stay open\nsource. Blame yourself.<\/p>\n\n<p>That\u2019s it! Or at least that\u2019s what I understood.<\/p>\n"},{"title":"Git git git, but better","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/git-git-git-but-better"}},"description":"It doesn't matter how long you have been using git or any version control system, you always have something to learn about them. Not about the actual interface, but about the right mindset.","image":"https:\/\/gianarb.it\/img\/git.png","updated":"2017-10-10T10:08:27+00:00","published":"2017-10-10T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/git-git-git-but-better","content":"<p>I can\u2019t say that Git is a new topic. Finding somebody unable to explain how a\nversion control system works was already very hard. Now it\u2019s almost impossible.<\/p>\n\n<p>I used SVN and Git for many years and I also put together some unusual use cases.\nFor example, <a href=\"https:\/\/devzone.zend.com\/6134\/splitting-zend-framework-using-the-cloud\/\">\u201cSplitting Zend Framework Using the Cloud\u201d<\/a>\nis a project that I made with Corley SRL, my previous company, and the Zend\nFramework team.<\/p>\n\n<p>It helped me to put my hands on the Git file system, and I discovered a lot\nof features and capabilities beyond the usual commit, checkout, reset,\nbranch, cherry-pick, rebase and so on.<\/p>\n\n<p>But during my experience building cloud at InfluxData I have to say that I can\nsee a change in my mindset, and I am sharing this because I am kind of proud of\nit.
It\u2019s probably not super impressive considering the time required to achieve these\ngoals, but who cares!<\/p>\n\n<blockquote>\n  <p>Sometimes it\u2019s the journey that teaches you a lot about your destination.\n(Drake)<\/p>\n<\/blockquote>\n\n<p>I don\u2019t know this Drake, and I am not even sure he is the right author of the\nquote, but that\u2019s not the point.<\/p>\n\n<p>At InfluxData, just to give you more context, I am working on a sort of\nscheduler that provisions and orchestrates servers and containers on our <a href=\"https:\/\/cloud.influxdata.com\/\">cloud\nproduct<\/a>. A lot of CoreOS instances, Go, Docker and\nAWS API calls.<\/p>\n\n<p>It\u2019s a modest codebase in terms of size, but it is keeping up a huge number of\nservers. I am actively working on the codebase almost by myself and I am kind\nof enjoying this. Nate, Goller and all the teams are supporting my approach and\nare using it, but I am not using Git because hundreds of developers need to\ncollaborate on the same line of code. I had some experience in that environment\nworking as a contributor in many open source projects. This time is different.<\/p>\n\n<p>I am mainly alone on a codebase that I didn\u2019t start and I don\u2019t know very well,\nand this project is running in production on a good number of EC2 instances.<\/p>\n\n<p>I really love the idea of having a clean and readable Git history. I am not\nsaying that because it\u2019s cool. I am saying that because every time I commit my\ncode I am thinking about which files to add; <code>-a<\/code> is not really an option\nthat I use that much anymore. I think about the title and the message.<\/p>\n\n<p>I try to avoid the <code>WIP<\/code> message and I use it only if I am sure about a future\nsquash or rebase, and if I need to push my code to ask for ideas and opinions (as I\nsaid, I am writing code almost alone, but I am always looking for support from my\ngreat co-workers).<\/p>\n\n<p>This has a very big value, I think, also as a remote worker. This is my first\nexperience in this environment and, for a non-native English speaker, a good and\nself-explanatory title can be the hardest part of the work, but it will help\nother people to understand what I am doing.<\/p>\n\n<p>When you are working on a new codebase and you have tasks that require\nrefactoring to be achieved in a fancy and professional way, you will find\nyourself moving code around without really being able to figure out when and how it\nwill become useful to close your task and open the PR that your team lead is\nwaiting for. If you start to write code and you commit your changes\nat the end of the day, as I was doing at the beginning, after a couple of days you\nwill figure out that your PR is too big and you are scared to merge it.\nAnd probably it\u2019s just the PR that is preparing the codebase to get the initial\nrequests. I hated the situation, but if you think about what I wrote you will\nfind that this way of working is totally wrong.<\/p>\n\n<p>A VCS is not there as a save point; you are not playing Crash Bandicoot anymore,\nand you don\u2019t need to use Git as your personal \u201cooga booga\u201d. The right commit\ncontains atomic information about a feature, a bug fix or whatever.<\/p>\n\n<p><img src=\"\/img\/crash_bandcioot.jpg\" alt=\"\" \/><\/p>\n\n<p>These are the questions that I ask myself now before making a commit:<\/p>\n\n<ul>\n  <li>Am I confident cherry-picking this commit to <code>master<\/code>? This is a good way to\nmake your commits small and easy to merge.
If one of your PRs is becoming too\nbig and you have \u201ccherry-pickable\u201d commits, you can select some of them and merge\nthem as a single PR.<\/li>\n  <li>Are deploy and rollback easy actions? This is similar to the previous one, but I am\nthe one who deploys and monitors the service in production. I need to ask this\nquestion to myself before every merged PR.<\/li>\n  <li>Looking at the name of the branch, which in my case is the task in my\nviewfinder: is the commit that I am creating about it, or can I create a new PR\njust for this piece of code? This helps me a lot to split my PRs and keep them\nsmall. A small PR is easier to review, it has a better scope and it makes me\nless scared to deploy it.<\/li>\n<\/ul>\n\n<p>Git is more than a couple of commands that you can execute. You need to\nbe in the right mindset to enjoy all the power.<\/p>\n"},{"title":"Orbiter the Docker Swarm autoscaler on the road to BETA-1","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/orbiter-the-swarm-autoscaler-moves"}},"description":"Orbiter is a project written in go. It is an autoscaler for Docker containers. In particular it works with Docker Swarm. It provides autoscaling capabilities for your services.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2017-08-09T10:08:27+00:00","published":"2017-08-09T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/orbiter-the-swarm-autoscaler-moves","content":"<p>Orbiter is an open source project written in go, hosted on\n<a href=\"https:\/\/github.com\/gianarb\/orbiter\">GitHub<\/a>. It provides autoscaling\ncapabilities for your Docker Swarm cluster.<\/p>\n\n<p>As you probably know, at the moment autoscaling is not a feature supported\nnatively by Docker Swarm, but this is not a problem at all.<\/p>\n\n<p>Docker Swarm provides a useful API that helps you improve its capabilities.<\/p>\n\n<p>I created Orbiter months ago as a use case with InfluxDB, to allow services to\nscale automatically based on an <code>up<\/code> or <code>down<\/code> signal. You can follow the webinar\nthat I made with InfluxDB\n<a href=\"https:\/\/www.influxdata.com\/resources\/influxdata-helps-docker-auto-scale-monitoring\/?ao_campid=70137000000Jgw7\">here<\/a>.<\/p>\n\n<p>This article is not about \u201cHow it works\u201d. You can <a href=\"https:\/\/gianarb.it\/blog\/orbiter-docker-swarm-autoscaler\">read more here about how it\nworks<\/a> and you can\nwatch the embedded video that I made in the Docker HQ in San Francisco.<\/p>\n\n<p>Yesterday we made some very good improvements and we are moving forward to tag\nthe first beta release. I need to say a big thanks to <a href=\"https:\/\/github.com\/mbovo\">Manuel\nBovo<\/a>. He coded pretty much all the features listed\nhere.<\/p>\n\n<ol>\n  <li>\n    <p><a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/26\">PR #26<\/a> e2e working example. <a href=\"https:\/\/github.com\/gianarb\/orbiter\/tree\/master\/contrib\/swarm\">Please try\nit<\/a>.<\/p>\n  <\/li>\n  <li>\n    <p><a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/27\">PR #27<\/a> Now Orbiter has a\nbackground job that listens on the Docker Swarm event API and registers and\nde-registers services <a href=\"https:\/\/github.com\/gianarb\/orbiter#autodetect\">deployed with the right\nlabels<\/a>. You don\u2019t need to\nrestart Orbiter anymore.
It detects new services automatically.<\/p>\n  <\/li>\n  <li>\n    <p><a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/29\">PR #29<\/a> Fixed the up\/down range.\nNow we cannot scale below 1 task, but we can scale up services with 0 tasks.<\/p>\n  <\/li>\n  <li>\n    <p><a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/31\">PR #31<\/a> We have a cooldown\nperiod configurable via the label <code>orbiter.cooldown<\/code>. This fix avoids multiple\nscaling operations in a short amount of time.<\/p>\n  <\/li>\n  <li>\n    <p><a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/32\">PR #32<\/a> We are migrating our API\nbase root. Now all the APIs are under <code>\/v1\/orbiter\/.....<\/code>. At the moment we are\nsupporting both old and new routes. <strong>In October I will remove the old ones. Please\nmigrate to <code>\/v1\/orbiter\/....<\/code> now!<\/strong>.<\/p>\n  <\/li>\n<\/ol>\n\n<h2 id=\"now\">Now?<\/h2>\n\n<p>That\u2019s a good question, but I have part of the answer. In October the plan is to\nrelease a BETA and finally the first stable version, but what do we need to do to get\nthere?<\/p>\n\n<ul>\n  <li>Offer a proper auth method. Manuel started this\n<a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/33\">PR<\/a>. I have some concerns but\nwe are on the right path.<\/li>\n  <li>Make Orbiter \u201cSwarm-only\u201d. The project started with the vision of becoming a\ngeneral-purpose autoscaler, but this is not in line with the idea of single\nresponsibility, and we designed a very clean API for Docker Swarm; making it\nusable in other contexts is not going to work. We tried it with DigitalOcean\nbut the API and the project looked too complex, and I love simplicity.<\/li>\n  <li>Get more feedback from the community to merge valuable features before the\nstable release.<\/li>\n<\/ul>\n\n<p>That\u2019s it! Share it and give it a try! For any questions I am available on\ntwitter (@gianarb), or open an issue.<\/p>\n"},{"title":"Asus universal dock station driver","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/asus-universal-dock-driver"}},"description":"Every developer loves to talk about their setup. I am here to share my troubles with my new laptop, the Asus Zenbook 3.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2017-08-03T10:08:27+00:00","published":"2017-08-03T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/asus-universal-dock-driver","content":"<p>Every developer loves to share things about their setup. They also love to make\nit better and to spend time on it.<\/p>\n\n<p>Lorenzo <a href=\"https:\/\/twitter.com\/fntlnz\">(fntlnz)<\/a> is super on it! I am\nnot, plus I bought a Zenbook 3. Super slim, less than 1kg; I could probably use it to cut\nham, but the single USB-C port is driving me crazy.<\/p>\n\n<p>Probably more than the actual 40 degrees that I have in my home office now!\nThat is probably why I am writing this post, by the way.<\/p>\n\n<p>When I bought this laptop 7 months ago the Universal Dock Station was not\navailable and I wasn\u2019t even able to install Linux on it.<\/p>\n\n<p>Now I have an <a href=\"https:\/\/www.asus.com\/Laptops-Accessory\/Universal-Dock\/\">Asus Universal Dock\nstation<\/a>. I am feeling a\nlittle bit better, but to work it replaces a normal charger, which means that without\na socket nearby I cannot use a USB port\u2026 Amazing experience.<\/p>\n\n<p>I tried other adapters but I didn\u2019t find one good enough. Every one of them had\nsome input or output port unusable for some reason.
<h2 id=\"now\">Now?<\/h2>\n\n<p>That\u2019s a good question, and I have part of the answer. The plan is to release a\nBETA in October and finally the first stable version, but what do we need to do to get\nthere?<\/p>\n\n<ul>\n  <li>Offer a proper auth method. Manuel started this\n<a href=\"https:\/\/github.com\/gianarb\/orbiter\/pull\/33\">PR<\/a>. I have some concerns but\nwe are on the right path.<\/li>\n  <li>Make Orbiter \u201cSwarm-only\u201d. The project started with the vision of becoming a\ngeneral purpose autoscaler, but that is not in line with the idea of single\nresponsibility. We designed a very clean API for Docker Swarm, and making it\nusable in other contexts is not going to work. We tried it with DigitalOcean,\nbut the API and the project looked too complex, and I love simplicity.<\/li>\n  <li>Get more feedback from the community to merge valuable features before the\nstable release.<\/li>\n<\/ul>\n\n<p>That\u2019s it! Share it and give it a try! For any questions I am available on\nTwitter (@gianarb), or open an issue.<\/p>\n"},{"title":"Asus universal dock station driver","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/asus-universal-dock-driver"}},"description":"Every developer loves to speak about their setup. I am here to share my troubles with my new laptop, the Asus Zenbook 3.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2017-08-03T10:08:27+00:00","published":"2017-08-03T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/asus-universal-dock-driver","content":"<p>Every developer loves to share things about their setup. They also love to make\nit better and to spend time on it.<\/p>\n\n<p>Lorenzo <a href=\"https:\/\/twitter.com\/fntlnz\">(fntlnz)<\/a> is super on it! I am\nnot, plus I bought a Zenbook 3. Super slim, less than 1kg, I could probably use it to cut\nham, but the single USB-C port is driving me crazy.<\/p>\n\n<p>Probably more than the 40 degrees in my home office right now!\nThat is probably why I am writing this post, btw.<\/p>\n\n<p>When I bought this laptop 7 months ago the Universal Dock Station was not\navailable and I wasn\u2019t even able to install Linux on it.<\/p>\n\n<p>Now I have an <a href=\"https:\/\/www.asus.com\/Laptops-Accessory\/Universal-Dock\/\">Asus Universal Dock\nStation<\/a>. I am feeling a\nlittle bit better, but to work it replaces the normal charger, which means that without\na socket nearby I cannot use USB\u2026 Amazing experience.<\/p>\n\n<p>I tried other adapters but I didn\u2019t find one good enough. Every one of them had\nsome input or output port unusable for some reason. Most of them because the\nBIOS has a different wattage limit and they cannot charge the laptop. I never\nreceived a response from ASUS about it. That\u2019s great.<\/p>\n\n<p>Anyway, I am writing this article just as a note for myself about the driver that\nLorenzo discovered to get the Asus Universal Dock Station\u2019s Ethernet port\nrunning.<\/p>\n\n<p><a href=\"https:\/\/www.realtek.com\/DOWNLOADS\/downloadsView.aspx?Langid=1&amp;PNid=13&amp;PFid=5&amp;Level=5&amp;Conn=4&amp;DownTypeID=3&amp;GetDown=false\">Realtek ethernet\ndriver<\/a>.\nIt\u2019s super easy to install. Just compile it and it will work.<\/p>\n
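<p>For future me, a minimal sketch of the usual out-of-tree driver build (the\narchive and module names here are assumptions, follow the README shipped with the\ndriver):<\/p>\n\n<pre><code class=\"language-bash\"># Unpack the sources downloaded from the Realtek page.\ntar xf r8152_linux.tar.bz2   # hypothetical archive name\ncd r8152_linux\n\n# Build and install the kernel module, then load it.\nmake\nsudo make install\nsudo modprobe r8152\n<\/code><\/pre>\n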
"},{"title":"CNCF Italy, first event about opentracing","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/cncf-italy-first-event"}},"description":"CNCF is a branch of The Linux Foundation focused on Cloud Computing and modern scalable architectures. It supports tools like Kubernetes, Prometheus, containerd and so on. If you are using one of them or you are looking to learn more about them, this is your meetup. Join us! Hashtag #CNCFItaly on Twitter.","image":"https:\/\/gianarb.it\/img\/cncf.jpeg","updated":"2017-06-05T10:08:27+00:00","published":"2017-06-05T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/cncf-italy-first-event","content":"<p>CNCF is a branch of The Linux Foundation focused on Cloud Computing and modern\nscalable architectures. It supports tools like Kubernetes, Prometheus,\ncontainerd and so on. If you are using one of them or you are looking to learn\nmore about them, this is your meetup. Join us! Hashtag #CNCFItaly on Twitter.<\/p>\n\n<p>The event will be on 13th July at 19:00 at the Toolbox office in Turin. Reserve your\nseat on <a href=\"https:\/\/www.meetup.com\/CNCF-Italy\/events\/241118593\/\">Meetup.com<\/a>.<\/p>\n\n<iframe src=\"https:\/\/www.google.com\/maps\/embed?pb=!1m18!1m12!1m3!1d2818.7526155267037!2d7.667091951510242!3d45.05024176888683!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x47886d37dd5ababd%3A0x2adc0b0e358ddb6c!2sToolbox+Coworking!5e0!3m2!1sit!2sit!4v1499676857774\" width=\"600\" height=\"450\" frameborder=\"0\" style=\"border:0\" allowfullscreen=\"\"><\/iframe>\n\n<p>This will be a full evening about OpenTracing. OpenTracing <a href=\"https:\/\/www.cncf.io\/blog\/2016\/10\/20\/opentracing-turning-the-lights-on-for-microservices\/\">turns the lights on\nfor\nmicroservices<\/a>.\nIt is a specification to store and manage traces. How can we follow what\u2019s going\non from the beginning to the end of our requests? What\u2019s happening when they\ncross different services? Where is the bottleneck? Tracing helps you to\nunderstand what\u2019s going on. It\u2019s not just for microservices but also for\ncaching, queue systems and so on. Have a <a href=\"https:\/\/trends.google.it\/trends\/explore?q=opentracing\">look at the\ntrends<\/a>: we need to know\nmore about it!<\/p>\n\n<p>Beer and pizza are offered by CNCF after the two sessions!<\/p>\n\n<p>Other links:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/www.cncf.io\/\">CNCF.io<\/a><\/li>\n  <li><a href=\"https:\/\/opentracing.io\/\">Opentracing<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/openzipkin\">OpenZipkin by twitter<\/a><\/li>\n  <li><a href=\"https:\/\/www.youtube.com\/watch?v=n8mUiLIXkto\">Keynote: OpenTracing and Containers: Depth, Breadth, and the Future of\nTracing - Ben Sigelman<\/a><\/li>\n<\/ul>\n\n<h2 id=\"all-done\">All done!<\/h2>\n\n<p>Amazing event! Here are some pictures; the video is coming soon!<\/p>\n\n<div class=\"slide w3-display-container\">\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-sponsor-1.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-1.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-5.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-8.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-9.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-10.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-12.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-13.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-14.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-15.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-16.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-17.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-20.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-21.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-22.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-23.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-24.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-25.jpg\" \/>\n    <img class=\"mySlides img-fluid\" src=\"\/img\/cncf-first\/Conf-27.jpg\" \/>\n<\/div>\n"},{"title":"Container security and immutability","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/container-security-immutability"}},"description":"Docker, containers and immutability. Having an immutable system has advantages not only from a deploy, release and scalability point of view but also on the security side. Deploying and building a new release quickly and with high frequency improves the way you trust your provisioning system. Having the old environment still running and ready to be rolled back is another good point.","image":"https:\/\/gianarb.it\/img\/container-security.png","updated":"2017-06-05T10:08:27+00:00","published":"2017-06-05T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/container-security-immutability","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">55 pages\nabout how to improve container security. <a href=\"https:\/\/twitter.com\/ciliumproject\">@ciliumproject<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/BPF?src=hash\">#BPF<\/a>, best practices, <a href=\"https:\/\/twitter.com\/coreos\">@coreos<\/a> clair, <a href=\"https:\/\/twitter.com\/hashtag\/apparmor?src=hash\">#apparmor<\/a> <a href=\"https:\/\/t.co\/ABiuldYA9b\">https:\/\/t.co\/ABiuldYA9b<\/a> <a href=\"https:\/\/t.co\/61jzWxzb1Y\">pic.twitter.com\/61jzWxzb1Y<\/a><\/p>&mdash; :w\n!sudo tee % (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/871808740080615424\">June 5,\n2017<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Security is a fascinating topic. It\u2019s part of every aspect of a system, from\nyour email server to the HTTP body validation of your API.<\/p>\n\n<p>It\u2019s also a very human-centric topic.
You can use the strongest security\napproaches, but if your rules are too hard to follow or too complicated to\nimplement, the end users or your colleagues will become the perfect breach to be\nexploited by bad people.<\/p>\n\n<p>In distributed systems there are interesting challenges like:<\/p>\n\n<ul>\n  <li>How can we trust the instances that are part of the system itself? I mean, how can we\ntrust a new application after a pool scale?<\/li>\n  <li>All the traffic generated by the system needs to be locked down. The network\ntopology grows with the number of services that we add, but that\u2019s not a good\nexcuse to slack on responsibility for how we manage our network.<\/li>\n<\/ul>\n\n<p>When you design a system you need to think about security from different points\nof view:<\/p>\n\n<ul>\n  <li>Security needs to be efficient. This seems obvious but it\u2019s always something\nto keep in mind.<\/li>\n  <li>It needs to be easy to use in development mode. As we said before, if security\nmakes things slower, somebody will turn it off.<\/li>\n  <li>If you are good enough to make it easy, it will be easier to enforce secure behavior.<\/li>\n<\/ul>\n\n<p>All these concepts are well applied in the different projects built by the Docker\ncommunity.\nNotary and SwarmKit are just a few examples, but if you think about The Update\nFramework (TUF) and the whole set of things happening behind every <code>docker push<\/code> and <code>pull<\/code>\ncommand, you suddenly see a great example of how to make complicated things really easy to use.<\/p>\n\n<p>I published an ebook that you can download for free <a href=\"\/blog\/scaledocker\">here<\/a>.\nIt contains ~55 pages about Container and Docker Security. In this article I\nwill share one of the concepts expressed in that book, <strong>Immutability<\/strong>.<\/p>\n\n<p>Docker containers are in fact immutable. This means that a running container\nnever changes: if you need to update it, the best practice is to\ncreate a new container with the updated version of your application and delete\nthe old one.<\/p>\n\n<p>This aspect is important from different points of view.<\/p>\n\n<p>Immutability applied to deploys is a big challenge because it opens the door to a very\ndifferent set of release strategies like blue-green deployments or canary releases.\nImmutability also lowers rollback times, because you can probably keep the\nold version running for a little longer and switch traffic back in case of problems.<\/p>\n\n<p>It\u2019s also a plus from a scalability and stability point of view. For each deploy\nyou are in fact using provisioning scripts and build tools to package and\nrelease a new version of your application. You are creating new nodes to replace\nthe old ones, which means that you are focused on provisioning and configuration\nmanagement. You are justifying all the effort spent to implement infrastructure\nas code.<\/p>\n\n<p>It also matters for security, because you will have a fresh container after each\nupdate, and in the case of a vulnerability or an injection it will be cleaned up\nduring the update.<\/p>\n\n<p>You also have an instrument to analyse an attacked container: the command\n<code>docker diff &lt;container_id&gt;<\/code> shows the differences in the file system.<\/p>\n\n<p>It supports 3 events:<\/p>\n\n<ul>\n  <li>A - Add<\/li>\n  <li>D - Delete<\/li>\n  <li>C - Change<\/li>\n<\/ul>\n\n<p>In case of an attack, you can commit the attacked container to analyse it later and\nreplace it with the original image.<\/p>\n
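<p>A minimal sketch of that flow (the container name and the image tag are made up\nfor the example):<\/p>\n\n<pre><code class=\"language-bash\"># Inspect what changed inside the (possibly compromised) container.\n# Each line is prefixed with A (add), D (delete) or C (change).\ndocker diff suspicious_app\n\n# Freeze the container state as an image for later forensics.\ndocker commit suspicious_app forensics\/suspicious_app:incident-01\n\n# Replace it with a fresh container from the original image.\ndocker rm -f suspicious_app\ndocker run -d --name suspicious_app my-registry\/app:1.0.0\n<\/code><\/pre>\n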
<p>This flow is interesting, but if you know that your application does not need to\nmodify the file system, you can use the <code>--read-only<\/code> parameter to make the fs read\nonly, or you can share a volume with the <code>:ro<\/code> suffix: <code>-v $PWD:\/data:ro<\/code>.<\/p>\n
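<p>A quick way to see the effect (the image and the paths are just for the example):<\/p>\n\n<pre><code class=\"language-bash\"># The root filesystem is read only, so any write fails...\ndocker run --rm --read-only alpine touch \/test\n# touch: \/test: Read-only file system\n\n# ...and a volume can be shared read only with the :ro suffix.\ndocker run --rm -v \"$PWD\":\/data:ro alpine touch \/data\/test\n# touch: \/data\/test: Read-only file system\n<\/code><\/pre>\n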
<p>Docker can\u2019t fix security issues for you: if your application can be\nattacked by a code injection, then you need to fix your app. But Docker offers a\nfew utilities to make life hard for an attacker and to give you more\ncontrol over your environment.<\/p>\n\n<p>In this chapter we covered some practices and tools that you can follow or\nuse to build a safe environment.<\/p>\n\n<p>In general, you need to close your application in an environment that provides\nonly what you need and what you know.<\/p>\n\n<p>If your distribution or your container has something that you don\u2019t have under\nyour control or that is unused, then it is a good idea to remove these dark spots.<\/p>\n\n<p>That\u2019s all. Immutability is not free, and it requires keeping all the tools and\nprocesses involved in deploying and packaging up to speed, because your whole production\nenvironment depends on these tools. But it\u2019s an important piece of the puzzle.\nTo read more about tools like Cilium and CoreOS Clair, and best practices for\nregistries and images, you can download the pdf <a href=\"\/blog\/scaledocker\">Play Safe with Docker and\nContainer Security<\/a>.<\/p>\n"},{"title":"Orbiter an OSS Docker Swarm Autoscaler","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/orbiter-docker-swarm-autoscaler"}},"description":"Orbiter is an open source project designed to become a cross-provider autoscaler. At the moment it works as a zero-configuration autoscaler for Docker Swarm. It also has a basic implementation to autoscale DigitalOcean. This project is designed with InfluxData, a company that provides OSS solutions like InfluxDB, Kapacitor and Telegraf. We are going to use all these tools to create an autoscaling policy for your Docker Swarm services.","image":"https:\/\/gianarb.it\/img\/swarm.gif","updated":"2017-04-22T08:08:27+00:00","published":"2017-04-22T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/orbiter-docker-swarm-autoscaler","content":"<iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/Q1xfmfML8ok\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n<p>My presentation at the Docker HQ in San Francisco.<\/p>\n\n<h2 id=\"autoscaling\">Autoscaling<\/h2>\n<p>One of the Cloud\u2019s dreams is a nice world where everything magically happens. You\nhave unlimited resources, you just use what you need and you pay for what you use.\nFor that, AWS provides a service called autoscaling groups, for example. You can\nspecify some limits and some expectations about a group of servers, and AWS\nmatches your expectations for you.\nIf you are able to provision a node automatically, you can use CloudWatch\nto set some alerts. When AWS triggers these alerts, the autoscaling group\ncreates or removes one or more instances.<\/p>\n\n<h3 id=\"lets-try-with-an-example\">Let\u2019s try with an example<\/h3>\n<p>You have a web service and you know that for 2 hours every day 4\nEC2 instances are not enough because you have a lot of traffic: you need 10 of them.\nYou can create an autoscaling group and set some alerts:<\/p>\n\n<ol>\n  <li>When the memory usage is more than 65% for 3 minutes, start 3 new servers.<\/li>\n  <li>When the memory usage is less than 30% for 5 minutes, stop 2 servers.<\/li>\n<\/ol>\n\n<p>Just to give you an idea. In this way AWS knows what you need, and you don\u2019t have to\nsit in front of your laptop waiting for something to happen. You can go do something\nfun instead.<\/p>\n\n<p>It\u2019s useful: if you think about a daily magazine, it usually has a\nlot of traffic at the beginning of the day, when people are\nreading the news. That\u2019s an easy scenario.<\/p>\n\n<p>But it can also happen that a new post shared on Reddit or HackerNews gets a\nlot of traffic, and the last thing that you want is to go down right\nduring that spike!<\/p>\n\n<h3 id=\"actors\">Actors<\/h3>\n\n<p>There are different actors in this comedy. First of all, our cluster needs to be\nmanageable from outside via an API. In this example I am going to use Docker Swarm;\nOrbiter supports a basic implementation for DigitalOcean but it still requires\nsome tuning.<\/p>\n\n<p>You need a time series database or analytics platform that can\nfire webhooks to trigger Orbiter based on some metrics.<\/p>\n\n<p>We ran a demo with the TICK Stack (InfluxDB, Telegraf, and Kapacitor) a few days ago.\nIt\u2019s available <a href=\"https:\/\/www.influxdata.com\/resources\/influxdata-helps-docker-auto-scale-monitoring\/?ao_campid=70137000000Jgw7\">at this\nlink<\/a>.<\/p>\n\n<p>In the end you need to deploy <a href=\"https:\/\/github.com\/gianarb\/orbiter\">orbiter<\/a>.<\/p>\n\n<h3 id=\"orbiter-design-and-arch\">Orbiter, design and arch<\/h3>\n\n<p>Orbiter is an open source tool designed to be a cross platform autoscaler. It is\nwritten in Go and it provides a REST API to handle scale requests.<\/p>\n\n<p>It provides one entrypoint:<\/p>\n\n<pre><code class=\"language-sh\">curl -v -d '{\"direction\": true}' \\\n    http:\/\/localhost:8000\/handle\/infra_scale\/docker\n<\/code><\/pre>\n\n<ul>\n  <li><code>direction<\/code> represents how to scale your service: true means up, false means\ndown.<\/li>\n  <li><code>\/handle\/infra_scale\/docker<\/code> identifies the autoscaling group.\n<code>infra_scale<\/code> is the autoscaler name, <code>docker<\/code> is the policy name.<\/li>\n<\/ul>\n\n<p><code>infra_scale<\/code>, for example, contains information about the cluster manager: where\nit is and what it is. Docker, DigitalOcean or whatever.<\/p>\n\n<p>The policy describes how an application scales. If you know Docker Swarm a bit,\n<code>docker<\/code> is the name of the service.<\/p>\n\n<p>Orbiter supports two different boot methods. One is via configuration:<\/p>\n\n<pre><code class=\"language-yaml\">autoscalers:\n  infra_scale:\n    provider: swarm\n    parameters:\n    policies:\n      docker:\n        up: 4\n        down: 3\n<\/code><\/pre>\n\n<p>The second one is currently only supported by Docker Swarm, and it\u2019s called\nautodetection. In practice, when you start Orbiter it looks for a Docker\nSwarm up and running. If it finds Swarm, it lists all the services\ndeployed and manages all the services labeled with <code>orbiter=true<\/code>.<\/p>\n\n<p>By default up and down are set to 1, but you can override them with the labels\n<code>orbiter.up=3<\/code> and <code>orbiter.down=2<\/code>.<\/p>\n\n<p>Let\u2019s suppose we have a Docker Swarm cluster with 3 nodes.<\/p>\n\n<pre><code class=\"language-bash\">$ docker node ls\nID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS\n11btq767ecqhelidu8ah1osfp *  node1     Ready   Active        Leader\nptre8d4bjccqi6ml6z445u0mz    node2     Ready   Active\nq5rwi3cej9gc1vqyscwfau640    node3     Ready   Active\n<\/code><\/pre>\n\n<p>I deployed a service called <a href=\"https:\/\/github.com\/gianarb\/micro\">gianarb\/micro<\/a>.\nIt is an open source demo application. There are different versions; I deployed\nversion 1.0.0. It only shows the current IP of the container\/server.<\/p>\n\n<pre><code class=\"language-bash\">docker service create --label orbiter=true \\\n    --name micro --replicas 3 \\\n    -p 8080:8000 gianarb\/micro:1.0.0\n<\/code><\/pre>\n\n<p>You can check the number of tasks running with the command:<\/p>\n\n<pre><code class=\"language-bash\">$ docker service ps micro\nID                  NAME                IMAGE                 NODE\nDESIRED STATE       CURRENT STATE            ERROR\n         PORTS\n         onsqgriv3nel        micro.1             gianarb\/micro:1.0.0   node3\n         Running             Running 51 seconds ago\n\n         yxtxyder7bs3        micro.2             gianarb\/micro:1.0.0   node1\n         Running             Running 51 seconds ago\n\n         lyzxxdc00052        micro.3             gianarb\/micro:1.0.0   node2\n         Running             Running 52 seconds ago\n\n<\/code><\/pre>\n\n<p>At this point you can visit port <code>8080<\/code> of your cluster to have a look at the\nservice, but for this demo it doesn\u2019t really matter. We are going to start Orbiter\nand we are going to trigger a scaling policy to simulate a request made by our\nmonitoring tool.<\/p>\n\n<pre><code class=\"language-bash\">docker service create --name orbiter \\\n    --mount type=bind,source=\/var\/run\/docker.sock,destination=\/var\/run\/docker.sock \\\n    -p 8000:8000 --constraint node.role==manager \\\n    -e DOCKER_HOST=unix:\/\/\/var\/run\/docker.sock \\\n    gianarb\/orbiter daemon --debug\n<\/code><\/pre>\n\n<p>I am using Docker to deploy Orbiter as a service. I am using the Unix socket to\ncommunicate with Docker Swarm, and I am deploying this service onto a <code>manager<\/code>\nbecause it needs write permission to start and stop tasks, and this can be\ndone only on a manager. You can configure Orbiter with the variable\n<code>DOCKER_HOST<\/code> to use the REST API; in this way you don\u2019t have this constraint.
This configuration is simply very easy to show in a demo like this one.<\/p>\n\n<pre><code class=\"language-bash\">$ docker service logs orbiter\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=info\nmsg=\"orbiter started\"\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=debug\nmsg=\"Daemon started in debug mode\"\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=info\nmsg=\"Starting in auto-detection mode.\"\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=info\nmsg=\"Successfully connected to a Docker daemon\"\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=debug\nmsg=\"autodetect_swarm\/micro added to orbiter. UP 1, DOWN 1\"\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:24:56Z\" level=info\nmsg=\"API Server run on port :8000\"\n<\/code><\/pre>\n<p>As you can see in the logs, the API runs on port 8000 and Orbiter\nalready detected a service called <code>micro<\/code>, the one that we deployed before, and\nit auto-created an autoscaling group called <code>autodetect_swarm\/micro<\/code>.\nThis is the unique name that we can use when we trigger our scale request.<\/p>\n\n<pre><code class=\"language-bash\">$ curl -d '{\"direction\": true}' -v\nhttp:\/\/10.0.57.3:8000\/handle\/autodetect_swarm\/micro\n*   Trying 10.0.57.3...\n* TCP_NODELAY set\n* Connected to 10.0.57.3 (10.0.57.3) port 8000 (#0)\n&gt; POST \/handle\/autodetect_swarm\/micro HTTP\/1.1\n&gt; Host: 10.0.57.3:8000\n&gt; User-Agent: curl\/7.52.1\n&gt; Accept: *\/*\n&gt; Content-Length: 19\n&gt; Content-Type: application\/x-www-form-urlencoded\n&gt;\n* upload completely sent off: 19 out of 19 bytes\n&lt; HTTP\/1.1 200 OK\n&lt; Content-Type: application\/json\n&lt; Date: Tue, 18 Apr 2017 09:30:35 GMT\n&lt; Content-Length: 0\n&lt;\n* Curl_http_done: called premature == 0\n* Connection #0 to host 10.0.57.3 left intact\n<\/code><\/pre>\n\n<p>With that cURL I simulated a scale request, and as you can see in the logs below,\nOrbiter detected the request and scaled our service <code>micro<\/code> up by 1 task.<\/p>\n\n<pre><code class=\"language-bash\">$ docker service logs orbiter\norbiter.1.zop1qkwa1qxy@node1    | POST \/handle\/autodetect_swarm\/micro HTTP\/1.1\norbiter.1.zop1qkwa1qxy@node1    | Host: 10.0.57.3:8000\norbiter.1.zop1qkwa1qxy@node1    | Accept: *\/*\norbiter.1.zop1qkwa1qxy@node1    | Content-Length: 19\norbiter.1.zop1qkwa1qxy@node1    | Content-Type:\napplication\/x-www-form-urlencoded\norbiter.1.zop1qkwa1qxy@node1    | User-Agent: curl\/7.52.1\norbiter.1.zop1qkwa1qxy@node1    |\norbiter.1.zop1qkwa1qxy@node1    | {\"direction\": true}\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:30:35Z\" level=info\nmsg=\"Received a new request to scale up micro with 1 task.\" direc\ntion=true service=micro\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:30:35Z\" level=debug\nmsg=\"Service micro scaled from 3 to 4\" provider=swarm\norbiter.1.zop1qkwa1qxy@node1    | time=\"2017-04-18T09:30:35Z\" level=info\nmsg=\"Service micro scaled up.\" direction=true service=micro\n<\/code><\/pre>\n\n<p>We can verify the current number of tasks running for <code>micro<\/code>, and we\ncan see that it\u2019s not 3 as before, but 4.<\/p>\n\n<pre><code class=\"language-bash\">$ docker service ls\nID                  NAME                MODE                REPLICAS\nIMAGE\nazi8zyeor5eb        micro               replicated          4\/4\ngianarb\/micro:1.0.0\nezklgb6uak8b        orbiter             replicated          1\/1\ngianarb\/orbiter:latest\n<\/code><\/pre>\n\n<p>This project is open source at\n<a href=\"https:\/\/github.com\/gianarb\/orbiter\">github.com\/gianarb\/orbiter<\/a>: you can have a look at\nit, try it and leave some feedback, or open a request if you need something different.<\/p>\n\n<p>PRs are also welcome if you are working with a different cluster manager or with a\ndifferent provider; adding a new one is very easy. It\u2019s just a new interface to\nimplement.<\/p>\n\n"},{"title":"LinuxKit operating system built for container","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/linuxkit-operating-system-build-for-containers"}},"description":"LinuxKit is a new tool presented during DockerCon 2017, built by Docker to manage cross-architecture and cross-kernel testing. LinuxKit is a secure, portable and lean operating system built for containers. It supports different hypervisors such as macOS hyperkit or QEMU to run test suites on different architectures. In this article I am showing you some basic concepts about this tool: how it works and why it can be useful.","image":"https:\/\/gianarb.it\/img\/builder.gif","updated":"2017-04-18T10:08:27+00:00","published":"2017-04-18T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/linuxkit-operating-system-build-for-containers","content":"<p>LinuxKit is a new project presented by Docker during DockerCon 2017. If we\nlook at the description of the project on\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\">GitHub<\/a>:<\/p>\n\n<blockquote>\n  <p>A secure, portable and lean operating system built for containers<\/p>\n<\/blockquote>\n\n<p>I am already feeling excited. I was an observer of the project when <a href=\"https:\/\/twitter.com\/justincormack\">Justin\nCormack<\/a> and the other\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/graphs\/contributors\">contributors<\/a> were\nworking on a private repository. I was invited as part of the ci-wg group in the\nCNCF and I have loved this project from the first day.<\/p>\n\n<p>You can think of LinuxKit as a builder for Linux operating systems, everything\nbased on containers.<\/p>\n\n<p>It\u2019s a project that can sit behind your continuous integration system to allow\nus to test on different kernel versions and distributions. You can build light kernels\nwith all the services that you need, and you can create different outputs\nrunnable on cloud providers such as Google Cloud Platform, with Docker or with QEMU.<\/p>\n\n<h2 id=\"continuous-delivery-new-model\">Continuous delivery, new model<\/h2>\n\n<p>I am not really confident with Google Cloud Platform, so just to move on I am\ngoing to do some math with AWS as the provider.\nLet\u2019s suppose that I have the most common continuous integration setup: one big\nbox always up and running, configured to support all your projects, or, if you are\nalready good, you are running containers to have separated and isolated\nenvironments.<\/p>\n\n<p>Let\u2019s suppose that your Jenkins is running all the time on an m3.xlarge:<\/p>\n\n<p><code>m3.xlarge<\/code> used 100% every month costs 194.72$.<\/p>\n\n<p>Let\u2019s have a dream.
 You have a very small server with just a frontend\napplication for your CI, and all jobs run in separate instances, as tiny as\na t2.small.<\/p>\n\n<p><code>t2.small<\/code> used for only 1 hour costs 0.72$.<\/p>\n\n<p>I calculated 1 hour because it\u2019s the minimum that you can pay, and I hope that\nyour CI job runs for less than 1 hour.\nIt\u2019s easy math to calculate the number of builds you need to run to pay what you\nwere paying before:<\/p>\n\n<p>194.72 \/ 0.72 ~ 270 builds every month.<\/p>\n\n<p>If you are running fewer than 270 builds a month you can save some money\ntoo. But you have other benefits:<\/p>\n\n<ol>\n  <li>More jobs, more instances. Very easy to scale. Easier than Jenkins\nmaster\/slave and so on.<\/li>\n  <li>How many times during the holidays is your Jenkins still up and running with\nnothing to do? During those days you pay just for the frontend\napp.<\/li>\n<\/ol>\n\n<p>And these are just the benefits of a different setup for your continuous\ndelivery.<\/p>\n\n<h2 id=\"linuxkit-ci-implementation\">LinuxKit CI implementation<\/h2>\n\n<p>There is a directory called\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/tree\/master\/test\">.\/test<\/a> that contains\nsome LinuxKit use cases, but I am going to explain in practice how LinuxKit itself is\ntested. Because it uses itself, awesome!<\/p>\n\n<p>First you need to download and compile LinuxKit:<\/p>\n<pre><code class=\"language-shell\">git clone https:\/\/github.com\/linuxkit\/linuxkit $GOPATH\/src\/github.com\/linuxkit\/linuxkit\nmake\n.\/bin\/moby\n<\/code><\/pre>\n<p>You can move it into your <code>$PATH<\/code> with <code>make install<\/code>.<\/p>\n\n<pre><code>$ moby\nPlease specify a command.\n\nUSAGE: moby [options] COMMAND\n\nCommands:\n  build       Build a Moby image from a YAML file\n  run         Run a Moby image on a local hypervisor or remote cloud\n  version     Print version information\n  help        Print this message\n\nRun 'moby COMMAND --help' for more information on the command\n\nOptions:\n  -q    Quiet execution\n  -v    Verbose execution\n<\/code><\/pre>\n\n<p>At the moment the CLI is very simple; the most important commands are build and\nrun. LinuxKit is based on a YAML file that you can use to describe your kernel,\nwith all the applications and all the services that you need. Let\u2019s start with\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/blob\/master\/test\/test.yml\">linuxkit\/test\/test.yml<\/a>.<\/p>\n\n<pre><code class=\"language-yaml\">kernel:\n  image: \"mobylinux\/kernel:4.9.x\"\n  cmdline: \"console=ttyS0\"\ninit:\n  - mobylinux\/init:8375addb923b8b88b2209740309c92aa5f2a4f9d\n  - mobylinux\/runc:b0fb122e10dbb7e4e45115177a61a3f8d68c19a9\n  - mobylinux\/containerd:18eaf72f3f4f9a9f29ca1951f66df701f873060b\n  - mobylinux\/ca-certificates:eabc5a6e59f05aa91529d80e9a595b85b046f935\nonboot:\n  - name: dhcpcd\n    image: \"mobylinux\/dhcpcd:0d4012269cb142972fed8542fbdc3ff5a7b695cd\"\n    binds:\n     - \/var:\/var\n     - \/tmp:\/etc\n    capabilities:\n     - CAP_NET_ADMIN\n     - CAP_NET_BIND_SERVICE\n     - CAP_NET_RAW\n    net: host\n    command: [\"\/sbin\/dhcpcd\", \"--nobackground\", \"-f\", \"\/dhcpcd.conf\", \"-1\"]\n  - name: check\n    image: \"mobylinux\/check:c9e41ab96b3ea6a3ced97634751e20d12a5bf52f\"\n    pid: host\n    capabilities:\n     - CAP_SYS_BOOT\n    readonly: true\noutputs:\n  - format: kernel+initrd\n  - format: iso-bios\n  - format: iso-efi\n  - format: gcp-img\n<\/code><\/pre>\n\n<p>LinuxKit builds everything inside a container; it means that you don\u2019t need a\nlot of dependencies and it\u2019s very easy to use. It generates different <code>outputs<\/code>, in\nthis case <code>kernel+initrd<\/code>, <code>iso-bios<\/code>, <code>iso-efi<\/code> and <code>gcp-img<\/code>, depending on the\nplatform you are interested in using to run your kernel.<\/p>\n\n<p>Let me explain a bit how this YAML works. You can see that there are\ndifferent primary sections: <code>kernel<\/code>, <code>init<\/code>, <code>onboot<\/code>, <code>services<\/code> and so on.<\/p>\n\n<p>Pretty much all of them contain the keyword <code>image<\/code> because, as I said before,\neverything is built on containers; in this example they are stored in\n<a href=\"https:\/\/hub.docker.com\/u\/mobylinux\/\">hub.docker.com\/u\/mobylinux\/<\/a>.<\/p>\n\n<p>The base kernel is <code>mobylinux\/kernel:4.9.x<\/code>. I am just reporting what the\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit#yaml-specification\">README.md<\/a> says:<\/p>\n\n<ul>\n  <li><code>kernel<\/code> specifies a kernel Docker image, containing a kernel and a\nfilesystem tarball, eg containing modules. The example kernels are built from\n<code>kernel\/<\/code><\/li>\n  <li><code>init<\/code> is the base <code>init<\/code> process Docker image, which is unpacked as the base\nsystem, containing <code>init<\/code>, <code>containerd<\/code>, <code>runc<\/code> and a few tools. Built from\n<code>pkg\/init\/<\/code><\/li>\n  <li><code>onboot<\/code> are the system containers, executed sequentially in order. They\nshould terminate quickly when done.<\/li>\n  <li><code>services<\/code> is the system services, which normally run for the whole time the\nsystem is up<\/li>\n  <li><code>files<\/code> are additional files to add to the image<\/li>\n  <li><code>outputs<\/code> are descriptions of what to build, such as ISOs.<\/li>\n<\/ul>\n\n<p>At this point we can try it. If you are on MacOS, as I was, you don\u2019t need to\ninstall anything: one of the runners supported by <code>linuxkit<\/code> is <code>hyperkit<\/code>, which\nmeans that everything is already available in your system.<\/p>\n\n<p><code>.\/test<\/code> contains different test suites, but for now we will stay focused on the\n<code>.\/test\/check<\/code> directory. It contains a set of checks to validate how the\nkernel was built by LinuxKit.
 They are the smoke tests that run on each\nnew pull request created on the repository, for example.<\/p>\n\n<p>As I said, everything runs inside a container. If you look into the check\ndirectory there is a Makefile that builds a mobylinux\/check image; that image\nis run by LinuxKit, in the <code>test.yml<\/code> file:<\/p>\n\n<pre><code class=\"language-yaml\">onboot:\n  - name: check\n    image: \"mobylinux\/check:c9e41ab96b3ea6a3ced97634751e20d12a5bf52f\"\n    pid: host\n    capabilities:\n     - CAP_SYS_BOOT\n    readonly: true\n<\/code><\/pre>\n\n<p>You can use the\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/blob\/master\/test\/check\/Makefile\">Makefile<\/a>\ninside the check directory to build a new version of check; you can just use\nthe command <code>make<\/code>.<\/p>\n\n<p>When you have the right version of your test, you can build the image used by moby:<\/p>\n\n<pre><code>cd $GOPATH\/src\/github.com\/linuxkit\/linuxkit\nmoby build test\/test.yml\n<\/code><\/pre>\n\n<p>Part of the output is:<\/p>\n\n<pre><code class=\"language-shell\">Create outputs:\n  test-bzImage test-initrd.img test-cmdline\n  test.iso\n  test-efi.iso\n  test.img.tar.gz\n<\/code><\/pre>\n\n<p>And if you look into the directory you can see all these files\nin the root. These files can be run from QEMU, Google Cloud Platform,\nhyperkit and so on.<\/p>\n\n<pre><code class=\"language-shell\">moby run test\n<\/code><\/pre>\n<p>On MacOS, with this command, LinuxKit uses hyperkit to start a VM. I cannot\ncopy-paste all the output, but you can see the hypervisor logs:<\/p>\n\n<pre><code>virtio-net-vpnkit: initialising, opts=\"path=\/Users\/gianlucaarbezzano\/Library\/Containers\/com.docker.docker\/Data\/s50\"\nvirtio-net-vpnkit: magic=VMN3T version=1 commit=0123456789012345678901234567890123456789\nConnection established with MAC=02:50:00:00:00:04 and MTU 1500\nearly console in extract_kernel\ninput_data: 0x0000000001f2c3b4\ninput_len: 0x000000000067b1e5\noutput: 0x0000000001000000\noutput_len: 0x0000000001595280\nkernel_total_size: 0x000000000118a000\nbooted via startup_32()\nPhysical KASLR using RDRAND RDTSC...\nVirtual KASLR using RDRAND RDTSC...\n\nDecompressing Linux... Parsing ELF... Performing relocations... done.\nBooting the kernel.\n[    0.000000] Linux version 4.9.21-moby (root@84baa8e89c00) (gcc version 6.2.1 20160822 (Alpine 6.2.1) ) #1 SMP Sun Apr 9 22:21:32 UTC 2017\n[    0.000000] Command line: earlyprintk=serial console=ttyS0\n[    0.000000] x86\/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'\n[    0.000000] x86\/fpu: Supporting XSAVE feature 0x002: 'SSE registers'\n[    0.000000] x86\/fpu: Supporting XSAVE feature 0x004: 'AVX registers'\n[    0.000000] x86\/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256\n[    0.000000] x86\/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.\n[    0.000000] x86\/fpu: Using 'eager' FPU context switches.\n[    0.000000] e820: BIOS-provided physical RAM map:\n[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable\n[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable\n<\/code><\/pre>\n<p>When the VM is ready, LinuxKit starts everything in <code>init<\/code> and <code>onboot<\/code>. The logs\nare easy to follow, as the <code>test.yml<\/code> starts <code>containerd<\/code> and <code>runc<\/code>:<\/p>\n\n<pre><code>init:\n  - mobylinux\/init:8375addb923b8b88b2209740309c92aa5f2a4f9d\n  - mobylinux\/runc:b0fb122e10dbb7e4e45115177a61a3f8d68c19a9\n  - mobylinux\/containerd:18eaf72f3f4f9a9f29ca1951f66df701f873060b\n  - mobylinux\/ca-certificates:eabc5a6e59f05aa91529d80e9a595b85b046f935\nonboot:\n  - name: dhcpcd\n    image: \"mobylinux\/dhcpcd:0d4012269cb142972fed8542fbdc3ff5a7b695cd\"\n    binds:\n     - \/var:\/var\n     - \/tmp:\/etc\n    capabilities:\n     - CAP_NET_ADMIN\n     - CAP_NET_BIND_SERVICE\n     - CAP_NET_RAW\n    net: host\n    command: [\"\/sbin\/dhcpcd\", \"--nobackground\", \"-f\", \"\/dhcpcd.conf\", \"-1\"]\n  - name: check\n    image: \"mobylinux\/check:c9e41ab96b3ea6a3ced97634751e20d12a5bf52f\"\n    pid: host\n    capabilities:\n     - CAP_SYS_BOOT\n    readonly: true\n<\/code><\/pre>\n\n<pre><code>Welcome to LinuxKit\n\n\t\t\t##         .\n\t\t  ## ## ##        ==\n\t\t   ## ## ## ## ##    ===\n\t   \/\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\\___\/ ===\n\t  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ \/  ===- ~~~\n\t   \\______ o           __\/\n\t\t \\    \\         __\/\n\t\t  \\____\\_______\/\n\n\n\/ # INFO[0000] starting containerd boot...                   module=containerd\nINFO[0000] starting debug API...                         debug=\"\/run\/containerd\/debug.sock\" module=containerd\nINFO[0000] loading monitor plugin \"cgroups\"...           module=containerd\nINFO[0000] loading runtime plugin \"linux\"...             module=containerd\nINFO[0000] loading snapshot plugin \"snapshot-overlay\"...  module=containerd\nINFO[0000] loading grpc service plugin \"healthcheck-grpc\"...  module=containerd\nINFO[0000] loading grpc service plugin \"images-grpc\"...  module=containerd\nINFO[0000] loading grpc service plugin \"metrics-grpc\"...  
module=containerd\n<\/code><\/pre>\n<p>The last step is the <code>check<\/code> that runs the real test suite:<\/p>\n\n<pre><code>kernel config test succeeded!\ninfo: reading kernel config from \/proc\/config.gz ...\n\nGenerally Necessary:\n- cgroup hierarchy: properly mounted [\/sys\/fs\/cgroup]\n- CONFIG_NAMESPACES: enabled\n- CONFIG_NET_NS: enabled\n- CONFIG_PID_NS: enabled\n- CONFIG_IPC_NS: enabled\n- CONFIG_UTS_NS: enabled\n- CONFIG_CGROUPS: enabled\n- CONFIG_CGROUP_CPUACCT: enabled\n- CONFIG_CGROUP_DEVICE: enabled\n- CONFIG_CGROUP_FREEZER: enabled\n- CONFIG_CGROUP_SCHED: enabled\n\n........\n.......\n\nMoby test suite PASSED\n\n\t\t\t##         .\n\t\t  ## ## ##        ==\n\t\t   ## ## ## ## ##    ===\n\t   \/\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\\___\/ ===\n\t  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ \/  ===- ~~~\n\t   \\______ o           __\/\n\t\t \\    \\         __\/\n\t\t  \\____\\_______\/\n\n[    3.578681] ACPI: Preparing to enter system sleep state S5\n[    3.579063] reboot: Power down\n<\/code><\/pre>\n\n<p>The last log is the output of the\n<a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/blob\/master\/test\/check\/check-kernel-config.sh\">check-kernel-config.sh<\/a>\nfile.<\/p>\n\n<p>If you are on Linux you can run the same command, but by default you are going\nto use <a href=\"https:\/\/www.qemu-project.org\/\">qemu<\/a>, an open source machine emulator.<\/p>\n\n<pre><code class=\"language-bash\">sudo apt-get install qemu\n<\/code><\/pre>\n\n<p>I did some tests on my Asus Zenbook with Ubuntu. When you run <code>moby run<\/code>, this is\nthe command executed with qemu:<\/p>\n\n<pre><code>\/usr\/bin\/qemu-system-x86_64 -device virtio-rng-pci -smp 1 -m 1024 -enable-kvm\n\t-machine q35,accel=kvm:tcg -kernel test-bzImage -initrd test-initrd.img -append\n\tconsole=ttyS0 -nographic\n<\/code><\/pre>\n\n<p>By default it tests on <code>x86_64<\/code>, but qemu supports a lot of other archs and\ndevices. You can emulate an ARM board or a Raspberry Pi, for example. At the\nmoment LinuxKit is not ready to emulate other architectures, but this is a main\ngoal for the project. It\u2019s just a matter of time. It will get there soon!<\/p>\n\n<p>Detecting whether the build succeeded or failed is not as easy as you probably expect.\nThe exit status inside the VM is not the one that you get on your laptop. At the moment,\nto understand if the code in your PR is good or bad, we parse the output:<\/p>\n\n<pre><code>define check_test_log\n\t@cat $1 |grep -q 'Moby test suite PASSED'\nendef\n<\/code><\/pre>\n<p><a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/blob\/master\/Makefile\">.\/linuxkit\/Makefile<\/a><\/p>\n\n<p>Explaining how LinuxKit tests itself is, at the moment, the best way to understand how it\nworks. It is just one piece of the puzzle: if you have a look, <a href=\"https:\/\/github.com\/linuxkit\/linuxkit\/pulls\">every\nPR<\/a> has a GitHub status that points to\na website containing the logs related to that particular build. That part is not\nmanaged by LinuxKit, because LinuxKit is only the builder used to create the\nenvironment. All the rest is managed by\n<a href=\"https:\/\/github.com\/docker\/datakit\">datakit<\/a>. I will probably cover it in\nanother blog post.<\/p>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n\n<p>runc, docker, containerd, rkt, but also Prometheus, InfluxDB, Telegraf: a lot of\nprojects support different architectures and they need to run on different\nkernels with different configurations and capabilities. They need to run on your\nlaptop, on your IBM server and on a Raspberry Pi.<\/p>\n\n<p>This project is in an early state, but I understand why Docker needs something\nlike this, and, as I said, other projects are probably going to get some\nbenefit from a solution like this one. Having it open source is very good, and\nI am honored to be part of the amazing group that put this together. I just did\nsome final tests and I tried to understand how it\u2019s designed and how it works.\nThis article is the result of my tests. I hope it can help you start with the right\nmindset.<\/p>\n\n<p>My plan is to create a configuration to test InfluxDB and play a bit with <code>qemu<\/code>\nto test it on different architectures and devices. Stay around, a blog post will\ncome!<\/p>\n\n<p>Some links:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/blog.docker.com\/2017\/04\/introducing-the-moby-project\/\">INTRODUCING MOBY PROJECT: A NEW OPEN-SOURCE PROJECT TO ADVANCE THE SOFTWARE\nCONTAINERIZATION MOVEMENT<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/linuxkit\">github.com\/linuxkit<\/a><\/li>\n  <li><a href=\"https:\/\/github.com\/moby\">github.com\/moby<\/a><\/li>\n<\/ul>\n\n<p class=\"text-muted\">\n    Reviewers: <a href=\"https:\/\/twitter.com\/justincormack\">Justin Cormack<\/a>\n<\/p>\n"},{"title":"Containers why we are here","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/containers-why-we-are-here"}},"description":"Cloud computing, containers, devops: everything is moving so fast that sometimes it is very hard for big companies or CTOs to keep track of everything. What is just a new trend, and what do I really need? This post contains my opinions and a bit of history about Docker, cloud computing, AWS and containers.","image":"https:\/\/gianarb.it\/img\/container-security.png","updated":"2017-03-12T08:08:27+00:00","published":"2017-03-12T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/containers-why-we-are-here","content":"<blockquote>\n  <p>\u201cIt is change, continuing change, inevitable change, that is the dominant\nfactor in society today. No sensible decision can be made any longer without\ntaking into account not only the world as it is, but the world as it will be\u2026\nThis, in turn, means that our statesmen, our businessmen, our everyman must take\non a science fictional way of thinking\u201d  Asimov, 1981<\/p>\n<\/blockquote>\n
No sensible decision can be made any longer without\ntaking into account not only the world as it is, but the world as it will be\u2026\nThis, in turn, means that our statesmen, our businessmen, our everyman must take\non a science fictional way of thinking\u201d  Asimov, 1981<\/p>\n<\/blockquote>\n\n<h1 id=\"isolation-and-virtualization\">Isolation and Virtualization<\/h1>\n\n<p>I can see clearly two kind of invention: the ones that allow people to do\nsomething they couldn\u2019t do before and the ones that let them do something\nbetter. Fire, for example,  gave people the chance to cook food, push away wild\nbeasts and warm themselves up during cold nights. Many years later, electricity\nlet people warm their houses just by pushing a button. After wheels discovery\npeople began to travel and to trades goods, but was only with car\u2019s invention\nthat they might do it faster and efficiently.  Similarly, the web creates a huge\nnetwork, able to connect people all over the world, web application gave people\ntools to use and customise such a complex system. Under this perspective,\ncontainer is one of the main revolution of the last years, a unique tool that\nhelps with app management and development. Let\u2019s discover  something more about\nthe real story of containers.<\/p>\n\n<p>We have not a lot of documentation about why Bill Joy 18th March 1982 added\nchroot into the BSD probably to emulate him solutions and program is an isolated\nroot. That\u2019s was amazing but not enough few years later in 1991 Bill Cheswick\nextended chroot with security features provided by FreeBSD and implemented the\n\u201cjails\u201d and in 2000 he introduced what we know as the proper jails command now\nour chroots can not be anything, anywhere out of themself. When you start a\nprocess in a chroot the PID is one and there is only that process but from\noutside you can see all processes that are running in a chroot.  Our\napplications can not stay in a jail! They need to communicate with outside,\nexchange information and so on. To solve this problem in 2002 in the kernel\nversion 2.4.19 a group of developers like Eric W. Biederman, Pavel Emelyanov\nintroduced the namespace feature to manage system resources like network,\nprocess and file system.<\/p>\n\n<p>This is just a bit of history about how the ecosystem spin up, in the end of\nthis chapter we will try to understand how why Docker arrives on the scene, but\nthe main goal of this book is on another layer and on another complexity, we are\nhere to understand how manage all this things in cloud and how to design a\ndistributed system but you know the past is important to build a solid future.<\/p>\n\n<p>All this great features are now popular under the name of container, nothing\nreally news and this is one of the reason about why all this things are amazing!\nThey are under the hood from a while! Solid and tested feature put together and\nmade usable.<\/p>\n\n<p>Nothing to say about the importance for a system to being isolated: isolation\nhelps us to usefully manage resources, security and monitoring, in the best way,\nfalse problems creation in specific applications, often not even related to our\napp.<\/p>\n\n<p>The most common solution  is virtualization: you can use an hypervisor to create\nvirtual server in a single machine.  
There are different kind of virtualization:<\/p>\n\n<ul>\n  <li>Full virtualization<\/li>\n  <li>Para virtualization like Virtual Machine, Xen, VMware<\/li>\n  <li>Operating System virtualization like Containers<\/li>\n  <li>Application virtualization like JVM.<\/li>\n<\/ul>\n\n<p><img class=\"img-fluid\" src=\"\/img\/virtualization.png\" \/>\n<a href=\"https:\/\/fntlnz.wtf\/post\/why-containers\/\" target=\"_blank\"><small>img from fntlnz\u2019s blog. Thanks<\/small><\/a><\/p>\n\n<p>The main differences between them is how they abstract the layers, application,\nprocessing, network, storage and also about how the superior level interact with\nunderlying level.  For example into the Full virtualization the hardware is\nvirtualized, into the para virtualization not.<\/p>\n\n<p>Container is an operation-system-level virtualization. The main difference\nbetween Container and Virtual Machine is the layer: the first works on the\noperating system, the second on the hardware layer.<\/p>\n\n<p>When we speak about container we are focused on the application virtualization\nand on a specific feature provided by the kernel called Linux Containers (LXC):\nwhat we do when we build containers is create new isolated Linux systems into\nthe same host, it means that we can not change the operation system for example\nbecause our virtualization layer doesn\u2019t allow us to run Linux containers out of\nLinux.<\/p>\n\n<h1 id=\"the-reasons\">The reasons<\/h1>\n\n<p>Revolutions are not related to a single and specific event but come from\nmultiple movements and changes: Container is just a piece of the story.<\/p>\n\n<p>Cloud Computing allowed us to think about our infrastructure as an instable\nnumber of servers that can scale up and down, in a reasonable short amount of\ntime, with less money and without the investment requested to manage a big\ninfrastructure made of more than one datacenter across the world.<\/p>\n\n<p>As a consequence, applications that had been in a cellar, now are on Amazon Web\nService, with a load balancer and maybe different availability zone. This\nallowed little teams and medium companies, without datacenter and\ninfrastructures, to think about concept like distribution, high availability,\nredundancy.  Evolution never stop .<\/p>\n\n<p>Once our applications are running in few virtual machines, our business will\ngrow up so we start to scale up and down this servers to serve all our users.\nWe experimented few benefits but also a lot of issues related, for example, to\nthe time requested for managing this dynamism; moreover big applications are\nusually more expensive to scale.<\/p>\n\n<p>Our application can only grow but the deploy can be really expensive. We\ndiscovered that the behavior of an application is not the same across all of our\nservices and entrypoint, because few of them receive more traffic that others.\nSo, we started to split our big applications in order to make them easy to scale\nand monitor. The problem was that, in order to maintain our standard, we need to\nfind a way to keep them isolated, safe and able to communicate each others.<\/p>\n\n<p>The Microservices Architecture arrived and companies like Netflix, Amazon,\nGoogle and others counts hundreds and hundreds of little and specific of\nservices that together work to serve big and profitable products.  
Netlix is one\nof first companies that started sharing the way they build Netlix.com: with more\nthat 400 microservices, they managed feature like registration, streaming,\nrankins and all what the application provides.  At the moment, Containers are\nthe best solution for managing a dense and dynamic environment with a good\ncontrol, security and for moving your application between servers.<\/p>\n\n<p class=\"text-muted\">\n    Reviewers: Arianna Scarcella, <a href=\"https:\/\/twitter.com\/TheBurce\">Jenny Burcio<\/a>\n<\/p>\n"},{"title":"About your images, security tips","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/about-your-images-security-tips"}},"description":"Everything unnecessary in your system could be a very stupid vulnerability. We already spoke about this idea in the capability chapter and  the same rule exists when we build an image. Having  tiny images with only what our application needs to run is not just a goal in terms of distribution but also in terms of cost of maintenance and security.","image":"https:\/\/gianarb.it\/img\/container-security.png","updated":"2016-12-28T08:08:27+00:00","published":"2016-12-28T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/about-your-images-security-tips","content":"<p>Everything unnecessary in your system could be a very stupid vulnerability. We\nalready spoke about this idea in the capability chapter and  the same rule\nexists when we build an image. Having  tiny images with only what our\napplication needs to run is not just a goal in terms of distribution but also\nin terms of cost of maintenance and security.  If you have some small\nexperience with docker already you probably know the\n<a href=\"https:\/\/hub.docker.com\/_\/alpine\/\">alpine<\/a> image. It is build\nfrom the Alpine distribution and it\u2019s only 5MB size, if your application can\nrun inside it then this is a very good optimization that you can do.  What\nabout your binaries? Can your application run standalone? If the answer is yes\nyou can think about a very very minimal image. scratch is usually used as a\nbase for other images like debian and ubuntu but you can also use it to run\nyour golang binary and let me show you something with our micro application.\nIn the <a href=\"https:\/\/github.com\/gianarb\/micro\/releases\/tag\/1.0.0\">release page<\/a>,\nthere are a list of binaries already compiled and ready to be used. In this\ncase we can download the linux_386 binary.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/security-image\/micro-release.png\" \/><\/p>\n\n<pre><code class=\"language-bash\">curl -SsL https:\/\/github.com\/gianarb\/micro\/releases\/download\/1.0.0\/micro_1.0.0_linux_386 &gt; micro\n<\/code><\/pre>\n\n<p>And we know we can include this binary in the scratch image with this Dockerfile<\/p>\n\n<pre><code class=\"language-bash\">FROM scratch\n\nADD .\/micro \/micro\nEXPOSE 8000\n\nCMD [\"\/micro\"]\n<\/code><\/pre>\n\n<pre><code class=\"language-bash\">docker build -t micro-scratch .\ndocker run -p 8000:8000 micro-scratch\n<\/code><\/pre>\n\n<p>The expectation is an http application on port 8000 but the main difference is\nthe size of the image, the old one from alpine is 12M the new one is 5M.<\/p>\n\n<p>The scratch image is impossibile to use with all applications but if you have a\nbinary you can remove a lot of unused overhead.<\/p>\n\n<p>Another way to understand the status of your image is to scan it to detect\nsecurity vulnerabilities or exposures. 
\n\n<p>Another way to understand the status of your image is to scan it to detect security vulnerabilities or exposures. Docker Hub and Docker Cloud can do it for private images. This is a great feature to have in your pipeline, to scan an image after every build.<\/p>\n\n<p>CoreOS provides an open source project called <a href=\"https:\/\/github.com\/coreos\/clair\">clair<\/a> to do the same in your own environment.<\/p>\n\n<p>It is an application written in Go that exposes a set of HTTP APIs to pull, push and analyse images. It downloads vulnerabilities from different sources like the <a href=\"https:\/\/security-tracker.debian.org\/tracker\">Debian Security Tracker<\/a> or <a href=\"https:\/\/www.redhat.com\/security\/data\/metrics\/\">Red Hat Security Data<\/a>. Each vulnerability is stored in Postgres. Clair works like a static analyzer: it doesn\u2019t need to run our container to scan it, because it performs its checks directly against the filesystem of the image.<\/p>\n\n<pre><code class=\"language-bash\">docker run -it -p 5000:5000 registry\n<\/code><\/pre>\n\n<p>With this command we are running a private registry to use as a source for the images to scan.<\/p>\n\n<pre><code class=\"language-bash\">docker pull gianarb\/micro:1.0.0\ndocker tag gianarb\/micro:1.0.0 localhost:5000\/gianarb\/micro:1.0.0\ndocker push localhost:5000\/gianarb\/micro:1.0.0\n<\/code><\/pre>\n\n<p>Now that we have pushed the micro image to our private repo, we can set up clair.<\/p>\n\n<pre><code class=\"language-bash\">mkdir $HOME\/clair-test\/clair_config\ncd $HOME\/clair-test\ncurl -L https:\/\/raw.githubusercontent.com\/coreos\/clair\/v1.2.2\/config.example.yaml -o clair_config\/config.yaml\ncurl -L https:\/\/raw.githubusercontent.com\/coreos\/clair\/v1.2.2\/docker-compose.yml -o docker-compose.yml\n<\/code><\/pre>\n<p>Modify <code>$HOME\/clair-test\/clair_config\/config.yaml<\/code> and set the proper source: <code>postgresql:\/\/postgres:password@postgres:5432?sslmode=disable<\/code><\/p>\n\n<p>Now you can run the following command to start postgres and clair:<\/p>\n\n<pre><code class=\"language-bash\">docker-compose up\n<\/code><\/pre>\n\n<p>To make our test easier, we will use another CLI called hyperclair, which is just a client for this application. If you are using macOS you can follow the commands below; on another OS you can find the correct URL on the release page.<\/p>\n\n<pre><code class=\"language-bash\">curl -SsL https:\/\/github.com\/wemanity-belgium\/hyperclair\/releases\/download\/0.5.2\/hyperclair-darwin-386 &gt; ~\/hyperclair\nchmod 755 ~\/hyperclair\n<\/code><\/pre>\n\n<p>Now we have an executable in ~\/hyperclair:<\/p>\n\n<pre><code class=\"language-bash\">~\/hyperclair pull localhost:5000\/gianarb\/micro:1.0.0\n~\/hyperclair push localhost:5000\/gianarb\/micro:1.0.0\n~\/hyperclair analyze localhost:5000\/gianarb\/micro:1.0.0\n~\/hyperclair report localhost:5000\/gianarb\/micro:1.0.0\n<\/code><\/pre>\n\n<p>The generated report looks like this:<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/security-image\/report-clair.png\" \/><\/p>\n\n<p>Hyperclair is just one of the clients built on top of clair; you can decide to use it or build your own integration into your pipeline.<\/p>\n"},{"title":"Docker registry to ship and manage your containers.","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/docker-registry-to-ship-your-containers"}},"description":"Building and running containers is important, but shipping them out of your laptop is the best part! A Registry is used to store and manage your images and all their layers. 
You can use it to upload and download them across your servers and to share them with your colleagues.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-12-14T08:08:27+00:00","published":"2016-12-14T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/docker-registry-to-ship-your-containers","content":"<p>Building and running containers is important, but shipping them out of your laptop is the best part! The Registry is a very important tool that deserves a bit more attention. A Registry is used to store and manage your images and all their layers. You can use it to upload and download them across your servers and to share them with your colleagues.<\/p>\n\n<p>The most popular one is hub.docker.com; it contains different kinds of images: public, official and private. You can create an account and push your images, or build them, for example, from a GitHub or Bitbucket repository. The integration with GitHub and Bitbucket is called \u201cAutomated Builds\u201d. It allows you to create a continuous integration environment for your images: when you select \u201cCreate\u201d and \u201cAutomated Builds\u201d you can specify a repository and the path of your Dockerfile. You can specify more than one path from the same repository to build more than one image tag. In this way you can centralize and rebuild your images every time a new change is pushed to the repository. It also supports organizations, to split your images into different groups and to manage their visibility in the case of private images.<\/p>\n\n<p>By default any developer can push their images to the registry, and they will be public and free for other developers to use. Official images are public images selected and maintained by a specific organization or by members of the community; the idea is that they have a better quality, or that whoever provides them is usually involved in the development of the product. A few official images: Nginx, Redis, MySQL, PHP, Go and so on: <a href=\"https:\/\/hub.docker.com\/explore\">https:\/\/hub.docker.com\/explore<\/a>.<\/p>\n\n<p>Docker Hub offers different plans to store private images: everybody gets one for free, but if you need more you can pay for a plan and store more.<\/p>\n\n<p>The Registry is not just a tool, it\u2019s a specification: it describes how to expose capabilities such as pull, push, search and so on. This allowed the ecosystem to implement these rules in other projects while preserving compatibility with the Docker client and with the other runtime engines that use these capabilities. It\u2019s for this reason that other platforms such as Kubernetes and Cloud Foundry support downloading from Docker Hub. The specification has two versions, v1 and v2; the most famous registries implement both standards and fall back from v2 to v1 for features that are not supported yet. For example, Search is currently supported only in v1, not in v2.<\/p>\n\n<p>If you are looking for an in-house solution, you have different tools available online. The first one is distribution. It is provided by Docker, it\u2019s open source and it offers a very small registry that you can run on your own server. It also supports different storage backends, like the local filesystem and S3. This feature is very interesting because the size of the images and the number of layers usually grow very fast, and you also need to keep them safe with backup and redundancy policies for high availability. This is very important: if your environment is based on containers, your registry is a core part of your company. Let\u2019s start a Docker Distribution:<\/p>\n\n<pre><code class=\"language-bash\">$ docker run -d -p 5000:5000 --name registry registry:2\n<\/code><\/pre>
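\n\n<p>The same image can be pointed at S3 instead of the local filesystem through environment variables; a sketch, where the region, bucket and credentials are placeholders to replace with your own:<\/p>\n\n<pre><code class=\"language-bash\"># same registry, S3-backed storage (all values are placeholders)\n$ docker run -d -p 5000:5000 --name registry \\\n    -e REGISTRY_STORAGE=s3 \\\n    -e REGISTRY_STORAGE_S3_REGION=us-east-1 \\\n    -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \\\n    -e REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey \\\n    -e REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey \\\n    registry:2\n<\/code><\/pre>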
\n\n<p>In Docker the default registry is hub.docker.com; it means that when we push or pull an image we are reaching this registry:<\/p>\n\n<pre><code class=\"language-bash\">$ docker pull alpine\n<\/code><\/pre>\n\n<p>To push our images to another registry we need to tag them:<\/p>\n\n<pre><code class=\"language-bash\">$ docker tag alpine 127.0.0.1:5000\/alpine\n<\/code><\/pre>\n\n<p>With this command you tagged alpine for the registry at 127.0.0.1:5000 because, as we said in previous chapters, the name of an image contains a lot of information:<\/p>\n\n<pre><code>REGISTRY\/NAME:VERSION\n<\/code><\/pre>\n\n<p>The default registry is hub.docker.com. A name can be as simple as alpine, or carry a username, like matt\/alpine, and you can pin a specific build with a version: you can use semver or, for example, the sha of a commit. The default VERSION is latest.<\/p>
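\n\n<p>For example, these are all valid names for the same image on a local registry (the username and the sha are placeholders):<\/p>\n\n<pre><code class=\"language-bash\">$ docker tag alpine 127.0.0.1:5000\/matt\/alpine:1.2.0    # pinned with semver\n$ docker tag alpine 127.0.0.1:5000\/matt\/alpine:3e5a11b  # pinned with a commit sha\n$ docker tag alpine 127.0.0.1:5000\/matt\/alpine          # implicit :latest\n<\/code><\/pre>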
\n\n<p>Now that we have a new tag we can push it to and pull it from our registry:<\/p>\n\n<pre><code class=\"language-bash\">$ docker push 127.0.0.1:5000\/alpine\n$ docker pull 127.0.0.1:5000\/alpine\n<\/code><\/pre>\n\n<p>A very important thing to remember when you start a custom registry is that every layer of every build is stored, so it\u2019s very easy to end up with a big registry: you need to monitor the instance to be sure that your server has enough disk space, and you also need to take care of high availability. In a real environment the registry is the core of your infrastructure: developers use it to pull and push builds and also to put versions in production. Take care of your registry.<\/p>\n\n<p>Other than the registry provided by Docker there are a few alternatives. <a href=\"https:\/\/www.sonatype.com\/nexus-repository-sonatype\">Nexus<\/a> is a repository manager that supports a lot of languages and packages; if you are a Java developer you know it. Nexus supports Docker Registry API v1 and v2; the Docker registry specification is young, but it already has two versions.<\/p>\n\n<p>We can use the image provided by Sonatype and start our Nexus repository:<\/p>\n\n<pre><code class=\"language-bash\">$ docker run -d -p 8082:8082 -p 8081:8081 \\\n    -v \/tmp\/sonata:\/sonatype-work --name nexus sonatype\/nexus3\n$ docker logs -f nexus\n<\/code><\/pre>\n\n<p>When the log tells us that Nexus is ready, we can reach the UI from our browser at http:\/\/localhost:8081\/, or at the IP of your Docker Machine if you are using one. The default credentials are username admin and password admin123.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/docker-registry\/nexus-image-loaded.png\" \/><\/p>\n\n<p>First of all we need to create a new Hosted Repository for Docker: press the Settings icon at the top left of the page, then Repositories and Create Repository. I called mine mydocker, and you need to specify an HTTP port for that repository; we exposed port 8082 during the run, and for this reason I chose 8082.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/docker-registry\/nexus-create-repo.png\" \/><\/p>\n\n<p>Nexus has different kinds of repositories: Hosted means that it is self-hosted, but you can also create a Proxy Repository to proxy, for example, the official Docker Hub.\nNow we need to log in to our Docker registry:<\/p>\n\n<pre><code class=\"language-bash\">$ docker login 127.0.0.1:8082\n<\/code><\/pre>\n\n<p>Now we can tag an alpine image and push the tag to the repository:<\/p>\n\n<pre><code class=\"language-bash\">$ docker tag alpine 127.0.0.1:8082\/alpine\n$ docker push 127.0.0.1:8082\/alpine\n<\/code><\/pre>\n\n<p>You can go to Assets, click on the mydocker repository and see that your image is correctly stored.<\/p>\n\n<p><a href=\"https:\/\/about.gitlab.com\/\">GitLab<\/a> also has a container registry. GitLab uses it to manage builds, and it\u2019s available from version 8.8 if you are already using this tool.<\/p>\n\n<p class=\"text-muted\">Thanks <a href=\"https:\/\/twitter.com\/kishoreyekkanti\" target=\"_blank\">Kishore Yekkanti<\/a>, <a href=\"https:\/\/twitter.com\/liuggio\" target=\"_blank\">Giulio De Donato<\/a> for your review.<\/p>\n\n<div class=\"post row\">\n  <div class=\"col-md-12\">\n      <div class=\"bs-callout bs-callout-info row\">\n\t<div class=\"row\">\n\t\t<div class=\"col-md-12\">\n\t\t\t<h2><a href=\"\/\/gianarb.it\/blog\/docker-the-fundamentals\" target=\"_blank\">get \"Docker the Fundamentals\"<\/a> <small>from \"Drive your boat as a Captain\"<\/small><\/h2>\n\t\t<\/div>\n\t<\/div>\n\t<div class=\"row\">\n\t\t<div class=\"col-md-3\">\n\t\t\t<a href=\"\/\/gianarb.it\/blog\/docker-the-fundamentals\" target=\"_blank\"><img src=\"\/img\/the-fundamentals.jpg\" class=\"img-fluid\" \/><\/a>\n\t\t<\/div>\n\t\t<div class=\"col-md-9\">\n\t\t\t<p>\n\t\t\tYou can get Chapter 2 of the book <a href=\"\/blog\/scaledocker\" target=\"_blank\">\"Drive your boat as a Captain\"<\/a>: just click on the cover and leave your email to receive a free copy.<\/p>\n\t\t\t<p>This chapter is a getting started guide for Docker Engine and the basic concepts around registries, pull, push and so on. It's a good way to start from zero with Docker.<\/p>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n\n  <\/div>\n<\/div>\n"},{"title":"Continuous Integration and silent checks. You are looking in the wrong place","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/continuous-integration-and-silent-checks"}},"description":"Good and bad practices when you set up a continuous integration job. Silent checks are not a good practice, but analyzing your code is the perfect way to understand how your codebase is evolving.","image":"https:\/\/gianarb.it\/img\/jenkins.png","updated":"2016-11-18T10:08:27+00:00","published":"2016-11-18T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/continuous-integration-and-silent-checks","content":"<p>Continuous Integration is the process of merging all the developers\u2019 working copies into a shared mainline several times a day. 
In practice, it is when you have in place a system that allows you to trust all the changes the developers make in a short period of time, so that the code is compliant and ready to be pushed to production.<\/p>\n\n<p>There are a lot of different ways to do CI, but I will stay focused on one very important aspect: you need a policy made of a series of checks that you can easily automate. All these steps, executed on every change, allow you to mark the new code as <code>ready<\/code>.<\/p>\n\n<p>Automation is an important part of keeping your integration continuous. Usually there is also a human review of the code: if one or more people mark your code as compliant and the continuous integration system agrees with them, your code can be merged. This is the only manual step.<\/p>\n\n<p>But let\u2019s talk about what I call \u201cSilent Checks\u201d; they are really not one of our best inventions. Silent checks are like cigarettes: everybody knows they are not so good, but nobody cares.<\/p>\n\n<p>Usually your CI system uses exit codes to understand if a check is good or bad: your command comes back with <code>0<\/code> in case of success, or with another number if something fails. Sometimes you find checks in your continuous integration that put the status code in a silent mode: the check fails, but it is not considered important enough.<\/p>\n\n<p><img class=\"img-fluid\" src=\"\/img\/the-wolf-ci.jpeg\" alt=\"continuous integration party\" \/><\/p>\n\n<p>You have a check that runs, but you are not asking people to care about the result, probably because it\u2019s not important enough. There are a few disadvantages to this approach:<\/p>\n\n<ul>\n  <li>That check is making your job slower.<\/li>\n  <li>If the job doesn\u2019t fail, nobody cares about the optional check, and it will never really fail for anyone.<\/li>\n  <li>When a job fails you have to scroll and jump over all the logs generated by the optional check. It produces very long logs, because it usually fails. There is more: your coworkers forget about this check, and they ping you about its errors.<\/li>\n<\/ul>\n\n<p>Analyzing your code is very important, and there are other strategies you can use to avoid this inconvenience. Usually silent checks are put in place during a migration, when they are important to monitor how it is going. They are just in the wrong place. You can move them into a separate job, collect their results, analyse what you need to analyse and monitor trends in how your team works.<\/p>
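\n\n<p>A minimal sketch of such a separate job: it runs the optional check on a schedule and records the number of warnings as a trend, instead of polluting the main pipeline. The linter and the metrics endpoint here are placeholders for whatever your stack uses:<\/p>\n\n<pre><code class=\"language-bash\">#!\/bin\/sh\n# nightly job: run the optional check and store the result as a metric,\n# e.g. as a point in InfluxDB. golint and the endpoint are examples.\ncount=$(golint .\/... | wc -l)\ncurl -XPOST \"http:\/\/influxdb:8086\/write?db=ci\" \\\n  --data-binary \"lint_warnings,repo=micro value=${count}\"\n<\/code><\/pre>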
\n\n<p>I saw a TEDx talk by Adam Tornhill. He talked about analyzing software with forensic psychology. This topic is great! You can get a lot of information about your application from who is writing the code.<\/p>\n\n<div style=\"text-align:center\">\n<iframe width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/qJ_hplxTYJw\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n<\/div>\n\n<p>Trends and monitoring are useful not only to understand how your application works: they are fundamental to understand how your team is working and how they feel, and to catch how your codebase is evolving. They are really important, and if you are disciplined enough to have a good monitoring system for these metrics, you are really in a good position! You just need to understand that inserting them into the continuous integration flow is not a good idea.<\/p>\n"},{"title":"Docker Bench Security","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/Docker-Security-Benchmark"}},"description":"Container security is a hot topic because today containers are everywhere, including production. It means that we need to trust this technology and start thinking about best practices and tools to make our container environments safe.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-11-15T10:08:27+00:00","published":"2016-11-15T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/Docker-Security-Benchmark","content":"<p>Frequently, best practices help you to have a safe environment. <a href=\"https:\/\/github.com\/docker\/docker-bench-security\">docker-bench-security<\/a> is an open source project that runs in a container and scans your environment to report a set of common mistakes, like:<\/p>\n\n<ul>\n  <li>Your kernel is too old<\/li>\n  <li>Your Docker is not up to date<\/li>\n  <li>Some Docker daemon configurations are not good enough for a production environment<\/li>\n  <li>Your container runs 2 processes<\/li>\n  <li>and others<\/li>\n<\/ul>\n\n<p>It\u2019s a great idea to run it at some stage on each host, to get an idea of the status of your environment. To do that you can just run a container with this command:<\/p>\n\n<pre><code class=\"language-bash\">$ docker run -it --net host --pid host --cap-add audit_control \\\n    -v \/var\/lib:\/var\/lib \\\n    -v \/var\/run\/docker.sock:\/var\/run\/docker.sock \\\n    -v \/usr\/lib\/systemd:\/usr\/lib\/systemd \\\n    -v \/etc:\/etc --label docker_bench_security \\\n    docker\/docker-bench-security\n<\/code><\/pre>\n\n<p>A good way to start is to run it in your local environment: run the command and check what you can do to make your local environment safer. This tool is open source on GitHub, and it\u2019s also a great example of collaboration, of how a community can share experiences to help other members improve an environment. This is a partial output:<\/p>\n\n<pre><code class=\"language-bash\">Initializing Thu Nov 24 21:35:24 GMT 2016\n\n[INFO] 1 - Host Configuration\n[WARN] 1.1  - Create a separate partition for containers\n[PASS] 1.2  - Use an updated Linux Kernel\n[PASS] 1.4  - Remove all non-essential services from the host - Network\n[PASS] 1.5  - Keep Docker up to date\n[INFO]       * Using 1.13.01 which is current as of 2016-10-26\n[INFO]       * Check with your operating system vendor for support and security maintenance for docker\n[INFO] 1.6  - Only allow trusted users to control Docker daemon\n[INFO]      * docker:x:999:gianarb\n[WARN] 1.7  - Failed to inspect: auditctl command not found.\n[WARN] 1.8  - Failed to inspect: auditctl command not found.\n[WARN] 1.9  - Failed to inspect: auditctl command not found.\n[INFO] 1.10 - Audit Docker files and directories - docker.service\n[INFO]      * File not found\n[INFO] 1.11 - Audit Docker files and directories - docker.socket\n[INFO]      * File not found\n<\/code><\/pre>\n<p>Sometimes, to get a good result, you just need to run a single command.<\/p>\n\n<p>This article is part of \u201cDrive your boat like a Captain\u201d. 
It\u2019s a book about Docker in production: how to manage a cluster of Docker Engines with Swarm, and what it means to manage a production environment today.<\/p>\n\n<p>Keep in touch to receive news about the book at <a href=\"\/blog\/scaledocker\">scaledocker.com<\/a>. If you are looking for a Docker getting started guide, you can also look at the first chapter I released, <a href=\"\/blog\/docker-the-fundamentals\">Docker The Fundamentals<\/a><\/p>\n"},{"title":"Chef Server startup notes","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/chef-server-startup-notes"}},"description":"This tutorial explains how to set up a Chef Server on DigitalOcean from zero. It also shows how to use it to provision one Chef Client. Chef is one of the most used provisioning tools: a DevOps tool to apply infrastructure as code.","image":"https:\/\/gianarb.it\/img\/chef.png","updated":"2016-11-10T10:08:27+00:00","published":"2016-11-10T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/chef-server-startup-notes","content":"<p>I have worked with different provisioning tools and configuration managers in the last couple of years: Chef, SaltStack, Puppet, Shell, Python, Terraform. Everything that allowed me to automate and describe my infrastructure as code.<\/p>\n\n<p>I really think this is the right road, and every company needs to stop persisting random commands on a server:<\/p>\n\n<ul>\n  <li>The code used to describe your infrastructure is reusable.<\/li>\n  <li>The code is a good backup, and you can put it in your repository to study how it changed and to manage rollbacks.<\/li>\n  <li>Your servers become collaborative, and your team can review what you do.<\/li>\n<\/ul>\n\n<p>Chef was my first configuration manager. I started to use it with Vagrant a few years ago, but I never had the chance to dive deep into it and into a full chef-server configuration from scratch.<\/p>\n\n<p>I got this chance a few days ago, and I am here to share some notes. I used DigitalOcean to start one Chef Server and two nodes. In this post I am not focused on recipe and cookbook syntax, but I will share some commands and notes that I took during my test to start and configure a Chef Server.<\/p>\n\n<p>First of all, <code>doctl<\/code> is the command line application provided by DigitalOcean to manage droplets and everything else; I used it to start my droplets.<\/p>\n\n<p>The Chef Server doesn\u2019t run in a little box: we need 2GB of RAM. I tried with a smaller size, but nothing worked; the installation process ran out of memory very soon. Thanks, Ruby.<\/p>\n\n<pre><code class=\"language-sh\">$ doctl compute droplet create chef-server \\\n  --region ams2 --size 2gb --image 20385558 \\\n  --access-token $DO --ssh-keys $DO_SSH\n\n$ doctl compute droplet create n1 \\\n  --region ams2 --size 512mb --image 20385558 \\\n  --access-token $DO --ssh-keys $DO_SSH\n<\/code><\/pre>\n<p><code>$DO<\/code> contains my DigitalOcean access key and <code>$DO_SSH<\/code> the id of the ssh key used to log into the servers. 
You can leave the last one empty and you will receive an email with the password.<\/p>\n\n<p>When the process is done you will be able to copy the IP of the chef-server and log into it.<\/p>\n\n<pre><code class=\"language-bash\">$ doctl compute droplet ls\n\nID              Name            Public IPv4     Public IPv6     Memory  VCPUs   Disk    Region  Image           Status  Tags\n30cw4230        chef-server                                     2gb     1       20      ams2    Debian 8.6 x64  new\nq0514230        n1                                              512     1       20      ams2    Debian 8.6 x64  new\n<\/code><\/pre>\n\n<p>This provisioning script installs Chef Server from the official deb package and also installs chef-manage. Chef-manage provides a nice web interface to manage users, cookbooks and everything stored on the server.<\/p>\n\n<pre><code class=\"language-bash\">cd \/tmp\nsudo apt-get update\nsudo apt-get install -y unzip curl\ncurl -LS https:\/\/packages.chef.io\/stable\/ubuntu\/16.04\/chef-server-core_12.9.1-1_amd64.deb -o chef-server-core_12.9.1-1_amd64.deb\nsudo dpkg -i chef-server-core_12.9.1-1_amd64.deb\nsudo chef-server-ctl reconfigure\n<\/code><\/pre>\n\n<p>Our server is up and running, reachable over HTTPS (port 443). This configuration is just for testing purposes: it is not good practice to leave a Chef Server public as we are doing. It\u2019s a better idea to put it behind a VPN, for example.<\/p>\n\n<p>Chef supports an authentication and authorization layer based on users and organizations. We are creating a new user called <code>Gianluca Arbezzano<\/code> with username gianarb, email <code>ga@thumpflow.com<\/code> and password <code>hellociaobye<\/code>. We are also creating an organization and associating the user with the org.<\/p>\n<pre><code class=\"language-bash\">chef-server-ctl user-create gianarb Gianluca Arbezzano ga@thumpflow.com 'hellociaobye' --filename \/root\/gianarb_test.pem\nchef-server-ctl org-create tf 'ThumpFlow' --association_user gianarb --filename \/root\/tf-validator.pem\nchef-server-ctl org-user-add tf gianarb\n<\/code><\/pre>\n\n<p>At this point we can configure a nice UI for our Chef Server with these simple commands:<\/p>\n\n<pre><code class=\"language-bash\">sudo chef-server-ctl install chef-manage\nsudo chef-server-ctl reconfigure\nsudo chef-manage-ctl reconfigure --accept-license\n<\/code><\/pre>\n\n<p>Chef Server works with the concepts of Organization and User. An organization is a group of users that share cookbooks, roles and so on. Users can update cookbooks, and there is also a set of permissions to manage access to particular resources, like:<\/p>\n\n<ul>\n  <li>Add a new node<\/li>\n  <li>Synchronize cookbooks with the server<\/li>\n  <li>Add new users<\/li>\n<\/ul>\n\n<p>At this point we have one user with their own key and credentials. You can come back to the UI and use the username (gianarb) and password (hellociaobye) to log in. The key (--filename) is used to configure knife and to encrypt communication between client and server. There are three main actors at this point that we need to know:<\/p>\n\n<ul>\n  <li>The Chef Server contains all our recipes and cookbooks; it\u2019s the brain of the cluster.<\/li>\n  <li>Nodes are all the servers configured by Chef.<\/li>\n  <li>Workstations are machines enabled to synchronize and update cookbooks. 
For example, Jenkins or your Continuous Integration system can push every change to the server after each new commit.<\/li>\n<\/ul>\n\n<p>The Chef Server has an HTTP API, and <code>knife<\/code> is a CLI that provides an easy integration for your nodes and workstations. With the following commands we install knife. You can do it in your local environment, to turn it into a workstation, and on the server. (It\u2019s usually good practice to create a user; we are doing everything as root right now, but it\u2019s BAD! Don\u2019t be bad!)<\/p>\n\n<p>We have two certificates: <code>gianarb_test.pem<\/code> identifies a specific user, and we need to generate one for every workstation\/member of the team, while the <code>validation_client<\/code> one represents the organization and can be the same across multiple users.<\/p>\n\n<pre><code class=\"language-bash\">curl -O -L http:\/\/www.opscode.com\/chef\/install.sh\nbash .\/install.sh\n<\/code><\/pre>\n\n<p>You can copy the two keys to your local machine and run this command, which will guide you through the process of creating a <code>~\/.chef\/knife.rb<\/code> file that the CLI uses to communicate with the Chef Server:<\/p>\n\n<pre><code class=\"language-bash\">knife configure\n<\/code><\/pre>\n\n<p>This is an example of the knife configuration file generated on my server. I lost time understanding <code>chef_server_url<\/code>: it contains the hostname of the server, but also the <code>\/organizations\/&lt;organization_short_name&gt;<\/code> part. Be careful with this, or knife will come back with an HTML response in your terminal.<\/p>\n\n<pre><code class=\"language-ruby\">log_level                :info\nlog_location             STDOUT\nnode_name                'gianarb'\nclient_key               '\/root\/gianarb_test.pem'\nvalidation_client_name   'tf-validator'\nvalidation_key           '\/root\/tf-validator.pem'\nchef_server_url          'https:\/\/chef-server:443\/organizations\/tf'\nsyntax_check_cache_path  '\/root\/.chef\/syntax_check_cache'\ncookbook_path            [\"\/home\/gianarb\/git\/chef-pluto\/cookbooks\"]\nssl_verify_mode          :verify_none\n<\/code><\/pre>\n\n<p>The last two commands download and validate the SSL certificate: in the default configuration the CA is unofficial, and we need to force our client to trust the cert.<\/p>\n\n<pre><code class=\"language-bash\">knife ssl fetch\nknife ssl check\n<\/code><\/pre>\n\n<p>Now that we have done that on our server and in our local environment, we can clone <a href=\"https:\/\/github.com\/gianarb\/chef-pluto\">chef-pluto<\/a>, a repository that contains recipes, roles and cookbooks to configure our node, and synchronize it with the server.<\/p>\n\n<pre><code class=\"language-bash\">git clone git@github.com:gianarb\/chef-pluto.git \/home\/gianarb\/git\/chef-pluto\/chef-pluto\ncd \/home\/gianarb\/git\/chef-pluto\/chef-pluto\nknife upload \/\n<\/code><\/pre>\n<p>The last command uploads our whole repository to the Chef Server. You can log in to the web UI and see the <code>micro<\/code> cookbook and the <code>power<\/code> role.<\/p>\n\n<p><a href=\"https:\/\/github.com\/gianarb\/micro\">micro<\/a> is an application I wrote in Go; it just exposes the IP of the machine. 
It\u2019s a binary, and the cookbook downloads and starts it; pretty straightforward.<\/p>\n\n<p>At this point we need to provision our first node. Usually it is the server that installs and starts the Chef Client on the node, so what we can do is store a private key on the server to allow Chef to connect to the node. I copied the DigitalOcean private key to the server (~\/do); from a security point of view you should create a dedicated one. You can also use the -P option if you are not using an ssh key to run this example.<\/p>\n\n<pre><code class=\"language-bash\">knife bootstrap &lt;ip-node&gt; -N node1 --ssh-user root -r 'role[power]' -i ~\/do\n<\/code><\/pre>\n\n<p>If everything is good, you can reach the application on port <code>8000<\/code> from the browser. The log looks something like:<\/p>\n\n<pre><code class=\"language-bash\">$ knife bootstrap 95.85.52.211 -N testNode --ssh-user root -r 'role[power]' -i ~\/do\nDoing old-style registration with the validation key at \/root\/tf-validator.pem...\nDelete your validation key in order to use your user credentials instead\n\nConnecting to 95.85.52.211\n95.85.52.211 -----&gt; Existing Chef installation detected\n95.85.52.211 Starting the first Chef Client run...\n95.85.52.211 Starting Chef Client, version 12.15.19\n95.85.52.211 resolving cookbooks for run list: [\"micro\"]\n95.85.52.211 Synchronizing Cookbooks:\n95.85.52.211   - micro (0.1.0)\n95.85.52.211 Installing Cookbook Gems:\n95.85.52.211 Compiling Cookbooks...\n95.85.52.211 Converging 2 resources\n95.85.52.211 Recipe: micro::default\n95.85.52.211   * remote_file[Download micro] action create_if_missing (up to date)\n95.85.52.211   * service[Start micro] action start\n95.85.52.211     - start service service[Start micro]\n95.85.52.211\n95.85.52.211 Running handlers:\n95.85.52.211 Running handlers complete\n95.85.52.211 Chef Client finished, 1\/2 resources updated in 02 seconds\n<\/code><\/pre>\n\n<p>knife started the client, synchronized the cookbooks, assigned the <code>power<\/code> role to the node and ran the correct recipes. Your server is ready, and you can create and delete nodes to make your infrastructure as complex as you like.<\/p>\n\n<p>Chef is quite old and it\u2019s written in Ruby (the first point could be a plus, the second not really), but it continues to be a good way to provision your infrastructure. Lots of people have moved to Ansible, but the agent they reject offers very good orchestration features, which is something I usually look for.<\/p>\n\n<p>I have worked with SaltStack and it\u2019s very nice: the syntax is easy, and it seems less expensive in terms of configuration, resources and setup, but I am not really sure about the YAML specification. I am not a Ruby developer and I don\u2019t love the Ruby syntax, but in the end it is a programming language, and I am doing infrastructure as code.<\/p>\n"},{"title":"Docker The fundamentals","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/docker-the-fundamentals"}},"description":"Docker The Fundamentals is the second chapter of my book Scale Docker: Drive your boat like a captain. I decided to share the second chapter of the book for free. It covers getting started with Docker. 
It's a good tutorial for people that have no idea about what a container is and how Docker works.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-08-25T12:08:27+00:00","published":"2016-08-25T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/docker-the-fundamentals","content":"<p>I am writing a book about Docker SwarmKit and how to manage a production environment for your containers.<\/p>\n\n<p>The second chapter of the book is a getting started guide for Docker: it covers basic concepts about what a container is, and it\u2019s a starting point to understand the concepts expressed in the book.<\/p>\n\n<h2>Drive your boat like a Captain.\n<small>Docker in production<\/small><\/h2>\n\n<p>The book is a work in progress, but you can find more information on the site <a href=\"\/blog\/scaledocker\">scaledocker.com<\/a>.<\/p>\n\n<p>To receive the first chapter free, leave your email and, if you like, your Twitter account:<\/p>\n\n<div class=\"row\">\n\t<div class=\"col-md-6\">\n        <img src=\"\/img\/the-fundamentals.jpg\" class=\"img-fluid\" \/>\n    <\/div>\n\t<div class=\"col-md-4\">\n\t\t<form id=\"get-chapter\">\n\t\t  <div class=\"form-group\">\n\t\t\t<label for=\"exampleInputEmail1\">Email address *<\/label>\n\t\t\t<input type=\"email\" class=\"form-control\" required=\"required\" id=\"email\" placeholder=\"Email\" \/>\n\t\t  <\/div>\n\t\t  <div class=\"form-group\">\n\t\t\t<label for=\"exampleInputPassword1\">Twitter<\/label>\n\t\t\t<input type=\"title\" class=\"form-control\" id=\"twitter\" placeholder=\"@gianarb\" pattern=\"^@.*\" \/>\n\t\t\t<p class=\"help-block\">The first letter needs to be a @<\/p>\n\t\t  <\/div>\n          <p class=\"text-success get-chapter-thanks\">Check your email! Thanks!<\/p>\n          <p class=\"text-warning get-chapter-sorry\"><span class=\"err-text\"><\/span>.\n          Please notify the error with a comment or with an email<\/p>\n\t\t  <button class=\"btn btn-default\">Get your free copy<\/button>\n\t\t<\/form>\n\t<\/div>\n<\/div>\n\n<h2>Contents<\/h2>\n<ol>\n  <li>Introduction<\/li>\n  <li>Install Docker on Ubuntu 16.04<\/li>\n  <li>Install Docker on Mac<\/li>\n  <li>Install Docker on Windows<\/li>\n  <li>Run your first HTTP application<\/li>\n  <li>Docker engine architecture<\/li>\n  <li>Image and Registry<\/li>\n  <li>Docker Command Line Tool<\/li>\n  <li>Volumes and File Systems<\/li>\n  <li>Network and Links<\/li>\n  <li>Conclusion<\/li>\n<\/ol>\n\n<p>Enjoy your reading and leave me feedback about the chapter!<\/p>\n\n<script>\n    (function() {\n        $(\".get-chapter-thanks\").hide();\n        $(\".get-chapter-sorry\").hide();\n        var api = \"https:\/\/1lkdtyxdx4.execute-api.eu-west-1.amazonaws.com\/prod\";\n        $(\"#get-chapter button\").click(function(eve) {\n            eve.preventDefault()\n            $(\".get-chapter-thanks\").hide();\n            $(\".get-chapter-sorry\").hide();\n            var requestChapter = $.ajax({\n                \"url\": api+\"\/the-fundamentals\",\n                \"type\": 'post',\n                \"data\": {\n                    email: $(\"#email\").val(),\n                    twitter: $(\"#twitter\").val()\n                },\n                \"dataType\": 'json',\n                \"contentType\": \"application\/json\"\n            });\n            requestChapter.done(function() {\n                $(\".get-chapter-thanks\").show();\n            });\n            requestChapter.fail(function(data) {\n                $('.err-text').html(\"[\"+data.responseJSON.code+\"]\"+ 
data.responseJSON.text);\n                $(\".get-chapter-sorry\").show();\n            });\n        });\n    })();\n<\/script>\n\n"},{"title":"Be smart like your healthcheck","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/be-start-like-your-healthcheck"}},"description":"In a distributed system, having a simple way to know the status of your servers helps you understand if they are ready to go in production. A healthcheck is simple and common, but designing a good one can help you avoid strange behaviors. Docker 1.12 supports healthchecks, and in this post I share an example implementation.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-08-25T12:08:27+00:00","published":"2016-08-25T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/be-start-like-your-healthcheck","content":"<p>I am not a doctor, I am a Software Engineer and this is a tech post! You can continue reading!<\/p>\n\n<p>To monitor a monolith, what we usually do is install a tool like <a href=\"https:\/\/www.nagios.org\/\">Nagios<\/a> to centralize all our metrics and to stay in touch with our infrastructure and our application. In a distributed system, with more than one service, each with its own metrics, the situation is totally different: think about how much more dynamic it is compared to a monolith. With containers or VMs that scale up and down and move around the network, is Nagios a good solution to check whether our new service is safe and ready to be attached to the production pool after a deploy? I love a talk given by <a href=\"https:\/\/github.com\/kelseyhightower\">Kelsey Hightower<\/a> during the Monitorama event: he speaks about healthchecks; watch him to follow a <a href=\"https:\/\/vimeo.com\/173610242\">great demo<\/a>!<\/p>\n\n<p>A healthcheck is an API that your service exposes to share its status; if you make it really smart, it\u2019s a good tool to understand the situation of your service with just one call. A service can be ready or not, and it is in the best position to communicate its own status. It\u2019s like a patient: you ask him everything you need to make the best diagnosis and take a decision about it.<\/p>\n\n<p>Let\u2019s stay focused on a REST service: it exposes an API under the route \/health. The response can have two different status codes:<\/p>\n\n<ul>\n  <li>200 if everything is good and your service is ready<\/li>\n  <li>500 if there is something wrong and your service is not ready<\/li>\n<\/ul>\n\n<p>To make a smart healthcheck, what do we need to check?<\/p>\n\n<p>This is a real implementation:<\/p>\n\n<pre><code class=\"language-php\">&lt;?php\necho 1;\n<\/code><\/pre>\n\n<p>It\u2019s better than nothing, but we are looking for something smarter! 
We need to check all the dependencies that our service has, and it\u2019s for this reason that the service itself is the best actor: it knows what it needs to be ready. I wrote a demo service named <a href=\"https:\/\/github.com\/gianarb\/micro\/blob\/master\/handle\/health.go\">micro<\/a>; it\u2019s written in Go, and version 2 uses MySQL.<\/p>\n\n<pre><code class=\"language-go\">func Health(username string, password string, addr string) func(http.ResponseWriter, *http.Request) {\n    return func(w http.ResponseWriter, r *http.Request) {\n        res := healtResponse{Status: true}\n        httpStatus := 200\n        dsn := fmt.Sprintf(\"%s:%s@tcp(%s:3306)\/micro\", username, password, addr)\n        ddb, err := sql.Open(\"mysql\", dsn)\n        if err != nil {\n            log.Fatal(err)\n        }\n        defer ddb.Close()\n        \/\/ the service is healthy only if its main dependency answers a ping\n        if err := ddb.Ping(); err != nil {\n            res.Status = false\n            res.Info = map[string]string{\"database\": err.Error()}\n        }\n        c, _ := json.Marshal(res)\n        if !res.Status {\n            httpStatus = 500\n        }\n        log.Printf(\"%s called \/health\", r.Host)\n        \/\/ headers must be set before WriteHeader sends them\n        w.Header().Set(\"Content-Type\", \"application\/json\")\n        w.WriteHeader(httpStatus)\n        w.Write(c)\n    }\n}\n<\/code><\/pre>\n<p>It doesn\u2019t matter how many dependencies your service has: you need to check all of them, databases and the other services it uses. In my case I decided to add a key-value field, which I called <code>info<\/code>; it contains some information about whether MySQL is or is not working, in order to make debugging easy. If the service that you are checking has a healthcheck, you are lucky! You can use that endpoint to know if your dependency is fine. If you are not so lucky, you can create a wrapper, or just check whether you can reach the service; in my case I just tried to connect to MySQL, in order to know if my network supports me! I am also using the correct database name, in order to avoid edge cases like \u201cMySQL is up but the database doesn\u2019t exist\u201d.<\/p>\n\n<p>The ecosystem supports healthchecks! Nginx looks at them to know if a server is reachable: if the health check fails for a while, it just takes the server out of the pool for some time. The same goes for Kubernetes, Swarm and Docker. 
Docker provides a Go library, a <a href=\"https:\/\/github.com\/docker\/go-healthcheck\">healthcheck framework<\/a> that you can use in your applications; it is also used in Docker 1.12.<\/p>\n\n<p>You can describe a healthcheck in your Dockerfile:<\/p>\n\n<pre><code>HEALTHCHECK CMD .\/cli health\n<\/code><\/pre>\n\n<p>If the exit code is 0, Docker marks your container as healthy; if it\u2019s different, as unhealthy. Very easy and flexible; you can check your REST healthcheck this way:<\/p>\n\n<pre><code>HEALTHCHECK --interval=30s --timeout=30s --retries=3 \\\n  CMD curl -si localhost:8000\/health | grep 'HTTP\/1.1 200 OK' &gt; \/dev\/null\n<\/code><\/pre>\n\n<p><code>--interval<\/code> is the time between two healthchecks, <code>--timeout<\/code> marks as unhealthy a check that doesn\u2019t come back within 30s in this case, and <code>--retries<\/code> is the number of attempts before marking a container unhealthy.<\/p>
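\n\n<p>Once the container is running, you can ask Docker for the current health state; a quick check, assuming a container named micro started from the image above:<\/p>\n\n<pre><code class=\"language-bash\"># possible values: starting, healthy, unhealthy\ndocker inspect --format '{{.State.Health.Status}}' micro\n<\/code><\/pre>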
\n\n<p>A healthcheck doesn\u2019t replace a traditional monitoring system, but when you have a lot of instances and services, having a single endpoint to check and understand the situation after a deploy makes your life easy and your products stable.<\/p>\n"},{"title":"Build opinions to become wise","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/build-opinions-to-become-wise"}},"description":"As a Software Engineer you need to build your own opinions about different topics: Linux or Windows? Editor or IDE? Containers or VMs? There are different developers just for this reason. Making, sharing and changing your opinions is the best way to grow, not only as a developer but also as a human.","image":"https:\/\/gianarb.it\/img\/myselfie.jpg-large","updated":"2016-08-25T12:08:27+00:00","published":"2016-08-25T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/build-opinions-to-become-wise","content":"<p>I am Gianluca, I am 24 years old and I work as a Software Engineer.<\/p>\n\n<p>I like my work because there are really a lot of different kinds of Software Engineers, mainly because you can work in pretty much every environment: technology, food, cooking, sport, fashion.<\/p>\n\n<p>Also because there is a lot of stuff to do: if you like to work on a product, you build features that make other people happy to buy a car or to read the news in an online newspaper.<\/p>\n\n<p>If you like to play with cables, racks and switches, you can work in a big or small farm and design fancy datacenters.<\/p>\n\n<p>But I like this work because a lot of people have opinions. There are opinions built on top of deep study or long experience, and everyone is happy to share them and use them to build a new product.<\/p>\n\n<p>I am a good PHP developer, because I worked for a while with this language and I tried to catch and verify good opinions from a lot of developers who come from different parts of the world and from different experiences.<\/p>\n\n<p>I am happy to share some of them with you:<\/p>\n\n<pre><code class=\"language-php\">&lt;?php\nnamespace Opinion\\One;\n\nclass Good {\n\n    private $important;\n\n    public function __construct($somethingThatIReallyNeed)\n    {\n        $this-&gt;important = $somethingThatIReallyNeed;\n    }\n}\n<\/code><\/pre>\n\n<p>Usually I force myself (and, when I can, other people :P) to inject objects through the constructor if our object (Good) cannot work without them.<\/p>\n\n<p>This is a good way to be sure that your object is complete, because you cannot forget anything!<\/p>\n\n<p>I used Zend Framework for a long time, and I remember one class, <code>ServiceLocatorAwareInterface<\/code>.<\/p>\n\n<p>I have an opinion: I hate this class.<\/p>\n\n<p>If you implement this interface, your service has a service locator! It\u2019s powerful today, when you need to finish your ticket and walk away with a new feature. After months, a lot of people improve this service, and they start to use a lot of other services without thinking about the reason you built your service in the first place. Just because it\u2019s really simple to get random services from a service locator.<\/p>\n\n<p>Be wise: don\u2019t allow people to write bad code, and use your constructor to inject your dependencies!!<\/p>\n\n<p>I also have architecture opinions, like:<\/p>\n\n<p>When you think \u201cHow can I resolve this problem?\u201d, you must start from design patterns! They are documented and tested by a lot of developers and across many use cases.<\/p>\n\n<p>Imagine your colleague coming to you around 5pm to ask how an entire library is designed: you can just reply \u201cIt\u2019s just a SOAP client, see you tomorrow!\u201d and go play basketball.<\/p>\n\n<p>Or, if you are building an API, OAuth2 could be a good choice for your authentication service. It\u2019s tested, and there are a lot of clients and documentation. Your clients will be glad to know that you are just using OAuth2 and nothing strange.<\/p>\n\n<p>Well, I am not here to share all my opinions in a single post; all of these are just examples of what I mean by opinion.<\/p>\n\n<p>An opinion is really important! But it\u2019s just an opinion!<\/p>\n\n<p>As a Software Engineer you need to have an opinion, because every day people will try to have opinions for you: microservices, containers, one big repository, golang\/rust.<\/p>\n\n<p>To build an opinion you need to gain experience and to study; it\u2019s a big effort. But to be really wise you also need to stay ready to change your opinion.<\/p>\n\n<p>In my opinion this is the main difference between smart and wise people. I prefer the second kind!<\/p>\n\n<div class=\"alert alert-success\" role=\"alert\">\nThanks for your review <a href=\"https:\/\/twitter.com\/fntlnz\" target=\"_blank\">Lorenzo<\/a>!<\/div>\n"},{"title":"Watch demo about Docker 1.12 made during Docker Meetup","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/docker-1-12-meetup-dublin"}},"description":"Docker 1.12 contains a lot of news about orchestration and production. 
During the August Docker Meetup in Dublin I presented, with a demo, a set of new features around this release.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-08-24T12:08:27+00:00","published":"2016-08-24T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/docker-1-12-meetup-dublin","content":"<p>In August, during the Docker Meetup, I presented with a demo some new features provided by Docker 1.12.<\/p>\n\n<p>It\u2019s an important release because it improves your experience with Docker in production, with an orchestration framework included in the product.<\/p>\n\n<p>Docker provides a new set of commands to create a cluster of Docker daemons and manage a production environment.<\/p>\n\n<p>It\u2019s something like Kubernetes, Mesos or Swarm, but it is included and built into Docker.<\/p>\n\n<p>I wrote an article about it a few months ago: <a href=\"\/blog\/docker-1-12-orchestration-built-in\">\u201cDocker 1.12 orchestration built-in\u201d<\/a>.<\/p>\n\n<p>In this demo I give an introduction to some of the new features, like:<\/p>\n\n<div style=\"    text-align: center;\">\n<iframe width=\"420\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/h7a7vhzjElo\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n<\/div>\n\n<ul>\n  <li>How to create a SwarmMode Docker cluster<\/li>\n  <li>What is a service? What does task mean?<\/li>\n  <li>How does Docker SwarmKit manage a node going down?<\/li>\n  <li>I tried to show the HealthCheck feature :)<\/li>\n  <li>How Docker SwarmKit manages container updates<\/li>\n  <li>Service discovery<\/li>\n<\/ul>\n"},{"title":"\u201cMicroservices and common parts\"","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/services-and-common-parts"}},"description":"When you think about microservices and distributed systems, there are a lot of parts that all your services usually require: logging, monitoring, testing, distribution. Managing them in the best way is one of the reasons for the success of your distributed system. In this article I share a few of these parts, with some feedback on how to design them well.","image":"https:\/\/gianarb.it\/img\/distributed_system_planet.png","updated":"2016-08-14T12:08:27+00:00","published":"2016-08-14T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/services-and-common-parts","content":"<p>Changing my glossary and replacing the concept of application with service could be a buzzword, but it led me to build a new approach to my work.<\/p>\n\n<p>Nowadays many products require more services than before to work: they could be modules, libraries directly integrated into one or more services, or applications that communicate and provide a feature. It doesn\u2019t matter; in any case the product will have some dependencies.<\/p>\n\n<p>If you start to follow this path, a lot of redundant concerns will show up in your product:<\/p>\n\n<ul>\n  <li>Monitoring<\/li>\n  <li>Logging<\/li>\n  <li>Authentication<\/li>\n  <li>Scaling<\/li>\n  <li>High availability<\/li>\n  <li>Distribution<\/li>\n  <li>Testing (unit, functional, integration..)<\/li>\n  <li>And others<\/li>\n<\/ul>\n\n<p>Some of them, like monitoring or logs, require architecture and tool selection: you can use some B2B tools or host something in-house. It\u2019s not only a problem of tooling; the other face of this redundancy is how your services can communicate logs and metrics to the outside in a clean and reusable way.<\/p>\n\n<p>In this post I will try to share the common parts of a microservices ecosystem and some possible approaches to solve these issues.<\/p>\n\n<h2 id=\"logging\">Logging<\/h2>\n\n<p>All applications require a good, strong log system. There are a few libraries able to help you manage this part, but the minimum requirements, in my opinion, include:<\/p>\n\n<ul>\n  <li>Support for multiple streams: usually I use stdout or a file, and I move the logs into a database with a separate pipeline, but a lot of good libraries allow you to send your logs to different collectors.<\/li>\n  <li>Different levels, like INFO, DEBUG, WARNING, FATAL.<\/li>\n  <li>A way to change this level at runtime, for example with an RPC call.<\/li>\n<\/ul>\n\n<p>The third point is really important: if your application starts to receive big traffic, the amount of logs you must manage becomes relevant, so changing the level at runtime allows you to control the amount of logs that you store and, for example, to enable DEBUG information only when you need to do some specific debugging in production. This strategy saves storage and money.<\/p>
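\n\n<p>The mechanism can be as simple as an HTTP endpoint; a hypothetical sketch, where the admin route and the service host are placeholders (the point is that no restart is involved):<\/p>\n\n<pre><code class=\"language-bash\"># raise verbosity only while you debug, then lower it again\ncurl -XPUT http:\/\/users.internal:8000\/admin\/log-level -d 'DEBUG'\ncurl -XPUT http:\/\/users.internal:8000\/admin\/log-level -d 'INFO'\n<\/code><\/pre>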
\n\n<p>There are a lot of services and open source tools able to manage and store this data. The real issue is deciding which road to follow.<\/p>\n\n<p>Are you interested in managing your logs, or is it too big an effort for your company? In the second case you can move everything to a hosted service like Logentries and forget about Elasticsearch, Kibana and similar tools. Think about your environment and pick the best solution; remember that it could be just a temporary one. When you start a business you have different priorities: start slim and easy.<\/p>\n\n<h2 id=\"monitoring\">Monitoring<\/h2>\n\n<p>Several services require a lot of time and energy to be monitored and kept alive.<\/p>\n\n<p>The best way to do that is with a time series database like Prometheus or InfluxDB, or with an as-a-service solution like New Relic or AppDynamics.<\/p>\n\n<p>The real problem is how your application can provide metrics that are readable and usable by external systems. You can find a very good solution to this problem in Docker: it provides different streams and events to grab this kind of information.<\/p>\n\n<p>If you take a look at how it manages this part, you can implement a good system in your application. A stream of events is also a good API to allow other services to enjoy the features provided by your service.<\/p>
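\n\n<p>To see what such a stream looks like, you can watch the events the Docker daemon emits; for example, only the containers that die:<\/p>\n\n<pre><code class=\"language-bash\"># one line per event, with a timestamp and the container attributes\ndocker events --filter 'event=die'\n<\/code><\/pre>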
\n\n<h2 id=\"heahtcheck\">Healthcheck<\/h2>\n\n<p>Understanding with a single request whether your application has everything it needs to work is really important.<\/p>\n\n<p>The microservices ecosystem contains a lot of micro applications that change and have dependencies in order to work. How can you understand if your whole system is up and running without spending a lot of time?<\/p>\n\n<p>You can create for each service a <code>\/health<\/code> endpoint that returns 200 if everything is fine and 500 if something is not working properly.<\/p>\n\n<p>During a release you can use this endpoint to understand if your service is ready to be attached to the production pool.<\/p>\n\n<p>In practice, if you have one service called Users that depends on MySQL and on another service like Emailer, the health endpoint of the Users service will check whether it can connect to MySQL, and it can also call <code>\/health<\/code> on Emailer to check if that service is up.<\/p>
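\n\n<p>As a sketch, a deploy script can gate on that endpoint before adding the new instance to the pool; the host and port are placeholders:<\/p>\n\n<pre><code class=\"language-bash\"># wait up to ~30s for the new instance to report healthy\nfor i in $(seq 1 10); do\n  curl -fs http:\/\/users.internal:8000\/health &gt;\/dev\/null &amp;&amp; exit 0\n  sleep 3\ndone\necho \"service never became healthy\" &amp;&amp; exit 1\n<\/code><\/pre>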
<h2 id=\"healthcheck\">Healthcheck<\/h2>\n\n<p>Understanding with a single request whether your application has everything it needs to work is really important.<\/p>\n\n<p>A microservices ecosystem contains a lot of small applications that change and that depend on other components to work. How can you tell whether the whole system is up and running without spending a lot of time?<\/p>\n\n<p>You can give each service a <code>\/health<\/code> endpoint that returns 200 if everything is fine and 500 if something is not working properly.<\/p>\n\n<p>During a release you can use this endpoint to understand whether your service is ready to be attached to the production pool.<\/p>\n\n<p>In practice, if you have a service called Users that depends on MySQL and on another service like Emailer, the health endpoint of the Users service will check whether it can connect to MySQL, and it can also call <code>\/health<\/code> on Emailer to verify that that service is up; a sketch of such an endpoint follows below.<\/p>\n\n<p>Your orchestration and deployment framework can check the health endpoint after each deploy and manage your release accordingly: it can roll back, or simply not add the new release to the production pool.<\/p>
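<p>This is a minimal sketch in Go of the <code>\/health<\/code> endpoint for the hypothetical Users service above; the MySQL DSN and the <code>emailer.internal<\/code> hostname are made-up placeholders, and the driver import is the usual go-sql-driver one.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">package main\n\nimport (\n    &quot;database\/sql&quot;\n    &quot;net\/http&quot;\n\n    _ &quot;github.com\/go-sql-driver\/mysql&quot; \/\/ MySQL driver for database\/sql\n)\n\nvar db *sql.DB\n\n\/\/ healthHandler returns 200 when every dependency of the Users service\n\/\/ is reachable, 500 otherwise.\nfunc healthHandler(w http.ResponseWriter, r *http.Request) {\n    \/\/ Dependency 1: the MySQL connection.\n    if err := db.Ping(); err != nil {\n        http.Error(w, &quot;mysql unreachable&quot;, http.StatusInternalServerError)\n        return\n    }\n    \/\/ Dependency 2: the downstream Emailer service, via its own \/health.\n    resp, err := http.Get(&quot;http:\/\/emailer.internal\/health&quot;)\n    if err != nil {\n        http.Error(w, &quot;emailer unreachable&quot;, http.StatusInternalServerError)\n        return\n    }\n    defer resp.Body.Close()\n    if resp.StatusCode != http.StatusOK {\n        http.Error(w, &quot;emailer unhealthy&quot;, http.StatusInternalServerError)\n        return\n    }\n    w.Write([]byte(&quot;ok&quot;))\n}\n\nfunc main() {\n    var err error\n    db, err = sql.Open(&quot;mysql&quot;, &quot;users:secret@tcp(mysql.internal:3306)\/users&quot;)\n    if err != nil {\n        panic(err)\n    }\n    http.HandleFunc(&quot;\/health&quot;, healthHandler)\n    http.ListenAndServe(&quot;:8080&quot;, nil)\n}<\/code><\/pre><\/figure>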
<h2 id=\"authentication\">Authentication<\/h2>\n\n<p>Your microservice is not public: sometimes you have a set of firewall rules or strong network settings to manage the security of your environment. For other services, though, an authentication layer is a requirement, and usually there are a few services that need to know the identity of the user performing an action.<\/p>\n\n<p>Think about a To Do service: it needs to know the identity of the user in order to fetch the correct items.<\/p>\n\n<p>For this reason this layer can be shared between your services, and it is also a critical section of your architecture, because the security of your application and of your users usually depends on it.<\/p>\n\n<p>OAuth2 is a framework to manage authentication, and I recommend it because the documentation is already written and it is a standard. You don\u2019t reinvent anything, and there are a lot of libraries and use cases around it that make it solid, flexible and reusable.<\/p>\n\n<h2 id=\"automation-and-deploy\">Automation and Deploy<\/h2>\n\n<p>A good layer of automation is important in every ecosystem, both to make your work less boring and to reduce the chance of a human making a mistake during a repetitive task.<\/p>\n\n<p>If you are thinking about a microservices ecosystem, all these problems are multiplied by a large number of applications.<\/p>\n\n<p>Without a good layer of automation and a good deployment flow, you will spend your whole day pushing lines of code to production without any time left to focus on new features or other business requests.<\/p>\n\n<h2 id=\"documentation\">Documentation<\/h2>\n\n<ul>\n  <li>Describe the topology of your ecosystem: how many microservices you have, where they are and how they are distributed across your datacenters.<\/li>\n  <li>Make it extensible and easy to read and update.<\/li>\n  <li>Explain how each single service works: which APIs it exposes and how another service can communicate with it.<\/li>\n  <li>Document the dependencies of each microservice; they are also important to know.<\/li>\n<\/ul>\n\n<p>All the common parts, such as logs, auth and metrics, help you keep a shared documentation that is easy to maintain, read and implement, but for each service you must also provide specific documentation: everything is clear today, but in a few months, after you have worked on ten other services, the situation can look really different.<\/p>\n\n<p>One of the goals of microservices is the possibility to add and integrate them easily. Documentation is one of the things that make this possible and efficient.<\/p>\n\n<h2 id=\"communication-layer\">Communication Layer<\/h2>\n\n<p>A lot of companies have a single communication layer in their environment: JSON over REST. It\u2019s a good choice, easy to implement, and there are a lot of tools to test it, document it and create client libraries.<\/p>\n\n<p>But it is really important to know that HTTP\/REST is not the only way to expose the features of your service.<\/p>\n\n<p>There are other efficient and less expensive solutions; binary protocols are one of them.<\/p>\n\n<p>We could talk about all these topics for years, which is why I have other posts planned to analyze some of these points in more depth.<\/p>\n\n<p>Please let me know if, in your experience, there are other parts that your services have in common.<\/p>\n"},{"title":"What Distributed System means","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/distributed-system-means"}},"description":"I will speak about service discovery, microservices, containers, virtual machines, schedulers, cloud, scalability and latency. I hope to have, at the end of this experience, a good number of posts to share what I know and how I work and approach this kind of challenge.","image":"https:\/\/gianarb.it\/img\/distributed_system_planet.png","updated":"2016-07-12T16:08:27+00:00","published":"2016-07-12T16:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/distributed-system-means","content":"<p>I chose to put my experience with distributed systems into a series of blog posts in which I\u2019ll cover different topics.<\/p>\n\n<p>I will speak about service discovery, microservices, containers, virtual machines, schedulers, cloud, scalability and latency. I hope to have, at the end of this experience, a good number of posts to share what I know and how I work and approach this kind of challenge.<\/p>\n\n<p>First of all, I will not say anything new; in fact, distributed system means:<\/p>\n\n<blockquote>A distributed system consists of a collection of autonomous computers, connected through a network and distribution middleware, which enables computers to coordinate their activities and to share the resources of the system, so that users perceive the system as a single, integrated computing facility.\n<p><a href=\"https:\/\/www0.cs.ucl.ac.uk\/staff\/ucacwxe\/lectures\/ds98-99\/dsee3.pdf\" target=\"_blank\">Wolfgang Emmerich, 1997<\/a><\/p>\n<\/blockquote>\n\n<p>The Internet is a distributed system, and your infrastructure is usually a distributed system too, if you follow the minimum requirements to make your services highly available.<\/p>\n\n<p>First of all, I love what \u201cservice\u201d means: your application is a service. Microservices are just a way to remind people that a little application is easy to maintain, deploy and control, but the idea, in my opinion, is simply to make something autonomous and useful for your customers. Sometimes your customer is a human; in other cases it could be another service provided by you or by a third party. That is not really important. What matters is that your service must be ready to communicate with the outside.<\/p>\n\n<p>Distributing your system is important to make it available: if you close your service inside a single datacenter in a single part of the world, you take the risk of making it unavailable in case of problems in that particular area; if you distribute your service across different locations, you increase your chances of staying up.<\/p>\n\n<p>You are also mitigating the latency of your system, because you are bringing your application closer to your customers, and if you have worldwide traffic this parameter is really important.<\/p>\n\n<p><img alt=\"Internet Global Submarine map\" src=\"\/img\/global-submarine-cable.jpg\" class=\"img-fluid\" \/><\/p>\n\n<p>This is the map of the submarine cables (2014), and we all know that the Internet is not in the air: serving different points of the world requires different amounts of time to get a response, and it is not just a problem of distance; traffic and the quality of the network have their weight too. Akamai is an expert on this topic: it provides a content delivery service (CDN) and also a monitoring system for the status of the network. They publish different data, one of which describes the <a href=\"https:\/\/www.akamai.com\/us\/en\/solutions\/intelligent-platform\/visualizing-akamai\/real-time-web-monitor.jsp\">high level status of the Internet<\/a>.<\/p>\n\n<p>Virtualisation, containers, cloud computing and, in general, the low price of designing an infrastructure, together with the growth of Internet users, allow a little company with a little budget to create something stable, secure and available in different parts of the world. I think this is why microservices and distributed systems are starting to have a big impact on the industry.<\/p>\n\n<p>A good exercise to understand the current situation could be to design a little cross-provider infrastructure, spanning multiple datacenters, to support a normal blog with a database and an application. With a couple of servers on different cloud providers you can create a highly available, distributed system across multiple datacenters and avoid a lot of points of failure, like geographic disasters, provider errors and so on.<\/p>\n\n<p>Docker, OpenStack, AWS, Consul, Prometheus, Elasticsearch and MongoDB are just a few of the products that help us create something really stable and useful. Continuous delivery, high availability, disaster recovery, monitoring, continuous integration and reliability are a subset of the topics you must address when you think about a distributed system, because you cannot rely on where the instances of your application run around the world, and the network is not a paradise of stability. Microservices help you create better, more stable applications; they allow your company to create more room for more developers and to replace single pieces and features, but they create other kinds of problems, like architectural complexity, the need for good knowledge of different layers (a DevOps point of view), the network, and chains of failure. 
All the topics we already know must be adapted to this new architecture: monitoring, logging, deployment.<\/p>\n"},{"title":"Symfony and InfluxDB to monitor PHP application","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/symfony-and-influxdb-to-monitor-php-applications"}},"description":"How to monitor your Symfony and PHP applications with InfluxDB.","image":"https:\/\/gianarb.it\/img\/influx.jpg","updated":"2016-07-02T10:08:27+00:00","published":"2016-07-02T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/symfony-and-influxdb-to-monitor-php-applications","content":"<p>Symfony is one of the most famous PHP frameworks in use right now, and today we are going to use it to show how important it is to know how one of our features performs. We are not monitoring CPU usage, disk I\/O or the number of server errors: we are monitoring the final feature from the business point of view. This approach is very important because understanding the impact of a new release on a specific, critical feature is the best way to understand how customers use our service.<\/p>\n\n<p>In this article we are implementing a monitor for one of the most common business requirements: authentication.<\/p>\n\n<p>We want to understand how many people try to log in, track how many of them fail the authentication, and use these metrics to understand how the system evolves. Sometimes, right after a deploy, the number of wrong logins grows faster than usual; this can be a sign that the feature doesn\u2019t work as expected. We begin from the standard Symfony application:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ composer create-project symfony\/framework-standard-edition influxdb_app\n$ cd influxdb_app\/\n$ php bin\/console server:run<\/code><\/pre><\/figure>\n\n<p>We create one route behind authorization (\/admin), private and visible only to admin users, and one public homepage (\/).<\/p>\n\n<p>You can follow the official tutorial, or take this step of the Symfony application directly from GitHub. We have an admin panel and a public site; the idea is to use our InfluxDB PHP SDK to understand how this feature works. We use the Dependency Injection Container (DiC) provided by Symfony to create our influxdb.client.<\/p>\n\n<p>Go into the project\u2019s root and use composer to install the library: <code>composer require influxdb\/influxdb-php<\/code>. The first thing to do is add some parameters: the host and port of our InfluxDB. 
To do that, open app\/config\/parameters.yml and add these fields:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\">influxdb_host: 127.0.0.1\ninfluxdb_port: 8086\ninfluxdb_db: symfony_influx<\/code><\/pre><\/figure>\n\n<p>We use the REST API to send metrics to InfluxDB; if your connection parameters are different, please change them.<\/p>\n\n<p>The second step is to configure Symfony\u2019s DiC so we can get our client around the application: open app\/config\/services.yml and add these lines.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\">services:\n    # ...\n    influxdb_client:\n      class: InfluxDB\\Client\n      arguments: [&#39;%influxdb_host%&#39;, &#39;%influxdb_port%&#39;]\n    influxdb_database:\n      class: InfluxDB\\Database\n      arguments: [&#39;%influxdb_db%&#39;, &#39;@influxdb_client&#39;]<\/code><\/pre><\/figure>\n\n<p>With this specification we are asking the DiC to provide an influxdb_client; it\u2019s an InfluxDB\\Client object with two constructor parameters: influxdb_host and influxdb_port.<\/p>\n\n<p>InfluxDB can have different databases; influxdb_database is a service that uses the influxdb_client to work with only one database, influxdb_db. Now we have an influxdb.database ready to be used! Just to check that everything works fine, open the DefaultController and try to send a page view metric:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">\/**\n     * @Route(&quot;\/&quot;, name=&quot;homepage&quot;)\n     *\/\n    public function indexAction(Request $request)\n    {\n        $result = $this-&gt;get(&quot;influxdb_database&quot;)-&gt;writePoints([new Point(\n          &#39;page_view&#39;,  \/\/ name of the measurement\n          1             \/\/ the measurement value\n        )]);\n\n        \/\/ replace this example code with whatever you need\n        return $this-&gt;render(&#39;default\/index.html.twig&#39;, [\n            &#39;base_dir&#39; =&gt; realpath($this-&gt;getParameter(&#39;kernel.root_dir&#39;).&#39;\/..&#39;),\n        ]);\n    }<\/code><\/pre><\/figure>\n\n<p><img class=\"img-fluid\" alt=\"InfluxDB admin panel\" src=\"\/img\/influxdb_admin.png\" \/><\/p>\n\n<p>Go to the homepage and in the meantime run a query like <code>SELECT * FROM &quot;symfony_influx&quot;..&quot;page_view&quot;<\/code> in InfluxDB\u2019s admin panel: you are sending a new point for each visit! Very good, but we have another target! If you have problems and you are using my repository, check the difference between this and the previous step on GitHub.<\/p>\n\n<p>Sending a point in this method is not good practice, because our controller ends up with two responsibilities: rendering the page and sending a point. In this example the situation is not dangerous, because the application is very simple and has very low traffic, but Symfony provides a strong event system, perfect for splitting the logic into different classes and simplifying our code. We follow this approach for our last step: we create a listener that sends a point when a user fails a login. 
First, we create the listener in src\/AppBundle\/Listener\/MonitorAuthenticationListener.php.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nnamespace AppBundle\\Listener;\nuse Symfony\\Component\\Security\\Core\\Event\\AuthenticationFailureEvent;\nuse InfluxDB\\Point;\nclass MonitorAuthenticationListener\n{\n    \/\/ The InfluxDB\\Database service injected from the container.\n    private $database;\n    public function __construct($database)\n    {\n        $this-&gt;database = $database;\n    }\n    \/\/ Called on each failed login; stores a &#39;login&#39; point tagged as error.\n    public function onFailure(AuthenticationFailureEvent $event)\n    {\n        $this-&gt;database-&gt;writePoints([new Point(\n            &#39;login&#39;,\n            1,\n            [&#39;status&#39; =&gt; &#39;error&#39;]\n        )]);\n    }\n}<\/code><\/pre><\/figure>\n\n<p>We use the DiC to attach this listener to the security.authentication.failure event, which is fired after each failed login. To do that, open app\/config\/services.yml and add this configuration.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\">services:\n    # ....\n    security.authentication.monitoring:\n        class: AppBundle\\Listener\\MonitorAuthenticationListener\n        arguments: [&#39;@influxdb_database&#39;]\n        tags:\n            - { name: kernel.event_listener, event: security.authentication.failure, method: onFailure }<\/code><\/pre><\/figure>\n\n<p>We are injecting our InfluxDB database into the constructor, and we use it to send points just like in the earlier controller example. This is the last practical section of this tutorial; if you got lost somewhere, check the diff from the previous step on GitHub. Try a few wrong logins and check the situation in the admin panel with a query like:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-sql\" data-lang=\"sql\">SELECT * FROM &quot;symfony_influx&quot;..&quot;login&quot;<\/code><\/pre><\/figure>\n\n<p><img class=\"img-fluid\" alt=\"InfluxDB admin panel\" src=\"\/img\/chronograf.png\" \/><\/p>\n\n<p>The admin panel is not the best way to look at our metrics: InfluxData provides a great dashboard called Chronograf; try to use this metric to build a graph that shows how your feature works. This post is only a getting-started guide to sending metrics without coupling your business logic directly to the monitoring system; with real traffic this approach is totally inefficient.<\/p>\n\n<p>Sending one point at a time increases the traffic on your network, and the latency creates performance problems. Telegraf is a collector you can use to mitigate this: instead of sending your points directly to InfluxDB, you install this agent on your server and it collects and sends batches of data for you.<\/p>\n"},{"title":"Docker 1.12 orchestration built-in","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/docker-1-12-orchestration-built-in"}},"description":"Docker 1.12 adds different new features around orchestration, scaling and deployment; in this article I am happy to share some tests I did with this version","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-06-20T10:08:27+00:00","published":"2016-06-20T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/docker-1-12-orchestration-built-in","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Some tests with Docker 1.12! 
<a href=\"https:\/\/t.co\/budUOtMuBB\">https:\/\/t.co\/budUOtMuBB<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/docker?src=hash\">#docker<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/DockerCon?src=hash\">#DockerCon<\/a>\norchestration, swarm and services.<\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/744977855277309953\">June 20,\n2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>During the DockerCon 2016 docker announced Docker 1.12 release.\nOne of the news stories around the new version is the orchestration system built directly\ninside the engine, this feature allow us to use swarm\nwithout installing it separately from outside, it\u2019s now a feature provided by Docker directly.<\/p>\n\n<p>Now we have a new set of commands that allow us to orchestrate containers\nacross a cluster.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker swarm\ndocker node\ndocker service<\/code><\/pre><\/figure>\n\n<p>All these commands are focused on increasing our ability to orchestrate our\ncontainers and also join them in services.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">#!\/bin\/bash\n\n# create swarm manager\ndocker-machine create -d virtualbox sw1\necho &quot;sudo \/etc\/init.d\/docker stop &amp;&amp; \\\n    curl https:\/\/test.docker.com\/builds\/Linux\/x86_64\/docker-1.12.0-rc2.tgz | \\\n    tar xzf - &amp;&amp; sudo mv docker\/* \/usr\/local\/bin &amp;&amp; \\\n    rm -rf docker\/ &amp;&amp; sudo \/etc\/init.d\/docker start&quot; | \\\n    docker-machine ssh sw1 sh -\ndocker-machine ssh sw1 docker swarm init\n\n# create another swarm node\ndocker-machine create -d virtualbox sw2\necho &quot;sudo \/etc\/init.d\/docker stop &amp;&amp; \\\n    curl https:\/\/test.docker.com\/builds\/Linux\/x86_64\/docker-1.12.0-rc2.tgz | \\\n    tar xzf - &amp;&amp; sudo mv docker\/* \/usr\/local\/bin &amp;&amp; \\\n    rm -rf docker\/ &amp;&amp; sudo \/etc\/init.d\/docker start&quot; | \\\n    docker-machine ssh sw2 sh -\ndocker-machine ssh sw2 docker swarm join $(docker-machine ip sw1):2377<\/code><\/pre><\/figure>\n\n<p>another Captain wrote this script that I just updated to work\nwith the public Docker 1.12-rc2. We can use this script to create a cluster with\nvirtual box ready to be used.  After this script you can see the number of\nworkers and masters, in this case your one and one.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ docker node ls<\/code><\/pre><\/figure>\n\n<p>Docker 1.12 has a built-in set of primitive functions to orchestrate your containers just\nlike a summary. The main commands that you must run to create a cluster are<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">## On the master to start your cluster\n$ docker swarm init --listen-addr &lt;master-IP(this ip)&gt;:2377\n## on each node to add it into the cluster\n$ docker swarm join &lt;master-ip&gt;:2377<\/code><\/pre><\/figure>\n\n<p><img class=\"img-fluid\" alt=\"Docker Swarm architecture\" src=\"\/img\/posts\/swarm_arch.png\" \/><\/p>\n\n<p>If you are not confident with docker swarm this is the architecture, this graph\nis provided by Docker Inc. 
and explains the design of this project really well. The principal actors are managers and workers: managers are the brains of the system, they dispatch schedules and keep track of services and containers; workers execute these commands.<\/p>\n\n<p>The cluster is secure because each node has its own TLS identity, and all communications are encrypted end to end by default, with automatic key rotation to increase the security of the keys used in the cluster.<\/p>\n\n<p><a href=\"https:\/\/raft.github.io\/\">Raft<\/a> is the consensus protocol used to distribute messages around the cluster and keep track of the nodes. It\u2019s a complex but really interesting algorithm; I have another article about it planned, and the official site contains a lot of details.<\/p>\n\n<p>We already saw the concept of services in docker-compose: a service is a single container, or a group of containers, that describes your ecosystem, and you can scale a specific service or orchestrate it across your cluster. It\u2019s the same here. You don\u2019t have a specification file like compose at the moment, but you can run a handful of commands to create your service.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ docker service create --name helloworld --replicas 1 alpine ping docker.com<\/code><\/pre><\/figure>\n\n<p>With this example we spin up a new service called helloworld: it has one container from the alpine image, and it pings the docker.com site.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker service ls<\/code><\/pre><\/figure>\n\n<p>This lists all our services; we can also inspect a single service.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker service inspect &lt;service_id&gt;<\/code><\/pre><\/figure>\n\n<p>There is a new concept here: when you run a service you are also creating a task. A task represents the container(s) behind your service; in this case we have just one task.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker service tasks helloworld<\/code><\/pre><\/figure>\n\n<p>When you scale your service, you are creating new tasks.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker service scale helloworld=10<\/code><\/pre><\/figure>\n\n<p>Now you can see 10 tasks running, and you can inspect one of them; inside you can find the containerId and, for example, follow its logs.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">22:17 $ docker inspect  6fhfse4it8lwzlsk1t5sd5jbk\n[\n    {\n        &quot;ID&quot;: &quot;6fhfse4it8lwzlsk1t5sd5jbk&quot;,\n        &quot;Version&quot;: {\n            &quot;Index&quot;: 67\n        },\n        &quot;CreatedAt&quot;: &quot;2016-06-18T21:06:36.707664178Z&quot;,\n        &quot;UpdatedAt&quot;: &quot;2016-06-18T21:06:39.241942781Z&quot;,\n        &quot;Spec&quot;: {\n            &quot;ContainerSpec&quot;: {\n                &quot;Image&quot;: &quot;alpine&quot;,\n                &quot;Args&quot;: [\n                    &quot;ping&quot;,\n                    &quot;docker.com&quot;\n                ]\n            },\n            &quot;Resources&quot;: {\n                &quot;Limits&quot;: {},\n                &quot;Reservations&quot;: {}\n            },\n            &quot;RestartPolicy&quot;: {\n                &quot;Condition&quot;: &quot;any&quot;,\n                &quot;MaxAttempts&quot;: 
0\n            },\n            &quot;Placement&quot;: {}\n        },\n        &quot;ServiceID&quot;: &quot;24e0pojscuj2irvlxvx2baiid&quot;,\n        &quot;Slot&quot;: 2,\n        &quot;NodeID&quot;: &quot;55v4jjzf56mcwnhbwvn4cq1rs&quot;,\n        &quot;Status&quot;: {\n            &quot;Timestamp&quot;: &quot;2016-06-18T21:06:36.7110425Z&quot;,\n            &quot;State&quot;: &quot;running&quot;,\n            &quot;Message&quot;: &quot;started&quot;,\n            &quot;ContainerStatus&quot;: {\n                &quot;ContainerID&quot;: &quot;4ec69142e3e886098915140663737f4176c6de5afe9f2fad1f5b2439d8fc336d&quot;,\n                &quot;PID&quot;: 3627\n            }\n        },\n        &quot;DesiredState&quot;: &quot;running&quot;\n    }\n]\n22:17 $ docker logs -f 6fhfse4it8lwzlsk1t5sd5jbk<\/code><\/pre><\/figure>\n\n<p>At this point it is a normal container running on your cluster. I have tried to explain the main concepts of this big feature provided by Docker 1.12; the last example just covers the DNS topic.<\/p>\n\n<p>I created an application that serves an HTTP server and prints the current IP. Each service has an internal load balancer that dispatches traffic in round robin between the different tasks. In this way it\u2019s totally transparent: you just resolve your service with a normal URL, and Docker does the rest for you.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ docker service create --name micro --replicas 10 --publish 8000\/tcp gianarb\/micro<\/code><\/pre><\/figure>\n\n<p><a href=\"https:\/\/github.com\/gianarb\/micro\">Micro<\/a> is an application that exposes an HTTP server on port 8000 and prints the current IP; now we have 10 tasks behind this service. To grab the current entry point of our service we can inspect it and look for this information:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ docker service inspect &lt;service-id&gt;\n\n...\n      &quot;Endpoint&quot;: {\n            &quot;Spec&quot;: {},\n            &quot;Ports&quot;: [\n                {\n                    &quot;Protocol&quot;: &quot;tcp&quot;,\n                    &quot;TargetPort&quot;: 8000,\n                    &quot;PublishedPort&quot;: 30000\n                }\n            ],\n            &quot;VirtualIPs&quot;: [\n                {\n                    &quot;NetworkID&quot;: &quot;890fivvc6od3pa4rxd281lobb&quot;,\n                    &quot;Addr&quot;: &quot;10.255.0.5\/16&quot;\n                }\n            ]\n       }\n...<\/code><\/pre><\/figure>\n\n<p>In this case the published port is 30000, so we can call <code>&lt;node-ip&gt;:30000<\/code> to reach our service; if you make multiple requests you will see the IP change, because the internal load balancer is calling different containers.<\/p>
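<p>A quick way to see the round robin in action is the little Go program below; it is my own sketch, not part of the micro repository, and the node IP is a placeholder you should replace with one of your swarm machines. It hits the published port a few times and prints each response body, which should show different IPs.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">package main\n\nimport (\n    &quot;fmt&quot;\n    &quot;io\/ioutil&quot;\n    &quot;net\/http&quot;\n)\n\nfunc main() {\n    \/\/ Placeholder address: &lt;node-ip&gt;:30000, the PublishedPort seen above.\n    const url = &quot;http:\/\/192.168.99.100:30000\/&quot;\n\n    for i := 0; i &lt; 5; i++ {\n        resp, err := http.Get(url)\n        if err != nil {\n            fmt.Println(&quot;request failed:&quot;, err)\n            continue\n        }\n        body, _ := ioutil.ReadAll(resp.Body)\n        resp.Body.Close()\n        \/\/ Every task answers with its own IP, so consecutive\n        \/\/ requests should print different addresses.\n        fmt.Printf(&quot;request %d -&gt; %s\\n&quot;, i+1, body)\n    }\n}<\/code><\/pre><\/figure>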
<p>This was just an overview of the feature, but there is other powerful news, like DABs, stacks and easy updates of your containers; that could be the topic of my next article. Please stay in touch and follow me on <a href=\"https:\/\/twitter.com\/GianArb\">Twitter<\/a> to chat and receive news about the next articles.<\/p>\n\n<blockquote>\n  <p>Thanks <a href=\"https:\/\/twitter.com\/gpelly\">@gpelly<\/a> for your review!<\/p>\n<\/blockquote>\n"},{"title":"A little bit of refactoring","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/a-little-bit-of-refactoring"}},"description":"Strategies for refactoring your code, plus performance tricks from the PHPKonf Istanbul conference.","image":"https:\/\/gianarb.it\/img\/refactoring.jpg","updated":"2016-04-24T10:08:27+00:00","published":"2016-04-24T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/a-little-bit-of-refactoring","content":"<p>I wrote these notes during <a href=\"https:\/\/phpkonf.org\/\">PHPKonf<\/a>, where I spoke about <a href=\"\/jenkins-real-world\/#\/\">Jenkins and Continuous Delivery<\/a>; over those days I followed a few interesting talks, and this is my list of notes.<\/p>\n\n<p>Is code reusable? For a few people the answer is yes, for others it is no. I agree with <a href=\"https:\/\/twitter.com\/ocramius\">Ocramius<\/a> that the answer is no. An abstraction is reusable, an interface is reusable, but it\u2019s very hard to reuse a final implementation. First of all, because when you finish writing it your code is already old: your function is already legacy, and you start maintaining it, hunting bugs and edge cases.<\/p>\n\n<p>One way to reduce the time dedicated to refactoring is to prevent and defend your code from bad integrations. In OOP you usually have visibility modifiers (private, public, protected) and other different ways to defend your code; it\u2019s debatable, but Ocramius\u2019s talk <a href=\"https:\/\/ocramius.github.io\/extremely-defensive-php\/#\/\">Extremely Defensive PHP<\/a> is worth watching.<\/p>\n\n<p>Refactoring is a methodology for making your code better. There are different areas of improvement, like readability, performance and solidity.<\/p>\n\n<ul>\n  <li>Making your code readable for the next generation is one of the best things you can do to show your love for your team and your company.<\/li>\n  <li>If your site takes longer to load, you usually lose clients. \u201cLess performance is a bug\u201d, as <a href=\"https:\/\/twitter.com\/fabpot\">Fabien Potencier<\/a> puts it.<\/li>\n  <li>When you run your code everything is fine: you are a good developer and your feature works. After the deploy to production the code is the same, but there is a tricky category of people, your clients, who will use it in very strange ways; that usually means bugs and edge cases. Each bug fix makes your code more solid.<\/li>\n<\/ul>\n\n<p>Test your code before you start to change it; you know automation is good, but if you love behaving like a machine you can always do it manually. Set up a continuous integration system: it can start by doing just one step, like running the tests, but remember to extend it with all the steps you usually perform to check the compliance of your code, like style, standards and static analysis, just to enforce that you are not a machine. Creating a good environment and an automatic lifecycle for your application lets you stay focused on the code instead of losing time on tedious tasks; remember that when a routine is well defined, a machine usually fails less than a human.<\/p>\n\n<p>Refactoring is one of the best things you can do for other people and to make your feature ready for the real world. It\u2019s usually hard for a non-technical company to understand, because often they don\u2019t see any visible change. Creating a good environment that saves time, and using that time to refactor, is a good strategy, and automation is the only method I know to achieve it. There are different layers of automation; to start, my two cents is to put a Makefile in your codebase, and the second time you type something in your console, stop and turn it into a make task to share with your team. 
After that, install Jenkins and let it run these tasks for you before anything lands on the master branch (for Git users; trunk for SVN users).<\/p>\n\n<p>Making your development environment comfortable, and increasing the perceived safety of the lifecycle, is the best way to refactor without the fear of dying; if you are afraid to die, you usually do nothing.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Great talk as always Gianluca :) <a href=\"https:\/\/twitter.com\/GianArb\">@GianArb<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/phpkonf?src=hash\">#phpkonf<\/a> <a href=\"https:\/\/t.co\/ZW2G1UsXm7\">pic.twitter.com\/ZW2G1UsXm7<\/a><\/p>&mdash; Fontana Lorenzo (@fntlnz) <a href=\"https:\/\/twitter.com\/fntlnz\/status\/733986655334486016\">May 21, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Add PHPKonf to your list! See you next year!<\/p>\n"},{"title":"Docker inside docker and overview about Jenkins 2","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/docker-inside-docker-and-jenkins-2"}},"description":"A little overview of Jenkins 2, but the main topic of this article is how to run Docker inside Docker to start a continuous integration system inside a container","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2016-04-01T10:08:27+00:00","published":"2016-04-01T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/docker-inside-docker-and-jenkins-2","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/hashtag\/docker?src=hash\">#docker<\/a> inside docker and an overview about Jenkins 2 <a href=\"https:\/\/t.co\/qa5ddjfhrs\">https:\/\/t.co\/qa5ddjfhrs<\/a> <a href=\"https:\/\/twitter.com\/docker\">@docker<\/a> <a href=\"https:\/\/twitter.com\/jenkinsci\">@jenkinsci<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/container?src=hash\">#container<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/727876226875068416\">May 4, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Jenkins is one of the most famous continuous integration and deployment tools; it\u2019s written in Java, and it helps you manage your pipeline and all the tasks that put your code into production or manage your build.<\/p>\n\n<p>The announcement of Jenkins version 2, a few days ago, is one of the best releases of this year in my opinion.<\/p>\n\n<p>The previous version is very stable, but it is many years old and the ecosystem is totally different now. 
I am happy to see a strong refurbishment to get the best out of this powerful tool, with a series of new features like:<\/p>\n\n<ul>\n  <li>A nice installation wizard<\/li>\n  <li>A redesign of the UI, one of the most criticized points of the previous version<\/li>\n  <li>A good, modern set of plugins, like <a href=\"https:\/\/jenkins.io\/solutions\/pipeline\/\">Jenkins Pipeline<\/a>, to manage your build<\/li>\n<\/ul>\n\n<p>Jenkins is truly a wonder, but the tool of the moment is Docker, the engine that makes working with containers easier.<\/p>\n\n<p>These two tools together are perfect for creating an isolated environment to test and deploy your applications.<\/p>\n\n<p>A first setup could be to install Jenkins on your server and use a plugin to manage the integration and trigger your tests inside an isolated environment, the container.<\/p>\n\n<p>That works, but in my opinion reproducibility is one of the critical points when you deal with plugins: if you cannot easily run your build in your local environment, you have a problem. Secondly, if a container is a good solution to deploy and maintain a solid, isolated application, why shouldn\u2019t your Jenkins have the privilege of running inside a container too? And from this perspective, how can we run containers inside a container?<\/p>\n\n<p>OK, now it\u2019s time to figure out how to solve these problems.<\/p>\n\n<p>We can use the official Jenkins image to put Jenkins inside a container, but I worked on my personal Alpine installation, light and easy; <a href=\"https:\/\/github.com\/gianarb\/dockerfile\/blob\/master\/jenkins\/2.0\/Dockerfile\">here is the Dockerfile<\/a>, and we can pull it:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker pull gianarb\/jenkins:2.0<\/code><\/pre><\/figure>\n\n<p>If you are interested, the main article explaining how to run Docker inside Docker was written by <a href=\"https:\/\/jpetazzo.github.io\/2015\/09\/03\/do-not-use-docker-in-docker-for-ci\/\">jpetazzo<\/a>. The idea is to run our Jenkins container with <code>--privileged<\/code> enabled and to share our docker binary and the socket <code>\/var\/run\/docker.sock<\/code> to manage our communications.<\/p>\n\n<ul>\n  <li><code>\/var\/run\/docker.sock<\/code> is the entry point of the Docker daemon<\/li>\n  <li><code>docker<\/code>, the command, is like a client that sends commands to the socket<\/li>\n  <li><code>--privileged<\/code> gives extended privileges to our container<\/li>\n<\/ul>\n\n<p>Translated into a docker command:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker run -v \/var\/run\/docker.sock:\/var\/run\/docker.sock \\\n    -v $(which docker):\/usr\/local\/bin\/docker \\\n    -p 5000:5000 -p 8080:8080 \\\n    -v \/data\/jenkins:\/var\/jenkins \\\n    --privileged \\\n    --restart always \\\n    gianarb\/jenkins:2.0<\/code><\/pre><\/figure>\n\n<p>We connect to <code>http:\/\/docker-ip:8080<\/code> and start the new awesome wizard!<\/p>\n\n<p><img class=\"img-fluid\" alt=\"First Jenkins 2 page, grab your key from the log and start\" src=\"\/img\/docker-in-docker\/jenkins2-start.png\" \/><\/p>\n\n<p><img class=\"img-fluid\" alt=\"Jenkins's plugins wizard\" src=\"\/img\/docker-in-docker\/jenkins2-plugin.png\" \/><\/p>\n\n<p>To verify that everything works, we can create a new job that only runs <code>docker ps -a<\/code>; our expectation is the same list of containers that we see outside Jenkins.<\/p>\n\n<p><img class=\"img-fluid\" alt=\"Result of the first build\" 
src=\"\/img\/docker-in-docker\/jenkins2-result.png\" \/><\/p>\n\n<p>Now we can use run command from jenkins to manage our build with docker without\nany kind of plugins but anyway you are free to use <a href=\"https:\/\/wiki.jenkins-ci.org\/display\/JENKINS\/Docker+Plugin\">Docker\nPlugin<\/a> to start\nyour build.<\/p>\n\n<p>I used Jenkins like an example to run docker inside another container but you\ncan use the same strategy to do the same with your applications if they require\na strong connection with docker.<\/p>\n"},{"title":"Happy docker's birthday and thanks","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/happy-docker-bday-and-thanks"}},"description":"Just a post to say thanks docker for your awesome community and happy birthday!","image":"https:\/\/gianarb.it\/img\/dockerbday.png","updated":"2016-03-25T10:08:27+00:00","published":"2016-03-25T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/happy-docker-bday-and-thanks","content":"<p>Just a post to say thanks docker for your awesome community and happy birthday!<\/p>\n\n<p>This week is the \u201cDocker\u2019s birthday week\u201d and already it this amazing, one week\nof birthday, a lot of MeetUp groups this week done a Tutorial Meetup to help\npeople to start with Docker, Dublin made it very well!<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Fantastic turnout for the <a href=\"https:\/\/twitter.com\/hashtag\/dockerbday?src=hash\">#dockerbday<\/a> <a href=\"https:\/\/twitter.com\/Workday\">@workday<\/a>. Thanks to everyone who\nattended and completed the voting app!! <a href=\"https:\/\/t.co\/zWQssvyHSd\">pic.twitter.com\/zWQssvyHSd<\/a><\/p>&mdash;\nTomWillFixIT (@tomwillfixit) <a href=\"https:\/\/twitter.com\/tomwillfixit\/status\/712749765151297537\">March 23,\n2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>50 people to understand as docker and all ecosystem works and to eat a slice of cake (thanks WorkDay)<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">All\nit&#39;s ready.. We can go! <a href=\"https:\/\/twitter.com\/hashtag\/dockerbday?src=hash\">#dockerbday<\/a> Dublin..\nSome problem? Don&#39;t worry to ask! <a href=\"https:\/\/t.co\/9qz3V9mW9y\">pic.twitter.com\/9qz3V9mW9y<\/a><\/p>&mdash;\nGianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/712705450786099200\">March 23,\n2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>There are different kind of developers, I am happy to work and to follow each\ncommunities that provide a good tools and increase the quality of my work, I\nspend much time accross different team like doctrine, InfluxDB and I am very\nhappy to see as Docker make a big effort to involve and to use its community.<\/p>\n\n<p>I wase member of the beautiful mentor team (we will share a \u201cretro pic\u201d next month\nbecause we forget to do it) and I am happy to see that we done a good work.<\/p>\n\n<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/GianArb\">@GianArb<\/a> tnks 4 the help 2nite\nGianluca!! 
Appreciate it <a href=\"https:\/\/twitter.com\/hashtag\/DublinDocker?src=hash\">#DublinDocker<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/dockerbday?src=hash\">#dockerbday<\/a><\/p>&mdash; Delpedro (@Delpedro47) <a href=\"https:\/\/twitter.com\/Delpedro47\/status\/712745923848351744\">March 23, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Today we saw that the community appreciates it too! Happy birthday, and <a href=\"https:\/\/www.meetup.com\/Docker-Dublin\/\">see you next month<\/a>!<\/p>\n"},{"title":"Some days of work vs Jenkins CI","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/some-days-of-work-vs-jenkins-ci"}},"description":"I love Jenkins CI: it is a beautiful, stable project for running jobs and managing continuous integration and deployment pipelines. A few days ago I worked on improving the delivery pipeline at CurrencyFair, and I started thinking about this topic; here is my internal battle with Jenkins CI","image":"https:\/\/gianarb.it\/img\/jenkins.png","updated":"2016-02-21T10:08:27+00:00","published":"2016-02-21T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/some-days-of-work-vs-jenkins-ci","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/t.co\/02HbnkzRsS\">https:\/\/t.co\/02HbnkzRsS<\/a> &quot;Some Days of work vs <a href=\"https:\/\/twitter.com\/hashtag\/JenkinsCI?src=hash\">#JenkinsCI<\/a>&quot; Little things about continuous integration <a href=\"https:\/\/twitter.com\/hashtag\/ci?src=hash\">#ci<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/dev?src=hash\">#dev<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/709466156453732352\">March 14, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>Guys, please put your hands down: I love Jenkins CI! I am not here to write a bad post about it! I am here to share a few days of reasoning about continuous integration, Jenkins CI and this whole big topic.<\/p>\n\n<h2 id=\"reproducible\">Reproducible<\/h2>\n<p>There are a lot of tools you can use to run tasks: ant, make, grunt. Use them to run part or all of your build in your local environment; this approach increases the value of your tasks, because you use and test them more often. A reproducible build also helps you keep your flow decoupled from your runner: maybe Jenkins is perfect, but there are other tools and services, like Travis CI, CircleCI and Drone, so don\u2019t create a big dependency on your environment.<\/p>\n\n<h2 id=\"speedy\">Speedy<\/h2>\n<p>A slow test suite is a bad idea: for one minute I can stay focused on the execution, but five minutes is a lot; you get a coffee or start thinking about another task, and coming back to the old one requires effort. All this focus switching is not good, and at the same time five minutes lost on every build, for every engineer, is a lot of money after a week.<\/p>\n\n<h2 id=\"versionable\">Versionable<\/h2>\n<p>I spent the most time on this point, and I am not sure whether it is a hard requirement, but Travis CI, for example, uses a YAML specification file; this file doesn\u2019t only describe your build, it becomes part of the story of your application if you include it in the VCS. 
Could that be a value for your pipeline?<\/p>\n\n<h2 id=\"maintainable\">Maintainable<\/h2>\n<p>There are a lot of tools you can choose from to create the perfect pipeline, and it\u2019s very easy to lose focus and start using too many of them. Try them all, but it\u2019s your job to create the perfect subset; point 1 (reproducibility) increases their value, so use tools that your team can reuse in its daily work, to improve development and make the flow better. Every tool you add seems perfect, until it becomes a problem.<\/p>\n\n<h2 id=\"scalable\">Scalable<\/h2>\n<p>An easy way to decrease the time of your job is to split it into different little jobs and run them in parallel; you can, for example, check the code style and run your test suite at the same time. Another good reason to create a scalable environment for your jobs is that your company will hopefully grow, and the continuous integration system should help it grow, not hold it back.<\/p>\n\n<h2 id=\"unique\">Unique<\/h2>\n<p>Jenkins, vagrant, ant, make, drone, docker: this is just a list of amazing tools for building the perfect pipeline to deploy and test your code, but they are only a means. The goal is the best pipeline for your code and for your team. Observe how your team works and what the requirements and critical points are, and design the best pipeline for your use case.<\/p>\n\n<h2 id=\"communication-layer\">Communication layer<\/h2>\n<p>One goal for your team is to understand the status of the build without logging into any application, because entering the Jenkins site (first of all because it is not beautiful :P ) is yet another step on top of all the others: create a feature branch, submit a pull request, write code, and so on. Use the pull request itself as the connection with your job: your continuous integration system can submit a new comment or, if you are working with GitHub, use the status checks; this way you help your colleagues during their work and remove one hop.<\/p>\n\n<p>With Jenkins CI you can do all of this, but what if you are spending too much time creating your best pipeline? Maybe you don\u2019t know the tool well enough, or maybe it is not the best tool for your use case. 
Jenkins is flexible, but is flexibility only the number of plugins you can install?<\/p>\n\n<p>I don\u2019t know. I use it, but I am happy to experiment, and there are a lot of new technologies and tools that may help us do good work, with or without Jenkins CI.<\/p>\n\n<h2 id=\"as-a-microservices\">As a microservices<\/h2>\n\n<p><img src=\"\/img\/pipeline.svg\" alt=\"Continuous Integration and Deploy pipeline\" \/><\/p>\n\n<p>This is a summary of a pipeline; every pipeline follows these steps, and from this point of view it looks very easy! Jenkins and Drone are very strong solutions, but they are all-in-one. If you follow this image, it\u2019s clear that you could build your own pipeline for your projects like LEGO, assembling the best steps for your team and your project.<\/p>\n\n<p>I am happy to share some projects that implement this approach.<\/p>\n\n<h2 id=\"slimmer-proof-of-concept\">Slimmer, proof of concept<\/h2>\n<p>I tried to create a runner for my test suite, <a href=\"https:\/\/github.com\/gianarb\/slimmer\">slimmer<\/a>, to implement this idea with Docker and Go. Go offers a lot of libraries and tools to create something in little time, and Docker is perfect because it creates isolated environments and is very easy to scale with Swarm. In practice, at the moment this console app executes a <code>build.slimmer<\/code>, an executable bash script, flexible and versionable. <a href=\"https:\/\/travis-ci.org\">TravisCI<\/a> is powerful, but is the YML file a good way to describe a build? Is it flexible? Maybe yes, but I am curious to try a \u201clow level\u201d approach, because in the end everything becomes a series of commands. I also created a couple of agents to trigger notifications quickly: <a href=\"https:\/\/github.com\/gianarb\/ircer\">ircer<\/a> and <a href=\"https:\/\/github.com\/gianarb\/slacker\">slacker<\/a>. You can use them to report the result of your build.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">composer install\nvendor\/bin\/phpunit\nRESULT=$?\ncurl -LSs https:\/\/github.com\/gianarb\/ircer\/releases\/download\/0.1.0\/ircer_0.1.0_linux_386 &gt; ircer\nchmod 755 ircer\nif [ $RESULT = 0 ]; then\n    .\/ircer -j tech-team -m &quot;You are a great developer. Your build works&quot;\nelse\n    .\/ircer -j tech-team -m &quot;Not bad, but your build doesn&#39;t work&quot;\nfi<\/code><\/pre><\/figure>\n\n<p>This is an example of <code>build.slimmer<\/code> with an IRC notification. It is a PoC, and I prepared a little <a href=\"\/slimmer-poc-slide\/#\/\">presentation<\/a> to collect some feedback; I presented it during a Dublin Go Meetup.<\/p>\n\n<div class=\"row\">\n    <div class=\"col-md-12 text-center\">\n        <iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/CWCHT3GClMM\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n    <\/div>\n<\/div>\n\n<p>I\u2019m waiting for your feedback if you are interested in continuous integration and continuous delivery.<\/p>\n"},{"title":"ChatOps create your IRC bot in Go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/chatops-create-your-own-irc-bot-in-go"}},"description":"ChatOps is a hot topic and it is growing day by day: IaaS gives you an API layer to manage your infrastructure, and you can build your automation layer on top of it. 
A pretty bot is a good assistant.","image":"https:\/\/gianarb.it\/img\/go.png","updated":"2016-02-21T10:08:27+00:00","published":"2016-02-21T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/chatops-create-your-own-irc-bot-in-go","content":"<p>Infrastructure as a service (IaaS) opened new ways to manage your infrastructure. Using an API to create, destroy and update your virtual machines is one of the biggest revolutions in our sector.<\/p>\n\n<p>A lot of companies and DevOps engineers started to create their own assistants to increase automation or to check the status of their infrastructure; above all, GitHub published a series of awesome blog posts and tools describing this approach, which has a name: ChatOps.<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/hubot.github.com\/\">HuBot<\/a> is a beautiful tool written in node.js to provide a smart bot.<\/li>\n  <li><a href=\"https:\/\/www.pagerduty.com\/blog\/what-is-chatops\/\">So, What is ChatOps? And How do I Get Started?<\/a> by PagerDuty<\/li>\n  <li><a href=\"https:\/\/github.com\/blog\/968-say-hello-to-hubot\">Say Hello to Hubot<\/a> by GitHub<\/li>\n<\/ul>\n\n<div class=\"row\">\n    <div class=\"col-md-12 text-center\">\n        <iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/IhzxnY7FIvg\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n    <\/div>\n<\/div>\n\n<p>IRC is an application layer protocol that facilitates communication. One of the most famous open IRC servers is freenode; most of the important open source projects use it to chat.<\/p>\n\n<p>This concept is already being applied, because many projects have their own bot; for example, Zend uses Zend\\Bot, a good assistant written by DASPRiD.<\/p>\n\n<p>ChatOps is an assistant meant to decrease the distance between your infrastructure and your communication channels.<\/p>\n\n<p>I wrote a low level library to communicate over the IRC protocol; we can try to use it to write our dummy bot.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">package main\n\nimport (\n    &quot;log&quot;\n    &quot;fmt&quot;\n    &quot;regexp&quot;\n    &quot;bufio&quot;\n    &quot;net\/textproto&quot;\n    &quot;github.com\/gianarb\/go-irc&quot;\n)\n\nfunc main(){\n    \/\/ NewBot, Connect and Send come from the go-irc library imported above.\n    bot := NewBot(\n        &quot;irc.freenode.net&quot;,\n        &quot;6667&quot;,\n        &quot;SybilBot&quot;,\n        &quot;SybilBot&quot;,\n        &quot;#channel-name&quot;,\n        &quot;&quot;,\n    )\n    conn, _ := bot.Connect()\n    defer conn.Close()\n\n    reader := bufio.NewReader(conn)\n    tp := textproto.NewReader(reader)\n    for {\n        line, err := tp.ReadLine()\n        if err != nil {\n            log.Fatal(&quot;unable to read from the IRC server &quot;, err)\n        }\n\n        \/\/ Answer the server&#39;s PING with a PONG so we stay connected.\n        isPing, _ := regexp.MatchString(&quot;PING&quot;, line)\n        if isPing {\n            bot.Send(&quot;PONG&quot;)\n        }\n\n        fmt.Printf(&quot;%s\\n&quot;, line)\n    }\n}<\/code><\/pre><\/figure>\n\n<p>With this code you have a bot; in this case her name is SybilBot, and at the moment she supports only the PING PONG flow. Without this health mechanism your bot goes down after a while.<\/p>\n\n<p>You can use the same loop to add other actions:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">yourAction, _ := regexp.MatchString(&quot;CheckSomething&quot;, line)\nif yourAction == true {\n    \/\/ Do Something\n}<\/code><\/pre><\/figure>
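<p>As a slightly more concrete (and entirely hypothetical) sketch of that pattern, the fragment below reacts to a made-up <code>!containers<\/code> command by answering on the channel; like the PONG above, it assumes that <code>Send<\/code> writes a raw IRC line, and the channel name and the count are placeholders.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">\/\/ Inside the same read loop: a hypothetical &quot;!containers&quot; command.\nisCount, _ := regexp.MatchString(&quot;!containers&quot;, line)\nif isCount {\n    \/\/ Here you would query your infrastructure API and build the reply;\n    \/\/ the number below is just a placeholder.\n    reply := fmt.Sprintf(&quot;PRIVMSG #channel-name :%d containers running&quot;, 42)\n    bot.Send(reply)\n}<\/code><\/pre><\/figure>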
<p><a href=\"https:\/\/github.com\/gianarb\/go-irc\">go-irc<\/a> lets you communicate over the IRC protocol. Our bot is very dumb, but I like the idea! If you are working on this topic, in Go or in another language, please ping me: I am very happy to meet your bot!<\/p>\n"},{"title":"InfluxDB PHP 1.3.0 is ready to go","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/influxdb-php-release-1-3-0"}},"description":"InfluxDB is a time series database: it helps us manage metrics and points, and it offers a stack of tools to collect and visualize this type of data. I am a maintainer of the InfluxDB PHP integration. In this post I describe the news in the new release, 1.3.0","image":"https:\/\/gianarb.it\/img\/influx.jpg","updated":"2016-02-18T10:08:27+00:00","published":"2016-02-18T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/influxdb-php-release-1-3-0","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Shout out to <a href=\"https:\/\/twitter.com\/GianArb\">@GianArb<\/a> for shipping a new release of the InfluxDB-PHP library! Here&#39;s what&#39;s new: <a href=\"https:\/\/t.co\/tJQIu9OCbL\">https:\/\/t.co\/tJQIu9OCbL<\/a><\/p>&mdash; InfluxData (@InfluxDB) <a href=\"https:\/\/twitter.com\/InfluxDB\/status\/704403294592970752\">February 29, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>We are happy to announce a new minor release of the <a href=\"https:\/\/github.com\/influxdata\/influxdb-php\">influxdb-php library<\/a>, 1.3.0.<\/p>\n\n<p>This is the list of PRs merged in 1.3.0 since 1.2.2:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/influxdata\/influxdb-php\/pull\/36\">#36<\/a> Added quoting of dbname in queries<\/li>\n  <li><a href=\"https:\/\/github.com\/influxdata\/influxdb-php\/pull\/35\">#35<\/a> Added orderBy to query builder<\/li>\n  <li><a href=\"https:\/\/github.com\/influxdata\/influxdb-php\/pull\/37\">#37<\/a> Fixed wrong orderby tests<\/li>\n  <li><a href=\"https:\/\/github.com\/influxdata\/influxdb-php\/pull\/38\">#38<\/a> Travis container-infra and php 7<\/li>\n<\/ul>\n\n<p>The <code>QueryBuilder<\/code> now supports the orderBy function to order our data; InfluxDB has supported it since version 0.9.4.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-sql\" data-lang=\"sql\">select * from cpu order by value desc<\/code><\/pre><\/figure>\n\n<p>Now you can do the same in PHP:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">$this-&gt;database-&gt;getQueryBuilder()\n  -&gt;from(&#39;cpu&#39;)\n  -&gt;orderBy(&#39;value&#39;, &#39;DESC&#39;)-&gt;getQuery();<\/code><\/pre><\/figure>\n\n<p>We also improved our continuous integration system to check our code against PHP 7: it\u2019s perfect!<\/p>\n\n<p>We now escape queries to support reserved keywords like <code>database<\/code> and <code>servers<\/code>; personally I prefer to avoid this type of word, but you are free to use them.<\/p>\n\n<p>We are very happy to learn how the PHP community uses this library and InfluxDB, so please share your experience and your problems in the repository or on IRC (join #influxdb on freenode), and we are waiting for you on <a href=\"https:\/\/twitter.com\/influxdata\">Twitter<\/a>.<\/p>\n\n<p>Remember to update your <code>composer.json<\/code>!<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n    &quot;require&quot;: {\n        &quot;influxdb\/influxdb-php&quot;: &quot;~1.3&quot;\n    }\n}<\/code><\/pre><\/figure>
<p>A big thanks to all our contributors!<\/p>\n"},{"title":"Swarm scales docker for free","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/swarm-scales-your-containter-for-free"}},"description":"Docker is an awesome tool to manage your containers. Swarm helps you scale your containers across more servers.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2015-12-14T10:08:27+00:00","published":"2015-12-14T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/swarm-scales-your-containter-for-free","content":"<blockquote class=\"twitter-tweet tw-align-center\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">An ocean of containers! With docker and swarm.. <a href=\"https:\/\/t.co\/1dXoZYS3ZA\">https:\/\/t.co\/1dXoZYS3ZA<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/docker?src=hash\">#docker<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/696620821931036672\">February 8, 2016<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p><a href=\"https:\/\/github.com\/gianarb\/gourmet\">Gourmet<\/a> is a work-in-progress application that allows you to execute little applications in an isolated environment: it downloads your manifest and runs it in a container. I started this application to improve my Go knowledge and to work with the Docker API, and I am happy to share my idea and my tests with Swarm, an easy way to scale this type of application.<\/p>\n\n<p>Gourmet exposes an HTTP API at the <code>\/project<\/code> endpoint that accepts a JSON request body like:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n    &quot;img&quot;: &quot;gourmet\/php&quot;,\n    &quot;source&quot;: &quot;https:\/\/ramdom-your-source.net\/gourmet.zip&quot;,\n    &quot;env&quot;: [\n        &quot;AWS_KEY=EXAMPLE&quot;,\n        &quot;AWS_SECRET=&quot;,\n        &quot;AWS_QUEUE=https:\/\/sqs.eu-west-1.amazonaws.com\/test&quot;\n    ]\n}<\/code><\/pre><\/figure>\n\n<ul>\n  <li><code>img<\/code> is the starting docker image<\/li>\n  <li><code>source<\/code> is your script<\/li>\n  <li><code>env<\/code> is a list of environment variables that you can use in your script<\/li>\n<\/ul>\n\n<p>During my tests I used this <a href=\"https:\/\/github.com\/gianarb\/gourmet-php-example\">php script<\/a>, which sends a message to SQS.<\/p>\n\n<p>Your script has an executable console entry point at the path <code>\/bin\/console<\/code>, and Gourmet uses it to run your program.<\/p>\n\n<p>To integrate with Docker I used <code>fsouza\/go-dockerclient<\/code>, an open source library written in Go.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">\/\/ dr.Docker is a *docker.Client from fsouza\/go-dockerclient.\ncontainer, err := dr.Docker.CreateContainer(docker.CreateContainerOptions{\n    &quot;&quot;,\n    &amp;docker.Config{\n        Image:        img,\n        Cmd:          []string{&quot;sleep&quot;, &quot;1000&quot;},\n        WorkingDir:   &quot;\/tmp&quot;,\n        AttachStdout: false,\n        AttachStderr: false,\n        Env:          envVars,\n    },\n    nil,\n})<\/code><\/pre><\/figure>\n\n<p>This is a snippet that can be used to create a new container. Once the container has started, I use the exec feature to extract your source and run it.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">exec, err := dr.Docker.CreateExec(docker.CreateExecOptions{\n    Container:    
\n\n<p>Your script exposes a console entrypoint, an executable at the path <code>\/bin\/console<\/code>, and\nGourmet uses it to run your program.<\/p>\n\n<p>To integrate it with Docker I used <code>fsouza\/go-dockerclient<\/code>, an open source\nlibrary written in Go.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">container, err := dr.Docker.CreateContainer(docker.CreateContainerOptions{\n    Name: &quot;&quot;,\n    Config: &amp;docker.Config{\n        Image:        img,\n        Cmd:          []string{&quot;sleep&quot;, &quot;1000&quot;},\n        WorkingDir:   &quot;\/tmp&quot;,\n        AttachStdout: false,\n        AttachStderr: false,\n        Env:          envVars,\n    },\n})<\/code><\/pre><\/figure>\n\n<p>This is a snippet that can be used to create a new container.\nWith the container started, I use the exec feature to\nextract your source and to run it.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">exec, err := dr.Docker.CreateExec(docker.CreateExecOptions{\n    Container:    containerId,\n    AttachStdin:  true,\n    AttachStdout: true,\n    AttachStderr: true,\n    Tty:          false,\n    Cmd:          command,\n})\n\nif err != nil {\n    return err\n}\n\nerr = dr.Docker.StartExec(exec.ID, docker.StartExecOptions{\n    Detach:      false,\n    Tty:         false,\n    RawTerminal: true,\n    OutputStream: dr.Stream,\n    ErrorStream:  dr.Stream,\n})<\/code><\/pre><\/figure>\n\n<p>After each build Gourmet cleans everything up and destroys the environment.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-go\" data-lang=\"go\">if err := dr.Docker.KillContainer(docker.KillContainerOptions{ID: containerId}); err != nil {\n    return err\n}\n\/\/ RemoveVolumes also drops the volumes created for this build.\nreturn dr.Docker.RemoveContainer(docker.RemoveContainerOptions{ID: containerId, RemoveVolumes: true})<\/code><\/pre><\/figure>\n\n<p>This is Gourmet at the moment. There are different hypothetical use cases:<\/p>\n\n<ul>\n  <li>highly isolated tasks<\/li>\n  <li>running a test suite<\/li>\n  <li>dispatching specific functions<\/li>\n<\/ul>\n\n<p>A microservice to work with Docker containers easily.<\/p>\n\n<p>I thought about an easy way to scale this application and I found\n<a href=\"https:\/\/docs.docker.com\/swarm\/\">Swarm<\/a>: it is native clustering for Docker and\nit looks awesome at first sight because it is compatible with the Docker API.<\/p>\n\n<h2 id=\"swarm\">Swarm<\/h2>\n<p>A Docker Swarm cluster is very easy to set up. I worked on the project\n<a href=\"https:\/\/github.com\/gianarb\/vagrant-swarm\">vagrant-swarm<\/a> to create a local\nenvironment, but <a href=\"https:\/\/docs.docker.com\/swarm\/install-manual\/\">the official\ndocumentation<\/a> is easy to follow.<\/p>\n\n<p>A Swarm cluster has two actors:<\/p>\n<ul>\n  <li>The master is the entrypoint for your requests; it provides an HTTP\nAPI compatible with Docker.<\/li>\n  <li>A series of nodes that communicate with the master.<\/li>\n<\/ul>\n\n<p>In this example we will work with 1 master and 2 nodes.\nBuilding these machines with VirtualBox, with another tool, or in the cloud is not a\nproblem; just <a href=\"https:\/\/docs.docker.com\/engine\/installation\/\">install Docker<\/a> on each of them.<\/p>\n\n<p>On the master, pull swarm and create a cluster identifier.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker pull swarm\ndocker run --rm swarm create\ndocker run --name swarm_master -d -p &lt;manager_port&gt;:2375 swarm manage token:\/\/&lt;cluster_id&gt;<\/code><\/pre><\/figure>\n\n<p><code>swarm create<\/code> returns a <code>cluster_id<\/code>: use it to start the manager. The\n<code>manager_port<\/code> is the port the manager listens on, on your master server.<\/p>\n\n<p>Now move to each node, because we must do a few things there.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker daemon -H tcp:\/\/0.0.0.0:2375 -H unix:\/\/\/var\/run\/docker.sock\ndocker run -d swarm join --addr=&lt;node_ip:2375&gt; token:\/\/&lt;cluster_id&gt;<\/code><\/pre><\/figure>\n\n<p>Here <code>cluster_id<\/code> is the id created in the previous step and <code>node_ip<\/code> is the IP\nof your current node.\nGo back to the master and restart your manager container.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker restart swarm_master<\/code><\/pre><\/figure>\n\n<p>Now we are ready to check that everything is up.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker -H tcp:\/\/0.0.0.0:2375 info<\/code><\/pre><\/figure>\n\n<p>Replace
 <code>0.0.0.0<\/code> with your master IP if you are not on the same server.\nYou should expect this type of response:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ sudo docker -H tcp:\/\/192.168.13.1:2375 info\nContainers: 1\nImages: 1\nRole: primary\nStrategy: spread\nFilters: health, port, dependency, affinity, constraint\nNodes: 2\n vagrant-ubuntu-vivid-64: 192.168.13.101:2375\n  \u2514 Status: Healthy\n  \u2514 Containers: 1\n  \u2514 Reserved CPUs: 0 \/ 1\n  \u2514 Reserved Memory: 0 B \/ 513.5 MiB\n  \u2514 Labels: executiondriver=native-0.2, kernelversion=3.19.0-43-generic, operatingsystem=Ubuntu 15.04, storagedriver=aufs\n vagrant-ubuntu-vivid-64: 192.168.13.102:2375\n  \u2514 Status: Healthy\n  \u2514 Containers: 0\n  \u2514 Reserved CPUs: 0 \/ 1\n  \u2514 Reserved Memory: 0 B \/ 513.5 MiB\n  \u2514 Labels: executiondriver=native-0.2, kernelversion=3.19.0-43-generic, operatingsystem=Ubuntu 15.04, storagedriver=aufs\nCPUs: 1\nTotal Memory: 513.5 MiB\nName: f5e23167339e<\/code><\/pre><\/figure>\n\n<p>Gourmet uses a set of environment variables to create a connection with the Docker\nAPI, in particular the function\n<a href=\"https:\/\/godoc.org\/github.com\/fsouza\/go-dockerclient#NewClientFromEnv\">NewClientFromEnv<\/a>\nand the <code>DOCKER_HOST<\/code> parameter.<\/p>\n\n<p>Docker Swarm exposes the same Docker API, so Gourmet can use more nodes without any code change.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ DOCKER_HOST=&quot;tcp:\/\/192.168.13.1:2333&quot; .\/gourmet api<\/code><\/pre><\/figure>\n\n"},{"title":"Docker and wordpress for a better world","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/wordpress-docker"}},"description":"Docker and WordPress to guarantee scalability, flexibility and isolation. A lot of web agencies install all their WordPress sites on the same server, but how can they manage a disaster? AWS with Elastic Container Service could be a more professional solution.","image":"https:\/\/gianarb.it\/img\/docker.png","updated":"2015-12-14T10:08:27+00:00","published":"2015-12-14T10:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/wordpress-docker","content":"<blockquote class=\"twitter-tweet tw-align-center\" lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/hashtag\/docker?src=hash\">#docker<\/a> and <a href=\"https:\/\/twitter.com\/hashtag\/wordpress?src=hash\">#wordpress<\/a> for a better world.. <a href=\"https:\/\/t.co\/o9c6YXvsl3\">https:\/\/t.co\/o9c6YXvsl3<\/a> Blogpost after my talk <a href=\"https:\/\/twitter.com\/CodemotionIT\">@CodemotionIT<\/a> How and Why?
<a href=\"https:\/\/twitter.com\/awscloud\">@awscloud<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/679241680797700096\">December 22, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>I am trying to represent a typical wordpress infrastructure<\/p>\n\n<p><img src=\"\/img\/posts\/2015-12-16\/wp-infra.png\" alt=\"Wordpress typical infrastructure\" \/><\/p>\n\n<p><strong>Isolation<\/strong>: every single wordpress share all with the others, filesystem,\nmemory, database.<\/p>\n\n<p>This lack of isolation causes different problems:<\/p>\n\n<ul>\n  <li>The monitoring of each installation is harder.<\/li>\n  <li>We share security problems<\/li>\n  <li>We don\u2019t have the freedom to work without the fear or blocking 100 customers<\/li>\n<\/ul>\n\n<p>We are overwhelmed by the problems<\/p>\n\n<p><img src=\"\/img\/posts\/2015-12-16\/problem.png\" alt=\"Problem\" \/><\/p>\n\n<h2 id=\"lxc-container\">LXC Container<\/h2>\n\n<blockquote>\n  <p>it is an operating-system-level virtualization environment for running multiple\nisolated Linux systems (containers) on a single Linux control host.<\/p>\n\n  <p>by wikipedia<\/p>\n<\/blockquote>\n\n<p>Wikipedia helps me to resolve one problem (theory), container is <strong>isolated\nLinux System<\/strong><\/p>\n\n<h2 id=\"docker\">Docker<\/h2>\n\n<p>Docker borns as wrap of LXC container but now we use an own implementation\n<a href=\"https:\/\/github.com\/opencontainers\/runc\">runc<\/a> to serve your application ready\nto go in an isolate environment, with own filesystem and dependencies.<\/p>\n\n<p>Worpdress in this implemetation has two containers, one to provide apache and\nphp and one for mysql database.  This is an example of Dockerfile, it describes\nhow a docker container works it is very simple to understand, from this example\nthere are different keywords<\/p>\n\n<ul>\n  <li><code>FROM<\/code> describes the image that we use as start point.<\/li>\n  <li><code>RUN<\/code> run a command.<\/li>\n  <li><code>EXPOSE<\/code> describes ports to open during a link, in this case MySql runs on\nthe default port 3306.<\/li>\n  <li><code>CMD<\/code> is the default command used during the run console command.<\/li>\n<\/ul>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">FROM ubuntu\nRUN dpkg-divert --local --rename --add \/sbin\/initctl\nRUN ln -s \/bin\/true \/sbin\/initctl\nRUN echo &quot;deb http:\/\/archive.ubuntu.com\/ubuntu precise main universe&quot; &gt; \/etc\/apt\/sources.list\nRUN apt-get update\nRUN apt-get -y install mysql-server\nEXPOSE 3306\nCMD [&quot;\/usr\/bin\/mysqld_safe&quot;]<\/code><\/pre><\/figure>\n\n<p>Very easy to read, it is a list of commands!\nWe are only write a container definition, now we can build it!<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker build -t gianarb\/mysql .<\/code><\/pre><\/figure>\n\n<p>In order to increase the value of this article and to use stable images I will\nuse the official <a href=\"https:\/\/hub.docker.com\/_\/mysql\/\">mysql<\/a> and\n<a href=\"https:\/\/hub.docker.com\/_\/wordpress\/\">wordpress<\/a> images.<\/p>\n\n<p>Download this images<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker pull wordpress\ndocker pull mysql<\/code><\/pre><\/figure>\n\n<p>We are ready to run all! 
 A Dockerfile is only a way to describe each single\ncontainer, and the pull command downloads containers ready to work; it is\na good way to reuse your own or other people\u2019s containers.<\/p>\n\n<p>We downloaded mysql and wordpress; with the run command we start them and we\ndefine our connections.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker run \\\n    --name mysql \\\n    -p 3306:3306 \\\n    -e MYSQL_ROOT_PASSWORD=help_me  mysql\n\ndocker run -e WORDPRESS_DB_HOST=wp1.database.prod \\\n    -e WORDPRESS_DB_USER=root \\\n    -e WORDPRESS_DB_PASSWORD=help_me \\\n    -p 8080:80 \\\n    -d --name wp1 \\\n    --link mysql:wp1.database.prod wordpress<\/code><\/pre><\/figure>\n\n<p>Let me explain these commands; they run two containers:<\/p>\n\n<ul>\n  <li>The name of the first container is mysql and it uses the <code>mysql<\/code> image. We\nuse the -p flag to expose the MySQL port: now you can use phpmyadmin or another client\nto fetch the data, but remember that this is not a good practice.<\/li>\n  <li>The second container, called wp1, uses the <code>wordpress<\/code> image and forwards\nthe container port 80 (Apache) to host port 8080, which in this case is the way\nto see the site. The --link flag is the correct way to consume MySQL outside the\nmain container; in this particular case we can use wp1.database.prod as the host\nto connect to MySQL from our WordPress container, awesome!<\/li>\n  <li>Docker images support environment variables (<code>ENV<\/code>); we can use them\nto configure our services, in this case to set the root password in MySQL and to\nconfigure WordPress\u2019s database connection.<\/li>\n<\/ul>\n\n<p>We are ready! Now you have a WordPress ready to go on port 8080.<\/p>\n\n<h2 id=\"docker-compose\">Docker Compose<\/h2>\n<p>To save time and to increase reusability we can use the\n<a href=\"https:\/\/docs.docker.com\/compose\/\">docker-compose<\/a> tool,\nwhich helps us to manage multi-container infrastructures, in this case one container for\nmysql and one for wordpress.\nIn practice we can describe all the work done above in a <code>docker-compose.yml<\/code> file:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\">wp:\n  image: wordpress\n  ports:\n    - 8081:80\n  environment:\n      WORDPRESS_DB_HOST: wp1.database.prod\n      WORDPRESS_DB_USER: root\n      WORDPRESS_DB_PASSWORD: help_me\n  links:\n    - mysql:wp1.database.prod\nmysql:\n  image: mysql:5.7\n  environment:\n    MYSQL_ROOT_PASSWORD: help_me<\/code><\/pre><\/figure>\n\n<p>Now we can run:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">docker-compose build\ndocker-compose up<\/code><\/pre><\/figure>\n\n<p>This prepares and starts our infrastructure. Now we have one WordPress with its own\nMySQL running on port 8081. We can change the WordPress port to start a new isolated\nWordPress installation.<\/p>\n\n<p class=\"text-center\">\n<iframe src=\"\/\/giphy.com\/embed\/l41lYCDgxP6OFBruE\" width=\"480\" height=\"268\" frameborder=\"0\" class=\"giphy-embed\" allowfullscreen=\"\"><\/iframe><p><a href=\"https:\/\/giphy.com\/gifs\/foxtv-win-ricky-gervais-emmys-2015-l41lYCDgxP6OFBruE\">via\nGIPHY<\/a><\/p>\n<\/p>\n\n<h2 id=\"in-cloud-with-aws-ecs\">In Cloud with AWS ECS<\/h2>\n<p>We won a battle but the war is long: we cannot use our PC as a server.
 In\nthis article I propose <a href=\"https:\/\/docs.aws.amazon.com\/AmazonECS\/latest\/developerguide\/Welcome.html\">AWS Elastic Container\nService<\/a>,\nan AWS service that helps us to manage containers. Why this service? Because\nit is Docker and Docker Compose friendly and it\u2019s managed by AWS. Maybe there are\nmore flexible solutions, such as Swarm or Kubernetes, but it is a good starting point.<\/p>\n\n<p><img src=\"\/img\/posts\/2015-12-16\/ecs.png\" alt=\"AWS Elastic Container Service\" \/><\/p>\n\n<p>A series of keywords to understand how it works:<\/p>\n\n<ul>\n  <li><strong>Container instance<\/strong>: An Amazon EC2 instance that is running the Amazon ECS Agent and has been registered into ECS.<\/li>\n  <li><strong>Cluster<\/strong>: It is a pool of Container instances.<\/li>\n  <li><strong>Task definition<\/strong>: A description of an application that contains one or more container definitions.<\/li>\n  <li>Each running Task definition is a <strong>Task<\/strong>.<\/li>\n<\/ul>\n\n<h3 id=\"in-practice\">In practice<\/h3>\n\n<ol>\n  <li>Create a cluster<\/li>\n<\/ol>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">ecs-cli configure \\\n    --region eu-west-1 \\\n    --cluster wps \\\n    --access-key apiKey \\\n    --secret-key secretKey<\/code><\/pre><\/figure>\n\n<ol start=\"2\">\n  <li>Up the nodes (one in this case)<\/li>\n<\/ol>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">ecs-cli up --keypair key-ecs \\\n    --capability-iam \\\n    --size 1 \\\n    --instance-type t2.medium<\/code><\/pre><\/figure>\n\n<ol start=\"3\">\n  <li>Push your first task!<\/li>\n<\/ol>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">ecs-cli compose --file docker-compose.yml  \\\n    --project-name wp1 up<\/code><\/pre><\/figure>\n\n<ol start=\"4\">\n  <li>Follow the status of your tasks<\/li>\n<\/ol>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">ecs-cli ps<\/code><\/pre><\/figure>\n\n<p>You can use another docker-compose.yml with a different WordPress port to build\nanother task with another WordPress!<\/p>\n\n<h2 id=\"now-is-only-a-problem-of-url\">Now it is only a problem of URLs<\/h2>\n<p>We have different isolated WordPress installations online, but they live on one IP with different\nports, and our customers would rather use a domain name.\nI don\u2019t know if this solution is ready to run in production with more and more\nWordPress installations, but a good service to route and proxy requests is\nHAProxy.
 This is an example of configuration for our use case:<\/p>\n\n<p>wp1.gianarb.it and wp2.gianarb.it are two of our customers, and 54.229.190.73:8080 and\n54.229.190.73:8081 are our WordPress installations.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">...\nfrontend wp_manager\n        bind :80\n        acl host_wp1 hdr(host) -i wp1.gianarb.it\n        acl host_wp2 hdr(host) -i wp2.gianarb.it\n        use_backend backend_wp1 if host_wp1\n        use_backend backend_wp2 if host_wp2\nbackend backend_wp1\n        server server1 54.229.190.73:8080 check\nbackend backend_wp2\n        server server2 54.229.190.73:8081 check<\/code><\/pre><\/figure>\n\n<p>Note: this configuration increases the scalability of our system, because we can\nadd other servers in order to support more traffic.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">backend backend_wp1\n        server server1 54.229.190.73:8080 check\n        server server2 54.229.190.12:8085 check\n        server server3 54.229.190.15:80 check<\/code><\/pre><\/figure>\n\n<h3 id=\"there-are-other-solutions\">There are other solutions<\/h3>\n<ul>\n  <li>Nginx<\/li>\n  <li>Consul to increase the stability and the scalability of our endpoint<\/li>\n<\/ul>\n\n<div class=\"alert alert-info\" role=\"alert\">\nThis article is based on my presentation at <a href=\"https:\/\/gianarb.it\/codemotion-2015\/\" target=\"_blank\">Codemotion 2015<\/a>\n<\/div>\n\n<div class=\"alert alert-success\" role=\"alert\">\nThanks for the review <a href=\"https:\/\/twitter.com\/fntlnz\" target=\"_blank\">Lorenzo<\/a>! I have been in Ireland for 3 weeks but I am not ready to\nwrite an article without your English review!\n<\/div>\n"},{"title":"FastEventManager, only an event manager","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/fast-event-manager-only-an-event-manager"}},"description":"FastEventManager is a PHP library designed to be a smart and light event manager. You can use it in your applications or as a base component for your framework. It adds capabilities around events, such as attaching and triggering them.","image":"https:\/\/gianarb.it\/img\/github.png","updated":"2015-11-01T00:00:00+00:00","published":"2015-11-01T00:00:00+00:00","id":"https:\/\/gianarb.it\/blog\/fast-event-manager-only-an-event-manager","content":"<blockquote>\n  <p>The Event-Driven Messaging is a design pattern, applied within the\nservice-orientation design paradigm in order to enable the service consumers,\nwhich are interested in events that occur within the periphery of a service\nprovider, to get notifications about these events as and when they occur\nwithout resorting to the traditional inefficient polling based mechanism.\nby <a href=\"https:\/\/en.wikipedia.org\/wiki\/Event-Driven_Messaging\">wiki<\/a><\/p>\n<\/blockquote>\n\n<p>In PHP there are different implementations of this pattern, but <a href=\"https:\/\/github.com\/gianarb\/fast-event-manager\">I tried to write\nmy own idea<\/a>:\nan event manager based on regexes that is easy to understand and to extend.<\/p>\n\n<p>Why?
 Because a regex is a good way to match strings: it is flexible and powerful.\nThe library is smart and little, and it can be used as the basis for a custom implementation:\nit resolves a regex and triggers events, and it supports a priority to order the\ntriggered listeners.<\/p>\n\n<h2 id=\"install\">Install<\/h2>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">composer require gianarb\/fast-event-manager<\/code><\/pre><\/figure>\n\n<h2 id=\"getting-started\">Getting Started<\/h2>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire __DIR__.&quot;\/vendor\/autoload.php&quot;;\nuse GianArb\\FastEventManager;\n$eventManager = new FastEventManager();\n$eventManager-&gt;attach(&quot;user_saved&quot;, function($event) {\n});\n$user = new Entity\\User();\n$eventManager-&gt;trigger(&quot;\/user_saved\/&quot;, $user);<\/code><\/pre><\/figure>\n\n<p>Each listener has a priority (default = 0); it describes the order of execution.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$eventManager-&gt;attach(&quot;wellcome&quot;, function() {\n    echo &quot; dev!&quot;;\n}, 100);\n$eventManager-&gt;attach(&quot;wellcome&quot;, function() {\n    echo &quot;Hello&quot;;\n}, 345);\n$eventManager-&gt;trigger(&quot;\/wellcome\/&quot;);\n\/\/output &quot;Hello dev!&quot;<\/code><\/pre><\/figure>\n\n<p>I wrote this library because there are a lot of solutions that implement this\npattern but they are verbose. This is only an event manager: if you need other\nfeatures you can extend it or you can use different implementations.\nOn top of this library you can write your own layer to build an event manager ready\nto use with your team in your applications.<\/p>\n\n<p>This is a good solution because it is easy: ~31 lines of code to trigger events,\nwithout the fear of inheriting many lines of code and unused features to maintain.<\/p>
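\n\n<p>Because the trigger takes a regex, one trigger can also fire several attached events at once. A minimal sketch, assuming the matching behaviour works as the trigger syntax in the examples above suggests:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire __DIR__.&quot;\/vendor\/autoload.php&quot;;\nuse GianArb\\FastEventManager;\n\n$eventManager = new FastEventManager();\n$eventManager-&gt;attach(&quot;user_created&quot;, function() {\n    echo &quot;send the welcome mail\\n&quot;;\n});\n$eventManager-&gt;attach(&quot;user_deleted&quot;, function() {\n    echo &quot;clean up the user data\\n&quot;;\n});\n\n\/\/ One regex matches both attached event names, so both listeners run.\n$eventManager-&gt;trigger(&quot;\/user_.*\/&quot;);<\/code><\/pre><\/figure>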
\n"},{"title":"Penny PHP framework made of components","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/penny-framework-made-of-components"}},"description":"Penny, a PHP framework made of components: write your micro-framework made of Symfony, Zend Framework and other components.","image":"https:\/\/gianarb.it\/img\/penny.jpg","updated":"2015-10-27T23:08:27+00:00","published":"2015-10-27T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/penny-framework-made-of-components","content":"<blockquote class=\"twitter-tweet tw-align-center\" lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/hashtag\/pennyphp?src=hash\">#pennyphp<\/a> <a href=\"https:\/\/t.co\/tsA2nE09GM\">https:\/\/t.co\/tsA2nE09GM<\/a> Why and what?! o.O <a href=\"https:\/\/twitter.com\/hashtag\/php?src=hash\">#php<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/framework?src=hash\">#framework<\/a> to build <a href=\"https:\/\/twitter.com\/hashtag\/microservices?src=hash\">#microservices<\/a> and application &quot;consciously&quot;<\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/659762064446083073\">October 29, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p class=\"text-center\">\n<iframe src=\"https:\/\/ghbtns.com\/github-btn.html?user=pennyphp&amp;repo=penny&amp;type=star&amp;count=true&amp;size=large\" frameborder=\"0\" scrolling=\"0\" width=\"160px\" height=\"30px\"><\/iframe>\n<\/p>\n\n<p>The PHP ecosystem is mature: there are a lot of libraries that help you to write\ngood, custom applications. Too many libraries require strong knowledge to\navoid maintainability problems, and they also open a world made of specific\nimplementations for specific use cases.<\/p>\n\n<p>A big framework sometimes adds a big overhead under your business logic, and some\nof its unused features could cause maintainability problems and chaos.<\/p>\n\n<p>Spending too much time reading the docs could be a problem: do you feel like you are\na system integrator and not a developer?! Those are different jobs!<\/p>\n\n<p>We are writing <a href=\"https:\/\/github.com\/pennyphp\/penny\">penny<\/a> to share this idea.\nThis is a middleware, event-driven framework to build the perfect\nimplementation for your specific project. The starting point we chose is made of:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zend-diactoros\">Zend\\Diactoros<\/a> PSR-7 HTTP\nlibrary<\/li>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zend-eventmanager\">Zend\\EventManager<\/a> to\ndesign the application flow<\/li>\n  <li><a href=\"https:\/\/php-di.org\">PHP-DI<\/a> DiC library<\/li>\n  <li><a href=\"https:\/\/github.com\/nikic\/FastRoute\">FastRoute<\/a> because it is fast and easy to\nuse<\/li>\n<\/ul>\n\n<p>but we are working to make every part of penny replaceable with the libraries perfect\nfor your use case.<\/p>\n\n<p>Are you curious to try this idea?
 We are writing extensive documentation around penny:\n<a href=\"https:\/\/docs.pennyphp.org\/en\/latest\/\">docs.pennyphp.org\/en\/latest<\/a><\/p>\n\n<p>And we have a set of use cases:<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/pennyphp\/penny-classic-app\">pennyphp\/penny-classic-app<\/a>\nbuilt with Plates<\/li>\n  <li><a href=\"https:\/\/github.com\/pennyphp\/bookshelf\">pennyphp\/bookshelf<\/a> built with\nDoctrine and Twig<\/li>\n  <li><a href=\"https:\/\/github.com\/gianarb\/twitter-uservice\">gianarb\/twitter-uservice<\/a> gets\nthe latest tweets from the <code>#AngularConf15<\/code> hashtag<\/li>\n<\/ul>\n\n<p><a href=\"https:\/\/github.com\/pennyphp\/penny\/issues?utf8=%E2%9C%93&amp;q=is%3Aissue\">Share your experience!<\/a><\/p>\n"},{"title":"vim composer 0.3.0 is ready","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/php-and-vim-composer-release-0-3-0"}},"description":"vim-composer is a plugin to manage the integration between composer and Vim","image":"https:\/\/gianarb.it\/img\/vim.png","updated":"2015-09-15T23:08:27+00:00","published":"2015-09-15T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/php-and-vim-composer-release-0-3-0","content":"<blockquote align=\"center\" class=\"twitter-tweet\" data-cards=\"hidden\" lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https:\/\/twitter.com\/hashtag\/vimForPHP?src=hash\">#vimForPHP<\/a> <a href=\"https:\/\/t.co\/EdczdpCrRc\">https:\/\/t.co\/EdczdpCrRc<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/php?src=hash\">#php<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/vim?src=hash\">#vim<\/a> Release 0.3.0 <a href=\"https:\/\/twitter.com\/hashtag\/composer?src=hash\">#composer<\/a> plugin is ready! Thanks <a href=\"https:\/\/twitter.com\/sensorario\">@sensorario<\/a> for your work!<\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/641674841192574976\">September 9, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>I\u2019m very happy to announce release 0.3.0 of <a href=\"https:\/\/github.com\/vim-php\/vim-composer\">vim-composer<\/a>.\nThis plugin builds a good integration between Vim and <a href=\"https:\/\/getcomposer.org\">composer<\/a>, the strong dependency manager for PHP.<\/p>\n\n<h2 id=\"changelog\">Changelog<\/h2>\n<ul>\n  <li><a href=\"https:\/\/github.com\/vim-php\/vim-composer\/pull\/18\">#18<\/a> Added missing ComposerUpdate function<\/li>\n  <li><a href=\"https:\/\/github.com\/vim-php\/vim-composer\/pull\/21\">#21<\/a> Added missing CONTRIBUTING.md file<\/li>\n  <li><a href=\"https:\/\/github.com\/vim-php\/vim-composer\/pull\/20\">#20<\/a> Require and\/or init commands<\/li>\n<\/ul>\n\n<p>Now this plugin serves a new function to require a specific package. Update it and map the new function <code>:ComposerRequireFunc<\/code>.<\/p>\n"},{"title":"Staging environment on demand with AWS Cloudformation","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/stagin-environment-on-demand-with-aws-cloudformation"}},"description":"Environments are ephemeral: they come and go really quickly based on needs. AWS delivers a service called CloudFormation that allows you to easily describe, via a JSON or YAML specification, a lot of AWS resources like EC2, Route53 hosted zones and domains, RDS, VPC, subnets and almost everything you normally manage via the console.
 This is infrastructure as code applied to AWS: it allows you to version and push to git an entire AWS environment. You can replicate it over and over.","image":"https:\/\/gianarb.it\/img\/amazon-aws-logo.jpg","updated":"2015-07-08T09:08:27+00:00","published":"2015-07-08T09:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/stagin-environment-on-demand-with-aws-cloudformation","content":"<blockquote class=\"twitter-tweet tw-align-center\" lang=\"en\"><p lang=\"en\" dir=\"ltr\">Staging environment on demand. To Work on <a href=\"https:\/\/twitter.com\/hashtag\/AWS?src=hash\">#AWS<\/a> low level with <a href=\"https:\/\/twitter.com\/hashtag\/cloudformation?src=hash\">#cloudformation<\/a> <a href=\"https:\/\/t.co\/VWBR129637\">https:\/\/t.co\/VWBR129637<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/cloud?src=hash\">#cloud<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/devops?src=hash\">#devops<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/621691855810494464\">July 16, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<h2 id=\"staging-environment\">Staging Environment<\/h2>\n<p>There are a few environments in my developer workflow; today I chose a little example:<\/p>\n\n<ul>\n  <li>The production environment always exists: it runs the stable application and you cannot use it for your tests.<\/li>\n  <li>The <strong>staging<\/strong> environment is a \u201cpre-production\u201d state.<\/li>\n  <li>The development environment is unstable and it runs new features and fixes; here is the work of the whole team, but it is not ready to go to production.<\/li>\n<\/ul>\n\n<p><img src=\"\/img\/cloudformation-staging\/staging.jpg\" alt=\"Staging graph\" \/><\/p>\n\n<p>The staging environment, in my opinion, could be a \u201cvolatile\u201d environment: we use it when our product is ready to go to production, and the rest of the time it stays unused.
 Maybe this statement does not hold in your job, but if you think of a little team of consultants that\nworks on different projects, maybe these words make sense.<\/p>\n\n<h2 id=\"aws-cloudformation\">AWS Cloudformation<\/h2>\n<p>CloudFormation is an AWS service that helps you to orchestrate all AWS services: you can write a template in JSON and you can use it to create an infrastructure with one click.\nThis solution helps me to build and destroy this environment, so we pay for it only when necessary; if you use <code>staging env == production env<\/code> it can be very, very expensive.\nThis solution could help you to drive down costs.<\/p>\n\n<h2 id=\"current-infrastructure\">Current infrastructure<\/h2>\n\n<p><img src=\"\/img\/cloudformation-staging\/infra.jpg\" alt=\"RDS and EC2 infrastructure\" \/><\/p>\n\n<p>This is my template to build a simple application: frontend + MySQL (RDS).\nIn this implementation I build the network configuration and I create one instance of RDS and one EC2 (my frontend).\nThe <code>Parameters<\/code> key is the list of external parameters that I can use to configure my template, for example the database and EC2 key pair, or my root\u2019s password.\nThe <code>Resources<\/code> key contains the description of all the actors of this infrastructure.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n  &quot;Parameters&quot; : {\n    &quot;VPCName&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;staging&quot;,\n      &quot;Description&quot; : &quot;VPC name&quot;\n    },\n    &quot;ProjectName&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;app&quot;,\n      &quot;Description&quot; : &quot;Project name&quot;\n    },\n    &quot;WebKey&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;web-key&quot;,\n      &quot;Description&quot; : &quot;Ssh key to log into the web instances&quot;\n    },\n    &quot;WebInstanceType&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;m3.medium&quot;,\n      &quot;Description&quot; : &quot;Web instance type&quot;\n    },\n    &quot;WebInstanceImage&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;ami-47a23a30&quot;,\n      &quot;Description&quot; : &quot;Web instance image&quot;\n    },\n    &quot;DatabaseInstanceType&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;db.m3.medium&quot;,\n      &quot;Description&quot; : &quot;Database instance type&quot;\n    },\n    &quot;DatabaseName&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;mydb&quot;,\n      &quot;Description&quot; : &quot;Database instance&#39;s name&quot;\n    },\n    &quot;DatabaseMasterUsername&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;gianarb&quot;,\n      &quot;Description&quot; : &quot;Name of master user&quot;\n    },\n    &quot;DatabaseEngineVersion&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;5.6&quot;,\n      &quot;Description&quot; : &quot;MySQL version&quot;\n    },\n    &quot;DatabaseUserPassword&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;test1234&quot;,\n      &quot;Description&quot; : &quot;User password&quot;\n    },\n    &quot;DatabasePublicAccess&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      
&quot;Default&quot; : true\n    },\n    &quot;DatabaseMultiAZ&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : false\n    }\n  },\n  &quot;Resources&quot; : {\n    &quot;Staging&quot;: {\n       &quot;Type&quot; : &quot;AWS::EC2::VPC&quot;,\n       &quot;Properties&quot; : {\n          &quot;CidrBlock&quot; : &quot;10.15.0.0\/16&quot;,\n          &quot;EnableDnsSupport&quot; : true,\n          &quot;EnableDnsHostnames&quot; : true,\n          &quot;InstanceTenancy&quot; : &quot;default&quot;,\n          &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: {&quot;Ref&quot;: &quot;VPCName&quot;}}]\n       }\n    },\n    &quot;DatabaseSubnet1&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::Subnet&quot;,\n      &quot;Properties&quot; : {\n        &quot;AvailabilityZone&quot; : &quot;eu-west-1a&quot;,\n        &quot;CidrBlock&quot; : &quot;10.15.1.0\/28&quot;,\n        &quot;MapPublicIpOnLaunch&quot; : true,\n        &quot;VpcId&quot;: {\n          &quot;Ref&quot; : &quot;Staging&quot;\n        },\n        &quot;Tags&quot;: [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;db-1a&quot;}]\n      }\n    },\n    &quot;DatabaseSubnet2&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::Subnet&quot;,\n      &quot;Properties&quot; : {\n        &quot;AvailabilityZone&quot; : &quot;eu-west-1b&quot;,\n        &quot;CidrBlock&quot; : &quot;10.15.1.16\/28&quot;,\n        &quot;MapPublicIpOnLaunch&quot; : true,\n        &quot;VpcId&quot;: {\n          &quot;Ref&quot; : &quot;Staging&quot;\n        },\n        &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;db-1b&quot;}]\n      }\n    },\n    &quot;WebSubnet1&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::Subnet&quot;,\n      &quot;Properties&quot; : {\n        &quot;AvailabilityZone&quot; : &quot;eu-west-1a&quot;,\n        &quot;CidrBlock&quot; : &quot;10.15.0.8\/28&quot;,\n        &quot;MapPublicIpOnLaunch&quot; : true,\n        &quot;VpcId&quot;: {\n          &quot;Ref&quot; : &quot;Staging&quot;\n        },\n        &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;web-1a&quot;}]\n      }\n    },\n    &quot;RDSSubnet&quot;: {\n     &quot;Type&quot; : &quot;AWS::RDS::DBSubnetGroup&quot;,\n     &quot;Properties&quot; : {\n        &quot;DBSubnetGroupDescription&quot;: &quot;db-prod-subnet-group&quot;,\n        &quot;SubnetIds&quot; : [\n          { &quot;Ref&quot;: &quot;DatabaseSubnet1&quot; },\n          { &quot;Ref&quot;: &quot;DatabaseSubnet2&quot; }\n        ]\n      }\n    },\n    &quot;Database&quot;: {\n      &quot;Type&quot; : &quot;AWS::RDS::DBInstance&quot;,\n      &quot;Properties&quot; : {\n        &quot;AllocatedStorage&quot;: &quot;5&quot;,\n        &quot;AllowMajorVersionUpgrade&quot; : false,\n        &quot;DBInstanceClass&quot;: {&quot;Ref&quot;:&quot;DatabaseInstanceType&quot;},\n        &quot;DBName&quot; : {&quot;Ref&quot;:&quot;DatabaseName&quot;},\n        &quot;DBInstanceIdentifier&quot;: {&quot;Ref&quot;:&quot;DatabaseName&quot;},\n        &quot;Engine&quot; : &quot;MySQL&quot;,\n        &quot;EngineVersion&quot; : {&quot;Ref&quot;:&quot;DatabaseEngineVersion&quot;},\n        &quot;DBSubnetGroupName&quot;: {\n          &quot;Ref&quot;: &quot;RDSSubnet&quot;\n        },\n        &quot;MasterUsername&quot; : {&quot;Ref&quot;: &quot;DatabaseMasterUsername&quot;},\n        &quot;MasterUserPassword&quot; : {&quot;Ref&quot;: &quot;DatabaseUserPassword&quot;},\n        &quot;MultiAZ&quot; : true,\n     
   &quot;VPCSecurityGroups&quot;: [\n          {\n            &quot;Ref&quot;: &quot;DatabaseSG&quot;\n          }\n        ],\n        &quot;PubliclyAccessible&quot; : {&quot;Ref&quot;: &quot;DatabasePublicAccess&quot;},\n        &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: {&quot;Fn::Join&quot;:[&quot;.&quot;, [&quot;db&quot;, {&quot;Ref&quot;: &quot;ProjectName&quot;}, {&quot;Ref&quot;:&quot;VPCName&quot;}]]} }]\n      }\n    },\n    &quot;WebInstance&quot; : {\n        &quot;Type&quot; : &quot;AWS::EC2::Instance&quot;,\n        &quot;Properties&quot; : {\n            &quot;ImageId&quot; : {&quot;Ref&quot;: &quot;WebInstanceImage&quot;},\n            &quot;InstanceType&quot; : {&quot;Ref&quot;: &quot;WebInstanceType&quot;},\n            &quot;KeyName&quot; : {&quot;Ref&quot;: &quot;WebKey&quot;},\n            &quot;BlockDeviceMappings&quot; : [\n                {\n                    &quot;DeviceName&quot; : &quot;\/dev\/sdm&quot;,\n                    &quot;Ebs&quot; : {\n                        &quot;VolumeType&quot; : &quot;io1&quot;,\n                        &quot;Iops&quot; : &quot;200&quot;,\n                        &quot;DeleteOnTermination&quot; : &quot;false&quot;,\n                        &quot;VolumeSize&quot; : &quot;20&quot;\n                    }\n                },\n                {\n                    &quot;DeviceName&quot; : &quot;\/dev\/sdk&quot;,\n                    &quot;NoDevice&quot; : {}\n                 }\n            ],\n            &quot;SubnetId&quot;: { &quot;Ref&quot; : &quot;WebSubnet1&quot; },\n            &quot;SecurityGroupIds&quot;: [\n                {&quot;Ref&quot;: &quot;WebSG&quot;}\n            ]\n        }\n    },\n    &quot;StagingZone&quot;: {\n      &quot;Type&quot; : &quot;AWS::Route53::HostedZone&quot;,\n      &quot;Properties&quot; : {\n        &quot;Name&quot; : {&quot;Fn::Join&quot;:[&quot;.&quot;, [{&quot;Ref&quot;: &quot;ProjectName&quot;}, {&quot;Ref&quot;:&quot;VPCName&quot;}]]},\n        &quot;VPCs&quot; : [{&quot;VPCId&quot;: {&quot;Ref&quot;: &quot;Staging&quot;}, &quot;VPCRegion&quot;: &quot;eu-west-1&quot;}]\n      }\n    },\n    &quot;StagingInternetGateway&quot; : {\n      &quot;Type&quot; : &quot;AWS::EC2::InternetGateway&quot;,\n      &quot;Properties&quot; : {\n        &quot;Tags&quot; : [ {&quot;Key&quot; : &quot;Name&quot;, &quot;Value&quot; : {&quot;Fn::Join&quot;:[&quot;-&quot;, [{&quot;Ref&quot;:&quot;VPCName&quot;}, &quot;igw&quot;]]}}]\n      }\n    },\n    &quot;StagingIgwAttach&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::VPCGatewayAttachment&quot;,\n      &quot;Properties&quot; : {\n        &quot;InternetGatewayId&quot; : {&quot;Ref&quot;: &quot;StagingInternetGateway&quot;},\n        &quot;VpcId&quot; : {&quot;Ref&quot;: &quot;Staging&quot;}\n      }\n    },\n    &quot;StagingRouteTable&quot;: {\n       &quot;Type&quot; : &quot;AWS::EC2::RouteTable&quot;,\n       &quot;Properties&quot; : {\n          &quot;VpcId&quot; : {&quot;Ref&quot;: &quot;Staging&quot;}\n       }\n    },\n    &quot;LocalRoute&quot;: {\n       &quot;Type&quot; : &quot;AWS::EC2::Route&quot;,\n       &quot;Properties&quot; : {\n          &quot;DestinationCidrBlock&quot; : &quot;0.0.0.0\/0&quot;,\n          &quot;GatewayId&quot; : {&quot;Ref&quot;: &quot;StagingInternetGateway&quot;},\n          &quot;RouteTableId&quot; : {&quot;Ref&quot;: &quot;StagingRouteTable&quot;}\n       }\n    },\n    &quot;Web1LocalRoute&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::SubnetRouteTableAssociation&quot;,\n      
&quot;Properties&quot; : {\n        &quot;RouteTableId&quot; : {&quot;Ref&quot;: &quot;StagingRouteTable&quot;},\n        &quot;SubnetId&quot; : {&quot;Ref&quot;: &quot;WebSubnet1&quot;}\n      }\n    },\n    &quot;Db1LocalRoute&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::SubnetRouteTableAssociation&quot;,\n      &quot;Properties&quot; : {\n        &quot;RouteTableId&quot; : {&quot;Ref&quot;: &quot;StagingRouteTable&quot;},\n        &quot;SubnetId&quot; : {&quot;Ref&quot;: &quot;DatabaseSubnet1&quot;}\n      }\n    },\n    &quot;Db2LocalRoute&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::SubnetRouteTableAssociation&quot;,\n      &quot;Properties&quot; : {\n        &quot;RouteTableId&quot; : {&quot;Ref&quot;: &quot;StagingRouteTable&quot;},\n        &quot;SubnetId&quot; : {&quot;Ref&quot;: &quot;DatabaseSubnet2&quot;}\n      }\n    },\n    &quot;DatabaseSG&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::SecurityGroup&quot;,\n      &quot;Properties&quot; : {\n        &quot;GroupDescription&quot; : &quot;Database security groups&quot;,\n        &quot;SecurityGroupIngress&quot; : [\n          {\n            &quot;IpProtocol&quot; : &quot;tcp&quot;,\n            &quot;FromPort&quot;: 3306,\n            &quot;ToPort&quot; : 3306,\n            &quot;SourceSecurityGroupId&quot;: {&quot;Ref&quot; : &quot;WebSG&quot;}\n          }\n        ],\n        &quot;Tags&quot; :  [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;db-sg&quot;}],\n        &quot;VpcId&quot; : {&quot;Ref&quot;: &quot;Staging&quot;}\n      }\n    },\n    &quot;WebSG&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::SecurityGroup&quot;,\n      &quot;Properties&quot; : {\n        &quot;GroupDescription&quot; : &quot;Web security groups&quot;,\n        &quot;SecurityGroupIngress&quot; : [\n          {\n            &quot;IpProtocol&quot; : &quot;tcp&quot;,\n            &quot;ToPort&quot; : 80,\n            &quot;FromPort&quot;: 80,\n            &quot;CidrIp&quot; : &quot;0.0.0.0\/0&quot;\n          },\n          {\n            &quot;IpProtocol&quot; : &quot;tcp&quot;,\n            &quot;ToPort&quot; : 22,\n            &quot;FromPort&quot;: 22,\n            &quot;CidrIp&quot; : &quot;0.0.0.0\/0&quot;\n          }\n        ],\n        &quot;Tags&quot; :  [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;web-sg&quot;}],\n        &quot;VpcId&quot; : {&quot;Ref&quot;: &quot;Staging&quot;}\n      }\n    },\n    &quot;DatabaseRecordSet&quot; : {\n      &quot;Type&quot; : &quot;AWS::Route53::RecordSet&quot;,\n      &quot;Properties&quot; : {\n         &quot;HostedZoneId&quot; : {\n            &quot;Ref&quot;: &quot;StagingZone&quot;\n         },\n         &quot;Comment&quot; : &quot;DNS name for database&quot;,\n         &quot;Name&quot; : {&quot;Fn::Join&quot;:[&quot;.&quot;, [&quot;db&quot;, {&quot;Ref&quot;: &quot;ProjectName&quot;}, {&quot;Ref&quot;:&quot;VPCName&quot;}]]},\n         &quot;Type&quot; : &quot;CNAME&quot;,\n         &quot;TTL&quot; : &quot;300&quot;,\n         &quot;ResourceRecords&quot; : [\n           { &quot;Fn::GetAtt&quot; : [ &quot;Database&quot;, &quot;Endpoint.Address&quot;]}\n         ]\n      }\n    }\n  }\n}<\/code><\/pre><\/figure>\n\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>You can load this template in your account, and after the environment creation you are ready to work with one EC2 instance and one RDS with MySQL 5.6 installed.\nYou can log into the web instance with the key pair chosen during the creation flow (default <code>web-key<\/code>), and I set these default MySQL credentials:<\/p>\n\n<ul>\n  <li>user gianarb<\/li>\n  <li>password test1234<\/li>\n<\/ul>\n\n<p>But you can change them before running this template, because they are <code>Parameters<\/code>.\nThis approach in my opinion is very powerful because you can start versioning your infrastructure, and you can delete and restore it quickly: if you delete the CloudFormation stack it rolls back all its resources, it is very easy!<\/p>
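\n\n<p>If you prefer code over the console, here is a minimal sketch of loading the template with the AWS SDK for PHP; the stack name, template path and parameter value are placeholders, and credentials are assumed to come from your environment:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire __DIR__.&#39;\/vendor\/autoload.php&#39;;\n\nuse Aws\\CloudFormation\\CloudFormationClient;\n\n$client = CloudFormationClient::factory([\n    &#39;region&#39; =&gt; &#39;eu-west-1&#39;,\n]);\n\n\/\/ Create the whole staging environment from the template above.\n$client-&gt;createStack([\n    &#39;StackName&#39;    =&gt; &#39;staging&#39;,\n    &#39;TemplateBody&#39; =&gt; file_get_contents(__DIR__.&#39;\/staging.json&#39;),\n    &#39;Parameters&#39;   =&gt; [\n        [&#39;ParameterKey&#39; =&gt; &#39;DatabaseUserPassword&#39;, &#39;ParameterValue&#39; =&gt; &#39;a-better-password&#39;],\n    ],\n]);\n\n\/\/ ...and tear it down when you no longer want to pay for it.\n\/\/ $client-&gt;deleteStack([&#39;StackName&#39; =&gt; &#39;staging&#39;]);<\/code><\/pre><\/figure>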
\n\n<h2 id=\"trick\">Trick<\/h2>\n\n<p>The Parameters node creates a form in the AWS CloudFormation console to choose a lot of different variable values, for example the name of the instances or the key pair to log into your EC2.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n  &quot;Parameters&quot; : {\n    &quot;VPCName&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;staging&quot;,\n      &quot;Description&quot; : &quot;VPC name&quot;\n    },\n    &quot;ProjectName&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;app&quot;,\n      &quot;Description&quot; : &quot;Project name&quot;\n    },\n    &quot;WebKey&quot; : {\n      &quot;Type&quot; : &quot;String&quot;,\n      &quot;Default&quot; : &quot;web-key&quot;,\n      &quot;Description&quot; : &quot;Ssh key to log into the web instances&quot;\n    }\n  }\n}<\/code><\/pre><\/figure>\n\n<hr class=\"style-two\" \/>\n\n<p>The Resources node contains all the elements of your infrastructure: EC2, RDS, VPC... You can use the parameters with a simple <code>Ref<\/code> key,\ne.g. <code>[{\"Key\": \"Name\", \"Value\": {\"Ref\": \"ProjectName\"}}]<\/code> tags the resource with the project name chosen in the parameter form.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n  &quot;Resources&quot; : {\n    &quot;Staging&quot;: {\n       &quot;Type&quot; : &quot;AWS::EC2::VPC&quot;,\n       &quot;Properties&quot; : {\n          &quot;CidrBlock&quot; : &quot;10.15.0.0\/16&quot;,\n          &quot;EnableDnsSupport&quot; : true,\n          &quot;EnableDnsHostnames&quot; : true,\n          &quot;InstanceTenancy&quot; : &quot;default&quot;,\n          &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: {&quot;Ref&quot;: &quot;VPCName&quot;}}]\n       }\n    },\n    &quot;DatabaseSubnet1&quot;: {\n      &quot;Type&quot; : &quot;AWS::EC2::Subnet&quot;,\n      &quot;Properties&quot; : {\n        &quot;AvailabilityZone&quot; : &quot;eu-west-1a&quot;,\n        &quot;CidrBlock&quot; : &quot;10.15.1.0\/28&quot;,\n        &quot;MapPublicIpOnLaunch&quot; : true,\n        &quot;VpcId&quot;: {\n          &quot;Ref&quot; : &quot;Staging&quot;\n        },\n        &quot;Tags&quot;: [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;db-1a&quot;}]\n      }\n    }\n  }\n}<\/code><\/pre><\/figure>\n\n<hr class=\"style-two\" \/>\n\n<p>In your template you can describe the VPC and create its subnets.
You can also describe a specific resource and use it to build another one.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">&quot;WebSubnet1&quot;: {\n  &quot;Type&quot; : &quot;AWS::EC2::Subnet&quot;,\n  &quot;Properties&quot; : {\n    &quot;AvailabilityZone&quot; : &quot;eu-west-1a&quot;,\n    &quot;CidrBlock&quot; : &quot;10.15.0.8\/28&quot;,\n    &quot;MapPublicIpOnLaunch&quot; : true,\n    &quot;VpcId&quot;: {\n      &quot;Ref&quot; : &quot;Staging&quot;\n    },\n    &quot;Tags&quot; : [{&quot;Key&quot;: &quot;Name&quot;, &quot;Value&quot;: &quot;web-1a&quot;}]\n  }\n},<\/code><\/pre><\/figure>\n\n<p>In this example I referenced the <code>Staging<\/code> VPC to build its subnet.<\/p>\n\n<hr class=\"style-two\" \/>\n\n<p>This part is interesting because it creates a RecordSet to map a CNAME DNS record in your VPC: now your web instances can resolve the MySQL host with <code>db.app.staging<\/code>.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">&quot;DatabaseRecordSet&quot; : {\n  &quot;Type&quot; : &quot;AWS::Route53::RecordSet&quot;,\n  &quot;Properties&quot; : {\n     &quot;HostedZoneId&quot; : {\n        &quot;Ref&quot;: &quot;StagingZone&quot;\n     },\n     &quot;Comment&quot; : &quot;DNS name for database&quot;,\n     &quot;Name&quot; : {&quot;Fn::Join&quot;:[&quot;.&quot;, [&quot;db&quot;, {&quot;Ref&quot;: &quot;ProjectName&quot;}, {&quot;Ref&quot;:&quot;VPCName&quot;}]]},\n     &quot;Type&quot; : &quot;CNAME&quot;,\n     &quot;TTL&quot; : &quot;300&quot;,\n     &quot;ResourceRecords&quot; : [\n       { &quot;Fn::GetAtt&quot; : [ &quot;Database&quot;, &quot;Endpoint.Address&quot;]}\n     ]\n  }\n}<\/code><\/pre><\/figure>\n\n<p><br \/>\n<br \/>\n<br \/><\/p>\n\n<div class=\"well\"><a target=\"_blank\" href=\"https:\/\/twitter.com\/EmanueleMinotto\">@EmanueleMinotto<\/a> thanks for trying to fix my bad English<\/div>\n"},{"title":"Build your Zend Framework Console Application","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/zendframework-console-app"}},"description":"ZF Console is a component written by zf-campus and the Apigility organization that helps you build console applications using different Zend Framework components","image":"https:\/\/gianarb.it\/img\/zf.jpg","updated":"2015-05-21T23:08:27+00:00","published":"2015-05-21T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/zendframework-console-app","content":"<blockquote class=\"twitter-tweet tw-align-center\" lang=\"en\"><p lang=\"en\" dir=\"ltr\">Blogpost about console-skeleton-app for your console application <a href=\"https:\/\/t.co\/WuVq0GZlxE\">https:\/\/t.co\/WuVq0GZlxE<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/PHP?src=hash\">#PHP<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/ZF?src=hash\">#ZF<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/console?src=hash\">#console<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/develop?src=hash\">#develop<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/613292048708468736\">June 23, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<div class=\"alert alert-success\" role=\"alert\"><strong>Github: <\/strong>Article written about <a target=\"_blank\" href=\"https:\/\/github.com\/gianarb\/console-skeleton-app\">console-skeleton-app<\/a> 1.0.0<\/div>\n\n<p>I\u2019m writing a skeleton app to build console\/bash applications in PHP.
This project is very easy and it depends on ZF\\Console, a zfcampus project, and Zend\\Console, built by the ZF community.\nI have a todo list for the future, but for the time being this is just a blog post about these two modules.<\/p>\n\n<ul>\n  <li>Integration with a container system to manage dependency injection<\/li>\n  <li>Docs on how to test your commands<\/li>\n  <li>Use cases and different implementations<\/li>\n<\/ul>\n\n<h2 id=\"zfconsole-and-other-components\">ZF\\Console and other components<\/h2>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/zfcampus\/zf-console\">ZF\\Console<\/a> is maintained by zfcampus and it is used by Apigility<\/li>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zend-console\">zendframework\\zend-console<\/a> is maintained by zendframework; all the info is in the <a href=\"https:\/\/framework.zend.com\/manual\/current\/en\/modules\/zend.console.introduction.html\">documentation<\/a><\/li>\n<\/ul>\n\n<h2 id=\"tree\">Tree<\/h2>\n\n<p>This is my folder structure proposal. There are three entrypoints in the <code>bin<\/code> directory: one for bash, one for PHP and a .bat for Windows.\nI use composer to manage my dependencies and I included the .lock file because this project is an APPLICATION, not a library.\nThe <code>\/config<\/code> directory contains only routing definitions, but in the future we can add services and other configurations.\n<code>src\/Command\/<\/code> contains my commands.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">\u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 console.php\n\u251c\u2500\u2500 composer.json\n\u251c\u2500\u2500 composer.lock\n\u251c\u2500\u2500 config\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 routes.php\n\u251c\u2500\u2500 src\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 Command\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 Conf.php\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 Database.php\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 Download.php\n\u2514\u2500\u2500 vendor\n    \u2514\u2500\u2500 ...<\/code><\/pre><\/figure>\n\n<h2 id=\"bootstrap\">Bootstrap<\/h2>\n\n<p>The application\u2019s entrypoints are just examples and they require a few changes.\nFirst we have to change the version in the parameters.php configuration file and also change the application name <code>&#39;app&#39;<\/code> to whatever fits.\nTo load configurations from different sources I will use the well-known <code>Zend\\Config<\/code> component.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire __DIR__.&#39;\/..\/vendor\/autoload.php&#39;;\n\nuse Zend\\Console\\Console;\nuse ZF\\Console\\Application;\nuse ZF\\Console\\Dispatcher;\n\n$version = &#39;0.0.1&#39;;\n\n$application = new Application(\n    &#39;app&#39;,\n    $version,\n    include __DIR__ . &#39;\/..\/config\/routes.php&#39;,\n    Console::getInstance(),\n    new Dispatcher()\n);\n\n$exit = $application-&gt;run();\nexit($exit);<\/code><\/pre><\/figure>
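\n\n<p>The snippet above pulls the routes in with a plain <code>include<\/code>; here is a minimal sketch of the same bootstrap going through <code>Zend\\Config<\/code> instead, assuming its <code>Factory<\/code> API, so the routes could also live in an .ini or .yml file:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire __DIR__.&#39;\/..\/vendor\/autoload.php&#39;;\n\nuse Zend\\Config\\Factory;\nuse Zend\\Console\\Console;\nuse ZF\\Console\\Application;\nuse ZF\\Console\\Dispatcher;\n\n\/\/ Factory::fromFile picks the right reader from the file extension.\n$routes = Factory::fromFile(__DIR__ . &#39;\/..\/config\/routes.php&#39;);\n\n$application = new Application(&#39;app&#39;, &#39;0.0.1&#39;, $routes, Console::getInstance(), new Dispatcher());\nexit($application-&gt;run());<\/code><\/pre><\/figure>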
\n\n<h2 id=\"routes\">Routes<\/h2>\n<p><code>config\/routes.php<\/code> contains the router configuration. This is just an example, but you can see all the options <a href=\"https:\/\/github.com\/zfcampus\/zf-console#defining-console-routes\">here<\/a>.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nreturn [\n    [\n        &#39;name&#39;  =&gt; &#39;hello&#39;,\n        &#39;route&#39; =&gt; &quot;--name=&quot;,\n        &#39;short_description&#39; =&gt; &quot;Good morning!! This is a beautiful day&quot;,\n        &quot;handler&quot; =&gt; [&#39;App\\Command\\Hello&#39;, &#39;run&#39;],\n    ],\n];<\/code><\/pre><\/figure>\n\n<h2 id=\"command\">Command<\/h2>\n\n<p>A basic command to wish you a good day!\nI decided that a command doesn\u2019t extend any class, because in my opinion this is a good way to favor readability and simplicity.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nnamespace App\\Command;\n\nuse ZF\\Console\\Route;\nuse Zend\\Console\\Adapter\\AdapterInterface;\n\nclass Hello\n{\n    public static function run(Route $route, AdapterInterface $console)\n    {\n        $name = $route-&gt;getMatchedParam(&quot;name&quot;, &quot;@gianarb&quot;);\n        $console-&gt;writeLine(&quot;Hi {$name}, you called me. Now this is an awesome day!&quot;);\n    }\n}<\/code><\/pre><\/figure>\n\n<h2 id=\"troubleshooting-and-tricks\">Troubleshooting and tricks<\/h2>\n<ul>\n  <li>OS X returns an error because zf-console uses a function disabled in the macOS PHP installation. Have a look at PR <a href=\"https:\/\/github.com\/zfcampus\/zf-console\/pull\/22\">#22<\/a><\/li>\n  <li>See <a href=\"https:\/\/www.sitepoint.com\/packaging-your-apps-with-phar\/\">this<\/a> article to package your application in a phar archive.<\/li>\n<\/ul>\n\n<p><br \/>\n<br \/>\n<br \/><\/p>\n\n<div class=\"well\"><a target=\"_blank\" href=\"https:\/\/twitter.com\/__debo\">@__debo<\/a> thanks for trying to fix my bad English<\/div>\n"},{"title":"Test your Symfony Controller and your service with PhpUnit","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/symfony-unit-test-controller-with-phpunit"}},"description":"Test your Symfony controller with PhpUnit. You expect that if one parameter is true your action gets a service via Dependency Injection and uses it!","image":"https:\/\/gianarb.it\/img\/symfony.png","updated":"2015-05-21T23:08:27+00:00","published":"2015-05-21T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/symfony-unit-test-controller-with-phpunit","content":"<blockquote align=\"center\" class=\"twitter-tweet\" lang=\"en\"><p lang=\"en\" dir=\"ltr\">Unit <a href=\"https:\/\/twitter.com\/hashtag\/test?src=hash\">#test<\/a> for your <a href=\"https:\/\/twitter.com\/hashtag\/Controller?src=hash\">#Controller<\/a> with <a href=\"https:\/\/twitter.com\/hashtag\/PhpUnit?src=hash\">#PhpUnit<\/a> and <a href=\"https:\/\/twitter.com\/hashtag\/Symfony?src=hash\">#Symfony<\/a>..
 With a little use case of <a href=\"https:\/\/twitter.com\/hashtag\/DepedenceInjaction?src=hash\">#DepedenceInjaction<\/a> test <a href=\"https:\/\/t.co\/JNb39EyRly\">https:\/\/t.co\/JNb39EyRly<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/php?src=hash\">#php<\/a><\/p>&mdash; Gianluca Arbezzano (@GianArb) <a href=\"https:\/\/twitter.com\/GianArb\/status\/601526550438215680\">May 21, 2015<\/a><\/blockquote>\n<script async=\"\" src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n\n<p>In this article I would like to share with you a little experience with:<\/p>\n\n<ul>\n  <li>Symfony MVC<\/li>\n  <li>PhpUnit<\/li>\n  <li>Symfony Dependency Injection<\/li>\n<\/ul>\n\n<p>This is an example of a very simple controller.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nnamespace AppBundle\\Controller;\n\nuse FOS\\RestBundle\\Controller\\Annotations as Rest;\nuse FOS\\RestBundle\\Controller\\FOSRestController;\nuse Symfony\\Component\\HttpFoundation\\Request;\n\nclass SomeStuffController extends FOSRestController\n{\n    \/**\n     * @Rest\\Post(&quot;\/go&quot;)\n     * @return array\n     *\/\n    public function goAction(Request $request)\n    {\n        $body = [];\n        if ($this-&gt;container-&gt;getParameter(&quot;do_stuff&quot;)) {\n            $body = $this-&gt;container-&gt;get(&quot;stuff.service&quot;)-&gt;splash($request-&gt;getContent());\n        }\n        return $body;\n    }\n}<\/code><\/pre><\/figure>\n\n<p><code>$this-&gt;container-&gt;getParameter(\"do_stuff\")<\/code> is a boolean parameter that enables or disables a feature. How can I test this snippet?\nI could write a functional test, but in my opinion it is easier to write a series of unit tests with PhpUnit to validate my expectations.<\/p>\n\n<h2 id=\"expectations\">Expectations<\/h2>\n<ul>\n  <li>If the <code>do_stuff<\/code> parameter is false, the container\u2019s get function will be called zero times<\/li>\n  <li>If the <code>do_stuff<\/code> parameter is true, the container\u2019s get function will be called exactly once<\/li>\n<\/ul>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n\nnamespace AppBundle\\Tests\\Controller;\n\nuse Liip\\FunctionalTestBundle\\Test\\WebTestCase;\nuse AppBundle\\Controller\\SomeStuffController;\n\nclass SomeStuffControllerTest extends WebTestCase\n{\n    public function testDoStuffIsTrue()\n    {\n        $request = $this-&gt;getMock(&quot;Symfony\\Component\\HttpFoundation\\Request&quot;);\n        $container = $this-&gt;getMock(&quot;Symfony\\Component\\DependencyInjection\\ContainerInterface&quot;);\n        $service = $this-&gt;getMockBuilder(&quot;Some\\Stuff&quot;)-&gt;disableOriginalConstructor()-&gt;getMock();\n        $container-&gt;expects($this-&gt;once())\n            -&gt;method(&quot;getParameter&quot;)\n            -&gt;with($this-&gt;equalTo(&#39;do_stuff&#39;))\n            -&gt;will($this-&gt;returnValue(true));\n\n        $container-&gt;expects($this-&gt;once())\n            -&gt;method(&quot;get&quot;)\n            -&gt;with($this-&gt;equalTo(&#39;stuff.service&#39;))\n            -&gt;will($this-&gt;returnValue($service));\n\n        $controller = new SomeStuffController();\n        $controller-&gt;setContainer($container);\n\n        $controller-&gt;goAction($request);\n    }\n}<\/code><\/pre><\/figure>\n\n<p>This is my first expectation: \u201cif <code>do_stuff<\/code> is true, I call <code>stuff.service<\/code>\u201d.\nIn this controller I use a few objects: Http\\Request,
Container and <code>stuff.service<\/code>, which in this example is a <code>Some\\Stuff<\/code> class.\nAs a first step I created one mock for each object.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$request = $this-&gt;getMock(&quot;Symfony\\Component\\HttpFoundation\\Request&quot;);\n$container = $this-&gt;getMock(&quot;Symfony\\Component\\DependencyInjection\\ContainerInterface&quot;);\n$service = $this-&gt;getMockBuilder(&quot;Some\\Stuff&quot;)-&gt;disableOriginalConstructor()-&gt;getMock();<\/code><\/pre><\/figure>\n\n<p>As a second step I wrote my first expectation: \u201ccall the <code>getParameter<\/code> function of <code>$container<\/code> exactly once with the argument <code>do_stuff<\/code>, and make it return true\u201d.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$container-&gt;expects($this-&gt;once())\n    -&gt;method(&quot;getParameter&quot;)\n    -&gt;with($this-&gt;equalTo(&#39;do_stuff&#39;))\n    -&gt;will($this-&gt;returnValue(true));<\/code><\/pre><\/figure>\n\n<p>Thanks to these definitions I know that there will be another effect: my action will call <code>$container-&gt;get(\"stuff.service\")<\/code> exactly once, and it will return a Some\\Stuff object.<\/p>\n\n<p>The second test that we can write is \u201cif <code>do_stuff<\/code> is false, <code>$container-&gt;get(\"stuff.service\")<\/code> will not be called\u201d.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\npublic function testDoStuffIsFalse()\n{\n    $request = $this-&gt;getMock(&quot;Symfony\\Component\\HttpFoundation\\Request&quot;);\n    $container = $this-&gt;getMock(&quot;Symfony\\Component\\DependencyInjection\\ContainerInterface&quot;);\n    $service = $this-&gt;getMockBuilder(&quot;Some\\Stuff&quot;)-&gt;disableOriginalConstructor()-&gt;getMock();\n    $container-&gt;expects($this-&gt;once())\n        -&gt;method(&quot;getParameter&quot;)\n        -&gt;with($this-&gt;equalTo(&#39;do_stuff&#39;))\n        -&gt;will($this-&gt;returnValue(false));\n\n    $container-&gt;expects($this-&gt;never())\n        -&gt;method(&quot;get&quot;)\n        -&gt;with($this-&gt;equalTo(&#39;stuff.service&#39;))\n        -&gt;will($this-&gt;returnValue($service));\n\n    $controller = new SomeStuffController();\n    $controller-&gt;setContainer($container);\n    $controller-&gt;goAction($request);\n}<\/code><\/pre><\/figure>
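\n\n<p>Both tests only verify the interactions with the container. Since <code>goAction<\/code> also returns a value, it is worth asserting that too; a minimal sketch to append at the end of each test:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n\/\/ The action always returns an empty array, whatever do_stuff is\n$this-&gt;assertSame([], $controller-&gt;goAction($request));<\/code><\/pre><\/figure>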
\n\n"},{"title":"The price of modularity","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/the-price-of-modularity"}},"description":"Modularity is not only a beautiful word: it comes with rules, and it is a methodology that helps you keep your project easy to understand. Modularity is a key principle for easily onboarding new developers into your application, because it will look separated and simpler to approach. As a developer you should think about how your scaffolding and your code look from the outside, because in reality you read much more code than you write.","image":"https:\/\/gianarb.it\/img\/gianarb.png","updated":"2015-02-21T23:08:27+00:00","published":"2015-02-21T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/the-price-of-modularity","content":"<p>Today all frameworks are <strong>modular<\/strong>, but it isn\u2019t just a beautiful word; behind it there are a lot of concepts and ideas:<\/p>\n\n<ul>\n  <li>Modularity helps you reuse parts of your code in different projects<\/li>\n  <li>Every component is independent, so you can work on a single part of the code<\/li>\n  <li>Every <strong>component<\/strong> solves a specific problem\u2026 it\u2019s a beautiful concept that helps you with maintenance!<\/li>\n  <li>other stuff..<\/li>\n<\/ul>\n\n<p>As you can imagine there is a drawback: all this requires a big effort.\nIdeally every component requires its own release cycle, repository, commits, pull requests, Travis configuration, documentation, etc.<\/p>\n\n<p>Anyway, several shortcuts are available. For instance, <em>git subtree<\/em> could help you in this war, but the key is this: you need an agreement to win.<\/p>\n\n<p>The Zend Framework community chose another path; <code>Zend\\Mvc<\/code> at this moment requires:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-json\" data-lang=\"json\">{\n    &quot;name&quot;: &quot;zendframework\/zend-mvc&quot;,\n    &quot;...&quot;: &quot;...&quot;,\n    &quot;target-dir&quot;: &quot;Zend\/Mvc&quot;,\n    &quot;require&quot;: {\n        &quot;php&quot;: &quot;&gt;=5.3.23&quot;,\n        &quot;zendframework\/zend-eventmanager&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-servicemanager&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-form&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-stdlib&quot;: &quot;self.version&quot;\n    },\n    &quot;require-dev&quot;: {\n        &quot;zendframework\/zend-authentication&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-console&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-di&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-filter&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-http&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-i18n&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-inputfilter&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-json&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-log&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-modulemanager&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-session&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-serializer&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-text&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-uri&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-validator&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-version&quot;: &quot;self.version&quot;,\n        &quot;zendframework\/zend-view&quot;: &quot;self.version&quot;\n    },\n    &quot;suggest&quot;: {\n        &quot;zendframework\/zend-authentication&quot;: &quot;Zend\\\\Authentication component for Identity plugin&quot;,\n        &quot;zendframework\/zend-config&quot;: &quot;Zend\\\\Config 
component&quot;,\n        &quot;zendframework\/zend-console&quot;: &quot;Zend\\\\Console component&quot;,\n        &quot;zendframework\/zend-di&quot;: &quot;Zend\\\\Di component&quot;,\n        &quot;zendframework\/zend-filter&quot;: &quot;Zend\\\\Filter component&quot;,\n        &quot;...&quot;: &quot;...&quot;\n    },\n    &quot;...&quot;: &quot;...&quot;\n}<\/code><\/pre><\/figure>\n\n<p>A few <code>require-dev<\/code> dependencies are used inside the component to run some features. Why? This forces me to think <em>\u201care the dependencies of this feature included or not?\u201d<\/em>\nComposer was born to solve this! In my opinion the cost of that question is higher than the cost of downloading a few unused classes.\nAre there a lot of unused classes? Maybe too many?<\/p>\n\n<p>Even if the right answer doesn\u2019t exist, I think some indicators may help you understand when it is the moment to split a component:<\/p>\n\n<ul>\n  <li>List of dependencies<\/li>\n  <li>Complexity of the component<\/li>\n  <li>Features<\/li>\n  <li>..<\/li>\n<\/ul>\n\n<p>No shortcuts.<\/p>\n\n"},{"title":"Zend Framework release 2.3.4","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/zf2-release-234"}},"description":"Zend Framework release 2.3.4","image":"https:\/\/gianarb.it\/img\/zf.jpg","updated":"2015-01-14T00:00:00+00:00","published":"2015-01-14T00:00:00+00:00","id":"https:\/\/gianarb.it\/blog\/zf2-release-234","content":"<p>Zend Framework 2.3.4 is ready! After 4 months the new patch version of ZF2 has been\npublished.<\/p>\n\n<p>As with all patch releases there are no important new features, but the list of <a href=\"https:\/\/github.com\/zendframework\/zf2\/pulls?q=is%3Aclosed+is%3Apr+milestone%3A2.3.4+\">pull\nrequests<\/a>\nis very long.<\/p>\n\n<ul>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zf2\/pull\/7112\">#7112<\/a> You can find the official ZF logo in the \/resources directory<\/li>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zf2\/pull\/7087\">#7087<\/a> Happy new year from ZF!<\/li>\n  <li><a href=\"https:\/\/github.com\/zendframework\/zf2\/issues\/6673\">#6673<\/a> <code>Zend\\Http\\Header<\/code> now supports the DateTime format for the cookie expiration<\/li>\n<\/ul>\n\n<p>Zend Framework follows <a href=\"https:\/\/semver.org\/\">semver<\/a> directives; <code>2.3.4<\/code> is a patch\nrelease, and in this version there is a long list of <a href=\"https:\/\/github.com\/zendframework\/zf2\/pulls?q=is%3Aclosed+is%3Apr+milestone%3A2.3.4+label%3Abug\">bug\nfixes<\/a><\/p>\n\n<p>Enjoy downloading <a href=\"https:\/\/github.com\/zendframework\/zf2\/releases\/tag\/release-2.3.4\">Zend Framework\n2.3.4<\/a>; this is\nthe\n<a href=\"https:\/\/github.com\/zendframework\/zf2\/blob\/18534b6f2c14f52898bb208932fedacd5324be63\/CHANGELOG.md\">changelog<\/a><\/p>\n"},{"title":"Influx DB and PHP implementation","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/InfluxDB-and-PHP"}},"description":"InfluxDB is a popular and open source time series database capable of storing millions of points while keeping lookups fast. It supports SQL as a query language and exposes an HTTP API to interact with it. 
At Corley we wrote a PHP SDK and released it as open source to integrate InfluxDB into your PHP application.","image":"https:\/\/gianarb.it\/img\/influxdb.png","updated":"2015-01-06T00:00:00+00:00","published":"2015-01-06T00:00:00+00:00","id":"https:\/\/gianarb.it\/blog\/InfluxDB-and-PHP","content":"<p>InfluxDB is a <a href=\"http:\/\/en.wikipedia.org\/wiki\/Time_series_database\">time series\ndatabase<\/a> written in Go.<\/p>\n\n<p>It supports SQL-like queries and it has different entry points: a REST API (TCP)\nand UDP.<\/p>\n\n<div class=\"row\">\n<div class=\"col-md-4 col-md-offset-3\"><img class=\"img-fluid\" src=\"\/img\/influxdb.png\" \/><\/div>\n<\/div>\n\n<p>We wrote an <a href=\"https:\/\/github.com\/corley\/influxdb-php-sdk\">SDK<\/a> to manage the\nintegration between InfluxDB and PHP.<\/p>\n\n<p>It ships with a Guzzle adapter, but if you use Zend\\Client you can write your\nown implementation.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$guzzle = new \\GuzzleHttp\\Client();\n\n$options = new Options();\n$adapter = new GuzzleAdapter($guzzle, $options);\n\n$client = new Client();\n$client-&gt;setAdapter($adapter);<\/code><\/pre><\/figure>\n\n<p>In this case we are using a Guzzle client and we communicate with InfluxDB over TCP, but we can also speak to it over UDP.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$options = new Options();\n$adapter = new UdpAdapter($options);\n\n$client = new Client();\n$client-&gt;setAdapter($adapter);<\/code><\/pre><\/figure>\n\n<p>Both of them have the same usage:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n$client-&gt;mark(&quot;app.search&quot;, $points, &quot;s&quot;);<\/code><\/pre><\/figure>
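\n\n<p>For completeness, this is a sketch of what <code>$points<\/code> might look like, assuming the SDK accepts an associative array of column values and that the third argument is the time precision (the field names here are hypothetical):<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n\/\/ Hypothetical payload: one point for the &quot;app.search&quot; series\n$points = [\n    &quot;value&quot;   =&gt; 42.3, \/\/ e.g. the search response time\n    &quot;user_id&quot; =&gt; 12,\n];\n\n\/\/ Assumed: &quot;s&quot; sets the time precision to seconds\n$client-&gt;mark(&quot;app.search&quot;, $points, &quot;s&quot;);<\/code><\/pre><\/figure>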
\n\n<p>The first difference between UDP and TCP is well known: TCP expects a\nresponse after a request, while UDP does not expect anything, so there is no\ndelivery guarantee. If you can accept this trade-off, this is the benchmark:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">Corley\\Benchmarks\\Influx DB\\AdapterEvent\n    Method Name                Iterations    Average Time      Ops\/second\n    ------------------------  ------------  --------------    -------------\n    sendDataUsingHttpAdapter: [1,000     ] [0.0026700308323] [374.52751]\n    sendDataUsingUdpAdapter : [1,000     ] [0.0000436344147] [22,917.69026]<\/code><\/pre><\/figure>\n\n"},{"title":"Zf2 Event, base use","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/Zf2-Event-base-use"}},"description":"Integrating an event system in your application is a good way to decouple and extend it while keeping it clean and clear. An event manager allows you to trigger and catch events based on what your application does. For example, you can send different kinds of notifications (email, Slack messages and so on) from the same event, like 'user registration'. Zend Framework, a popular and open source PHP framework, has a component called EventManager that helps you integrate such a flow.","image":"https:\/\/gianarb.it\/img\/zf.jpg","updated":"2013-11-21T12:38:27+00:00","published":"2013-11-21T12:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/Zf2-Event-base-use","content":"<p>Hi! Some months ago I wrote a gist to help me remember the basic use of\nevents and the EventManager in Zend Framework; in this article I report that\nsmall tutorial.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nrequire_once __DIR__.&quot;\/vendor\/autoload.php&quot;;\n\nclass Foo\n{\n    \/* @var \\Zend\\EventManager\\EventManagerInterface *\/\n    protected $eventManager;\n\n    public function getEventManager()\n    {\n        if(!$this-&gt;eventManager instanceof \\Zend\\EventManager\\EventManagerInterface){\n            $this-&gt;eventManager = new \\Zend\\EventManager\\EventManager();\n        }\n        return $this-&gt;eventManager;\n    }\n\n    public function echoHello()\n    {\n        $this-&gt;getEventManager()-&gt;trigger(__FUNCTION__.&quot;_pre&quot;, $this);\n        echo &quot;Hello&quot;;\n        $this-&gt;getEventManager()-&gt;trigger(__FUNCTION__.&quot;_post&quot;, $this);\n    }\n}\n\n$foo = new Foo();\n$foo-&gt;getEventManager()-&gt;attach(&#39;echoHello_pre&#39;, function($e){\n    echo &quot;Wow! &quot;;\n});\n$foo-&gt;getEventManager()-&gt;attach(&#39;echoHello_post&#39;, function($e){\n    echo &quot;. This example is very good! \\n&quot;;\n});\n$foo-&gt;getEventManager()-&gt;attach(&#39;echoHello_post&#39;, function($e){\n    echo &quot;\\nby gianarb92@gmail.com \\n&quot;;\n}, -10);\n$foo-&gt;echoHello();<\/code><\/pre><\/figure>\n\n<p>The result:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">gianarb@GianArb-2 eventTest :) $ php try.php\nWow! Hello. This example is very good!\n\nby gianarb92@gmail.com<\/code><\/pre><\/figure>
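\n\n<p>Listeners become really useful when you pass data along with the event. This is a minimal sketch, reusing the same <code>Foo<\/code> class: the third argument of <code>trigger()<\/code> is an array of parameters that every listener can read back from the event object.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\n\/\/ Listeners read parameters from the Event object\n$foo-&gt;getEventManager()-&gt;attach(&#39;echoHello_post&#39;, function($e){\n    $author = $e-&gt;getParam(&#39;author&#39;, &#39;unknown&#39;);\n    echo &quot;Signed by {$author}&quot;;\n});\n\n\/\/ The third argument of trigger() carries the parameters\n$foo-&gt;getEventManager()-&gt;trigger(&#39;echoHello_post&#39;, $foo, array(&#39;author&#39; =&gt; &#39;gianarb&#39;));<\/code><\/pre><\/figure>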
\n\n<p><a href=\"https:\/\/framework.zend.com\/manual\/2.0\/en\/modules\/zend.event-manager.event-manager.html\">@see Zend Event Manager Ref<\/a><\/p>\n"},{"title":"Git global gitignore","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/Git-globa-gitignore"}},"description":"Git is the most popular code version control. It helps you manage and share your code, writing a history of its evolution over time. It also allows teams to work together, managing conflicts and large codebases. Depending on the language, project or operating system, there are files that you should never commit, such as .DS_Store on Mac. You can set up a user-level gitignore file to keep them out.","image":"https:\/\/gianarb.it\/img\/git.png","updated":"2013-11-21T12:38:27+00:00","published":"2013-11-21T12:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/Git-globa-gitignore","content":"<p><code>.gitignore<\/code> helps me manage my commits by defining which files or\ndirectories don\u2019t end up in my repository. I know two good practices if you work, for\nexample, on an open source project:<\/p>\n\n<ul>\n  <li>You don\u2019t commit your IDE configurations<\/li>\n  <li>Don\u2019t use the repository\u2019s .gitignore file to exclude IDE configuration, because this is a\npersonal problem. There are different IDEs, and if all devs exclude these files at\na repository level the list gets very long.<\/li>\n<\/ul>\n\n<p>I follow these practices for all my projects. If you are a Mac user you have\n.DS_Store files, and there is a method to exclude them by default.<\/p>\n\n<p><code>~\/.gitconfig<\/code> is your configuration file; every user has one. If you execute this command<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">$ git config --global core.excludesfile ~\/.gitignore_global<\/code><\/pre><\/figure>\n\n<p>these lines are written into that file:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\">[core]\nexcludesfile = \/Users\/gianarb\/.gitignore_global<\/code><\/pre><\/figure>\n\n<p><code>\/Users\/gianarb\/.gitignore_global<\/code> is my global gitignore file!<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-bash\" data-lang=\"bash\"># IDE #\n#######\n.idea\n\n# COMPOSER #\n############\ncomposer.phar\n\n# OS generated files #\n######################\n.DS_Store\n.DS_Store?\n._*\n.Spotlight-V100\n.Trashes\nehthumbs.db\nThumbs.db<\/code><\/pre><\/figure>\n\n"},{"title":"Vagrant Up, slide and first talk","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/vagrant-up-talk-milano"}},"description":"This is my first public talk, delivered at the PHP User Group Milan. It is about how to set up a local environment using Vagrant as an automation tool. A well-configured local environment is a must-have to develop your application quickly. With Vagrant you write infrastructure as code to provision your environment. You can push that code to a version control system such as Git to share it with your colleagues.","image":"https:\/\/gianarb.it\/img\/vagrant-logo.png","updated":"2013-09-14T12:08:27+00:00","published":"2013-09-14T12:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/vagrant-up-talk-milano","content":"<h3>Vagrant Up<\/h3>\n<p>On Friday 12 Sept 2013 I gave a talk about Vagrant, a tool to manage VMs; these are my slides.\nI thank <a href=\"https:\/\/milano.grusp.org\/\" target=\"_blank\">PugMi<\/a> for this opportunity. If you have questions I'm here! :grin: <\/p>\n<iframe style=\"display:block; margin: 0 auto;\" src=\"https:\/\/www.slideshare.net\/slideshow\/embed_code\/26159972\" width=\"597\" height=\"486\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" style=\"border:1px solid #CCC;border-width:1px 1px 0;margin-bottom:5px\" allowfullscreen webkitallowfullscreen mozallowfullscreen> <\/iframe> <div style=\"margin-bottom:5px;text-align:center;\"> <strong> <a href=\"https:\/\/www.slideshare.net\/GianlucaArbezzano\/presentazione-def-26159972\" title=\"Vagrant - PugMI\" target=\"_blank\">Vagrant - PugMI<\/a> <\/strong> from <strong><a href=\"https:\/\/www.slideshare.net\/GianlucaArbezzano\" target=\"_blank\">Gianluca Arbezzano<\/a><\/strong> <\/div>\n<br \/><br \/>\n<blockquote class=\"twitter-tweet\" align=\"center\"><p>quasi 30 persone a sentire <a href=\"https:\/\/twitter.com\/GianArb\">\n    @GianArb<\/a> parlare di <a href=\"https:\/\/twitter.com\/search?q=%23vagrant&amp;src=hash\">#vagrant<\/a> <a href=\"https:\/\/twitter.com\/search?q=%23php&amp;src=hash\">#php<\/a> <a href=\"https:\/\/twitter.com\/search?q=%23pugMi&amp;src=hash\">#pugMi<\/a> <a href=\"https:\/\/twitter.com\/search?q=%23milano&amp;src=hash\">#milano<\/a> <a href=\"https:\/\/t.co\/75MOJiZmDZ\">pic.twitter.com\/75MOJiZmDZ<\/a><\/p>&mdash; Milano PHP (@MilanoPHP) <a href=\"https:\/\/twitter.com\/MilanoPHP\/statuses\/378213865072656385\">September 12, 2013<\/a><\/blockquote>\n<script async src=\"\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script>\n"},{"title":"Zend Framework 2 - Console usage a speed help","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/zf2-console-usage-speed-help"}},"description":"CLI tools are an easy way to interact with an application because you can drive users 
or even other developers in a well-known direction. It is a very good way to reduce possible mistakes. Zend Framework 2, a PHP open source framework, has a Console package that helps you address common issues like argument management and command parsing, and to format nice, colored output.","image":"https:\/\/gianarb.it\/img\/zf.jpg","updated":"2013-08-22T08:08:27+00:00","published":"2013-08-22T08:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/zf2-console-usage-speed-help","content":"<p>With Zend Framework it is very easy to write a command line tool to manage\ndifferent things. But what if there are many commands? How do you remember them\nall?<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nnamespace ModuleTest;\nuse Zend\\Console\\Adapter\\AdapterInterface;\nclass Module {\n\tpublic function getConsoleUsage(AdapterInterface $console)\n\t{\n\t\treturn array(\n\t\t\tarray(&#39;test &lt;params1&gt; &lt;params2&gt; [--params=]&#39;, &#39;Description of test command&#39;),\n\t\t\tarray(&#39;run &lt;action&gt;&#39;, &#39;Start an action&#39;)\n\t\t);\n\t}\n}<\/code><\/pre><\/figure>\n\n<p>You can write this function in your Module.php file to get a basic usage helper\nthat shows up when you type a wrong command.<\/p>\n\n<p>English by Rali :smile: Thanks!!!! :smile:<\/p>\n"},{"title":"Generate Jekyll sitemap without plugin","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/Generate-jekyll-sitemap-without-plugin"}},"description":"Every site should have a sitemap to tell search engines like Google about news and updates on your site. With a static site generator such as Jekyll you need to generate the sitemap statically too. This article explains how to write a template that generates a sitemap.","image":"https:\/\/gianarb.it\/img\/jekyll.png","updated":"2013-08-09T09:38:27+00:00","published":"2013-08-09T09:38:27+00:00","id":"https:\/\/gianarb.it\/blog\/Generate-jekyll-sitemap-without-plugin","content":"<p>This blog is a static blog and uses GitHub Pages; GitHub Pages sites are generally\ndeployed using Jekyll.<\/p>\n\n<h3 id=\"how-can-you-generate-a-sitemap-without-jekyll-plugin\">How can you generate a sitemap without a Jekyll plugin?<\/h3>\n<p>This <a href=\"https:\/\/gist.github.com\/GianArb\/6172377\">gist<\/a> answers your question.<\/p>\n\n<p>I use some post values: changefreq, date and priority. If you don\u2019t set any\nspecific values for them, defaults are used: 0.8 for priority and\nmonthly for frequency.<\/p>\n\n<p>In a single post you can add these params to override the defaults:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-yaml\" data-lang=\"yaml\">---\nlayout: post\ntitle:  &quot;Why this blog?&quot;\ndate:   2013-07-22 23:08:27\ncategories: me\ntags: me, developer, presentation, gianarb\nsummary: Gianluca Arbezzano, developer, Italian, why open this blog?\nchangefreq: monthly\n---<\/code><\/pre><\/figure>\n\n<p>If you want to know more about the Sitemap Protocol read\n<a href=\"https:\/\/www.sitemaps.org\/protocol.html\">this<\/a>.<\/p>\n\n<p><a href=\"https:\/\/github.com\/MarcoDeBortoli\">Marco<\/a> thanks for English! :)<\/p>\n"},{"title":"Zend Framework 2 - How do you implement log service?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/how-do-you-implement-log-service"}},"description":"Logging is a requirement for every application, in PHP and in every other language. 
It is the way your application tells you what it's doing. This article is about how to implement a logger in a Zend Framework 2 application in PHP. This solution achieves simplicity and usability.","image":"https:\/\/gianarb.it\/img\/zf.jpg","updated":"2013-07-26T23:08:27+00:00","published":"2013-07-26T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/how-do-you-implement-log-service","content":"<p>A log system is an essential element of any application. It is a way to check\nthe status and usage of the application. For a basic implementation you can refer\nto the PHP-FIG standards organization's\n<a href=\"https:\/\/github.com\/php-fig\/fig-standards\/blob\/master\/accepted\/PSR-3-logger-interface.md\">PSR-3<\/a>\ndocument, which describes the logger interface.<\/p>\n\n<p>Zend Framework 2 implements a <a href=\"https:\/\/github.com\/zendframework\/zf2\/tree\/master\/library\/Zend\/Log\">Logger\nComponent<\/a>;\nthe following is an example of how to use it with the Service Manager.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nreturn array(\n\t&#39;service_manager&#39; =&gt; array(\n\t\t&#39;abstract_factories&#39; =&gt; array(\n\t\t\t&#39;Zend\\Log\\LoggerAbstractServiceFactory&#39;,\n\t\t),\n\t),\n\t&#39;log&#39; =&gt; array(\n\t\t&#39;Log\\App&#39; =&gt; array(\n\t\t\t&#39;writers&#39; =&gt; array(\n\t\t\t\tarray(\n\t\t\t\t\t&#39;name&#39; =&gt; &#39;stream&#39;,\n\t\t\t\t\t&#39;priority&#39; =&gt; 1000,\n\t\t\t\t\t&#39;options&#39; =&gt; array(\n\t\t\t\t\t\t&#39;stream&#39; =&gt; &#39;data\/app.log&#39;,\n\t\t\t\t\t),\n\t\t\t\t),\n\t\t\t),\n\t\t),\n\t),\n);<\/code><\/pre><\/figure>\n\n<p><a href=\"https:\/\/github.com\/zendframework\/zf2\/blob\/master\/library\/Zend\/Log\/LoggerServiceFactory.php\">LoggerAbstractServiceFactory<\/a>\nis an abstract service factory that registers loggers in the Service Manager so they can\nbe used in the whole application. Log\/App is the name of a single logger, and the\nwriter is an adapter used to choose the writing method: in this case\neverything is written to a file, but you could use a DB adapter and write your logs\ninto a database.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nnamespace GianArb\\Controller;\n\nuse Zend\\Mvc\\Controller\\AbstractActionController;\n\nclass GeneralController\n\textends AbstractActionController\n{\n\tpublic function testAction(){\n\t\t$logger = $this-&gt;getServiceLocator()-&gt;get(&#39;Log\\App&#39;);\n\t\t$logger-&gt;log(\\Zend\\Log\\Logger::INFO, &quot;This is a little log!&quot;);\n\t}\n}<\/code><\/pre><\/figure>\n\n<p>With this configuration Log\\App writes a string into the data\/app.log file with the\nINFO priority. By default you can choose from the following priorities.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nprotected $priorities = array(\n\tself::EMERG  =&gt; &#39;EMERG&#39;,\n\tself::ALERT  =&gt; &#39;ALERT&#39;,\n\tself::CRIT   =&gt; &#39;CRIT&#39;,\n\tself::ERR    =&gt; &#39;ERR&#39;,\n\tself::WARN   =&gt; &#39;WARN&#39;,\n\tself::NOTICE =&gt; &#39;NOTICE&#39;,\n\tself::INFO   =&gt; &#39;INFO&#39;,\n\tself::DEBUG  =&gt; &#39;DEBUG&#39;,\n);<\/code><\/pre><\/figure>\n\n<p>Using different priorities is a good practice because it makes it very easy to write\nfilters or log categories.<\/p>
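\n\n<p>For instance, a writer can keep only the most severe messages by adding a priority filter to its options. This is a minimal sketch, assuming the standard <code>priority<\/code> filter shipped with Zend\\Log:<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nreturn array(\n\t&#39;log&#39; =&gt; array(\n\t\t&#39;Log\\App&#39; =&gt; array(\n\t\t\t&#39;writers&#39; =&gt; array(\n\t\t\t\tarray(\n\t\t\t\t\t&#39;name&#39; =&gt; &#39;stream&#39;,\n\t\t\t\t\t&#39;options&#39; =&gt; array(\n\t\t\t\t\t\t&#39;stream&#39; =&gt; &#39;data\/app.log&#39;,\n\t\t\t\t\t\t\/\/ Only WARN and more severe messages reach this file\n\t\t\t\t\t\t&#39;filters&#39; =&gt; array(\n\t\t\t\t\t\t\tarray(\n\t\t\t\t\t\t\t\t&#39;name&#39; =&gt; &#39;priority&#39;,\n\t\t\t\t\t\t\t\t&#39;options&#39; =&gt; array(\n\t\t\t\t\t\t\t\t\t&#39;priority&#39; =&gt; \\Zend\\Log\\Logger::WARN,\n\t\t\t\t\t\t\t\t),\n\t\t\t\t\t\t\t),\n\t\t\t\t\t\t),\n\t\t\t\t\t),\n\t\t\t\t),\n\t\t\t),\n\t\t),\n\t),\n);<\/code><\/pre><\/figure>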
\n\n<p>Another good practice, valid for all services in general, is to create your\nown class extending the service.<\/p>\n\n<figure class=\"highlight\"><pre><code class=\"language-php\" data-lang=\"php\">&lt;?php\nuse Zend\\Log\\Logger;\n\nclass MyLogger extends Logger\n{\n}<\/code><\/pre><\/figure>\n\n<p>This choice helps manage future customizations of services and adds another\nimportant layer of protection against unexpected updates.<\/p>\n\n<p>Rali, thanks for your help with my robotic english! :P<\/p>\n"},{"title":"Why this blog?","link":{"@attributes":{"rel":"alternate","type":"text\/html","href":"https:\/\/gianarb.it\/blog\/why-this-blog"}},"description":"This is my first article on Jekyll, a static site generator that I selected as the engine for my site to replace WordPress.","image":"https:\/\/gianarb.it\/img\/myselfie.jpg-large","updated":"2013-07-22T23:08:27+00:00","published":"2013-07-22T23:08:27+00:00","id":"https:\/\/gianarb.it\/blog\/why-this-blog","content":"<p>Hi! I\u2019m Gianluca aka <a href=\"https:\/\/twitter.com\/gianarb\">GianArb<\/a>; I\u2019m a web developer\nworking with PHP, SQL and NoSQL databases, and at the moment I\u2019m crazy about DevOps,\nVagrant and Chef, so to manage these tools I\u2019m learning Ruby.<\/p>\n\n<h3 id=\"why-this-blog\">Why this blog?<\/h3>\n<p>I\u2019m opening this blog because my English is terrible! I have an Italian\n<a href=\"\/\">blog<\/a> on WordPress, but I\u2019d like to use Jekyll and this is a\ngood opportunity to share my experience and my job, to grow and improve my\nskills! Can you help me with my English? :P<\/p>\n\n<h3 id=\"skills-and-interests\">Skills and interests<\/h3>\n\n<p>This is a list of my skills and interests that I am sure will all be topics for\nmy posts; I hope you will enjoy reading them! PHP, tech, HTML, CSS, JS, Open\nSource, ZF2, Doctrine, Symfony, NoSQL (Mongo, Couch..), SQL databases, Redis,\nDevOps, Chef, Vagrant, Composer, TDD\u2026<\/p>\n\n<h3 id=\"open-source-world\">Open Source world!<\/h3>\n\n<p>Community and Open Source are my passion! A community is a great way to\nchallenge myself as a person and as a coder. You can find me on <a href=\"https:\/\/github.com\/gianarb\">GitHub<\/a>!<\/p>\n\n<h3 id=\"this-is-my-face\">This is my face!<\/h3>\n<div style=\"text-align:center;\">\n<img src=\"\/img\/posts\/2013-07-19-why-this-blog.png\" width=\"90%\" \/>\n<\/div>\n"}]}