Promos

Jonathan Godbout

11/20/2022

I recently went from Software Engineer 3 to Senior Software Engineer (L4 to L5) at Google. This promotion doesn’t greatly change my day-to-day work; it just means I’ll be graded against a higher standard than I was a few months ago. With this change, I’ve been trying to figure out what this promotion actually means, and what the future holds.

My History At Google

I started at Google in 2016 as an L3, the level at which most people start: a new-grad hire. Not much is expected of new-grad hires; your work is generally assigned by someone more senior, most of the implementation questions are already answered, and anything left open is expected to be fairly simple. People generally don’t stay at this level very long.

In late 2017 I took up a larger project that had been started by a Xoogler; it had priority but also a lot of open questions. This was my introduction to really writing documentation, to researching the tools needed to support systems, and to meeting with people from other teams to determine how not to break them. After a year’s worth of work, becoming the owner of a large swath of my team’s codebase, and building a system that I will be asked about for years to come, I was promoted to L4.

L4 is the highest level you must reach at Google. You’re expected to own some small portion of your team’s codebase and to be able to take tasks with moderately open questions, design a solution, and implement it.

Getting to L5

As an L4, you’re not expected to design and implement large systems at Google; most of the interesting design work is done at L5 and above. Thus the big difference between L4 and L5 is being able to design a large(ish) system given some set of constraints. Say the L6 says we need X done: the L5 designs the system, and the L{3,4,5}s go forth and implement it.

Now, I work on a team whose system was put in maintenance mode years ago, so how does one find such a large project to work on? The best answer I have is to find a piece of your system that is causing your team pain (better yet, causing customers pain) and be the one who designs a solution to fix it. Remember, this is not an iterative approach to fixing a problem; it has to be a large project that gives clear benefits to, and requires working across, multiple teams.

Every time I saw an interesting project, I took it. 

  • My team had no way to use Protocol Buffers, so I rewrote CL-Protobufs, giving us access to Google-standard technology.
  • The dreaded NDC future of airfare ticketing is coming; try to be part of it.
  • Different release processes are painful; try to fix them.
    • This paired nicely with the first item.

Where I am now

Right now I’m an L5 engineer, and given the average promotion rate, I will probably be an L5 engineer for a few years. To reach the next level I need to expand my scope of work, and there are two possibilities:

  • Manager
  • Staff Software Engineer

Both require expanding my range of influence, and growing beyond just being a QPX engineer.

Why Promo

A lot of engineers stay at L5. It is said (everywhere) that the L6 role is a completely different job: leadership instead of engineering. Even as a Staff Software Engineer you’re still a leader; you own a significant portion of your codebase, and you set priorities for your team’s future endeavors. A lot of engineers have no interest in this role.

Personally, I don’t like stagnation. I don’t like sitting still, and I wish to learn more. I can be a better engineer: I can learn more, be more attuned to performance. But at some point, in order to enlarge your knowledge, you must cast off into new unknowns and gain responsibility beyond yourself. That is what this next step will be about.


Kids Agency

Agency, noun; action or intervention, especially such as to produce a particular effect.

I’m a big believer that kids should be given as much agency as their age and ability allow. There is a movement called Free Range Kids fighting back against the continual coddling and oversight of kids. We should be very supportive of it. In this post I’ll describe some of what I think should be allowed.

Bikes

Being a kid is hard: you are continually told what you can and cannot do, you can’t transport yourself anywhere, and you live in an adult world. The first taste of freedom you are likely to get is on a bike. You suddenly go from walking or running speed to a much faster biking speed. Your parents can no longer feasibly keep up with you, and you have the ability to explore your world.

When I was a kid my bike was how I got around. Living in rural Vermont, the only way to get anywhere was on a bike (or a car if you’re old enough). Getting to friends was nearly impossible on foot. My bike let me get to my friends, the local park, my school, and the town’s convenience store. We’ll discuss more about letting kids run around alone later.

Now that I have kids, I want them to learn how to ride. Not only is it good exercise, and a lot of fun, but it will also serve as their first vehicle to get around town and see their friends. Right now I want them to be in my eyesight, but Lyra already loves riding her bike to the playground. Faye now has a balance bike but it will be a few months before she grasps riding.

Favorite kids bikes:
Woom 1: https://us.woombikes.com/products/1
Woom 2: https://us.woombikes.com/products/2

Playing Without Parents

When kids are old enough, they need their own space. They need to be able to order their own lives, to run around by themselves, and to just be themselves. With parents constantly over their shoulder, they will never learn about themselves.

When I was a kid, I lived in rural Vermont with many acres of land around me. My parents always said “There’s a large forest out back, go play.” Around age 7, my friends and I would ride around Huntington; there was a playground near the gravel pit and a convenience store a few miles away. We would ride down and get root beer, candy, or some other snack. So long as I told my parents where I was going, they were fine.

Now I live in a suburb of Boston, MA, and everyone is scared of everyone else. People seem not to want kids playing by themselves. But I ask: “What’s the point of having a backyard if I can’t tell her to go out back and play?” She’s old enough to know to stay out back. In a few years (2, 3?) she should have no problem with the 5 minute walk down the street to the park.

Note: The street I lived on in Huntington had cars; we were smart enough to get out of the way.

Being Alone

Sometimes Lyra decides she wants to play alone. Maybe Faye is getting in her space, maybe Mama and Dada are being too belligerent, but she needs her own time. She goes into her room, or the playroom, and cleans, or plays, or reads. This is important for development, allowing her to self calm, self direct, and just be herself. Parents need to give this to kids.

Outro

Kids are near infinitely capable. The bounds they have are often the bounds we set upon them. Before thinking about what you’re comfortable with and what fears you have, think about what your child needs, what they can handle, and what kind of agency you want them to have. All that said, you should do what you feel is right for your kids!

Out With 2021, In With 2022

Greetings readers,

I know I don’t post much, but I have many ideas. Maybe someday I’ll find the time to write them!

The year 2021 is practically over, and I feel like a year end blog post is in order. It was quite a year. I say that every year, so maybe every year is quite a year!

The most important part of the year was the birth of little Faye. It’s interesting having your second child, in some ways you’re much more prepared for the work, for the sleeplessness, and for taking care of a baby. In other ways, you’re much less prepared than you were for the first one. You know the best ways to get them to sleep, but then the older sister goes and wakes them up!

This year saw Lyra starting preschool, spending time out of the house without us. For the first time Grandpa Don took care of her alone. For the first time I saw her interact with a very close best friend. In some ways I know she’s still my little girl, only 3, but in so many other ways she seems to be getting too big already.

Last year ended with us buying a house; we didn’t get to move in until nearly the end of this year. Even after that, the place was barely livable until the end of October. Still, it’s great for Lyra to have a large yard to play in, and big bad wolves (not really) to hunt out back.

Wenwen keeps working at UMass Boston. The hardest thing is finding time to relax; someday we’ll find some.

As for 2022: I don’t imagine the pandemic going away. It will linger on, and we’ll be forced to find better ways to live in a world with a pandemic, but we’ll find them. We have our family. Despite all that’s going on, I don’t think there’s been a better time for me.

Jon

Welcoming Faye, Helping Lyra

Greetings everyone.

This will be the first post since Faye’s birth. I have one more Proto Cache post coming; in fact, the code is already up on GitHub at head, but I haven’t had time to write it yet. Technical posts take quite a while to write, then need copy edits and code fixes. It will come, someday…

Faye is doing great. She’s much quieter than Lyra was at her age. As a parent, you tend to worry about everything, but that’s just life as a parent. You truly forget how small newborns are.

I get worried about Lyra. When Wenjing was at the hospital, Lyra kept wondering when Mama was going to come home. She said she was excited to see Faye, but she had no idea of the change that was coming when we got to the hospital. That first meeting was hard.

Due to Covid, I was able to enter the hospital once, and since I had Lyra at home I was only there for around 5 hours. When Lyra and I arrived, I carried her inside as she was spooked to be at the hospital. When we got to Faye, I held her, put her in the car seat, and sang a quick lullaby. Lyra’s face started scrunching up, and tears quickly followed. It’s tragic seeing a toddler about to cry, knowing what’s coming, but having no way to stop the tears.

Lyra’s gotten over the immediate shock. She helps us change diapers and talks to Faye. She seems happy to be an older sister. On the other hand, she gets jealous. Wenjing is allowed to be with Faye, and Lyra is fine with that. But Lyra seems to think Dada is hers.

Lyra never asked for this. Her small world has changed immensely, and unlike adults she has no experience handling change. Toddlers have a hard time controlling their emotions; they wear everything on their sleeve. Even so, she already loves her little sister.

Merry Christmas!

Greetings everyone!

Lyra on a box.

This will not be a programming post, or really a post of any technical or mathematical interest. I’m not entirely sure what the next technical post I will make is, but I am thinking.

As it is Christmas, I wanted to say some thanks.

Lyra’s first Christmas tree at home.

First, to Carl Gay, my coworker and mentor at Google. He’s been the person I’ve talked to the most from work over these past 9 months (and probably well before that as well). He’s much further along in his career than I am, but he’s been amazingly helpful and a kind friend.

I have been blessed with many great co-workers at Google. Ron, Ted, Stephen, Rujith, etc. Thank you for making this strange work year as great as it was.

Also Google. They’ve given me months off to take care of my daughter and allowed my wife to continue working without strain on childcare. I know people have a lot of misgivings about Big Tech, but I truly believe Google always tries to do what’s right.

Lyra and Me at Google last year!

Next, my parents. It was a tough year. We stayed at my mom’s for a bit in the summer, which allowed Lyra to play in giant fields and moo at giant cows. Sadly, we did not get to see my dad and Melinda. We miss them very much and look forward to seeing them in 2021.

Lyra, Grandma, Cows, and Me.

Finally, to my wife Wenwen and daughter Lyra, for making this year. For making our condo a home.

Again, Merry Christmas and if I don’t post again this year have a Happy New Year!

Lyra and Wenwen drawing.

Lisp Mortgage Calculator Proto with JSON

I’ve finally found a house! Like many Googlers from Cambridge, I will be moving to Belmont, MA. With that comes getting a mortgage. My wife noticed we didn’t know much about mortgages, so she decided to do some research. I, being a mathematician and a programmer, decided to make a basic mortgage calculator that tells you how much you will pay on your mortgage per month and gives you an approximate amortization schedule. Due to rounding, it’s impossible to give an exact amortization schedule for every bank.

This post should explain three things:

  1. How to calculate your monthly payment given a fixed rate loan.
  2. How to create an amortization schedule.
  3. How to create an easy handler in Hunchentoot that takes either application/json or application/octet-stream.

Mathematical Finance

The actual formulas here come from the Pre Calculus for Economic Students course my wife teaches. The book is:

Applied Mathematics for the Managerial, Life, and Social Sciences, Soo T. Tan, Cengage Learning, Jan 1, 2015 – Mathematics – 1024 pages

With that out of the way we come to the Periodic Payment formula. We will assume you pay monthly and the interest rate is quoted for the year but calculated monthly. 

 Example:
 Interest rate of 3%
 Loan Amount 100,000$
 First Month Interest = $100,000*(.03/12) = $100,000*.0025= $250. 

 MonthlyPayment = \frac{LoanAmount * \frac{InterestRate}{12}} {1 - (1 + \frac{InterestRate}{12})^{-NumberOfMonths}} 

I am not going to prove this, though the proof is not hard; I refer the reader to section 4.3 of the cited book.

With this we can compute the amortization schedule iteratively. The interest paid for the first month is:

I_{1} = LoanAmount * \frac{InterestRate}{12}

The payment toward principal for the first month is:

PTP_{1} = MonthlyPayment - I_{1}

The interest paid for month j is:

I_{j} = \frac{InterestRate}{12}*(LoanAmount - \sum_{i=1}^{j-1}PTP_{i})

The payment toward principal for month j is:

PTP_{j} = MonthlyPayment - I_{j}

Since I_{j} relies only on PTP_{i} for 0 < i < j, and PTP_{1} is defined, we can compute these values for as many months as we wish!
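The formulas above translate directly into Lisp. This is a minimal sketch, not the code from mortgage-info.lisp; the function names are mine, and no currency rounding is applied.

```lisp
(defun monthly-payment (loan-amount yearly-rate num-months)
  "Periodic payment for a fixed rate loan; YEARLY-RATE is quoted for
the year but compounded monthly, per the formula above."
  (let ((i (/ yearly-rate 12)))
    (/ (* loan-amount i)
       (- 1 (expt (+ 1 i) (- num-months))))))

(defun amortization-schedule (loan-amount yearly-rate num-months)
  "Return a list of (INTEREST . PAYMENT-TOWARD-PRINCIPAL) pairs,
computed iteratively as in the I_j and PTP_j formulas."
  (let ((payment (monthly-payment loan-amount yearly-rate num-months))
        (i (/ yearly-rate 12))
        (principal loan-amount))
    (loop repeat num-months
          for interest = (* principal i)   ; I_j on the remaining balance
          for ptp = (- payment interest)   ; PTP_j
          do (decf principal ptp)
          collect (cons interest ptp))))
```

For example, (monthly-payment 100000 0.03d0 360) evaluates to roughly 421.60, the monthly payment for a 30-year $100,000 loan at 3%.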

Creating the Mortgage Calculator

We will be creating a Hunchentoot server that receives either JSON or octet-stream Protocol Buffer messages and returns either JSON or octet-stream Protocol Buffer messages. My previous posts discussed creating Hunchentoot acceptors and integrating Protocol Buffer messages into a Lisp application. For a refresher, please visit my Proto over HTTPS post.

mortgage.proto

When defining a system that sends and receives protocol buffers, you must tell your consumers what those messages will be. We expect requests in the form of the mortgage_information_request message, and we will respond with a mortgage_information message.
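The proto file itself did not survive into this copy of the post, so here is a sketch of what mortgage.proto plausibly contains, reconstructed from the JSON example below and the field accessors used later (mf:interest, mf:loan-amount, mf:num-periods); the response fields are guesses, and the real file is in the repo.

```proto
syntax = "proto2";

message mortgage_information_request {
  optional string interest = 1;
  optional string loan_amount = 2;
  optional string num_periods = 3;
}

message amortization_line {
  optional string interest = 1;
  optional string principal = 2;
  optional string balance = 3;
}

message mortgage_information {
  optional string monthly_payment = 1;
  repeated amortization_line amortization_schedule = 2;
}
```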

Note: With the cl-protobufs.json package we can send JSON requests that look like the protocol buffer message. So sending in:

{
 "interest":"3",
 "loan_amount":"380000",
 "num_periods":"300"
}

We can parse a mortgage_information_request. We will show how to do this shortly.

mortgage-info.lisp

Server Code:

There are two main portions of this file, the server creation section and the mortgage calculator section. We will start by discussing the server creation section by looking at the define-easy-handler macro.

We get the post body by calling (raw-post-data). This can be either JSON or serialized protocol buffer format, so we inspect the content-type HTTP header with:

(cdr (assoc :content-type (headers-in *request*)))

If this header is “application/json” we turn the body into a string and call cl-protobufs.json:parse-json:

(let ((string-request 
        (flexi-streams:octets-to-string request)))
      (cl-protobufs.json:parse-json 
         'mf:mortgage-information-request
         :stream (make-string-input-stream 
                    string-request)))

Otherwise we assume it’s a serialized protocol buffer message and we call cl-protobufs:deserialize-from-stream.

The application code is the same either way; we will briefly discuss this later.

Finally, if we received a JSON object we return a JSON object. This can be done by calling cl-protobufs.json:print-json on the response object:

(setf (hunchentoot:content-type*) "application/json")
(let ((out-stream (make-string-output-stream)))
   (cl-protobufs.json:print-json response
      :stream out-stream)
   (get-output-stream-string out-stream))

Otherwise we return the response serialized to an octet vector using cl-protobufs:serialize-to-bytes.

Application Code:

For the most part, the application code is just the formulas from the mathematical finance section written in Lisp. The only problem is that representing currency as double-precision floating point is terrible. We make two simplifying assumptions:

  1. The currency uses two digits after the decimal.
  2. We floor to two digits after the decimal.

When we make our final amortization line, we pay off the remaining principal. This means the final payment may not match the payment for every other month, but it removes rounding errors. We may want to define a currency message for users to send us which specifies its own rounding and decimal places, or we could use Google’s money proto (which is not one of the well-known types). The ins and outs of currency programming weren’t the point of this blog post, so please pardon the crudeness.
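The flooring assumption can be captured in a small helper; the name here is mine, not one from mortgage-info.lisp.

```lisp
(defun floor-to-cents (amount)
  "Floor AMOUNT to two digits after the decimal, returning a rational."
  (/ (floor (* amount 100)) 100))
```

Returning a rational rather than a float sidesteps further floating-point drift when the value is used in later calculations.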

We create the mortgage_info message with the call to populate-mortgage-info:

  (let (...
         (response (populate-mortgage-info
                    (mf:loan-amount request)
                    (mf:interest request)
                    (mf:num-periods request)))) …)

We showed in the previous section how we convert JSON text or a serialized protocol buffer message into a protocol buffer message in Lisp memory; this message is stored in the request variable. We also showed how the response variable is returned to the caller as either a JSON string or a serialized protocol buffer message.


The author would like to thank Ron Gut, Carl Gay, and Ben Kuehnert.

The Secretary Problem

I’ve been looking at houses lately. The general problem with house hunting is that there is a time limit, which dictates how many houses you will see, and there will probably be a close-to-total order on your opinions of them. In layman’s terms: each house you look at will be better than some of the houses and worse than the rest. My wife and I have debated how long we should look for a house. Thankfully, this is nicely solved in mathematics.

The Secretary Problem:

Suppose you are trying to hire a secretary. You know you will interview 10 possible secretaries and you will have a total order in how much you like them. You must decide whether or not you should hire them at the end of each interview. What is the likelihood of choosing the top ranked secretary?

Problem and algorithm description

To further explain: each possible secretary you interview has a rank from 1 to 10. When you interview them, you will not know their rank, but you will know how they compare to the other secretaries you have already interviewed. When you interview candidate 1 you have no information. When you interview candidate 2 you know whether they are better or worse than candidate 1. When you interview candidate 3 you know how they relate to candidates 1 and 2. More interviews give you more knowledge of the ranking, but fewer choices of whom to hire.

Obviously there are many algorithms you could use to choose a secretary. You could choose the first secretary who comes to interview; your chance of getting the optimal secretary is 10%. You could choose the first secretary who is better than the first candidate; this means that with 90% probability you will avoid the worst secretary!

The optimal probability of selecting the best secretary is 1/e. I’m not going to go into the proof (it’s not easy), but if you’re interested please check out the Wikipedia page. The algorithm itself is quite simple. First we generalize to having n secretaries come to interview.

  1. Interview, but do not choose, the first n/e applicants.
  2. Choose the next applicant who is better than all of the first n/e applicants.

Coding Experiment

We will generalize the optimal algorithm as follows:

  1. We will check the first k candidates of the n candidates.
  2. We will choose the first applicant who is better than the first k applicants.

We will create a permutation of {1,…,n}, take the max of the first k candidates, then take the first later candidate ranked higher than that max; if no such candidate exists, we take the last candidate. We return a boolean indicating whether the chosen candidate is ranked n.
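The experiment can be sketched as follows. This is not the repo code (which uses cl-permutation); a hand-rolled Fisher-Yates shuffle keeps it self-contained, and the function names are mine.

```lisp
(defun shuffle (n)
  "Return a random permutation of 1..N as a vector (Fisher-Yates)."
  (let ((v (make-array n)))
    (dotimes (i n) (setf (aref v i) (1+ i)))
    (loop for i from (1- n) downto 1
          do (rotatef (aref v i) (aref v (random (1+ i)))))
    v))

(defun best-chosen-p (n k)
  "Skip the first K of N candidates (K >= 1), then take the first later
candidate better than all of them, or the last candidate if none is.
Return T if the chosen candidate is the best, i.e. ranked N."
  (let* ((v (shuffle n))
         (threshold (loop for i below k maximize (aref v i)))
         (chosen (or (loop for i from k below n
                           when (> (aref v i) threshold)
                             return (aref v i))
                     (aref v (1- n)))))
    (= chosen n)))

(defun success-rate (n k trials)
  "Fraction of TRIALS in which the strategy picks the best candidate."
  (/ (loop repeat trials count (best-chosen-p n k)) trials))
```

For n = 10 and k = n/e ≈ 3, (success-rate 10 3 100000) should land near the theoretical optimum of about 0.4.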

The code can be found on my GitHub account. We use Robert Smith’s cl-permutation library, available on Quicklisp.

We see for 10 candidates and 100000 trials we get:

For 100 candidates and 100000 trials we get:

It’s interesting to note that your chances of finding the optimal secretary increase quite quickly as the number of candidates you skip grows toward the optimal stopping bound, and decrease far more slowly after you pass it.

Takeaways:

As mathematics only approximates life, this doesn’t perfectly fit into my house search problem. I don’t know how many houses I will see, and I don’t know if house prices will increase or decrease over time. Also, I often don’t have to make a split-second decision right after I see a house. 

This does however give me a takeaway:

When searching for a house, do your due diligence and look at as many open houses as you can at first. Getting an idea of what you like and don’t like will help you find the house you want. Don’t wait too long though!

I would like to thank Ron, Carl, and Ben for the edits to this article.

Serializing and Deserializing Protobuf Messages for HTTP

So far, I’ve made two posts: creating an HTTP client which sends and receives protocol buffer messages, and an HTTP server that accepts and responds with protocol buffer messages. In both of these posts we had to do a lot of extra toil serializing protocol buffers into base64-encoded strings and deserializing protocol buffers from base64-encoded strings. In this post we create three functions and a macro to help us serialize and deserialize protocol buffers in our HTTP server and client.

Notes:

I will be discussing the Hello World server and Hello World client. If you missed those blog posts, it may be useful to view them here and here. There has been code drift since those posts, mainly the changes we will discuss in this post. The source code for the utility functions can be found in my protobuf-utilities repo on GitHub.

Code Discussion

This time we will omit discussion of the asd files. We went through them line by line in the two posts referenced in the notes, so please look at those.

In addition to the main macros we discuss and show below, we use two helper functions deserialize-proto-from-base64-string and serialize-proto-to-base64-string which can be found in my protobuf-utilities repo.

Server-Side

We noticed that a large part of the problem with using cl-protobufs protocol buffer objects in an HTTP request and response is the tedium of translating the base64-encoded string sent to the server into a protocol buffer, and then reversing the process for the response object. We know which parameters to our HTTP handler will be either nil or a base64-encoded proto packed in a string, and we know their respective types. With this we can make a macro that translates the strings into their respective protos and makes them usable in an enclosing lexical scope.

Why a macro? Many Lispers may not ask this question, but we should, as macros are harder to reason about than functions. We want the body of our macro to run in a scope where it has access to all of the deserialized protobuf messages, and we are creating a utility that works for any list of proto messages as long as we know their types. We could, with some effort, write a function that accepts another function and funcalls it with the deserialized messages, but it would be ugly. With a macro we can create new syntax that simplifies the code, letting us simply list the protobuf messages we wish to deserialize and then use them.

Given that, what our macro should accept is obvious: a list of conses, each containing the variable that holds an encoded proto and the type of message to be encoded/decoded. We also take a body in which the supplied symbols will refer to deserialized protos.

(defmacro with-deserialized-protos 
  (message-message-type-list &body body)
  "Take a list (MESSAGE . PROTO-TYPE) 
MESSAGE-MESSAGE-TYPE-LIST where the message will be 
a symbol pointing to a base64-encoded serialized proto 
in a string. Deserialize the protos and store them in 
the message symbols. The messages are bound lexically 
so after this macro finishes the protos return to be 
serialized base64-encoded strings."
  `(let ,(loop for (message . message-type) 
            in  message-message-type-list
               collect
               `(,message 
                   (deserialize-proto-from-base64-string
                      ',message-type
                      (or ,message ""))))
     ,@body))

It is plausible that our HTTP server will respond with a base64-encoded protocol buffer object. We could first call `with-deserialized-protos` to do some processing, creating a new protocol buffer object, and then call a function like `serialize-proto-to-base64-string`. Instead I create a macro that will automatically serialize to string then base64-encode the result of a body.

(defmacro serialize-result (&body body)
  (let ((result-proto (gensym "RESULT-PROTO")))
    `(let ((,result-proto ,@body))
       (serialize-proto-to-base64-string ,result-proto))))

Since we’ve gone this far, we can string these two macros together:

(defmacro with-deserialized-protos-serializing-return 
  (message-message-type-list &body body)
  `(serialize-result (with-deserialized-protos 
                       ,message-message-type-list ,@body)))

This vastly improves our handler:

(define-easy-handler (hello-world :uri "/hello")
    ((request :parameter-type 'string))
  (pu:with-deserialized-protos-serializing-return 
     ((request . hwp:request))
    (hwp:make-response
     :response
     (if (hwp:request.has-name request)
         (format nil "Hello ~a" (hwp:request.name request))
         "Hello"))))

A final pro-macro argument: macros let us create syntax that describes what we want a region of code to accomplish. The macros I wrote aren’t strictly necessary; you could just call `deserialize-proto-from-base64-string` several times in a let binding, and since you probably only have one request proto that would do fine. You could also serialize the return proto yourself. I find the macros make the code nicer to write; the downside is that people working on the code have to know what these macros do. Thankfully, we have M-x and docstrings for that.

Client-Side

The story is reversed on the client side. We have to serialize and base64-encode our proto objects before sending them over the wire, and then deserialize the result. One might imagine writing the same kind of macro here as on the server side. The problem is that there’s no real body we want to run with the serialized protos we send over the wire, and we get only one proto back, so we can just deserialize the HTTP result and let-bind it. A function suffices.

(defun proto-call 
    (call-name-proto-list return-type address)
  (let* ((call-name-serialized-proto-list
           (loop for (call-name .  proto) 
              in call-name-proto-list
                 for ser-proto 
               = (pu:serialize-proto-to-base64-string proto)
                 collect
                 (cons call-name ser-proto)))
         (call-result
           (or (drakma:http-request
                address
                :parameters call-name-serialized-proto-list)
               "")))
    (pu:deserialize-proto-from-base64-string return-type 
       call-result)))

Final Remarks

In this blog post we implemented several helper macros and a function for working with protocol-buffer objects in an HTTP environment. I believe the macros in protobuf-utilities are the missing link that will make cl-protobufs a welcome addition to Common Lisp HTTP servers.

Pull requests are always welcome!


I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.

Banana Bike

I’ve done several programming posts back-to-back, and I think it’s time for a fun break. Today we are going to talk about the Banana bike, which you can find on Amazon.

The Banana bike is a balance bike with an aluminum body and air-filled tires. It says it’s suitable for a toddler between the ages of 2 and 4, but I think that’s probably too large a range, so let’s say 2-3. It has a movable seat. I would say its best attribute, especially for the price range, is the air-filled tires. More expensive options like the Strider have foam tires, something that doesn’t roll well with me.

I got this bike for my daughter at around 14 months. She wasn’t really able to ride it until around 20 months, so you may want to wait until your toddler is tall enough to ride it. On the other hand, she continually wanted to try, so maybe having a challenge like this is a positive.

Pros:

  • The wheels are air filled. This is something bikes at this price point rarely have.
  • Very light. My daughter can easily move it around.
  • Good size.
  • Great price.

Cons:

  • The handlebar swivels a full 360 degrees. A steering limiter would be appreciated.
  • No brakes. At this age level that’s probably fine.

Final opinion:

This is a fantastic bike for your toddler. At first I was concerned about the difference in quality between a $60 bike and, say, the $200 Woom 1. I can tell you that your toddler won’t notice the difference. I might suggest the $30 upgrade to a Schwinn balance bike to get the steering limiter, but it’s definitely not required. If you have a toddler, you should definitely get out there and bike!

Note: A Woom 1, while expensive, may still be worth it; one can often resell them for more than the initial cost. The Banana Bike will probably have no resale value. You should, however, be able to use either bike with multiple children.

CL-Protobufs Hello-World-Client

In the last post we created a server using the Lisp web server Hunchentoot and the cl-protobufs protocol buffer library. In this post we will discuss making a client to contact our web service, sending protocol buffer messages over HTTP. We will be using the HTTP client package Drakma. We will end the post by discussing improvements that should be made in future iterations.

The reader can find the code in my hello-world-client GitHub repo. It contains three files:

  1. hello-world-client.lisp
  2. hello-world.proto
  3. hello-world-client.asd

If you haven’t read the previous post please take a look here as we will be connecting to the service discussed therein.

Updates to the HTTP web server

In our last post, our hello-world-server accepted a string and directly read it in as an octet buffer. We did this to ease testing: we could manually use a GUI web client such as Postman to send REST calls to our web server, inputting the octet buffer printed from the buffer returned by serialize-proto. I would like to avoid any read calls in my Common Lisp code for security. Instead, we will take in a base64-encoded string containing the octets, decode the base64 encoding with cl-base64, and use flexi-streams’ string-to-octets and octets-to-string to read and write the octet buffer as a string.

The updated code can be found in hello-world-server/hello-world-server.lisp starting at line 19.
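As a minimal sketch of this encoding scheme (the :latin-1 external format is my assumption here, chosen because it maps every octet value 0–255 to exactly one character), a round trip through flexi-streams and cl-base64 looks like:

```lisp
;; Sketch only: round-trip raw octets through the string/base64 layers.
;; :latin-1 is assumed so each octet maps to a single character.
(let* ((octets (coerce #(8 4 102 111 111) '(vector (unsigned-byte 8))))
       (encoded (cl-base64:string-to-base64-string
                 (flexi-streams:octets-to-string octets :external-format :latin-1)))
       (decoded (flexi-streams:string-to-octets
                 (cl-base64:base64-string-to-string encoded)
                 :external-format :latin-1)))
  (equalp decoded octets))
```

The final equalp should return T, confirming no octets are lost in translation.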

Code Discussion

I will omit the discussion of the hello-world.proto code and how it is compiled via the cl-protobufs ASDF additions. For a discussion of this please refer to my previous post here.

Hello-world-client.asd:

The useful information in this file is:

  • defsystem: The Lisp system is called hello-world-client. 
  • defsystem-depends-on: To load the system you will need cl-protobufs loaded first so we can generate Lisp code from the proto file.
  • depends-on: We will use Drakma as our HTTP client library.
  • module: We have one module, src.
  • protobuf-source-file: This is an ASDF component type given to us by cl-protobufs. It will look for a file hello-world.proto in our current directory and call protoc-gen-lisp on it.
  • file: A Lisp file, hello-world-client.lisp.
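Putting those fields together, the .asd file could look roughly like the sketch below. The exact defsystem-depends-on system name (cl-protobufs.asdf) is my assumption based on the cl-protobufs ASDF extension; check the repo for the authoritative version.

```lisp
;; Hypothetical sketch of hello-world-client.asd, assembled from the
;; fields described above; the repo holds the real file.
(defsystem :hello-world-client
  :defsystem-depends-on (:cl-protobufs.asdf)  ; assumed extension system name
  :depends-on (:cl-protobufs :drakma)
  :components
  ((:module "src"
    :serial t
    :components
    ((:protobuf-source-file "hello-world")    ; compiles hello-world.proto
     (:file "hello-world-client")))))
```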

Hello-world.proto:

The request and response schema definitions for the hello-world-server. This file is copied from hello-world.proto in hello-world-server.

Hello-world-client.lisp:

This is where the real work is done. We start the hello-world server locally on port 4242 with handler hello, so we set those values as globals. We define a function call-hello-world, which does the client's work. It takes a name, either nil or a string, plus address, port, and handler as optional keyword arguments defaulting to the aforementioned globals.
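As a sketch, the globals and signature described above might look like this; the global variable names are my invention, and the real ones live in the repo:

```lisp
;; Hypothetical globals matching the defaults described in the text.
(defparameter *address* "http://localhost")
(defparameter *port* "4242")    ; kept as a string so it concatenates cleanly
(defparameter *handler* "hello")

(defun call-hello-world (name &key (address *address*)
                                   (port *port*)
                                   (handler *handler*))
  ;; NAME is nil or a string; the body (elided) builds the request proto,
  ;; sends it to ADDRESS:PORT/HANDLER, and prints the decoded reply.
  nil)
```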

We create the proto message and then serialize it to bytes using the cl-protobufs serializer:

(cl-protobufs:serialize-object-to-bytes proto-to-send)

Next we use flexi-streams to turn the bytes into a string, cl-base64 to base64-encode that string, and Drakma to send it to the server.

(drakma:http-request
  (concatenate
    'string address ":" port "/" handler)
  :parameters
  `(("request" . ,(cl-base64:string-to-base64-string
                    (flexi-streams:octets-to-string
                      serialized-req)))))

The Drakma library blocks until it receives a response, which will contain a base64-encoded, stringified proto message. We simply reverse the base64 encoding, then call string-to-octets to get our octet buffer. We deserialize the proto message with cl-protobufs and print the response to the REPL.

(print
  (hwp:response.response
    (cl-protobufs:deserialize-object-from-bytes
      'hwp:response
      (flexi-streams:string-to-octets
        (cl-base64:base64-string-to-string response)))))

We see

(call-hello-world "foo")
=> "Hello foo"

as all good hello-world calls should show.

Final Remarks

The hello-world-server and hello-world-client code work as one would expect. There is, however, too much boilerplate surrounding this code. Having to manually call octets-to-string and string-to-base64-string, and their reverses, is cumbersome. What one should really do is provide a client-side function that does this for you.
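A sketch of such helpers might look like the following; the function names are mine and not part of cl-protobufs:

```lisp
;; Hypothetical client-side helpers wrapping the encode/decode pipeline.
(defun proto-to-base64-string (message)
  "Serialize MESSAGE and return it as a base64-encoded string."
  (cl-base64:string-to-base64-string
   (flexi-streams:octets-to-string
    (cl-protobufs:serialize-object-to-bytes message))))

(defun base64-string-to-proto (type string)
  "Decode base64 STRING and deserialize it into a proto message of TYPE."
  (cl-protobufs:deserialize-object-from-bytes
   type
   (flexi-streams:string-to-octets
    (cl-base64:base64-string-to-string string))))
```

With these, the body of call-hello-world shrinks to one encoding call before the Drakma request and one decoding call after it.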

On the server side it is equally onerous to call base64-string-to-string and string-to-octets for every proto parameter, and the reverse at the end. This should really be a macro that takes as an argument a list of (parameter-name . proto-type) pairs and does the deserialization for you. You could add an optional output proto-type/parameter-name to do the octets-to-string conversion and base64 encoding at the end of the call. This would amount to one macro call per handler.
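One possible shape for that macro, as a rough sketch: define-proto-handler and its calling convention are invented here, (var type) lists stand in for the dotted pairs, and error handling is omitted.

```lisp
;; Sketch only: each (var type) pair names a base64 request parameter that is
;; decoded into a proto before BODY runs; BODY should return a proto message,
;; which is re-encoded as the base64 response string.
(defmacro define-proto-handler (name (&rest params) &body body)
  `(defun ,name (&key ,@(mapcar #'first params))
     (let ,(loop for (var type) in params
                 collect
                 `(,var (cl-protobufs:deserialize-object-from-bytes
                         ',type
                         (flexi-streams:string-to-octets
                          (cl-base64:base64-string-to-string ,var)))))
       (cl-base64:string-to-base64-string
        (flexi-streams:octets-to-string
         (cl-protobufs:serialize-object-to-bytes (progn ,@body)))))))
```

A handler would then be a single form such as (define-proto-handler hello ((request hwp:request)) ...), one macro call per handler as suggested above.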

In the next cl-protobufs hello-world post we will try to add these!


I would like to thank @rongut, @cgay, and @benkuehnert for their edits and comments.