
Learning While Coding: How LLMs Teach You Implicitly

LLMs can deliver just-in-time knowledge tailored to real programming tasks; it's a great way to learn about coding idioms and libraries.
Sep 7th, 2023 8:30am

I’ve always been a hands-on learner, especially when it comes to learning how to use — and create — software. I wish I could learn protocols from specifications, come up to speed with apps by reading documentation, and absorb coding techniques from structured lessons, but things don’t come alive for me until I’m enmeshed in a project, focused on a tangible goal, and able to run live code in a debugger. In Radical just-in-time learning, I recalled my favorite scene in The Matrix.

Neo: Can you fly that thing?

(looks toward helicopter)

Trinity: Not yet.

(picks up phone)

Tank: Operator.

Trinity: I need a pilot program for a B-212 helicopter. Hurry.

(eyelids flutter briefly)

Trinity: Let’s go.

(they jump in the chopper and fly off the roof)

For that project, the helicopter I needed to learn how to pilot was React, the role of Tank the operator was played by an LLM (actually, a team of LLMs), and the concepts acquired just in time included useState, useEffect, and JSX. I didn’t need to become a fully competent pilot; I just needed to get off the ground and make a short hop. With LLM guidance I accomplished that much faster than I otherwise could have, starting from a baseline of essentially zero React knowledge.

Did I “learn” React? Barely! This was an exploratory exercise. The resulting proof-of-concept may or may not evolve, but if it needs to I’ll have broken the ice. And I’ll approach future iterations knowing that documentation and lessons aren’t the only way to build on my rudimentary React knowledge: I’ll be able to summon guidance that’s delivered just-in-time in task-specific contexts.

Ambient Learning

The highlight of my previous post was an impressive performance by ChatGPT and its Code Interpreter plugin. Running it in an autonomous goal-directed loop, where the goal was to pass tests I’d written, was an eye-opening experience. But as I worked through the exercise — which entailed writing code to digest changelogs, then visualizing the changes in various ways — I learned a number of useful things.

Printing Expected and Actual Values

Here’s one of the tests I wrote.


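The test itself didn’t survive here, so what follows is a minimal sketch in the same spirit, with a hypothetical parse_changelog helper standing in for the real code. It shows the idiom discussed below: a bare assert whose optional second argument reports the expected and actual values on failure.

```python
# Hypothetical stand-in for the real changelog-parsing function:
# collect the bullet items in a changelog fragment.
def parse_changelog(text):
    return [line for line in text.splitlines() if line.startswith("- ")]

expected = 2
actual = len(parse_changelog("- added feature\n- fixed bug"))

# No test framework, just assert; the second argument is the message
# shown when the assertion fails.
assert actual == expected, f"expected {expected}, got {actual}"
```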
I like to do things the simplest possible way, so there’s no test framework here, just a basic assert. I hadn’t known about the optional second argument (or perhaps had forgotten), so I originally used a second line of code to print the expected and actual values. Could I have looked that up? Sure, but it wasn’t important enough to interrupt my flow. What happened instead: the LLM showed me the idiom as a by-product of code it wrote to pass the test. This is the kind of tacit knowledge transfer that can happen when you work with another person: you don’t explicitly ask a question, and your partner doesn’t explicitly answer it. The knowledge just surfaces organically, and transfers by osmosis.

Here are some other bits of tacit knowledge transfer that happened along the way.

argparse Defaults

It had been a while since I’d used Python’s argparse module. Another thing that I may never have learned in the first place, or forgot, was this idiom:


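The original snippet was lost, but the idiom is easy to sketch: with action="store_true", the flag's presence on the command line yields True and its absence yields False (the option name matches the one discussed below).

```python
import argparse

parser = argparse.ArgumentParser()
# store_true makes this a value-less flag: present means True,
# absent means False.
parser.add_argument("--count_all_plugins", action="store_true")

args = parser.parse_args(["--count_all_plugins"])
assert args.count_all_plugins is True

args = parser.parse_args([])
assert args.count_all_plugins is False
```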
When ChatGPT used action="store_true" I thought I knew what that meant, but asked the LLMs to verify. Among the chorus of explanations — including those from Cody and Copilot — I found this one from ChatGPT most helpful:

This makes --count_all_plugins a flag that doesn’t require a value. Its mere presence on the command line means “yes” or True, and its absence means “no” or False.

Could I have learned that from documentation? Again, sure. Would I have learned it that way? Again, unlikely. If I were lacking the concept mere-presence-on-the-command-line-means-true I would have had to first think of the idea, then dig through the docs to see whether it’s possible, and if so, how.

What happened instead is that the LLM surfaced the concept in a context where it was needed, showed me how to apply it, and when asked to explain, grounded the explanation in that very context. It wasn’t just mere-presence-on-the-command-line-means-true, but more specifically mere-presence-of-count_all_plugins-on-the-command-line-means-true.

Perhaps it’s a failing of mine that I learn best given a specific example, based on my own situation, from which I can then generalize. But I doubt I’m the only learner who operates that way. Once I’ve made some headway on a task, I’ll go to the documentation to enrich my understanding. But I rarely want to start there. It’s hard enough to use docs to answer known questions, and even harder to use them to answer questions you didn’t think to ask. When ambient knowledge can emerge while doing a task, I’m a far more effective learner.

re.escape

According to the docs: “This is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it.” I’ve written a lot of Python regexes over the years, had never learned this, and as a result, probably caused myself a lot of grief. Having the idiom shown to me in passing, while in a teachable moment, was the best way to learn it.
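A quick illustration of the point (with made-up strings): re.escape backslash-escapes regex metacharacters, so a literal string such as a version label can be matched safely, whereas the unescaped string quietly matches things it shouldn’t.

```python
import re

version = "1.2.3 (beta)"

# Escaped: "." and "(" are treated as literal characters.
assert re.search(re.escape(version), "released 1.2.3 (beta) today")

# Unescaped, "." matches any character, so this matches unintentionally.
assert re.search("1.2.3", "1x2y3")
```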

The Negative Lookahead Assertion

Here’s something I’m pretty sure I never learned in the first place. It appears in the regex that ChatGPT wrote to match bullet points in changelogs. I’d have been uncomfortable using it without understanding what it was, but there was no need: I was not only given the code, I was able to ask for and receive an explanation.

(?!\s*-): This is the negative lookahead assertion. It checks if the next line does NOT start with a dash (-) preceded by optional whitespace. In other words, it ensures that the next line isn’t the start of a new list item.

The combination -\s[^\n]*(?:\n(?!\s*-).*)* matches a list item that can span multiple lines, as long as the subsequent lines don’t start with a new list item.
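The pattern quoted above can be exercised directly; here’s a small sketch with made-up changelog text showing the negative lookahead letting an item absorb continuation lines while stopping at the next bullet.

```python
import re

changelog = """- first item
  wraps to a continuation line
- second item
- third item"""

# (?!\s*-) is the negative lookahead: consume following lines only
# if they don't begin a new bullet.
pattern = r"-\s[^\n]*(?:\n(?!\s*-).*)*"
items = re.findall(pattern, changelog)

assert len(items) == 3
assert "continuation" in items[0]
```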

Nonlocal Variables

When I asked ChatGPT to refactor a function that had grown too complex, and to use nested functions in doing so, I was introduced to another idiom I’d never encountered. I have sometimes used the global keyword to refer to a variable outside the scope of any function, and might have (guiltily) done the same in this case. But here I learned the concept variable-in-the-nearest-enclosing-scope-that-is-not-global.
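The refactored function isn’t reproduced here, but the idiom itself is simple to sketch: the nonlocal keyword rebinds a variable in the nearest enclosing function scope, with no global needed.

```python
def make_counter():
    count = 0  # lives in make_counter's scope, not the global scope

    def increment():
        nonlocal count  # rebinds make_counter's count, not a global
        count += 1
        return count

    return increment

counter = make_counter()
assert counter() == 1
assert counter() == 2
```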

Getting up to Speed with Libraries

Once I could reliably capture the changelog data, it was time to visualize it using tables and charts. Nowadays I’m most comfortable with SQL, so when ChatGPT provided solutions based on pandas.DataFrame, it created another learning opportunity. I’d used pandas a few years ago, neither extensively nor easily. Since HTML tables were one of my desired outputs, this was a nice introduction to the pandas.DataFrame.to_html method.
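The gist of that step, sketched with hypothetical changelog-summary data: to_html renders a DataFrame as an HTML table ready to embed in a report.

```python
import pandas as pd

# Hypothetical per-plugin change counts.
df = pd.DataFrame(
    {"plugin": ["aws", "net", "github"], "changes": [12, 7, 19]}
)
html = df.to_html(index=False)

assert "<table" in html
assert "github" in html
```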

I’d also used Matplotlib before, again neither extensively nor easily, so I appreciated ChatGPT showing me how to apply it to the task at hand. With that code in place, my script wrote two files: an HTML file with tables, and an image file referenced in the HTML.

Where possible, I like to minimize the number of moving parts that comprise a solution. All other things being equal, if a thing can be done with a single file I’ll prefer that to a multi-file package. The chart I needed was simple, and I knew it would be possible to create it using only HTML and CSS in a single file that also included the HTML tables, but I wouldn’t normally go out of my way to try making that happen. Now, though, with a helpful assistant on board, why not give it a try?

Although the HTML-and-CSS-only experiment did not yield a successful result, I don’t regard it as a failure either. It was a quick refresher for CSS’s flexbox layout mechanism, with examples of align-items, flex-direction, and gap that I could play with in the context of running code relevant to the task at hand. The basic chart came together quickly, then efforts to refine it yielded diminishing returns. Getting the axes right was — unsurprisingly! — tricky. Along the way ChatGPT made an interesting suggestion:

Let’s try a different strategy. We’ll use Matplotlib’s standard functionalities, then use mpld3 to convert to an HTML representation.

Another useful thing that I learned in passing: Matplotlib can render to HTML by way of mpld3, which can “Bring Matplotlib to the Browser”! That got me closer to the desired result, but the y-axis remained problematic so I retreated from the slippery slope. The excursion didn’t take long, though, and my interaction with Matplotlib/mpld3 felt like a valuable learning experience. Had I been starting from scratch, hunting for examples in the docs similar to the code I was trying to write, it would have been painful and time-consuming. But ChatGPT, aware of the context in which I was working, enabled me to iterate rapidly.

When the Matplotlib/mpld3 effort stalled, I asked ChatGPT to recommend another charting library that could render charts using HTML/CSS. It suggested plotly, Bokeh, and Vega-Altair. I’d had brief prior experience with plotly, none with Bokeh or Vega-Altair, so I’d have been inclined to try plotly first. ChatGPT’s characterization of it — as the most straightforward when transitioning from Matplotlib — reinforced that inclination.

The plotly solution took a bit of fiddling, naturally, but again the LLM assistance radically accelerated my learning. Here’s where we landed.


Having read the documentation and seen code examples, ChatGPT was able to suggest better strategies for managing tick labels and values than those I’d have come up with from a cold start. There were still false starts and blind alleys! But the solution emerged quickly, by way of a collaboration that felt very much like pair programming.

Now more than ever programming entails finding and applying libraries and components that exist in bewildering profusion. You’re less likely to be writing what we conventionally think of as lines of code, much more likely to be tweaking parameters and settings. There’s a vast gulf between what the documentation says those parameters and settings mean and what actually happens when you try to use them. The ability of LLMs to help bridge that gap may turn out to be one of the most powerful forms of code-writing assistance they deliver.
