Are we the Future Assistant to the AI?
We all want an AI assistant, but none of us want to be the assistant to the AI
If you buy into the hype, AI will replace all software developers within 18 months. However, anyone who knows anything about software development knows that developers don’t spend most of their time writing code anyway.
Developers spend most of their time reading and debugging code. We spend a lot of it fixing bugs or making minor improvements to existing code. Every day, we scan through hundreds of thousands, or even millions, of lines of code just to change a few.
Our job is to understand how existing code works and make changes without breaking the entire system.
But supposedly, AI is going to do that now. I welcome its ability to do all production deployments with no problems. 😂
Is AI going to be our assistant, or will we be the AI's assistant?
What can AI actually do for development?
I have been using GitHub Copilot recently and am a huge fan.
Although sometimes it makes me mad.
I’m coding away in a rhythm, and it keeps making excellent suggestions, and I keep accepting them. I’m in the flow, and the magic is happening. Then all of a sudden, it won’t provide a suggestion… I just stare at the screen, waiting and giving it a scornful look. 👀
Seriously though, the current Copilot features are helpful, but it is mostly a smarter autocomplete.
I also use ChatGPT regularly to ask how to do things I can’t remember the exact syntax for, like creating a unique SQL index or writing a specific C# LINQ query. Stuff I have done many times but always forget the syntax for.
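The unique-index syntax is a good example of the kind of thing I look up. A minimal sketch of what that answer looks like, run here through Python’s built-in sqlite3 module so it is self-contained (the `users` table and `email` column are made-up names for illustration):

```python
import sqlite3

# In-memory database just to demonstrate the syntax
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# The statement I always have to look up: CREATE UNIQUE INDEX
conn.execute("CREATE UNIQUE INDEX ix_users_email ON users (email)")

conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
try:
    # A duplicate email now violates the unique index
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
except sqlite3.IntegrityError as e:
    print("Duplicate rejected:", e)
```

The exact DDL varies a bit by database, which is exactly why it never sticks in my head.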
The future of what GitHub Copilot X, ChatGPT, and other AI models will do is exciting. I see them as an assistant to help developers write code. AI pair programming sounds incredible, and we are just getting started with the tech.
However, you must still be more intelligent than the AI because it will make bad suggestions.
Speaking of Copilot, airplanes pretty much fly themselves these days. A pilot once told me his job was to sit there and be ready if the autopilot stopped or some other weird problem happened. Years and years of training to stare at the autopilot all day in case it fails, the plane has a mechanical problem, or the weather turns bad.
Is our future job to babysit our AI copilot?
AI also makes mistakes
If you know anything about how AI and ChatGPT work, you know it is all a text-prediction model. It uses statistics based on its training data to predict which words come next in a sentence. That works great for asking it obscure history questions buried in Wikipedia. But when it comes to computer programming, it will also just make things up based on that same prediction model.
For example, I asked ChatGPT how to use Stackify with an application written in COBOL. It straight-up provided an answer as if it were really possible. There is no COBOL documentation or support like it describes. 🤦‍♂️
Anytime you are using AI to generate code, you have to review the code and test it to ensure it works. It will absolutely make mistakes. What it thinks is the highest predicted answer doesn’t make it fact or accurate.
We only love our own code
If there is one thing all developers can agree on, it’s that we don’t like other people’s code. Reading and debugging someone else’s code is far harder than working with code we wrote ourselves. By writing the code, we gain a deep understanding of what it does.
Many developers quickly call other developers’ code technical debt and want to rewrite it. Not because there is anything wrong with it, but because they didn’t write it and don’t fully understand it. They are scared to modify it.
It’s hard to work with other people’s code for a few reasons:
Lack of consistency
Poor design choices
Lack of documentation
Different coding styles
Legacy code
It’s understandable why developers don’t like working on other people’s code. It can take hours to fully understand how code works if you are trying to troubleshoot problems with it.
Debugging other people’s code is one of our least favorite things.
What if it’s all AI code?
If the goal is to use AI to generate large percentages of code and developers have to review it, test it, and debug it… that doesn’t sound fun.
Some people love testing and QA work. Testing video games all day is a dream job for some people. However, testing code from AI generators all day sounds like a nightmare.
We want AI to help us do our job. We don’t want our entire job to be fixing the crappy code that comes out of AI. Troubleshooting bugs is hard, especially in other people’s code. Nobody wants that to be their full-time job.
We don’t want to be the assistant to the AI.
What about AI Coding Standards?
One of the funniest things to do with ChatGPT is to ask it to write a bunch of text and then tell it to write it like Darth Vader, Snoop Dogg, Donald Trump, or others. It is fantastic how it can change the writing voice and style.
You can’t tell ChatGPT to write code like Snoop Dogg, but you can tell it “smaller methods” or “10 spaces instead of tabs”.
Generating code with AI opens up a debate around coding style and standards it needs to follow.
Here are some examples of things to consider about the output of the AI code:
Code format - Tabs, spaces, curly brace placement, variable names
Abstractions - Interfaces and abstractions around everything?
Configuration - How does it use configuration or magic strings?
Method size - Are we going to an extreme max of 5 lines of code per method?
Inline code - Are we using inline ifs and other strategies to compact the code so much it looks minified?
Error handling - How does it handle exceptions?
Security - Hopefully, it doesn’t create SQL injection and other vulnerabilities
Performance - Does the code avoid N+1 queries and other performance problems?
Comments - Will it put comments in the code to explain it?
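The security bullet is the one I would check first in any generated code. String-built SQL is exactly the mistake a code generator can slip in, and a parameterized query is the fix. A minimal sketch using Python’s sqlite3 module, with a made-up `users` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern an AI might emit: building SQL by string concatenation
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # the payload matches every row

# Safe pattern: let the driver bind the value as a parameter
safe = conn.execute("SELECT name FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())  # no rows; the payload is treated as a plain string
```

Spotting the difference between those two lines in a review is trivial. Spotting it across thousands of lines of generated code all day is the job nobody wants.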
Taking code from it and cleaning it up to follow our standards defeats the purpose. In the future, I imagine we will train the AI with our existing code for it to follow.
It’s one thing to make a solution for FizzBuzz or a bubble sort. It’s another to create a new method in an existing code library that uses specific frameworks, configurations, conventions, etc. We will eventually get there with things like GitHub Copilot X that process our entire codebase.
The AI needs to work for us!
If I know anything about developers, nobody is signing up to debug broken AI code all day. We want to build things. We are excited to have AI help us build stuff. It makes for a great assistant to do repetitive tasks and spot potential bugs in our code.
The AI needs to be our assistant. None of us want to be the assistant to the AI, fixing its mistakes.
We don’t want to be the Assistant to the Regional Manager. We want to be the Assistant Regional Manager. Thanks, Dwight.