Thursday, July 10, 2025

Building Resilience

At the beginning of the year, my manager and I had a conversation about my yearly goals. Due to the nature and newness of my current project, I felt the need to build more resilience to deal better with changes. He appreciated my initiative, but also mentioned that this would be an unethical thing to measure. I agreed with him; I do not want to measure it by putting an enormous amount of pressure on myself. But there has to be a way to build skills in this direction.

For the theory, I watched a very corporate course on Udemy. My first takeaway was that I am in much better shape than I thought. I do bounce back easily most of the time, and I am not thrown off by change or by things going wrong. I recognized events in the past where people around me were scared of what was going to happen, but I was calm and knew that I could adapt to any of the outcomes. I also recognized events that were tough for me, but I took them one step at a time and kept an open mind.

There were also things I was doing wrong that this course helped me understand. I always thought I had a hard time asking for help. I know this is true to a certain extent, but I also understood that it is not fully on me. When the environment is supportive and I trust the people around me, I have no problem asking for help. It is, of course, also my responsibility to build trust. This is becoming increasingly difficult with remote teams, where the only interactions you have are during work meetings, which often get heated. Add to this the personality of the average software developer, dealing mostly with computers and having very limited people skills, and it's easy to see how communication can be more challenging.

Partly due to this distance from my colleagues in the past, I sometimes ended up feeling isolated and blocked. I remember having days when I could not get any work done and I did not know who to ask or what to do about this. Sometimes I stayed up late trying to refactor parts of the code to make things work. Sometimes I tried to sleep, but couldn't due to feeling guilty about not being productive enough. Then the next day I was too exhausted to get anything done, and the cycle continued until I felt burnt out and sometimes even got physically sick.

I did learn this time that putting in extra hours this way does not lead to better productivity. I need to switch off at the end of the workday and do my personal things as well. I also need to take regular vacations that help me recharge, so I can push hard again when I am back. This constant push and pause leads to the best outcome and the most productivity on a personal level.

I found another source for learning about doing hard things and increasing my resilience in the podcasts I listen to from time to time. It has come up in a few episodes of the Huberman Lab that doing hard things is good for humans. There was this one idea that doing hard things strengthens the part of your brain that's responsible for your will to live. I find that idea fascinating. It is great that most of us are able to have an easy life, but we also have to understand that once it gets too easy, we need to create artificial hardships to give our brains a workout.

Similarly, doing hard things makes doing slightly less hard things easy. So if you keep pushing yourself to do harder things constantly, your everyday life will flow easily.

As I am currently reading the book "The Comfort Crisis" by Michael Easter, after listening to the related podcast, I find myself thinking more and more about the idea of doing things that feel almost impossible. I already have a few ideas on how to challenge myself physically, but I also need to add mental challenges to my life. Running more marathons or exploring new places still scares me, but I already know I am able to do all of these. Similarly, building good software is something I know I can do, but I still feel challenged by the conceptual work of coming up with complex solutions. Thankfully, my work gives me plenty of opportunities to do this, and I am not punished for failing.

As a next step, I am going to define challenges that push my limits. What this means, I do not know. I do know that I need to get out there more and do the hard things. And after doing the hard things, or failing at doing the hard things, I need to take a break. As Samuel Beckett said: “Try Again. Fail Again. Fail Better.”


Monday, August 5, 2024

Random Rant - In Defense of Developing Software

Working in software, for me, is about building complex systems that fulfill a certain need. This weekend I met someone who mentioned they were disappointed in software engineers and didn't understand the high salary requirements. They said most of the work is about copying code from one place to another, and basically trial and error for the rest. At first, this seemed fair; most of the code has already been written, so we copy-paste and look up many things.

But!

Knowing what exactly to look up is a skill in itself. Using the right keywords, searching within the right programming language, and scanning through conversations to make sure we understand the problem others were trying to solve, and how much it applies to our own, is something experienced developers do better. You cannot simply try to fit every snippet into your codebase until one happens to cover the known use cases. That would take a lot of time. Plus, there will be corner cases you might not think about at first.

Putting your code out there for others to use also takes time and effort. You often have to clean it from sensitive information. You have to explain it in an easy-to-understand way, and you have to clarify any upcoming questions. Your code, ideally, also goes through extensive reviews. It has to fit the style of the rest of the application, but be clear enough for newcomers to read.

There are also so many concepts out there you learn throughout your journey as a developer. There are implementation patterns to recognize. There are algorithms to compare to find the perfect one for you at the moment. There are data structures better suited for one problem than another. And that is really just a small part of the pure coding.

As a software developer, you are designing sophisticated systems that depend on systems built by other developers. New things are popping up every day; it is a worldwide community. It is impossible to keep up with everything that appears. It is already hard enough to keep up with the most popular projects in parallel with releasing our own work, because there are so many people pushing out new features every second.

You also have to understand requirements that even the person requesting them does not fully understand. You have to predict user behavior. You have to understand what can and cannot happen when your code is being used, and make sure that it does exactly what you think it does. Now and in the future. You build solutions to problems people don't even know they have.

The job is fun, though. We get to build interesting things daily. We interact with others who like to create similar and not-so-similar things. There is also a lot of creativity required in solving unusual problems. I especially love talking to down-to-earth developers with many years of experience. You can always tell how much care they put into their work. They also love to share if you are genuinely willing to listen.

I am really not sure why some people hate developers. This was also not the first time some man tried to tell me why he did not respect my profession. I just wish people would be less ignorant and more open-minded about things they do not fully understand.

Friday, February 2, 2024

GitHub Copilot First Impressions

I am aware that it has been a while since Copilot came out. I was a bit hesitant to install it because I wasn't sure that it would be a service I would use enough to justify paying for it. Also, the morality behind how it was built felt a bit icky to me, but I can see how it would be good for us in the long run.

1. Installation

I installed it a few days ago. I am using VSCode with Go, and I must say, the installation process was quick and painless. I could see suggestions in my code instantaneously. I also started writing a function mapping two types of payment methods, and, although the suggested structs were not the right ones, the logic was usable. I can see it will definitely be useful for naming things.

2. Authentication

The authentication part was not my favourite, to be honest. I was hoping that once I logged in from VSCode it wouldn't ask again, but the notification did pop up multiple times when switching between projects or restarting my machine. I did not investigate where it was coming from - it could be something set in the IDE - I just found it worth mentioning.

3. ChatGPT

I had some experience using OpenAI's ChatGPT to generate a function that needed logic to calculate whether one timestamp fell between two other timestamps. That one was buggy, and it took me some time to understand where the bug was coming from and provide a fix. I am still using ChatGPT for generating documentation and tests, and for that it is great. Documentation is general enough that I can very easily add the parts that behave differently in my code. With tests, I also prefer to have the debugger set up in my VSCode, so I can see exactly what is happening step by step.

That being said, I believe the use case is very different: Copilot feels much more like a pair programming partner living inside the IDE, rather than outside of my computer - the feeling I get when conversing with ChatGPT.

4. Shortcuts

The basic keyboard shortcuts are intuitive, I believe many other tools are using the same combinations. Their official list is nicely formatted in a table that is easy to read: https://docs.github.com/en/copilot/configuring-github-copilot/configuring-github-copilot-in-your-environment?tool=vscode

For now, I did not feel the need to overwrite any of them, but it definitely is nice to have the option.

5. Suggestions

For what it is worth, I am happy with the suggestions I have been getting so far, but as I said, it has only been a few days. The suggested code does not always compile, but since I am familiar with the codebase, I find it easy to locate the necessary data structures in the different packages. If I had just started a new project, I think I would disable Copilot, at least until I got familiar with the codebase during my first few tasks. On the other hand, I could also see the benefit of using it on an easy task that does not rely heavily on the rest of the code.

I will have to go through the documentation of GitHub Copilot, so I can discover all its features. I can already see that they have a chat function in beta on the JetBrains IDE, similar to what ChatGPT is providing - I am definitely looking forward to giving that a go!

Thursday, January 25, 2024

Getting Go Linker Flags from the Executable in a Docker Image

Between the holidays, I found myself working on a task for which I needed some IDs to test my code. As my colleagues were not available, I felt challenged to find them on my own. In what follows, I will describe what I did.


1. I checked where the variable was coming from.

I looked at the source code written in Go, and I saw it was an environment variable, so naturally, I checked the Docker image with:

$ docker inspect

No environment variables were set, besides some basic PATHs.

I checked the code further, looking at the configured GitHub Actions, and I realized the values came from the ldflags in the GoReleaser config.


2. I looked up ldflags.

As I had not needed to use them before, I googled ldflags. Apparently, they are flags passed to the linker, the tool that makes sure multiple compiled source files can work together - including the dependencies. ldflags can be used to inject values during compilation. They are useful for baking in the time and version of the build, and other special values.


3. I checked how I could get ldflags back from executables.

I first found the Linux command:

$ nm


4. I tried running the Docker image and getting a terminal to interact with the built binary.

I used the command

$ docker run --entrypoint /bin/sh -it sha256:c2dc7194677d97676e...

It did not work; `/bin/sh` was missing from the image. Upon investigating the Dockerfile, I saw that this image was built from scratch. Good for security, bad for my debugging purposes.


5. I built a new Docker image.

I knew the path of the executable in the original image, so I wanted to build a new image based on a base image that has debugging tools. I used the Go base image and copied the file over in the new Dockerfile:


FROM golang:1.21

COPY --from=${ORIGINAL_IMAGE}:${VERSION} ${ORIGINAL_EXECUTABLE_PATH} ${TARGET_PATH}

CMD ["go","tool","nm","${TARGET_PATH}"]


I then built and ran the image.

$ docker build ...

$ docker run ...

No response.
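In hindsight, I have a guess for the silence (an assumption, not something I verified): the CMD above is in exec form, a JSON array, which is not processed by a shell, so ${TARGET_PATH} is never expanded and `go tool nm` receives the literal placeholder string. A version with a concrete path avoids this; the placeholders on the COPY line stay as values to substitute manually, as above, and /app/binary is a path I made up:

```dockerfile
# Hypothetical fix: exec-form (JSON array) CMD does not invoke a shell,
# so variables like ${TARGET_PATH} are not expanded there.
FROM golang:1.21

COPY --from=${ORIGINAL_IMAGE}:${VERSION} ${ORIGINAL_EXECUTABLE_PATH} /app/binary

CMD ["go", "tool", "nm", "/app/binary"]
```

Shell-form CMD (without the brackets) would also work, since /bin/sh then performs the expansion.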


6. I started a shell in the Docker container.

I used the following command to get a shell in the newly built image's container:

$ docker run --entrypoint /bin/sh -it sha256:c2dc7194677d97676e...


7. I used the nm command.

$ nm ${TARGET_PATH}

nm: ${TARGET_PATH}: no symbols


Then

$ nm -D ${TARGET_PATH}

nm: ${TARGET_PATH}: no symbols


8. As this did not help me, I looked further and found that Go has its own version of nm.

$ go tool nm ${TARGET_PATH}

But this did not give me what I needed either.


9. Finally, I found a StackOverflow entry and executed the strings command from it, which simply lists the printable strings in a file. My guess is that this is also why the nm variants came up empty: release builds often pass -s -w to the linker to strip the symbol table, while the injected values remain in the binary as plain strings.

$ strings ${TARGET_PATH} | grep "${VARIABLE_NAME}"

This command finally returned the sensitive values I needed, and I could move on with testing the feature in my local environment.


Normally, I would advise that you ask your colleagues instead of investigating on your own, but you might find yourself in the same trouble as I did. It was still a fun exercise, and it was nice learning about the linker, flags, and the different commands you can use to investigate an executable.

Sunday, December 3, 2023

Post exam prep: things I have learned about learning

I did finish the book “The Google Cloud Certified Professional Cloud Architect Study Guide” to prepare for my exam. I did want to post about everything I was learning. The start was great - not “chapter-by-chapter describing everything perfectly”-great, but I liked how it was going, and I did get some nice summaries of the chapters. Up until chapter 6.

I did start writing about the networking concepts in chapter 6, but, to be honest, I cannot put them into words quickly in a way I would find easy to understand. As for the rest of the chapters, I got impatient. I know that writing out what I have learnt would have helped me cement my knowledge and find the gaps more easily, but I just really wanted to finish the book. I did learn from the later chapters, even if I was already familiar with most of the software development lifecycle and SRE concepts from my workplace.

Besides the technical and business knowledge I’ve got, here’s what I’m taking away from the preparations themselves.

Note: this is all very subjective.

Learning is a bit like working out – the more you do it, the easier it gets, and the more you will like it.

Sports can teach you a lot about this. Because most of the time you don't feel like working out. Or at least I don't. But I am aware that once I am there, and I am in the flow, the voice that earlier was so convinced I was too tired and hungry for pizza will disappear. Once the workout is finished, I feel proud and cannot stop smiling.

I am sure the processes behind learning are very different from sports. But accepting that voice, and knowing it might not be saying the things that will help me in the long term, is the same process. Taking cold showers is another example of how to get better at this.

Once you are doing it, focusing on one thing at a time, it can be quite meditative. I observed this at my workplace as well, as I was struggling with procrastination. Once I applied the same principles to getting started on my tasks, I felt like I got things done in a more timely manner.

If something needs to be done, it can be done even if it takes staying up until late.

I do not advise anyone to lose out on sleep, do not underestimate the power of it! (see “Why We Sleep” by Matthew Walker) Actually, I do advise against doing this in general.

But I used to put things aside to go to sleep early, just to end up watching TV for three hours, this time from the bed. So now I did sit down to learn, even if it meant starting after 10pm.

I do feel a certain satisfaction after finishing a chapter/section.

You know the tiny dopamine hit you get after finishing a task, any task? I got it after finishing a chapter. It made me want to do another chapter, and then I had to remind myself that I value sleep a lot and that I want to be productive at work the next day.

Getting into the flow makes it easier to get through the chapter.

After managing to sit down and start reading and taking notes, it was also important to eliminate distractions. I had to turn off the TV, put my phone away, and maybe even turn on my Pomodoro timer when everything else failed. Actually focusing on what I was reading, trying to understand it, and writing it down pushed me into a focused mood. 1-2 hours went by quickly.

Regular breaks are important – especially to protect my eyes.

My eyes do hurt when I sit in front of a monitor the whole day. I do try not to overuse them, but working in front of a computer the whole day, scrolling on my phone and watching TV does make them hurt. So to protect them, I do take breaks. At some point, I want to look into other ways to help them, but for now, taking regular breaks away from anything that requires focusing my eyes, and just looking out the windows, like an old lady, does help.

Writing down what I’ve learned is still the best way for me to make things stick.

I do not know anyone else who does this. I received doubtful looks from my teachers for it. One of them even asked if it does not take too long. It does take a long time and requires effort. But the effort is spent on the material you want to learn. You reflect on it, you think about it differently, and you understand it even better. If you’ve never tried it, I would say give it a go. If you have a friend to study with, maybe tell each other what you’ve learnt, ask questions, and challenge your understanding. I usually like to do things on my own, that is why I prefer writing.

Notes don’t have to be perfect or pretty for me – that part does not add to my learning experience.

Okay, I did enjoy using different colours on my notes. I like to look at them, they do make me feel good about the notes. But I also like to look at my handwriting. I did have to change my handwriting in university because others could not read it. As a result, my handwriting is pretty and resembles printed letters more. Different coloured lines and drawings do not add enough to the experience to make it worth it. Unless worthiness is not the point, and I am learning for fun.

Learning from a book and taking physical notes is very nostalgic, and reminds me of the hard-working and high-achieving student I used to be – past Lilla was impressive 🙂

The whole experience did remind me of who I was in school. I have not needed to take physical notes since university, which is completely fine. But doing this did make me feel closer to the highly driven person I was, always learning, searching for new books, taking in all the knowledge. I liked that person. That person did not need to think about adult things.

Exam Prep 5 – Designing Storage Systems

The next chapter is on storage solutions. There are a lot of them, to match all the possible needs applications could have. The main categories are:

  • object storage
  • persistent local and attached storage
  • relational and NoSQL databases

Flowchart for decisions: https://cloud.google.com/architecture/storage-advisor#decision_tree

Google Cloud Storage is the object storage solution on GCP. It is not a file system, there is no clear structure in it, and the files are treated atomically, which means that getting a part of a file is not possible. The files are arranged into buckets. Files in a bucket share access controls. The bucket names must be globally unique, therefore it is advisable to use a unique identifier in it. It has four tiers:

  • Regional – data is frequently accessed, and present in one region
  • Multiregional – data is frequently accessed from multiple regions
  • Nearline – for data accessed less than once per month
  • Coldline – data is accessed once per year or less

Cloud Filestore is a network-attached storage service. Used mostly for GCE and GKE, can be attached to multiple instances.

There are multiple databases available as well. Relational DBs follow the ACID principle.

CloudSQL is a managed relational database offering. It provides MySQL, PostgreSQL and Microsoft SQL Server instances.

Cloud Spanner is a globally scalable SQL DB on GCP. Ideal for applications that need to be available in multiple regions of the world.

BigQuery is a data warehouse used for analytics. It supports SQL. You pay based on the amount of data your queries scan, in addition to storage costs.

NoSQL databases use flexible schemas.

Cloud Bigtable is a NoSQL database used for data analytics. Perfect for IoT projects.

Cloud Datastore is a document-based NoSQL DB. Its successor is Firestore, which is advised for web applications requiring flexible schema.

Cloud Memorystore is a managed Redis. Used for caching.

GCP encrypts data at rest. The user has to take care of data retention and lifecycle management. Networking and latency have to be taken into consideration as well when designing an application using cloud storage.

The review questions in this chapter went better than the others so far. I did go through them twice to make sure my usual issue of writing down a different letter than the one I chose was not happening – I’ll keep doing that for future chapters.

Exam Prep 4

In this post, I will go over two chapters: one on technical requirements and the other on Compute services.

Designing for Technical Requirements

In this chapter, three broad categories of technical requirements were discussed: high availability, scalability and reliability.

Availability is the “continuous operation of a system at sufficient capacity to meet the demands of ongoing workloads”, and is measured in percentages. What this means in practice is that requests coming from clients should be responded to promptly. Many types of failures affect the availability of a service, starting from bugs introduced by developers to DNS server hiccups.

For Compute services, Google already takes responsibility for the availability of the infrastructure in most cases. They are not responsible for bugs resulting from coding errors, but in case of a physical server issue, they do perform live migration of the app. There are still some things the Cloud Architect or DevOps engineer has to take care of, like making sure to use redundant resources and IaC. The more managed the service is (Compute Engine -> Kubernetes Engine -> App Engine), the less there is a need to care about compute availability.

Storage services on GCP can be set up to have automatic backups and run in different zones (roughly, different data centers in the same region), so Google already takes care of most of the availability settings. Even for persistent disks, minimal effort is required to make them as available as needed, by providing a redundant copy or resizing them to fit growing file sizes.

With networks, two things have to be mentioned for availability: redundant connections and Premium Tier networking.

Scalability is the ability of software to grow (or shrink) based on incoming traffic. There’s horizontal scalability – which basically means being able to deploy another instance of the service to make it work with higher loads, and there is vertical scalability – meaning increasing already existing disks and CPU numbers to be able to respond to more requests. Managed services often offer auto-scaling, and Kubernetes also has a configurable auto-scaler object type. Managed instance groups can also scale by increasing or decreasing the number of instances in them.

Reliability is covered by most of the availability practices. Redundancy and following DevOps best practices are important for reliability. SRE practices make sure that monitoring, alerting, incident response and post-mortems are properly set up.

Designing Compute Systems

In this chapter, there were four GCP offerings discussed in more detail. I have heard from colleagues that Anthos has recently been added to the curriculum, so I will definitely have to look into that more.

The four services were Compute Engine, App Engine, Kubernetes Engine and Cloud Functions. I have a few years of experience with both Compute and Kubernetes and at least one year with Cloud Functions.

Compute Engine is an Infrastructure-as-a-Service offering, providing virtual machines running in different data centres all around the world. There are numerous machine types, each fitting different use cases - some with more and different vCPUs, some with more memory - and there are also custom/configurable types. Service Accounts are used by software that does not have a user account but still needs permissions for operations on GCP; they can and should be associated with VMs. Persistent disks make sure data survives the virtual machine, and the key management system helps secure the app better. Shielded VMs should be used when security is extremely important. Instance groups can be managed (started from a template) or unmanaged (mostly for load balancing).

Kubernetes Engine is the managed Kubernetes offering on GCP. It lets you run your own Kubernetes cluster, with very little need for configuration.

App Engine is a Platform-as-a-Service offering. It can run application code in a container without needing to configure the underlying infrastructure. It can be Standard (supporting only certain languages) and Flexible – being more general.

Cloud Functions are handy when a piece of code needs to be run after something happens. In CF there are functions triggered by events. These events can be happening in other GCP-managed services, and they can also be configured with webhooks. Even Stackdriver logs can trigger a cloud function via Pub/Sub, the messaging service.

It is highly advised to use all this with some sort of Infrastructure-as-Code – Deployment Manager is perfect for it.

I did do the tests at the end of both chapters and although I did make mistakes, I feel like I am going the right way. I’ve been thinking about signing up soon for the exam, but first I want to finish the book. I do have a date in my mind, in any case.
