akhy.Write(blog): just another personal tech blog, by Akhyar A. (https://akhy.my.id)

KSW (Kubeconfig Switcher) (2023-11-25)
https://akhy.my.id/posts/ksw-kubeconfig-switcher

A while ago I created and published my own CLI tool for switching kubeconfig contexts. I wrote it in Go, and it’s easily installable using Homebrew:
brew install chickenzord/tap/ksw

Project repo: https://github.com/chickenzord/ksw

How it works

When you run the command and pass the context’s name:

ksw context-name
  1. Try loading the kubeconfig file from these locations, in order:
    1. The path set in KSW_KUBECONFIG_ORIGINAL (more on this below)
    2. The path set in KUBECONFIG
    3. The default location, $HOME/.kube/config
  2. Minify and flatten the config so it only contains the clusters and users used by the specified “context-name”, then put it in a temp file
  3. Start a new shell (the same one currently in use) with KUBECONFIG set to the temp file
  4. Additionally, these environment variables are also set in the sub-shell (a rough Go sketch of the whole flow follows this list):
    • KSW_KUBECONFIG_ORIGINAL: keeps track of the original kubeconfig file when starting recursive shells
    • KSW_KUBECONFIG: the same value as KUBECONFIG
    • KSW_ACTIVE: always set to “true”
    • KSW_SHELL: the path to the shell (e.g. /bin/zsh)
    • KSW_LEVEL: the nesting level of the shell, starting at 1 when first running ksw
    • KSW_CONTEXT: the kube context name used when running ksw
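
Here is a minimal, illustrative sketch of steps 3 and 4 in Go. This is not ksw’s actual code; the function name and example arguments are made up:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
)

// startSubShell starts the user's current shell with KUBECONFIG pointing at
// the minified temp file, plus the KSW_* variables described above.
func startSubShell(contextName, tempKubeconfig string) error {
	shell := os.Getenv("SHELL") // e.g. /bin/zsh

	// Remember the original kubeconfig across recursive ksw shells
	// (falling back to the default ~/.kube/config path is omitted here).
	original := os.Getenv("KSW_KUBECONFIG_ORIGINAL")
	if original == "" {
		original = os.Getenv("KUBECONFIG")
	}

	// Track the nesting level, starting at 1 on the first run.
	level := 1
	if prev, err := strconv.Atoi(os.Getenv("KSW_LEVEL")); err == nil {
		level = prev + 1
	}

	cmd := exec.Command(shell)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// For duplicate keys, entries appended after os.Environ() win,
	// so the sub-shell sees the temp KUBECONFIG.
	cmd.Env = append(os.Environ(),
		"KUBECONFIG="+tempKubeconfig,
		"KSW_KUBECONFIG="+tempKubeconfig,
		"KSW_KUBECONFIG_ORIGINAL="+original,
		"KSW_ACTIVE=true",
		"KSW_SHELL="+shell,
		"KSW_LEVEL="+strconv.Itoa(level),
		"KSW_CONTEXT="+contextName,
	)
	return cmd.Run()
}

func main() {
	if err := startSubShell("my-context", "/tmp/ksw-example.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}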

Features

  • Supports recursive shells (starting a ksw shell within a ksw shell)
  • Shows a built-in fuzzy finder (like fzf) when no context is specified as an argument (see the sketch after this list)
  • No automatic prompt indicator; use the provided environment variables to set one up to fit your setup
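
For the fuzzy finder, a library such as ktr0731/go-fuzzyfinder can provide the fzf-like interface. This is only an assumption about the implementation (I have not checked which library ksw actually uses), with a hardcoded context list for illustration:

package main

import (
	"fmt"
	"log"

	"github.com/ktr0731/go-fuzzyfinder"
)

func main() {
	// In ksw, this list would come from the loaded kubeconfig.
	contexts := []string{"dev-cluster", "staging-cluster", "prod-cluster"}

	// Opens an interactive fuzzy-search prompt and returns the chosen index.
	idx, err := fuzzyfinder.Find(contexts, func(i int) string {
		return contexts[i]
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("selected context:", contexts[idx])
}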

Some thoughts on the reason

You might wonder why I am reinventing the wheel when some tools already solve the same problem.

I want a kubeconfig switcher that is simple (as in the Unix philosophy) and integrates easily with my existing ZSH and Prezto setup without getting in the way. It must also integrate with other Kubernetes tools without many changes.

Other existing solutions I have tried:

  • kubectx and kubens: They are good, but I switch between and use multiple contexts concurrently a lot. Changing the context in one terminal changes it in the other terminals as well because they share the same kubeconfig file.
  • kubie: I took a lot of inspiration from this project, but somehow it does too much, and its messing with ZDOTDIR broke my ZSH setup.
  • kube_ps1: I am still using this for showing the current context, and it integrates well with ksw.

This project has also taught me several interesting things:

  • Spawning and interacting with sub-processes in Go
  • Understanding kubectl configurations. I dove into the Kubernetes source code to get an idea of how it works. If you check my code, you will see it reuses kubectl’s code as a dependency to mimic its config-handling behavior (see the sketch below).
  • Automated tests, builds, and releases using Goreleaser on GitHub. It was such a breeze that I now use Goreleaser in all of my Go projects.
  • Managing and publishing my own Homebrew tap. It allows more people to try and use my tools quickly.
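
As a hedged illustration of that config handling (again, not ksw’s actual code; the function name is made up), the minify-and-flatten step using client-go’s clientcmd packages might look like this:

package main

import (
	"log"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// minifyForContext loads a kubeconfig, keeps only what the given context
// needs, inlines external file references, and writes the result to a
// temp file. Illustrative only.
func minifyForContext(path, contextName string) (string, error) {
	config, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return "", err
	}

	// Keep only the cluster, user, and context referenced by contextName.
	config.CurrentContext = contextName
	if err := clientcmdapi.MinifyConfig(config); err != nil {
		return "", err
	}
	// Inline any certificate/key files so the temp config is standalone.
	if err := clientcmdapi.FlattenConfig(config); err != nil {
		return "", err
	}

	tmp, err := os.CreateTemp("", "ksw-*.yaml")
	if err != nil {
		return "", err
	}
	tmp.Close()
	if err := clientcmd.WriteToFile(*config, tmp.Name()); err != nil {
		return "", err
	}
	return tmp.Name(), nil
}

func main() {
	path, err := minifyForContext(clientcmd.RecommendedHomeFile, "my-context")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("minified kubeconfig written to", path)
}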

I have been using it on my own for several months now. Using separate shells for different contexts seems to be a good fit for my workflow. I set the default context in the default kubeconfig (~/.kube/config) to the non-production cluster I work on the most. Anytime I need to work on another cluster (especially a production one), I just run ksw to start a new shell for that specific cluster. This conscious effort to switch contexts has reduced the risk of doing something bad on the wrong cluster.

My multi-device Logseq sync setup (2023-11-13)
https://akhy.my.id/posts/my-multi-devices-logseq-sync-setup

Recently I completely moved my personal knowledge management from Obsidian to Logseq. I won’t go into full detail about the differences between the two or why I prefer one over the other.

To support my habit of continuous digital note-taking and knowledge gathering, reducing the friction of taking notes is a must. Therefore, having the app ready on every device I use is essential, and it poses a new requirement: the content must always be in sync across all those devices.

Problem: “I need my Logseq graph in sync across all my devices”

My Devices:

  • Work Macbook
  • Personal Windows Laptop
  • Android phone

Some background about my workflow:

  • I’m a Software Engineer by profession and it’s also my area of interest, so my personal and work notes intersect a lot
  • I use my work Macbook most of the time and write in Logseq daily to capture work-related notes
  • On weekends or holidays, I play and work on my side projects on my personal Windows laptop
  • I use my Android phone to take notes on the go. Sometimes I review and make some edits on the phone before I go to sleep
  • I’m not good at multitasking or using multiple screens simultaneously, so I usually use only one device at a time

Since a Logseq graph is essentially a collection of plain markdown files, my first thought was to just put them all in a Google Drive folder that can be synced seamlessly. However, this approach is far from ideal. The Google Drive client is proprietary, and I am not sure how it handles sync conflicts. I have also tried third-party sync apps like Folder Sync but was still not satisfied.

Note: I am aware of Logseq’s built-in sync feature, but my current usage is not yet enough to justify the subscription fee.

Then I decided to use Syncthing, a self-hosted, distributed sync tool. It seemed to tick the boxes for me:

  • It’s distributed, which means there’s no central server needed. All my devices can sync with each other peer-to-peer.
  • It supports file versioning
  • It creates a backup file on sync conflicts

Looks good, right? I immediately installed the client on all my devices and set the graph folder to sync across all of them. Luckily, Syncthing has wrappers for all the OSes I’m using.

(Figure: peer-to-peer sync between my devices)

The problem is that my devices are not always available and connected to the internet (for discovery) at the same time. This, along with other issues like the Android client’s service being paused or killed by a battery-saving feature, made the sync very unreliable. It caused a lot of sync conflicts due to the same file being edited in two places. I want to write and edit quickly everywhere, so waiting for the sync to finish every time I want to write something is not desirable.

What’s the possible solution? I added a new always-on “device”: a cheap DigitalOcean droplet I was already using for my other pet projects. I installed Syncthing in the droplet using Docker and opened some ports, so now I have 4 devices, one of which is always reachable by the others. (Sorry, non-tech folks; I might cover the detailed Syncthing setup in another post.)

(Figure: the 4th device added to the sync mesh)

Cool, now I don’t have to worry about lagging device sync. The server-side Syncthing acts like a central instance. I just need to ensure the DO instance is always up, which I already monitor anyway.

The first several days with this setup went very well, until I realized the sync sometimes lagged badly. The bottleneck is not the sync itself but the device discovery. By default, Syncthing uses central discovery and relay servers to find and reach other devices, and this adds a delay before the sync itself can start.

How to address this? Here’s what I have done:

  1. Ensure the server-side Syncthing only listens on tcp://0.0.0.0:22000, and open the firewall for this port. (Figure: server-side setting)
  2. Set up all the other Syncthing instances (on the laptops and phone) to connect to the server device using a static address instead of relying on discovery. Note that Daisy is my server’s device name. (Figure: setting on other devices)

Now the sync starts quickly and reliably. There have been very few sync conflicts lately because the files are synced quickly, even before I start editing on another device.

To recap:

  1. Set up Syncthing on all devices and sync the Logseq folder across them
  2. Introduce an always-on server in the cloud
  3. Use a static address to connect to the server

Next plan:

  • Run a conflict-resolver script on the server to automatically resolve sync conflicts with Git’s 3-way merge (a rough sketch of a starting point follows this list)
  • Run the Logseq headless API on the server as an entry point for integration/automation with other services
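
Nothing is built for the first item yet. As a purely hypothetical starting point, here is a Go sketch that only finds Syncthing conflict copies and pairs them with their originals; the /data/logseq path is an assumption, and the actual 3-way merge (e.g. git merge-file with a base version taken from Syncthing’s versioning folder) is left as a TODO:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"regexp"
)

// Syncthing names conflict copies like "page.sync-conflict-20231113-110400-ABC123D.md".
var conflictMarker = regexp.MustCompile(`\.sync-conflict-\d{8}-\d{6}-\w+`)

func main() {
	root := "/data/logseq" // assumed graph location on the server

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !conflictMarker.MatchString(d.Name()) {
			return err
		}

		// Strip the conflict marker to recover the original file name.
		original := conflictMarker.ReplaceAllString(path, "")
		fmt.Printf("conflict copy: %s\n  original:    %s\n", path, original)

		// TODO: 3-way merge, e.g. `git merge-file <original> <base> <conflict-copy>`
		// with a base version taken from Syncthing's .stversions folder.
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}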

That’s it. I will update this post if I find anything further 😉

Writing on Android (2022-03-06)
https://akhy.my.id/posts/writing-on-android

This post was written on my Android phone using SPCK Editor. Kinda awkward, but let’s see if this allows me to write more on this blog 😅

Update: It seems I don’t feel very comfortable using SPCK. I’m changing to Markor, a simple markdown editor that loads my posts from a local git repository. I have created a git-commit-and-push script and am trying to run it using Tasker + Termux. Let’s see if it works well 😁

Using Remark42 (2022-03-01)
https://akhy.my.id/posts/using-mark42

Just updated this blog with the Remark42 commenting system. The backend is self-hosted and supports various OAuth providers.

Automating interactive shell input in Go (2020-07-06)
https://akhy.my.id/posts/automating-interactive-shell-input-in-go

Once every several weeks, I take a moment to review and optimize my daily workflow, identifying bottlenecks or repetitive tasks that can be automated. This time I’m trying to tackle one of the most tedious tasks I do daily: logging in to a VPN using an OTP sent via email.

Why?

Context-switching

I have several VPN profiles and need to switch back and forth between them at work. Opening emails and hunting down the OTP code for each VPN session slows me down, especially in tight situations like firefighting during an incident.

I know automating 2FA sounds like it defeats its very purpose, but I still do it anyway for convenience and educational purposes. Let’s see how I’m doing it.

Analyze before automating

Most of the time, automating shell input can be as simple as:

echo 'y' | ./interactive-setup.sh

Or even better, keep sending y to the process’ stdin:

yes | ./interactive-setup.sh

Yes, “yes” is a standard Unix command.

So, why can’t we just do something like this? (note that oathtool is a CLI tool for generating time-based OTPs)

oathtool --totp --base64 OTPKEY | openfortivpn -c prod.cfg

It can’t be done because:

  • I don’t have the key used to generate the OTP
  • A new OTP is generated EVERY TIME, AFTER the connection is initiated
  • The generated OTPs are only sent via email; there is no other way

Let’s see how the openfortivpn command gets invoked:

❯ sudo openfortivpn -c prod.cfg
INFO:   Connected to gateway.
Two-factor authentication token:🔑

The server generates and sends the OTP via email right before prompting for it. Also, because it gets regenerated for each connection attempt, a wrong input will somehow “invalidate” the already-sent OTP (i.e. after a typo and a failed attempt, you cannot simply reconnect and input the correct OTP from the previous attempt).

My solution

I have looked at Unix’s Expect and also its Go implementation by Google, but I wasn’t interested in either. I decided to implement the logic in Go myself for learning purposes.

The program will execute openfortivpn and automatically input the OTP from my email (via IMAP). I have already set up a filter in my Gmail account so all OTP emails are automatically moved to a dedicated label/folder named “OTP”.

Here’s the logic outline:

  1. Spawn openfortivpn client in the background (let’s call it “the process”)
  2. Forward process stdout to the terminal while monitoring it for the input prompt (“Two-factor authentication token:” string)
  3. When the prompt is detected, run a function to fetch email and extract the OTP
  4. Write the OTP to the process stdin, then send a new line (like pressing Return)

It seems simple, but I learned quite a lot along the way. It taught me how to interact with a subprocess’s IO streams in Go, and how to properly use goroutines and channels (the hardest part of Go for me to understand).

Here’s the simplified code.

package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
	"strings"
)

func connect() {
	configFile := "prod.cfg"
	promptString := "Two-factor authentication token:"

	// Prepare command
	cmd := exec.Command("openfortivpn", "-c", configFile)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	cmd.Stderr = os.Stderr

	// Start command
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	defer cmd.Wait()

	// Wait for OTP prompt
	promptDetected := func(bytes []byte) bool {
		frags := strings.Split(string(bytes), "\n")
		if len(frags) == 0 {
			return false
		}

		last := frags[len(frags)-1]

		return strings.HasPrefix(last, promptString)
	}
	prompt := make(chan bool, 1)
	go func(ch chan<- bool) {
		scanner := bufio.NewScanner(stdout)
		scanner.Split(bufio.ScanBytes)

		buff := []byte{}
		for scanner.Scan() {
			bytes := scanner.Bytes()
			fmt.Print(string(bytes))
			buff = append(buff, bytes...)
			if promptDetected(buff) {
				ch <- true
			}
		}
	}(prompt)
	<-prompt

	fmt.Println("Getting OTP")
	otp, err := fetchOtpFromEmail() // delegate it to another function
	if err != nil {
		log.Fatal(err)
	}

	// Send input to the prompt
	io.WriteString(stdin, otp)
	io.WriteString(stdin, "\n")
}
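
The fetchOtpFromEmail function is delegated above and not shown in the post. As an illustration only, one possible shape using the third-party emersion/go-imap library could look like the following; the “OTP” folder comes from the Gmail filter mentioned earlier, while the credential environment variables and the 6-digit subject-line regex are assumptions:

import (
	"fmt"
	"os"
	"regexp"

	"github.com/emersion/go-imap"
	"github.com/emersion/go-imap/client"
)

var otpPattern = regexp.MustCompile(`\b\d{6}\b`) // assumed 6-digit code

func fetchOtpFromEmail() (string, error) {
	c, err := client.DialTLS("imap.gmail.com:993", nil)
	if err != nil {
		return "", err
	}
	defer c.Logout()

	// Credentials via env vars are an assumption for this sketch.
	if err := c.Login(os.Getenv("IMAP_USER"), os.Getenv("IMAP_PASS")); err != nil {
		return "", err
	}

	// Open the folder the Gmail filter moves OTP emails into (read-only).
	mbox, err := c.Select("OTP", true)
	if err != nil {
		return "", err
	}
	if mbox.Messages == 0 {
		return "", fmt.Errorf("no OTP emails found")
	}

	// Fetch only the newest message's envelope (headers).
	seqset := new(imap.SeqSet)
	seqset.AddNum(mbox.Messages)
	messages := make(chan *imap.Message, 1)
	if err := c.Fetch(seqset, []imap.FetchItem{imap.FetchEnvelope}, messages); err != nil {
		return "", err
	}

	msg := <-messages
	if otp := otpPattern.FindString(msg.Envelope.Subject); otp != "" {
		return otp, nil
	}
	return "", fmt.Errorf("no OTP found in subject %q", msg.Envelope.Subject)
}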

The complete code is in my GitHub project. It’s usable and configurable for your own use.

Minimal Makefile to Run Java Projects (2019-12-18)
https://akhy.my.id/posts/minimal-makefile-to-run-java-projects

Recently, a conversation with my SO reminded me of a piece of code I wrote a long time ago. It was a college assignment for a Data Structures course.

I immediately dug through my email and found the project compressed in a RAR archive. The project was written in Java with the NetBeans IDE’s default folder structure. There was no build configuration whatsoever (Maven, Ant, Makefile, or the like), only the NetBeans project config, and I didn’t want to install the IDE just for the sake of running the code.

After Googling a bit, I came up with a quick and simple Makefile to run the code. Luckily there were no external dependencies to deal with.

SRC ?= src
DST ?= build/classes
MAIN ?= Main

.PHONY: clean compile run

clean:
	rm -f $$(find $(DST) -name '*.class')

compile:
	mkdir -p $(DST)
	javac -d $(DST) $$(find $(SRC) -name '*.java')

run:
	java -cp $(DST) $(MAIN)
Moving to Jekyll (2019-08-27)
https://akhy.my.id/posts/moving-to-jekyll

This site is now generated by Jekyll from a GitLab repo and hosted on Netlify. This has improved and simplified a lot of things compared to my previous setup.

Mass-editing Jenkins Jobs in Views (2019-08-27)
https://akhy.my.id/posts/mass-editing-jenkins-jobs

As we might all know, Jenkins is arguably not the nicest piece of software to run in the cloud. Nevertheless, I still need to deal with it sometimes at work.

This time I was tasked with adding Slack notifications to all individual Jenkins Jobs, overriding the global default. It’s pretty trivial if you think of it as just a matter of some clicks and typing in the GUI. Unfortunately, there are too many Jobs to change (more than 20, IIRC) and I’m too lazy to do that manually.

The good news is that the specific Jobs I need to change are already grouped into several Jenkins Views with certain naming patterns. Now let’s leverage Jenkins’ built-in Groovy console with the elegance of the language’s functional-ish methods.

import jenkins.plugins.slack.SlackNotifier
import jenkins.plugins.slack.CommitInfoChoice 

def viewRegex = /^\d+\. Running on .*$/
def targetChannel = '#my-notification-channel'

SlackNotifier createNotifier(String room) {
    // the constructor has been deprecated
    // you might need to adjust it on newer version of Slack plugin
    return new SlackNotifier(room: room,
            baseUrl: null,
            teamDomain: null,
            authToken: null,
            botUser: false,
            sendAs: null,
            startNotification: true,
            notifyAborted: true,
            notifyFailure: true,
            notifyNotBuilt: true,
            notifySuccess: true,
            notifyUnstable: true,
            notifyRegression: true,
            notifyBackToNormal: true,
            notifyRepeatedFailure: true,
            includeTestSummary: false,
            includeFailedTests: false,
            commitInfoChoice: CommitInfoChoice.NONE,
            includeCustomMessage: false,
            customMessage: null)
}

Jenkins.instance
    .getViews()
    .findAll { it.name ==~ viewRegex }
    .collectMany {it.getItems()}  
    .each { job ->
        println(job.name)
        def notifier = job.publishersList.find{it instanceof SlackNotifier}
        if (notifier == null) {
            println('> no slack notifier, create new')
            job.publishersList.add(createNotifier(targetChannel))
            job.save()
        } else {
            println('> slack notifier exists, setting target channel')
            notifier.setRoom(targetChannel)
            job.save()
        }
    }

You can change the variable values as needed, and run it on your Jenkins instance at http(s)://jenkins.domain.tld/script.

Cheers!

My Hugo Setup (2018-04-20)
https://akhy.my.id/posts/my-hugo-setup

WARNING: this post is deprecated as I have moved from Hugo to Jekyll


Static blogs aren’t a new thing for me. I have known Jekyll, Octopress, Pelican, Hugo, and so on. Now I’m trying out Hugo: the blog you’re reading is generated automatically from my GitHub repository on every push I make.

the idea

As a DevOps engineer in my daily job, I’m accustomed to multi-environment setups (e.g. prod/dev environments) for deployments. Somehow I feel the urge to apply it to this blog as well *ahemm*. The branch mapping goes like this:

  • master → akhy.chickenzord.com (prod)
  • develop → akhydev.chickenzord.com (dev)

The point is that develop is used as a “draft” branch so I can try out new features/hacks that might not work on my local machine. I’d love to mindlessly push to develop when doing some trial-and-error hacks, sparing myself from making the local machine behave like the live server.

Furthermore, and obviously, the two environments share common configs but differ a little. For instance, dev should build draft posts as well (i.e. Hugo’s --buildDrafts flag).

implementation

For simplicity, I chose Caddy to serve the static files created by Hugo. You should give it a try too. Caddy’s configuration is very simple, and it has built-in support for Git and webhooks, which are the features I appreciate most in this setup.

{$SITE_ADDRESS} {
    root /var/www/html
    gzip

    header / {
        X-Frame-Options DENY
        Referrer-Policy "same-origin"
        X-XSS-Protection "1;mode=block"
    }

    log stdout
    errors stdout

    git {
        repo github.com/akhy/akhy.chickenzord.com
        branch {$GIT_BRANCH}
        path /var/www/app
        hook /webhook {$WEBHOOK_SECRET}
        then git submodule update --init --recursive
        then hugo -v {$HUGO_OPTS} --destination=/var/www/html
    }
}

Notice the variables? I made it like that so each environment can have different configs, hence the name “environment variables”. For instance, GIT_BRANCH has the value develop in dev and master in prod. You get the idea.

The interesting part is in the git block. It sets the source dir to /var/www/app and runs a Hugo build into /var/www/html whenever a webhook call is made to the /webhook endpoint. Kudos to Aleksandr Tihomirov for the idea in his blog post!

So my blog repository on GitHub has two webhooks:

  • https://akhy.chickenzord.com/webhook
  • https://akhydev.chickenzord.com/webhook

Every push to the repo will trigger both environments to re-generate blogs from their respective branches.

deployment

I run Caddy servers (yes, not just one) with the above configuration behind a reverse proxy, all running on Docker. I won’t explain it in detail, but the general structure goes like this:

(Figure: the final setup)

That’s it.

what’s next?

I’m still hacking around to add the Isso commenting system to this blog. It’s a self-hosted alternative to Disqus or IntenseDebate, so I’ll write another blog post about getting it working.

Cheers!!


Related links

  • https://zeta.pm/blog/building-this-blog/ (I take the idea from here)
  • https://github.com/chickenzord/docker-caddy-hugo
  • https://hub.docker.com/r/chickenzord/caddy-hugo/
Hello World (2018-04-16)
https://akhy.my.id/posts/hello-world

Just a first post placeholder.

Here’s some hello-world code in languages I love.

Bahasa Indonesia

🇮🇩 Halo Dunia!

Japanese

🇯🇵 こんにちは世界!

Python

print('Hello World!')

Shell Script

echo 'Hello World!'

Javascript

console.log('Hello World!')