LavaTech WriteFreely


Read the latest posts from LavaTech WriteFreely.

from Ave's Blog

How does a Turkish ID work anyways

Official image of the TCKK

The Turkish ID Card, aka TCKK, is a smartcard, and quite an interesting one actually. It has two separate chips for contact and contactless use. Both run a locally developed smartcard OS (AKİS/“Akıllı Kart İşletim Sistemi”, lit. “Smart Card Operating System”). It can theoretically use a locally developed IC (UKTÜM), but none of my IDs so far have had it.

The contactless interface

On the contactless (NFC) interface, it's an ISO/IEC 14443A based ICAO 9303-compliant eMRTD (Electronic Machine Readable Travel Document). I've done quite a bit of work recently to add eMRTD support to Proxmark3 and it can read my ID perfectly, but that's a blog post for another day.

The contact interface

On the contact interface, however, it's a completely different beast: It's based on a number of Turkish Standards [1], and it's seemingly quite secure.

It has various applets for things like ID verification, e-signatures (both with your identity and with your dedicated e-imza cert, though I believe the latter hasn't been deployed yet) and emergency health information. Sadly, however, it's not well documented publicly (other than some exceptions [4]), and all these cool features are simply... unused.

Dumping the cert

I dumped my first TCKK cert from my first ever TCKK back in 2018 by sniffing USB communications [7], wrote a tool to automate it in 2019 when I renewed my ID to get my photo updated, and finally got to use that tool again when I got a new ID in 2020 after losing my wallet in Leipzig after 36c3 [2].

Anyhow, today I open sourced that script and another one. I'll probably be publishing more over there in the future, especially as I understand ISO/IEC 7816-4 and ASN.1 better after implementing ICAO 9303. For now, I'll simply go over using that script.

This isn't intended to be a “TCKK verification for the masses” post, so I'll skip through the simple details.

Clone TCKKTools, install Python 3.6+, install the dependencies, plug in your favorite contact smart card reader (I use an ACS ACR39U), and put in your ID with the chip facing up. You'll likely also want to install openssl, as we'll be using it to convert the certificate and verify it.

Run the dump script with python3, and it should dump your certificate to a file called cert.der:

Cert dump procedure and the dumped der file shown on a terminal

You can convert the cert from der to pem with openssl x509 -in cert.der -inform der -out cert.pem.

You can view certificate details with openssl x509 -in cert.pem -text.

Verifying the cert [3]

First off, ensure that you converted the certificate to pem format and that you have openssl installed.

Secondly, let's grab the required files. Download the following URLs (do be aware that the .crl url is fairly big, around 350MB):

So... there's an odd thing where kokshs is a der file and kyshs is a pem file (and kyshs lacks a newline at the end of the file), so the procedure is a little odd. In any case...

# Convert CRL to a PEM
openssl crl -inform DER -in kyshs.v1.crl -outform PEM -out crl.pem
# Add a newline to kyshs.v1.crt
echo "" >> kyshs.v1.crt
# Convert the kokshs.v1.crt file to a PEM
openssl x509 -in kokshs.v1.crt -inform der -out kok.pem
# Join intermediary cert with root cert to create a cert chain
cat kyshs.v1.crt kok.pem > chain.pem
# Join chain and CRL into a single CRL chain
cat chain.pem crl.pem > crl_chain.pem

Additionally, you may have issues verifying the certificate, as the CRL had (at the time of writing) expired roughly 2 weeks earlier, so we'll be skipping CRL expiry checks. If this is no longer the case in the future (see [5] for how to check), drop -no_check_time. See [6] for what happens if you run without it.

To verify the certificate, run this command:

openssl verify -no_check_time -crl_check -CAfile crl_chain.pem cert.pem

It should take a while, but it will go through the whole CRL and verify your TCKK cert's validity.

If you see a message like this, then your TCKK certificate is valid:

cert.pem: OK

However, if you see one like this, then it isn't:

C = TR, serialNumber = 1234568902, CN = ACAR HASAN
error 23 at 0 depth lookup: certificate revoked
error cert2017.pem: verification failed


I'd been curious whether the certificates from the old IDs I was keeping around were in the long, long CRL the government publishes, but only got around to checking today. It was nice to see that they were indeed in there.

I've also been meaning to publish some of the TCKK research I've done, and publishing this post and the two scripts over at TCKKTools feels like a good start. I look forward to publishing more.


This is just one of the many ways to verify someone's identity using the TCKK. It may not be a legally acceptable way of verifying someone's ID for actual commercial purposes (I simply haven't checked).


1: TS 13582, TS 13583, TS 13584, TS 13585, TS 13678, TS 13679, TS 13680, TS 13681.

2: Funny story actually. I got through the whole event without losing anything, then dropped my wallet at an Aldi in Leipzig Hbf. I almost missed my flight searching for it. I called my banks on the S-Bahn to cancel my cards. When I got to the airport there was a “Final Call” for me; Turkish Airlines staff warned me that I was late but said they'd let me through, and airport staff practically pushed me to the front of the passport line. The border control dude still took his sweet time counting the days I'd spent in Germany before finally letting me through. I was the last to board. I ended up getting my NVI appointment date while taxiing to the gate at Istanbul Airport. But in the end everything worked out and I got everything reissued, which is okay I guess.

3: Huge shoutouts to this article, as I based the CRL verification on it.

4: There's apparently a person called Furkan Duman, who works at a company developing ID verification technologies and has posted some tidbits in Turkish on his blog. I haven't had a chance to read much of it yet, but it looks quite interesting:

5: Run openssl crl -in crl.pem -text -noout | grep "Next Update". You can safely Ctrl-C after the first line of output; otherwise it'll go through the whole file for no good reason. If the shown date is in the past, then the CRL has expired.

6: Running without -no_check_time leads to rather confusing output from openssl. You still get the same output when feeding it invalid certificates, but you also get error 12 at 0 depth lookup: CRL has expired. However, on valid certificates, while you don't get error 23 at 0 depth lookup: certificate revoked like you do on invalid ones, you still get the CRL has expired line, and that leads to a verification failure, which ends up being a little confusing.

7: Huge shoutouts to linuxgemini for informing me this was possible (and overall sparking my interest in smartcards and RFID tech) and showing me how to do it on a cold election day in Ankara when I flew back to vote.


from lun-4 (imagine gpt-3 but gayer)

I'm glad you asked (you did, I promise) because I can infodump for an hour.

hatsune miku votes too!


Every 2 years, Brazil runs elections: one for the local offices, where someone votes for the city Mayor and the city's representatives, and another where someone votes for the President, State Senators, the State Governor, and State Deputies.

Local elections happened on November 15th, 2020, so what better day to start writing a blog post describing how it all works than today?

This is going to be a technical explanation of how things work, so I'm skipping many pre-election and post-election processes; I'm going from candidate loading to vote counting.

The courts

In Brazil, we have the TSE, the Tribunal Superior Eleitoral, which translates directly to Superior Electoral Court. It is the practical creator and maintainer of the voting systems used around the country. Below the TSE come the TREs, Tribunais Regionais Eleitorais (Regional Electoral Courts), which are the courts that actually collect the data from the voting machines and send it back to the TSE (more on that later in this post).

Pre-election things

Every Brazilian citizen between 18 and 70 years of age must vote, and citizens aged 16 and over may vote. Not voting when you must vote means paying a fine to the TSE (and if the fine is not paid, there are other consequences as well, such as being unable to get a passport issued or to be admitted to a public university). To be able to vote, a citizen must get their Voter Card (my translation of Título Eleitoral), and in recent years the Voter Card creation process has involved taking a picture of the voter and recording their fingerprints (more on that later).

In the creation process, citizens are asked where they would like their voting to take place. I don't know the criteria, but places can become unavailable once they are full of voters. After your selection, you are assigned an Electoral Zone and an Electoral Session. The hierarchy goes as follows: State –> Location (city, suburb) –> Zone (number) –> Session (number). Schools are the most common places to serve as Electoral Zones.
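As a rough sketch, the hierarchy above can be modeled like this (the class and field names here are my own illustration, not anything from TSE):

```python
from dataclasses import dataclass

# Hypothetical model of where a voter ends up after registration:
# State -> Location -> Zone -> Session.
@dataclass(frozen=True)
class VoterAssignment:
    state: str
    location: str  # city or suburb
    zone: int      # Electoral Zone number (commonly a school)
    session: int   # Electoral Session number within that zone

assignment = VoterAssignment(state="SP", location="São Paulo", zone=4, session=112)
```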

After candidates for the respective year's positions are selected, they are loaded into blank voting machines. The loading process is done via a specific kind of USB flash drive with a specific plastic shell that guides it into the back of the machine (preventing people from trying to jam it in, flipping it over, and trying again: the usual USB problem). After that is done, another USB flash drive (of the same kind) is loaded in, and its slot is locked, as that flash drive will contain the votes.

Electoral Session pre-election-day ceremonies

The election days are selected by the TSE and are composed of a 1st and a 2nd round. A 2nd round happens when no candidate for a major position (such as mayor, president, or governor) reaches a majority (51%+) in a location with more than 200 thousand voters. The 2nd round has just the 2 leading candidates from the 1st round.

On election day, the mesários (volunteer workers for the TSE/TREs who manage the election) assigned to a specific Electoral Session, along with the relevant managers, perform the starting ceremonies for the voting machine their TRE gave them. That ceremony happens before the actual voting hours (election day runs from 07:00 (7AM) until 17:00 (5PM)), so between 6 and 7AM.

The setup ceremonies involve testing the machine's hardware and making it print a report showing that it does not contain any votes. That report is called the Zerésima. You can see examples of them on the TSE website.

After those ceremonies, the session is ready to accept voters.

Incoming voter process

When a new voter comes into the voting session, they must have some form of identification of themselves and of their “voter self”. For the former, they can use either TSE's digital voter card app, the E-Título (recommended during the pandemic; it only works for voters that registered a picture of themselves), or any government-issued ID document (passport, national ID card, driver's license). For the latter, either the E-Título or their voter card works.

Upon showing their identification, one of the Mesários checks the identification provided against the session's book. That is a physical paper notebook that contains, possibly, a picture of each voter, their name, and their voter card number. When a match is found, the voter's number is told to the Mesário operating the Terminal. Depending on whether the voter has their fingerprint registered, the Terminal requires the voter to put their finger on its reader. (This bit may be disabled entirely, considering COVID; as an example, I was not asked for my fingerprint in 2020's elections, even though I was in 2018's.)

The Terminal controls the voting machine in terms of “starting a voting session”, “ending the session”, “stopping a vote if the voter didn't finish their vote”, “requesting accessibility options”, etc.

After voter identification, the voting machine is now ready to receive the vote.

Voting machine architecture (physical)

The voting machines are all manufactured by TSE and TSE contractors and sent out to TREs around the country in the months leading to the election.

Here's a picture of it:

brazillian voting machine

You can try a little simulation TSE made using highly advanced JavaScript (note: fully in Portuguese)

Voting machines do not have any networking equipment (Wi-Fi, Ethernet, Bluetooth, etc); their only connections are the link to the Mesário Terminal and a power cable. Voting machines have batteries as well, to handle remote locations.

Voting machine software architecture (high-level)

The voting machine runs Linux with a highly customized userland made by TSE.

Inside the voting machine there is the Registro Digital do Voto (RDV, Digital Vote Registry). That file can be thought of as a spreadsheet where the columns are the political positions, each row is a voter, and each cell is a vote for that position. When a vote is cast by the voter, it is randomized among the other votes, emulating a physical ballot box being shuffled around. TSE has a high-level explanation in Portuguese.
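As a toy sketch of that idea (the class and structure here are my own illustration, not TSE's actual file format): each cast vote is inserted at a random row, so the stored order preserves no link between voting order and vote content, while the tally stays exact.

```python
import random

class RDV:
    """Toy Digital Vote Registry: columns are positions, rows are voters."""

    def __init__(self, positions):
        self.positions = positions
        self.rows = []

    def cast(self, votes):
        # votes: dict mapping position -> candidate number
        row = [votes[p] for p in self.positions]
        # insert at a random index, emulating a shuffled physical ballot box
        self.rows.insert(random.randrange(len(self.rows) + 1), row)

    def tally(self):
        counts = {p: {} for p in self.positions}
        for row in self.rows:
            for position, candidate in zip(self.positions, row):
                counts[position][candidate] = counts[position].get(candidate, 0) + 1
        return counts

rdv = RDV(["mayor", "councillor"])
rdv.cast({"mayor": 10, "councillor": 101})
rdv.cast({"mayor": 20, "councillor": 101})
rdv.cast({"mayor": 10, "councillor": 202})
```

Whatever order the rows end up in, the tally is the same, which is the point: the report can be reproduced from the file without revealing who voted when.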

After a single vote

After the voter has cast their vote, they're asked to sign the notebook, and the Mesário gives them a piece of paper, taken from the notebook itself, which is proof that the voter actually voted. The voter can now leave the session, and the next one comes in. This cycle repeats until 5PM.

Electoral session post-election-day ceremonies

At 17:00 (5PM), all voting sessions are due to stop soon. If there are any remaining voters in the queue, they must be attended to (which means a session might stop after 5PM), but no new voters are admitted (so the actual place closes its doors, but the sessions inside stay open).

After all voters have gone through, the Mesários issue the command to stop the voting machine. When that happens, it prints out the total count of votes cast in that session, as in "candidate X got N votes", "candidate Y got M votes", etc. This report is called the "Boletim de Urna"; I'll translate it as Vote Report.

The Vote Report is generated from the RDV file cited earlier and is printed by the voting machine up to 11 times. The first copy is a check that the machine's printer is operational; then come 5 required copies, then up to 5 optional copies that may be requested by political party representatives, the media, or the Ministério Público. Other reports printed are the Justificative Report and the Mesário Identification Report.

All 5 required copies of the Vote Report and the Justificative Report are signed by the President of the Voting Session, the Mesários, and other Inspectors. Mesários must sign the Mesário Identification Report.

The Vote Report MUST be publicly available, either through physical means (one of the copies is posted at the voting session itself) or through digital means, on the TSE's website. The Vote Report contains a QR code, so you can cross-reference the tally made by the voting machine with what the TSE received.

Another copy is given to one of the Inspectors in the session, or the media, or a representative of the Ministério Público. The remaining reports are sent back to the TREs.

After all of those are finished, the voting machine displays that the session is over, and can now be turned off by the President of the voting session.

Vote Transfer

The Vote Report, while sent to the TREs, is not how votes are processed. The voting machine writes all of the information in its Vote Report to the locked-up USB flash drive. The seal and lock are undone under the supervision of the session's President, and all flash drives for the Zone are physically sent back to the TRE.

At the TRE, those flash drives are loaded for final counting; their data is sent to the TSE via a separate network (provided by Embratel), and from there the results are distributed online and via TV to everyone in the country. Here's the website for the 2020 Local Elections.


from lun-4 (imagine gpt-3 but gayer)

Before actually starting this, I want to say that programming language discourse is 99% of the time based on subjective experience. What's written here is my subjective experience dealing with Python. Python may be good for you, or bad for you, or whatever. Replacements beware.

Time changes, people change

I started learning Python back in approximately 2013, after attempting to learn C but failing. But I would only consider writing Python for internet-facing production in 2017.

This rant has hindsight bias, for sure. But a good point about Python is that I didn't really know how to program before learning it. Sure, I did write massive if chains in C, but nothing more complex, because as soon as I started reading the pointers chapter I would get confused.

Nowadays I'm not that confused by C pointers, or by the difference between the stack and the heap (thanks, Zig), and I'm a different person, with different goals and needs for what I want my software to be, or what its principles are. But even though I'm a different person, I'm still locked in to the younger me with Python, and now that I can put “5 years of experience writing Python” on my CV, I don't think this will go away any time soon.

A non-exhaustive list of things that keep me sad while writing Python

I could go on and enumerate things I'm not happy with and that I ultimately don't have the power to fix, so here we go:

  • Typing is sad: it's a mix of giving you static type checking without actually breaking the language. I mean, yes, typing is good, but in the end you can declare a function that takes an int, give it a string, and only find out something is wrong at execution time unless you have mypy. So you go to install mypy, and then it doesn't find some library, or a library doesn't have typehints, or it decides it never wants typehints, etc, etc, etc.
    • A subproblem of typing is using things like typing.Protocol, which I tried to use, once. It passed type checking afterwards, but I felt dirty after writing something like that. I wish I could elaborate further on why, other than that general feeling of “something's wrong” while looking at it.
  • Packaging. God. Packaging. After going through requirements.txt, poetry, dephell, pipenv, I just gave up and settled on since it's the only guaranteed thing that might work.
  • Not being happy with the “large” frameworks (AKA Django) means I have to go to things like Flask, but then there's asyncio, so I go to Sanic, but Sanic is its own hellish thing, and I'm now comfortable on Quart. And by using a niche of a niche of a niche web framework, I have to create my own patterns for, say, how to hold a database connection: just put app.db = ... in @app.before_serving, and you can be sure mypy does not catch anything about that at all. Love it.
  • A bug in a completely separate part of my software was caused by a bump in dependencies without bumping my Python version on CI. I kept banging my head for hours until I gave up and updated from 3.7 to 3.9.
  • Endless language constructs seem to be the norm now. I don't like that we now have 4 different ways to merge dictionaries together, just because the existing ones didn't have “a good syntax”. That feels like a really low bar for “new” language constructs.
    • This is new in 3.9, so it's likely the two last entries are more rants than critiques, idk.
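For reference, here's the dictionary-merging situation the last items complain about; all of these produce the same result (the `|` operator requires Python 3.9+):

```python
a = {"x": 1, "y": 2}
b = {"y": 20, "z": 30}

m1 = {**a, **b}              # 3.5+ unpacking syntax
m2 = dict(a, **b)            # classic dict() call (keys must be strings)
m3 = a.copy(); m3.update(b)  # the original copy-then-update way
m4 = a | b                   # 3.9's new merge operator

# all four agree; values from b win on key collisions
assert m1 == m2 == m3 == m4 == {"x": 1, "y": 20, "z": 30}
```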

The future?

I don't know. None of those things look like they will ever be fixed. I'm seeing if I can write webshit in Zig, but that's all experimental. It might take much, much longer to “ship”, but I feel that by not having the ecosystem or an interpreter slowing me down, I'd have a better development experience.

I can't deny that I now have years of experience with Python, though, and I have to put something in my curriculum.

The tech industry makes me sad.


from lun-4 (imagine gpt-3 but gayer)

Or, “Monolith First, Complicated Thing Later, Also Plan Before Switching, And How Those Things Have Been Known For Ages, HELP”

I want to start this with Gall's law, from General Systemantics:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

When I see large-scale distributed systems being written with Kubernetes or whatever, I get the gut feeling Gall's law wasn't followed, though my sample size is very small: 1.5 systems. The 0.5 comes in because the other system I'm managing isn't on Kubernetes (yet; I'm sure it'll become that after a while).

I consider that jumping from a monolith line of thought to a microservice line of thought, for any team or system, is a very large jump that must be scrutinized before actually making it. Microservices are no silver bullet, mostly because developers are taught to write monoliths their whole lives, and then you push microservices on them, which come with a lot of specific tooling; and as the years pass, even more tooling is created, to the point that enumerating it all can be absolute sensory overload. Really.

My Bad Kubernetes Experience

In one of the systems I was developing, I wasn't actually messing with Kubernetes directly, but I had a very poor experience integrating with it regardless. The monolith app I developed lived on a VPS and had to push data to MongoDB and Redis nodes living inside the Kubernetes cluster. The person that managed the cluster told us to use kubectl port-forward, and sure, we tested it out and it worked, at least at the start, but we all know how it goes from here.

While the system was operating at spike load, the cluster restarted itself, and kubectl port-forward lost connectivity with it. In my mental model it would reconnect to the cluster: it has authentication, and it doesn't need to hold any state. Yet it didn't, and our app remained unable to connect to Redis (with a very cryptic ECONNRESET in the app logs) for a long time while we attempted to debug it. A restart of the systemd unit that did the port-forward “fixed” it, after we asked the person operating the cluster how Redis was doing and they told us it was up.

After talking to a friend about that situation (a long time later), they told me port-forward is “a broken ssh tunnel that might not even work” and “not meant to be used for anything other than testing”. So I could have used some other Kubernetes solution that wouldn't have made me write three paragraphs.

My Bad Microservices Experience

I think I can say this one was caused by bad management wanting to put “novel” things on teams that are very new, don't know what a container actually is, or don't know what distributed system design actually entails, down to the failure modes of such a system.

When you jump into microservices and design with them in mind, you have to know that you're designing a distributed system, and while the units composing that system are small, simple, and understandable, the whole can be much, much more complicated (see: natural systems the world over, including human-made ones!). You must also know all the “fallacies of distributed systems”, which can sound somewhat cliché but are actually quite true. At this rate, if I'm developing a distributed system, I should print out the list and put it next to me and every other developer; I've seen them happen, even though the fallacies have been enumerated since... 1997? It's weird how we're still having issues that come from those fallacies in 2020. We should do better.

I think this boils down to “people should learn a little bit before jumping on the microservice hype”, and in most cases Gall's law still applies. I believe things should first be written as a monolith, so the developers learn the business logic in an environment where function calls aren't remote everywhere. Then, if scaling is required, investigate a distributed architecture, and whether you can actually scale there, or do something else.

Offloading to the database

Another data point on overall system design: you shouldn't offload everything to the database, as in, make every microservice need it for every operation, because when things like autoscaling jump in and you suddenly have 10 times the number of microservices, your database will suffer 10 times the load. That might catch fire.

Though if the database does catch fire AND is a managed database solution, you can blame the cloud company providing it instead of yourself (hrrrrm, Discord loved to do this in their postmortems, but nowadays they don't even give any postmortems. It's sad).


from LavaTech Blog

Hello everyone welcome to my quick guide on how to change password in Conversations:

0) Open Conversations, go to the main view, hit the 3 dots on top right

1) Go to Manage accounts

2) Pick your account

3) Hit the 3 dots on top right, smash that “Change password” button

4) Change your password by putting your current password to the first box and the new one to the second, and hitting the “Change Password” button

Thank you for reading my guide.

(Tags to jump to other related posts: #guide, #a3pm)


from lun-4 (imagine gpt-3 but gayer)

Disclaimer: I do not work for Discord, or with Discool. If I wanted to look cool I would call myself an “independent security researcher”, but in the end I just stare at Chromium's DevTools while Discord is open for an hour.

What are Intents?

Discord unveiled Gateway Intents a while ago. A PR on the Discord API Docs repository was created to help library developers start adding support for Intents, and it provides some answers to questions developers raised.

The general idea of Intents is that instead of every event being thrown at your gateway connection, only the events you actually want are sent; this gives huge wins in overall network usage. Most bots get a ton of typing events while being unable to do anything with them, for example. There was a stop-gap measure for this via the “guild_subscriptions” field, but the general idea is that Intents are the way to go, as they're more generic.
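Mechanically, intents are a bitmask sent in the gateway IDENTIFY payload; the gateway then only delivers events whose intent bit is set. A minimal sketch of that filtering (the bit values below match Discord's documentation at the time of writing; the helper function is my own illustration, not a library API):

```python
# Intent bit flags from the gateway documentation (subset)
GUILDS = 1 << 0
GUILD_MEMBERS = 1 << 1        # privileged
GUILD_PRESENCES = 1 << 8      # privileged
GUILD_MESSAGES = 1 << 9
GUILD_MESSAGE_TYPING = 1 << 11

# A bot that only wants guild and message events
wanted = GUILDS | GUILD_MESSAGES

def wants(intents: int, event_intent: int) -> bool:
    # the gateway delivers an event only if its intent bit is set
    return bool(intents & event_intent)
```

With this mask, typing events simply never arrive, which is the bandwidth win described above.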

Of all the intents, two are considered “Privileged”: members and presence, providing the member list and user statuses (online/offline/what game they're playing), respectively.

I'm writing this on October 3rd, 2020 (with many drafts afterward). I did have a lot of time on my hands to study Intents, but I'm only doing it now, and sharing the results, because I had to port my personal bot to them.

What's the actual deal/reasoning with Privileged Intents?

From the original blogpost:

We believe that whitelisting access to certain information at scale, as well as requiring verification to reach that scale, will be a big positive step towards combating bad actors and continuing to uphold the privacy and safety of Discord users.

To use Privileged Intents, your bot:

  • must be in fewer than 100 guilds, or
  • if it is in 100 or more guilds, you must be a “verified developer” with a “verified bot”. If you attempt to bring your bot down to fewer than 100 guilds after that, you can't have the intents back.
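That policy is simple enough to state as a predicate (my own encoding of the rule described above, not anything from Discord's API):

```python
def can_use_privileged_intents(guild_count: int, verified: bool) -> bool:
    # Under 100 guilds, anyone can flip the privileged toggles;
    # at 100 or more, only verified developers with verified bots keep them.
    return guild_count < 100 or verified
```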

The verification process involves filling out a form describing your bot's functionality and providing a government-issued identification document via Stripe Identity. Depending on your country, your passport may be the only available option; you can check that information in the Stripe Identity documentation.

There is a grace period before bots are forced to be verified: October 7th, 2020. Bots in more than 100 guilds past that date will not be able to join any more guilds.

From the blog post, and the timing surrounding it, I can safely say the creation of Privileged Intents is an answer to what I dub the “Discool discourse”. They haven't said that publicly, and I can only base this on a feeling.

Discool was a service that stored a lot of Discord user metadata. You could put a User ID in and get the list of guilds they were in (not all guilds, of course, but a lot of public guilds had their data scraped).

The way they did it was by creating burner user accounts that could join those guilds, scraping all the data already available to users (the Discord client needs the member list, after all), and storing it in a database. Given enough time and effort, you have a pretty big privacy scandal regarding user data and Discord.

From my perspective, Privileged Intents were made so that developers who create a large-scale privacy scandal are legally bound to Discord, and can be taken to court, etc, etc, etc.

Are the Privileged Intents a wrong way to protect data?

I think so, or at least that it's very flawed. As I said, Discool used user accounts; they never used bots in the first place, so any Intent business would never reach those bad actors. Of course, Discord has been implementing various kinds of extra user verification with “machine learning”, but the point is that anyone with a browser can start up Selenium, point it at Discord, and start scraping. Hell, given enough time and effort, someone could gcore the browser process to scrape data while still being classified as having “human sentient behavior”.

Even in the case of an actual privacy scandal on Discord, the damage is still there: even though you're limited to 100 guilds, and even though that's technically fewer users, it's the same kind of data, and you can still fuck up the lives of some people. I think Discord only cares about the numbers being smaller.

When giving data out to anyone, you should assume the worst, because you don't control their intent (pun unintended). Sure, requiring ID, and thus making the existing legal system a protection against misuse of user data, works with more than 0% efficacy, but I don't think it's a good solution.

As for giving out my own data, I'm not sure I can trust Discord's handling of it. I lost my trust after they explained that their data collection (the famous /api/science route, formerly /api/tracking, which got blocked by extensions to the point they had to rename it), if turned off in the client, would still continue sending events (you can prove this with devtools), and in their FAQ post they just said the events are dropped on the server.

The nitty gritty: when you turn the flag off, the events are sent, but we tell our servers to not store these events. They're dropped immediately — they're not stored or processed at all. The reason that we chose to do it this way is so that when you turn it off on your desktop app it also turns off automatically on your phone – and vice-versa. This allows us to keep things the same across all of our apps and clients, across upgrades.

Believe me when I say: since they already sync user configuration across clients, it would be easy to make the app stop sending tracking events upon seeing the already existing configuration option to disable tracking.

Learning to bypass limitations imposed by Privileged Intents

For context: in the library that I'm using for my bot, there's a full command framework, discord.ext.commands, in which you can declare a command like so:

from discord.ext import commands

@commands.command()
async def some_command(ctx, argument1, argument2):
    ...

One specific feature of the framework is the ability to convert raw strings into useful objects. For example, if I wanted to receive a user, I could say

@commands.command()
async def some_command(ctx, user: discord.User):
    ...

and the framework will convert a mention, an ID, or a username back into a User object. This is possible because Discord gives all member data to the bot on the bot's startup. It is fully dependent on the members privileged intent.

Without the members intent, the member lists are basically empty, and the feature doesn't work anymore.

Bringing the feature back

Discord's message events contain full user objects (ID, name, avatar, discriminator, etc.) and member objects (roles, nickname, etc.), since the client must be able to draw the UI containing that user.

Even though you don't have full member lists on startup, you can use that fact to passively collect data and re-create the feature. It works like this: on any message, store the user in persistent storage; when needed, look them up from that storage, and maybe cache the result for performance.
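A minimal sketch of that passive collection, using sqlite3 as the persistent store (all names here are illustrative, not discord.py API):

```python
import sqlite3

def init_store(path=":memory:"):
    # One row per user we've seen; refreshed on every message.
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER PRIMARY KEY, name TEXT, guild_id INTEGER)"
    )
    return db

def on_message(db, author_id, author_name, guild_id):
    # Every message we see refreshes what we know about its author.
    db.execute(
        "INSERT OR REPLACE INTO users (id, name, guild_id) VALUES (?, ?, ?)",
        (author_id, author_name, guild_id),
    )

def lookup_user(db, user_id):
    # What a converter would fall back to when the member cache is empty.
    return db.execute(
        "SELECT name, guild_id FROM users WHERE id = ?", (user_id,)
    ).fetchone()
```

A real bot would hook on_message into its message event handler and consult lookup_user from the converter.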

Considering message events also contain the guild's ID, you can safely say that the author of any message carrying both an author and a guild ID is a member of that guild, and do the same thing as before. It is technically less data than originally, since you're only collecting data about the people who actually send messages in the guild.

Keep in mind that the system messages created upon a user's join contain the same user object. They're messages like any other, and will reach every bot in the guild. You get the same effect from bots that post welcome messages for new users: they usually mention the user (“welcome @user to $guild!”), and since mentions are internally represented by putting the user ID in the message, other bots can still scrape it.

You can also use the same method on typing events, as they contain both a guild ID and a user ID.

Future work

New Discord features

Recently I've seen work-in-progress inside the Discord client on a way for bots to declare their own commands (you can also see the Figma prototypes in the “The Future of Bots” blog post), removing the need for every single message to be sent to bots (currently, a bot checks whether each message starts with a given prefix, etc.).

It is definitely a better idea than the current state of things, but I'm not sure the boundary of “hey, your user data is now in the hands of the bot developer” is very well defined. Many users would still use the bot, even just once, and have their user info stored in a database. I'm not using it with malicious intent, but if we're talking about the technical purpose of Privileged Intents, we must assume every bot is malicious and design around that.

Using Request Guild Members for the User and Member converters

See this issue. It would sidestep the problem of having to extract info out of messages, which would help my use case immensely.

I don't know if it'll be implemented, and I'm too anxious to open an issue asking for it, but it is a nice idea.

Closing thoughts

this music is fucking awesome


from lun-4 (imagine gpt-3 but gayer)

I tried, as an experiment/curious endeavour, to run osu!lazer (2020.717.0) on a NetBSD 9.0 QEMU/KVM guest on my machine. Here are the results:

  • Follow the Linux Emulation NetBSD guides, installing the linux compatibility packages (suse_*) was good enough to continue.
  • FUSE is not included by default, but thankfully, the AppImage gives instructions for extracting itself into a folder.
  • An AppRun file exists in the folder. Trying to execute it gave “Failed to initialize CoreCLR, HRESULT: 0x80070008”. This is related to ulimit -n: the default is way lower than 2048.
  • Raise the file descriptor hard limit using sysctl kern.maxfiles, then bring up the hard limit (ulimit -Hn), then bring up the soft limit (ulimit -Sn). If I recall correctly, I bumped it up to 8000.
  • Some other failure happened while running AppRun, upon further inspection via ktrace/kdump, a sched_setaffinity syscall failed because of permissions.
  • Running it as root fixed that.
  • A traceback appeared regarding ICU libraries not being found, even though I have them installed.
  • You can edit osu!.runtimeconfig.json to disable that requirement. More here
  • osuTK kept talking about unsupported platforms. Upon further investigation, it was because no valid GPUs were found. A very cryptic error, for sure.
  • In the end, after some more discussion, we blamed the fact that I was running NetBSD as a KVM guest: virt-manager only provides QXL, VGA, and virtio, and virtio GPU drivers aren't in NetBSD 9.0.
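As an aside, the soft file-descriptor limit from the ulimit step can also be raised from inside a process. A quick illustrative sketch in Python (the 8000 target mirrors the value above; raising the hard limit itself still needs root and sysctl kern.maxfiles on NetBSD):

```python
import resource

def raise_nofile(target=8000):
    # Raise the soft open-file limit toward the hard limit.
    # Exceeding the hard limit requires privileges (sysctl kern.maxfiles
    # plus ulimit -Hn on NetBSD, as described above).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard == resource.RLIM_INFINITY:
        new_soft = target
    else:
        new_soft = min(target, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return new_soft
```

This is the same knob ulimit -Sn adjusts in the shell, just scoped to the current process.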

Maybe I'll run NetBSD as a host on one of my machines and keep this experiment running. Until then, that's what I got.

To be continued..?


from Ave's Blog

aka “How to use a ZTE MF110/MF627/MF636/MF190 on Linux in 2020”

Recommended reading before this post: A retrospective of 3G routers in Turkey

I was looking at random cheap networking stuff on the Turkish ebay clone the other day when I stumbled upon the good old 3G modems.

I always wanted one, and at some point one entered the house, but I never got a chance to use it, and I'm no longer on good terms with the person who owns it now. I was also given one by my grandparents several years ago, but I managed to break the SIM slot before I could test it: I stored a SIM converter in it without an actual SIM, it got stuck on the pins, and my attempts to pull it out destroyed them. Ouch.

So, after all this time, I wanted to finally give one a shot, and they were being sold for as cheap as $3.5 + shipping, so I ordered the first one I saw, which was an “Avea Jet” model. It arrived this morning in a bootleg Nutella box, and I had to pay about $2 for shipping. Yup. Can't make that shit up.

But yeah, the modem was inside that, wrapped in bubble wrap. I quickly opened it up, grabbed a SIM that has an actual data plan, found an appropriate SIM converter, stuffed it in:

So... I plugged it into my computer... and it didn't show up as a modem in modemmanager when I ran mmcli --list-modems, nor in dmesg:

Uh oh.

Back when my grandparents gave me their old one years ago, I didn't need to install drivers or do any special config for it to be detected, but I was running Ubuntu 16.04 back then.

This suddenly got me worried. What if the relevant kernel module was dropped since then? What if arch doesn't build it in? What if this device never got supported? What if the modem itself was broken? What if the modem is carrier locked (I was using a Turkcell SIM)?

I plugged it into the only Windows computer around to try and figure things out, but that left me with more concerns, especially about the modem itself being broken, as the CD drive just decided to stop loading after some time, and I couldn't see the modem in Device Manager.

So I started searching, and stumbled upon this arch wiki page, and it's pretty much the basis of this blog post.

I already had much of the needed software, such as modemmanager, usbutils and mobile-broadband-provider-info, but it turns out I also needed usb_modeswitch, plus optionally nm-connection-editor for easy configuration of the settings (nmtui doesn't support it) and modem-manager-gui if you want a GUI for SMS.

The whole thing boiled down to:

# pacman -S mobile-broadband-provider-info modemmanager usbutils usb_modeswitch --needed
# pacman -S nm-connection-editor modem-manager-gui --needed  # these are optional
# systemctl enable --now ModemManager

But even after that, I was a little scared. usb_modeswitch --help talked about modes for huawei, cisco and many other manufacturers, but ZTE was missing from that list:

I sighed. I took a deep breath, and I re-plugged the modem and... what do you know, it showed up as a modem on dmesg and mmcli:

First off I opened modem-manager-gui, and sent a text to my girlfriend, who was able to confirm that she got it:

Being relieved that there's no apparent SIM lock or any other modem issues, I booted up nm-connection-editor, hit +, picked Mobile Broadband, and followed the wizard:

After saving, I finally booted up nmtui, disconnected from wifi and connected to the mobile broadband network:

And voila. I'm now writing this blog post while on a mobile network.

It's not fast by any means, but at least I can fall back to this in an emergency:

We're also planning a similar setup on a colo with an IoT SIM, so that we can still access the network if anything goes wrong.

That's all! Hope you enjoyed this blog post.


from LavaTech Hall of Fame

Violet M. discovered a vulnerability on 2020-05-03 with our reverse proxy handling code and informed us through the mail address we specified on our security.txt.

We were using the X-Forwarded-For header for IP ratelimiting without realizing that IPs passed in this header aren't overridden by Cloudflare, but appended to. The relevant document can be found here.
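The core of the problem can be sketched in a few lines of Python (illustrative, not our actual code): each proxy appends the address it saw, so the leftmost entries are attacker-controlled, and only the hops added by proxies you trust are reliable.

```python
def client_ip(xff_header: str, trusted_proxies: int = 1) -> str:
    # X-Forwarded-For is "client, proxy1, proxy2, ...": each hop appends
    # the peer address it saw. A client can pre-fill the header, so
    # taking hops[0] lets an attacker pick their own ratelimit bucket.
    hops = [h.strip() for h in xff_header.split(",")]
    # Count back over the proxies we control instead.
    return hops[-trusted_proxies]
```

With Cloudflare as the single trusted hop, that means taking the last entry; anything the attacker pre-filled sits further left and is ignored.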

This could potentially allow an attacker to bypass IP ratelimiting. We've deployed a fix shortly after she reported the issue to us:

We also deployed a workaround for our non-CF domains, which we will improve in v3. We did this by overriding the CF-Connecting-IP header with the user's IP in nginx on the routing server. We had already been overriding X-Forwarded-For before this, so non-CF domains were safe from this vulnerability the whole time.

Violet also found a potential vulnerability that an attacker with our IP addresses could abuse, depending on how the setup is configured behind the scenes. It didn't affect our specific setup, however, as we have IP whitelists in place and don't use it as the default route on the routing server. Specifically, we limit to CF IPs on our routing server, and to the routing server on the server that runs our elixire instance.

While the main instance at is safe from all these security issues now, we recommend anyone running elixire on their own server to upgrade to the latest master, and to deploy IP whitelists if they don't already have them in place (here's Cloudflare's IP list), and don't put elixire as your default route (as an attacker may put your IP to their Cloudflare account to bypass the whitelists). If you have a short list of domains, consider enabling Authenticated Origin Pulls.

We'd like to publicly thank Violet for helping us make elixire safer.

You may find/contact Violet at:


from LavaTech Blog

Hello everyone, XMPP Services had several (unannounced) downtimes today as I made some changes:

  • I've updated the server from ejabberd 20.03 to 20.04, released 4 days ago.
  • I've enabled STUN/TURN to enable video/audio calls. This is somewhat big. Here's the relevant issue, and a number of links explaining it. You may use this feature with Conversations. I haven't tested it yet; if you do manage to try it out, please let me know whether it works via email or XMPP at [email protected]
  • I synced the configuration on the server with the one in the repository. The actual settings were the same, but their locations differed and each was missing notes from the other. I've synced these, mostly from the repo to the server. You can find the relevant commit here.
  • I've open sourced our MAM wiping systemd service/timer.
  • We've updated the SSL certificate. This is a downtime we have to face roughly every 80 days.

Thanks as always for using our services, Ave

(Tags to jump to other related posts: #a3pm)


from LavaTech Blog

Hey all,

As the title says, we used to be a Manjaro ARM mirror for several months, but exactly 4 weeks ago it became part of Manjaro's regular mirror pool. This was rather big and honestly amazing news for Manjaro ARM.

It did however mean that our mirror would be deprecated.

We thought of becoming a regular mirror, but couldn't find easy resources on how to become an official one quickly, and I (ave) was quite busy at the time, so it got kind of ignored.

Well, that changes today! We're now an official Manjaro mirror. Well, it was ready yesterday, but it got approved today, and we only switched it to the upstream today (that's the equivalent of a T2 Arch mirror going to T1).

You shouldn't need to do much and it should get activated automatically based on ping, but if you want to add it manually, our mirror is at

Signed, LavaTech Team

(Tags to jump to other related posts: #manjaro #mirror)


from LavaTech Blog

Hello everyone,

I've just enabled experimental IPv6 support on XMPP services.

By that, I mean that I've:

  • Added AAAA records
  • Tested basic XMPP functionality (successful connection, fetching message history, sending messages) with an IPv6-only connection (which succeeded)
  • Updated nginx to also serve on IPv6 on port 80
  • Updated sslh to also serve on IPv6 on port 443

One thing that I haven't tested that has a chance of failing is HTTP file uploads/fetches with IPv6, though it very likely will work.
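If you want to check a host's dual-stack coverage yourself, here's a tiny illustrative helper (not part of our stack) that classifies resolved addresses by family, e.g. the results of socket.getaddrinfo after the AAAA records were added:

```python
import ipaddress

def address_families(addresses):
    # Given a host's resolved addresses, report whether it is
    # reachable over IPv4, IPv6, or both.
    versions = {ipaddress.ip_address(a).version for a in addresses}
    return 4 in versions, 6 in versions
```

A host with both A and AAAA records comes back as (True, True).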

Please let us know if anything doesn't work! As always, you can reach us on [email protected] or [email protected] via XMPP or email, or use our Discord Guild.

As always, thanks for your interest in our services, Ave

(Tags to jump to other related posts: #a3pm #ipv6)


from LavaTech Blog

Hello everyone,

To make LavaTech Bitwarden better for both you and us, we're moving to bitwarden_rs, a third party bitwarden server.

We're sure that you're all well aware of certain issues LavaTech Bitwarden faced, especially regarding downtimes and email issues.

Bitwarden itself had certain issues:

  • It was written in .NET Core, used MSSQL, etc., and was rather heavy for a small-scale deployment like ours.
  • It required the use of docker-compose and was made out of ~10 containers. This didn't play nicely with our Docker network, leading to occasional downtimes and email issues.
  • It didn't allow accessing premium features or creating an organization without purchasing a license directly from Bitwarden, making self-hosting rather useless.
  • This could be bypassed with projects such as BitBetter (we did use it), but that still required us to manually generate licenses for people.

bitwarden_rs has premium features enabled for everyone, and you can create an organization without purchasing a license or asking us for a license file.

It's all self contained in one docker container, and plays rather nicely with our network. We've also fixed the email issues for both instances (that was mostly due to our messy network tho, which is now also sorted out).

And according to our early tests, it performs much better than Bitwarden itself, both in terms of actual performance and stability. If you had issues with it before, we recommend trying it out again.

Existing users

As the database in the backend isn't compatible with bitwarden_rs, we need existing users that want to preserve their data to follow these instructions:

  • Login to on a browser.
  • Go to Tools –> Export Vault, pick “.json”, enter your password, download your file.
  • Go to on a browser, register a new account, verify email, log in.
  • Go to Tools –> Import Data, pick “Bitwarden (json)”, select the file you downloaded, hit “Import Data”, and wait for it to finish. This process may take several minutes. It might time out and give an error, but it will still add everything in the background; please do not try importing again, as that will just add duplicates.
  • Log out on any other device, extension, etc. you use LavaTech Bitwarden on, update the domain to and log in again.

Our old Bitwarden instance will be shutting down in a month (May 18, 2020), or when we confirm that all users have migrated, whichever is sooner.

Thanks for your interest, LavaTech Team

(Tags to jump to other related posts: #bitwarden)


from Ave's Blog

Superonline, aka SOL, aka Turkcell Superonline, aka AS34984 is one of the largest ISPs in Turkey.

One of the ads from the ISP, modified to say oof

I've been using their 100/5Mbps unlimited fiber service (their highest-end plan, other than the 1000Mbps one that has its own listing and costs 1000TRY/mo) for over a year now.

Let me tell you: I suffered a lot. Anything from random internet cuts to constant network-wide slowdowns whenever we watched anything on Netflix. I was constantly spammed with calls trying to sell me Turkcell TV+ (even though I told them countless times that I don't watch TV), and, starting roughly 5 months before my contract expired, trying to sell me expensive and lengthy contract renewals*.

And even when it worked, it wasn't as fast as promised, at least over WiFi (5GHz): showing 45/20

Meet the routers

Huawei HG253

When I first got my Home Internet, I was given a Huawei HG253, a rather bad router: No 5GHz WiFi, horrible DHCP (can't even set static assignments), etc.

This is a rather hated router according to bad internet forums apparently (yes, I called donanimhaber bad, bite me).

Back then I set up a pihole instance at home just to deal with the DHCP issues (and ofc, also to block some ads).

All in all, this is how it looked (before I did cable management... haha, I never did):

Huawei HG253

Thankfully though, the HG253 had a security vulnerability that ended up in my favor: it sent the PPPoE password to the client in the UI and just marked the field as a password input. You can literally check the page source and read the PPPoE password. Back then I realized this and noted down the credentials (more on this later).

The HG253 had at least one public firmware (link of my HG253 files archive, including a user guide and firmware), and had SSH enabled.

I extracted this firmware and pretty much just explored it Back Then™, but found nothing too interesting. I think I found some hashed credentials but never bothered hashcat-ing them. SSH was also out of the question: it was ancient, and even when I forced older ciphers it liked to error out, so I couldn't get very far with it.

I don't remember exactly what happened to this router, but IIRC it just died one day, and upon calling the support line, they replaced it with a...

Huawei HG255S

The HG255S, my current ISP Router, is a fairly decent router compared to HG253 and overall to other ISP routers I've used so far: It has 5 GHz WiFi (but it sucks, you saw the speedtests earlier), decent DHCP (after the HG253 it felt nice to have), 3G modem support, built-in SIP and DECT, USB port with Samba and FTP support, etc.

Huawei HG255S

However, as you may expect, most of these features are either locked down or behind a paywall. I'd honestly love to be able to modify the SIP settings so that I could have a DECT network at home connecting to my SIP network, but SOL only allows buying phone service from them; the SIP settings menu is removed from the UI. More on all this later; this is what finally brought me to the point of replacing the router.

I still kept my Pihole install with this setup in order to not lose my DHCP and DNS data if my ISP ever swapped my routers again, and at that point, I was already doing a bunch of other odd stuff on that Pi anyways (like running openhab2).

“So just replace the router”

Well... Superonline doesn't allow you to replace their router if you're a fiber customer. The PPPoE credentials are not given to you even if you ask for them unless you're an enterprise customer (Relevant page for enterprise customers).

They hate the idea of you replacing the router. Whenever I call the support line with a technical problem they ask if my router is the one they gave or not.

There's literally no technical reason for this that I can see; it's all red tape. The fiber cable doesn't even plug into the router: they give you a free GPON:

Huawei HG8010 GPON

The fiber cable goes into that and terminates as a female RJ45 port, which then gets plugged into the WAN port on their router. After that, it's just PPPoE.

I've previously looked into getting an inexpensive router that can run DD-WRT or OpenWRT to plug into the ISP router (limiting the ISP router to just serving the DD-WRT/OpenWRT router instead), but the things I found were either incredibly high-end or simply unavailable. I ordered a router that could run OpenWRT a couple of months ago, and the order got canceled with the excuse that they didn't actually have any left. I gave up.

The straw that broke the camel's back

A couple of weeks back, I was looking into messing with the HG255S again, mostly to figure out how to get my own SIP setup running on it, so that I wouldn't have to deal with the horrible SIP implementation on my Cisco phone, and so that I could free up an Ethernet port.

While doing my usual scouring to find any new information, I stumbled upon this specific post on a bad Turkish forum mentioning them running OpenWRT on the Xiaomi Mini router, and asking if moving to that would get them better performance. I quickly checked N11 (Turkish amazon, basically) and saw that there's some other Xiaomi Mi Routers, specifically the Mi Router 4 and 4A (Gigabit Edition). I checked their OpenWRT compatibility, and after seeing that they're supported, I ordered a 4A for merely 230TRY.

I considered getting something better that costs more, but due to COVID-19, I am trying to lower my expenses.

I also went ahead and dropped ~120TRY for a bunch of different programmers to have around, lol.

More on the Mi Router 4A

It's essentially a Xiaomi Mi Router (MiR from here on) 3Gv2, where the 3Gv2 is just the 3G, but worse. If you can get one of those, go ahead; sadly, they're not available in Turkey. It has 3 gigabit Ethernet ports, one of them for WAN, plus 2.4GHz and 5GHz WiFi.

It has support for OpenWRT snapshots, though support had been broken for over a week as part of the move to kernel 5.4. I'll talk more about this later.

It runs their own OpenWRT fork called MiWiFi:


MiWiFi is fairly decent and honestly, is pretty usable by default. However, as you might expect, it's not very extensible. I wanted to use Wireguard with this router, and MiWiFi simply didn't offer that (though it did have built-in PPTP and L2TP). There are also some privacy concerns I have with Xiaomi due to the amount of telemetry my Xiaomi Mi phone sends.

It has two ways of getting proper OpenWRT on it:

The physical way

You can go the physical way by opening up the device, dumping the SPI, changing the uboot parameters, then flashing it back.

This is safer, as you have a known-good dump to recover to if you somehow manage to soft-brick it. Alternatively, people have posted their own dumps on the Internet (flashing those will change your MAC address, by the way; you'll need to edit your MAC back afterwards).

For the record, I was unable to successfully dump the SPI myself, as I couldn't get the programmer to see it. I also couldn't find enough information on several parts of this process before I could even attempt it, so here are some tips:

Software (OpenWRTInvasion)

The other, lazy approach is to use the software exploit, OpenWRTInvasion. This is what I ended up doing.

FWIW, to get stok (session token), open the panel ( and log in. It will be on the URL:
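For scripting against the panel, the token can be pulled out of that URL with a quick regex. A sketch (the ";stok=<hex>/" URL shape is an assumption based on MiWiFi panels):

```python
import re

def extract_stok(url: str):
    # MiWiFi panel URLs carry the session token as a hex "stok" value.
    m = re.search(r"stok=([0-9a-f]+)", url)
    return m.group(1) if m else None
```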


OpenWRT on MiR 4A time

Shortly after it arrived, I ended up installing a build from the OpenWRT forum, as the latest builds reportedly soft-bricked the device. I spent the day setting it up and learning how to use Luci (the Web UI) and OpenWRT.

Sadly though, I realized shortly after that I wouldn't be able to run Wireguard on it for some time as:

  • MiR 4A doesn't have stable releases, just snapshots.
  • The build I installed was an unofficial build (I later tried another build and it was one too).
  • Snapshots do not have packages for older versions (except kmods, but obviously only for official builds— I tried force installing one with a matching kernel version, but it obviously didn't work as it couldn't match the symbols).
  • The OpenWRT image builder uses the latest packages from the repo.
  • Official snapshots do not get archived, which means that I couldn't switch to an official version.

So a couple of days later, I decided to make my own build. Being scared of bricking my router (even if I could recover from it, I didn't want the hassle), I spent hours trying to find which commit was the last safe one, then realized that the version I was running included a git hash in the version string. Oops. I ended up going with that one.

So I set up an OpenWRT build environment and built it for the first time, and, while praying to the tech gods that it wouldn't brick, I flashed it.

And it worked... though it was missing Luci and a bunch of other packages, as I had compiled them as modules instead of mandatory. Apparently, “module” means you just get the ipks, while “mandatory” means you get the ipks AND the package gets built into the image.

I SSH'd in and installed the Luci modules I compiled (it was painful, it's like 10 packages), then did another build with everything set as mandatory.

And sure enough, it worked! I quickly posted my build and talked about my success in the OpenWRT forum.

All basic functionality worked as expected AND it had the wireguard kmod, so I could call it a success, right? Well, no.

I just couldn't get Wireguard to work: it showed as connected on the router, but when I checked on the peers, it never showed up. I had never used OpenWRT before, so I had no idea whether I was doing something wrong, so I simply noted that down in the forum post and moved on.

The next day though, someone who's an OpenWRT dev posted about a patch they proposed to fix the issue on master. I quickly applied the patch, improved the set of packages I include, compiled, flashed, confirmed that it worked and posted a build to the forum.

I had to reset the settings to get it to work due to these DTS changes, and after a reconfiguration, I was happy to see that wireguard actually worked... mostly.

While it did work for IPv4, IPv6 just kept not working. This happened when I tried 6in4 too, which is rather annoying, as I've been wanting IPv6 at home for some time. I think IPv6 is just broken somehow; I'll dig into it more later.

Edit, a couple days later: IPv6 on router was okay, however there were two issues:

  • The server I was Wireguarding to ended up having constant issues due to upstream, leading to IPv6 downtime for some time (without me realizing it, oops).
  • I had no idea how to properly distribute an IPv6 block to LAN with Wireguard, and I still don't. Yell at me with instructions here.

Anyhow, I got it working. See the conclusion for more details.

This is mostly where the state of affairs is right now. A modified version of the proposed patch was merged into master, and I also posted a build including that, but there's not much noteworthy there; nothing else in the build changed.

Extracting SOL's PPPoE creds

And as promised, what you came for: PPPoE magic.

Well, first of all, I tried using the PPPoE credentials I had extracted from the HG253, but they didn't work. They'd probably still work if I had the HG253, but they were most likely changed when my router was swapped for an HG255S. That's all there is to the “I'll get to this later”. Yep.

There are guides out there on how to extract the credentials, but they're all aimed at people who don't use Linux: helpful to those unfamiliar with it, but a waste of time for those who are. Some are better, though IMO they could still be improved.

Here's my take at it:

  • Log into your router, find the PPPoE username. It should look like this: [email protected]. Note it down.
  • Install rp-pppoe

On Debian-based distros: # apt install pppoe

On Arch-based distros: # pacman -S rp-pppoe

  • Edit /etc/ppp/pppoe-server-options

Change the contents to:

# PPP options for the PPPoE server
lcp-echo-interval 10
lcp-echo-failure 2
logfile /var/log/pppoe-server-log
  • Edit /etc/ppp/pap-secrets

Change the contents to (replace REPLACETHISWITHYOURUSERNAME with your username):

# Secrets for authentication using PAP
# client        server  secret                  IP addresses
  • Create the log file for rp-pppoe: # touch /var/log/pppoe-server-log; chmod 0774 /var/log/pppoe-server-log
  • Find your Ethernet interface with ip a. Mine is enp3s0; it's what I'll use in the commands below, so replace it with your own.
  • Shut down your router, plug in a cable to the WAN port, plug the other end to your computer.
  • Run # pppoe-server -F -I enp3s0 -O /etc/ppp/pppoe-server-options on a terminal, replace enp3s0 with your own interface.
  • Run # tail -f /var/log/pppoe-server-log on another terminal
  • Turn on your router, wait for a little until you see lines like this:
rcvd [PAP AuthReq id=0x7 user="[email protected]" password="no"]
sent [PAP AuthNak id=0x7 "Session started successfully"]
PAP peer authentication failed for [email protected]
sent [LCP TermReq id=0x2 "Authentication failed"]


  script /usr/bin/pppoe -n -I enp3s0 -e 7:no:no:no:no:no:no -S '', pid 4767
Script /usr/bin/pppoe -n -I enp3s0 -e 7:no:no:no:no:no:no -S '' finished (pid 4767), status = 0x1

Take the password from the first block, and the MAC address from the second one (ignore the 7: or whatever number from the start).
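If you'd rather not eyeball the log, both values can be scraped out with a small script. A sketch (the patterns match the pppoe-server log lines shown above; the credentials in the example are hypothetical placeholders):

```python
import re

# Patterns for the pppoe-server log lines shown above.
CRED_RE = re.compile(r'user="([^"]+)"\s+password="([^"]+)"')
MAC_RE = re.compile(r"-e\s+\d+:((?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2})")

def extract_credentials(log_text):
    # Returns (username, password, mac); each may be None if the
    # router hasn't attempted authentication yet.
    cred = CRED_RE.search(log_text)
    mac = MAC_RE.search(log_text)
    return (
        cred.group(1) if cred else None,
        cred.group(2) if cred else None,
        mac.group(1) if mac else None,
    )
```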

Now you have everything you need to replace your SOL router.

Finally: Replacing the ISP router with a MiR 4A

This is the simple part.

Plug the cable from GPON to your router.

Log onto Luci, edit WAN (and disable WAN6), change type to PPPoE, put in the username and password we got earlier into the PAP/CHAP username and password fields like this:

PPPoE settings

Then save and apply.

SSH into your router, edit /etc/config/network, and find config interface 'wan'. Add a line to it (with proper indentation) along the lines of option macaddr 'no:no:no:no:no:no', replacing no:no:no:no:no:no with the MAC address we found earlier.
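Put together, the wan section ends up looking roughly like this (values redacted the same way as above; the proto/username/password lines are what the earlier Luci step already saved, so only the macaddr line is new):

config interface 'wan'
	option proto 'pppoe'
	option username '[email protected]'
	option password 'no'
	option macaddr 'no:no:no:no:no:no'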

Then finally run service network restart, and you'll be free from the curse that is Superonline's ISP routers.

In conclusion

My wifi speeds are MUCH better now :)

97/20 speedtest

And I can connect to our internal network without needing to VPN on the device itself :D

Me accessing a server on edgebleed

Soon I'll even be able to have IPv6 at home :P

I even have IPv6 at home, thanks to linuxgemini :3

IPv6 test showing IPv4 from SOL and IPv6 from Lasagna Ltd

Also: Capitalism is a failure, and free market ideologies are a joke. You don't get companies competing for cheaper prices, better service and fewer restrictions; you get companies all limiting their customers and all of them screwing them over in different ways. I am forced to use SOL because VodafoneNet and TT both have a contract minimum of 2 years, TT is unreliable AF, TurkNet Fiber is unavailable in 99% of Turkey (including where I live), and everyone else is just a reseller.

Bonus vent

*: I constantly turned down their offers, as they were all worse than what I was already getting, or slower than 100Mbps. I was also lied to: they claimed fees would go up after the new year due to BTK, which was simply wrong; they still sell the same plan for the same price my contract started at. I ended up calling them 2 weeks before my contract's expiry date and telling them exactly what I wanted (100Mbps, with a contract no longer than a year), and they came up with a 15-month 100Mbps plan at 135TRY for the first 6 months, then 160TRY for the next 9. I hesitated a bit over the 15 months, but I said meh and agreed to it.


from Ave's Blog

I'm not the fastest typist, and I don't really use all 10 fingers (I'm closer to 7 or 8), but I try to minimize the number of keypresses I make. This means I use shortcuts a lot, and if there's a key for something, I use it. One example involving the delete key: I press delete instead of right arrow plus backspace.

And ever since I got my Pinebook Pro, I felt the distinct lack of a delete key.

What's worse was the fact that in place of a delete key was a power key, one that, once tapped, depending on the DE either showed a power menu, or shut off the PBP:

Power button on PBP's keyboard, in place of delete key, at top right corner of keyboard, as an actual keyboard key (official image from Pine64 Store, modified with circle around power button)

One of the first things I did after installing Manjaro ARM was disabling the power button's shutdown behavior in /etc/systemd/logind.conf by setting HandlePowerKey=ignore (and restarting systemd-logind, which, FYI, kills your X session).

Later on, to actually get it to work as a delete key, I reused a trick from long ago: I grabbed the keycode with xev and remapped it to Delete with xmodmap.

This wasn't perfect by any means: it had some delay, and some software like GIMP just ignored it (which made image editing a lot more painful).

Then the project working on improving the keyboard and touchpad firmware released an update that let people make their own keymappings.

I saw this while at work, put a note for myself:

The original note from February 9

I've been meaning to put aside some time to try and implement this behavior in the firmware itself, but I just couldn't find the time or the energy.

Until today.

The setup

I don't have much of a story to tell tbh. I cloned the repo, downloaded the requirements, compiled the tools.

I compiled and installed firmware/src/keymaps/default_iso.h (following the instructions in firmware/src/) just to see if it would work. It did, so I continued on.

After setting up the new firmware, I did notice that some functionality worked differently, though:

  • numlock didn't turn ha3f 6f the 2eyb6ard 5nt6 a n40-ad (numlock didn't turn half of the keyboard into a numpad), but simply allowed the numpad area on the keyboard to be used with fn keys, which is a much better way of doing things.
  • Fn+F3 no longer pressed p. p.
  • Keyboard/Touchpad name changed from the actual part name to “Pine64 Pinebook Pro”, breaking my xinput set-prop settings. Simply renaming the device in the commands fixed this.
  • Sleep button combination (Fn+Esc) did not work (I don't use this combination, but the fact that it's printed on the keyboard and worked prior to the flashing bothered me).

The tinkering

I copied the file default_iso.h to ave_iso.h and tried to figure out how it's structured. I looked for the power button, and couldn't find it.

There was this vague keyboard-shaped array of key mappings, and while I got how one half of them worked, I couldn't understand the other half:

The keyboard shaped array

Well, I dug through the codebase for a couple of hours trying to figure everything out, and it finally made sense.

FR, FS and FK are just arrays that map to the fns_regular, fns_special and fns_keypad arrays in the same file, respectively. This is all explained in the firmware/src/

The number given as the argument (such as the 6 in FR(6)) is the index into said array.

An example entry, REG_FN(KC_Z, KC_NUBS), means that the default action is KC_Z, while the action when Fn is held down is KC_NUBS.

KC means keycode, and they're mapped in firmware/src/include/keycodes.h. Do note that not all descriptions are correct in practice, though; one example is that KC_SYSTEM_POWER says 0xA5, but 0xA5 is actually used for brightness up (I explain why this is the case later).

The R() entries used on the rest of the keyboard are “Regular” keys, ones that have no Fn actions; they're passed directly through to their KC_ versions.

If you hate yourself, you can also supply plain integers in place of any of the aforementioned functions, and anywhere you see a KC_; this did help when I was trying to understand how things work.

FK entries can only be used with the Fn key when numlock is on. I'm not exactly sure what the difference between FR and FS is beyond semantics. (Looking at my own PR, I regret using FR instead of FS, as I'm not fitting the semantics properly. Functionality seems the same, though.)
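To make the lookup logic above concrete, here's a toy Python model of how I understand the keymap entries resolve. The names mirror the firmware's R()/REG_FN()/FR() macros and the fns_regular table, but all the keycode values here are made up for illustration:

```python
# Toy model of the firmware's keymap resolution, as described above.
# Keycode values and table contents are illustrative, not the real firmware's.
KC_Z, KC_NUBS, KC_DELETE, KC_A = 0x1D, 0x64, 0x4C, 0x04

# FR(n) entries point into fns_regular; the real firmware also has
# fns_special (FS) and fns_keypad (FK) tables working the same way.
fns_regular = {6: KC_DELETE}

def R(kc):
    """Regular key: no Fn action, passed through directly."""
    return {"base": kc, "fn": kc}

def REG_FN(kc, fn):
    """Default action kc; action when Fn is held is fn."""
    return {"base": kc, "fn": fn}

def FR(n):
    """Fn action looked up by index in fns_regular."""
    return {"base": None, "fn": fns_regular[n]}

def resolve(entry, fn_held):
    """Pick the keycode a keypress produces, given the Fn key state."""
    return entry["fn"] if fn_held else entry["base"]

if __name__ == "__main__":
    z_key = REG_FN(KC_Z, KC_NUBS)
    print(hex(resolve(z_key, False)))  # 0x1d (KC_Z)
    print(hex(resolve(z_key, True)))   # 0x64 (KC_NUBS)
```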

I ended up implementing the sleep button combination, and I learned a lot about keyboards while trying to figure out how I could even emulate the power button. I have some links that I used during my adventure at the bottom of this article. I sent a PR with that patch and it got merged.

The realization

After asking around in the Pine64 #pinebook channel, a helpful person told me that the power button is wired directly to the SoC, and that the SoC sends the power key input itself (or rather, that this input is handled by the device tree in the Linux kernel and turned into an actual emulated keypress).

Most importantly, however, they said that it could be remapped with udev. Now, I had only ever used udev rules, and this got me rather confused, as I had no idea how one would remap anything with those. That got me researching, and I learned about a tool I'd never used before: evtest.

And sure enough, I found it:

gpio-key-power on evtest's device list

Upon picking gpio-key-power and hitting the key, I immediately saw the keypress (this image was taken after the change, so it says KEY_DELETE; before the change it said KEY_POWER):

Power key press event on evtest

Upon further research, I learned that what I wanted was udev hwdb entries, not rules. I also found an existing hwdb file at /etc/udev/hwdb.d/10-usb-kbd.hwdb, which explained why the KC_SYSTEM_POWER key was mapped to brightness up: because the hwdb was set up that way. For reference, here's what it looks like:
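A hwdb file like that matches the keyboard by its USB IDs and remaps scancodes by name; a representative sketch of its shape (the USB match string and exact scancode here are assumptions, not a verbatim copy of the file):

```
# matches the builtin USB keyboard by bus/vendor/product;
# 700a5 is HID usage page 0x07, usage 0xA5 (the 0xA5 mentioned above)
evdev:input:b0003v258Ap001E*
 KEYBOARD_KEY_700a5=brightnessup
```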


This also explained why KC_POWER caused a sleep action rather than a power key action when pressed on the built-in keyboard (but not through the dedicated power button).

The ending

I quickly wrote a hwdb file myself at /etc/udev/hwdb.d/20-power-button.hwdb:
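It's only a couple of lines; a sketch of its shape, matching on the device name evtest reported (the scancode here is an assumption — verify it against the MSC_SCAN value evtest shows for your power button):

```
# /etc/udev/hwdb.d/20-power-button.hwdb
evdev:name:gpio-key-power*
 KEYBOARD_KEY_74=delete
```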


And after rebuilding the hwdb database with # systemd-hwdb update and triggering it with # udevadm trigger /dev/input/event2, the power button started working as a proper delete key.

evtest saw it as KEY_DELETE, the delay when tapping it rapidly vanished, and software like GIMP started acknowledging it. Now I just need to avoid holding it down.

Handy resources


from Ave's Blog

Winter of Getting Stuff Done is a seasonal theme I've set for the winter season of 2020, based on CGP Grey's video. It's about getting stuff done that I've wanted to finish for some time, and not jumping onto new ideas all the time.

I've been interested in public transportation for many years, and since it sadly isn't free (Luxembourg has the right idea here, DB doesn't), also in the systems used for collecting fares. In Istanbul, a MIFARE DESFire EV1 card called istanbulkart is used for this purpose. Here's the page for it (yes, there's a .istanbul TLD).

I use public transportation every single workday, and I try to optimize my route in terms of both cost and time (and internet connectivity, which is why I go for the Marmaray whenever I can).

To help myself (and others) optimize for cost, I wrote up a simple tool after talking about istanbulkart with a coworker today. Specifically, he was trying to decide whether he wanted to get a “mavi kart” (lit. “blue card”, a card with your picture, name and ID number on it; the only difference from the anonymous card that I know of is that it allows you to get a monthly subscription, an “abonman”).

The idea was a quick script to sum up all your istanbulkart expenses from the last month, so I hacked one together while we were on the Marmaray.

An early screenshot

(Actually, it didn't even look like this. It used floats, so the output was just a big jumbled mess; it didn't show the card name or currency; the card number was in fact hardcoded; and it didn't account for refunds, which is still the case in the picture above but not in the final result.)

Later in the day I extended the code and cleaned it up a lot; here's the result:

Output of final result of ikmon

It's a small project that took me an hour or two to hack together, but I hope you enjoy it regardless.