LocalTuya: Device pairing broken

We have four “Skyfan” DC ceiling fans with their little Tuya IoT dongles in them, and for the most part we’ve been pretty happy with them, except at some point recently we noticed that the “fan speed control” portion of Duncan’s was missing. Everything else was fine, and I’d tried a couple of times to fix it, to no avail.

Today during a break from work (I’m off early, so I can go back for a meeting with the EU team tonight), I decided damn it, I’m going to fix this no matter what. I deleted the device completely from Home Assistant, which I felt reasonably safe doing because I had a copy of the local_key from the debug output… right?

Umm, no, wrong. That value appears to be something else entirely; it’s very different from the actual key, as I would later learn. Furthermore, it looks like at some point Tuya rescinded the permanent free access to their API, so now none of the API stuff works. The forum post I’ll reference later suggests this stopped working around June 2022, but we definitely added devices in November/December of 2023, so I’m not sure what to make of that.

Anyway, that inability to use their API to snatch the local keys meant I was stuck with a fan I couldn’t re-add to Home Assistant. Worse still, I’d backed up the configuration, but not the bloody database, so I couldn’t even dig the key out of there! Whoops!

I was almost mad enough to consider burning the Tuya shit (which, this event notwithstanding, hasn’t given us much trouble) to the ground: flashing what I could with Tasmota, replacing the Tuya modules in the Skyfans with aftermarket ESPHome ones, and calling it a day.

But thankfully for my wallet, I found a HAss forum post mentioning that you can periodically reactivate the API service, which appears to have worked. It didn’t take effect immediately, so I also tried reactivating the “Core IoT” service, which wanted a bunch more information and a “review”, but at some point after that everything started working.
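For anyone else stuck here: once the API project is live again, the usual tool for yanking local keys back out of the cloud is tinytuya’s wizard. This is a sketch rather than exactly what I ran; the API key and secret come from your own Tuya IoT project:

pip install tinytuya
# The wizard prompts for your Tuya IoT project's API key, secret, and region,
# then queries the cloud and writes devices.json containing each device's local_key.
python -m tinytuya wizard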

I quickly added the fan back to HAss, added the missing speed DP (which was there this time), and we’re back to working. But I’m still pretty fucking mad, and I may yet look at migrating shit away from Tuya.


More old blog entries again

Every so often I search web.archive.org for old domains I used to own, and even this domain, to see if they’ve found some old content behind the couch. I think it stems from the fact that whoever is squatting on my oldest vanity domain changed their robots.txt, and for some reason the Wayback Machine decided that was a green light to show the content they’d previously crawled.

Don’t get me wrong, I’m grateful, I just wish they’d done a more thorough job of crawling my first site… unfortunately, like most other clueless nerds at the time, I built it using frames, and they pretty much only crawled the outer frameset and none of the actual content. I’ve also long since lost the CD-Rs of the backups I took around that time (and they weren’t in great shape anyway). Finally, they might be doing me a favour, since most of what I wrote when I was younger was absolute shite.

Anyway, it’s probably more that each time I come back and check, I search in a slightly different way, because I always seem to dig up a handful of new blog entries. Or it might just be that my standards are lower. Either way, this time around it’s a few cringe-inducing entries from two different sites, circa 2008.


Blog fixes for Google

I got an email from Google the other day about indexing issues… normally I ignore these, because I really don’t give a shit about traffic (I don’t keep logs, analytics, etc.; I literally don’t care)… my audience is myself, because if I don’t write something down I will absolutely forget it happened, as I have the attention span of a demented goldfish. But then I remembered that sometimes, when I can’t find something, I’ll use the Google-powered search bar, and if Google is failing to index stuff then that might be bad for me, so I took a look.

I somehow ended up looking at the Lighthouse/PageSpeed Insights/whatever the heck it’s called today scores, and found a few things I could improve there. Some of it was legitimate stuff I didn’t previously care about but now do (contrast ratio, for instance), and much of it was fairly simple to fix.

Some of it was less so: it’s complaining about the time to Largest Contentful Paint or whatever, reckoning ~2 seconds to render the front page. Considering I work on a 3rd-gen i7 that’s very tired, I kinda call bullshit, but let’s have a look. It wants me to load my ~350KB of CSS asynchronously so the page can render without it. I guess! Copy+paste some code and it works, but I do not like it. First of all, the page really does render without CSS, so it flashes white briefly before the CSS is applied, and it does this on every page load, even if the CSS is already in the cache!
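The copy+paste code in question is the standard preload trick; I’m paraphrasing it from memory and the path here is made up:

<!-- Fetch the stylesheet without blocking render, then apply it once loaded. -->
<link rel="preload" href="/css/style.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<!-- Fallback for browsers without JavaScript: load the stylesheet normally. -->
<noscript><link rel="stylesheet" href="/css/style.css"></noscript>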

The really annoying part is that it stopped complaining about the LCP, but instead complained about elements moving around after the first draw… no shit, you had me remove my CSS from the first draw’s critical path! I briefly considered splitting up the CSS, blocking on the critical bits and loading the fancy, shiny bits async after the fact, but then I thought fuck it, a 95 or so is good enough. Stick your red triangle up your arse, my site works fine in links for fuck’s sake, and the only thing stopping it from working on a much older browser is the fact that Netlify refuses to allow plain HTTP connections (HTTPS is forced, always).
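For the record, the split approach I couldn’t be bothered with would look something like this (file names hypothetical): inline just enough CSS to avoid the white flash, and defer the rest.

<!-- Inline the layout-critical rules so the first paint isn't unstyled... -->
<style>
  body { background: #fff; color: #222; font-family: serif; max-width: 60em; }
</style>
<!-- ...and load the shiny bits asynchronously, off the critical path. -->
<link rel="preload" href="/css/fancy.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">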

Anyway, back to the indexing… it looks like a lot of it is caused by canonical URLs being fucky. There are a couple of specific issues here. First of all, for some reason (probably from way back when I had this site on Blogger), some of the pages are linked with query strings, and Netlify serves them all the same. I think I can declare a canonical URL with a <link> tag in the page head, so let’s do that. Easy.
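That tag is a one-liner per page (the URL below is just an example):

<!-- Tell crawlers which URL is the real one, query strings be damned. -->
<link rel="canonical" href="https://example.org/2024/02/cool-blog-post/">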

Next up, all these goddamn redirects from missing trailing slashes. You see, when I had this site on Blogger, and then WordPress, and then Pelican hosted on AWS S3, I could have a URL like /2024/02/cool-blog-post and everything was happy. Netlify + Hugo (someone, not sure who, is to blame) doesn’t like this, so every post is actually a directory with an index.html inside. When you link directly to /cool-blog-post, it 301-redirects to /cool-blog-post/.

Fixing the vast majority of those is probably easy enough:

# Find markdown links like (/2024/02/cool-blog-post) that lack a trailing slash,
# take the unique list of files containing them, and rewrite the links in place
# as (/2024/02/cool-blog-post/) so they skip Netlify's 301.
grep -Ern '\(/[0-9]{4}/[0-9]{1,2}/[^/]+\)' . \
    | cut -d ':' -f 1 \
    | sort -u \
    | xargs sed -Ei 's#(\(/[0-9]{4}/[0-9]{1,2}/[^/ \)]+)\)#\1/)#g'

The resulting git commit is a monster, but I don’t think I broke anything. To be sure, I dusted off my old webspyder project, made a few changes to get it working on Python 3 (which I probably should commit, though anyone with any sense would just use scrapy instead), changed it to report 301s as a fault too, and found a few more.

Will this make Google like me more? Not sure, let’s wait and see.

Oh, while I was at it: by running my spider, pulling out the URLs it visited, and comparing those against the list of directories in public/ after a hugo -D, I found a small handful of pages that weren’t linked from anywhere, so I fixed that too. I don’t think I’m missing any now.
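The comparison itself is nothing fancy; something along these lines works, assuming the spider dumps its visited paths one per line into spidered.txt (that file name is made up):

# Every directory Hugo built, as a site-relative path with a trailing slash.
# (Intermediate directories like /2024/ show up too; they're easy to ignore.)
find public -mindepth 1 -type d | sed -e 's#^public##' -e 's#$#/#' | sort > built.txt
# Every path the spider actually reached.
sort -u spidered.txt > visited.txt
# Lines only in built.txt are pages nothing links to.
comm -13 visited.txt built.txt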


Kubernetes: Certificate Expired

I woke up this morning to find a critical security update for Mastodon, which I’d better apply lickety-split! Grab the hash for the latest container version, have a quick squizz at the changelog, and then throw it in one of the sidekiq deployment manifests to start the container download and…

error validating "apps/mastodon/config.yml": error validating data: failed to download openapi: the server has asked for the client to provide credentials; if you choose to ignore these errors, turn validation off with --validate=false

Huh, that’s new. What about…

$ kubectl get pods
E0215 07:20:28.488070    1962 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0215 07:20:28.494899    1962 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0215 07:20:28.500210    1962 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0215 07:20:28.506239    1962 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
E0215 07:20:28.512440    1962 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

… interesting. It seems that in a feat of homeprod success, I’d finally, some two-and-a-bit years after first experimenting with Kubernetes, managed to keep a cluster up long enough for my certificate to expire. I didn’t even know that could happen, and judging by the comments when I mentioned it on Mastodon, I wasn’t alone. Hooray for blowing shit up on a regular basis!

Anyway, fixing it up is easy enough: kubeadm certs renew all

Then copy /etc/kubernetes/admin.conf to my workstation’s ~/.kube/config and it works again. I still need to restart kube-apiserver and a few other processes, but that’s a job for another day, as I have work in a few minutes.
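For future me, the whole dance looks roughly like this (the hostname is made up, and the manifest shuffle is just one way to force kubelet to recreate the control-plane pods):

# See what's actually expired (run on the control-plane node):
kubeadm certs check-expiration

# Renew the lot:
kubeadm certs renew all

# Refresh the workstation credentials with the newly-signed admin cert:
scp root@control-plane:/etc/kubernetes/admin.conf ~/.kube/config

# kube-apiserver and friends are static pods; moving their manifests out and
# back makes kubelet tear them down and bring them back up on the new certs:
mv /etc/kubernetes/manifests/*.yaml /tmp/ && sleep 20 && mv /tmp/*.yaml /etc/kubernetes/manifests/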

Update 2024-02-17: Not so! I mentioned on the thread that among the reasons for burning it all down and rebuilding my cluster in the past was that at one point I let it get two versions behind. Kubernetes supports multiple minor versions concurrently, so they’re not EOL, but it doesn’t support upgrading across multiple minor versions in one hop. And since I use Alpine Linux for most of my hosts, which only carries the latest version, if you wait too long there’s literally no way to get the minor version you require.

I noted in early January that I was in exactly that situation, but Alpine still had 1.28, so I reasoned I still had time. I figured this weekend was the time, and guess what? I did not have time: Alpine now carries 1.29. So despite having kept the cluster alive for more than a year, I burned it down a few days later.
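The check that would have saved me takes about ten seconds: compare the running minor version against what Alpine currently packages. (I believe apk policy is the right subcommand here, though its output varies a bit between apk-tools versions.)

# What minor version is the cluster actually running?
kubectl version
# What does Alpine's repository carry right now? If these differ by more than
# one minor version, kubeadm can no longer get you there in a single upgrade.
apk policy kubeadm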

At least I’m good at it now; nothing major went wrong and I had the entire thing back up with under an hour of actual downtime… the major delay was that I fat-fingered the upgrade command on the one Ubuntu host in the cluster, which required a second reboot, and since it’s the disk server, nothing else could come up in the meantime.


New Toy: Unifi Doorbell

A while back we bought a camera to sweep the front boundary of the house, and because I seem to have a penchant for giving Ubiquiti large portions of my paycheque even though they don’t really deserve it, we doubled down and bought one of their doorbell kits too.

Since I really only checked the camera output from home, I hadn’t noticed this, but it seems that “Unifi Protect” (their mobile application for accessing the cameras, distinct from Unifi itself) doesn’t actually work over a VPN, including the VPN offerings provided by Ubiquiti themselves!

It seems you have two options for connecting Protect: the UI cloud (which we turned off at the first available opportunity, and for good reason it seems, as they’ve just had another security issue wherein people were seeing other people’s consoles when they logged in); or local console discovery, which relies on L2 broadcast, is by definition not routable, and thus doesn’t work over VPNs.

There are so many threads on Ubiquiti’s forums, Reddit, etc. crying foul over this that I really can’t pick one to link. Most come to the same conclusion I did: this seems like a deliberate choice (the console picker in the Unifi app lets you manually specify a console, so they obviously knew this would be a requirement, and if they didn’t, their UX architects need a stern talking-to), presumably to railroad you into using their UI cloud offering, similar to the way that (despite Ubiquiti understanding LLDP) setting up the UDM requires configuring UI cloud first, and only then can you disable it.

The good news? They added manual console configuration in a beta version of the Android app just a couple of weeks ago, and the release started rolling out yesterday. I signed up for the beta and confirmed it works; for Sabriena’s phone we’ll just wait for the update to hit general availability.

More good news: the UDM-SE supports WireGuard natively now (not 100% sure when they added that), and since Tailscale fell over (I think I stuffed up the routing to the LXD container it runs in), I’ll just configure WireGuard instead. It’s dead easy to set up, so I’ll get Sabriena going on that too.
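“Dead easy” is not an exaggeration: the UDM generates the peer config for you, and the client side is a handful of lines (the keys, addresses, and endpoint below are obviously made up):

[Interface]
# The client's own key and its address inside the tunnel.
PrivateKey = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa=
Address = 192.168.3.2/32
DNS = 192.168.1.1

[Peer]
# The UDM-SE's public key, and which networks get routed through the tunnel.
PublicKey = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=
AllowedIPs = 192.168.0.0/16
Endpoint = vpn.example.org:51820
PersistentKeepalive = 25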

So now the only remaining issue is figuring out why push notifications don’t work; it’s not immediately clear whether they’re even supposed to work without the UI cloud. I may wind up just rigging something up with HomeAssistant instead. For now it comes with a chime, so it’s not critical (though it does mean we can’t answer the door if we’re away from home, which was the primary goal of this bullshit).
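If I do go the HomeAssistant route, a minimal sketch would be an automation on the doorbell’s ring sensor; the entity and notify service names here are hypothetical, and whatever the Unifi Protect integration actually exposes will differ:

# automations.yaml sketch: push a mobile notification when the doorbell rings.
automation:
  - alias: "Doorbell pressed"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_doorbell_ring
        to: "on"
    action:
      - service: notify.mobile_app_my_phone
        data:
          message: "Someone is at the front door"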

But the main thing I wanted accomplished is done: this house formerly had two doorbells, a small wireless one which didn’t have an inside part and was trivially removed, and the outdoor unit of the intercom system, which I was able to remove too, so that people will stop pushing it and wondering why we don’t answer the door. I haven’t decided whether to cover the hole with a nice wooden plaque (there’s a wooden plaque there now, but it’s not nice) or to take one of the spare bricks and mortar it into place. I don’t think I’m a good enough mason to make that look neat, so I’ll probably have Dad route the edges on some dressed timber, stain it up, and probably put the house number on it or something so it looks the part.
