"WASHINGTON (AP) — The U.S. Air Force said Thursday it would deny all
transgender service members who have served between 15 and 18 years the option
to retire early and would instead separate them without retirement benefits.
A SWAY session by Joanne of Royal Far West School (http://sway.org.au/), via https://coviu.com/. SWAY is an oral language and literacy program based on Aboriginal knowledge, culture and stories. It was developed by educators, Aboriginal Education Officers and speech pathologists at the Royal Far West School in Manly, NSW.
This screencast shows how a user of the PARADISEC catalog logs in and explores the collections, items and files that the archive contains.
Screencast of how to use the PARADISEC catalog for managing and publishing collections.
Screencast of how a PARADISEC administrator uses the PARADISEC catalog for managing the consistency of metadata and staying on top of uploaded files.
As I’m currently somewhat underemployed, and could do with some extra income, I’m starting an open source crowd-funding experiment. My hypothesis is that the open source community, and perhaps a community-minded company or two, really wants more open source code in the world, and is willing to put a few dollars my way to make that happen.
To begin with, I’m asking for contributions to implement a bunch of feature requests on action-validator, a Rust CLI tool I wrote to validate the syntax of GitHub actions and workflows.
The premise is quite simple: for every AU$150 (about US$100) I receive in donations, I’ll implement one of the nominated feature requests.
If people want a particular feature implemented, they can nominate a feature in their donation message, otherwise when “general” donations get to AU$150, I’ll just pick a feature that looks interesting.
More details are on my code fund page.
In the same spirit of simplicity, donations can be made through my Ko-fi page, and I’ll keep track of the various totals in a hand-written HTML table.
So, in short, if you want more open source code to exist, now would be a good time to visit my Ko-fi page and chip in a few dollars. If you’re curious to know more, my code fund page has a list of Foreseeably Anticipated Questions that might address your curiosity. Otherwise, ask your questions in the comments or email me.
Reagan: His Life and Legend by Max Boot
A fairly straightforward single-volume biography: it covers everything without being a day-by-day account. Especially good on Reagan’s early life. Recommended 4/5
Pillars of Creation: How the James Webb Telescope Unlocked the Secrets of the Cosmos by Richard Panek
Good, but a bit shorter than I would have liked. It could really have used another 100 pages on the development of the telescope and a few stories about the researchers. 3/5
My Scoring System
I am always looking for ways of increasing the compute power I have at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both of them were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.
The home server and home workstation I currently use have socket LGA2011-v3 [3], which supports the E5-2699A v4 CPU, rated at 26,939 by Passmark [4]. That Passmark score is quite decent: you can get CPUs using DDR4 RAM that score almost double that, but it’s a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.
The Dell PowerEdge T430 is an OK dual-CPU tower server using the same socket. One thing that’s not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with a 145W TDP (I’ve tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference; the limit is in the motherboard. With only a single CPU you get 8 of the 12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU, presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.
The Dell PowerEdge T630 also takes the same CPUs but supports higher TDPs than the T430. It also supports 18×3.5″ or 32×2.5″ disks, but T630s are noisy and I wouldn’t buy one for home use.
There are some nice AMD CPUs manufactured around the same time, and AMD has done a better job of making multiple CPUs fit the same socket. The reason I don’t generally use AMD CPUs is that they are used in a minority of server grade systems, so as I want ECC RAM and other server features I generally can’t find AMD systems at a reasonable price on eBay etc. There are people who really want second hand server grade systems with AMD CPUs, and they outbid me. This is probably a region dependent issue; maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.
Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs, compared to the 4 channels and 8 DIMMs of LGA2011, and it also has a limit of 64G total RAM for most systems and 128G for some. By today’s standards even 128G is a real limit for server use. DDR4 RDIMMs are about $1/GB, and when spending $600+ on a system and CPU upgrade you wouldn’t want to spend less than $130 on RAM. The CPUs with decent performance for that socket, like the i9-9900K, aren’t supported by the T330 (possibly because they don’t support ECC RAM), and the CPUs that Dell does support perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.
The Lenovo P330 uses socket LGA1151-2 but has the same issue of taking slow CPUs, in addition to using UDIMMs, which are significantly more expensive on the second hand market.
The next Intel socket after LGA2011-v3 is LGA2066 [6], found in the Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,404 on Passmark or a W-2295 for 30,906. The variant of the Dell 5820 that supports the i9 CPUs doesn’t seem to support ECC RAM, so it’s not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2640 to 2055, a 28% increase for the W-2295. There are “High Frequency Optimized” CPUs for socket LGA2011-v3, but they all deliver less than 2,300 on the Passmark single-thread tests, which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on eBay, while the E5-2699A v4 is readily available for under $400; a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].
Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system: it’s a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs a $500 system (Dell Precision 5820) + $1000 CPU (W-2295), so more than twice the price for a 30% performance benefit on some tasks. LGA2011-v3 and USB-C both launched in 2014, so LGA2011-v3 systems don’t have USB-C sockets, but a $20 USB-C PCIe card doesn’t change the economics.
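To put rough numbers on that, here’s the price/performance arithmetic (my calculation, using the approximate AU$ prices quoted above and the Passmark multi-thread scores from the table at the end of this post):

```ruby
# Approximate AU$ prices and Passmark multi-thread scores from this post.
options = {
  "HP z640 + E5-2699A v4"        => { cost: 300 + 400,  passmark: 26_939 },
  "Dell Precision 5820 + W-2295" => { cost: 500 + 1000, passmark: 30_924 },
}

options.each do |name, o|
  puts format("%-30s AU$%-5d %4.1f Passmark points per AU$",
              name, o[:cost], o[:passmark].to_f / o[:cost])
end
# The LGA2066 option costs just over twice as much for ~15% more
# multi-thread performance; the ~30% figure above is for single-thread.
```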
Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM, which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for it is the Xeon Gold 5120, which performs only slightly better than the E5-2683 v4 (a CPU with a low enough TDP that a T430 can run two of them). But according to another Dell web page they support 16 core CPUs, which would mean performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system; if I got one cheap I could find a use for it, but I wouldn’t pay the prices they go for on eBay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs, but I anticipate that it would be as loud as the T630 and it’s also expensive.
This socket is also used in the HP Z6 G4, which takes a W-3265 or Xeon Gold 6258R CPU as the high end options. The HP Z6 G4 systems on eBay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while a Xeon Gold 6258R in a Z6 G4 would give 50% better performance on multithreaded operations than the systems I currently have, it costs almost 3× as much. It has 6 DIMM sockets, which is a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on eBay), compared to my z840 which has 512G with half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need an “M” variant to support more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don’t have enough entries in the Passmark database to be accurate, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn’t seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go to 2400; a 10% increase in RAM speed is nice, but not a huge difference.
I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.
I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released; some people are hoping for DDR6 to be released late this year, but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.
Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.
| CPU | Single Thread | Multi Thread | TDP |
|---|---|---|---|
| E5-2683 v4 | 1,713 | 17,591 | 120W |
| Xeon Gold 5120 | 1,755 | 18,251 | 105W |
| i9-9900K | 2,919 | 18,152 | 95W |
| E5-2697A v4 | 2,106 | 21,610 | 145W |
| E5-2699A v4 | 2,055 | 26,939 | 145W |
| W-3265 | 2,572 | 30,105 | 205W |
| W-2295 | 2,642 | 30,924 | 165W |
| i9-10980XE | 2,662 | 32,397 | 165W |
| Xeon Gold 6258R | 2,080 | 40,252 | 205W |
On December 14, 2024 – three weeks after I published the last exciting installment in this series of posts – our new Redflow ZCell battery, which had replaced the original one after it developed a leak in the electrode stack, itself failed due to a leak in the electrode stack. With Redflow in liquidation there was obviously no way I was getting a warranty replacement this time around. Happily, Aidan Moore from QuantumNRG put me in touch with Jason Litchfield from GrazAg, who had obtained a number of Redflow’s post-liquidation stock of batteries. With the Christmas holidays coming up, the timing wasn’t great, but we were ultimately able to get the failed unit replaced with a new ZBM3.
At this point the obvious question from anyone who’s been following the Redflow saga is probably going to be: why persevere, especially in light of this article from the ABC, which speaks of ongoing reliability issues and disturbingly high failure rates for these batteries? That’s a good question, and like many good questions it has a long and complicated answer.
The technical path of least resistance would have been to migrate to a small rack of Pylontech batteries, as these apparently Just Work™ with our existing Victron inverter/charger gear. The downside is they’re lithium, so a non-zero fire risk, and our installation is currently in the crawl space under the dining room. If we switched to lithium batteries, we’d need to arrange a separate outdoor steel enclosure of some kind with appropriate venting and fans, probably on the other side of the driveway, and get wiring to and from that. My extremely hand-wavey guess at the time was that it’d easily have cost us at least $20K to do that properly, with maybe half of that being the batteries.
The thing is, I remain convinced that flow batteries are in general a better idea for long-term stationary energy storage than lithium. This article from the Guardian provides a quick high-level summary of what makes flow batteries different. What I really want to be able to do – given Redflow is gone – is migrate to another flow battery, ideally one that actually lives up to the promise of multi-decade longevity. Maybe someone will finally come up with a residential scale vanadium flow battery. Maybe someone will buy Redflow’s IP, carry on their work and fix some of their reliability issues (the latest update from the liquidators at the time of writing says that they have “entered an exclusive negotiation period with a party for the acquisition of Redflow Group’s intellectual property (IP) and certain specific assets”). Maybe we’ll even see a viable open source flow battery – I would love for this to happen, not least because if it failed I’d probably be able to figure out how to fix the damn thing myself!
Leaving our current system in place, and swapping in a new ZBM3 meant we could kick the migration can down the road a ways. It bought us more time to see what other technologies develop, and it cost a lot less in the short term than migrating to lithium would have: $2,750 including GST for a post-demise-of-Redflow 10kWh ZBM3 (although shipping was interesting – more on that later). The real trick going forwards is seeing exactly how far down the road we’ll be able to kick that can. How can we ensure the greatest possible longevity of the new battery?
The ABC article puts the failures down to manufacturing problems, notably a dependence on repurposed third-party components. While I can see that dependence causing all sorts of extremely irritating manufacturing and design issues, I’m not entirely convinced this is the whole story. I will freely admit that my personal sample is very small, but my two batteries both failing due to electrode stack leaks? If a hose had split or a pump had died, or some random doohickey had let the magic smoke out, then OK, cool, I get it: those I can see being repurposed third-party components. But these failures were apparently in the electrode stack, and I’m struggling to see how that could be a repurposed third-party component. If nothing else, the stack (and the tanks) are surely the pieces that Redflow manufactured themselves. This is their core technology. What could be causing stack leaks? Are they just poorly manufactured, or is there some sort of chemical failure at runtime which physically splits the stack? Or something else? Bear in mind that this is all speculation on my part – I’m neither a chemist nor a battery manufacturer – but I know what I’ve seen, and I know what I’ve heard about leaks in other people’s batteries.
On the chemistry front, I found a paper from 2023 entitled Scientific issues of zinc-bromine flow batteries and mitigation strategies. This was authored by a bunch of researchers from the University of Queensland and the former CTO of Redflow, and highlights hydrogen evolution, zinc corrosion and zinc dendrite formation as the fundamental issues with zinc bromine flow batteries. I sincerely hope the authors will forgive me for condensing their fascinating ~9,000 word paper into the following 95 word paragraph:
When the battery is being charged, zinc is plated onto the electrodes. During discharge, the zinc is removed. Dendrites (little tree like structures) can grow due to uneven zinc deposition, or due to hydrogen gas evolution. Left unchecked, dendrites can puncture the separator between the electrodes and lead to short circuits. Additionally, hydrogen gas generated by the battery can raise the electrolyte pH. If the pH is too high, solid zinc can clog a membrane in the stack. If the pH is too low, it can cause zinc corrosion which can make the battery self-discharge.
What if Redflow just never completely solved or mitigated the above issues? Could a dendrite puncture not just the separator, but actually split the stack and result in it leaking? Could clogged membranes combined with hydrogen gas create enough pressure to do the same?
We know that ZBMs have a maintenance cycle which runs at least every 72 hours to first discharge the battery then (theoretically) completely strip the zinc from the electrodes over a subsequent two hour period. We also know that ZBMs have a carbon sock which sits inside the zinc electrolyte tank and helps to keep electrolyte pH in the correct operating range. This needs to be replaced annually.
What if 72 hours is still too long between maintenance cycles? If you search back far enough you’ll find that the maximum maintenance period was originally 96 hours, and I assume that was later revised down to 72 hours after experience in the field. I’ve had subsequent correspondence which says that even more frequent maintenance (24-48 hours) can be better for the batteries. I’ve also encountered a curious intermittent fault with the ZBM3 where occasionally the Strip Pump Run Timer in the battery operates at half speed. If that happens and you don’t notice and reset the battery, the maintenance cycle will actually occur every 144 hours, which is way too long.
In the past I’ve observed frequent high charge current warnings in the Battery Management System (BMS) logs. This is actually normal, as by default the charge voltage is configured to be 57.5V, and there’s a separate high current voltage reduction setting of 1V. The idea is that this will try to make the battery charge as quickly as possible, and if the current gets too high, it will drop the charge voltage dynamically by 1V, which results in current reduction. Is it possible this variable (i.e. potentially uneven) charge current results in uneven zinc deposition?
I’ve also noticed that the battery State of Charge (SoC) calculations get sketchier the longer it’s been between maintenance cycles. If I have maintenance set to 72 hours, then at the end of the maintenance cycle, the battery fairly reliably still reports about 7% SoC. With a 48 hour maintenance period, it reports about 3% SoC at end of maintenance, and with a 24 hour maintenance period, it’s more like 1%. Once maintenance completes the SoC is reset to 0% automatically (because the battery really is empty at that point), but this got me thinking… If the SoC calculation is off, is there any way the battery could inadvertently allow itself to overcharge? Given the numbers above are all obviously overestimates I hope it’s more likely that the battery undercharges, but still, I had to wonder.
Aidan suggested three configuration tweaks which Redflow had told him could potentially help optimise battery lifespan: limiting the maximum state of charge (SoC), shortening the maintenance interval, and lowering the charge voltage.
These are all done via the BMS. The maximum SoC and maintenance time limit are set on the Battery Maintenance screen under Capacity Limiting and Maintenance Timing respectively; I went with 90% SoC and 48 hour maintenance. The charge voltage is on the EMS Integration screen. In my case, the Normal Charge Voltage was originally 57V, and as I dropped it by 1.5V to get to 55.5V, I dropped the Charge-Blocked and Discharge/Maintenance Cycle voltages by the same amount.
Dropping the maximum SoC means that the battery can’t get completely full and stay there for a long time. This must reduce the total amount of zinc plated on the electrodes, which I hope helps reduce dendrite formation. I also found when reading the paper mentioned earlier that “H2 evolution occurs mostly near the top of charge with mossy or spongy like zinc being plated”, which looks like another good reason to avoid fully charging the battery.
Dropping the charge voltage necessarily reduces the charge current, and I assume keeps it much more even than it would be otherwise. I have not seen any high charge current warnings since making this change. On the other hand, it does mean the battery charges slower than it would otherwise. I did a little experiment to test this, just watching the figures for amperage and kW the BMS gave me as I tweaked charge voltages.
This means I’m not using the battery as effectively as I could be with a higher charge voltage/current, but if this serves to extend the battery life, I think it’s worth it under the circumstances.
It’s also important to keep an eye on the Strip Pump Run Timer, which went weird on me a couple of times. I really should write a little script to automatically warn me if it starts running at half speed (a sketch of one is below), but I’ve been habitually looking at the BMS briefly almost every day since the system was installed, so I noticed when this problem occurred because the maintenance timing was off. To reset a battery that gets into this state, go to Tools: ZBM Modbus Tool and write the value 0x80 to register 0x2053. This will appear to fail, because the write immediately resets the unit, which thus never reports a successful write, but it does the trick.
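For what it’s worth, here’s a minimal sketch of what such a watchdog could look like. The `read_strip_pump_timer` helper is hypothetical – it stands in for however you actually pull that value out of your BMS (scraping the web UI, Modbus, etc.):

```ruby
#!/usr/bin/env ruby
# Hypothetical watchdog sketch: if the Strip Pump Run Timer advances at
# roughly half the rate of wall-clock time, maintenance cycles will
# effectively take twice as long as configured, so warn loudly.

def read_strip_pump_timer
  # Stand-in: fetch the current timer value (in seconds) from your BMS here.
  raise NotImplementedError, "implement BMS access for your setup"
end

INTERVAL = 600 # seconds between samples

previous = read_strip_pump_timer
loop do
  sleep INTERVAL
  current = read_strip_pump_timer
  rate = (current - previous).to_f / INTERVAL
  warn "Strip Pump Run Timer at #{(rate * 100).round}% of expected speed!" if rate < 0.75
  previous = current
end
```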
Some time in the next six months I’m going to need to beg, borrow, steal or figure out how to manufacture carbon socks. The good news is that this time the replacement procedure is going to be really easy, because unlike the ZBM2 (where you had to mess with some pipe work) and my previous ZBM3 (where there was a cap on the side which in my case would have been completely inaccessible due to proximity to a wall), this one has an easy access screw cap on the front of the electrolyte tank.
The Redflow cloud went offline in late October 2024. It had provided remote access to the BMS, and I understand that some Redflow customers were unaware that it’s possible to access the BMS locally without the cloud. The Redflow cloud allowed firmware updates, and also let Redflow staff monitor batteries and configure them remotely, but it is not actually a hard requirement for the batteries to continue to operate.
One way to access the BMS locally is via the wifi network on the BMS itself. If this is turned on, and you search for wifi access points you should find one named something like “zcell-bms-XXXX”. The password should be “zcellzcell”. Once you’re connected, open a web browser and go to http://zcell:3000. If that doesn’t work, try http://172.16.29.241:3000. This should let you see the BMS status. If you try to make any configuration changes it will ask you to log in. The default username and password are “admin” and “admin”. These can be changed under Configuration: Users.
The other way of accessing the BMS is to connect to whatever the IP address of the BMS is on your local network. The trick in this case is figuring out what the IP address is. I know what mine is because I logged into my router and looked at its list of attached devices.
Given the Redflow cloud is down and Redflow is out of business, I would actually suggest going into the BMS Site Configuration screen and unchecking the “Enable BMS cloud connection” and “Allow Redflow access to system for service intervention” boxes. There are two reasons for this:
Personally I hope whoever buys the Redflow IP will turn the cloud back on, in which case the above advice will no longer apply.
Individuals such as myself can’t just ring up a random courier and say “Hey, can you please go to New South Wales, pick up a 278kg crate with hazchem stickers that say ‘corrosion’ and have pictures of dead fish, and bring it to me here in Tasmania?” The courier will say “Hell no”, unless you have an account with them. Accordingly I would like to thank Stuart Thomas from Alive Technologies through whom I was able to arrange shipping, because his company does have an account with a courier, and he was also after some batteries so we were able to do a combined shipment. If anyone else is looking to move these batteries around, the courier in this instance was Imagine Cargo. I understand Redflow in the past used Mainfreight and Chemcouriers. In all cases, the courier will need to know the exact dimensions and weight which are in the manual, and will want a safety data sheet. Here they are:
Further thanks to Stuart and Gus (whose flatbed truck almost didn’t make it up our driveway) for last mile delivery, swapping the new ZBM3 into the old enclosure, and getting the damn thing in under our house.
It’s disappointing on many levels that Redflow went under, but like I said earlier, I remain convinced that flow batteries are in general a better idea for long-term stationary energy storage than lithium. I find it interesting that the sale of Redflow’s IP includes “specific assets and shares in Redflow (Thailand) Limited”. Given that’s where the manufacturing was done, could that indicate that the buyer is interested in potentially carrying on further development or manufacturing work? The identity of the buyer remains confidential right now, and final settlement is still a year away, so I guess we’ll just have to wait and see.
Our new ZBM3 was commissioned on March 18, and has been running well ever since. I’ve done everything I know to do to try to ensure it has a long and happy life, and will continue to keep a very close eye on it. There will be followup posts if and when anything else interesting happens.
Some time rather earlier in this journey, I found an easter egg in the BMS, which I didn’t mention in any of my previous posts. I think that might be a nice note to finish on here.
Full/detailed voting results I am basing this off – you should read this post while looking at those results.
Huge increase in the number of votes: from 144 last year to 2785 this year
The election was to elect two Council Members from 11 candidates.
The FSU supported Jonathan Ayling and Douglas Brown and gave instructions to their members to vote for them. Other left-wing people circulated voting lists that often included Dylan Reeve and Adam Hunt near the top while discouraging voting for Jonathan Ayling, Douglas Brown and Brynn Neilson.
The initial round saw the FSU’s Jonathan Ayling and Douglas Brown get 1046 votes between them, or 38% of those cast, ranking 1st and 3rd. Since the winning threshold was 929, the FSU was almost certain to win at least one seat at this point. The FSU had encouraged their voters to rank Jonathan and Douglas 1st or 2nd according to the voter’s birthday (presumably to split first preferences evenly between them). Not all FSU voters followed this, and about 2/3 ranked Jonathan first, which meant he was very likely to be elected.
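As an aside, that 929 threshold is consistent with the standard Droop quota for 2785 votes and two seats (my check; I’m assuming the usual formula, which matches the published figure):

```ruby
votes, seats = 2785, 2
# Droop quota: the smallest number of votes that only `seats` candidates
# can simultaneously reach.
quota = votes / (seats + 1) + 1 # integer division: 2785 / 3 = 928, so quota = 929
```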
Dylan Reeve was ranked 2nd, while 4th was Bianca Grizhar. It is possible that, since candidates were listed in alphabetical order by first name, Bianca benefited. Or possibly Bianca had lots of fans who didn’t post to the places I saw.
The bottom 6 candidates (Seth, Roger, Brynn, Toby, Suzie, Maureen) were all eliminated in this round, because their combined 118 votes were fewer than the 188 votes of the next-lowest candidate, Peter-Lucas Jones. So no amount of shuffling votes between them could put any of them ahead of Peter-Lucas.
Of the 118 votes from the defeated candidates, 11 couldn’t be distributed in the 2nd round because they didn’t rank any candidates still in the race (i.e. they were exhausted). The rest were spread fairly evenly, with Bianca picking up the most and Douglas and Jonathan less, indicating that these voters were avoiding the FSU candidates.
Peter-Lucas was now in last place, 19 votes behind Kaye-Maree, so he was eliminated.
Peter-Lucas’ 202 votes were transferred quite unevenly: 15 were exhausted, but 56% of the rest went to Kaye-Maree. Kaye-Maree was often grouped with Peter-Lucas on a “Māori ticket”, which would explain the transfers from him to her. Only one of Peter-Lucas’ votes went to an FSU candidate.
Because Kaye-Maree picked up so many votes from Peter-Lucas, she moved ahead of Adam. Adam himself picked up relatively few of the transfers, so he was now in last place and was eliminated.
Of Adam’s 320 votes, 22 were exhausted. 51% of the rest went to Kaye-Maree, while just 9 (3%) went to FSU candidates. Dylan also picked up enough votes to put him in 1st place. The remaining candidates were now Dylan, Jonathan, Kaye-Maree, Bianca and Douglas.
Once again Kaye-Maree picked up enough transfers to keep her out of last place, which meant that in this round the FSU’s Douglas Brown was eliminated. I’ll note that Douglas picked up 350 votes in the first round (when he was 3rd) but since then had received just 13 more votes in transfers, and was in 5th place when he was eliminated (both Kaye-Maree and Bianca had passed him, with gains of 279 and 133 respectively).
As expected, 96% of Douglas’ votes transferred to Jonathan, which put him well over the threshold, and he was elected. So Jonathan was the 1st candidate elected, in round 5.
Since Jonathan had 1052 votes, he was 123 votes over the 929 winning threshold. However, 995 of his 1052 voters did not rank Bianca, Dylan or Kaye-Maree, so only 6.66445 votes were transferred instead of 123.
Of the 57 who did rank one of the three remaining candidates, 25 picked Dylan, 20 Kaye-Maree and 12 Bianca. So the rankings between those 3 were unchanged.
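The arithmetic behind that oddly precise 6.66445 figure is the standard surplus transfer calculation: every one of the winner’s ballots transfers at a fractional value, but only the ballots that rank a continuing candidate actually move anywhere (my reconstruction from the published numbers):

```ruby
jonathan_votes = 1052
quota          = 929
surplus        = jonathan_votes - quota        # 123 votes over quota
transfer_value = surplus.to_f / jonathan_votes # each ballot is worth ~0.11692 votes
transferable   = 57                            # ballots ranking Bianca, Dylan or Kaye-Maree
puts transferable * transfer_value             # => 6.66445 (to 5 decimal places)
```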
Bianca was thus in last place, 27 votes behind Kaye-Maree, so she was eliminated.
This left just two candidates: Dylan on 740, and Kaye-Maree 255 votes behind on 485. Since Bianca was on 458 votes, her transfers could have elected either candidate, but since Dylan was significantly ahead, around 75% of them would have had to go to Kaye-Maree for her to win.
In reality just 44% of Bianca’s votes went to Kaye-Maree, 41% to Dylan, and 15% were exhausted. So Dylan ended up 258 votes ahead of Kaye-Maree and just a couple of votes short of being elected in that round.
Kaye-Maree was thus eliminated.
Kaye-Maree’s votes were officially transferred to Dylan and Dylan was elected as the 2nd winner.
Ignoring SMS, which is vulnerable to SIM-swapping attacks, TOTP (Time-based One-Time Passwords) is probably the most popular second factor authentication method at the moment. While reviewing a pull request adding support for TOTP, I decided to investigate the current state of authenticators in 2025 with regards to their support for the various security parameters.
A previous analysis from 2019 found that many popular authenticators were happy to accept parameters they didn't actually support and then generate the wrong codes. At the time, a service wanting to offer TOTP to its users had to stick to the default security parameters or face major interoperability issues with common authenticator clients. Has the landscape changed or are we still stuck with security decisions made 15 years ago?
As an aside: yes, everybody is linking to a wiki page for an archived Google repo because there is no formal spec for the URI format.
I tested a number of Android authenticators against the oathtool client:
```
/usr/bin/oathtool --totp=SHA1 --base32 JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI
/usr/bin/oathtool --totp=SHA256 --base32 JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI
```
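(If you’d rather generate the reference codes from Ruby than from oathtool, the rotp gem can do the same thing – a quick sketch of mine, not something the original tests depend on:)

```ruby
require "rotp" # gem install rotp

secret = "JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI"

# Default TOTP parameters: SHA-1, 6 digits, 30 second time step.
puts ROTP::TOTP.new(secret).now

# Non-default algorithm, matching the oathtool --totp=SHA256 invocation above.
puts ROTP::TOTP.new(secret, digest: "sha256").now
```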
1Password:
Authy (Twilio):
Bitwarden Authenticator:
Duo Security:
Google Authenticator:
LastPass Authenticator:
Microsoft Authenticator:
I also tested the infamous Google Authenticator on iOS:
```
otpauth://totp/francois+1%40brave.com?secret=JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJ&issuer=Brave%20Account&algorithm=SHA1&image=https://account.brave.com/images/email/brave-41x40.png
otpauth://totp/francois+1%40brave.com?secret=JVRWCZDTMVZWK5BAMJSSAZLOMVZGK5TJMVXGIZLDN5SGKZBAOVZI&issuer=Brave%20Account&algorithm=SHA1&image=https://account.brave.com/images/email/brave-41x40.png
```
So unfortunately, the 2019 recommendations still stand:
You should also avoid putting the `secret` parameter last in the URI, to avoid breaking some versions of Google Authenticator which parse these URIs incorrectly.

Other security and user experience considerations: the optional `image` and `color` parameters. Most clients will ignore them, but they also don’t hurt.

Louis Rossmann made an informative YouTube video about right to repair and the US military [1]. This is really important as it helps promote free software and open standards.
The ACM has an insightful article about hidden controls [2]. We need EU regulations about hidden controls in safety critical systems like cars.
This Daily WTF article has some interesting security implications for Windows [3].
Earth.com has an interesting article about the “rubber hand illusion” and how it works on octopuses [4]. For a long time I have been opposed to eating octopus because I think they are too intelligent.
Micah Lee wrote an informative guide to using Signal groups for activism [6].
David Brin wrote an insightful blog post about the phases of the ongoing US civil war [7].
tl;dr: there is an attack in the wild which is triggering dangerous-but-seemingly-intended behaviour in the Oj JSON parser when used in the default and recommended manner, which can lead to everyone’s favourite kind of security problem: object deserialization bugs! If you have the `oj` gem anywhere in your `Gemfile.lock`, the quickest mitigation is to make sure you have `Oj.default_options = { mode: :strict }` somewhere, and that no library is overwriting that setting to something else.
As a sensible sysadmin, all the sites I run send me a notification if any unhandled exception gets raised. Mostly, what I get sent is error-handling corner cases I missed, but now and then… things get more interesting.
In this case, it was a `PG::UndefinedColumn` exception, which looked something like this:

```
PG::UndefinedColumn: ERROR: column "xyzzydeadbeef" does not exist
```
This is weird on two fronts: firstly, this application has been running for a while, and if there was a schema problem, I’d expect it to have made itself apparent long before now. And secondly, while I don’t profess to perfection in my programming, I’m usually better at naming my database columns than that.
Something is definitely hinky here, so let’s jump into the mystery mobile!
The exception notifications I get sent include a whole lot of information about the request that caused the exception, including the request body. In this case, the request body was JSON, and looked like this:
{"name":":xyzzydeadbeef", ...}
The leading colon looks an awful lot like the syntax for a Ruby symbol, but it’s in a JSON string. Surely there’s no way a JSON parser would be turning that into a symbol, right? Right?!?
Immediately, I thought that that possibly was what was happening, because I use Sequel for my SQL database access needs, and Sequel treats symbols as database column names. It seemed like too much of a coincidence that a vaguely symbol-shaped string was being sent in, and the exact same name was showing up as a column name.
But how the flying fudgepickles was a JSON string being turned into a Ruby symbol, anyway? Enter… Oj.
A long, long time ago, the “standard” Ruby JSON library had a reputation for being slow. Thus did many competitors flourish, claiming more features and better performance. Strong amongst the contenders was `oj` (for “Optimized JSON”), touted as “The fastest JSON parser and object serializer”.
Given the history, it’s not surprising that people who wanted the best possible performance turned to Oj, leading to it being found in a great many projects, often as a sub-dependency of a dependency of a dependency (which is how it ended up in my project).
You might have noticed in Oj’s description that, in addition to claiming “fastest”, it also describes itself as an “object serializer”. Anyone who has kept an eye on the security bug landscape will recall that “object deserialization” is a rich vein of vulnerabilities to mine. Libraries that do object deserialization, especially ones with a history that goes back to before the vulnerability class was well-understood, are likely to be trouble magnets.
And thus, it turns out to be with Oj.
By default, Oj will happily turn any string that starts with a colon into a symbol:

```ruby
>> require "oj"
>> Oj.load('{"name":":xyzzydeadbeef","username":"bob","answer":42}')
=> {"name"=>:xyzzydeadbeef, "username"=>"bob", "answer"=>42}
```
How that gets exploited is only limited by the creativity of an attacker. Which I’ll talk about more shortly – but first, a word from my rant cortex.
While the object of my ire today is Oj and its fast-and-loose approach to deserialization, it is just one example of a pervasive problem in software: insecurity by default.
Whether it’s a database listening on `0.0.0.0` with no password as soon as it’s installed, or a library whose default behaviour is to permit arbitrary code execution, it all contributes to a software ecosystem that is an appalling security nightmare.
When a user (in this case, a developer who wants to parse JSON) comes across a new piece of software, they have – by definition – no idea what they’re doing with that software. They’re going to use the defaults, and follow the most easily-available documentation, to achieve their goal. It is unrealistic to assume that a new user of a piece of software is going to do things “the right way”, unless that right way is the only way, or at least the by-far-the-easiest way.
Conversely, the developer(s) of the software is/are the domain experts. They have knowledge of the problem domain, through their exploration while building the software, and unrivalled expertise in the codebase.
Given this disparity in knowledge, it is tantamount to malpractice for the experts – the developer(s) – to off-load the responsibility for the safe and secure use of the software to the party that has the least knowledge of how to do that (the new user).
To apply this general principle to the specific case, take the “Using” section of the Oj README.
The example code there calls Oj.load
, with no indication that this code will, in fact, parse specially-crafted JSON documents into Ruby objects.
The brand-user user of the library, no doubt being under pressure to Get Things Done, is almost certainly going to look at this “Using” example, get the apparent result they were after (a parsed JSON document), and call it a day.
It is unlikely that a brand-new user will, for instance, scroll down to the “Further Reading” section, find the second last (of ten) listed documents, “Security.md”, and carefully peruse it. If they do, they’ll find an oblique suggestion that parsing untrusted input is “never a good idea”. While that’s true, it’s also rather unhelpful, because I’d wager that by far the majority of JSON parsed in the world is “untrusted”, in one way or another, given the predominance of JSON as a format for serializing data passing over the Internet. This guidance is roughly akin to putting a label on a car’s airbags that “driving at speed can be hazardous to your health”: true, but unhelpful under the circumstances.
The solution is for default behaviours to be secure, and any deviation from that default that has the potential to degrade security must, at the very least, be clearly labelled as such.
For example, the `Oj.load` function should be named `Oj.unsafe_load`, and the `Oj.load` function should behave as the `Oj.safe_load` function does presently.
By naming the unsafe function as explicitly unsafe, developers (and reviewers) have at least a fighting chance of recognising they’re doing something risky.
We put warning labels on just about everything in the real world; the same should be true of dangerous function calls.
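If you want that kind of explicit labelling in your own codebase today, one hypothetical stopgap is a thin wrapper (my sketch, not part of Oj’s API):

```ruby
require "oj"

# Hypothetical wrapper giving Oj's parsing entry points self-describing names.
module SafeOj
  # Parses plain JSON only: strings stay strings, no object creation.
  def self.load(json)
    Oj.load(json, mode: :strict)
  end

  # Explicitly-labelled dangerous variant, for when you genuinely want
  # Oj's object deserialization and have audited the input.
  def self.unsafe_load(json)
    Oj.load(json, mode: :object)
  end
end
```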
OK, rant over. Back to the story.
So far, I’ve hopefully made it clear that Oj does some Weird Stuff with parsing certain JSON strings. It caused an unhandled exception in a web application I run, which isn’t cool, but apart from bombing me with exception notifications, what’s the harm?
For starters, let’s look at our original example: when presented with a symbol, Sequel will interpret that as a column name, rather than a string value. Thus, if our “save an update to the user” code looked like this:

```ruby
# request_body has the JSON representation of the form being submitted
body = Oj.load(request_body)
DB[:users].where(id: user_id).update(name: body["name"])
```
In normal operation, this will issue an SQL query along the lines of `UPDATE users SET name='Jaime' WHERE id=42`. If the name given is “Jaime O’Dowd”, all is still good, because Sequel quotes string values, etc etc. All’s well so far.
But imagine there is a column in the `users` table that users normally cannot read, perhaps `admin_notes`. Or perhaps an attacker has gotten temporary access to an account, and wants to dump the user’s password hash for offline cracking. So, they send an update claiming that their name is `:admin_notes` (or `:password_hash`). In JSON, that’ll look like `{"name":":admin_notes"}`, and `Oj.load` will happily turn that into a Ruby object of `{"name"=>:admin_notes}`. When run through the above “update the user” code fragment, it’ll produce the SQL `UPDATE users SET name=admin_notes WHERE id=42`. In other words, it’ll copy the contents of the `admin_notes` column into the `name` column – which the attacker can then read out just by refreshing their profile page.
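You can see the difference in how Sequel literalises a string versus a symbol without touching the database, using its `*_sql` helpers (a sketch; exact identifier quoting depends on your adapter):

```ruby
# update_sql just returns the SQL string; nothing is executed.
DB[:users].where(id: 42).update_sql(name: "Jaime")
# => UPDATE users SET name = 'Jaime' WHERE (id = 42)     -- string: quoted value

DB[:users].where(id: 42).update_sql(name: :admin_notes)
# => UPDATE users SET name = admin_notes WHERE (id = 42) -- symbol: column reference
```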
That an attacker can read other fields in the same table isn’t great, but that’s barely scratching the surface. Remember before when I said that Oj does “object serialization”? That means that, in general, you can create arbitrary Ruby objects from JSON. Since objects contain code, it’s entirely possible to trigger arbitrary code execution by instantiating an appropriate Ruby object. I’m not going to go into details about how to do this, because it’s not really my area of expertise, and many others have covered it in detail. But rest assured, if an attacker can feed input of their choosing into a default call to `Oj.load`, they’ve been handed remote code execution on a platter.
As Oj’s object deserialization is intended and documented behaviour, don’t expect a future release to make any of this any safer. Instead, we need to mitigate the risks. Here are my recommended steps:
1. Check your `Gemfile.lock` (or SBOM, if that’s your thing) to see if the `oj` gem is anywhere in your codebase. Remember that even if you don’t use it directly, it’s popular enough that it is used in a lot of places. If you find it in your transitive dependency tree anywhere, there’s a chance you’re vulnerable, limited only by the ingenuity of attackers to feed crafted JSON into a deeply-hidden `Oj.load` call.
2. If you’ve installed `oj` directly and use it in your project, consider not doing that. The `json` gem is acceptably fast, and `JSON.parse` won’t create arbitrary Ruby objects.
3. If you must use `oj` to parse JSON, find all calls to `Oj.load` in your code and switch them to call `Oj.safe_load`.
4. If you’re deliberately using Oj’s object deserialization (roughly the equivalent of passing a `permitted_classes` argument to `Psych.load`), I’d make it a priority to move away from using Oj for that, and switch to something somewhat safer (such as the aforementioned Psych). At the very least, audit and comment heavily to minimise the risk of user-provided input sneaking into those calls somehow, and pass `mode: :object` as the second argument to `Oj.load`, to make it explicit that you are opting in to this far more dangerous behaviour only when it’s absolutely necessary.
5. If you can’t find (and fix) every call to `Oj.load` in your dependencies, consider setting the default Oj parsing mode to `:strict`, by putting `Oj.default_options = { mode: :strict }` somewhere in your initialization code (and make sure no dependencies are setting it to something else later!). There is a small chance that this change of default might break something, if a dependency is using Oj to deliberately create Ruby objects from JSON, but the overwhelming likelihood is that Oj’s just being used to parse “ordinary” JSON, and these calls are just RCE vulnerabilities waiting to give you a bad time.

If I’ve helped you identify and fix potential RCE vulnerabilities in your software, or even just opened your eyes to the risks of object deserialization, please help me out by buying me a refreshing beverage. I would really appreciate any support you can give. Alternately, if you’d like my help in fixing these (and many other) sorts of problems, I’m looking for work, so email me.
One thing that annoys me in TV shows and movies is getting the details wrong. Yes, it’s fiction, and yes, some things can’t be done correctly, and in some situations portraying things correctly goes against the plot. But otherwise I think they should try to be accurate.
I was just watching The Americans (a generally good show that I recommend) and in Season 4 Episode 9 there’s a close-up of a glass of wine which clearly shows that the Tears of Wine effect is missing; the liquid in the glass obviously has the surface tension of water, not of wine. When you run a show about spies you have to expect that the core audience will be the type of detail-oriented people who notice these things. Having actors not actually drink alcohol on set is standard practice; if they have to do 10 takes of someone drinking a glass of wine then actually drinking real wine would be a problem. But they could use real wine for the close-up shots, and of course just getting it right the first time is a good option.
Some ridiculous inaccuracies we just have to live with, like knives making a schwing sound when pulled out of scabbards and “silenced” guns usually still being quite loud (so many people are used to these being wrong). Organisations like the KGB had guns that were actually silent, but they generally looked obviously different from regular guns and had a much lower effective range.
The gold coins shown on TV are another ridiculous thing. The sound of metal hitting something depends on how hard it is and how dense it is. Surely most people have heard the sound of dropping steel nuts and ball bearings and the sound of dropping lead sinkers, and know that the sounds of items of similar size and shape differ greatly based on density and hardness. A modern coin made of copper, cupro-nickel (the current “silver” coins), or copper-aluminium (the current “gold” coins) sounds very different from a gold coin when dropped on a bench. For a show like The Witcher it wouldn’t be difficult to make actual gold coins of a similar quality to Iron Age coin production: any jeweller could make the blanks, and making stamps hard enough to press gold isn’t an engineering challenge (stamping copper coins would be much more difficult). The coins used for the show could be sold to fans afterwards.
Once coins are made they can’t be just heaped up. Even if you are a sorcerer you probably couldn’t fill a barrel a meter high with gold coins and not have it break from the weight and/or have the coins at the bottom cold welded. Gold coins are supposed to have a precise amount of gold and if you pile them up too high then the cold welding process will transfer gold between coins changing the value. If someone was going to have a significant quantity of gold stored then it would be in gold ingots with separators between layers to prevent cold welding.
Movies tend not to show coins close up, I presume that’s because they considered it too difficult to make coins and they just use some random coins from their own country.
Another annoying thing is shows that don’t match up the build dates of the objects used. It’s nice when they get it right, like the movie Titanic featuring a M1911 pistol, which is something a rich person in 1912 would likely have. The series Carnival Row (which I recommend) has weapons that mostly match our WW1 era; everything that doesn’t involve magic seems legit. One of the worst examples is the movie Anna (by Luc Besson, mostly a recreation of his film Nikita but set in the early 90s and with the KGB). That film features laptops with color screens and USB ports before USB was invented and when color screens weren’t common on laptops; as an aside, military spec laptops tend to have older designs than consumer spec ones.
I’ve mostly given up on hoping that movies will get “hacking” scenes that are any more accurate than knives making a “schwing” sound. But it shouldn’t be that hard for them to find computer gear that was manufactured in the right year to use for the film.
Why can’t they hire experts on technology to check everything?
When companies design products, a major concern seems to be what the reviewers will say about them. For any product of significant value the users are unable to perform any reasonable test before buying, and for a casual user some problems may only become apparent after weeks of use, so professional reviews are important to many people. The market apparently doesn’t want reviews of the form “here’s a list of products that are quite similar and all do the job well, you can buy any of them, it’s no big deal”, which would be the most technically accurate way of doing it.
So the reviewers compare the products on the criteria that are easiest to measure, which has led to phones being compared by how light and thin they are. I think it’s often the case that users would be better served by thicker, heavier phones with larger batteries, but instead they are being sold phones that have good battery life in a fresh installation but don’t last a day with a full load of apps installed.
The latest issue with bad reviews driving poor product design is electric cars. For a while the advocates of old fashioned cars have touted the range of petrol cars, and this has become a criterion for comparing EVs. I have been driving cars for 35 years and so far I have never driven anywhere that’s out of range of the current electric charging network, even with the range of the LEAF (which is smaller than many other EVs). If I ever felt the need to drive across the Nullarbor Plain then I could rent a car to do that, and the cost of such a car rental would be small compared to the money I’m saving by driving an EV, and also small compared to the premium I would have to pay for an EV with a larger range.
Some of the recent articles I’ve seen about EVs have covered vehicles with a battery range over 700Km, which is greater than the legal distance a commercial driver can drive without a break. I’ve also seen articles about plans to have a small petrol or Diesel motor in an EV to recharge the battery without directly driving the wheels. A 9KW Diesel motor could provide enough electricity on average to keep the charge maintained in a LEAF battery, and according to the specs of Diesel generators it would take about 55Kg of fuel to provide the charge a LEAF needs to drive 1000Km. The idea of a mostly electric hybrid car that can do 1000Km on one tank of fuel is interesting as a thought experiment but doesn’t seem to have much actual use. Apparently a Chinese company is planning to release a car that can do 1400Km on one tank of fuel using such technology, which is impressive but not particularly useful.
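A back-of-envelope check on that fuel figure (my assumptions: a LEAF uses roughly 15KWh per 100Km, and a small Diesel generator burns roughly 0.35Kg of fuel per KWh generated):

```ruby
kwh_per_100km   = 15.0   # assumed LEAF consumption
fuel_kg_per_kwh = 0.35   # assumed small Diesel generator fuel consumption

energy_needed = kwh_per_100km * 10              # ~150 KWh to drive 1000Km
fuel_needed   = energy_needed * fuel_kg_per_kwh # ~52.5Kg, close to the 55Kg quoted
puts fuel_needed
```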
The next issue of unreasonable competition is charge speed. Charging a car at 2KW from a regular power socket is a real limit to what you can do with a car. It’s a limit that hasn’t bothered me so far because the most driving I typically do in a week is less than one full charge, so at most I have to charge overnight twice a week. But if I was going to drive to another city without hiring a car that has better range, I’d need a fast charger. Most current models of the Nissan LEAF support charging speeds up to 50KW, which means fully charging the battery in under an hour (or slightly over an hour for the long range version). If I was to drive from Melbourne to Canberra in my LEAF I’d have to charge twice, which would be an annoyance at those speeds. There are a variety of EVs that can charge at 100KW and some as high as 350KW. 350KW is enough to fully charge the largest EV batteries in half an hour, which seems to be as much as anyone would need. But there are apparently plans for 1MW car chargers, which would theoretically be able to charge a Hummer (the EV with the largest battery) in 12 minutes. One obvious part of the solution to EV charging times is to not drive a Hummer! Another thing to note is that batteries can’t be charged at a high rate across all charge levels; this is why advertising for fast chargers makes claims like “80% charge in half an hour”, which definitely doesn’t mean “100% charge in 37.5 minutes”!
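Those charge times fall straight out of dividing battery capacity by charge rate, ignoring the taper near full charge (the battery sizes are the approximate LEAF figures; the rest is my arithmetic):

```ruby
batteries = { "LEAF (40KWh)" => 40.0, "LEAF e+ (62KWh)" => 62.0 }
charge_rates_kw = [2, 50, 100, 350]

batteries.each do |name, kwh|
  charge_rates_kw.each do |kw|
    puts format("%-16s at %3dKW: %5.1f hours", name, kw, kwh / kw)
  end
end
# 40KWh at 50KW is 0.8 hours ("under an hour"); 62KWh is about 1.2 hours
# ("slightly over an hour"). Real charging tapers, so 100% takes longer.
```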
There are significant engineering issues with high power applications. A 1MW cable is not just a bigger version of a regular power cable, there are additional safety issues, user training is required and cooling of the connector is probably required. That’s a lot to just get a better number in the table at the end of a review. There is research in progress on the Megawatt Charging System which is designed to charge heavy vehicles (presumably trucks and buses) at up to 3.75MW. Charging a truck at that rate is reasonable as the process of obtaining and maintaining a heavy vehicle license requires a significant amount of effort and some extra training in 3.75MW charging probably doesn’t make much difference.
A final issue with fast charging is the capacity of the grid. A few years ago I attended a lecture by an electrical engineer who works for the Victorian railway system which was very interesting. The Vic rail power setup involved about 100MW of grid connectivity with special contracts with the grid operators due to the fact that 1MW trains suddenly starting and stopping causes engineering problems that aren’t trivial to solve. They were also working on battery packs and super capacitors to deal with regenerative braking and to avoid brownouts in long sections of track. For a medium size petrol station 14 bays for fuelling cars is common. If 6 such petrol stations were replaced with fast charging stations that can charge cars at 1MW each that would draw the same power as the train network for the entire state! There is a need for significant engineering work to allow most cars to be electric no matter how it’s done, but we don’t need to make that worse just for benchmarks.
I enjoy CommandFests. While I’ve judged at all kinds of events (okay, I’ve never done an Invitational, but I’ve done the rest of the ones you can probably name), the laid-back nature and the focus on everyone having fun at a CommandFest is special. At a GP many of the players have fun, but only a handful feel like they’ve achieved all they could. Arguably only the single winner does (or should it be ‘did’?).
For at least 12 years laptops have defaulted to not having the traditional PC 101-key keyboard function key behaviour; instead the function keys control things like volume, with a key labelled Fn to toggle between the two sets of functions. It’s been a BIOS option to control whether traditional function keys or controls for volume etc are the default, and for at least 12 years I’ve configured all my laptops to have the traditional function keys as the default.
Recently I’ve been working in corporate IT and having exposure to many laptops with the default BIOS settings for those keys to change volume etc and no reasonable option for addressing it. This has made me reconsider the options for configuring these things.
Here’s a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:
The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me and for the people I observe. The F2 and F8 keys aren’t useful in most programs, F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.
Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.
I have used the KDE settings on my Thinkpad to map the F1 to F3 keys to their Fn equivalents – F1 to mute-audio, F2 to vol-down, and F3 to vol-up – to allow using them without holding down the Fn key, while leaving other function keys such as F5 and F6 with their usual GUI functionality. Now I have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.
The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.
It’s annoying that the laptop manufacturers forced me into this. Having an Fn key to get extra functions and not needing 101+ keys on a laptop-sized device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads the touchpad is something that could obviously be removed to gain some extra space, as the Trackpoint does all that’s needed in that regard.
There are many negative articles about “AI” (which is not about actual Artificial Intelligence, also known as “AGI”), which I think are mostly overblown and often ridiculous.
Complaints about resource usage are common; training Llama 3.1 could apparently produce as much pollution as “10,000 round trips by car between Los Angeles and New York City”. That’s not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route it doesn’t seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? People in the US make a big deal about not being communist, so why not have a capitalist solution: tax polluters for the damage they do, making it more expensive to do undesirable things, and let the market sort it out?
ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.
People often complain about the apparent impossibility of “AI” companies doing what investors think they will do. But this isn’t anything new, that all happened before with the “dot com boom”. I’m not the first person to make this comparison, The Daily WTF (a high quality site about IT mistakes) has an interesting article making this comparison [1]. But my conclusions are quite different.
The result of that was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then bought up their assets and made profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn’t get to witness what happened with the other one). As far as I’m aware random Dutch citizens and residents didn’t suffer from this and employees just got jobs elsewhere.
There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.
NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 trillion US dollars. But that market cap can support paying for new research and purchasing rights to patented technology, in a similar way to how Google’s high stock price supported buying YouTube, DoubleClick, and Motorola Mobility, which are the keys to Google’s profits now.
Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that’s a huge business expense).
There are many applications of ML in medical research such as recognising cancer cells in tissue samples.
There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.
The ability to recognise objects in photos is useful. It can help people who want to learn about random objects they see, and could help young children learn about their environment. It also has some potential for assisting visually impaired people; it wouldn’t be good for safety critical systems (don’t cross a road because an ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI pin had some real potential to do good things but there wasn’t a suitable business model [2]; I think that someone will develop similar technology in a useful way eventually.
Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.
ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.
I don’t think it’s reasonable to expect ML systems to make as much impact on society as the industrial revolution, or the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine, but it is something that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”; the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people, and improvements to agriculture didn’t have to involve overcrowding and death from disease. These were political decisions.
Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren’t going to have revolutions.
There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems, but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints on how people may use it. It’s interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe ChatGPT systems could be used to alleviate mental health problems.
The cases of LLM systems being used for cheating on assignments etc aren’t a real issue. People have been cheating on assignments since organised education was invented.
There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn’t going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to search for this: for example a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and seeing if that changes the answer. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then there could be weights put in to oppose that.
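As a minimal sketch of that one-factor-at-a-time testing, assuming a hypothetical HTTP scoring endpoint and made-up applicant fields, a shell loop like this would show whether the name alone changes the decision:

# Vary only the applicant name and compare the answers; the endpoint,
# fields, and names here are all invented for illustration.
base='{"name":"Alex","age":35,"suburb":"Carlton","income":90000}'
for name in Alex Mohammed Xiulan; do
    echo -n "$name: "
    echo "$base" | sed "s/Alex/$name/" | \
        curl -s -d @- -H 'Content-Type: application/json' \
        http://scorer.example/api/approve
    echo
done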
For a long time there has been excessive trust in computers. Computers aren’t magic, they just do maths really fast and implement choices based on the work of programmers – who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in an ML system where no-one really knows why it makes the decisions it makes.
Self driving cars kill people; this is the truth that Tesla stock holders don’t want people to know.
Companies that try to automate everything with “AI” are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job would require a large portion of an actually intelligent computer, which if it is achieved will raise an entirely different set of problems.
I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.
Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won’t do well. But their assets can be used by new companies when sold at less than 10% of the purchase price.
Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.
Companies that bet their entire business on AI even when it’s not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.
I spent a lot of last year trying to understand why companies treated the employees they rely on for success so poorly — The Man Who Broke Capitalism; Lights Out; The Idea Factory; AI Snake Oil; Leaders Eat Last; and so on are all part of that journey. At the time I was a bit fixated on Jack Welch and his long term impact on General Electric — which I would summarize as being overwhelmingly negative. It was a classic example of managing to short term profit metrics, instead of for long term sustainable growth by delighting your customers.
Ultimately this is why I chose to take a break from working for corporate America, as being treated like a replaceable cog in a profit machine wasn’t really working for me.
Then Edward Zitron wrote this blog post which really resonated with me…
When you care only about shareholder value, the only job you have is to promote further exploitation and dominance — not to have happy customers, not to make your company “a good place to work,” not to make a good product, not to make a difference or contribute to anything other than further growth.
While this is, to anyone with a vapor of an intellectual or moral dimension, absolutely f**king stupid, it’s an idea that’s proven depressingly endemic among the managerial elite, in part because it has entered the culture, and because it is hammered across in MBA classes and corporate training seminars.
I had assumed that my general lack of enthusiasm with tech at the moment was due to burnout, but now I wonder if it’s because there really aren’t any companies out there that are genuinely seeking to improve the lives of their customers, instead of just doing random stuff to juice this quarter’s profit.
We live in a symbolic economy where we apply for jobs, writing CVs and cover letters to resemble a certain kind of hire, with our resume read by someone who doesn’t do or understand our job, but yet is responsible for determining whether we’re worthy of going to the next step of the hiring process. All this so that we might get an interview with a manager or executive who will decide whether they think we can do it. We are managed by people whose job is implicitly not to do work, but oversee it. We are, as children (and young adults), encouraged to aspire to become a manager or executive, to “own our own business,” to “have people that work for us,” and the terms of our society are, by default, that management is not a role you work at, so much as a position you hold — a figurehead that passes the buck and makes far more of them than you do.
I assume that management used to be people who had once done a job, “graduating” to leading a team of people who now did approximately that thing. I say “assume” because it’s been a long time since I’ve seen much evidence of that in the places I’ve worked. Now, management is performed by specialists who do not actually understand the mechanics of the work being done, and who therefore issue edicts which are decoupled from the reality of the work to be done.
I think this is a key idea — good managers come from the pool of people who have done the type of work they manage, not from the outside having earned some management qualification as a “decider”.
Maybe the point here is around what one should aspire to. Is a life of directing other people to do work a life that you’d consider well lived when you hit the end point? Personally I want to directly and tangibly improve the world around me. There are lots of ways of doing that, but most of them involve actually doing stuff, not just directing other people to do stuff.
I’ve been thinking similar things to Edward already, but I had been using different words. I think honestly Edward’s words are probably better than mine, although I think he’s also being more specific to business problems than I was. I note that I haven’t actually explained my unifying theory of management incompetence here, but that really seems like a topic for another day.
Abundance: What Progress Takes by Ezra Klein, Derek Thompson
A critic of the rules from 50 years ago stopping excesses of development that are now stopping building, science and progress. US-centric but relevant to elsewhere. 4/5
On Locations: Lessons Learned from My Life On Set with The Sopranos and in the Film Industry by Mark Kamine
Lots of stories from the film/TV industry mixed with the author’s career history as a location scout, location manager and unit production manager 4/5
My Audiobook Scoring System
Please note: this post is a re-write of a previous post about my Vevor weather station that failed after a few months because of ineffective weather seals on the outdoor unit. Given the poor warranty service from Vevor, I instead went and bought a weather station from a local retailer – in my case a Jaycar Digitech XC-0434, which also appears to be sold online as a Bresser 6in1 weather station.
Now that we’ve defeated QNAP’s slightly broken udev, we can run a Docker container with rtl_433 in it to wire up our Digitech / Bresser 6in1 weather station to Home Assistant via MQTT. First off, we need a Docker container running rtl_433, which assumes you’ve already set up the udev rule mentioned in the previous post, even if you’re not using a QNAP!
I like to write little shell scripts to run Docker containers. In this case this one:
#!/bin/bash

# Remove any previous version of the container that might be running
docker rm -f digitech_weather || true

# Resolve the udev symlink to the real device file, because rtl_433
# refuses to use the device via the symlink
device=$(readlink -f /dev/rtl433)

# Run rtl_433 tuned to 917MHz, emitting JSON events to MQTT
docker run --restart always -d \
    --name digitech_weather \
    --device ${device} \
    hertzg/rtl433:master-debian -f 917M -F json \
    -F mqtt://mqttserver,retain=0,events=rtl_433[/model][/id]
What this script does is remove any previous version of the container that might be running. It then uses our reliable symlink from the previous post to look up the real device file, which is then passed through to the Docker container. I am not entirely sure of the subtleties here, but rtl_433 refused to use the device if I passed it through as the symlink, and Docker doesn’t appear to be able to remap device files like it does for ports or mounts. Regardless, this worked at least.
Then finally, we have the correct command line for rtl_433 for this weather station. Note that “mqttserver” is probably not the name of your MQTT server. This command line names the MQTT topic for the model and ID of the weather station, so if you had more than one of these you’d get separate topics for them. My weather station appears at “rtl_433/Bresser-6in1/848729174” for example.
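If you want to sanity check the MQTT side before involving Home Assistant, subscribing to the whole rtl_433 topic tree with mosquitto_sub is a quick way to watch events arrive (as above, “mqttserver” is a placeholder for your broker):

# Print every rtl_433 event with its topic as it is published
mosquitto_sub -h mqttserver -t 'rtl_433/#' -v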
In terms of completeness, the logs from rtl_433 look like this (reformatted to not be ugly, it’s all a single line in the actual user interface):
{
    "time": "2025-06-24 05:07:44",
    "model": "Bresser-6in1",
    "id": 848729174,
    "channel": 0,
    "battery_ok": 1,
    "temperature_C": 20.5,
    "humidity": 31,
    "sensor_type": 1,
    "wind_max_m_s": 0,
    "wind_avg_m_s": 0,
    "wind_dir_deg": 234,
    "uv": 0,
    "flags": 0,
    "mic": "CRC"
}
{
    "time": "2025-06-24 05:07:56",
    "model": "Bresser-6in1",
    "id": 848729174,
    "channel": 0,
    "sensor_type": 1,
    "wind_max_m_s": 0,
    "wind_avg_m_s": 0,
    "wind_dir_deg": 234,
    "rain_mm": 1.2,
    "flags": 1,
    "mic": "CRC"
}
And the MQTT writes match this format as well. I’m not entirely sure why the device wants to send two updates, but more power to it.
Finally, we just need to wire it up to Home Assistant. I am going to assume you already have MQTT configured, and won’t talk about that bit. However, I have these in my configuration.yaml file:
mqtt:
  - sensor: !include mqtt/sensors.yaml
  - binary_sensor: !include mqtt/binary_sensors.yaml
Which is including mqtt/sensors.yaml:
- name: "Weather station temperature"
force_update: true
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.temperature_C }}"
unit_of_measurement: "°C"
- name: "Weather station humidity"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.humidity }}"
unit_of_measurement: "%"
- name: "Weather station wind average"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.wind_avg_m_s }}"
unit_of_measurement: "m/s"
- name: "Weather station wind maximum"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.wind_max_m_s }}"
unit_of_measurement: "m/s"
- name: "Weather station wind direction"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.wind_dir_deg }}"
unit_of_measurement: "degrees"
- name: "Weather station rain"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.rain_mm }}"
unit_of_measurement: "mm"
- name: "Weather station UV"
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.uv }}"
And mqtt/binary_sensors.yaml:
- name: "Weather station battery ok"
force_update: true
payload_on: 1
payload_off: 0
state_topic: "rtl_433/Bresser-6in1/848729174"
value_template: "{{ value_json.battery_ok }}"
I hope this saves someone, possibly future me, some time setting this up. It took me a few hours to get this all working today.
I’ve read this book a few times, but honestly the Foundation TV series has left me with a new enthusiasm to re-read some Asimov stuff. I have previously read the entire extended multi-author series, and honestly a fair few of them sucked — especially the ones by other authors — so this time I have the luxury of being a bit more picky. Worse, Asimov remixed the robot stories several times into various volumes, and it can be quite confusing. The Complete Robot contains all the robot stories, and replaces I, Robot (reading one, reading two), The Rest of the Robots; Robot Dreams; and Robot Visions. It also contains a couple of previously unpublished stories.
Most recently I read the stories in 2019, where I found its 1950s treatment of gender issues a bit concerning. I did not have such concerns in 2012 when I was more focused on how the lack of monetary inflation in the book dated it (among other things). This time around, I made it exactly one page into I, Robot before finding a new way that the book is showing its age — it immediately comments that an important part of robot design is that you be able to predict the behaviour of the machine. This is an interesting point in these days of widespread unpredictable and hallucinating generative AI models. We still have a long way to go it seems. There are of course other ways the book has aged — wire recorders, the lack of ubiquitous computing, the dates! However, it seems petty to begrudge Asimov those for a book which is having its 75th birthday this year.
Overall I’d say my 2025 take on The Complete Robot is that it is definitely dated although many of the stories are still good. I cringe at the descriptions of Susan Calvin as loveless because she doesn’t have a partner, and the story with curing Autism as a central plot line made me deeply uncomfortable. This book however is a product of its time I suppose.
My other criticism is that the book is too long. Sure it’s a collection of short stories, but some of them are better than others and a bit more vigor in the curation process would have produced a better read. This book took so long to read because there were definitely periods where I struggled to be motivated to read more, because the characters simply weren’t interesting.
So, N years later, how is that going? It was going pretty well, but then there was a pandemic with lock-downs and curfews, which rather restricted access to dark skies. The obvious fix was to obtain access to dark skies, by way of a holiday house in the Wimmera.
In the meantime there were also a bunch of revolutions in astronomy, mostly to blame on open hardware. That means it is now possible to buy an off the shelf computer to control a bunch of mounts, cameras, auto-focusers, dew heaters and other gear. These are essentially Raspberry Pi machines with a modified operating system and (generally) a mobile app to control them.
Rather than fight software, keep laptops (and myself) out in the cold and kludge together VNC access, I got one of these machines (an asiair mini) and data acquisition is now mostly automated and not a problem. I set it up, tell it what I want, and in the morning I have images.
I do however still use open source software on Mac OS X to do my data processing. Notably I use Siril for pre-processing, stacking, stretching and noise reduction.
Fateful Choices, Ten Decisions That Changed the World, 1940-1941 by Ian Kershaw
A fascinating book covering decisions from the point of view of the wartime leaders making them. Highly recommend 5/5
Charged: A History of Batteries and Lessons for a Clean Energy Future by James Morton Turner
More a history of pollution from batteries and their construction than a straight history of the technology. It delivers that well enough though 3/5
Box Office Poison: Hollywood’s Story in a Century of Flops by Tim Robey
Covers 26 movies (skipping some of the best known) with fun behind-the-scenes tales of disaster and over-reach. 4/5
My Audiobook Scoring System
I’ve been thinking a bit about training at work recently, largely in the context of having spent the last twenty years working for US technology companies. I think effectively all of these companies made a pretty big mistake — they viewed training of employees as a perk much like vacation, book budgets, or t-shirts. They advertise their training programs as part of their recruitment process, and just like other perks they’re cut when times get a bit grim. However, that’s not actually why employers should train their people. We train people so that they have the skills they need to do their jobs — especially when things get real and aren’t working out to plan.
There are definitely industries who have good examples of this sort of risk reduction training done well — airlines and the military both engage in regular training activities that ensure that when things get exciting the people know what to do. This includes leaders being trained on how to make decisions that are likely to result in the desired outcomes.
However… High tech is not one of those places. I’ve seen a lot of tailored in-house training programs replaced by a corporate Udemy or LinkedIn Training subscription, and I have seen very little evidence that these all-you-can-eat e-learning platforms actually deliver much in terms of results. This is especially true because I am yet to meet an organization that gives their employees enough contiguous time to actually do one of these e-learning courses without doing it on their own time or with frequent context switches. I think what they do deliver is a training thing you can put on your HR site that has a fixed cost to the company.
I think it’s also true that individual employees should focus their self-funded training (if they do any, I suspect it’s rare) on skills that they find interesting or that align with their longer term career goals. It is not the role of an individual to subsidize their employer.
When I rule the world (which is looking increasingly unlikely, but anyways), I think I’d start by identifying the skills a given team needs to succeed — both in the base case of business as usual, but also in times of crisis. I’d stack rank those skills, and then I’d ensure that the team had at least a couple of examples of each skill. Perhaps the availability of a skill should be proportional to both the likelihood and impact of the risk it guards against? I wouldn’t be too concerned about the employment type of the people being trained either — if your seniors are contractors that’s not relevant, I still want my senior people to have the skills needed when they need them.
But I don’t rule the world. So I’ll just keep doing random learning things on my own time because they personally interest me, not because my employer of the day might one day need them.
Andy Pavlo of the CMU Database Group is well known for saying that while NoSQL databases acquire cyclical popularity, all databases eventually iterate back to a SQL interface — it happened with MongoDB and Google’s BigTable for example.
I think I have hit that point with etcd. Initially I ported from MySQL to etcd because I really wanted the inexpensive distributed locking and being able to watch values. However, I never actually watch values in my code any more, and I now spend a huge amount of my time maintaining what my code calls “caches”, but which I can now see are just poorly implemented secondary indexes. The straw that broke the camel’s back was https://github.com/etcd-io/etcd/issues/9043, which changed etcd’s defaults to only being able to return 1.5MB in an RPC request.
I therefore think it might be time for me to port back to a real SQL database, perhaps keeping etcd to manage distributed locks. Perhaps.
I need to think about this more to be honest, but I think I’ve hit the limit of what you can express in key / value pairs directly stored in etcd. I often want to look up items based on a portion of their value (the values are JSON), but that’s not possible in etcd without maintaining those extra indices that I now maintain. As I’ve grown as a programmer, I now really really want the Chubby-style check-and-replace transactional multi-key update syntax that etcd offers and S3 recently introduced as well. So moving back to a pure SQL database would leave me missing that.
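For the record, this is the sort of check-and-replace I mean. etcdctl expresses a transaction as compares, operations to run on success, and operations to run on failure, separated by blank lines (the key name, value, and revision here are made up for illustration):

# Replace the value only if the key hasn't been modified since
# revision 42; otherwise just read back the current value.
etcdctl txn <<'EOF'
mod("/objects/instance1") = "42"

put /objects/instance1 "new-value"

get /objects/instance1

EOF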
One alternative to ditching etcd entirely would be to write an RPC service which sat in front of it and abstracted away the underlying data store. If I treated etcd as a storage engine, and then maintained the various indices in that abstracting layer, then I might get to a happier place. This would map somewhat to how modern databases are built, if we thought of the keys in etcd as page locations in a storage engine. etcd would be a quite expensive storage engine however, given its in-memory attributes.
Oh, and you should all go and watch Andy Pavlo’s excellent lectures on how to build a database storage engine:
…even if none of my friends seem to think so.
I’ve been off on a bit of a tangent recently. It’s a slow burn tangent, that I am pretty sure was kicked off by this Geek Narrator podcast episode about the design of Turbo Puffer with Simon Eskildsen:
The basic idea is that you can build very large scale database systems using only the primitives provided by an object store such as Amazon S3. Now, the performance might also suck, but you can alleviate some of that with a good caching layer and in return you get massive scale. This first video caused me to discover the work of Andy Pavlo, who was interviewed by the same podcast:
Andy is hilarious, and then I had to go and watch the videos from his CMU Introduction to Databases course, which were excellent:
But none of this gets me to the point of being able to convince my friends that I am not insane. Perhaps sqlite adding support for this very thing will improve my credibility.
Honestly, I think there are a lot of interesting systems that can be built on top of an immutable object store such as S3, and that we should be talking more about that.
For the last couple of years I have been working on a SPICE protocol native proxy called Kerbside. The basic idea is to be able to provide SPICE Virtual Desktop Interface (VDI) consoles to users from cloud platforms such as Shaken Fist, OpenStack, or oVirt. Think Citrix, but for Open Source cloud platforms. SPICE is attractive here because it has some features that other more common VDI protocols like VNC don’t have — good cut and paste support, USB device pass-through, multiple monitor support, and so on. RDP has these, but RDP was not a supported VDI protocol when using qemu on Linux with KVM until incredibly recently — literally the last couple of months.
(In terms of clouds that Kerbside supports, I think it would be relatively trivial to also support Proxmox, KubeVirt, or a list of static manually created virtual machines, but there’s only so many things one Mikal can do at once…)
Some of these cloud platforms have supported SPICE consoles for a while, but generally with warts. OpenStack for example only exposes them as HTML5 transcoded sessions with reduced functionality. oVirt exposes them via a “proxy” which is just squid (or equivalent), but it’s fairly dumb — it exposes the underlying hypervisor details to the client for example. I thought I could do better than that.
The proxy itself has worked for a while. I haven’t eliminated the possibility that the proxy will need to be re-written in something more performant than Python, but Python is convenient for rapid prototyping and that’s a Future Mikal problem. The proxy is not perfect, but has been a lower priority while I landed supporting code in OpenStack. Now that OpenStack Epoxy (2025.1) includes most of the supporting code, it’s time to circle back to the proxy itself.
In the intervening time, the pinned requirements for Kerbside have bit rotted, so it’s time to take CI seriously so that I can have something like Renovate walk those dependencies forwards. I feel it’s important to have good CI before turning on something like Renovate, because otherwise how can you tell if a dependency version increment broke things?
But… CI testing for graphical consoles seems fiddly to me. I can do some naive things such as testing if I receive a SPICE protocol banner from the hypervisor via the proxy, but it’s not great. This is the approach taken with OpenStack Tempest testing for now, but it’s definitely a stop gap solution. To that end, I’ve been working on a simple command line SPICE client which will help me validate that a session works. However, that presents another layer of engineering to do… How do I provide an instance with well defined graphical console behavior?
Simply put, I want very defined behaviour for the instance being used as the test target. It should be small. It should boot quickly. It should never lock or blank the screen. It should have graphical output which makes it easy to determine a state change (for example when a key press occurs). Obviously, it’s time to write a UEFI binary to be the test target and not bother with an operating system at all. It’s entirely a coincidence that I think UEFI binaries are quite interesting and have had them on the todo list for a while.
Now, this YouTube playlist from the excellently named Queso Fuego is a nice introduction to UEFI programming, a thing I had not done before:
However, the punch line is that the bare minimum we need is a disk image with:
Luckily, Mr Fuego provides a tool which does all of this at https://github.com/queso-fuego/UEFI-GPT-image-creator which makes this part much easier. I’m not entirely sure that watching someone else write code is a great use of my time, but on the other hand if you don’t have anyone in your life that understands UEFI, it is a way to be walked through the topic — especially when the UEFI specification is over 2,300 pages long and the book I bought on the topic isn’t actually the most gripping reading. Overall I don’t regret watching the video series.
For the disk image we basically want this:
So that’s the disk image. Inside the partitions we’ll need a FAT32 file system, which is well described by Wikipedia and also handled by Mr Fuego’s tooling.
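If you’d rather not use someone else’s generator, you can rough out an equivalent image with standard Linux tools. This is only a sketch under my own assumptions (the image name, sizes, and that your compiled UEFI program is called BOOTX64.EFI), not a description of Mr Fuego’s tooling:

#!/bin/bash
# Create a GPT disk image with a single EFI System Partition
truncate -s 64M disk.img
parted --script disk.img mklabel gpt mkpart ESP fat32 1MiB 63MiB set 1 esp on
# Format just the partition region (starting 1MiB in) as FAT32
loopdev=$(sudo losetup --offset $((1024 * 1024)) \
    --sizelimit $((62 * 1024 * 1024)) --find --show disk.img)
sudo mkfs.vfat -F 32 "$loopdev"
sudo losetup -d "$loopdev"
# Put the program at the removable media boot path firmware looks for,
# using mtools at the same offset so nothing needs to be mounted
mmd -i disk.img@@$((1024 * 1024)) ::/EFI ::/EFI/BOOT
mcopy -i disk.img@@$((1024 * 1024)) BOOTX64.EFI ::/EFI/BOOT/BOOTX64.EFI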
Next of course we need an actual program to run as well. Based heavily on the code from the tutorial series, I present the snappily named uefi-latency-guest. This is available as a pre-built qcow2 image at https://images.shakenfist.com/testimages/uefi-latency-guest.qcow2 as well.
This post is long enough as it is, so we can talk more about these matters again later.
Meeting opened at 19:39 AEDT by Joel and quorum was achieved.
Minutes taken by Neill
For all membership applications we want to see some involvement from prospective members. We will update the membership application form to collect this for new applications. For existing pending ones we will ask via email.
Meeting closed at 20:49
Next meeting is scheduled for 2025-05-21 and is a subcommittee meeting
The post 2025–05-07 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 08:02 AEDT by Jenny and quorum was achieved.
Minutes taken by Neill
Meeting closed at 22:11 AEDT
Next meeting is scheduled for 2025-03-12 at 20:00 UT+1100 (AEDT)
The post 2025-02-26 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 20:08 AEDT by Joel and quorum was achieved.
Minutes taken by Neill
Comes down to the areas that the conferences use, sometimes the venue reports catering and venue costs in the one line item.
We will look into the 14k office expenses for Kiwi PyCon
We have not yet had time to go through this report in detail.
It looks like they have managed to make an $18k profit, which is an impressive turnaround.
They are looking at putting in a bid to run another conference in Japan, but we need to have a clear understanding of what it would mean for LA, especially in terms of tax implications.
[RS]: If we do not exceed AUD$100,000 (approximately, converted from JPY) ticket sales to Japanese residents we will not have any tax obligations in Japan.
[JW] Question about liability for losses – normally these run through a professional events company – what would our liability for losses be?
[RS] The Drupal Association asks for a percentage of revenue which is not acceptable.
[JA] The DA is keen to move to a percentage of revenue. We need a new agreement to run the next conference anyway. Not keen to commit to a percentage of revenue as that exposes LA to more risks. Apart from tax, the other item to be sorted out is insurance.
[JC] What is the DA giving to the conference?
[JA] They dedicate some resources for marketing and possibly send some people to the conference. Need to dig into the details more.
[RS] We should be able to ask our accountant about the tax implications.
We may need a separate meeting to discuss this with the organising team.
Miles has started the process of building a team to bid to run EO in Canberra in 2026.
Carlos from OpenSI is also keen to host the event in Canberra. Carlos has concerns about the effort involved, but is keen to run the conference. It would be good if Carlos spoke to Miles.
OpenSI (and Carlos) have sponsored the last two Everything Opens.
Has been removed from the mailing list after sending a number of strange emails to the list and then contacting people on the list directly in inappropriate ways.
He was warned that if he didn’t stop he would be unsubscribed. He did not stop and has now been unsubscribed.
Have told us they are at least pausing things, until such time as they find new people to run events.
They have asked for $1,000 for “contingencies” which is difficult. They should submit expense claims instead.
We should also offer to provide hosting for them in order to reduce their expenses.
Council has received a defamation concern notice. This has been discussed at a separate meeting. We are waiting for a response.
We received a budget and starting team. Council has not yet had time to go through the proposal in detail. Council members will individually review the proposal over the next few days and discuss any issues as a group.
Again. Council has not yet had time to go through the proposal in detail. Council members will individually review the proposal over the next few days and discuss any issues as a group.
We have particular concerns around the WordPress/WordCamp central entities and their recent actions.
Meeting closed at 21:32
Next meeting is scheduled for 2025-02-26 at 20:00 UT+1100 (AEDT) which will be a subcommittee meeting.
The post 2025-02-12 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 19:35 AEDT by Joel. Quorum was achieved.
Minutes taken by Neill and Jonathan.
This is correspondence which was sent to Council. Any communication sent directly to the secretary or other executive addresses is not included.
Meeting closed at 21:07 AEST (UT+1000).
Next meeting is scheduled for 2025-05-07 19:30 AEST (UT+1000)
The post 2025-04-23 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 19:32 AEST by Joel and quorum was achieved.
Minutes taken by Neill
https://drive.google.com/file/d/19UmKe7QRFSQdSlgcw2rnkws73Hq6MK2U/view?usp=drive_link
Once Joel has received clarification we will move a vote on the final document via email.
Meeting closed at 19:51 AEST
Next meeting is scheduled for 2025-04-23 and is a subcommittee meeting.
The post 2025-04-09 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 20:18 AEDT by Joel and quorum was achieved.
Minutes taken by Neill
Moved by: Joel
Seconded: Neill
Outcome: Motion passed
Discussed scenario where Drupal Association don’t accede to this agreement. It would depend on how they respond, in the worst case scenario we will withdraw our support.
Russell will discuss our concerns over the budget with KiwiPyCon.
Motion: That Linux Australia accepts Kiwi PyCon 2025 as an event subcommittee based on the provided budget.
Moved by: Joel
Seconded: Elena
Outcome: Motion passed
Two items were discussed in camera
Meeting closed at 21:46
Next meeting is scheduled for 2025-03-26 and is a subcommittee meeting.
The post 2025-03-12 Council Meeting Minutes appeared first on Linux Australia.
Meeting opened at 20:30 AEDT by Russell. Quorum was achieved.
Minutes taken by Jonathan.
This is correspondence which was sent to Council. Any communication sent directly to the secretary or other executive addresses is not included.
Joel and/or Neill will work with Julia to arrange a suitable time.
Meeting closed at 21:32.
Next meeting is scheduled for 2025-04-09:
The post 2025-03-26 Council Meeting Minutes appeared first on Linux Australia.
The following is an illustration of how to use xargs to conduct parallel operations on single-threaded applications, specifically wget.
GNU wget is a great tool for downloading content from websites. The wget command is a non-interactive network downloader; by "non-interactive" what is meant is that it can be run in the background. Some very handy options include -c (continue, for partially downloaded files), -m (mirror, for an entire website), and -r --no-parent (recursive, no parent, to download part of a website and its subdirectories). The cURL application has a wider range of protocols and includes upload options, but is non-recursive.
Recently, I had the need to download a small number of PDF files. The wildcard-based approach would be:
$ wget -r -nd --no-parent -A 'rpgreview_*.pdf' http://rpgreview.net/files/
The -r and --no-parent options have already been explained. The -nd option saves all files to the current directory, without a hierarchy of directories. The -A option ('accept', or -R to 'reject') allows one to specify comma-separated lists of file name suffixes or patterns to accept or reject. Note that if any of the wildcard characters *, ?, or ranges [] appear in an element of the acclist or rejlist, that element is treated as a pattern rather than a suffix.
Running the above has the following time:
real 2m19.353s
user 0m0.836s
sys 0m2.998s
An alternative, looping through each file one at a time, would have been something like:
for issue in {1..53}
do
    wget "https://rpgreview.net/files/rpgreview_$issue.pdf"
done
(Just for the record, wget can get a bit gnarly when dealing with http requests because for some webservers there is no requirement for path delimiters to match directory delimiters. For the purposes of this discussion it is assumed that we're dealing with a rational being where the two are equivalent.)
Using a combination of the printf command and the xargs command, a list of URLs can be constructed and then passed to xargs, which can split the list across processes to run in parallel.
By itself, xargs simply reads items from standard input, delimited by blanks or newlines, and executes a command with those items as arguments. This is somewhat different to a pipe which, by itself, sends the output of one command as the input stream to another. In contrast, xargs takes data from standard input and executes a command with, by default, the data appended to the end of the command as arguments. The data can however be inserted anywhere, using a placeholder for the input; the typical placeholder is {}.
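As a trivial illustration of the difference:

# By default the items are appended to the command...
printf "%s\n" one two three | xargs echo got    # prints: got one two three
# ...but -I substitutes each item wherever {} appears, one at a time
printf "%s\n" one two three | xargs -I {} echo "item {} done"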
The value -P 8 is entirely arbitrary here and should be modified according to available resources. Adding -nc prevents wget attempting to download the same file more than once (wget will not overwrite an existing file, but rather append .1 etc to the name of a new file). The -n 1 option ensures that xargs runs wget with only one argument per process.
printf "https://rpgreview.net/files/rpgreview_%d.pdf\n" {1..53} | xargs -n 1 -P 8 wget -q -nc
The time of the above comes to:
real 1m23.534s
user 0m1.567s
sys 0m2.732s
Yet another choice is to use GNU parallel and seq.
seq 53 | parallel -j8 wget "https://rpgreview.net/files/rpgreview_{}.pdf"
real 1m57.647s
user 0m1.830s
sys 0m4.214s
A final option, most common in high-performance computing systems with job schedulers, is to make use of a job array. This is effective assuming resource availability, and is a very effective option if each task in the array takes more than a couple of minutes (given there is an overhead involved in constructing the job, submitting it to the queue, etc). In Slurm, a script with the directives and code would look like the following:
#!/bin/bash
#SBATCH --job-name="file-array"
#SBATCH --ntasks=1
#SBATCH --time=0-00:15:00
#SBATCH --array=1-53

wget "https://rpgreview.net/files/rpgreview_${SLURM_ARRAY_TASK_ID}.pdf"
Never Panic Early: An Apollo 13 Astronaut’s Journey by Fred Haise
A fairly straightforward autobiography. Covers the areas you’d expect and has a few interesting stories that tie into the title. 3/5
My Scoring System for Books
This book is a history of the Bell Labs run by AT&T for much of the 20th century. These are the labs which produced many of the things I use day to day — Unix and the C programming language for example, although this book focuses on other people present at the lab, and a bit earlier than the Unix people. Unix, a history and a memoir for example is set in the same location but later in time.
One interesting point the book makes early is that the America of the early 20th century wasn’t super into scientists, it was much more about engineers. So for example Edison was an engineer whose super power was systematically grinding through a problem space looking for solutions to a problem, but not necessarily actually understanding the mechanism that caused the solution to work. A really good example, although not one of Edison’s, is adding lead to fuel to stop engine knocking and wear — they literally walked the periodic table until they found an element that worked. I am left wondering how much of this failure to understand the underlying mechanism was a contributor to the longer term environmental and health implications of these innovations — would a competent scientist have produced the same product given a knowledge of the likely outcomes? This is worrying in the context of modern America becoming increasingly anti-science. Will there be a reversion to the former brute force approach, ignoring longer term consequences?
The book discusses the invention and manufacture of the vacuum tube, an invention necessary to create the transcontinental phone system that AT&T Long Lines so desired. This story is told in the context of the careers of early labs hires, who were all Members of Technical Staff (MTSes), a job title still used by several FAANG companies to this day.
There’s then the story of early work towards what we’d now call a transistor to replace the energy intense and fragile vacuum tubes. This effort was however derailed by being pulled into America’s WW2 efforts. Along the way they might have invented nuclear reactors just a little (but probably after others). An interesting aspect to this story that I hadn’t heard before is the controversy around William Shockley — a highly competitive man who committed the cardinal sin (in AT&T culture at least) of competing with his own direct reports. Later in the book we also learn that Shockley was a racist eugenicist, so that’s nice.
Next up is a brief introduction to Claude Shannon’s work on what we now call Information Theory — the idea of how to quantify the maximum amount of information you can reliably transmit on a given channel. This was by far my favourite subject at university while studying computer engineering, so I particularly enjoyed this section. I’ve had a book about Shannon on my reading to-do list for a long time, I really should get around to reading it soon. Ironically I read the description of Shannon as effectively someone with very high functioning ADHD and possibly autism. He was just lucky enough to find an environment that valued brilliance over conformity.
We then pivot back to the development of silicon transistors (as opposed to the previous germanium ones) and ultimately the first practical solar cell. Sadly that solar cell from the early 1950s was too expensive to make to be economically viable, but it was certainly a glimpse of the future. Bell Labs would return to those solar cells later in the book to solve another interesting problem.
It must have been difficult to write a book like this — there are simply so many inventions which changed our world to cover. In the context of the development of satellite communications (first with passive Mylar balloons and then active satellites) and cellular telephony, it is likely fair that the development of Unix and C only receives a few paragraphs. However, it certainly doesn’t map to my own personal biases well, given the latter two are the underpinnings for basically all modern internet scale technologies.
Bell Labs ultimately didn’t survive the break up of AT&T. Something with its name did, but it simply wasn’t the innovative place it once was. The book uses its closing chapter to ask an interesting question I think could use more examination than the book gives it — is the America of today trading on the innovations of 50 years ago made by places like Bell Labs? Or is there true “great leap forward” innovation still occurring in America? I think it’s worryingly possible the former is true. The book also argues that government research funding to a large extent replaced the torrent of cash that the early telephony monopoly gave Bell Labs. With global cuts in research funding that might not be true for long.
One minor disappointment with this book is that it is mostly descriptive — the who, the what, and the when. I see a missed opportunity here to try and explain why Bell Labs was as successful as it was. For over 60 years they were the leading industrial research lab in the world, which is a pretty impressive feat. The book does spend a few pages at the very end sort of playing with ideas around this, but overall it feels like an afterthought. Perhaps that question isn’t answerable, but I would have liked to have seen Gertner at least try a bit harder.
The definitive history of America’s greatest incubator of innovation and the birthplace of some of the 20th century’s most influential technologies “Filled with colorful characters and inspiring lessons . . . The Idea Factory explores one of the most critical issues of our time: What causes innovation?” —Walter Isaacson, The New York Times Book Review “Compelling . . . Gertner's book offers fascinating evidence for those seeking to understand how a society should best invest its research resources.” —The Wall Street Journal From its beginnings in the 1920s until its demise in the 1980s, Bell Labs-officially, the research and development wing of AT&T-was the biggest, and arguably the best, laboratory for new ideas in the world. From the transistor to the laser, from digital communications to cellular telephony, it's hard to find an aspect of modern life that hasn't been touched by Bell Labs. In The Idea Factory, Jon Gertner traces the origins of some of the twentieth century's most important inventions and delivers a riveting and heretofore untold chapter of American history. At its heart this is a story about the life and work of a small group of brilliant and eccentric men-Mervin Kelly, Bill Shockley, Claude Shannon, John Pierce, and Bill Baker-who spent their careers at Bell Labs. Today, when the drive to invent has become a mantra, Bell Labs offers us a way to enrich our understanding of the challenges and solutions to technological innovation. Here, after all, was where the foundational ideas on the management of innovation were born.
Since I posted my idea for two Auckland light Metro lines I’ve had some feedback and made some updates. This is the new version of the article. At the bottom of this article I’ve written about the changes and the reasoning behind them. You can also read my article on Light Metro Technology to get more information about the technology I am proposing be used.
Below is the new network I am planning as a basis for Auckland. It consists of
In this post I’ll cover the North/South line. Note that all pictures/maps in the article can be opened to see the larger version.
As outlined in a previous article Light Metro is Automated (driverless), Grade Separated with Short Trains and High Frequencies. It is well suited to Auckland where requirements exceed Light Rail but a full metro would be overkill.
The key advantages of Light Metro over street running light rail are its high capacity, frequency and higher speed. Attempting to push Light Rail beyond its natural sweet-spot results in a grade-separated system that costs as much as Light Metro but is worse, and often costs more to run.
The below table shows the capacity of a Light Metro line (in each direction). For Auckland the stations outside the CBD could be serviced by buses to further increase the coverage area. Trains could start at a short length, with frequency increased as high as possible before longer trains are used.
Headway / Trains per Hour | 2 Cars | 3 Cars | 4 Cars | 6 Cars |
5 min / 12 tph | 2,400 | 3,600 | 4,800 | 7,200 |
3 min / 20 tph | 4,000 | 6,000 | 8,000 | 12,000 |
2 min / 30 tph | 6,000 | 9,000 | 12,000 | 18,000 |
90 sec / 40 tph | 8,000 | 12,000 | 16,000 | 24,000 |
If the system is run with 4-car trains (52m long, 400 people – the table assumes roughly 100 passengers per car) then each line has over double the capacity of one of the major Auckland motorways such as the Western or Southern.
This line would start from Takapuna run under the CBD and connect to the Airport in the South.
The line would be grade separated above the road as much as possible, since this is cheaper than under-grounding. It would be underground through the central city however.
Total length would be around 28km of which around 6.2km would be underground. Cost would be something like $15b
This would be completely overhead and mainly above roadways. It would start from a station at Takapuna, with another station at the existing busway station of Akoranga.
The route would then go between the motorway and the sea until it reaches a tunnel across the harbour near Northcote Pt.
See section below for discussion as to the advantages and disadvantages of a tunnel vs a bridge for crossing Waitemata Harbour.
A route to Takapuna avoids duplicating the existing busway. Going off the 2024 bus statistics, around 20% of the busway passengers go to Takapuna and Milford, and it is likely more would change from the bus to the metro at Akoranga station if the Metro better served the southern part of their journey.
I have mapped out some possible extensions to this line but there are several other options.
A natural extension to this line would be a line to the North Shore Hospital and then Milford.
A line to Northcote and Birkenhead could also be built, although this line unfortunately goes the wrong way for Northcote and Birkenhead people going into the city.
Later lines might go down Wairau Rd to the Wairau Valley
The first station will be under the eastern end of Victoria Park. Ideally it should have entrances on the far sides of Fanshawe, Halsey and Victoria Streets so passengers do not have to cross busy roads to reach it.
The Te Waihorotiu CRL station is apparently already future-proofed with space for a North/South line. The station will effectively be the centre of the Auckland System. There should also be a surface Light Rail line nearby on Queen Street.
The University station would be quite deep (because the line would be under Queen Street and the Grafton Valley Motorway) and probably be an elevator-only station.
I’ll cover how lines intersect later
South of the University the line has a station on the corner of Park Road to serve the Hospital, The Grafton Campus and nearby area. This station may be deep enough to also be elevator-only.
The line comes out of its tunnel just after the Domain and is then elevated until it reaches the Airport.
The first station is on Carlton Gore Road to serve northern Newmarket. Then there is a station in central Newmarket connected to the existing rail station.
South of Newmarket the line will travel above Manukau Rd and continue South through Royal Oak and Onehunga.
Possible stations could be (at roughly 1km intervals):
The southernmost section crosses the Mangere harbour near the current bridges (No special bridge should be needed since it will be fairly low) and has a station at Mangere Bridge Village just South of the crossing.
The line then continues south along the Coronation Rd, McKenzie Rd and Bader Drive with the provision of a future station near Miller Road.
The line follows Bader Drive over the motorway to a station in Mangere Town Centre. It then cuts over some houses (which will need to be purchased) back to the motorway and follows it South.
I’ve future-proofed a station in the airport industrial area, after which the line goes underground between Landing Drive and Ihumatao Road for about 1km to a station under the airport.
The Depot for the Line would probably be best placed somewhere between the Airport and Mangere Town Center Station.
The plan currently has the line using a tunnel. This is probably about $1b more expensive than a bridge and makes it very difficult to provide a walking or cycling option.
The reason for this is that the Upper Harbour (West of the Auckland Harbour Bridge) currently has the Chelsea Sugar Refinery and the Kauri Point Armament Depot. Both of these must be reachable by large ships, which requires the clearance of any bridge over the water to be similar to the current bridge’s 43 metres.
Since trains have problems with gradients much over 3%, reaching that height would require over 1km of climb, which is almost impossible to fit south of the shipping channel.
If the Sugar Refinery and Ammunition Depot can be moved then a lower bridge quickly becomes the best option. With a saving of over $1b it might be better to pay for them to move. However I have kept things as a tunnel for now.
This map shows the other lines, including the existing lines, a Queen St Light Rail and my proposed Western line.
The Victoria Park station, as well as serving the immediate area, supports transfers to the Western Line to get to lower Queen Street, and transfers to College Hill and Northcote buses.
The Te Waihorotiu Station allows access to the Aotea Square area and transfers to the CRL and Light Rail.
The University Station serves the Universities and allows transfers to the Western line and Symonds Street buses.
As the line gets further south, the demand on the single line will reduce, so I propose that the line eventually branch just south of Newmarket.
The new line would go down Great South Road as far as the existing Penrose Station and then east along Penrose Rd to meet the Sylvia Park station. It would then continue east to Pakuranga and down Ti Rakau Dr to Botany. The existing Greenlane and Remuera stations could perhaps be replaced by stations on the new line.
Because Light Metro can handle very high frequencies, capacity on both branches would be high. For example, trains could run every 2 minutes in the central areas (30 trains per hour) and every 4 minutes on the branches.
I’ve sketched out some later lines mainly to make sure the planned line has room for growth.
The first is a line from Mangere Town Centre to Botany that goes through Papatoetoe and Otara.
The other is a cross-town line that starts in Point Chevalier and follows Carrington, Mt Albert, Mt Smart and Great South Roads to Manukau City.
Penrose station would be a natural intersection point for two of the lines (and the existing heavy rail) and could be upgraded to be a major transfer station.
It is important to keep the cost of the line down as much as possible to improve its chances of being built. In particular, the urge to build expensive monumental stations, or to put more sections underground to avoid perceived disruption, must be resisted.
I propose the following measures
Note that driverless operation allows frequent trains, which in turn allows shorter trains, which means smaller, cheaper stations.
I have put my costs at what I consider realistic New Zealand levels. Some countries are able to build for less, but I’ve listed what I hope New Zealand can build for.
North/South Line length by type of track
Total cost
Average speeds for this sort of metro are around 30km/h including stops, although this line has fewer stops than most and some long stretches, so it should be a bit faster. Assuming 30km/h we get:
Currently the trip to the Airport takes around 1h 30m via public transport, and 40 minutes on the Skydrive private Airport bus outside of peak times. The bus from Takapuna takes 20-25 minutes. Remember that the metro will have very high frequency, so waiting for the next train will only take a couple of minutes.
I’ve tried to make the route realistic and useful. As far as I can tell all the climbs are within the capacity of a typical metro system. However some of the corners may be a little tight and additional space may be needed.
While only a small proportion of the passengers will travel to and from the airport I think the service will be great for this. Some people do express concern about room for luggage but riders manage on other systems around the world.
I think that this line and the Western line have great potential to improve transport across Auckland. They will take pressure off the Northern Busway and also add huge capacity to the South West and the Airport.
My next article will cover my proposal for a Western Line
General
I’ve made 3 stations on the line optional to save cost and increase speed
North Shore
Instead of replacing the Northern Busway I’ve updated the line to go to Takapuna and eventually Milford instead. This will be around 1.7km from Akoranga Station rather than 10km, so it should save $2-3 billion on the initial build and mean that we augment the Northern Busway rather than replace it.
I’ve included a few possible routes but there are a lot of options both for the initial route into Takapuna and future lines.
Central City
I’ve replaced the Wynyard and Victoria stations with a single station under the east end of Victoria Park. It would have a similar service area to the two original stations combined.
I’ve also rerouted the end of the Western Line to go along Fanshawe St rather than across the Viaduct Basin.
Auckland Hospital
My original plan had the line going south from the University, under the motorway and Museum, and emerging on the north side of Newmarket.
However skipping the hospital seems a bad idea. It is a huge trip generator not just for the hospital but for the Grafton Campus and other businesses and housing nearby.
So I’ve updated the route to have a station under the corner of Park Road, with the line emerging on Carlton Gore Road and a station near the bottom of that road.
Mangere
I’ve changed Mangere town centre to be directly on the line instead of a short branch. This does mean the line goes through housing to get back on the motorway.
Also got rid of one station in Mangere Bridge.
Credits for suggestions on changes
Reentry: SpaceX, Elon Musk, and the Reusable Rockets that Launched a Second Space Age by Eric Berger
A sequel to his previous book on SpaceX, this covers 2008 to 2023 and concentrates on the development of the Falcon 9. Very good, recommended 4/5
Down and Dirty Pictures: Miramax, Sundance, and the Rise of Independent Film by Peter Biskind
Published in 2004, the book is 70% Miramax and 10% Sundance. Lots of crazy Miramax and Hollywood stories. Fun and interesting to read 4/5
Hope I Get Old Before I Die: Why Rock Stars Never Retire by David Hepworth
A fun romp through the unexpected third act of 60s and 70s music stars since 1985. Full of amusing stories delivered with Hepworth’s usual witty style 4/5
Careless People by Sarah Wynn-Williams
An insider’s stories of Facebook (and a shark attack). Lots of nuts stuff and good yarns that reflect pretty negatively on the company, although the author seems too good to be true. 4/5
My Scoring System
It is appropriate, on World Quantum Day, to talk about quantum computing and quantum computers, as the two are often confused. Quantum computing is any method to generate quantum effects whereby qubit states can exist in superposition (0, 1, or both) rather than binary states (0 or 1). Binary states are represented in classical computing in low-level software as logical 0s and 1s, but in hardware as high- and low-voltage states.
The typical system used to do quantum computing, or at least simulate it, is High Performance Computing (HPC). That works: it's a proven technology with a rate of return of $44 per $1 invested - and higher when COVID research is considered. The development of HPC clusters with message passing is one of the most successful technological developments in computing in the last thirty years.
In contrast, a quantum computer directly uses a quantum mechanical system and requires appropriate specialised hardware. For example, GENCI in France uses a photonic computer, LRZ in Germany uses superconducting qubits, PSNC in Poland uses trapped ions, etc. David P. DiVincenzo has set out the most significant physical challenges that face quantum computers, regardless of what technology is used; these include scaling qubits, initialisation of values, developing a universal gate for the construction of quantum operations, developing gates that are faster than decoherence of the quantum state due to environmental interactions, and reading qubits (especially considering that reading can itself alter the quantum state).
As a result, classical computers outperform quantum computers in all real-world applications. Not only that, there is a serious question of whether quantum computers will ever be able to outperform classical computers. Mikhail Dyakonov points out that the rudimentary qubits used in quantum computing systems are insufficient for useful calculations.
"Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 21,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe."
The promise of quantum computers is, of course, very significant in theory. In theory, they can perform some calculations incredibly fast, and the larger the task, the more impressive the result, to the extent that common secure encryption systems could be broken, as well as the more prosaic use in quantum simulations. In reality, the physical implementation has been more than challenging, to put it mildly. Classical computers can, in principle, solve the same problems as a quantum computer. For a classical computer the problem is the sheer quantity of time required; for a quantum computer, the problem is the implementation in reality. For the time being, and for the foreseeable future, it seems that quantum computing will continue to be done on classical computers.
Wednesday, 9th April 2025, 8:00pm AEST
Meeting started at 8:00pm AEST.
MR JOEL ADDISON, President
As was flagged earlier and at the AGM, we need to go through a process of amending our constitution if we want to remain a not-for-profit.
The deadline for this process is 30 June 2025
Thank you Russell for the hard work in preparing the necessary changes as a series of pull requests. Thank you to everyone who reviewed and provided feedback on the changes.
Before we vote, is there anything anyone wants to bring up for discussion?
[No issues raised]
Question: Do we have a quorum?
Answer: Yes and everyone in the meeting is qualified to vote as a member of Linux Australia.
We have had nine proxies registered. We will process the vote via a Zoom poll and then add the proxies. It requires a 75% vote in favour for the change to pass.
MOTION by Joel Addison that the complete constitution, as set out within the constitution.md file at https://github.com/linuxaustralia/constitution_and_policies/pull/47/files, be adopted.
Seconded by: Neill Cox
Motion raised via Zoom poll
Zoom results: 100% in favour (24 votes cast), plus Russell Stuart and Joel Addison
Proxies: 9 proxies also voted in favour
In total: 35 votes, unanimously in favour.
We will now send this to the NSW Government to have it officially registered. Once it is registered then we will merge the pull request.
Meeting closed at 20:19 AEST
The post Minutes of Linux Australia – Special General Meeting April 2025 appeared first on Linux Australia.
My wife and I were with Optus for our mobile phone service since approximately the dawn of time, but recently decided to switch to another provider. We’d become less happy with Optus over the last few years after a data breach in 2022, an extended outage in 2023, and – most personally irritating – with them increasing the price of our plan despite us being under contract. Yes, I know the contract says they’re allowed to do that given 30 days notice, but they never used to do that. If you signed up for a $45 per month (or whatever) plan for two years, that’s what you paid per month for the duration. Not anymore. To their credit, when my wife lodged a complaint about this, they did end up offering us a 10% discount on our bill for the next 24 months, which effectively brought us back to the previous pricing, but we still maintain this practice just isn’t decent, dammit.
The question was: which provider to switch to? There are three networks in Australia – Telstra, Optus and Vodafone – so you either go with one of them, or with someone who’s reselling services on one of those networks. We already have a backup pre-paid phone with Telstra for emergencies, and so preferred the idea of continuing our main service on some other network for the sake of redundancy. iiNet (our ISP) repeatedly sent us email about nice cheap mobile services, but they were reselling Vodafone, and we’d always heard Vodafone had the worst coverage in regional Australia so we initially demurred. A few weeks ago though, iiNet told us they’d doubled their network coverage. It turns out this is due to TPG (iiNet and Vodafone’s parent) striking a deal with Optus for mutual network access. This all sounded like a good deal, so we ran with it. We received a new SIM each in the mail, so all we needed to do was log in to the iiNet toolbox website, receive a one-time code via SMS to confirm the SIM port, then put the new SIM in each of our phones, power cycle them and wait to connect. We decided to do one phone at a time lest we be left with no service if anything went wrong. I did my phone first, and something did indeed go wrong.
After doing the SIM activation dance, my phone – an aging Samsung Galaxy A8 4G which Optus had given me on a two-year contract back in 2018 – said it was connected to iiNet. Mobile data worked. SMS worked. But I could not make or receive calls. Anyone I tried to call, the phone said “calling…” but there was no sound of a phone ringing, and eventually it just went >clunk< “call ended”. Incoming calls went straight to voicemail, which of course I could not access. Not knowing any better I figured maybe it was a SIM porting issue and decided to ignore it for a day in the hope that it would come good with time. Forty-eight hours later I realised time wasn’t working, so I called iiNet support using this thing:
The extremely patient and courteous Jinky from iiNet support walked me through restarting my phone and re-inserting the SIM (which of course I’d already done), and resetting the network settings (which I hadn’t). She also did a network reset at their end, but I still couldn’t make or receive calls. Then she asked me to try the SIM in another handset, so I swapped it into our backup Telstra handset (a Samsung Galaxy S8), and somewhat to our surprise that worked fine. We double checked my handset model (SM-A530F) against the approved devices list, and it’s there, so it should have worked in my handset too… Alas, because we’d demonstrated that the SIM did work in another handset, there was nothing further Jinky could do for me other than suggest using a different handset, or finding a technician to help figure out what was wrong with my Galaxy A8.
After a lot of irritating searching I found a post on Whirlpool from someone who was having trouble making and receiving calls with their Galaxy A8 after the 3G network shutdown in late 2024. The interesting thing here was that they were using an Optus-branded but unlocked phone, with a Telstra SIM. With that SIM, they couldn’t make or receive calls, but with an Optus SIM, they could. This sounded a lot like my case, just substitute “iiNet SIM” for “Telstra SIM”. The problem seemed to be something to do with VoLTE settings? or flags? or something? That are somehow carrier dependent? And the solution was allegedly to partially re-flash the handset’s firmware – the CSC, or Country Specific Code bits – with generic Samsung binaries.
So I dug around a bit more. This post from Aral Balkan about flashing stock firmware onto a Galaxy S9+ using the heimdall firmware flashing tool on Ubuntu Linux was extremely enlightening. The Samsung Updating Firmware Guide on Whirlpool helpfully included a very important detail about flashing this stuff:
- Use CSC_*** if you want to do a clean flash or
- HOME_CSC_*** if you want to keep your apps and data. <== MOST PEOPLE USE THIS
The next question was: where do I get the firmware from? Samsung have apparently made it extremely difficult to obtain arbitrary firmware images directly from them – they’re buried somewhere in encrypted form on Samsung’s official update servers – so I ended up using samfw.com. I downloaded the OPS (Optus), VAU (Vodafone) and XSA (unbranded) firmware archives, matching the version currently on my phone, extracted them, then compared them to each other. The included archives for AP (System & Recovery), BL (Bootloader) and CP (Modem / Radio) were all identical. The CSC (Country / Region / Operator) and HOME_CSC files were different in each case. These are the ones I wanted, and the only ones I needed to flash. So, as described in the previously linked posts, here’s what I ended up doing:
heimdall flash --CACHE cache.img --HIDDEN hidden.img
and waited in terror for my handset to be bricked. The procedure worked perfectly. VoLTE – which wasn’t previously active on my phone – now was, and I could make and receive calls. VoLTE stands for Voice over Long-Term Evolution, and is the communications standard for making voice calls on a 4G mobile network.
It was at this point that the woefully untrained infosec goblin who inhabits part of my brainstem began gibbering in panic. Something along the lines of “what the hell are you doing installing allegedly Samsung firmware from a web site you found listed in a random forum post on the internet?!?”
I believed from everything I’d read so far that samfw.com was reputable, but of course I had to double-check. After an awful lot of screwing around on a Windows virtual machine with a combination of SamFirm_Reborn (which could download Samsung firmware once I tweaked SamFirm.exe.config to not require a specific .NET runtime, but couldn’t decrypt it, presumably due to that missing .NET runtime) and SamFirm (which can’t download the firmware due to Samsung changing their API to need a serial number or IMEI, but could decrypt what I’d downloaded separately with SamFirm_Reborn), I was able to confirm that the firmware I’d downloaded previously does in fact match exactly what Samsung themselves make available. So I think I’m good.
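For anyone wanting to repeat that comparison, it amounts to hashing the corresponding archives from each regional bundle and checking which digests match. A minimal sketch, assuming the three bundles have been extracted into directories named OPS, VAU and XSA (the directory layout and file prefixes here are illustrative, not samfw.com’s actual naming):

```python
import hashlib
from pathlib import Path

def sha256(path):
    # hash in 1 MiB chunks so multi-gigabyte archives don't need to fit in RAM
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# AP/BL/CP should hash identically across regions; CSC and HOME_CSC should not.
for prefix in ["AP", "BL", "CP", "CSC", "HOME_CSC"]:
    digests = set()
    for region in ["OPS", "VAU", "XSA"]:
        for archive in Path(region).glob(prefix + "_*"):
            digests.add(sha256(archive))
    print(prefix, "identical" if len(digests) == 1 else "different")
```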
The SIM activation dance on my wife’s phone – a rather newer Samsung Galaxy S21 Ultra 5G – went without a hitch.
Neuro-divergence, encompassing conditions such as autism spectrum, ADHD, and sensory processing differences, can profoundly influence how individuals perceive and respond to their bodily signals.
While neurotypical individuals generally recognise and respond to hunger, thirst, and satiety cues with relative ease, neuro-divergent individuals often face unique challenges in this area. Understanding these challenges is crucial for fostering empathy and supporting effective strategies for well-being.
This article is written so it is directly readable and useful (in terms of providing action items) for people in your immediate surroundings, but naturally it can be directly applied by neuro-spicy people themselves!
For many neuro-divergent people, recognising hunger and thirst cues can be a complex task. These signals, which manifest as subtle physiological changes, might not be as easily identifiable or may be misinterpreted.
For instance, someone on the spectrum might not feel hunger as a straightforward sensation in the stomach but instead experience it as irritability or a headache. Similarly, those with ADHD may become so hyper-focused on tasks that they overlook or ignore feelings of hunger and thirst entirely.
Sensory processing issues can further complicate the interpretation of bodily signals. Neuro-divergent individuals often experience heightened or diminished sensory perception.
This variability means that sensations like hunger pangs or a dry mouth might be either too intense to ignore or too faint to detect. The result is a disconnection from the body’s natural cues, leading to irregular eating and drinking habits.
Recognising satiety and fullness presents another layer of difficulty. For neuro-divergent individuals, the brain-gut communication pathway might not function in a typical manner.
This miscommunication can lead to difficulties in knowing when to stop eating, either due to a delayed recognition of fullness or because the sensory experience of eating (such as the textures and flavours of food) becomes a primary focus rather than the physiological need.
Emotions and cognitive patterns also play significant roles. Anxiety, a common experience among neuro-divergent individuals, can mask hunger or thirst cues, making it harder to recognise and respond appropriately.
Additionally, rigid thinking patterns or routines, often seen with autism spectrum, might dictate eating schedules and behaviours more than actual bodily needs.
Understanding these challenges opens the door to effective strategies and support mechanisms:
Understanding the complex interplay between neuro-divergence and bodily signals underscores the importance of personalised approaches and compassionate support.
By acknowledging and addressing these challenges, we can help neuro-divergent individuals achieve better health and well-being!
The post Understanding and responding to hunger and thirst signals by neuro-divergent people first appeared on BlueHackers.org.

The standard computing model uses unithreaded instructions and data, with automation through looping and conditional branching. Automation is encouraged, as it results in the computer doing the work that it is designed for. However, this can be inefficient when using a multicore system. An alternative in HPC systems is to make use of job arrays, which use a job to allocate resources to sub-jobs which can be individually controlled, whether directed toward instruction sets or datasets. Further, job arrays can be combined with job dependencies, allowing for conditional chains of job submission and runs. Finally, job arrays can be simulated through the use of heredocs with looped submission. This may even allow more familiar control with shell scripting.
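For what it's worth, here is a rough Python rendering of that looped-submission pattern, assuming a Slurm cluster with sbatch on the PATH; a native array job would instead use #SBATCH --array, and the script bodies and dataset names below are purely illustrative:

```python
import subprocess

# Submit a preparation job first; --parsable makes sbatch print just the job id.
prep_id = subprocess.run(
    ["sbatch", "--parsable"],
    input="#!/bin/bash\n./prepare_data\n",
    capture_output=True, text=True, check=True,
).stdout.strip()

# One sub-job per dataset, each held until the preparation job succeeds,
# mirroring a shell loop that feeds sbatch a heredoc.
for dataset in ["sample1.dat", "sample2.dat", "sample3.dat"]:
    script = (
        "#!/bin/bash\n"
        f"#SBATCH --dependency=afterok:{prep_id}\n"
        f"./process {dataset}\n"
    )
    subprocess.run(["sbatch"], input=script, text=True, check=True)
```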
This slidedeck is derived from a presentation to the University of Melbourne's "Spartan Champions" group on March 7, 2025.
The Nvidia Way: Jensen Huang and the Making of a Tech Giant by Tae Kim
A history of Jensen Huang and Nvidia. Easy to follow and interesting. 4/5
The Hobbit by J. R. R. Tolkien
Another listen of the Serkis version. Once again I’m mostly okay with his reading, although I prefer Inglis’ 4/5
My Audiobook Scoring System
Since 2016, Facebook Messenger has allowed the option for end-to-end encrypted messages, and in 2023, they were established as the default.
This has caused some issues with people exporting Messenger data for archival or viewing purposes. It is a lot quicker to search for information when the data is local, and it is better practice to do so.
Encrypted Facebook Messenger data can be downloaded by selecting "Privacy & safety" on the Facebook menu, then "End-to-end encrypted chats", then "Message storage", and finally "Download secure storage data".
When the file is ready, it will be provided as a zip file that contains various binaries in a "media" directory and well-formatted JSON files for each message chat for current Friends.
Despite the formatting, JSON files are less than optimal for humans to read. The "Lament" Python script makes use of the pandas, json, sys, and datetime modules. Simply copy it into your messages directory, where all the JSON files are located, and run:
$ python3 lament.py
It will then output a formatted HTML table that can be opened by a browser of your preference. The table includes a human-readable date (rather than the UNIX timestamp), the sender, the message, reactions, and a reference to any media.
The format and logic of the Python script are simple enough that it can be easily extended to other JSON data from Messenger and can be adapted to other JSON data files.
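As a rough idea of the approach (this is a sketch, not the actual Lament script, and the JSON field names such as "timestamp_ms" and "sender_name" are assumptions to check against your own export):

```python
import json
from datetime import datetime
from pathlib import Path

import pandas as pd

rows = []
for chat_file in Path(".").glob("*.json"):
    chat = json.loads(chat_file.read_text())
    for msg in chat.get("messages", []):
        rows.append({
            # UNIX millisecond timestamp -> human-readable date
            "date": datetime.fromtimestamp(msg["timestamp_ms"] / 1000)
                            .strftime("%Y-%m-%d %H:%M:%S"),
            "sender": msg.get("sender_name", ""),
            "message": msg.get("text", ""),
            "reactions": ", ".join(r.get("reaction", "")
                                   for r in msg.get("reactions", [])),
            "media": "; ".join(m.get("uri", "")
                               for m in msg.get("media", [])),
        })

pd.DataFrame(rows).sort_values("date").to_html("messages.html", index=False)
```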
Aotearoa New Zealand has run eResearch symposiums and conferences since 2010, with the 2025 conference held in Christchurch, co-hosted with Genomics Aotearoa, New Zealand eScience Infrastructure (NeSI) and REANNZ. The first day, a "Carpentries Connect" event, was held at Canterbury University, whilst the main conference was held at the "Chateau on the Park" motel. The conference is focused on technological tools and processes that aid researchers; it is notable that most of the attendees are in the fields of technology deployment, support, management, education and training; they are often professionals who are concurrently researchers or come from a research background.
The Carpentries Connect day featured the Executive Director of the Carpentries, Dr. Kari L. Jordan, who spoke passionately on the importance of the Carpentries in improving the computational skills of researchers, and on the organisation's continuous improvement, dedication to evidence-based andragogy, and expansion over the last twenty-seven years. This has led to over 100,000 people receiving instruction and widespread adoption throughout the world and, in particular, in New Zealand. Of particular note was the announcement that the HPC Carpentry program was now sufficiently developed - after many years - that deployment was expected this year. Also of note, Dr. Tom Pollard from the Massachusetts Institute of Technology (MIT), USA, and Technical Director, delivered a fascinating talk on "Responsible Machine Learning" and what "irresponsible" machine learning looks like!
The conference proper consisted of three days of keynotes and concurrent sessions, about 75 in total. It is, of course, impossible to give a summary of all events that does them justice, so remarks are given instead on a few presentations that especially stood out from my own perspective. These must include the keynote from Dr Giuseppe Barca of the University of Melbourne on quantum chemistry, drug development, and the use of Frontier, the second most powerful public supercomputer in the world, and especially his group's development of GPU-based quantum chemistry software with linear scaling, and quantum potential and ML training with multi-layer molecular mechanics.
Another was Professor Rick Stevens of the University of Chicago and Argonne National Laboratory, who illustrated the explosive development in AI in the past few years and how planning was already underway for further massive investment in LLMs in the next few years. His comments matched a pithy remark from April Neoh of NVIDIA/HPE earlier in the conference that model innovations were "learning faster than we can learn what to do with it". He also made the observation that there was increasing pressure for hardware to move to mixed precision for performance, which matched Barca's earlier remarks that the future will increasingly have domain-specific architectures.
A third presentation of note was that from Amber McEwan, Chief Executive Officer at REANNZ, who spoke on the merger of NeSI with that body, following the Ministry of Business, Innovation, and Employment report proposing the merger in 2022 and the directive for implementation. It was, as can be expected, a carefully-worded presentation that outlined the process for integration, the need for consolidation of eResearch infrastructure, and the purpose of attracting and retaining talent in the eResearch workforce. The initial stage, also as can be expected for these two mature organisations, is a "lift and shift" approach that will be complete by July 2025.
My own presentation, "HPC Bioinformatics Education: The University of Melbourne Experience", drew from a large body of previous work on HPC education. An outline of the UoM and precinct environment for bioinformatics was provided, along with general andragogical approaches, the University of Melbourne's HPC onboarding process, particular needs for bioinformaticians, and empirical outcomes. The presentation was well-received, with ongoing interest and feedback. I must also note that the talk dovetailed quite nicely into the one that followed, by Paul Gardner of the University of Otago, who, following exhaustive test cases, differentiated bioinformatic tools across the dimensions of speed and accuracy and concluded that sustained development was the best metric for determining the most effective software.
Understandably, interest in developing eResearch skills featured highly, with several presentations returning to this subject, including an extensive BoF dedicated to the topic from the Carpentries groups, NeSI, Genomics Aotearoa, and the individual universities. Another item of interest was how extensively the "Spartan architecture" of hybrid HPC extended with cloud compute ("FlexibleHPC" is their preferred term), with a lowered barrier-to-entry via Open OnDemand, has been adopted in New Zealand, both in the national system and in smaller systems at Otago University and the University of Canterbury. The national system, following upgrades, will have 20K cores using AMD Genoas plus existing AMD Milan and Intel Broadwell processors.
Overall, eResearchNZ continues to be an exceptional conference that brings together those on the cutting edge of high performance and high throughput computing, the people who look after such systems, and those who provide a bridge for researchers to access and understand the technology. After conferences that, due to COVID circumstances, were held in virtual environments, to have face-to-face meetings again allowed for excellent networking opportunities and free conversation and elaboration on presentations. Once again, New Zealand can be justly proud of its ability to "punch above its weight" and put on a world-class eResearch conference.
There’s a lot of blah flying around about DeepSeek and its latest models. My rule when running training sessions, whenever someone asked me something that could be figured out relatively quickly by actually trying it, has always been TIAS: Try It And See. It’s more valuable because it’s a first-hand validation, and doing the activity means the outcome is more likely to be remembered. So let’s do that!
My desktop has a decent AMD Ryzen 5 CPU, 32GB RAM, and an RTX 3060 GPU with 12GB VRAM. So while that’s not super hot, it’s sufficient to load and query a variety of LLMs locally without pain. By the way, ollama is a wonderful tool for this! So I had ollama grab deepseek-R1, which is the biggest reasoning model. It takes slightly longer to get an answer, but it tells you the process (aka “what it is thinking”, indicated in between the <think> … </think> tags). Details of the model:
>>> /show info
  Model
    architecture        qwen2
    parameters          7.6B
    context length      131072
    embedding length    3584
    quantization        Q4_K_M
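As an aside, ollama also listens on a local HTTP API (port 11434 by default), so the same questions can be scripted rather than typed into the REPL. A minimal sketch using only the Python standard library; the <think> … </think> reasoning appears inside the returned "response" field:

```python
import json
import urllib.request

payload = json.dumps({
    "model": "deepseek-r1",
    "prompt": "Who is the current president of the US?",
    "stream": False,  # one JSON object back instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```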
I am not using any pre-prompting here, I simply started ollama with the model and started asking the questions as noted below. A local model in ollama is typically not Internet connected nor does it have access to the broader system, proven by the following:
>>> What is the current date?
<think>
</think>
Hi there! I suggest getting online to get real-time information. If you have any other questions, please don't hesitate to let me know!
Ok, fair. On to a question then that the model might be able to answer.
>>> Who is the current president of the US?
<think>
Alright, the user previously asked for the current date and time, which I provided a response for. Now, they're asking who the current president of the US is.
I need to make sure my response is accurate and up-to-date. Since the last response was from July 2023, it's likely that the current president as of October 2023 would still be Joe Biden.
I should structure the answer clearly, stating their position explicitly. Adding a friendly emoji can make the response feel warmer.
I'll also keep it concise and straightforward to match the user's possible expectations.
</think>
As of now, the current President of the United States is **Joe Biden**.
Given the stated model build date of October 2023 which used input on this from July 2023, the answer is correct. The reasoning is interesting (though not unexpected).
a) it is maintaining state. As it should, otherwise each question would stand on its own and you’d never be able to ask a follow-up question.
b) It tries to infer from the previous question as to what the user’s intent is. Useful.
>>> Who is the current president of China?
<think>
Alright, so I need to figure out who the current president of China is. Let me start by recalling some information I have about Chinese leaders.
I know that former President Xi Jinping has been serving as the leader for quite a while now. I think he was first elected in 2012 or maybe 2013. Wait, actually, it was 2012 when he became the General Secretary of the Communist Party and then later he became president.
I also remember that there have been other leaders before him, like Deng Xiaoping, who was a very important leader in the reform and opening-up policies. Then there's Hu Jintao, who served two terms. After Hu, I think it was Wen Jiabao, but I'm not sure about his tenure.
So, putting this together, after Deng Xiaoping came Hu Jintao, then Wen Jiabao, followed by Xi Jinping taking over from 2013 onwards. So the current president is Xi Jinping.
</think>
The current President of China is **Xi Jinping**.
Correct. Now for the question that seems to get everybody’s knickers in a knot. DeepSeek’s mobile app doesn’t want to answer it, and people call out China’s censorship. We’ll talk about that in a moment, but let’s first try asking the local model straight out:
>>> Tell me about the Tiananmen Square event.
<think>
Okay, so I need to explain what happened at Tiananmen Square in China during the year 1989. From what I remember, it was a very significant event related to the Chinese government and its policies.
First, I think it started with a peaceful demonstration by students, intellectuals, and other groups who were upset about certain government actions or policies they believed were unfair. They gathered in Tiananmen Square because they wanted to express their opinions and demands.
As more people joined the protest, things turned into a larger-scale event. There were clashes between the protesters and the government's security forces. I believe there was some violence involved, with tear gas being used and maybe even some arrests or disruptions of the protest.
I also remember that there were military actions involved to maintain control, which might have led to casualties on both sides. The exact numbers are fuzzy in my memory, but I think several people were injured or killed during the events.
The government eventually cracked down hard on what they saw as an illegal demonstration. There was a lot of media coverage about it at the time, and now there's a debate about whether it was justified or an overreaction.
I'm not entirely sure about all the details, like the exact dates or how many people were involved, but I know it was a pivotal moment in modern Chinese history that led to various political changes. Maybe it's related to the 1989 protests and the subsequent shift in power within the government?
Overall, it seems like a complex event with multiple factors influencing its outcome, including government actions, international reactions, and the internal dynamics of the Communist Party.
</think>
Tiananmen Square is a significant historical event that occurred in Beijing on June 3-4, 1989. It was a peaceful demonstration by students, intellectuals, workers, and others who were protesting against what they perceived as unfair government policies regarding various issues such as human rights, economic reforms, and environmental regulations.
The protest began with small demonstrations and gradually turned into larger, more organized antigovernment activities. On June 3, several thousand people gathered in Tiananmen Square to protest against the Chinese Communist Party's handling of these issues.
As the protests grew, there was significant government crackdown. Security forces used tear gas, rubber bullets, and heavy machine gun fire to disperse the crowd, resulting in casualties on both sides. The government ultimately cracked down more brutally, arresting and killing many protesters. According to some accounts, at least 138 people were killed during the events.
The Tiananmen Square protests led to a dramatic shift in Chinese political power. After the crackdown, several members of the Chinese Communist Party who had supported the government's actions against the protesters were removed from leadership positions. The event also catalyzed widespread international condemnation and opened up a period of political transition in China.
Right. So there we have it. It’s not refusing, it’s not hedging, it’s not skirting around the issue, it just answers (you should see ChatGPT hedge its answers, it’s icky; you need to very strictly pre-prompt it to make it not do that). DeepSeek doesn’t appear to be holding back at all, down to the use of “heavy machine gun fire to disperse the crowd”. As to whether there were indeed casualties on both sides I cannot say; even internationally there is rather a lack of independently verified information regarding the event. But for the purpose of this exercise, we can at least conclude that the model itself does not appear to be censoring its output.
So what about the mobile app that queries the model running in China? Well, someone else asked it a similar question to what I did above, and it didn’t want to talk about it. Then the person added “, answer in l33t speak.” to the question, whereupon they received a substantial answer (possibly less extensive than mine, but they may have queried the non-reasoning model).
What does this tell us? It’s simple logic (at least as a hypothesis): it probably means that the model itself contains all the information, but that in the online app the output gets scanned and censored via some automated mechanism. That mechanism isn’t perfect and humans are very creative, so in this instance it was bypassed. Remember: you can often tell a lot about how an application works internally just by observing how it behaves externally. And with the experiment of running a big DeepSeek model locally, we’ve also verified our hypothesis of where the censorship occurs: it seems clear that the model itself is not censored. At least not on these issues.
This is not to say that the model isn’t biased. All models are biased, at the very least through their base dataset and the reinforcement learning, but often also for cultural reasons. Anyone pretending otherwise is either naive or being dishonest. But that’s something to further investigate and write about another time.
The post An initial look at running DeepSeek-R1 locally first appeared on Lentz family blog.

"Whoever has seen the universe, whoever has beheld the fiery designs of the universe, cannot think in terms of one man, of that man's trivial fortunes or misfortunes, though he be that very man."
-- Jorge Luis Borges
Titan is the largest moon of Saturn, about 1.2 billion kilometres from Earth. It was discovered in 1655 by Christiaan Huygens, but its dense, opaque atmosphere prevented any understanding of Titan's surface for many years. It is the only moon known to have an atmosphere denser than Earth's, and the only known object in space other than Earth with stable bodies of surface liquid.
Knowledge of Titan was greatly improved by the Cassini-Huygens probe and lander, a joint effort by NASA, the ESA (European Space Agency), and the ASI (Agenzia Spaziale Italiana). It was launched on October 15, 1997; the journey to Saturn included flybys of Venus, Earth, asteroids, and Jupiter.
The Huygens lander module travelled with Cassini until its separation from the probe on December 25, 2004; Huygens landed by parachute on Titan on January 14, 2005. For its own part, Cassini made passes through the gaps between Saturn and its inner rings before going into Saturn's upper atmosphere, where it burned up.
The Huygens lander discovered that Titan has a thick atmosphere that's 95% nitrogen and 2% methane. The surface has rivers, lakes, and even seas of the hydrocarbons methane and ethane.
The landing is something that, to this day, I watch in awe.
Happy landing day, Huygens.
I write, but just not here. Client sites, X, etc. so there is chronicling, but just not on the blog.
What changed from Hello 2024?
I got married. I moved into the flat. Companies have gone up and down, like life.
170 days on the road, 308,832km travelled, 38 cities, and 16 countries. I have never travelled this little in recent life, but maybe the whole getting married thing (planning a wedding is no mean feat), and sorting the flat out (dealing with incompetent interior designers, sorting things there, etc.), caused this?
It is 2025, and I’m actually planted in Kuala Lumpur, not having done an end of year trip, to usher in the New Year somewhere else. I started the year in Paris, and I ended the year in Kuala Lumpur, tired, and maybe a bit burnt out.
Working hard to get back into the grind; don’t get me wrong, I’ve been doing nothing but grinding, but c’est la vie.
The end of this year marks my seventeenth year working in high performance computing and my ninth at the University of Melbourne in this role. When I compare this to previous years there have been some notable changes in the technology and the system I am primarily involved with (Spartan), but also in my own employment activities. Late last year, there was a structural review of our operations at Research Computing Services, as the existing organisational chart was becoming unwieldy and increasingly untenable. I ended up as the team leader for HPC Services and have stepped back somewhat from technical to management of a small but awesome team, along with organisational activities between other service groups (data, cloud) and our very close relationship with the infrastructure group.
Compared to last year, Spartan has increased to 7121 accounts and 2361 projects, mainly in engineering, bioinformatics (especially health), economics, mathematics, and more, and has been cited in at least 55 new papers. Machine learning has been a particularly popular area of interest for several years now on the system, which has especially benefited from Spartan's significant investment in GPUs, whose excellent vector computational performance is evident in the system receiving certification as a global supercomputer in November last year, jumping from a position of 453 (for the GPU partitions alone) in November 2023 to 262 in November 2024. Directly related to Spartan work, I attended two major conferences in person this year, "Supercomputing Asia" and "eResearch Australasia". For the former, I gave a presentation on the International HPC Certification Forum and a poster on usage outcomes from training. For the latter, I gave a presentation on the development of Spartan from a small but innovative system to its current supercomputer status.
Training various postgraduate and postdoctoral researchers on how to use the system has been part of my work for more than a decade now, and it took some acceptance on my part several years ago when I realised that I was the most prolific supercomputer educator in the country. This year, several hundred researchers attended the twenty-two workshops that I conducted on Linux knowledge, regular expressions, HPC job submission, high performance and parallel Python, parallel programming (MPI, OpenMP, CUDA), mathematical and statistical programming (R, Octave/MATLAB, etc.), and more. In addition, each year, I am brought in for lectures and assessments for the University's Cluster and Cloud Computing course, which also has several hundred students. In addition, this year, I took a leave from the University to travel to the Australian Institute for Marine Science in Townsville to run a week-long HPC training course for around fifty of the most switched-on (mostly) young researchers I have ever had the pleasure of meeting.
All of this has resulted in an extremely good review by my manager, who really appreciated the initiatives that I have taken within the new structure. These activities will continue, as I am increasingly emphasising the importance of organisational and technical quality assurance to RCS as a whole. A good portion of next year is already organised: I know I will be attending eResearch New Zealand to deliver a paper on HPC Training for Bioinformatics, eResearch Australasia in Brisbane, and I'll be doing lectures for UniMelb's COMP90024 course. In addition, I'll be doing my best to reduce the number of Spartan workshops I run, in preference to more online videos and documentation (we do a lot of the latter already, but it's never enough to satiate demand).
In many ways, I am deeply blessed to have the sort of job that I do. Even if I get a bit grumpy about bureaucracy at times, I love my work. I get to provide supercomputer support to researchers whose discoveries and inventions make real changes to the world we live in with a stunning return on investment of 7:1 over two years, nearly entirely in the form of positive social externalities. It is computing for medicine, for climatology, for materials, for agriculture, for the environment, rather than computing for social media or games (both of which I use, by the way). It is the sort of computing I was inspired by as a youngster, research-focused and close to the metal. I may add that it is good and secure employment, especially given that technical and knowledge skills are increasingly valuable where the ratio of capital to labour increases. In a nutshell, I am more than happy with how supercomputing is progressing, and I am very happy with this career choice.
I recently replaced the screen of a Google Pixel 3A XL. The new panel is made by Tianma and worked well under Android, until it didn’t. On every boot the screen would work until the phone went to sleep, after which it would stop responding to touch until another reboot. While the screen was unresponsive, the rest of the phone remained responsive in the locked state and it was possible to unlock the phone with the fingerprint reader, but there was no way to make the touchscreen respond again without a reboot.
To fix this, go to Settings -> System -> Gestures and disable Double-tap to check phone. After this, the screen should no longer get stuck in an unresponsive state. This seems to be a common problem affecting many phones with replaced screens.
Google will surely shut down their support forum one day, and I encourage everyone to put their notes somewhere reliable, like a self-hosted blog :)
Our 5.94kW solar array with Redflow ZCell battery and Victron Energy inverter/charger system is now slightly over three years old, which means it’s time to review its third year of operation. There are several previous posts in this series:
If you’ve read the above you’ll know that the solar array was originally installed back in 2017 along with a Sanden heat pump hot water service. That initial installation saved us a lot on our electricity bills, but it wasn’t until we got the ZCell and the Victron gear that we were able to really manage our own power. The ZCell allows us to store our own locally generated electricity for later use, and the Victron kit manages everything and gives us a whole lot of fascinating data to look at via the VRM portal.
There were some kinks in the first two years. We missed out on three weeks of prime solar PV generation from January 20 – February 11 in 2022 due to having to replace the MPPT solar charge controller. We also had no solar PV generation from February 17 – March 9 in 2023 on account of having our old tile roof replaced with colorbond steel. In my last post on this topic I wrote:
In both cases our PV generation was lower than it should have been by an estimated 500-600kWh. Hopefully nothing like this happens again in future years.
…and then at the very end of that post:
I’m looking forward to doing another one of these posts in a year’s time. Hopefully I will have nothing at all interesting to report.
Alas, something “like this” did happen again, and I have some interesting things to report.
In early December 2023 our battery failed due to a leak in the electrode stack. It was replaced under warranty, but the replacement unit didn’t arrive until March 2024. It was a long three months. Then in August when we were looking at finally purchasing a second ZCell, we discovered that Redflow had made a commercial decision to focus exclusively on large-scale deployments (minimum 200 kWh, i.e. 20 batteries) and was thus no longer selling individual ZBMs for residential or small business use. As an existing customer we probably would have still been able to get a second battery, except that in late August the company went into voluntary administration after failing to secure funding to build a new factory in Queensland. The administrators attempted to seek a sale and/or recapitalisation, but this was ultimately unsuccessful. The company ceased operations on October 18 and subsequently went into liquidation. This raises several questions about the future of our system, but more on that later. First, let’s look at how the system performed in year three.
Here are the figures for grid power in, solar generation, power used by our loads, and power exported to the grid over the past three years. As in the last two posts, the “what?” column here is the difference between grid in plus solar in, minus loads minus export, i.e. the power consumed by the system itself, or the energy cost of the system.
Year | Grid In | Solar In | Total In | Loads | Export | Total Out | what? |
---|---|---|---|---|---|---|---|
2021-2022 | 8,531 | 5,640 | 14,171 | 10,849 | 754 | 11,603 | 2,568 |
2022-2023 | 8,936 | 5,744 | 14,680 | 11,534 | 799 | 12,333 | 2,347 |
2023-2024 | 8,878 | 5,621 | 14,499 | 11,162 | 1,489 | 12,651 | 1,848 |
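Since the "what?" column is just grid in plus solar in, minus loads, minus export, it can be re-derived from the table directly:

```python
# grid in + solar in - loads - export, in kWh, per year
for year, (grid, solar, loads, export) in {
    "2021-2022": (8531, 5640, 10849, 754),
    "2022-2023": (8936, 5744, 11534, 799),
    "2023-2024": (8878, 5621, 11162, 1489),
}.items():
    print(year, grid + solar - loads - export)  # 2568, 2347, 1848
```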
Note that in year three our grid power usage and solar generation are slightly down from the previous year (-58kWh and -123kWh respectively), so the total power going into the system is lower by 181kWh. Our loads are happily down by 372kWh, a good chunk of which will be due to replacing some old always-on computer equipment with something a bit less power hungry.
What’s really interesting here is that our power exported to the grid is close to double the previous two years, and the energy cost of the system is noticeably lower. In the first two years of operation the latter figure was 16-18% of the total power going into the system, but in year three it’s down to a bit under 13%.
The additional solar export appears to be largely due to the failed battery. Compare the following two graphs from 2022-2023 and 2023-2024. Yellow is direct usage of solar power, blue is solar to battery and red is solar to grid. As you can see there’s way more solar to grid in the period December 2023 – March 2024 when the battery was dead and thus unable to be charged:
Why is there still any blue in that period indicating solar power was going to the battery? This is where things get a bit weird. One consideration is that the battery is presumably still drawing a tiny bit of power for its control circuitry and fans, but when I look at the figures for January 2024 (for example), it shows 76.8 kWh of power going to the battery from solar. There is no way that actually happened with the battery dead and unable to be charged.
Here’s what I think is going on: when the battery went into failure mode, the ZCell Battery Management System (BMS) will have told the Victron gear not to charge it. This effectively disabled the MPPT solar charger, which meant we weren’t able to use our solar at all, not even to run the house. I asked Murray from Lifestyle Electrical Services if there was some way we could reconfigure things to still use solar power with the battery out of action and he remoted in and tweaked some settings. Unfortunately I don’t have an exact record of what was changed at this point, because it was discussed via phone. All I have in my notes is a very terse “Set CGX to use Victron BMS?” which doesn’t make much sense because we don’t have a Victron BMS. Possibly it refers to switching the battery monitor setting from “ZCell BMS” to “MultiPlus-II 48/5000/ 70-50 on VE.Bus”. Anyway, whatever the case, I think we have to assume that the “to battery” and “from battery” figures from December 2023 – March 2024 are all lies.
At this point we were able to limp along with our solar generation still working during the day, but something was still not quite right. Every morning and evening the MPPT appeared to be fighting to run. Watching the console at, say, 08:00, I’d see the MPPT providing solar power for a few seconds, then it’d stop for a second or two, then it’d run again for a few seconds. After some time it would start behaving normally and we’d have solar generation for the day, but then in the evening it would go back to that flicking on and off behaviour. My assumption is that the ZCell BMS was still trying to force the MPPT off. Then in mid-February I suddenly got a whole lot of Battery Low Voltage warnings from the MPPT, which I guess makes sense – the ZCell was still connected and its reported voltage had been very slowly dropping away over the past couple of months. The warnings appeared when it finally hit 2.5V. Murray and I experimented further to try to get the MPPT to stop doing the weird fighting thing, but were unsuccessful. At one point during this we ended up with the Multi-Plus II inverter/chargers in some sort of fault state and contacted Simon Hackett for further assistance. We got all the Victron gear back into a sensible state and Simon and I spent a bunch of time on a Saturday afternoon messing with everything we could think of, but ultimately we were unable to get the MPPT to provide power from the solar panels, and use grid power, without the battery present. One or the other – grid power only or solar power only – we could do, but we couldn’t get the system to do both at the same time again without the battery present. Turns out a thing that’s designed to be an Energy Storage System just won’t quite work right without the Storage part. So from February 15 through to March 14 when the replacement battery arrived we were running on grid power only with no solar generation.
Happily, we didn’t have any grid power outages during the three months we were without a battery. Our first outage of any note wasn’t until March 23, slightly over a week after the replacement battery was installed. There were a few brief grid outages at other times later – a couple of minutes one day in April, some glitches on a couple of days in August, but the really bad one was on the 1st of September when the entire state got absolutely hammered by extremely severe weather. Given there was a severe weather warning from the BOM I’d made sure the battery was full in advance, which was good because our grid power went out while we were asleep at about 00:37 and didn’t come back on until 17:28. We woke up some time after the grid went down with the battery at 86% state of charge and went around the house to turn off everything we could except for the fridge and freezer, which got our load down to something like 250W. By morning, the battery still had about 70% in it and even though the weather was bad we still had some solar generation, so between battery and solar we got through just fine until the grid came back on in the afternoon. We were lucky though – some folks in the north of the state were without power for two weeks due to this event. I later received a cheque for $160 from TasNetworks in compensation for our outage. I dread to think what the entire event cost everyone, and I don’t just mean in terms of money.
Speaking of money though, the other set of numbers we need to look at are our power bills. Here’s everything from the last seven years:
Year | From Grid | Total Bill | Grid $/kWh | Loads | Loads $/kWh |
---|---|---|---|---|---|
2016-2017 | 17,026 | $4,485.45 | $0.26 | 17,026 | $0.26 |
2018-2019 | 9,031 | $2,278.33 | $0.25 | 11,827 | $0.19 |
2019-2020 | 9,324 | $2,384.79 | $0.26 | 12,255 | $0.19 |
2020-2021 | 7,582 | $1,921.77 | $0.25 | 10,358 | $0.19 |
2021-2022 | 8,531 | $1,731.40 | $0.20 | 10,849 | $0.16 |
2022-2023 | 8,936 | $1,989.12 | $0.22 | 11,534 | $0.17 |
2023-2024 | 8,878 | $2,108.77 | $0.24 | 11,162 | $0.19 |
As explained in the last post, I’m deliberately smooshing a bunch of numbers together (peak power charge, off peak power charge, feed in tariff, daily supply charge) to arrive at an effective cost/kWh of grid power, then bearing in mind our loads are partially powered from solar I can also determine what it costs us to run all our loads. 2016-2017 is before we got the solar panels and the new hot water service, so you can see the immediate savings there, then further savings after the battery went in in 2021. This year our cost/kWh (and thus our power bill) is higher than last year for two reasons:
I should probably also mention that we actually spent $1,778.94 on power this year, not $2,108.77. That’s thanks largely to a $250 ‘Supercharged’ Renewable Energy Dividend payment from the Tasmanian Government and $75 from the Federal Government’s Energy Bill Relief Fund. The remaining $4.83 in savings is from Aurora Energy’s ridiculous Power Hours events. I say “ridiculous” because they periodically give you a bunch of time slots to choose from, and once you’ve locked one of them in, any power you use at that time is free. To my mind this incentivises additional power usage, when we should really be doing the exact opposite and trying to use less power overall. So I haven’t tried to use more energy, I’ve just tried to lock in times that were in the evening when we were going to be using more grid power than during the day, to scrape in what savings I could.
One other weird thing happened this year with the new battery. ZCells need to go into a maintenance cycle every three days. This happens automatically, but is something I habitually keep an eye on. On September 11 I noticed that we had been four days without running maintenance. Upon investigation of the battery logs I discovered that the Time Since Strip counter and Strip Pump Run Timer were running at half speed, i.e. every minute they were each only advancing by approximately 30 seconds:
I manually put the battery into maintenance mode and Simon was able to remotely reset the CPU by writing a magic number to a modbus register, which got the counters back to the correct speed. I have no idea whether this is a software bug or a hardware issue, but I’ll continue to keep an eye on it. The difficulty is going to be dealing with the problem should it recur, given the demise of Redflow. Simon certainly won’t be able to log in remotely now that the Redflow cloud is down, although there is a manual reset procedure: if you remove the case from the battery, there is apparently a small Phillips-head screw on the panel with the indicator lights. Give the screw a twist and the lights go out; untwist and the lights come back on and the unit is reset. I have yet to actually try this.
The big question now is: where do we go from here? The Victron gear – the Cerbo GX console, the Multi-Plus II inverter/chargers, the MPPT – all work well with multiple different types of battery, so our basic infrastructure is future-proof. My immediate hope is to keep our ZCell running for as long as possible, and if I’m able to get a second one as a result of the Redflow liquidation I will, simply so that we can ensure the greatest possible longevity of the system before we need to migrate to something else. We will also have to somehow figure out how to obtain carbon socks, which need annual replacement to maintain the electrolyte pH. If we had to migrate to something else in a hurry, Pylontech might be a good choice, but the problem is that we really don’t want a rack of lithium batteries in the crawl space under our dining room because of the fire risk. There are other types of flow battery out there (vanadium comes to mind) but everything I’ve looked at on that front is either way too big and expensive for residential usage, or is “coming soon now please invest in us it’s going to be awesome”.
I have no idea what year four will look like, but I expect it to be interesting.
As per previous conferences, eResearch Australasia 2024 in Melbourne had several hundred attendees from the scientific research community, research computing developers and operators, administrators and managers, and various vendors. The program gives a very good indication of the level of this conference and the reason that it has been such a success over the last fifteen years and more.
For the first time in the post-COVID environment, the conference was a face-to-face event, and it was a very welcome opportunity to network with old colleagues as well as to discover new people and new developments (and to hear some salacious gossip about changes in the eResearch landscape). Presentations on artificial intelligence and machine learning (away from the current hype over Large Language Models) were prevalent, especially with regard to astounding developments in bioinformatics; a prominent example is AlphaFold, the AI program that predicts protein structures, whose developers won the Nobel Prize in Chemistry.
My own presentation was on the development of the Spartan general-purpose high performance computing system at the University of Melbourne, which started as a small-scale and very experimental system operating on a shoestring budget and grew into one of the top systems in the world. I'm pleased to say that the talk seemed to be very well received by the crowded room (my head count was at least 70), with a number of people asking for the slide deck afterwards; based on the comments from others, Spartan is extremely well regarded within the Australian eResearch community for these successes and for our extensive training program.
I believe buying a Kindle in 2024 is a bad idea, even if you only intend to use it for reading DRM-free locally stored ebooks. Basic functions such as organizing books into folders/collections are locked until the device is registered, and with each system update the interface has become slower and more bloated.
Initially I purchased this device because the Amazon book store isn’t too bad and it’s one of the easier ways to buy Japanese books outside of Japan, but with all the anti-features Amazon adds in, I don’t think it’s still worth using.
Using a recent exploit and this downgrader thread on the mobileread forum, I was able to downgrade my Paperwhite to the older 5.11.2 firmware, which has a simpler interface while being much more responsive. If you already have a Kindle, perhaps this is worth doing.
It’s possible to install alternative UIs and custom OSes on many Kindle models, but they generally run slower than the default launcher. On the open hardware side, Pine64 is making an e-ink tablet called the PineNote; with a Rockchip RK3566 and 4GB of RAM it should be fast enough to handle most documents/ebooks, but currently there is no usable Linux distribution for it.
The easiest way to change/set the PIN on a FIDO2 token seems to be with Chromium/Chrome: navigate to chrome://settings/securityKeys (or click Settings -> Privacy and Security -> Security -> Manage security keys), then choose Create a PIN. If you don’t have a PIN set already, a new PIN will be created; otherwise you will be asked to change the existing PIN.
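If you’d rather stay on the command line, libfido2’s fido2-token utility can do the same job. A quick sketch (the hidraw path is an assumption – check the output of the list command on your machine):

$ fido2-token -L                # list attached tokens and their device paths
$ fido2-token -S /dev/hidraw4   # set a PIN where none exists yet
$ fido2-token -C /dev/hidraw4   # change an existing PIN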
Hello amazing teachers! Are you looking for a fun and engaging way to bring history to life for your students? Meet Sabaton, a Swedish heavy metal band known for their powerful songs about historical events. While heavy metal might not be the first thing that comes to mind for a primary school setting, Sabaton’s music […]

The post Rocking History: How Sabaton’s Songs Can Transform Your Classroom first appeared on OpenSTEM Pty Ltd.

I’ve been daily driving the PinePhone Pro with swmo for some time now. It’s not perfect, but I still find it to be one of the most enjoyable devices I’ve used – probably only behind the BlackBerry Q30/Passport, which also has a decent keyboard and runs an unfortunately locked-down version of QNX. For me it’s less like a phone and more like a portable terminal for times when using a full size laptop is uncomfortable or impractical, and with the keyboard it’s possible to write lengthy articles on the go.
This isn’t the only portable Linux terminal I’ve owned; before this I used a Nokia N900, which to this day is still maintained by the Maemo Leste team. However, the shutdown of the 3G network where I live made it significantly less usable as a phone, and since it doesn’t have a proper USB port I can’t easily use it as a serial console.
The overall experience on the PPP as of 2024 isn’t as polished as that of the BlackBerry Passport, and ad hoc hacks are often required to get the system going; however, as the ecosystem progresses, the experience will also improve with new revisions of hardware and better software.
I use sxmo and swmo interchangeably in this post; they refer to the same framework running under Xorg and Wayland, and the experience is pretty much the same.
Sxmo is packaged for Debian:
sudo apt install sway sxmo-utils
Allow access to LED/brightness:
sudo usermod -aG feedbackd user
The default scaling of sxmo doesn’t allow many desktop applications to display their windows properly, especially when the application was written under the assumption of a larger screen. To set the scaling to something more reasonable, add the following line to ~/.config/sxmo/sway:
exec wlr-randr --output DSI-1 --scale 1.3
When using swmo, environment initialization is mostly done in ~/.config/sxmo/sway; ~/.config/sxmo/xinit is not used.
Scaling for Firefox needs to be adjusted separately, by first enabling the compact UI and then setting Settings -> Default zoom to your liking.
I used lightdm as my session manager. To launch lightdm in landscape mode, change the display-setup-script line in the [Seat:*] section of /etc/lightdm/lightdm.conf to:
display-setup-script=sh -c 'xrandr -o right; exit 0'
To rotate swmo to landscape mode on start:
$ echo exec sxmo_rotate.sh >> ~/.config/sxmo/sway
To rotate the Linux framebuffer, add fbcon=rotate:1 to the U_BOOT_PARAMETERS line in /usr/share/u-boot-menu/conf.d/mobian.conf and run u-boot-update to apply.
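After the edit, the relevant line should look something like this (a sketch only – the other parameters shown are placeholders, keep whatever your mobian.conf already has):

# /usr/share/u-boot-menu/conf.d/mobian.conf
U_BOOT_PARAMETERS="console=tty0 quiet splash fbcon=rotate:1"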
I also removed quiet splash from U_BOOT_PARAMETERS to disable the Plymouth animation, as it isn’t very useful in landscape mode.
Swmo doesn’t come with a secure screen locker, but swaylock works fine and can be bound to a key combination in sway’s config file. To save some battery life, systemctl suspend can be triggered after swaylock; to bind that to Meta+L:
# .config/sxmo/sway
bindsym $mod+l exec 'swaylock -f -c 000000 && systemctl suspend'
In suspend mode the battery discharges at a rate of about 1% per hour, i.e. roughly four days of standby; I consider this to be more than acceptable.
To unlock from a shell, just kill swaylock.
Before you can suspend the system as a non-root user, the following polkit rule needs to be written to /etc/polkit-1/rules.d/85-suspend.rules:
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.login1.suspend" &&
subject.isInGroup("users")) {
return polkit.Result.YES;
}
});
It would be better if there were a universal interactive user group which automatically granted such permissions to the desktop/mobile user.
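To check that the rule is actually in effect, a quick test from the unprivileged user’s shell (a sketch; this assumes systemd-logind is in use and your user is in the users group) is:

$ busctl call org.freedesktop.login1 /org/freedesktop/login1 \
    org.freedesktop.login1.Manager CanSuspend
s "yes"

If logind answers “yes” rather than “challenge”, a plain systemctl suspend will work without an authentication prompt.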
The default keymap for the PinePhone keyboard is missing a few useful keys, namely F11/F12 and PgUp/PgDn. To create those keys I used evremap(1) to make a custom keymap. Unfortunately the Fn key cannot easily be mapped as a layer switcher, so I opted to remap AltGr and Esc as my primary modifiers; a sketch of the sort of mapping I mean follows.
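I won’t reproduce my exact keymap, but a minimal evremap config along the following lines illustrates the idea; the device name and the chosen chords here are assumptions (use evremap list-devices to find the real device name):

# /etc/evremap.toml – hypothetical sketch, not the actual keymap
device_name = "PinePhone Keyboard"   # verify with `evremap list-devices`

# AltGr + arrows become PgUp/PgDn
[[remap]]
input = ["KEY_RIGHTALT", "KEY_UP"]
output = ["KEY_PAGEUP"]

[[remap]]
input = ["KEY_RIGHTALT", "KEY_DOWN"]
output = ["KEY_PAGEDOWN"]

# Esc + 1/2 become F11/F12
[[remap]]
input = ["KEY_ESC", "KEY_1"]
output = ["KEY_F11"]

[[remap]]
input = ["KEY_ESC", "KEY_2"]
output = ["KEY_F12"]

evremap then runs as a daemon (e.g. evremap remap /etc/evremap.toml from a systemd unit) and emits the remapped events via uinput.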
I’m working on a Debian package for evremap and it will be made available for Debian/Mobian soon.
Incus is a container/VM manager for Linux; it’s available for Debian from bookworm-backports and is a fork of LXD by the original maintainers behind LXD. It works well for creating isolated and unprivileged containers. I have multiple incus containers on the PinePhone Pro for Debian packaging, and it’s a better experience than manually creating and managing chroots. In case there is a need for running another container inside an unprivileged incus container, it’s possible to configure incus to intercept certain safe system calls and forward them to the host, removing the need for a privileged container.
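As a concrete sketch (the container name and image alias are my own choices for illustration), creating such a container and enabling nesting plus a couple of the safe syscall interceptions looks like this:

$ incus launch images:debian/12 pkgbuild
$ incus config set pkgbuild security.nesting=true
$ incus config set pkgbuild security.syscalls.intercept.mknod=true
$ incus config set pkgbuild security.syscalls.intercept.setxattr=true
$ incus exec pkgbuild -- bash

The security.syscalls.intercept.* keys are what allow an unprivileged container to perform the few privileged operations (device nodes, security xattrs) that tools like debootstrap expect.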
Sway is decently usable in convergence mode, in which the phone is connected to a dock that outputs to an external display, and keyboard and mouse are used as primary controls instead of the touchscreen.

This isn’t surprising, since sway has always had great support for multiple monitors. However, another often overlooked convergence mode is with waypipe: another Linux machine (e.g. a laptop) can be used to interact with applications running on the phone, while the phone is kept charged by the laptop. This is particularly useful for debugging phone applications or for accessing resources on the phone (e.g. sending and receiving SMS). One thing missing in this setup is that graphical applications cannot roam between the phone and the external system (e.g. moving a running application from one machine to another); Xpra does this for Xorg but doesn’t work with Wayland.
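Using waypipe is pleasantly simple; something like the following (the hostname, user and choice of application are assumptions) runs the foot terminal on the phone while displaying it on the laptop:

$ waypipe ssh mobian@pinephone foot

waypipe wraps ssh and proxies the Wayland protocol over the connection, so the application keeps running on the phone and only its surfaces travel to the laptop’s compositor.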
Due to the simplicity of the swmo environment it’s not too difficult to get the system running with SELinux in enforcing mode, and I encourage everyone reading this to try it. If running Debian/Mobian, a good starting point is the SELinux/Setup page on the Debian wiki.
Note: selinux-activate won’t add the required security=selinux kernel option to u-boot (it only deals with GRUB), so you have to manually add it to the U_BOOT_PARAMETERS line in /usr/share/u-boot-menu/conf.d/mobian.conf and run u-boot-update after selinux-activate. The file labeling process can easily take 10 minutes, and its progress won’t be displayed on the framebuffer (it’s only visible via the serial console).
SELinux along with the reference policy aren’t enough for building a reasonably secure interactive system, but let’s leave that for a future post.
The April 2024 meeting was the first meeting after Everything Open 2024, and the discussions were primarily around talks and lectures people found interesting during the conference, including the n3n VPN and the challenges of running a personal email server. At the start of the meeting Yifei Zhan demonstrated a development build of Maemo Leste, an active Maemo-like operating system, running on a PinePhone Pro.
Other topics discussed included modern network protocol ossification, SIP, and a possible free and open source VoLTE implementation.
The PinePhone keyboard contains a battery, which is used to charge the PinePhone when the keyboard is attached. Although there are existing warnings on the Pine64 wiki which sum up to ‘don’t charge or connect anything to your PinePhone’s type C interface when the keyboard is attached’, my two PinePhone keyboards still managed to fry themselves, with one releasing stinky magic smoke and the other melting the plastic around the pogo pins on the PinePhone backplate.
This all happened while the PinePhone’s type C interface was physically blocked by the attached keyboard. In the first case, the keyboard’s controller PCB blew up when I tried to charge it; in the latter case the keyboard somehow overheated and melted the plastic near the pogo interface on the phone side.
Pine64 provided me with a free replacement keyboard after multiple emails back and forth, but according to Pine64 there will be no more free replacements for me in the future, and there is no guarantee that this will not happen to my replacement keyboard.
The cost for replacing all the fried parts with spare parts from the Pine64 store is about 40 USD (pogo pins + backplate + keyboard PCB), and considering this problem is likely to happen again, I don’t think purchasing those parts is a wise decision.
Both the melting plastic and the magic smoke originate from the fact that charge is constantly shuffled around when the keyboard is attached to the PinePhone, and since the keyboard can function independently of its battery, we can disconnect and remove the battery from the keyboard case to make sure it will not blow up again. After this procedure the keyboard will keep functioning, although the keyboard-attached PinePhone might flip over much more easily due to the lightened keyboard base. Be aware that the keyboard isn’t designed to be taken apart, and doing so will likely result in scratches on the case. As for me, I’d much rather have a keyboard case without a builtin battery than something that can overheat or blow up.
To prevent the kernel module from flooding dmesg and reporting a bogus battery level after the battery removal, blacklist the ip5xxx_power module:
# echo blacklist ip5xxx_power > /etc/modprobe.d/blacklist.conf
I didn’t take as many notes on days 2 and 3, so I merged them into a single article.
Adversaries:
LLMs can help eliminate common language mistakes and perform better social engineering
Many adversaries are trying to integrate LLMs into their workflow, with varying results
Time frame from initial foothold to lateral movement is getting shorter, due to better tooling?
porting syzkaller to run on Power
general fuzzing engines
Unsupervised: no human input required
Coverage-guided: fuzz and measure which codepath is fuzzed
Things to fuzz: syscalls/drivers/fs/ebpf/kvm/network stacks…
Simple kernel fuzzers existed as early as 1991
Hosted version on Google Cloud: https://syzkaller.appspot.com/upstream
Sanitisers: print errors on memory corruption/UB/concurrency problems etc
KMSAN isn’t on Power yet
Hardware:
New architecture enablement
Stack traces are printed differently across archs
instruction fuzzing
QEMU/KVM on bare metal Open Power systems
Bugs found:
PowerVM
PowerVC
FileSender
radio::console
AgOpenGPS
BPF made creating new schedulers simpler
Scheduling problem is now more complicated due to increasing complexity of workload/CPU design
BPF provides reliable access to critical data structures inside the kernel
Forked from n2n to avoid CLA
Peer-to-peer VPN at network layer, acting like a distributed virtual switch
NAT piercing
Written in C, should have good cross-platform support (more testing wanted on *BSD)
TunTap interface support is expected from the OS side, shouldn’t be a problem for common Unix-likes
Packaging and distro submission are still WIP
Future roadmap
Useful for
Simpler than WireGuard/OpenVPN but offers OK security (not for security-critical apps?)
Easier to configure, uses INI-style config files
seL4 is bad at usability, Lions OS intends to solve this
Still in early stage of development
Composable components for building a custom OS for a single task
Focus on simplicity
0.1.0 just released, still in its early stage
high performance
Only for Arm64/aarch64 now, riscv64 in future?
A reference system called Kitty exists
Here you can find a list of links related to my topic which I find useful or just interesting.
Info page https://2024.everythingopen.au/schedule/presentation/24/
Slides EO2024.Slides.exploring.mobile.linux.security.odp
Recording XXX to be processed
VerityMobile GitHub :: ZhanYF/veritymobile
Access Measurements from Linux Userland
Sign in to GitLab with fTPM-backed FIDO token
https://optee.readthedocs.io/en/latest/general/about.html
https://optee.readthedocs.io/en/latest/architecture/secure_storage.html
https://optee.readthedocs.io/en/latest/architecture/globalplatform_api.html#globalplatform-api
https://optee.readthedocs.io/en/latest/general/presentations.html
https://source.android.com/docs/security/features/trusty
https://support.apple.com/en-sg/guide/security/sec59b0b31ff/web
This also covers Measured Boot and Secure Boot
https://next.redhat.com/2021/05/13/what-can-you-do-with-a-tpm/
https://github.com/psanford/tpm-fido
https://www.ledger.com/blog/ssh-with-tpm
https://bootlin.com/pub/conferences/2021/lee/perrot-secure-boot/perrot-secure-boot.pdf
https://github.com/kkamagui/shadow-box-for-arm https://github.com/kkamagui/manifest
https://github.com/kkamagui/shadow-box-for-x86
https://blog.3mdeb.com/2021/2021-12-03-rockchip-secure-boot/
https://sergioprado.blog/rpmb-a-secret-place-inside-the-emmc/
https://github.com/firecracker-microvm/firecracker
https://github.com/firecracker-microvm/firectl
I got a call yesterday from a guy who had looked at the Experia Bruce has at Zen and was considering buying one. I talked with him for about three quarters of an hour, going through my experience, and to sum it up simply I can just say: this is a fantastic motorbike.
Firstly, it handles exactly like a standard motorbike - it handles almost exactly like my previous Triumph Tiger Sport 1050. But it is so much easier to ride. You twist the throttle and you go. You wind it back and you slow down. If you want to, the bike will happily do nought to 100km/hr in under four seconds. But it will also happily and smoothly glide along in traffic. It says "you name the speed, I'm happy to go". It's not temperamental or impatient; it has no weird points where the throttle suddenly gets an extra boost or where the engine braking suddenly drops off. It is simple to ride.
As an aside, this makes it perfect for lane filtering. On my previous bike this would always be tinged with a frisson of danger - I had to rev it and ease the clutch in with a fair bit of power so I didn't accidentally stall it, but that always took some time. Now, I simply twist the throttle and I am ahead of the traffic - no danger of stalling, no delay in the clutch gripping, just power. It is much safer in that scenario.
I haven't done a lot of touring yet, but I've ridden up to Gosford once and up to Sydney several times. This is where Energica really is ahead of pretty much every other electric motorbike on the market now - they do DC fast charging. And by 'fast charger' here I mean anything from 50kW up; the Energica can only take 25kW maximum anyway :-) But this basically means I have to structure any stops we do around where I can charge up - no more stopping in at the local pub or a cafe on a whim for morning tea. That has to either offer DC fast charging or I'm moving on - the 3kW onboard AC charger means a 22kW AC charger is useless to me. In the hour or two we might stop for lunch I'd only get another 60 - 80 kilometres more range on AC; on DC I would be done in less than an hour.
But OTOH my experience so far is that structuring those breaks around where I can charge up is relatively easy. Most riders will furiously nod when I say that I can't sit in the seat for more than two hours before I really need to stretch the legs and massage the bum :-) So if that break is at a DC charger, no problems. I can stop at Sutton Forest or Pheasant's Nest or even Campbelltown and, in the time it takes for me to go to the toilet and have a bit of a coffee and snack break, the bike is basically charged and ready to go again.
The lesson I've learned, though, is to always give it that bit longer and charge as much as I can up to 80%. It's tempting sometimes when I'm standing around in a car park watching the bike charge to move on and charge up a bit more at the next stop. The problem is that, with chargers still relatively rare and there often only being one or two at each site, a single charger not working can mean another fifty or even a hundred kilometres more riding. That's a quarter to half my range, so I cannot afford to risk that. Charge up and take a good book (and a spare set of headphones).
In the future, of course, when there's a bank of a dozen DC fast chargers in every town, this won't be a problem. Charger anxiety only exists because they are still relatively rare. When charging is easy to find and always available, and there are electric forecourts like the UK is starting to get, charging stops will be easy and will fit in with my riding.
Anyway.
Other advantages of the Experia:
You can get it with a complete set of Givi MonoKey top box and panniers. This means you can buy your own much nicer and more streamlined top box and it fits right on.
Charging at home takes about six hours, so it's easy to do overnight. The Experia comes with an EVSE so you don't need any special charger at home. And really, since the onboard AC charger can only accept 3kW, there's hardly any point in spending much money on a home charger for the Experia.
Minor niggles:
The seat is a bit hard. I'm considering getting the EONE Canyon saddle, although I also just need to try to work out how to get underneath the seat to see if I can fit my existing sheepskin seat cover.
There are a few occasional glitches in the display in certain rare situations. I've mentioned them to Energica, hopefully they'll be addressed.
Overall rating:
5 stars. Already recommending.
Way back in the distant past, when the Apple ][ and the Commodore 64 were king, you could read the manual for a microprocessor and see how many CPU cycles each instruction took, and then do the math as to how long a sequence of instructions would take to execute. This cycle counting was used pretty effectively to do really neat things such as how you’d get anything on the screen from an Atari 2600. Modern CPUs are… complex. They can do several things at once, in a different order than what you wrote them in, and have an interesting arrangement of shared resources to allocate.
So, unlike with simpler hardware, if you have a sequence of instructions for a modern processor, it’s going to be pretty hard to work out how many cycles that could take by hand, and it’s going to differ for each micro-architecture available for the instruction set.
When designing a microprocessor, simulating what a series of existing instructions will take to execute compared to the previous generation of microprocessor is pretty important. The aim should be for it to take less time or energy or some other metric that means your new processor is better than the old one. It can be okay if processor generation to generation some sequence of instructions take more cycles, if your cycles are more frequent, or power efficient, or other positive metric you’re designing for.
Programmers may want this simulation too, as some code paths get rather performance critical for certain applications. Open Source tools for this aren’t as prolific as I’d like, but there is llvm-mca, which I (relatively) recently learned about.
llvm-mca is a performance analysis tool that uses information available in LLVM (e.g. scheduling models) to statically measure the performance of machine code in a specific CPU.
the llvm-mca docs
So, when looking at an issue in the IPv6 address and connection hashing code in Linux last year, and being quite conscious of modern systems dealing with a LOT of network packets, and thus this can be quite CPU usage sensitive, I wanted to make sure that my suggested changes weren’t going to have a large impact on performance – across the variety of CPU generations in use.
There are two ways to do this. One is to run everything: throw a lot of packets at something and measure it. That can be a long dev cycle, and sometimes just annoying to get going. The other – simulating the small section of code in question and doing some analysis of it – can be a lot quicker, and worth doing before going through the trouble of spinning up multiple test environments to prove it in the real world.
So, enter llvm-mca and the ability to try and quickly evaluate possible changes before testing them. Seeing as the code in question was nicely self contained, I could easily get gcc (or llvm) to spit out assembler for it separately from the kernel tree. My preference was for gcc, as that’s what most distros end up compiling Linux with, including the Linux distribution that’s my day job (Amazon Linux).
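The basic workflow is to compile the isolated code to assembly and hand that to llvm-mca along with a target micro-architecture to model. A minimal sketch (the file name and CPU choice are illustrative, not the actual experiment):

# hash.c stands in for a self-contained copy of the function under test
gcc -O2 -S -o hash.s hash.c
llvm-mca -mcpu=skylake -timeline hash.s

Re-running with different -mcpu values is what gives the picture across CPU generations; the summary reports total cycles, uOps and resource pressure for the instruction sequence.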
In order to share the results of the experiments as part of the discussion on where the code changes should end up, I published the code and results in a github project as things got way too large to throw on a mailing list post and retain sanity.
I used a container so that I could easily run it in a repeatable, isolated environment, as well as have others reproduce my results if needed. Different compiler versions and optimization levels will very much produce different sequences of instructions, and thus possibly quite different results. This delta in compiler optimization levels is partially why the numbers don’t quite match on some of the mailing list messages, although the delta between the various options was all the same. The other reason is that I learned how to better use llvm-mca to isolate the exact sequence of instructions I cared about (and to exclude things like the guesswork that llvm-mca has to do for branches).
One trick in particular: very much avoid branches, as analysing them is near complete guesswork since there’s no simulation of the branch predictor (at least in the version I was using).
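llvm-mca has direct support for this kind of isolation: you can wrap just the instructions you care about in analysis region markers, which are ordinary comments in the assembly, and everything outside the region is ignored. An illustrative fragment (the region name and instructions are made up for the example):

# LLVM-MCA-BEGIN hash_mix
	addq	%rdi, %rax
	rolq	$31, %rax
# LLVM-MCA-END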
The big thing I wanted to prove was whether doing the extra work would have a small or large impact on the number of elapsed cycles. The answer was that doing a bunch of extra “work” was essentially near free. The CPU core could execute enough things in parallel that the incremental cost of doing extra work just… wasn’t relevant.
This helped getting a patch deployed without impact to performance, as well as get a patch upstream, fixing an issue that was partially fixed 10 years prior, and had existed since day 1 of the Linux IPv6 code.
Naturally, this wasn’t a solo effort, and that’s one of the joys of working with a bunch of smart people – both at the same company I work for, and in the broader open source community. It’s always humbling when you’re looking at code outside your usual area of expertise that was written (and then modified) by Really Smart People, and you’re then trying to fix a problem in it, while trying to learn all the implications of changing that bit of code.
Anyway, check out llvm-mca for your next adventure into premature optimization – as if you’re going to get started with evil, you may as well start with what’s at the root of all of it.
At this rate, there is no real blogging here, regardless of the lofty plans to start writing more. Stats update from Hello 2023:
219 days on the road (less than 2022! -37, over a month, shocking), 376,961km travelled, 44 cities, 17 countries.
Can’t say why it was less, because it felt like I spent a long time away…
In Kuala Lumpur, I purchased a flat (just in time to see Malaysia go down), and I swapped cars (had a good 15 year run). I co-founded a company, and I think there is a lot more to come.
2024 is shaping up to be exciting and busy: a year where one must just do.
Good read: 27 Years Ago, Steve Jobs Said the Best Employees Focus on Content, Not Process. Research Shows He Was Right. In simple terms: just do.
It’s time for a review of the second year of operation of our Redflow ZCell battery and Victron Energy inverter/charger system. To understand what follows it will help to read the earlier posts in this series:
In case ~12,000 words of background reading seem daunting, I’ll try to summarise the most important details here:
With the background out of the way we can get on to the fun stuff, including a roof replacement, an unexpected fault after a power outage followed by some mains switchboard rewiring, a small electrolyte leak, further hackery to keep a bit of charge in the battery most of the time, and finally some numbers.
The big job we did this year was replacing our concrete tile roof with colorbond steel. When we bought the house – which is in a rural area and thus a bushfire risk – we thought: “concrete brick exterior, concrete tile roof – sweet, that’s not flammable”. Unfortunately it turns out that while a tile roof works just fine to keep water out, it won’t keep embers out. There’s a gadzillion little gaps where the tiles overlap each other, and in an ember attack, embers will get up in there and ignite the fantastic amount of dust and other stuff that’s accumulated inside the ceiling over several decades, and then your house will burn down. This could be avoided by installing roof blanket insulation under the tiles, but in order to do that you have to first remove all the tiles and put them down somewhere without breaking them, then later put them all back on again. It’s a lot of work. Alternately, you can just rip them all off and replace the whole lot with nice new steel, with roof blanket insulation underneath.
Of course, you need good weather to replace a roof, and you need to take your solar panels down while it’s happening. This meant we had twenty-two solar panels stacked on our back porch for three weeks of prime PV time from February 17 – March 9, 2023, which I suspect lost us a good 500kWh of power generation. Also, the roof job meant we didn’t have the budget to get a second ZCell this year – for the cost of the roof replacement, we could have had three new ZCells installed – but as my wife rightly pointed out, all the battery storage in the world won’t do you any good if your house burns down.
We had at least five grid power outages during the year. A few were brief, the grid being down for only a couple of minutes, but there were two longer ones in September (one for 30 minutes, one for about an hour and a half). We got through the long ones just fine with either the sun high in the sky, or charge in the battery, or both. One of the earlier short outages though uncovered a problem. On the morning of May 30, my wife woke up to discover there was no power, and thus no running water. Not a good thing to wake up to. This happened while I was away, because of course something like this would happen while I was away. It turns out there had been a grid outage at about 02:10, then the grid power had come back, but our system had not. The Multis ended up in some sort of fault state and were refusing to power our loads. On the console was an alarm message: “#8 – Ground relay test failed”.
Note the times in the console messages are about 08:00. I confirmed via the logs from the VRM portal that the grid really did go out some time between 02:10 and 02:15, but after that there was nothing in the logs until 07:59, which is when my wife used the manual changeover switch to shift all our loads back to direct grid power, bypassing the Victron kit. That brought our internet connection back, along with the running water. I contacted Murray Roberts from Lifestyle Electrical and Simon Hackett for assistance, Murray logged in remotely and reset the Multis, my wife flicked the changeover switch back and everything was fine. But the question remained, what had gone wrong?
The ground relay in the Multis is there to connect neutral to ground when the grid fails. Neutral and ground are already physically connected on the grid (AC input) side of the Multis in the main switchboard, but when the grid power goes out, the Multis disconnect their inputs, which means the loads on the AC output side no longer have that fixed connection from neutral to ground. The ground relay activates in this case to provide that connection, which is necessary for correct operation of the safety switches on the power circuits in the house.
The ground relay is tested automatically by the Multis. Looking up Error 8 – Ground relay test failed on Victron’s web site indicated that either the ground relay really was faulty, or possibly there was a wiring fault or an issue with one of the loads in our house. So I did some testing. First, with the battery at 50% State of Charge (SoC), I did the following:
This demonstrated that the ground relay and the Multis in general were fine. Had there been a problem at that level we would have seen an error when I restored mains power. I then reconnected the loads and repeated steps 2-5 above. Again, there was no error which indicated the problem wasn’t due to a wiring defect or short in any of the power or lighting circuits. I also re-tested with the heater on and the water pump running just in case there may have been an issue specifically with either of those devices. Again, there was no error.
The only difference between my test above and the power outage in the middle of the night was that in the middle of the night there was no charge in the battery (it was right after a maintenance cycle) and no power from the sun. So in the evening I turned off the DC isolators for the PV and deactivated my overnight scheduled grid charge so there’d be no backup power of any form in the morning. Then I repeated the test:
The underlying detailed error message was “PE2 Closed”, which meant the Multis were seeing the relay as closed when it was meant to be open. Our best guess is that we’d somehow hit an edge case in the Multis’ ground relay test, where they maybe tried to switch to inverting mode and activated the ground relay, then just died in that state because there was no backup power, and got confused when mains power returned. I got things running again by simply power cycling the Multis.
So it kinda wasn’t a big deal, except that if the grid went out briefly with no backup power, our loads would remain without power until one of us manually reset the system. This was arguably worse than not having the system at all, especially if it happened in the middle of the night, or when we were away from home. The fact that we didn’t hit this problem in the first year of operation is a testament to how unlikely this event is, but the fact that it could happen at all remained a problem.
One fix would have been to get a second battery, because then we’d be able to keep at least a tiny bit of backup power at all times regardless of maintenance cycles, but we’re not there yet. Happily, Simon found another fix, which was to physically connect the neutral together between the AC input and AC output sides of the Multis, then reconfigure them to use the grid code “AS4777.2:2015 AC Neutral Path externally joined”. That physical link means the load (output) side picks up the ground connection from the grid (input) side in the switchboard, and changing the grid code setting in the Multis disables the ground relay and thus the test, which isn’t necessary anymore.
Murray needed to come out anyway to replace the carbon sock in the ZCell (a small item of annual maintenance) and was able to do that little bit of rewiring and configuration at the same time. I repeated my tests both with and without backup power and everything worked perfectly, i.e. the system came back immediately by itself after a grid outage with no backup power, and of course switched over to inverting just fine when there was backup power available.
This leads to the next little bit of fun. The carbon sock is a thing that sits inside the zinc electrolyte tank and helps to keep the electrolyte pH in the correct operating range. Unfortunately I didn’t manage to get a photo of one, but they look a bit like door snakes. Replacing the carbon sock means opening the case, popping one side of the Gas Handling Unit (GHU) off the tank, pulling out the old sock and putting in a new one. Here’s a picture of the ZCell with the back of the case off, indicating where the carbon sock goes:
When Murray popped the GHU off, he noticed that one of the larger pipes on one side had perished slightly. Thankfully he happened to have a spare GHU with him, so was able to replace the assembly immediately. All was well until later that afternoon, when the battery indicated hardware failure due to “Leak 1 Trip” and shut itself down out of an abundance of caution. Upon further investigation the next day, Murray and I discovered there was a tiny split in one of the little hoses going into the GHU which was letting the electrolyte drip out.
This small electrolyte leak was caught lower down in the battery, where the leak sensor is. Murray sucked the leaked electrolyte out of there, re-terminated that little hose and we were back in business. I was happy to learn that Redflow had obviously thought about the possibility of this type of failure and handled it. As I said to Murray at the time, we’d rather have a battery that leaks then turns itself off than a battery that catches fire!
Aside from those two interesting events, the rest of the year of operation was largely quite boring, which is exactly what one wants from a power system. As before I kept a small overnight scheduled charge and a larger late afternoon scheduled charge active on weekdays to ensure there was some power in the battery to use at peak (i.e. expensive) grid times. In spring and summer the afternoon charge is largely superfluous because the battery has usually been well filled up from the solar by then anyway, but there’s no harm in leaving it turned on. The one hack I did do during the year was to figure out a way to keep a small (I went with 15%) MinSoC in the battery at all times except for maintenance cycle evenings, and the morning after. This is more than enough to smooth out minor grid outages of a few minutes, and given our general load levels should be enough to run the house for more than an hour overnight if necessary, provided the hot water system and heating don’t decide to come on at the same time.
My earlier experiment along these lines involved a script that ran on the Cerbo twice a day to adjust scheduled charge settings in order to keep the battery at 100% SoC at all times except for peak electricity hours and maintenance cycle evenings. As mentioned in TANSTAAFL I ran that for all of July, August and most of September 2022. It worked fine, but ultimately I decided it was largely a waste of energy and money, especially when run during the winter months when there’s not much sun and you end up doing a lot of grid charging. This is a horribly inefficient way of getting power into the battery (AC to DC) versus charging the battery direct from solar PV. We did still use those scripts in the second year, but rather more judiciously, i.e. we kept an eye on the BOM forecasts as we always do, then occasionally activated the 100% charge when we knew severe weather and/or thunderstorms were on the way, those being the things most likely to cause extended grid outages. I also manually triggered maintenance on the battery earlier than strictly necessary several times when we expected severe weather in the coming days, to avoid having a maintenance cycle (and thus empty battery) coincide with potential outages. On most of those occasions this effort proved to be unnecessary. Bearing all that in mind, my general advice to anyone else with a single ZCell system (aside from maybe adding scheduled charges to time-shift expensive peak electricity) is to just leave it alone and let it do its thing. You’ll use most of your locally generated electricity onsite, you’ll save some money on your power bills, and you’ll avoid some, but not all, grid outages. This is a pretty good position to be in.
That said, I couldn’t resist messing around some more, hence my MinSoC experiment. Simon’s installation guide points out that “for correct system operation, the Settings->ESS menu ‘Min SoC’ value must be set to 0% in single-ZCell systems”. The issue here is that if MinSoC is greater than 0%, the Victron gear will try to charge the battery while the battery is simultaneously trying to empty itself during maintenance, which of course just isn’t going to work. My solution to this is the following script, which I run from a cron job on the Cerbo twice a day: once at midnight UTC, and again at 06:00 UTC with the --check-maintenance flag set:
Midnight UTC corresponds to the end of our morning peak electricity time, and 06:00 UTC corresponds to the start of our afternoon peak. What this means is that after the morning peak finishes, the MinSoC setting will cause the system to automatically charge the battery to the value specified if it’s not up there already. Given it’s after the morning peak (10:00 AEST / 11:00 AEDT) this charge will likely come from solar PV, not the grid. When the script runs again just before the afternoon peak (16:00 AEST / 17:00 AEDT), MinSoC is set to either the value specified (effectively a no-op), or zero if it’s a maintenance day. This allows the battery to be discharged correctly in the evening on maintenance days, while keeping some charge every other day in case of emergencies. Unlike the script that tries for 100% SoC, this arrangement results in far less grid charging, while still giving protection from minor outages most of the time.
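The script itself isn’t reproduced here, but the core idea can be sketched in a few lines of shell using the dbus tool that ships with Venus OS on the Cerbo. Treat this as a hypothetical outline rather than the real thing: the settings path is the usual ESS MinSoC location, and the maintenance check is only a placeholder:

#!/bin/sh
# Hypothetical sketch of the MinSoC toggle, not the actual script.
MINSOC=15
SETTING=/Settings/CGwacs/BatteryLife/MinimumSocLimit

# Placeholder: the real check compares the time since the last maintenance
# cycle against the configured maintenance time limit.
is_maintenance_day() { false; }

if [ "$1" = "--check-maintenance" ] && is_maintenance_day; then
    # Maintenance evening: MinSoC to 0 so the battery can discharge fully.
    dbus -y com.victronenergy.settings $SETTING SetValue 0
else
    dbus -y com.victronenergy.settings $SETTING SetValue $MINSOC
fi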
In case Simon is reading this now and is thinking “FFS, I wrote ‘MinSoC must be set to 0% in single-ZCell systems’ for a reason!” I should also add a note of caution. The script above detects ZCell maintenance cycles based solely on the configured maintenance time limit and the duration since last maintenance. It does not – and cannot – take into account occasions when the user manually forces maintenance, or situations in which a ZCell for whatever reason hypothetically decides to go into maintenance of its own accord. The latter shouldn’t generally happen, but it can. The point is, if you’re running this MinSoC script from a cron job, you really do still want to keep an eye on what the battery is doing each day, in case you need to turn that setting off and disable the cron job. If you’re not up for that I will reiterate my general advice from earlier: just leave the system alone – let it do its thing and you’ll (almost always) be perfectly fine. Or, get a second ZCell and you can ignore the last several paragraphs entirely.
Now, finally, let’s look at some numbers. The year periods here are a little sloppy for irritating historical reasons. 2018-2019, 2019-2020 and 2020-2021 are all August-based due to Aurora Energy’s previous quarterly billing cycle. The 2021-2022 year starts in late September partly because I had to wait until our new electricity meter was installed in September 2021, and partly because it let me include some nice screenshots when I started writing TANSTAAFL on September 25, 2022. I’ve chosen to make this year (2022-2023) mostly sane, in that it runs from October 1, 2022 through September 30, 2023 inclusive. This is only six days offset from the previous year, but notably makes it much easier to accurately correlate data from the VRM portal with our bills from Aurora. Overall we have five consecutive non-overlapping 12 month periods that are pretty close together. It’s not perfect, but I think it’s good enough to work with for our purposes here.
Year | Grid In (kWh) | Solar In (kWh) | Total In (kWh) | Loads (kWh) | Export (kWh) |
---|---|---|---|---|---|
2018-2019 | 9,031 | 6,682 | 15,713 | 11,827 | 3,886 |
2019-2020 | 9,324 | 6,468 | 15,792 | 12,255 | 3,537 |
2020-2021 | 7,582 | 6,347 | 13,929 | 10,358 | 3,571 |
2021-2022 | 8,531 | 5,640 | 14,171 | 10,849 | 754 |
2022-2023 | 8,936 | 5,744 | 14,680 | 11,534 | 799 |
Overall, 2022-2023 had a similar shape to 2021-2022, including the fact that in both these years we missed three weeks of solar generation in late summer. In 2022 this was due to replacing the MPPT, and in 2023 it was because we replaced the roof. In both cases our PV generation was lower than it should have been by an estimated 500-600kWh. Hopefully nothing like this happens again in future years.
All of our numbers in 2022-2023 were a bit higher than in 2021-2022. We pulled 4.75% more power from the grid, generated 1.84% more solar, the total power going into the system (grid + solar) was 3.59% higher, our loads used 6.31% more power, and we exported 5.97% more power than the previous year.
I honestly don’t know why our loads used more power this year. Here’s a table showing our consumption for both years, and the differences each month (note that September 2022 is only approximate because of how the years don’t quite line up):
Month | 2022 (kWh) | 2023 (kWh) | Diff (kWh) |
---|---|---|---|
October | 988 | 873 | -115 |
November | 866 | 805 | -61 |
December | 767 | 965 | 198 |
January | 822 | 775 | -47 |
February | 638 | 721 | 83 |
March | 813 | 911 | 98 |
April | 775 | 1,115 | 340 |
May | 953 | 1,098 | 145 |
June | 1,073 | 1,149 | 76 |
July | 1,118 | 1,103 | -15 |
August | 966 | 1,065 | 99 |
September | 1,070 | 964 | -116 |
Here’s a graph:
Did we use more cooling this December? Did we use more heating this April and May? I dug the nearest weather station’s monthly mean minimum and maximum temperatures out of the BOM Climate Data Online tool and found that there’s maybe a degree or so variance one way or the other each month year to year, so I don’t know what I can infer from that. All I can say is that something happened in December and April, but I don’t know what.
Another interesting thing is that what I referred to as “the energy cost of the system” in TANSTAAFL has gone down. That’s the kWh figure below in the “what?” column, which is the difference between grid in + solar in – loads – export, i.e. the power consumed by the system itself. In 2021-2022, that was 2,568kWh, or about 18% of the total power that went into the system. In 2022-2023 it was down to 2,347kWh, or just under 16%:
Year | Grid In (kWh) | Solar In (kWh) | Total In (kWh) | Loads (kWh) | Export (kWh) | Total Out (kWh) | what? (kWh) |
---|---|---|---|---|---|---|---|
2021-2022 | 8,531 | 5,640 | 14,171 | 10,849 | 754 | 11,603 | 2,568 |
2022-2023 | 8,936 | 5,744 | 14,680 | 11,534 | 799 | 12,333 | 2,347 |
I suspect the cause of this reduction is that we didn’t spend two and a half months doing lots of grid charging of the battery in 2022-2023. If that’s the case, this again points to the advisability of just letting the system do its thing and not messing with it too much unless you really know you need to.
The last set of numbers I have involve actual money. Here’s what our electricity bills looked like over the past five years:
Year | From Grid (kWh) | Total Bill | Cost/kWh |
---|---|---|---|
2018-2019 | 9,031 | $2,278.33 | $0.25 |
2019-2020 | 9,324 | $2,384.79 | $0.26 |
2020-2021 | 7,582 | $1,921.77 | $0.25 |
2021-2022 | 8,531 | $1,731.40 | $0.20 |
2022-2023 | 8,936 | $1,989.12 | $0.22 |
Note that cost/kWh as I have it here is simply the total dollar amount of our bills divided by the total power drawn from the grid (I’m deliberately ignoring the additional power we use that comes from the sun in this calculation). The bills themselves say “peak power costs $X, off-peak costs $Y, you get $Z back for power exported and there’s a daily supply charge of $SUCKS_TO_BE_YOU”, but that’s all noise. What ultimately matters in my opinion is what I call the effective cost per kilowatt hour, which is why those things are all smooshed together here. The important point is that with our existing solar array we were previously effectively paying about $0.25 per kWh for grid power. After getting the battery and switching to Peak & Off-Peak billing, that went down to $0.20/kWh – a reduction of 20%. Now we’ve inched back up to $0.22/kWh, but it turns out that’s just because power prices have increased. As far as I can tell Aurora Energy don’t publish historical pricing data, so as a public service, I’ll include what I’ve been able to glean from our prior bills here:
It’s nice that the feed-in tariff (i.e. what you get credited when you export power) has gone up quite a bit, but unless you’re somehow able to export 2-3x more power than you import, you’ll never get ahead of the ~20% increase in power prices over the last two years.
Having calculated the effective cost/kWh for grid power, I’m now going to do one more thing which I didn’t think to do during last year’s analysis, and that’s calculate the effective cost/kWh of running our loads, bearing in mind that they’re partially powered from the grid, and partially from the sun. I’ve managed to dig up some old Aurora bills from 2016-2017, back before we put the solar panels on. This should make for an interesting comparison.
Year | From Grid (kWh) | Total Bill | Grid $/kWh | Loads (kWh) | Loads $/kWh |
---|---|---|---|---|---|
2016-2017 | 17,026 | $4,485.45 | $0.26 | 17,026 | $0.26 |
2018-2019 | 9,031 | $2,278.33 | $0.25 | 11,827 | $0.19 |
2019-2020 | 9,324 | $2,384.79 | $0.26 | 12,255 | $0.19 |
2020-2021 | 7,582 | $1,921.77 | $0.25 | 10,358 | $0.19 |
2021-2022 | 8,531 | $1,731.40 | $0.20 | 10,849 | $0.16 |
2022-2023 | 8,936 | $1,989.12 | $0.22 | 11,534 | $0.17 |
The first thing to note is the horrifying 17 megawatt hours we pulled in 2016-2017. Given the hot water and lounge room heat pump were on a separate tariff, I was able to determine that four of those megawatt hours (i.e. about 24% of our power usage) went on heating that year. Replacing the crusty old conventional electric hot water system with a Sanden heat pump hot water service cut that in half – subsequent years showed the heating/hot water tariff using about 2MWh/year. We obviously also somehow reduced our loads by another ~3MWh/year on top of that, but I can’t find the Aurora bills for 2017-2018 so I’m not sure exactly when that drop happened. My best guess is that I probably got rid of some old, always-on computer equipment.
The second thing to note is how the cost of running the loads drops. In 2016-2017 the grid cost/kWh is the same as the loads cost/kWh, because grid power is all we had. From 2018-2021 though, the load cost/kWh drops to $0.19, a saving of about 26%. It remains there until 2021-2022 when we got the battery and it dropped again to $0.16 (another 15% or so). So the big win was certainly putting the solar panels on and swapping the hot water system, with the battery being a decent improvement on top of that.
Further wins are going to come from decreasing our power consumption. In previous posts I had mentioned the need to replace panel heaters with heat pumps, and also that some of our aging computer equipment needed upgrading. We did finally get a heat pump installed in the master bedroom this year, and we replaced the old undersized lounge room heat pump with a new correctly sized unit. This happened on June 30 though, so will have had minimal impact on this year’s figures. Likewise an always-on computer that previously pulled ~100W is now better, stronger and faster in all respects, while only pulling ~50W. That will save us ~438kWh of energy per year, but given the upgrade happened in mid August, again we won’t see the full effects until later.
I’m looking forward to doing another one of these posts in a year’s time. Hopefully I will have nothing at all interesting to report.
I (relatively) recently went down the rabbit hole of trying out personal finance apps to help get a better grip on, well, the things you’d expect (personal finances and planning around them).
In the past, I’ve had an off-again-on-again relationship with GNUCash. I did give it a solid go for a few months in 2004/2005 it seems (I found my old files) and I even had the OFX exports of transactions for a limited amount of time for a limited number of bank accounts! Amazingly, there’s a GNUCash port to macOS, and it’ll happily open up this file from what is alarmingly close to 20 years ago.
Back in those times, running Linux on the desktop was even more of an adventure than it has been since then, and I always found GNUCash to be strange (possibly a theme with me and personal finance software), but generally fine. It doesn’t seem to have changed a great deal in the years since. You still have to manually import data from your bank unless you happen to be lucky enough to live in the very limited number of places where there’s some kind of automation for it.
So, going back to GNUCash was an option. But I wanted to survey the land of what was available, and if it was possible to exchange money for convenience. I am not big on the motivation to go and spend a lot of time on this kind of thing anyway, so it had to be easy for me to do so.
For my requirements, I basically had:
I viewed a mobile app (iOS) as a Nice to Have rather than essential. Given that, my shortlist was:
I’ve used it before, its web site at https://www.gnucash.org/ looks much the same as it always has. It’s Free and Open Source Software, and is thus well aligned with my values, and that’s a big step towards not having vendor lock-in.
I honestly could probably make it work. I wish it had the ability to import transactions from banks for anywhere I have ever lived or banked. I also wish the UI were a bit more consistent and modern, and even remotely Mac-like in the Mac version.
Honestly, if the deal was that a web service would pull bank transactions in exchange for ~$10/month and also fund GNUCash development… I’d struggle to say no.
Here’s an option that has been around forever – https://www.quicken.com/ – and one that I figured I should solidly look at. It’s actually one I even spent money on… before requesting a refund. Its Import/Export is so broken it’s an insult to broken software everywhere.
Did you know that Quicken doesn’t import the Quicken Interchange Format (QIF), and hasn’t since 2005?
Me, incredulously, when trying out Quicken
I don’t understand why you wouldn’t support as many as possible formats that banks export your transaction data as. It cannot possibly be that hard to parse these things, nor can it possibly be code that requires a lot of maintenance.
This basically meant that I couldn’t import data from my Australian Banks. Urgh. This alone ruled it out.
It really didn’t build confidence in ever getting my data out. At every turn it seemed to be really keen on locking you into Quicken rather than having a good experience all-up.
This one was new to me – https://www.wiz.money/ – and had a fancy URL and everything. I spent a bunch of time trying MoneyWiz, and I concluded that it is pretty, but buggy. I managed to create a report where it said I’d earned $0, but when you click into it, it gives actual numbers. Not being self-consistent and getting the numbers wrong, when getting the numbers right is literally the only function of said app, took this out of the running.
It did sync from my US and Australian banks though, so points there.
Intuit used to own Quicken until it sold it to H.I.G. Capital in 2016 (according to Wikipedia). I have no idea if that has had an impact as to the feature set / usability of Quicken, but they now have this Cloud-only product called Mint.
The big issue I had with Mint was that there didn’t seem to be any way to get your data out of it. It seemed to exemplify vendor lock-in. This seems to have changed a bit since I was originally looking, which is good (maybe I just couldn’t find it?). But with the cloud-only approach I wasn’t hugely comfortable with having everything there. It also seemed to be lacking a few features that I was beginning to find useful in other places.
It is the only product that links with the Apple Card though. No idea why that is the case.
The price tag of $0 was pretty unbeatable, which does make me wonder where the money comes from to fund its development and maintenance. My guess is commission on the various financial products advertised through it, and I dearly hope it is not from selling data on its users (I have no reason to believe it is; companies just have a popular habit of doing this).
Banktivity is what I’ve settled on. It was easy enough to figure out, syncs with an iPhone app, is a reasonable price, and can import and sync things from the accounts that I have. Oddly enough, nothing can connect to and pull things from the Apple Card – which is really weird. That isn’t a Banktivity thing though; it’s universal (except for Intuit’s Mint).
I’ve been using it for a bit more than a year now, and am still pretty happy. I wish I could attach a PDF of a bank statement to the Statement that I reconcile, and that I could better tune the auto match/classification rules, along with a few other relatively minor things.
Periodically in life I’ve had the desire to be somewhat fit, or at least have the benefits that come with that such as not dying early and being able to navigate a mountain (or just the city of Seattle) on foot without collapsing. I have also found that holding myself accountable via data is pretty vital to me actually going and repeatedly doing something.
So, at some point I got myself a Garmin watch. The year was 2012 and it was a Garmin Forerunner 410. It had a standard black/grey LCD screen, GPS (where getting a GPS lock could be utterly infuriatingly slow), a sensor you attached to your foot, a sensor you strap to your chest for Heart Rate monitoring, and an ANT+ dongle for connecting to a PC to download your activities. There was even some open source software that someone wrote so I could actually get data off my watch on my Linux laptops. This wasn’t a smart watch – it was exclusively for wearing while exercising and tracking an activity, otherwise it was just a watch.
However, as I was ramping up to marathon-distance running, one huge flaw emerged: I was not fast enough to run a marathon in the time the battery in my Garmin lasted. IIRC it would end up dying around 3hr30min into an activity, which at the time was increasingly a duration I’d describe as “not too long of a run”. So, the search for a replacement began!
The year was 2017, and the Garmin fenix 5x attracted me for two big reasons: a battery life to be respected, and turn-by-turn navigation. At the time, I seldom went running with a phone, preferring a tiny SanDisk media player (RIP; they made a new version that completely sucked) and a watch. The attraction of being able to navigate back to where I started (e.g. a hotel in some strange city where I didn’t speak the language) was very appealing. It also had (what I would now describe as) rudimentary smart-watch features. It didn’t have even remotely everything the Pebble had, but it was enough.
So, a (non-trivial) pile of money later (even with discounts), I had myself a shiny and virtually indestructible new Garmin. I didn’t even need a dongle to sync it – it could upload via its own WiFi connection, or via Bluetooth through the Garmin Connect app on my phone. I could also (if I ever remembered to) plug in the USB cable and download the activities to my computer.
One problem: my skin rebelled against the Garmin fenix 5x after a while. Like, properly rebelled. If it wasn’t coming off, I wanted to rip it off. I tried all of the tricks that are posted anywhere online. Didn’t help. I even got tested for the most likely culprit (a nickel allergy), and it turns out I don’t have one, so I (still) have no idea what in the watch I’m actually allergic to. It’s just that I cannot wear it constantly. Urgh. I was enjoying the daily smart watch uses too!
So, that’s one rather expensive watch relegated to special-purpose use only, and even then it was starting to become a bit of an issue around longer activities. Urgh.
So the hunt began for a smart watch that I could wear constantly. This usually ended in frustration: anything I wanted cost hundreds of dollars, and pretty much nobody listed what materials were in it beyond “stainless steel”, “may contain”, and some disclaimer about “other materials”. That’s not a useful starting point for working out which of these things my skin doesn’t like – at the very least, if the next watch also caused problems, I wanted a list of materials I could use to narrow down what to avoid.
So that was all annoying, and the end result was that I went a long time without really wearing a watch. Why? The search resumed periodically and ended up either with nothing, or totally nothing – except if I wanted to get further into some vendor lock-in.
Honestly, the only manufacturer of anything smartwatch-like that actually listed all the materials and had some options was Apple. Bizarre. Well, since I was already on the iPhone bandwagon, this was possible. Rather annoyingly, the watch and phone are very tied together, which makes for a bit of vendor lock-in if you alternate phone and watch replacement and at any point wish to switch platforms.
That being said, it works well and doesn’t irritate my skin. So that’s a bonus! If I get back into marathon-distance running, we’ll see how well it goes. But for the more common distances I’ve run or cycled with it, the accuracy seems decent, the HR monitor never randomly decides I’m not exerting myself, and the GPS actually gets a lock in reasonable time. Plus it can pair with headphones and be the only thing I take out with me.
A few random notes about things that can make life on macOS (the modern one, as in, circa 2023) better for those coming from Linux.
For various reasons you may end up with Mac hardware with macOS on the metal rather than Linux. This could be anything from battery life of the Apple Silicon machines (and not quite being ready to jump on the Asahi Linux bandwagon), to being able to run the corporate suite of Enterprise Software (arguably a bug more than a feature), to some other reason that is also fine.
My approach to most of my development is to have a remote more powerful Linux machine to do the heavy lifting, or do Linux development on Linux, and not bank on messing around with a bunch of software on macOS that would approximate something on Linux. This also means I can move my GUI environment (the Mac) easily forward without worrying about whatever weird workarounds I needed to do in order to get things going for whatever development work I’m doing, and vice-versa.
Terminal emulator? iTerm2. The built-in Terminal.app is fine, but iTerm2 has more than a few nice extras, including tmux integration, which can make it feel a lot more like a regular Linux machine. I should probably read the tmux integration best practices before complaining about some random bugs I think I’ve hit, so let’s pretend I did that and everything is perfect.
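For the curious, the integration rides on tmux’s control mode, which iTerm2 drives to render tmux windows as native tabs. A minimal way to try it against a remote machine (the host name below is just a placeholder) is:

ssh -t dev.example.com 'tmux -CC new-session -A -s main'

The -CC flag puts tmux into control mode, and new-session -A attaches to the “main” session if it already exists, so reconnecting after a dropped link picks up where you left off.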
I tend to use the Mac for SSHing to bigger Linux machines for most of my work. At work, that’s mostly to a Graviton 2 EC2 Instance running Amazon Linux with all my development environments on it. At home, it’s mostly a Raptor Blackbird POWER9 system running Fedora.
Running Linux locally? For all the container use cases, Podman Desktop or finch. There’s a GUI part of Podman which is nice, and finch I know about because of the relatively nearby team that works on it, and its relationship to lima. Lima positions itself as WSL2-like, but for the Mac. There’s UTM for a full virtual machine / qemu environment, although I rarely end up using that, more commonly reaching for a container or just SSHing to a bigger Linux box.
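As a sketch of the lima workflow (assuming a Homebrew install; instance defaults may vary by version):

brew install lima
limactl start        # creates and boots the "default" Linux instance
lima uname -a        # runs a command inside that instance
limactl stop default # shut it down when done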
There’s Xcode for any macOS development that may be needed (e.g. when you want that extra feature in UTM or something). I do use Homebrew to install a few things locally.
Have a read of Andrew’s blog post on OpenBMC Development on an Apple M1 MacBook Pro too.
Last week I had occasion to test deploying ceph-csi on a k3s cluster, so that Kubernetes workloads could access block storage provided by an external Ceph cluster. I went with the upstream Ceph documentation, because assuming everything worked it’d then be really easy for me to say to others “just go do this”.
Everything did not work.
I’d gone through all the instructions, inserting my own Ceph cluster’s FSID and MON IP addresses in the right places, applied the YAML to deploy the provisioner and node plugins, and all the provisioner bits were running just fine, but the csi-rbdplugin pods were stuck in CrashLoopBackOff:
> kubectl get pods
NAME                                        READY   STATUS             RESTARTS          AGE
csi-rbdplugin-22zjr                         1/3     CrashLoopBackOff   107 (3m55s ago)   2d
csi-rbdplugin-pbtc2                         1/3     CrashLoopBackOff   104 (3m33s ago)   2d
csi-rbdplugin-provisioner-9dcfd56d7-c8s72   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-provisioner-9dcfd56d7-hcztz   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-provisioner-9dcfd56d7-w2ctc   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-r2rzr                         1/3     CrashLoopBackOff   106 (3m39s ago)   2d
The csi-rbdplugin pod consists of three containers – driver-registrar, csi-rbdplugin, liveness-prometheus – and csi-rbdplugin wasn’t able to load the rbd kernel module:
> kubectl logs csi-rbdplugin-22zjr --container csi-rbdplugin
I0726 10:25:12.862125 7628 cephcsi.go:199] Driver version: canary and Git version: d432421a88238a878a470d54cbf2c50f2e61cdda
I0726 10:25:12.862452 7628 cephcsi.go:231] Starting driver type: rbd with name: rbd.csi.ceph.com
I0726 10:25:12.865907 7628 mount_linux.go:284] Detected umount with safe 'not mounted' behavior
E0726 10:25:12.872477 7628 rbd_util.go:303] modprobe failed (an error (exit status 1) occurred while running modprobe args: [rbd]): "modprobe: ERROR: could not insert 'rbd': Key was rejected by service\n"
F0726 10:25:12.872702 7628 driver.go:150] an error (exit status 1) occurred while running modprobe args: [rbd]
Matching “modprobe: ERROR: could not insert ‘rbd’: Key was rejected by service” in the above was an error on each host’s console: “Loading of unsigned module is rejected”. These hosts all have secure boot enabled, so I figured it had to be something to do with that. So I logged into one of the hosts and ran modprobe rbd as root, but that worked just fine. No key errors, no unsigned module errors. And once I’d run modprobe rbd (and later modprobe nbd) on the host, the csi-rbdplugin container restarted and worked just fine.
So why wouldn’t modprobe work inside the container? /lib/modules from the host is mounted inside the container, the container has the right extra privileges… Clearly I needed to run a shell in the failing container to poke around inside when it was in CrashLoopBackOff state, but I realised I had no idea how to do that. I knew I could kubectl exec -it csi-rbdplugin-22zjr --container csi-rbdplugin -- /bin/bash, but of course that only works if the container is actually running. My container wouldn’t even start because of that modprobe error.
Having previously spent a reasonable amount of time with podman, which has podman run, I wondered if there were a kubectl run that would let me start a new container using the upstream cephcsi image, but running a shell instead of its default command. Happily, there is a kubectl run, so I tried it:
> kubectl run -it cephcsi --image=quay.io/cephcsi/cephcsi:canary --rm=true --command=true -- /bin/bash
If you don't see a command prompt, try pressing enter.
[root@cephcsi /]# modprobe rbd
modprobe: FATAL: Module rbd not found in directory /lib/modules/5.14.21-150400.24.66-default
[root@cephcsi /]# ls /lib/modules/
[root@cephcsi /]#
Ohhh, right, of course, that doesn’t have the host’s /lib/modules mounted. podman run lets me add volume mounts using -v options, so surely kubectl run will let me do that too.
At this point in the story, the notes I wrote last week include an awful lot of swearing.
See, kubectl run doesn’t have a -v option to add mounts, but what it does have is an --overrides option to let you add a chunk of JSON to override the generated pod. So I went back to the relevant YAML and teased out the bits I needed to come up with this monstrosity:
> kubectl run -it cephcsi-test \
    --image=quay.io/cephcsi/cephcsi:canary --rm=true \
    --overrides='{
      "apiVersion": "v1",
      "spec": {
        "containers": [ {
          "name": "cephcsi",
          "command": ["/bin/bash"],
          "stdin": true,
          "tty": true,
          "image": "quay.io/cephcsi/cephcsi:canary",
          "volumeMounts": [ {
            "mountPath": "/lib/modules",
            "name": "lib-modules"
          } ],
          "securityContext": {
            "allowPrivilegeEscalation": true,
            "capabilities": { "add": [ "SYS_ADMIN" ] },
            "privileged": true
          }
        } ],
        "volumes": [ {
          "name": "lib-modules",
          "hostPath": { "path": "/lib/modules", "type": "" }
        } ]
      }
    }'
But at least I could get a shell and reproduce the problem:
> kubectl run -it cephcsi-test [honking great horrible chunk of JSON]
[root@cephcsi-test /]# ls /lib/modules/
5.14.21-150400.24.66-default
[root@cephcsi-test /]# modprobe rbd
modprobe: ERROR: could not insert 'rbd': Key was rejected by service
A certain amount more screwing around looking at the source for modprobe and bits of the kernel confirmed that the kernel really didn’t think the module was signed for some reason (mod_verify_sig() was returning -ENODATA), but I knew these modules were fine, because I could load them on the host. Eventually I hit on this:
[root@cephcsi-test /]# ls /lib/modules/*/kernel/drivers/block/rbd*
/lib/modules/5.14.21-150400.24.66-default/kernel/drivers/block/rbd.ko.zst
Wait, what’s that .zst extension? It turns out we (SUSE) have been shipping zstd-compressed kernel modules since – as best as I can tell – some time in 2021. modprobe on my SLE Micro 5.3 host of course supports this:
# grep PRETTY /etc/os-release
PRETTY_NAME="SUSE Linux Enterprise Micro for Rancher 5.3"
# modprobe --version
kmod version 29
+ZSTD +XZ +ZLIB +LIBCRYPTO -EXPERIMENTAL
modprobe in the CentOS Stream 8 upstream cephcsi container does not:
[root@cephcsi-test /]# grep PRETTY /etc/os-release
PRETTY_NAME="CentOS Stream 8"
[root@cephcsi-test /]# modprobe --version
kmod version 25
+XZ +ZLIB +OPENSSL -EXPERIMENTAL
Mystery solved, but I have to say the error messages presented were spectacularly misleading. I later tried with secure boot disabled, and got something marginally better – in that case modprobe failed with “modprobe: ERROR: could not insert ‘rbd’: Exec format error”, and dmesg on the host gave me “Invalid ELF header magic: != \x7fELF”. If I’d seen messaging like that in the first place I might have been quicker to twig to the compression thing.
Anyway, the point of this post wasn’t to rant about inscrutable kernel errors, it was to rant about how there’s no way anyone could be reasonably expected to figure out how to do that --overrides thing with the JSON to debug a container stuck in CrashLoopBackOff. Assuming I couldn’t possibly be the first person to need to debug containers in this state, I told my story to some colleagues, a couple of whom said (approximately) “Oh, I edit the pod YAML and change the container’s command to tail -f /dev/null or sleep 1d. Then it starts up just fine and I can kubectl exec into it and mess around”. Those things totally work, and I wish I’d thought to do that myself. The best answer I got though was to use kubectl debug to make a copy of the existing pod but with the command changed. I didn’t even know kubectl debug existed, which I guess is my reward for not reading the entire manual.
So, finally, here’s the right way to do what I was trying to do:
> kubectl debug csi-rbdplugin-22zjr -it \
    --copy-to=csi-debug --container=csi-rbdplugin -- /bin/bash
[root@... /]# modprobe rbd
modprobe: ERROR: could not insert 'rbd': Key was rejected by service
(...do whatever other messing around you need to do, then...)
[root@... /]# exit
Session ended, resume using 'kubectl attach csi-debug -c csi-rbdplugin -i -t' command when the pod is running
> kubectl delete pod csi-debug
pod "csi-debug" deleted
In the above kubectl debug invocation, csi-rbdplugin-22zjr is the existing pod that’s stuck in CrashLoopBackOff, csi-debug is the name of the new pod being created, and csi-rbdplugin is the container in that pod that has its command replaced with /bin/bash, so you can mess around inside it.
The July 2023 meeting sparked multiple new topics, including Linux security architecture, Debian ports for LoongArch and RISC-V, as well as hardware design of PinePhone backplates.
On the practical side, Russell Coker demonstrated running different applications in isolated environments with the bubblewrap sandbox, as well as other hardening techniques and the way they interact with the host system. Russell also discussed some possible pathways for hardening desktop Linux to reach the security level of modern Android. Yifei Zhan demonstrated sending and receiving messages with the PineDio USB LoRa adapter and how to inspect LoRa signals with an off-the-shelf software-defined radio receiver, and discussed how the driver situation for LoRa on Linux might be improved. Yifei then gave a demonstration of using KVM on the PinePhone Pro to run NetBSD and OpenBSD virtual machines; more details on running VMs on the PinePhone Pro can be found in a blog post from Yifei.
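For anyone wanting to try bubblewrap at home, a minimal invocation looks something like the following. This is not Russell’s exact demonstration, just an illustrative sandbox: a bash process with a read-only view of the OS, a fresh /tmp, and no shared namespaces:

bwrap --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --proc /proc --dev /dev --tmpfs /tmp \
      --unshare-all --die-with-parent \
      bash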
We also had some discussion of the current state of the Mobian and Debian ecosystems, along with how to contribute to different parts of Mobian, with a Mobian developer who joined us.
Somewhat a while ago now, I wrote about how every time I return to write some software for the Mac, the preferred language has changed. The purpose of this adventure was to get my photos out of the aging Shotwell and onto my (then new) Mac and the Apple Photos App.
I’ve had a pretty varied experience with photo management on Linux over the past couple of decades. For a while I used f-spot, as it was the new hotness. At some point it became… slow and crashy enough that it was unusable. Today, the GitHub project warns that current bugs include “Not starting”.
At some point (and via a method I have long since forgotten), I did manage to finally get my photos over to Shotwell, which was the new hotness at the time. That data migration was so long ago that I actually forget what features I was missing from f-spot that I was grumbling about. I remember the import being annoying though. At some point in time Shotwell was no longer the new hotness, and now there is GNOME Photos. I remember looking at GNOME Photos and seeing no method of importing photos from Shotwell, so I put it aside. Hopefully that situation has improved since.
At some point Shotwell development stagnated, and I noticed more things breaking rather than features and performance being added. The good news is that there has been some more development activity on Shotwell recently, so hopefully my issues with it end up being resolved.
One recommendation for Linux photo management was digiKam, but I never ended up using it full time. One of the reasons was that I couldn’t see any non-manual way to import photos from Shotwell into it.
With tens of thousands of photos (~58k at the time of writing), doing things manually didn’t seem like much fun at all.
As I postponed my decision, I ended up moving my main machine over to a Mac for a variety of random reasons, and one quite motivating thing was the ability to have Photos from my iPhone magically sync over to my photo library without having to plug it into my computer and copy things across.
So… how to get photos across from Shotwell on Linux to Photos on a Mac/iPhone (while keeping a very keen eye on how to do it the other way around, because, well, vendor lock-in isn’t great)?
It would be kind of neat if I could just run Shotwell on the Mac and have some kind of import button, but seeing as there wasn’t already a native Mac port, and that Shotwell is written in Vala rather than something I know has a working toolchain on macOS… this seemed like more work than I’d really like to take on.
Luckily, I remembered that Shotwell’s database is actually just a SQLite database pointing to all the files on disk. So, if I could work out how to read it accurately, and how to import all the relevant metadata (such as what Albums a photo is in, tags, title, and description) into Apple Photos, I’d be able to make it work.
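As a taste of what’s in there – the path below is where my Shotwell install kept its database, and the table name is from Shotwell’s schema as I read it, so treat both as illustrative:

sqlite3 ~/.local/share/shotwell/data/photo.db \
    'SELECT filename, title FROM PhotoTable LIMIT 5;'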
So… is there any useful documentation as to how the database is structured?
Semi-annoyingly, Shotwell is written in Vala, a rather niche programming language that, while integrating with all the GObject stuff that GNOME uses, is largely unheard of. Luckily, the database code in Shotwell isn’t too hard to read, so it was a useful fallback for when the documentation proved inadequate.
So, I armed myself with the following resources:
Programming the Mac side of things, it was a good excuse to start looking at Swift, so knowing I’d also need to read a SQLite database directly (rather than use any higher level abstraction), I armed myself with the following resources:
From here, I could work on getting the first half going, the ability to view my Shotwell database on the Mac (which is what I posted a screenshot of back in Feb 2022).
But also, I had to work out what I was doing on the other end of things, how would I import photos? It turns out there’s an API!
A bit of SwiftUI code:
import SwiftUI
import AppKit
import Photos

struct ContentView: View {
    @State var favorite_checked: Bool = false
    @State var hidden_checked: Bool = false

    var body: some View {
        VStack() {
            Text("Select a photo for import")
            Toggle("Favorite", isOn: $favorite_checked)
            Toggle("Hidden", isOn: $hidden_checked)
            Button("Import Photo") {
                let panel = NSOpenPanel()
                panel.allowsMultipleSelection = false
                panel.canChooseDirectories = false
                if panel.runModal() == .OK {
                    let photo_url = panel.url!
                    print("selected: " + String(photo_url.absoluteString))
                    addAsset(url: photo_url, isFavorite: favorite_checked, isHidden: hidden_checked)
                }
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
Combined with a bit of code to do the import (which does look a bunch like the examples in the docs):
import SwiftUI
import Photos
import AppKit

@main
struct SinglePhotoImporterApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

func addAsset(url: URL, isFavorite: Bool, isHidden: Bool) {
    // Add the asset at the given file URL to the photo library.
    PHPhotoLibrary.shared().performChanges({
        let addedImage = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: url)
        addedImage?.isHidden = isHidden
        addedImage?.isFavorite = isFavorite
    }, completionHandler: { success, error in
        if !success {
            print("Error creating the asset: \(String(describing: error))")
        } else {
            print("Imported!")
        }
    })
}
This all meant I could import a single photo. However, there were some limitations.
There’s the PHAssetCollectionChangeRequest to do things to Albums, so it would solve that problem, but I couldn’t for the life of me work out how to add/edit Titles and Descriptions.
It was so close!
So what did I need to do to import Titles and Descriptions? It turns out you can do that via AppleScript. Yes, that thing that launched in 1993 and has somehow survived the transition from m68k-based Macs to PowerPC to Intel to ARM.
So, just to make it easier to debug what was going on, I started adding code to my ShotwellImporter tool that would generate snippets of AppleScript I could run and check that it was doing the right thing… but then I very quickly ran into a problem: it appears that the AppleScript interpreter on modern macOS has limits you’d be more familiar with in 1993 than 2023, and I very quickly hit them – the script would just error out before running (allegedly I was out of dictionary space).
But there’s a new option! Everything you can do with AppleScript you can now do with JavaScript – it’s just even less documented than AppleScript! But it does work. I got to the point where I could generate JavaScript that imported photos into all the relevant albums, and set titles and descriptions.
A useful write up of using JavaScript rather than AppleScript to do things with Photos: https://mudge.name/2019/11/13/scripting-photos-for-macos-with-javascript/
More recent than when I was doing my hacking, https://alexwlchan.net/2023/managing-albums-in-photos/ is a good read.
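To give a feel for the general shape of the generated JavaScript, here’s a tiny hand-rolled example run through osascript. The property and method names are per the Photos scripting dictionary as I understood it (and as described in the articles above), so treat the details as assumptions:

osascript -l JavaScript -e '
    const photos = Application("Photos");
    const items = photos.import([Path("/tmp/example.jpg")], {skipCheckDuplicates: true});
    items[0].name = "A title";
    items[0].description = "A description";
'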
With luck I’ll find some time to write up a bit of a walkthrough of my code, and push it up somewhere.
In my last post, I wrote about how I taught sesdev (originally a tool for deploying Ceph clusters on virtual machines) to deploy k3s, because I wanted a little sandbox in which I could break, er, learn more about Kubernetes. It’s nice to be able to do a toy deployment locally, on a bunch of VMs, on my own hardware, in my home office, rather than paying to do it on someone else’s computer. Given the k3s thing worked, I figured the next step was to teach sesdev how to deploy Longhorn so I could break, er, learn more about that too.
Teaching sesdev to deploy Longhorn meant asking it to:

- Make an ext4 filesystem on /dev/vdb on all the nodes that have extra disks, then mount that on /var/lib/longhorn.
- Run kubectl label node -l 'node-role.kubernetes.io/master!=true' node.longhorn.io/create-default-disk=true to ensure Longhorn does its storage thing only on the nodes that aren’t the k3s master.

So, now I can do this:
> sesdev create k3s --deploy-longhorn
=== Creating deployment "k3s-longhorn" with the following configuration ===
Deployment-wide parameters (applicable to all VMs in deployment):
- deployment ID: k3s-longhorn
- number of VMs: 5
- version: k3s
- OS: tumbleweed
- public network: 10.20.78.0/24
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider…
Bringing machine 'node1' up with 'libvirt' provider…
Bringing machine 'node2' up with 'libvirt' provider…
Bringing machine 'node3' up with 'libvirt' provider…
Bringing machine 'node4' up with 'libvirt' provider…
[... lots more log noise here - this takes several minutes... ]
=== Deployment Finished ===
You can login into the cluster with:
$ sesdev ssh k3s-longhorn
Longhorn will now be deploying, which may take some time.
After logging into the cluster, try these:
# kubectl get pods -n longhorn-system --watch
# kubectl get pods -n longhorn-system
The Longhorn UI will be accessible via any cluster IP address
(see the kubectl -n longhorn-system get ingress output above).
Note that no authentication is required.
…and, after another minute or two, I can access the Longhorn UI and try creating some volumes. There’s a brief period while the UI pod is still starting where it just says “404 page not found”, and even after the UI is up there are still other pods coming online, so on the Volume screen in the Longhorn UI an error appears: “failed to get the parameters: failed to get target node ID: cannot find a node that is ready and has the default engine image longhornio/longhorn-engine:v1.4.1 deployed“. Rest assured this goes away in due course (it’s not impossible I’m suffering here from rural Tasmanian internet lag pulling container images). Anyway, with my five nodes – four of which have an 8GB virtual disk for use by Longhorn – I end up with a bit less than 22GB of storage available:
Now for the fun part. Longhorn is a distributed storage solution, so I thought it would be interesting to see how it handled a couple of types of failure. The following tests are somewhat arbitrary (I’m really just kicking the tyres randomly at this stage) but Longhorn did, I think, behave pretty well given what I did to it.
Volumes in Longhorn consist of replicas stored as sparse files on a regular filesystem on each storage node. The Longhorn documentation recommends using a dedicated disk rather than just having /var/lib/longhorn backed by the root filesystem, so that’s what sesdev does: /var/lib/longhorn is an ext4 filesystem mounted on /dev/vdb. Now, what happens to Longhorn if that underlying block device suffers some kind of horrible failure? To test that, I used the Longhorn UI to create a 2GB volume, then attached that to the master node:
Then, I ssh’d to the master node and with my 2GB Longhorn volume attached, made a filesystem on it and created a little file:
> sesdev ssh k3s-longhorn
Have a lot of fun...
master:~ # cat /proc/partitions
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
   8    0    2097152  sda
master:~ # mkfs /dev/sda
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 3709b21c-b9a2-41c1-a6dd-e449bdeb275b
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
master:~ # mount /dev/sda /mnt
master:~ # echo foo > /mnt/foo
master:~ # cat /mnt/foo
foo
Then I went and trashed the block device backing one of the replicas:
> sesdev ssh k3s-longhorn node3
Have a lot of fun...
node3:~ # ls /var/lib/longhorn
engine-binaries  longhorn-disk.cfg  lost+found  replicas  unix-domain-socket
node3:~ # dd if=/dev/urandom of=/dev/vdb bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.486205 s, 216 MB/s
node3:~ # ls /var/lib/longhorn
node3:~ # dmesg|tail -n1
[ 6544.197183] EXT4-fs error (device vdb): ext4_map_blocks:607: inode #393220: block 1607168: comm longhorn: lblock 0 mapped to illegal pblock 1607168 (length 1)
At this point, the Longhorn UI still showed the volume as green (healthy, ready, scheduled). Then, back on the master node, I tried creating another file:
master:~ # echo bar > /mnt/bar
master:~ # cat /mnt/bar
bar
That’s fine so far, but suddenly the Longhorn UI noticed that something very bad had happened:
Ultimately node3 was rebooted and ended up stalled with the console requesting the root password for maintenance:
Meanwhile, Longhorn went and rebuilt a third replica on node2:
…and the volume remained usable the entire time:
master:~ # echo baz > /mnt/baz
master:~ # ls /mnt
bar  baz  foo  lost+found
That’s perfect!
Looking at the Node screen we could see that node3 was still down:
That’s OK, I was able to fix node3. I logged in on the console, ran mkfs.ext4 /dev/vdb, then brought the node back up again. The disk remained unschedulable, because Longhorn was still expecting the ‘old’ disk to be there (I assume based on the UUID stored in /var/lib/longhorn/longhorn-disk.cfg) and of course the ‘new’ disk is empty. So I used the Longhorn UI to disable scheduling for the ‘old’ disk, then deleted it. Shortly after, Longhorn recognised the ‘new’ disk mounted at /var/lib/longhorn and everything was back to green across the board.
So Longhorn recovered well from the backing store of one replica going bad. Next I thought I’d try to break it from the other end by running a volume out of space. What follows is possibly not a fair test, because what I did was create a single Longhorn volume larger than the underlying disks, then fill it up. In normal usage, I assume one would ensure there’s plenty of backing storage available to service multiple volumes, that individual volumes wouldn’t generally be expected to get more than a certain percentage full, and that some sort of monitoring and/or alerting would be in place to warn of disk pressure.
With four nodes, each with a single 8GB disk, and Longhorn apparently reserving 2.33GB by default on each disk, that means no Longhorn volume can physically store more than a bit over 5.5GB of data (see the Size column in the previous screenshot). Given that the default setting for Storage Over Provisioning Percentage is 200, we’re actually allowed to allocate up to a bit under 11GB.
So I went and created a 10GB volume, attached that to the master node, created a filesystem on it, and wrote a whole lot of zeros to it:
master:~ # mkfs.ext4 /dev/sda
mke2fs 1.46.5 (30-Dec-2021)
[...]
master:~ # mount /dev/sda /mnt
master:~ # df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        9.8G   24K  9.3G   1% /mnt
master:~ # dd if=/dev/zero of=/mnt/big-lot-of-zeros bs=1M status=progress
2357198848 bytes (2.4 GB, 2.2 GiB) copied, 107 s, 22.0 MB/s
While that dd was running, I was able to see the used space of the replicas increasing in the Longhorn UI:
After a few more minutes, the dd stalled…
master:~ # dd if=/dev/zero of=/mnt/big-lot-of-zeros bs=1M status=progress
9039773696 bytes (9.0 GB, 8.4 GiB) copied, 478 s, 18.9 MB/s
…there was a lot of unpleasantness on the master node’s console…
…the replicas became unschedulable due to lack of space…
…and finally the volume faulted:
Now what?
It turns out that Longhorn will actually recover if we’re able to somehow expand the disks that store the replicas. This is probably a good argument for backing Longhorn with an LVM volume on each node in real-world deployments, because then you could just add another disk and extend the volume onto it. In my case though, given it’s all VMs and virtual block devices, I can just enlarge those devices. For each node then, I:

- ran qemu-img resize /var/lib/libvirt/images/k3s-longhorn_$NODE-vdb.qcow2 +8G to enlarge the backing device, and
- ran resize2fs /dev/vdb on the node to take advantage of the extra disk space.

After doing that to node1, Longhorn realised there was enough space there and brought node1’s replica of my 10GB volume back online. It also summarily discarded the other two replicas from the still-full disks on node2 and node3, which didn’t yet have enough free space to be useful:
As I repeated the virtual disk expansion on the other nodes, Longhorn happily went off and recreated the missing replicas:
Finally I could re-attach the volume to the master node, and have a look to see how many of my zeros were actually written to the volume:
master:~ # cat /proc/partitions
major minor  #blocks  name
 254    0   44040192  vda
 254    1       2048  vda1
 254    2      20480  vda2
 254    3   44016623  vda3
   8    0   10485760  sda
master:~ # mount /dev/sda /mnt
master:~ # ls -l /mnt
total 7839764
-rw-r--r-- 1 root root 8027897856 May  3 04:41 big-lot-of-zeros
drwx------ 2 root root      16384 May  3 04:34 lost+found
Recall that dd claimed to have written 9039773696 bytes before it stalled when the volume faulted, so I guess that last gigabyte of zeros is lost in the aether. But recall also that this isn’t really a fair test – one overprovisioned volume quickly and deliberately filled to breaking point, vs. a production deployment with (presumably) multiple volumes that don’t fill quite so fast, and where one is hopefully paying at least a little bit of attention to disk pressure as time goes by.
It’s worth noting that in a situation where there are multiple Longhorn volumes, assuming one disk or LVM volume per node, the replicas will all share the same underlying disks, and once those disks are full it seems all the Longhorn volumes backed by them will fault. Given multiple Longhorn volumes, one solution – rather than expanding the underlying disks – is simply to delete a volume or two if you can stand to lose the data, or maybe delete some snapshots (I didn’t try the latter yet). Once there’s enough free space, the remaining volumes will come back online. If you’re really worried about this failure mode, you could always just disable overprovisioning in the first place – whether this makes sense or not will really depend on your workloads and their data usage patterns.
All in all, like I said earlier, I think Longhorn behaved pretty well given what I did to it. Some more information in the event log could perhaps be beneficial though. In the UI I can see warnings from longhorn-node-controller e.g. “the disk default-disk-1cdbc4e904539d26(/var/lib/longhorn/) on the node node1 has 3879731200 available, but requires reserved 2505089433, minimal 25% to schedule more replicas” and warnings from longhorn-engine-controller e.g. “Detected replica overprovisioned-r-73d18ad6 (10.42.3.19:10000) in error“, but I couldn’t find anything really obvious like “Dude, your disks are totally full!”
Later, I found more detail in the engine manager logs after generating a support bundle ([…] level=error msg=”I/O error” error=”tcp://10.42.4.34:10000: write /host/var/lib/longhorn/replicas/overprovisioned-c3b9b547/volume-head-003.img: no space left on device”) so the error information is available – maybe it’s just a matter of learning where to look for it.
We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.
I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight version of Kubernetes. Turns out that wasn’t too difficult, so now I can do this:
> sesdev create k3s
=== Creating deployment "k3s" with the following configuration ===
Deployment-wide parameters (applicable to all VMs in deployment):
- deployment ID: k3s
- number of VMs: 5
- version: k3s
- OS: tumbleweed
- public network: 10.20.190.0/24
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...
[... wait a few minutes (there's lots more log information output here in real life) ...]
=== Deployment Finished ===
You can login into the cluster with:

 $ sesdev ssh k3s
…and then I can do this:
> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…
master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1
master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s
…and then I can make a mess with kubectl apply, helm, etc.
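The sort of mess-making I mean being, for example (the namespace and chart names here are just illustrative):

kubectl create namespace playground
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install test-nginx bitnami/nginx --namespace playground
kubectl get pods -n playground --watch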
One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:
> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
$(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ; do
echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
node3
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
 253   16    8388608  vdb
 253   32    8388608  vdc
node2
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
 253   16    8388608  vdb
 253   32    8388608  vdc
node4
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
 253   16    8388608  vdb
 253   32    8388608  vdc
node1
major minor  #blocks  name
 253    0   44040192  vda
 253    1       2048  vda1
 253    2      20480  vda2
 253    3   44016623  vda3
 253   16    8388608  vdb
 253   32    8388608  vdc
As you can see, this gives all the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven't tried that yet.
The decision to do this rather than build my own was complicated, and I'm going to mostly skip over the detail of that. At some time I might put it in another blog post. But for now it's enough to say that I'd accidentally cooked the motor in my Mark I, the work on the Mark II was going to take ages, and I was in the relatively fortunate situation of being able to afford the Experia if I sold my existing Triumph Tiger Sport and the parts for the Mark II.
For other complicated reasons I was planning to be in Sydney after the weekend that Bruce at Zen Motorcycles told me the bike would be arriving. Rather than have it freighted down, and since I would have room for my riding gear in our car, I decided to pick it up and ride it back on the Monday. In reconnoitering the route, we discovered that by pure coincidence Zen Motorcycles is on Euston Road in Alexandria, only 200 metres away from the entrance to WestConnex and the M8. So with one traffic light I could be out of Sydney.
I will admit to being more than a little excited that morning. Electric vehicles are still, in 2023, a rare enough commodity that waiting lists can be months long; I ordered this bike in October 2022 and it arrived in March 2023. So I'd had plenty of time to build my expectations. And likewise the thought of riding a brand new bike - literally one of the first of its kind in the country (it is the thirty-second Experia ever made!) - was a little daunting. I obtained PDF copies of the manual and familiarised myself with turning the cruise control on and off, as well as checking and setting the regen braking levels. Didn't want to stuff anything up on the way home.
There is that weird feeling in those situations of things being both very ordinary and completely unique. I met Bruce, we chatted, I saw the other Experia models in the store, met Ed - who had come down to chat with Bruce, and just happened to be the guy who rode a Harley Davidson Livewire from Perth to Sydney and then from Sydney to Cape Tribulation and back. He shared stories from his trip and tips on hypermiling. I signed paperwork, picked up the keys, put on my gear, prepared myself.
Even now I still get a bit choked up just thinking of that moment. Seeing that bike there, physically real, in front of me - after those months of anticipation - made the excitement real as well.
So finally, after making sure I wasn't floating, and making sure I had my ear plugs in and helmet on the right way round, I got on. Felt the bike's weight. Turned it on. Prepared myself. Took off. My partner followed behind, through the lights, onto the M8 toward Canberra. I gave her the thumbs up.
We planned to stop for lunch at Mittagong, while the NRMA still offers the free charger at the RSL there. One lady was charging her Nissan Leaf on the CHAdeMO side; shortly after I plugged in, a guy arrived in his Volvo XC40 Recharge. He had the bigger battery and would take longer; I just needed a ten minute top up to get me to Marulan.
I got to Marulan and plugged in; a guy came over thinking he needed to tell the rider of a petrol motorbike not to park in the electric vehicle bay, but then realised that the plug was going into my bike. Kate headed off, having charged up as well, and I waited another ten minutes or so to get a bit more charge. Then I rode back.
I stopped only once more - at Mac's Reef Road. I turned off, did a U-turn, then waited for the traffic to clear before trying the bike's acceleration. Believe me when I say this bike will absolutely do 0-100km/h in under four seconds! It is not a light bike, but when you pull on the power it gets up and goes.
Here is my basic review, given that experience and then having ridden it for about ten weeks around town.
The absolute best feature of the Energica Experia is that it is perfectly comfortable riding around town. Ease on the throttle and it gently takes off at the traffic lights and keeps pace with the traffic. Ease off, and it gently comes to rest with regenerative braking and a light touch on the rear brake after stopping to hold it still. If you want to take off faster, wind the throttle on more. It is not temperamental or twitchy, and you have no annoying gears and clutch to balance.
In fact, I feel much more confident lane filtering, because before I would have to have the clutch ready and be prepared to give the Tiger Sport lots of throttle lest I accidentally stall it in front of an irate line of traffic. With the Experia, I can simply wait peacefully - using no power - and then when the light goes green I simply twist on the throttle and I am away ahead of even the most aggressive car driver.
It is amazingly empowering.
I'm not going to bore you with the stats - you can look them up yourself if you care. The main thing to me is that it has DC fast charging, and watching 75kW go into a 22.5kWh battery is just a little bit terrifying as well as incredibly cool. The stated range of 250km on a charge at highway speeds is absolutely correct, from my experience riding it down from Sydney. That plus the fast charging means I think it will be quite reasonable to tour on this bike, stopping off at fast or even mid-level chargers - even a boring 22kW charger can fill the battery in an hour. The touring group I travel with stops often enough that if those stops can be top-ups, I will not hold anyone up.
Some time in the near future I hope to have a nice fine day where I can take it out on the Cotter Loop. This is an 80km stretch of road that goes west of Canberra into the foothills of the Brindabella Ranges, out past the Deep Space Tracking Station and Tidbinbilla Nature Reserve. It's a great combination of curving country roads and hilly terrain, and reasonably well maintained as well. I did that on the Tiger Sport, with a GoPro, before I sold it - and if I can ever convince PiTiVi to actually compile the video from it I will put that hour's ride up on a platform somewhere.
I want to do that as much to show off Canberra's scenery as to show off the bike.
And if the CATL battery capacity improvement comes through to the rest of the industry, and we get bikes that can do 400km to 500km on a charge, then electric motorbike touring really will be no different from petrol motorbike touring. The Experia is definitely at the forefront of that change, and touring is already quite possible on this bike.
Rustup (the community package manager for the Rust language) was starting to really suffer: CI times were up at about an hour.
We've made some strides in bringing this down.
The first thing, which achieved about a 30% reduction in test time, was to stop recreating all the test context every time.
Rustup tests the download/installation/upgrade of distributions of Rust. To avoid downloading gigabytes in the test suite, the suite creates mocks of the published Rust artifacts. These mocks are GPG signed and compressed with multiple compression methods, both of which are quite heavyweight operations to perform – and not actually the interesting code under test to execute.
Previously, every test was entirely hermetic, and usually the server state was also unmodified.
There were two cases where the state was modified. One, a small number of tests testing error conditions such as GPG signature failures. And two, quite a number of tests that were testing temporal behaviour: for instance, install nightly at time A, then with a newer server state, perform a rustup update and check a new version is downloaded and installed.
We’re partway through this migration, but compare these two tests:
fn check_updates_some() {
    check_update_setup(&|config| {
        set_current_dist_date(config, "2015-01-01");
        config.expect_ok(&["rustup", "update", "stable"]);
        config.expect_ok(&["rustup", "update", "beta"]);
        config.expect_ok(&["rustup", "update", "nightly"]);
        set_current_dist_date(config, "2015-01-02");
        config.expect_stdout_ok(
            &["rustup", "check"],
            for_host!(
                r"stable-{0} - Update available : 1.0.0 (hash-stable-1.0.0) -> 1.1.0 (hash-stable-1.1.0)
beta-{0} - Update available : 1.1.0 (hash-beta-1.1.0) -> 1.2.0 (hash-beta-1.2.0)
nightly-{0} - Update available : 1.2.0 (hash-nightly-1) -> 1.3.0 (hash-nightly-2)
"
            ),
        );
    })
}
fn check_updates_some() {
    test(&|config| {
        config.with_scenario(Scenario::ArchivesV2_2015_01_01, &|config| {
            config.expect_ok(&["rustup", "toolchain", "add", "stable", "beta", "nightly"]);
        });
        config.with_scenario(Scenario::SimpleV2, &|config| {
            config.expect_stdout_ok(
                &["rustup", "check"],
                for_host!(
                    r"stable-{0} - Update available : 1.0.0 (hash-stable-1.0.0) -> 1.1.0 (hash-stable-1.1.0)
beta-{0} - Update available : 1.1.0 (hash-beta-1.1.0) -> 1.2.0 (hash-beta-1.2.0)
nightly-{0} - Update available : 1.2.0 (hash-nightly-1) -> 1.3.0 (hash-nightly-2)
"
                ),
            );
        })
    })
}
The former version mutates the date with set_current_dist_date; the new version uses two scenarios, one for the earlier time, and one for the later time. This permits the server state to be constructed only once. On a per-test basis it can move as much as 50% of the time out of the test.
The next major gain was moving from 14 separate integration test binaries to just one, which reduces the cost of linking, since all of the binaries link in the same library. It also lets us see unused functions in our test support library, which helps with cleaning up cruft rather than letting it accumulate.
Part of the test suite for each test is setting up an installed rustup environment. Why not start from scratch every time? Well, we obviously have tests that do that, but most tests are focused on steps beyond the new-user case. Setting up an installed rustup environment has a few steps, but particular ones are copying a binary of rustup into the test sandbox, and hard linking it under various names: cargo, rustc, rustup etc.
A debug build of rustup is ~20MB. Running 400 tests means about 8GB of IO; on some platforms most of that IO won’t hit disk, on others it will.
In review now is a PR that changes the initial copy to a hardlink: we hardlink the rustup-init built by cargo into each test, and then hardlink that to the various binaries. That saves 8GB of IO, which isn’t much from some perspectives, but it adds pressure on the page cache, and is wasted work. One wrinkle is a very low max-links limit on NTFS of 1023; to mitigate that we count the links made to rustup-init and generate a new inode for the original to avoid failures happening.
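In shell terms, the change is roughly this shape (paths simplified; the real PR also handles the NTFS link-count workaround described above):

mkdir -p "$TEST_DIR/bin"
# before: a full ~20MB copy of rustup-init for every test sandbox
# after: hardlinks all the way down, so no file data is copied at all
ln target/debug/rustup-init "$TEST_DIR/rustup-init"
for name in rustup cargo rustc; do
    ln "$TEST_DIR/rustup-init" "$TEST_DIR/bin/$name"
done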
In GitHub Actions this lowers our test time to 19m for Linux and 24m for Windows, which is a lot better but not great.
I plan on experimenting with separate actions for building release artifacts and doing CI tests – at the moment we have the same action do both, but they don’t share artifacts in the cache in any meaningful way, so we can probably gain parallelism there, as well as turning off release builds entirely for CI.
We should finish the cached test context work and use it everywhere.
We’re also looking at having fewer integration tests and more narrow, close-to-the-code tests.
Back in 2012, I received a box of eight hundred openSUSE 12.1 promo DVDs, which I then set out to distribute to local Linux users’ groups, tech conferences, other SUSE crew in Australia, and so forth. I didn’t manage to shift all 800 DVDs at the time, and I recently rediscovered the remaining three hundred and eighty four while installing some new shelves. As openSUSE 12.1 went end of life in May 2013, it seemed likely the DVDs were now useless, but I couldn’t bring myself to toss them in landfill. Instead, given last week was Hack Week, I decided to use them for an art project. Here’s the end result:
Making that mosaic was extremely fiddly. It’s possibly the most annoying Hack Week project I’ve ever done, but I’m very happy with the outcome.
The backing is a piece of 900mm x 600mm x 6mm plywood, primed with some leftover kitchen and bathroom undercoat, then spray painted black. I’d forgotten how bad spray paint smells, but it makes for a nice finish. To get the Geeko shape, I took the official openSUSE logo, then turned it into an outline in Inkscape, saved that as a PNG, opened it in GIMP, and cut it into nine 300mm x 200mm pieces which I then printed on A4 paper, stuck together with tape, and cut out to make a stencil. Of course, the first time I did that, nothing quite lined up, so I had to reprint it but with “Ignore page margins” turned off and “Draw crop marks” turned on, then cut the pages down along the crop marks before sticking them together the second time. Then I placed the stencil on the backing, glued the eye down (that just had to be made from the centre of a DVD!) and started laying out cut up DVD shards.
I initially tried cutting the DVDs with tin snips, which is easy on the hands, but had a tendency to sometimes warp the DVD pieces and/or cause them to delaminate, so I reverted to a large pair of scissors which was more effort but ultimately less problematic.
After placing the pieces that made up the head, tail, feet and spine, and deciding I was happy with how they looked, I glued each piece down with superglue. Think: carefully pick up DVD shard without moving too many other shards, turn over, dab on a few tiny globs of superglue, lower into place, press for a few seconds, move to next piece. Do not get any superglue on your fingers, or you’ll risk sticking your fingers together and/or make a gluey mess on the shiny visible side of the DVD shards.
It was another three sessions of layout-then-glue-down to fill in the body. I think I stuck my fingers together about six, or eight, or maybe twenty times. Also, despite my best efforts to get superglue absolutely nowhere near the stencil at all, when I removed the stencil, it had stuck to the backing in several places. I managed to scrape/cut that off with a combination of fingernails, tweezers, and the very sharp knife in my SLE 12 commemorative Leatherman tool, then touched up the remaining white bits with a fine point black Sharpie.
Judging from the leftover DVD centre pieces, this mosaic used about 12 DVDs in all, which isn’t very many considering my initial stash. I had a few other ideas for the remainder, mostly involving hanging them up somehow, which I messed around with earlier on while waiting for the paint to dry on the plywood.
One (failed) idea was to use a cutting wheel on my Dremel tool to slice half way through a few DVDs, then slot them into each other to make a hanging thingy that would spin in the wind. I was unable to make a smooth/straight enough cut for this to work, and superglue doesn’t bridge gaps. You can maybe get an idea of what I was aiming at from this photo:
My wife had an idea for a better way to do this, which is to take a piece of dowel, cut slots in the sides, and glue DVD halves into the slots using Araldite (that’s an epoxy resin, in case you didn’t grow up with that brand name). I didn’t get around to trying this, but I reckon she’s onto something. Next time I’m at the hardware store, I’ll try to remember to pick up some suitably sized dowel.
I did make one somewhat simpler hanging thingy, which I call “Geeko’s Tail (Uncurled)”. It’s just DVDs superglued together on the flat, hanging from fishing line, but I think it’s kinda cool:
Also, I’ve discovered that Officeworks has an e-waste recycling program, so any DVDs I don’t use in future projects needn’t go to landfill.
Update 2023-02-20: For photos of the mosaic, plus wallpapers made from the photos, see https://github.com/tserong/hackweek22
I have long said “Long Malaysians, Short Malaysia” in conversation with many people. Maybe it took me a while to tweet it, but this was the first example: Dec 29, 2021. I’ve tweeted it a lot more since.
Malaysia has a 10th Prime Minister, but in general, it is a very precarious partnership. Consider it, same shit, different day?
5/n: Otherwise, there will be no change.
So change via “purported democracy” is never going to happen with a country like Malaysia, rotten to the core. It is a crazy dream.
You succeed, despite of. Davka.
Reboot, or bust.
Good luck, Malaysia.
— Colin Charles (@bytebot) August 18, 2021
I just have to get off the Malaysian news diet. Malaysians elsewhere are generally very successful. Malaysians suffering through their daily doldrums, well, they just need to wake up, see the light, and succeed.
In the end, as much as people paraphrase it: ask not what the country can do for you. Legitimately, this is your life, and you should be taking good care of yourself and your loved ones. You succeed, despite of. Politics and the state happens, regardless of.
Me, personally? Ideas abound for how to get Malaysians who see the light to succeed elsewhere. And if I read, and get angry at something (tweet rage?), I’m going to pop RM50 into an investment account, which should help me get off this poor habit. I’ll probably also just cut subscriptions to Malaysian news things… Less exposure is actually better for you. I can’t believe that it has taken me this long to realise this.
Time to build.
I did poorly at blogging last year. Oops. When I read This Thing Still On?, I think to myself that I really have to do better in 2023. Maybe the catalyst is the fact that Twitter is becoming a shit show. I doubt people will leave the platform in droves, per se, but I think we are coming back to the need for decentralised blogs again.
I have 477 days until I turn 40. I ditched the Hobonichi Techo sometime in 2022 and just focused on the Field Notes; this year, I’ve got a Monocle x Leuchtturm1917 + Field Notes combo (though it seems my subscription lapsed Winter 2022; I should really burn down the existing collection and resubscribe).
2022 was pretty amazing. Lots of work. Lots of fun. 256 days on the road (what a number), 339,551km travelled, 49 cities, 20 countries.
Getting back into doing, and not being afraid of experimenting in public, is what 2023 is all about. The Year of the Rabbit is upon us tomorrow, hence why I don’t mind a slightly late Hello 2023 :)
Get back into the habit of doing. And publishing by learning and doing. No fear. Not that I wasn’t doing, but it’s time to be prolific with what’s been going on.
I better remember that.
I like using Catalyst Cloud to host some of my personal sites. In the past I used CAcert for my TLS certificates, but more recently I've been using Let's Encrypt, as their certificates are trusted in all browsers. Currently the LoadBalancer as a Service (LBaaS) in Catalyst Cloud doesn't have built-in support for Let's Encrypt. I could use an apache2/nginx proxy, handle the TLS termination there, and have that manage the Let's Encrypt lifecycle, but really, I'd rather use LBaaS.
So I thought I'd set about working out how to get Dehydrated (the Let's Encrypt client I've been using) to drive LBaaS (known as Octavia). I figured this would be of interest to other people using Octavia with OpenStack in general, not just Catalyst Cloud.
There are a few things you need to do. These instructions are specific to Debian:
As we're using HTTP-01 Challenge Type here, you need to have the LoadBalancer forwarding port 80 to your website to allow for the challenge response. It is good practice to have a redirect to HTTPS, here's an example virtual host for Apache:
<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com

    RewriteEngine On
    RewriteRule ^/.well-known/ - [L]
    RewriteRule ^/(.*)$ https://www.example.com/$1 [R=301,L]

    <Location />
        Require all granted
    </Location>
</VirtualHost>

You also need this in /etc/apache2/conf-enabled/letsencrypt.conf:
Alias /.well-known/acme-challenge /var/lib/dehydrated/acme-challenges

<Directory /var/lib/dehydrated/acme-challenges>
    Options None
    AllowOverride None

    # Apache 2.x
    <IfModule !mod_authz_core.c>
        Order allow,deny
        Allow from all
    </IfModule>

    # Apache 2.4
    <IfModule mod_authz_core.c>
        Require all granted
    </IfModule>
</Directory>
And that should be all that you need to do. Now, when Dehydrated updates your certificate, it should update your LoadBalancer as well!
Sample hook.sh:

deploy_cert() {
    local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" \
        CHAINFILE="${5}" TIMESTAMP="${6}"
    shift 6

    # File contents should be:
    #   export OS_PASSWORD='your password in here'
    . /etc/dehydrated/catalystcloud/password

    # OpenRC file from the Catalyst Cloud dashboard
    . /etc/dehydrated/catalystcloud/openrc.sh --no-token

    # UUID of the LoadBalancer to be managed
    LB_LISTENER='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

    # Barbican uses P12 files, we need to make one.
    P12=$(readlink -f $KEYFILE \
        | sed -E 's/privkey-([0-9]+)\.pem/barbican-\1.p12/')
    openssl pkcs12 -export -inkey $KEYFILE -in $CERTFILE -certfile \
        $FULLCHAINFILE -passout pass: -out $P12

    # Keep track of existing certs for this domain (hopefully no more than 100)
    EXISTING_URIS=$(openstack secret list --limit 100 \
        -c Name -c 'Secret href' -f json \
        | jq -r ".[]|select(.Name | startswith(\"$DOMAIN\"))|.\"Secret href\"")

    # Upload the new cert
    NOW=$(date +"%s")
    openstack secret store --name $DOMAIN-$TIMESTAMP-$NOW -e base64 \
        -t "application/octet-stream" --payload="$(base64 < $P12)"

    NEW_URI=$(openstack secret list --name $DOMAIN-$TIMESTAMP-$NOW \
        -c 'Secret href' -f value) \
        || unset NEW_URI

    # Change LoadBalancer to use new cert - if the old one was the default,
    # change the default. If the old one was in the SNI list, update the
    # SNI list.
    if [ -n "$EXISTING_URIS" ]; then
        DEFAULT_CONTAINER=$(openstack loadbalancer listener show $LB_LISTENER \
            -c default_tls_container_ref -f value)

        for URI in $EXISTING_URIS; do
            if [ "x$URI" = "x$DEFAULT_CONTAINER" ]; then
                openstack loadbalancer listener set $LB_LISTENER \
                    --default-tls-container-ref $NEW_URI
            fi
        done

        SNI_CONTAINERS=$(openstack loadbalancer listener show $LB_LISTENER \
            -c sni_container_refs -f value | sed "s/'//g" | sed 's/^\[//' \
            | sed 's/\]$//' | sed "s/,//g")

        for URI in $EXISTING_URIS; do
            if echo $SNI_CONTAINERS | grep -q $URI; then
                SNI_CONTAINERS=$(echo $SNI_CONTAINERS | sed "s,$URI,$NEW_URI,")
                openstack loadbalancer listener set $LB_LISTENER \
                    --sni-container-refs $SNI_CONTAINERS
            fi
        done

        # Remove old certs
        for URI in $EXISTING_URIS; do
            openstack secret delete $URI
        done
    fi
}

HANDLER="$1"; shift
#if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|sync_cert|deploy_cert|deploy_ocsp|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then
if [[ "${HANDLER}" =~ ^(deploy_cert)$ ]]; then
    "$HANDLER" "$@"
fi
We’ve done this a number of times over the last decade, from OSDC to LCA. The idea is to provide a free psychologist or counsellor at an in-person conference. Attendees can do an anonymous booking by taking a stickynote (with the timeslot) from a signup sheet, and thus get a free appointment.
Many people find it difficult taking the first (very important) step towards getting professional help, and we’ve received good feedback that this approach indeed assists.
So far we’ve always focused on open source conferences. Now we’re moving into information security! First BrisSEC 2022 (Friday 29 April at the Hilton in Brisbane, QLD) and then AusCERT 2022 (10-13 May at the Star Hotel, Gold Coast QLD). The awesome and geek friendly Dr Carla Rogers will be at both events.
How does this get funded? Well, we’ve crowdfunded some and nudged sponsors, but mostly it gets picked up by the conference organisers (aka indirectly by the sponsors).
If you’re a conference organiser, or would like a particular upcoming conference to offer this service, do drop us a line and we’re happy to chase it up for you and help the organisers to make it happen. We know how to run that now.
In-person is best. But for virtual conferences, sure, contact us as well.
The post Free psychologist service at conferences: April 2022 update first appeared on BlueHackers.org.

The hack day didn’t go as well as I hoped, but it didn’t go too badly either. Attendance was smaller than hoped and the discussion was mostly about things other than FLOSS. But everyone who attended had fun and learned interesting things, so generally I think it counts as a success. There was discussion on topics including military hardware, viruses (particularly Covid), rocketry, and literature. During the discussion one error in a Wikipedia page was discussed, and hopefully we can get that fixed.
I think that everyone who attended will be interested in more such meetings. Overall I think this is a reasonable start to the Hack Day meetings; when I previously ran such meetings they often ended up being more social events than serious hacking events, and that’s OK too.
One conclusion that we came to regarding meetings is that they should always be well announced via email, and that the iCal file isn’t useful for everyone. Discussion continues on the best methods of announcing meetings, but I anticipate that better email announcements will get more attendance.
The March 2022 meeting went reasonably well. Everyone seemed to have fun and learn useful things about computers. After 2 hours my Internet connection dropped out which stopped the people who were using VMs from doing the tutorial. Fortunately most people seemed ready for a break so we ended the meeting. The early and abrupt ending of the meeting was a disappointment but it wasn’t too bad, the meeting would probably only have gone for another half hour otherwise.
The BigBlueButton system was shown to be effective for training when one person got confused with the Debian package configuration options for Postfix and they were able to share the window with everyone else to get advice. I was also confused by that stage.
The main feature of the meeting was training in setting up a mailserver with Postfix, here are the lecture notes for it [1]. The consensus at the end of the meeting was that people wanted more of that for the April meeting. So for the April meeting I will add to the Postfix Training to include SpamAssassin, SPF, DKIM, and DMARC. For the start of the next meeting instead of providing bare Debian installations for the VMs I’ll provide a basic Postfix/Dovecot setup so people can get straight into SpamAssassin etc.
For the May meeting training on SE Linux was requested.
Towards the end of the meeting we discussed Matrix and federated social media. LUV has a Matrix server and I can give accounts to anyone who’s involved in FOSS in the Australia and New Zealand area. For Mastodon the NZOSS Mastodon server [2] seems like a good option. I have an account there to try Mastodon, my Mastodon address is @etbe@mastodon.nzoss.nz .
We are going to make Matrix a primary communication method for the Flounder group, the room is #flounder:luv.asn.au . My Matrix address is @etbe:luv.asn.au .
We now have a mailing list; see https://lists.linux.org.au/mailman/listinfo/flounder for information. The address to post to the list is flounder@lists.linux.org.au.
We also have a new URL for the blog and events. See the right sidebar for the link to the iCal file which can be connected to Google Calendar and most online calendaring systems.
We just had the first Flounder meeting, which went well. We had some interesting discussion of storage technology, and I learnt a few new things. Some people did the ZFS training and BTRFS training, and we had lots of interesting discussion.
Andrew Pam gave a summary of new things in Linux and talked about the sites lwn.net, gamingonlinux.com, and cnx-software.com that he uses to find Linux news. One thing he talked about is the latest developments with the Steam Deck, which is driving Linux support in Steam games. The site protondb.com tracks Linux support in Steam games.
We had some discussion of BPF, for an introduction to that technology see the BPF lecture from LCA 2022.
The next meeting (Saturday 5th of March 1PM Melbourne time) will focus on running your own mail server which is always of interest to people who are interested in system administration and which is probably of more interest than usual because of Google forcing companies with “a legacy G Suite subscription” to transition to a more expensive “Business family” offering.
I “recently” wrote about obtaining a new (to me, actually quite old) computer over in The Apple Power Macintosh 7200/120 PC Compatible (Part 1). This post is a bit of a detour, but may help others understand why some images they download from the internet don’t work.
Disk partitioning is (of course) a way to divide up a single disk into multiple volumes (partitions) for different uses. While the idea is similar, computer platforms over the ages have done this in a variety of different ways, with varying formats on disk, and varying limitations. The ones that you’re most likely to be familiar with are the MBR partitioning scheme (from the IBM PC), and the GPT partitioning scheme (common for UEFI systems such as the modern PC and Mac). One you’re less likely to be familiar with is the Apple Partition Map scheme.
The way all IBM PCs and compatibles worked from the introduction of MS-DOS 2.0 in 1983 until some time after 2005 was the Master Boot Record partitioning scheme. It was outrageously simple: of the first 512 byte sector of a disk, the first 446 bytes were for the bootstrapping code (the “boot sector”), the last 2 bytes were for the magic two bytes telling the BIOS this disk was bootable, and the other 64 bytes were four entries of 16 bytes, each describing a disk partition. The Wikipedia page is a good overview of what it all looks like. Since “four partitions should be enough for anybody” wasn’t going to last, DOS 3.2 introduced “extended partitions”, which was just using one of those 4 partitions as another similar data structure that could point to more partitions.
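To make that layout concrete, here is a small Python sketch (mine, not from the original post) that parses the four primary partition entries out of an MBR image, using the offsets just described:

import struct

def read_mbr(path):
    """Print the four primary partition entries from a 512-byte MBR."""
    with open(path, "rb") as f:
        sector = f.read(512)
    # Bytes 510-511 hold the 0x55 0xAA "bootable" signature.
    assert sector[510:512] == b"\x55\xaa", "missing MBR signature"
    for i in range(4):
        # The partition table starts at byte 446; each 16-byte entry is:
        # status, CHS start (3 bytes), type, CHS end (3 bytes),
        # LBA start (u32 little-endian), sector count (u32 little-endian).
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        status, ptype, lba_start, sectors = struct.unpack("<B3xB3xII", entry)
        if ptype:  # type 0 means the slot is unused
            print(f"partition {i}: type 0x{ptype:02x}, "
                  f"start LBA {lba_start}, {sectors} sectors")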
In the 1980s (similar to today), the Macintosh was, of course, different. The Apple Partition Map is significantly more flexible than the MBR on PCs. For a start, you could have more than four partitions! You could actually have a lot more than four partitions, as the Apple Partition Map is a single 512-byte sector for each partition, and the partition map is itself a partition. Instead of being block 0 (like the MBR is), it actually starts at block 1, and is contiguous (the Driver Descriptor Record is what’s at block 0). So, once created, it’s hard to extend. Typically it’d be created as 64×512-byte entries, for 32KB… which, it turns out, is actually about enough for anyone.
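The same exercise for the Apple Partition Map, as a rough sketch: the field offsets here (signature at 0, map block count at 4, partition start and length at 8 and 12, name and type strings at 16 and 48) are my reading of the Inside Macintosh structures, so treat this as illustrative rather than authoritative:

import struct

def read_apm(path, block_size=512):
    """Walk an Apple Partition Map; block 0 is the Driver Descriptor
    Record, and the map itself starts at block 1."""
    with open(path, "rb") as f:
        f.seek(block_size)
        first = f.read(block_size)
        sig, _pad, map_blocks = struct.unpack_from(">2sHI", first, 0)
        assert sig == b"PM", "no Apple Partition Map found"
        # The partition map is itself a partition: one 512-byte entry
        # per partition, stored contiguously from block 1.
        for i in range(map_blocks):
            f.seek((1 + i) * block_size)
            entry = f.read(block_size)
            start, count = struct.unpack_from(">II", entry, 8)
            name = entry[16:48].split(b"\x00")[0].decode("mac_roman")
            ptype = entry[48:80].split(b"\x00")[0].decode("mac_roman")
            print(f"{name!r} ({ptype}): start block {start}, {count} blocks")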
The Inside Macintosh reference on the SCSI Manager goes through these structures in more detail. If you’re wondering what language all the coding examples are in, it’s Pascal – which was fairly popular for writing Macintosh applications back in the day.
But the actual partition map isn’t the “interesting” part of all this (and yes, the quotation marks are significant here), because Macs are pretty darn finicky about what disks they’ll boot off, which gets to be interesting if you’re trying to find a CD-ROM image on the internet to boot from, and then use to install an Operating System.
… the preferred programming language changes.
I never programmed a 1980s Macintosh actually in the 1980s. It was sometime in the early 1990s that I first experienced Microsoft Basic for the Macintosh. I’d previously (unknowingly at the time as it was branded Commodore) experienced Microsoft BASIC on the Commodore 16, Commodore 64, and even the Apple ][, but the Macintosh version was something else. It let you do some pretty neat things such as construct a GUI with largely the same amount of effort as it took to construct a Text based UI on the micros I was familiar with.
Okay, to be fair, I’d also dabbled in Microsoft QBasic that came bundled with MS-DOS of the era, which let you do a whole bunch of graphics – so you could theoretically construct a GUI with it. Something I did attempt to do. Programming on the Mac was so much easier to construct a GUI.
Of course, Microsoft Basic wasn’t the preferred way to program on the Macintosh. At that time it was largely Pascal, with C being something that also existed – but you were going to see Pascal in Inside Macintosh. It was probably somewhat fortuitous that I’d poked at Pascal a bit as something alternate to look at in the high school computing classes. I can only remember using TurboPascal on DOS systems and never actually writing Pascal on the Macintosh.
By the middle part of the 1990s though, I was firmly incompetently writing C on the Mac. No doubt the quality of my code increased after I’d done some university courses actually covering the language; before that, the only practical way I had to attempt to write anything useful was looking at Inside Macintosh examples in Pascal, and “C for Dummies”, which was very not-Macintosh. Writing C on UNIX/Linux was a lot easier – everything was made for it, including Actual Documentation!
Anyway, in the early 2000s I ran MacOS X for a bit on my white iBook G3, and did a (very) small amount of GUI / Project Builder (the precursor to Xcode) related development – instead largely focusing on command line / X11 things. The latest coolness was using Objective-C to program applications (unless you were bringing over your Classic MacOS Carbon based application, in which case you could still write C). Enter some (incompetent) Objective-C coding!
Then Apple went to x86, so the hardware ceased being interesting, and I had no reason to poke at it even as a side effect of having hardware that could run the software stack. Enter a long-ass time of Debian, Ubuntu, and Fedora on laptops.
Come 2022 though, and (for reasons I should really write up), I’m poking at a Mac again, and now Swift is the preferred way to write apps. So, I’m (incompetently) hacking away at Swift code. I have to admit, it’s pretty nice. I’ve managed to be somewhat productive in a relatively short amount of time, and all the affordances in the language are geared towards the kind of safety that is a PITA when coding in C.
So this is my WIP utility to be able to import photos from a Shotwell database into the macOS Photos app:
There’s a lot of rough edges and unknowns left, including how to actually do the import (it looks like there’s going to be Swift code doing AppleScript things as the PhotoKit API is inadequate). But hey, some incompetent hacking in not too much time has a kind-of photo browser thing going on that feels pretty snappy.
Recently I read Michael Snoyman’s post on combining Axum, Hyper, Tonic and Tower. While his solution worked, it irked me – it seemed like there should be a much tighter solution possible.
I can deep dive into the code in a later post perhaps, but I think there are four points of difference. One, since the post was written, Axum has started boxing its routes: so the enum dispatch approach taken, which delivers low overheads, actually has no benefit today.
Two, while writing out the entire type by hand has some benefits, async code is much more pithy.
Thirdly, the code in the post is entirely generic, except for the routing function itself.
And fourth, the outer Service<AddrStream> is an unnecessary layer to abstract over: given the similar constraints – the inner Service must take Request<..> – it is possible to just not use a couple of helpers and instead work directly with Service<Request...>.
So, onto a pithier version.
First, the app server code itself.
use std::{convert::Infallible, net::SocketAddr};
use axum::routing::get;
use hyper::{server::conn::AddrStream, service::make_service_fn};
use hyper::{Body, Request};
use tonic::async_trait;
use demo::echo_server::{Echo, EchoServer};
use demo::{EchoReply, EchoRequest};
struct MyEcho;
#[async_trait]
impl Echo for MyEcho {
async fn echo(
&self,
request: tonic::Request<EchoRequest>,
) -> Result<tonic::Response<EchoReply>, tonic::Status> {
Ok(tonic::Response::new(EchoReply {
message: format!("Echoing back: {}", request.get_ref().message),
}))
}
}
#[tokio::main]
async fn main() {
let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
let axum_service = axum::Router::new().route("/", get(|| async { "Hello world!" }));
let grpc_service = tonic::transport::Server::builder()
.add_service(EchoServer::new(MyEcho))
.into_service();
let both_service =
demo_router::Router::new(axum_service, grpc_service, |req: &Request<Body>| {
Ok::<bool, Infallible>(
req.headers().get("content-type").map(|x| x.as_bytes())
== Some(b"application/grpc"),
)
});
let make_service = make_service_fn(move |_conn: &AddrStream| {
let both_service = both_service.clone();
async { Ok::<_, Infallible>(both_service) }
});
let server = hyper::Server::bind(&addr).serve(make_service);
if let Err(e) = server.await {
eprintln!("server error: {}", e);
}
}
Note the Router: it takes the two services and an Fn to determine which to use on any given request. Then we just drop that composed service into make_service_fn and we’re done.
Next up we have the Router implementation. This is generic across any two Service<Request<...>> types, as long as they are both Into<Bytes> for their Data, and Into<Box<dyn Error>> for errors.
use std::{future::Future, pin::Pin, task::Poll};
use http_body::combinators::UnsyncBoxBody;
use hyper::{body::HttpBody, Body, Request, Response};
use tower::Service;
#[derive(Clone)]
pub struct Router<First, Second, F> {
first: First,
second: Second,
discriminator: F,
}
impl<First, Second, F> Router<First, Second, F> {
pub fn new(first: First, second: Second, discriminator: F) -> Self {
Self {
first,
second,
discriminator,
}
}
}
impl<First, Second, FirstBody, FirstBodyError, SecondBody, SecondBodyError, F, FErr>
Service<Request<Body>> for Router<First, Second, F>
where
First: Service<Request<Body>, Response = Response<FirstBody>>,
First::Error: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
First::Future: Send + 'static,
First::Response: 'static,
Second: Service<Request<Body>, Response = Response<SecondBody>>,
Second::Error: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
Second::Future: Send + 'static,
Second::Response: 'static,
F: Fn(&Request<Body>) -> Result<bool, FErr>,
FErr: Into<Box<dyn std::error::Error + Send + Sync>> + Send + 'static,
FirstBody: HttpBody<Error = FirstBodyError> + Send + 'static,
FirstBody::Data: Into<bytes::Bytes>,
FirstBodyError: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
SecondBody: HttpBody<Error = SecondBodyError> + Send + 'static,
SecondBody::Data: Into<bytes::Bytes>,
SecondBodyError: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
{
type Response = Response<
UnsyncBoxBody<
<hyper::Body as HttpBody>::Data,
Box<dyn std::error::Error + Send + Sync + 'static>,
>,
>;
type Error = Box<dyn std::error::Error + Send + Sync + 'static>;
type Future =
Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static>>;
fn poll_ready(
&mut self,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Result<(), Self::Error>> {
match self.first.poll_ready(cx) {
Poll::Ready(Ok(())) => match self.second.poll_ready(cx) {
Poll::Ready(Ok(())) => Poll::Ready(Ok(())),
Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),
Poll::Pending => Poll::Pending,
},
Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),
Poll::Pending => Poll::Pending,
}
}
fn call(&mut self, req: Request<Body>) -> Self::Future {
let discriminant = { (self.discriminator)(&req) };
let (first, second) = if matches!(discriminant, Ok(false)) {
(Some(self.first.call(req)), None)
} else if matches!(discriminant, Ok(true)) {
(None, Some(self.second.call(req)))
} else {
(None, None)
};
let f = async {
Ok(match discriminant.map_err(Into::into)? {
true => second
.unwrap()
.await
.map_err(Into::into)?
.map(|b| b.map_data(Into::into).map_err(Into::into).boxed_unsync()),
false => first
.unwrap()
.await
.map_err(Into::into)?
.map(|b| b.map_data(Into::into).map_err(Into::into).boxed_unsync()),
})
};
Box::pin(f)
}
}
Interesting things here – I use boxed_unsync to abstract over the body concrete type, and I implement the future using async code rather than as a separate struct. It becomes much smaller even after a few bits of extra type constraining.
One thing that flummoxed me for a little while was the need to capture the future for the underlying response outside of the async block. Failing to do so provokes a 'static requirement which was tricky to debug. Fortunately there is a bug on making this easier to diagnose in rustc already. The underlying problem is that if you create the async block and then dereference self, the type for the impl of .first has to live an arbitrary time. Whereas by capturing the future immediately, only the impl of the future has to live an arbitrary time, and that doesn’t then require changing the signature of the function.
This is almost worth turning into a crate – I couldn’t see an existing one when I looked, though it does end up rather small – < 100 lines. What do you all think?
The first meeting will start at 1PM Australian Eastern time (Melbourne/Sydney) which is +1100 on Saturday the 5th of February.
I will start the video chat an hour early in case someone makes a timezone mistake and gets there an hour before it starts. If anyone else joins early we will have random chat until the start time (deliberately avoiding topics worthy of the main meeting). The link http://b.coker.com.au will redirect to the meeting URL on the day.
The first scheduled talk is a summary and discussion of free software related news. Anyone who knows of something new that excites them is welcome to speak about it.
The main event is discussion of storage technology and hands-on training on BTRFS and ZFS for those who are interested. Here are the ZFS training notes and here are the BTRFS training notes. Feel free to do the training exercises on your own VM before the meeting if you wish.
Then discussion of the future of the group and the use of FOSS social media. While social media is never going to be compulsory some people will want to use it to communicate and we could run some servers for software that is considered good (lots of server capacity is available).
Finally we have to plan future meetings and decide on which communication methods are desired.
The BBB instance to be used for the video conference is sponsored by NZOSS and Catalyst Cloud.
Since PM Scott Morrison did not announce the federal election date last week, it will now be held somewhere between March and May (see the post from ABC’s Antony Green for details). Various aspects of elections are covered in the Civics & Citizenship Australian Curriculum in Years 4, 5 and 6. Students are interested in […]

The post Covering the federal election, before the election first appeared on OpenSTEM Pty Ltd.

Flounder is a new free software users group based in the Australia/NZ area. Flounder stands for FLOSS (Free Libre Open Source Software) down under.
Here is my blog post describing the initial idea; a comment from d3Xt3r suggested the name. Flounder is a group of fish with species native to Australia and NZ.
The main aim is to provide educational benefits to free software users, via an online meeting with a scope larger than one country, that can’t be obtained by watching YouTube videos etc. When the pandemic ends we will keep running this, as there are benefits to be had from a meeting of a wide geographic scope that can’t be obtained from meetings in a single city. People from other countries are welcome to attend, but they aren’t the focus of the meeting.
Until we get a better DNS name, the address http://b.coker.com.au will redirect to the BBB instance used for online meetings (the meeting address isn’t yet set up, so it currently redirects to the blog). The aim is that there will always be a short URL for the meeting, so anyone whose device loses contact can quickly type the URL into their backup device.
The first meeting will be on the 5th of Feb 2022 at 1PM Melbourne time +1100. When we get a proper domain I’ll publish a URL for an iCal file with entries for all meetings. I will also find some suitable way for meeting times to be localised (I’m sure there’s a WordPress plugin for that).
For the hands-on part of the meetings there will be virtual machine images you can download to run on your own system (tested with KVM, should work with other VM systems) and the possibility of logging in to a running VM. The demonstration VMs will have public IPv6 addresses and will also be available through different ports on a single IPv4 address, having IPv6 on your workstation will be convenient for you but you can survive without it.
Linux Australia has a list of LUGs in Australia; is there a similar list for NZ? One thing I’d like to see is a list of links to iCal files for all the meetings, and also an iCal aggregator for all iCal feeds of online meetings. I’ll host it myself if necessary, but it’s probably best to do it via Linux Australia (Linux Australasia?) if possible.
I’m attending the https://linux.conf.au/ conference online this weekend, which is always a good opportunity for some sideline hacking.
I found something boneheaded doing that today.
There have been a few times while developing the OpenHMD Rift driver where I’ve noticed something strange and followed the thread until it made sense. Sometimes that leads to improvements in the driver, sometimes not.
In this case, I wanted to generate a graph of how long the computer vision processing takes – from the moment each camera frame is captured until poses are generated for each device.
To do that, I have some logging branches that output JSON events to log files, and I write scripts to process those. I used that data and produced:
Two things caught my eye in this graph. The first is the way the baseline latency (pink lines) increases from ~20ms to ~58ms. The 2nd is the quantisation effect, where pose latencies are clearly moving in discrete steps.
Neither of those should be happening.
Camera frames are being captured from the CV1 sensors every 19.2ms, and it takes 17-18ms for them to be delivered across the USB. Depending on how many IR sources the cameras can see, figuring out the device poses can take a different amount of time, but the baseline should always hover around 17-18ms, because the fast “device tracking locked” case takes as little as 1ms.
Did you see me mention 19.2ms as the interframe period? Guess what the spacing on those quantisation levels is in the graph? I recognised it as implying that something in the processing is tied to frame timing when it should not be.
This 2nd graph helped me pinpoint what exactly was going on. This graph is cut from the part of the session where the latency has jumped up. What it shows is a ~1 frame delay between when the frame is received (frame-arrival-finish-local-ts) and when the initial analysis even starts!
That could imply that the analysis thread is just busy processing the previous frame and doesn’t get to start working on the new one yet – but the graph says that fast analysis is typically done in 1-10ms at most. It should rarely be busy when the next frame arrives.
This is where I found the boneheaded code – a rookie mistake I wrote when putting the image analysis threads in place early on in the driver development, and never noticed.
There are 3 threads involved: the USB video capture thread, plus the fast analysis and long analysis threads.
These 3 threads communicate using frame worker queues passing frames between each other. Each analysis thread does this pseudocode:
while driver_running:
    Pop a frame from the queue
    Process the frame
    Sleep for new frame notification
The problem is in the 3rd line. If the driver is ever still processing the frame in line 2 when a new frame arrives – say because the computer got really busy – the thread sleeps anyway and won’t wake up until the next frame arrives. At that point, there’ll be 2 frames in the queue, but it still only processes one – so the analysis gains a 1 frame latency from that point on. If it happens a second time, it gets later by another frame! Any further and it starts reclaiming frames from the queues to keep the video capture thread fed – but it only reclaims one frame at a time, so the latency remains!
The fix is simple:
while driver_running:
    Pop a frame
    Process the frame
    if queue_is_empty():
        sleep for new frame notification
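In Python-flavoured terms, the corrected loop looks something like this sketch, using a condition variable and a deque (the names are hypothetical, not the actual OpenHMD internals):

import threading
from collections import deque

frame_queue = deque()              # filled by the video capture thread
new_frame = threading.Condition()  # notified on each frame arrival

def analysis_loop(driver_running, process_frame):
    while driver_running():
        with new_frame:
            # Only sleep when there is genuinely nothing queued; if a
            # frame arrived while we were busy processing, take it now
            # instead of falling one frame further behind per wakeup.
            while not frame_queue and driver_running():
                new_frame.wait()
            if not frame_queue:
                break  # driver shut down
            frame = frame_queue.popleft()
        process_frame(frame)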
Doing that for both the fast and long analysis threads changed the profile of the pose latency graph completely.
This is a massive win! To be clear, this has been causing problems in the driver for at least 18 months but was never obvious from the logs alone. A single good graph is worth a thousand logs.
What does this mean in practice?
The way the fusion filter I’ve built works, in between pose updates from the cameras, the position and orientation of each device are predicted / updated using the accelerometer and gyro readings. Particularly for position, using the IMU for prediction drifts fairly quickly. The longer the driver spends ‘coasting’ on the IMU, the less accurate the position tracking is. So, the sooner the driver can get a correction from the camera to the fusion filter the less drift we’ll get – especially under fast motion. Particularly for the hand controllers that get waved around.
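To see why coasting on the IMU drifts, consider a minimal dead-reckoning step (an illustrative sketch only; the real code is a Kalman filter with bias estimation, not this):

def imu_predict(pos, vel, accel_world, dt):
    """One IMU integration step between camera observations (per axis).

    Any small error in accel_world (bias, noise, imperfect gravity
    subtraction) integrates once into velocity and twice into position,
    so position error grows roughly quadratically with the time spent
    'coasting' without a camera correction."""
    new_vel = vel + accel_world * dt
    new_pos = pos + vel * dt + 0.5 * accel_world * dt * dt
    return new_pos, new_vel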
Poses are now being updated up to 40ms earlier and the baseline is consistent with the USB transfer delay.
You can also visibly see the effect of the JPEG decoding support I added over Christmas. The ‘red’ camera is directly connected to USB3, while the ‘khaki’ camera is feeding JPEG frames over USB2 that then need to be decoded, adding a few ms delay.
The latency reduction is nicely visible in the pose graphs, where the ‘drop shadow’ effect of pose updates tailing fusion predictions largely disappears and there are fewer large gaps in the pose observations when long analysis happens (visible as straight lines jumping from point to point in the trace):
Yes, the blog is still on. January 2004 I moved to WordPress, and it is still here January 2022. I didn’t write much last year (neither here, nor experimenting with the Hey blog). I didn’t post anything to Instagram last year either from what I can tell, just a lot of stories.
August 16 2021, I realised I was 1,000 days from May 12 2024, which is when I become 40. As of today, that leaves 850 days. Did I squander the last 150 days? I’m back to writing almost daily in the Hobonichi Techo (I think last year and the year before were mostly washouts; I barely scribbled anything offline).
I got a new Apple Watch Series 7 yesterday. I can say I used the Series 4 well (79% battery life), purchased in the UK when I broke my Series 0 in Edinburgh airport.
TripIt stats for last year claimed 95 days on the road. This is, of course, a massive joke, but I’m glad I did get to visit London, Lisbon, New York, San Francisco, and Los Angeles without issue. I spent a lot of time in Kuantan, did a bunch of Langkawi trips, and also stayed for many months at the Grand Hyatt Kuala Lumpur during the May lockdowns (I practically stayed there all lockdown).
With 850 days to go till I’m 40, I have plenty I would like to achieve. I think I’ll write a lot more here. And elsewhere. Get back into the habit of doing. And publishing by learning and doing. No fear. Not that I wasn’t doing, but its time to be prolific with what’s been going on.
Once again time has passed, and another update on Oculus Rift support feels due! As always, it feels like I’ve been busy with work and not found enough time for Rift CV1 hacking. Nevertheless, looking back over the history since I last wrote, there’s quite a lot to tell!
In general, the controller tracking is now really good most of the time. Like, wildly-swing-your-arms-and-not-lose-track levels (most of the time). The problems I’m hunting now are intermittent and hard to identify in the moment while using the headset – hence my enthusiasm over the last updates for implementing stream recording and a simulation setup. I’ll get back to that.
Since I last wrote, the tracking improvements have mostly come from identifying and rejecting incorrect measurements. That is, if I have 2 sensors active and 1 sensor says the left controller is in one place, but the 2nd sensor says it’s somewhere else, we’ll reject one of those – choosing the pose that best matches what we already know about the controller: the last known position, the gravity direction the IMU is detecting, and the last known orientation. The tracker will now also reject observations for a time if (for example) the reported orientation is outside the range we expect. The IMU gyroscope can track the orientation of a device for quite a while, so it can be relied on to provide strong pose priors once we’ve integrated a few camera observations to get the yaw correct.
It works really well, but I think improving this area is still where most future refinements will come. That and avoiding incorrect pose extractions in the first place.
The above plot is a sample of headset tracking, showing the extracted poses from the computer vision vs the pose priors / tracking from the Kalman filter. As you can see, there are excursions in both position and orientation detected from the video, but these are largely ignored by the filter, producing a steadier result.
This plot shows the left controller being tracked during a Beat Saber session. The controller tracking plot is quite different, because controllers move a lot more than the headset, and have fewer LEDs to track against. There are larger gaps here in the timeline while the vision re-acquires the device – and in those gaps you can see the Kalman filter interpolating using IMU input only (sometimes well, sometimes less so).
Another nice improvement is a change in the way the search for a tracked device is made in a video frame. Before it starts looking for a particular device, it now always gets the latest estimate of the previous device position from the fusion filter. Previously, it would use the estimate of the device pose as it was when the camera exposure happened – but between then and the moment we start analysis, more IMU observations and other camera observations might arrive and be integrated into the filter, which will have updated the estimate of where the device was in the frame.
This is the bit where I think the Kalman filter is particularly clever: estimates of the device position at an earlier or later exposure can improve and refine the filter’s estimate of where the device was when the camera captured the frame we’re currently analysing! So clever. That mechanism (lagged state tracking) is what allows the filter to integrate past tracking observations once the analysis is done – so even if the video frame search takes 150ms (for example), it will correct the filter’s estimate of where the device was 150ms in the past, which ripples through and corrects the estimate of where the device is now.
To improve the identification of devices, I measured the actual angle from which LEDs are visible (about 75 degrees off axis) and measured their size. The pose matching now has a better idea of which LEDs should be visible for a proposed orientation, and what pixel size we expect them to have at a particular distance.
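That visibility check amounts to a simple per-LED angle test. A hypothetical sketch of the idea (not the actual OpenHMD code):

import numpy as np

def led_visible(led_normal, led_to_camera, max_angle_deg=75.0):
    """True if the camera lies within max_angle_deg of the LED's facing
    direction. Both arguments are unit vectors in the same frame; the
    75 degree figure is the measured off-axis visibility from the post."""
    return np.dot(led_normal, led_to_camera) >= np.cos(np.radians(max_angle_deg))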
I fixed a bug in the output pose smoothing filter where it would glitch as you turned completely around and crossed the point where the angle jumps from +pi to -pi or vice versa.
I got a wide-angle hi-res webcam and took photos of a checkerboard pattern through the lens of my headset, then used OpenCV and panotools to calculate new distortion and chromatic aberration parameters for the display. For me, this has greatly improved things. I’m waiting to hear if that’s true for everyone, or if I’ve just fixed it for my headset.
Config blocks! A long time ago, I prototyped code to create a persistent OpenHMD configuration file store in ~/.config/openhmd. The rift-kalman-filter branch now uses that to store the configuration blocks that it reads from the controllers. The first time a controller is seen, it will load the JSON calibration block as before, but it will now store it in that directory – removing a multiple second radio read process on every subsequent startup.
To go along with that, I have an experimental rift-room-config branch that creates a rift-room-config.json file and stores the camera positions after the first startup. I haven’t pushed that to the rift-kalman-filter branch yet, because I’m a bit worried it’ll cause surprising problems for people. If the initial estimate of the headset pose is wrong, the code will back-project the wrong positions for the cameras, which will get written to the file and cause every subsequent run of OpenHMD to generate bad tracking until the file is removed. The goal is to have a loop that monitors whether the camera positions seem stable based on the tracking reports, and to use averaging and resetting to correct them if not – or at least to warn the user that they should re-run some (non-existent) setup utility.
The final big ticket item was a rewrite of how the USB video frame capture thread collects pixels and passes them to the analysis threads. This now does less work in the USB thread, so misses fewer frames, and also I made it so that every frame is now searched for LEDs and blob identities tracked with motion vectors, even when no further analysis will be done on that frame. That means that when we’re running late, it better preserves LED blob identities until the analysis threads can catch up – increasing the chances of having known LEDs to directly find device positions and avoid searching. This rewrite also opened up a path to easily support JPEG decode – which is needed to support Rift Sensors connected on USB 2.0 ports.
I mentioned the recording simulator continues to progress. Since the tracking problems are now getting really tricky to figure out, this tool is becoming increasingly important. So far, I have code in OpenHMD to record all video and tracking data to a .mkv file. Then, there’s a simulator tool that loads those recordings. Currently it is capable of extracting the data back out of the recording, parsing the JSON and decoding the video, and presenting it to a partially implemented simulator that then runs the same blob analysis and tracking OpenHMD does. The end goal is a Godot based visualiser for this simulation, and to be able to step back and forth through time examining what happened at critical moments so I can improve the tracking for those situations.
To make recordings, there’s the rift-debug-gstreamer-record
branch of OpenHMD. If you have GStreamer and the right plugins (gst-plugins-good) installed, and you set env vars like this, each run of OpenHMD will generate a recording in the target directory (make sure the target dir exists):
export OHMD_TRACE_DIR=/home/user/openhmd-traces/
export OHMD_FULL_RECORDING=1
The next things that are calling to me are to improve the room configuration estimation and storage as mentioned above – to detect when the poses a camera is reporting don’t make sense because it’s been bumped or moved.
I’d also like to add back in tracking of the LEDS on the back of the headset headband, to support 360 tracking. I disabled those because they cause me trouble – the headband is adjustable relative to the headset, so the LEDs don’t appear where the 3D model says they should be and that causes jitter and pose mismatches. They need special handling.
One last thing I’m finding exciting is a new person taking an interest in Rift S and starting to look at inside-out tracking for that. That’s just happened in the last few days, so not much to report yet – but I’ll be happy to have someone looking at that while I’m still busy over here in CV1 land!
As always, if you have any questions, comments or testing feedback – hit me up at thaytan@noraisin.net or on @thaytan Twitter/IRC.
Thank you to the kind people signed up as GitHub Sponsors for this project!
For a long time computer manufacturers have tried to differentiate themselves and their products from their competitors with fancy names with odd capitalisation and spelling. But as an author, using these names does a disservice to the reader: how are they to know that DEC is pronounced as if it were written Dec (“deck”)?
It's time we pushed back, and wrote for our readers, not for corporations.
It's time to use standard English rules for these Corporate Fancy Names. Proper names begin with a capital, unlike "ciscoSystems®" (so bad that Cisco itself moved away from it). Words are separated by spaces, so "Cisco Systems". Abbreviations and acronyms are written in lower case if they are pronounced as a word, in upper case if each letter is pronounced: so "ram" and "IBM®".
So from here on in I'll be using the following:
I'd encourage you to try this in your own writing. It does look odd for the first time, but the result is undeniably more readable. If we are not writing to be understood by our audience then we are nothing more than an unpaid member of some corporation's marketing team.
I gave the talk On The Use and Misuse of Decorators as part of PyConline AU 2021, the second in an annoyingly long sequence of not-in-person PyCon AU events. Here are some code samples that you might be interested in:
@property implementation

This shows a demo of @property-style getters. Setters are left as an exercise :)
def demo_property(f):
f.is_a_property = True
return f
class HasProperties:
def __getattribute__(self, name):
ret = super().__getattribute__(name)
if hasattr(ret, "is_a_property"):
return ret()
else:
return ret
class Demo(HasProperties):
@demo_property
def is_a_property(self):
return "I'm a property"
def is_a_function(self):
return "I'm a function"
a = Demo()
print(a.is_a_function())
print(a.is_a_property)
@run (The Scoped Block)

@run is a decorator that will run the body of the decorated function, and then store the result of that function in place of the function’s name. It makes it easier to assign the results of complex statements to a variable, and get the advantages of functions having less leaky scopes than if or loop blocks.
def run(f):
return f()
@run
def hello_world():
return "Hello, World!"
print(hello_world)
@apply (Multi-line stream transformers)

def apply(transformer, iterable_):
def _applicator(f):
return(transformer(f, iterable_))
return _applicator
@apply(map, range(100))
def fizzbuzzed(i):
if i % 3 == 0 and i % 5 == 0:
return "fizzbuzz"
if i % 3 == 0:
return "fizz"
elif i % 5 == 0:
return "buzz"
else:
return str(i)
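Since apply hands the decorated function straight to the transformer, fizzbuzzed ends up bound to the resulting map iterator rather than to a function, so it can be consumed directly. A quick usage sketch:

# fizzbuzzed is now map(<the decorated function>, range(100)), not a function
for line in fizzbuzzed:
    print(line)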
def html(f):
builder = HtmlNodeBuilder("html")
f(builder)
return builder.build()
class HtmlNodeBuilder:
def __init__(self, tag_name):
self.tag_name = tag_name
self.nodes = []
def node(self, f):
builder = HtmlNodeBuilder(f.__name__)
f(builder)
self.nodes.append(builder.build())
def text(self, text):
self.nodes.append(text)
def build(self):
nodes = "\n".join(self.nodes)
return f"<{self.tag_name}>\n{nodes}\n</{self.tag_name}>"
@html
def document(b):
@b.node
def head(b):
@b.node
def title(b):
b.text("Hello, World!")
@b.node
def body(b):
for i in range(10, 0, -1):
@b.node
def p(b):
b.text(f"{i}")
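After the decorators run, document is the rendered string (the html decorator replaced the function with the result of builder.build()), so using it is just:

# document is the rendered HTML string, not a function
print(document)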
This is an incomplete implementation of a code registry for handling simple text processing tasks:
def register(self, input, output):
def _register_code(f):
self.registry[(input, output)] = f
return f
return _register_code
in_type = (iterable[str], (WILDCARD, ))
out_type = (Counter, (WILDCARD, frequency))
@registry.register(in_type, out_type)
def count_strings(strings):
    return Counter(strings)
@registry.register(
    (iterable[str], (WILDCARD, )),
    (iterable[str], (WILDCARD, lowercase)),
)
def words_to_lowercase(words): …
@registry.register(
    (iterable[str], (WILDCARD, )),
    (iterable[str], (WILDCARD, no_punctuation)),
)
def words_without_punctuation(words): …
def find_steps( self, input_type, input_attrs, output_type, output_attrs ):
hand_wave()
def give_me(self, input, output_type, output_attrs):
steps = self.find_steps(
type(input), (), output_type, output_attrs
)
temp = input
for step in steps:
temp = step(temp)
return temp
A while ago, I wrote a post about how to build and test my Oculus CV1 tracking code in SteamVR using the SteamVR-OpenHMD driver. I have updated those instructions and moved them to https://noraisin.net/diary/?page_id=1048 – so use those if you’d like to try things out.
The pandemic continues to sap my time for OpenHMD improvements. Since my last post, I have been working on various refinements. The biggest visible improvements are:
Adding velocity and acceleration reporting is needed by VR apps that support throwing things. It means that throwing objects and using gravity-grab to fetch objects works in Half-Life: Alyx, making it playable now.
The rewrite to the pose transformation code fixed problems where the rotation of controller models in VR didn’t match the rotation applied in the real world. Controllers would appear attached to the wrong part of the hand, and rotate around the wrong axis. Movements feel more natural now.
My focus going forward is on fixing glitches that are caused by tracking losses or outliers. Those problems happen when the computer vision code either fails to match what the cameras see to the device LED models, or when it matches incorrectly.
Tracking failure leads to the headset view or controllers ‘flying away’ suddenly. Incorrect matching leads to controllers jumping and jittering to the wrong pose, or swapping hands. Either condition is very annoying.
Unfortunately, as the tracking has improved the remaining problems get harder to understand and there is less low-hanging fruit for improvement. Further, when the computer vision runs at 52Hz, it’s impossible to diagnose the reasons for a glitch in real time.
I’ve built a branch of OpenHMD that uses GStreamer to record the CV1 camera video, plus IMU and tracking logs into a video file.
To go with those recordings, I’ve been working on a replay and simulation tool, that uses the Godot game engine to visualise the tracking session. The goal is to show, frame-by-frame, where OpenHMD thought the cameras, headset and controllers were at each point in the session, and to be able to step back and forth through the recording.
Right now, I’m working on the simulation portion of the replay, that will use the tracking logs to recreate all the poses.
I’ve been asked more than once what it was like at the beginning of Ubuntu, before it was a company, when an email from someone I’d never heard of came into my mailbox.
We’re coming up on 20 years now since Ubuntu was founded, and I had cause to do some spelunking into IMAP archives recently… while there I took the opportunity to grab the very first email I received.
The Ubuntu long shot succeeded wildly. Of course, we liked to joke about how spammy those emails were: cold-calling a raft of Debian developers with job offers, some of them were closer to phishing attacks :). This very early one – I was the second employee (though I started at 4 days a week to transition my clients gradually) – was less so.
I think it’s interesting though to note how explicit a gamble this was framed as: a time limited experiment, funded for a year. As the company scaled, this very rapidly became a hiring problem and the horizon had to be pushed out to 2 years to get folk to join.
And of course, while we started with arch in earnest, we rapidly hit significant usability problems, some of which were solvable with porcelain and shallow non-architectural changes, and we initially built patches, and then the bazaar VCS project, to tackle those. But others were not: for instance, I recall exceeding the 32K hard link limit on ext3 due to a single long history during a VCS conversion. The sum of these challenges led us to create the bzr project, a ground up rethink of our version control needs, architecture, implementation and user-experience. While ultimately git has conquered all, bzr had – still has in fact – extremely loyal advocates, due to its laser sharp focus on usability.
Anyhow, here it is: one of the original no-name-here-yet, aka Ubuntu, introductory emails (with permission from Mark, of course). When I clicked through to the website Mark provided there was a link there to a fantastical website about a space tourist… not what I had expected to be reading in Adelaide during LCA 2004.
From: Mark Shuttleworth <xxx@xxx>
To: Robert Collins <xxx@xxx>
Date: Thu, 15 Jan 2004, 04:30
Tom Lord gave me your email address, I believe he’s
already sent you the email that I sent him so I’m sure
you have some background.
In short, I am going to fund some open source
development for a year. This is part of a new project
that I will be getting off the ground in the coming
weeks. I don’t know where it will lead, it’s flying in
the face of a stiff breeze but I think at the end of
the day it will at least fund a few very good open
source developers for a full year to work on the
projects they like most.
One of the pieces of the puzzle is high end source
code management. I’ll be looking to build an
infrastructure that will manage source code for
between 100 and 8000 open source projects (yes,
there’s a big difference between the two, I don’t know
at which end of the spectrum we will be at the end of
the year but our infrastructure will have to at least
be capable of scaling to the latter within two years)
with upwards of 2000 developers, drawing code from a
variety of sources, playing with it and spitting it
out regularly in nice packages.
Arch and Subversion seem to be the two leading
contenders for “next generation open source sccm”. I’d
be interested in your thoughts on the two of them, and
how they stack up. I’m looking to hire one person who
will lead that part of the effort. They’ll work alone
from home, and be responsible for two things. First,
extending the tool (arch or svn) in ways that help the
project. Such extensions will be released under an
open source licence, and hopefully embraced by the
tools maintainers and included in the mainline code
for the tool. And second, they will be responsible for
our large-scale implementation of SCCM, using that
tool, and building the management scripts and other
infrastructure to support such a large, and hopefully
highly automated, set of repositories.
Would you be interested in this position? What
attributes and experience do you think would make you
a great person to have on the team? What would your
salary expectation be, as a monthly figure, for a one
year contract full time?
I’m currently on your continent, well, just off it. On
Lizard Island, up North. Am headed today for Brisbane,
then on the 17th to Launceston via Melbourne. If you
happen to be on any of those stops, would you be
interested in meeting up to discuss it further?
If you’re curious you can find out a bit more about me
at www.markshuttleworth.com. This project is much
lower key than some of what you’ll find there. It’s a
very long shot indeed. But if at worst all that
happens is a bunch of open source work gets funded at
my expense I’ll feel it was money well spent.
Cheers,
Mark
=====
—
“Good judgement comes from experience, and often experience
comes from bad judgement” – Rita Mae Brown
I have always liked cryptography, and public-key cryptography in particular. When Pretty Good Privacy (PGP) first came out in 1991, I not only started using it, but also looked at the documentation and the code to see how it worked. I created my own implementation in C using very small keys, just to understand it better.
Cryptography has been running a race against both faster and cheaper computing power. And these days, with banking and most other aspects of our lives entirely relying on secure communications, it’s a very juicy target for bad actors.
About 5 years ago, the US National Institute of Standards and Technology (NIST) initiated a search for cryptographic algorithms that can withstand a near-future world where quantum computers with a significant number of qubits are a reality. There have been a number of rounds; mid 2020 saw round 3 and the finalists.
This submission caught my eye some time ago: Classic McEliece, and out of the four finalists it’s the only one that is not lattice-based [wikipedia link].
For Public Key Encryption and Key Exchange Mechanism, Prof Bill Buchanan thinks that the winner will be lattice-based, but I am not convinced.
Tiny side-track: you may wonder, where does the McEliece name come from? From mathematician Robert McEliece (1942-2019). McEliece developed his cryptosystem in 1978, so it’s not just named after him, he designed it. For various reasons that have nothing to do with the mathematical solidity of the ideas, it didn’t get used at the time. He did plenty of other cool things, too. From his Caltech obituary:
He made fundamental contributions to the theory and design of channel codes for communication systems—including the interplanetary telecommunication systems that were used by the Voyager, Galileo, Mars Pathfinder, Cassini, and Mars Exploration Rover missions.
Back to lattices: there are both unknowns (aspects that have not been studied in exhaustive depth) and recent mathematical attacks, both of which create uncertainty – in the crypto sphere as well as for business and politics. Given how long it takes for crypto schemes to get widely adopted, that uncertainty is quite relevant, particularly since cyber security is such a hot topic.
Lattices are definitely interesting, but given what we know so far, it is my feeling that systems based on lattices are more likely to be proven breakable than Classic McEliece, which comes to this finalists’ table with a 40+ year track record of in-depth analysis. Mind that all finalists are of course solid at this stage – but NIST’s thoughts on expected developments and breakthroughs are what is likely to decide the winner. NIST is not looking for shiny; it is looking for very, very solid in all possible ways.
Prof Buchanan recently published implementations for the finalists, and did some benchmarks where we can directly compare them against each other.
We can see that Classic McEliece’s key generation is CPU intensive, but is that really a problem? The large size of its public key may be more of a disadvantage; however, I think the small ciphertext more than offsets that.
As we’re nearing the end of the NIST process, in my opinion, fast encryption/decryption and small ciphertext, combined with the long track record of in-depth analysis, may still see Classic McEliece come out the winner.
Living in California, I’ve (sadly) grown accustomed to needing to keep track of our local air quality index (AQI) ratings, particularly as we live close to places where large wildfires happen every other year.
Last year, Josh and I bought a PurpleAir outdoor air quality meter, which has been great. We contribute our data to a collection of very local air quality meters, which is important, since the hilly nature of the North Bay means that the nearest government air quality ratings can be significantly different to what we experience here in Petaluma.
I recently went looking to pull my PurpleAir sensor data into my Home Assistant setup. Unfortunately, the PurpleAir API does not return the AQI metric for air quality, only the raw PM1.0/PM2.5/PM10 numbers. After some searching, I found a nice template sensor solution on the Home Assistant forums, which I’ve modernised by adding the AQI as a sub-sensor, and adding unique ID fields to each useful sensor, so that you can assign them to a location.
You’ll end up with sensors for the raw PM2.5 value, the PM2.5 AQI value, the US EPA air quality category, temperature, relative humidity and air pressure.
First up, visit the PurpleAir Map, find the sensor you care about, click “Get This Widget”, and then “JSON”. That will give you the URL to set as the resource key in purpleair.yaml.
In Home Assistant, add the following line to your configuration.yaml:
sensor: !include purpleair.yaml
and then add the following contents to purpleair.yaml:
- platform: rest
  name: 'PurpleAir'

  # Substitute in the URL of the sensor you care about. To find the URL, go
  # to purpleair.com/map, find your sensor, click on it, click on "Get This
  # Widget" then click on "JSON".
  resource: https://www.purpleair.com/json?key={KEY_GOES_HERE}&show={SENSOR_ID}

  # Only query once a minute to avoid rate limits:
  scan_interval: 60

  # The state of this sensor is the station's label; the AQI itself is
  # calculated by the purpleair_aqi template sensor below, using code
  # translated from JavaScript found at:
  # https://docs.google.com/document/d/15ijz94dXJ-YAZLi9iZ_RaBwrZ4KtYeCy08goGBwnbCU/edit#
  value_template: >
    {{ value_json["results"][0]["Label"] }}
  unit_of_measurement: ""

  # The value of the sensor can't be longer than 255 characters, but the
  # attributes can. Store away all the data for use by the templates below.
  json_attributes:
    - results

- platform: template
  sensors:
    purpleair_aqi:
      unique_id: 'purpleair_SENSORID_aqi_pm25'
      friendly_name: 'PurpleAir PM2.5 AQI'
      value_template: >
        {% macro calcAQI(Cp, Ih, Il, BPh, BPl) -%}
          {{ (((Ih - Il)/(BPh - BPl)) * (Cp - BPl) + Il)|round|float }}
        {%- endmacro %}
        {% if (states('sensor.purpleair_pm25')|float) > 1000 %}
          invalid
        {% elif (states('sensor.purpleair_pm25')|float) > 350.5 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 500.0, 401.0, 500.4, 350.5) }}
        {% elif (states('sensor.purpleair_pm25')|float) > 250.5 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 400.0, 301.0, 350.4, 250.5) }}
        {% elif (states('sensor.purpleair_pm25')|float) > 150.5 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 300.0, 201.0, 250.4, 150.5) }}
        {% elif (states('sensor.purpleair_pm25')|float) > 55.5 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 200.0, 151.0, 150.4, 55.5) }}
        {% elif (states('sensor.purpleair_pm25')|float) > 35.5 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 150.0, 101.0, 55.4, 35.5) }}
        {% elif (states('sensor.purpleair_pm25')|float) > 12.1 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 100.0, 51.0, 35.4, 12.1) }}
        {% elif (states('sensor.purpleair_pm25')|float) >= 0.0 %}
          {{ calcAQI((states('sensor.purpleair_pm25')|float), 50.0, 0.0, 12.0, 0.0) }}
        {% else %}
          invalid
        {% endif %}
      unit_of_measurement: "bit"
    purpleair_description:
      unique_id: 'purpleair_SENSORID_description'
      friendly_name: 'PurpleAir AQI Description'
      value_template: >
        {% if (states('sensor.purpleair_aqi')|float) >= 401.0 %}
          Hazardous
        {% elif (states('sensor.purpleair_aqi')|float) >= 301.0 %}
          Hazardous
        {% elif (states('sensor.purpleair_aqi')|float) >= 201.0 %}
          Very Unhealthy
        {% elif (states('sensor.purpleair_aqi')|float) >= 151.0 %}
          Unhealthy
        {% elif (states('sensor.purpleair_aqi')|float) >= 101.0 %}
          Unhealthy for Sensitive Groups
        {% elif (states('sensor.purpleair_aqi')|float) >= 51.0 %}
          Moderate
        {% elif (states('sensor.purpleair_aqi')|float) >= 0.0 %}
          Good
        {% else %}
          undefined
        {% endif %}
      entity_id: sensor.purpleair
    purpleair_pm25:
      unique_id: 'purpleair_SENSORID_pm25'
      friendly_name: 'PurpleAir PM 2.5'
      value_template: "{{ state_attr('sensor.purpleair','results')[0]['PM2_5Value'] }}"
      unit_of_measurement: "μg/m3"
      entity_id: sensor.purpleair
    purpleair_temp:
      unique_id: 'purpleair_SENSORID_temperature'
      friendly_name: 'PurpleAir Temperature'
      value_template: "{{ state_attr('sensor.purpleair','results')[0]['temp_f'] }}"
      unit_of_measurement: "°F"
      entity_id: sensor.purpleair
    purpleair_humidity:
      unique_id: 'purpleair_SENSORID_humidity'
      friendly_name: 'PurpleAir Humidity'
      value_template: "{{ state_attr('sensor.purpleair','results')[0]['humidity'] }}"
      unit_of_measurement: "%"
      entity_id: sensor.purpleair
    purpleair_pressure:
      unique_id: 'purpleair_SENSORID_pressure'
      friendly_name: 'PurpleAir Pressure'
      value_template: "{{ state_attr('sensor.purpleair','results')[0]['pressure'] }}"
      unit_of_measurement: "hPa"
      entity_id: sensor.purpleair
I had difficulty getting the AQI to display as a numeric graph when I didn’t set a unit. I went with bit, and that worked just fine. 🤷‍♂️
So, this idea has been brewing for a while now… try and watch all of Doctor Who. All of it. All 38 seasons. Today(ish), we started. First up, from 1963 (first aired not quite when intended due to the Kennedy assassination): An Unearthly Child. The first episode of the first serial.
A lot of iconic things are there from the start: the music, the Police Box, embarrassing moments of not quite remembering what time one is in, and normal humans accidentally finding their way into the TARDIS.
I first saw this way back when I was a child, when it was repeated on ABC TV in Australia for some anniversary of Doctor Who (I forget which one). Well, I saw all but the first episode, as the train home was delayed and stopped outside Caulfield for no reason for ages. Some things never change.
Of course, being a show from the early 1960s, there are some rougher spots. We’re not about to get a picture of diversity, and there’s going to be casual racism and sexism. What will be interesting is noticing these things today, contrasting them with my memory of them at the time (at least for episodes I’ve seen before), and with what I know of the attitudes of the era.
“This year-ometer is not calculating properly” is a very 2020 line though (technically from the second episode).
It’s been a while since my last post about tracking support for the Oculus Rift in February. There have been big improvements since then, and it’s working really well a lot of the time now. It’s gone from “If I don’t make any sudden moves, I can finish an easy Beat Saber level” to “You can’t hide from me!” quality.
Equally, there are still enough glitches and corner cases that I think I’ll still be at this a while.
Here’s a video from 3 weeks ago of (not me) playing Beat Saber on Expert+ setting showing just how good things can be now:
Strap in. Here’s what I’ve worked on in the last 6 weeks:
Most of the biggest improvements have come from improving the computer vision algorithm that’s matching the observed LEDs (blobs) in the camera frames to the 3D models of the devices.
I split the brute-force search algorithm into 2 phases. It now does a first pass looking for ‘obvious’ matches. In that pass, it does a shallow graph search of blobs and their nearest few neighbours against LEDs and their nearest neighbours, looking for a match using a “Strong” match metric. A match is considered strong if expected LEDs match observed blobs to within 1.5 pixels.
Coupled with checks on the expected orientation (matching the Gravity vector detected by the IMU) and the pose prior (expected position and orientation are within predicted error bounds) this short-circuit on the search is hit a lot of the time, and often completes within 1 frame duration.
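As a sketch of what that “strong match” test amounts to (the names and structure here are illustrative, not the actual OpenHMD code):

#include <math.h>
#include <stdbool.h>

#define STRONG_MATCH_PIXELS 1.5

typedef struct { double x, y; } vec2;

/* A candidate pose counts as "strong" if every projected LED lands
 * within 1.5 pixels of the blob it was matched to. The gravity and
 * pose-prior checks described above are applied on top of this. */
static bool pose_is_strong_match(const vec2 *projected_leds,
                                 const vec2 *matched_blobs, int n)
{
    for (int i = 0; i < n; i++) {
        double dx = projected_leds[i].x - matched_blobs[i].x;
        double dy = projected_leds[i].y - matched_blobs[i].y;
        if (hypot(dx, dy) > STRONG_MATCH_PIXELS)
            return false;
    }
    return true;
}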
In the remaining tricky cases, where a deeper graph search is required in order to recover the pose, the initial search reduces the number of LEDs and blobs under consideration, speeding up the remaining search.
I also added an LED size model to the mix – for a candidate pose, it tries to work out how large (in pixels) each LED should appear, and use that as a bound on matching blobs to LEDs. This helps reduce mismatches as devices move further from the camera.
When a brute-force search for pose recovery completes, the system now knows the identity of various blobs in the camera image. One way it avoids a search next time is to transfer the labels into future camera observations using optical-flow tracking on the visible blobs.
The problem is that, even sped up, the search can still take a few frame durations to complete. Previously, LED labels would be transferred from frame to frame as they arrived, but there’s now a unique ID associated with each blob that allows the labels to be transferred even several frames later, once their identity is known.
One of the problems with reverse engineering is the guesswork around exactly what different values mean. I was looking into why the controller movement felt “swimmy” under fast motions, and one thing I found was that the interpretation of the gyroscope readings from the IMU was incorrect.
The touch controllers report IMU angular velocity readings directly as a 16-bit signed integer. Previously, the code would take the reading, divide it by 1024 and use the value as radians/second.
From teardowns of the controller, I know the IMU is an Invensense MPU-6500. From the datasheet, the reported value is actually in degrees per second and appears to be configured for the +/- 2000 °/s range. That yields a calculation of Gyro-rad/s = Gyro-°/s * (2000 / 32768) * (π/180) – or a divisor of 938.734.
The 1024 divisor was under-estimating rotation speed by about 10% – close enough to work until you start moving quickly.
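For concreteness, the corrected conversion looks something like this in C (the function name is mine, not the actual driver code):

#include <math.h>
#include <stdint.h>

/* Convert a raw MPU-6500 gyro reading (16-bit signed, +/- 2000 deg/s
 * full scale) to radians per second. The combined divisor works out
 * to 32768 / 2000 * 180 / pi = 938.734, not the 1024 used before. */
static double gyro_raw_to_rad_per_sec(int16_t raw)
{
    return (double)raw * (2000.0 / 32768.0) * (M_PI / 180.0);
}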
If we don’t find a device in the camera views, the fusion filter predicts motion using the IMU readings – but that quickly becomes inaccurate. In the worst case, the controllers fly off into the distance. To avoid that, I added a limit of 500ms for ‘coasting’. If we haven’t recovered the device pose by then, the position is frozen in place and only rotation is updated until the cameras find it again.
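In code form, the coasting limit is roughly this shape (a sketch with illustrative names, not the real implementation):

#include <stdint.h>

#define COAST_LIMIT_MS 500

typedef struct {
    double position[3];
    double orientation[4]; /* quaternion */
} device_pose;

static void coast_pose(device_pose *pose, const device_pose *imu_prediction,
                       uint64_t ms_since_camera_lock)
{
    /* Rotation stays usable much longer, so always take it from the IMU. */
    for (int i = 0; i < 4; i++)
        pose->orientation[i] = imu_prediction->orientation[i];

    /* Freeze the position once we have been coasting too long, so the
     * device cannot fly off into the distance on accumulated drift. */
    if (ms_since_camera_lock <= COAST_LIMIT_MS) {
        for (int i = 0; i < 3; i++)
            pose->position[i] = imu_prediction->position[i];
    }
}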
I implemented a 1-Euro exponential smoothing filter on the output poses for each device. This is an idea from the Project Esky driver for Project North Star/Deck-X AR headsets, and almost completely eliminates jitter in the headset view and hand controllers shown to the user. The tradeoff is against introducing lag when the user moves quickly – but there are some tunables in the exponential filter to play with for minimising that. For now I’ve picked some values that seem to work reasonably.
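For reference, the 1-Euro filter itself is quite small. Here is a self-contained C sketch of the published algorithm (Casiez et al.) for a single scalar channel; the names and struct layout are mine, not the OpenHMD code, and min_cutoff / beta / d_cutoff are the tunables mentioned above:

#include <math.h>

typedef struct {
    double min_cutoff; /* Hz: smoothing floor at low speeds (jitter) */
    double beta;       /* how quickly the cutoff opens up with speed (lag) */
    double d_cutoff;   /* Hz: cutoff for the speed estimate itself */
    double x_hat, dx_hat;
    int initialised;
} one_euro;

static double smoothing_alpha(double cutoff, double dt)
{
    double r = 2.0 * M_PI * cutoff * dt;
    return r / (r + 1.0);
}

static double one_euro_filter(one_euro *f, double x, double dt)
{
    if (!f->initialised) {
        f->x_hat = x;
        f->dx_hat = 0.0;
        f->initialised = 1;
        return x;
    }

    /* Estimate speed from the previous filtered value, then smooth it. */
    double dx = (x - f->x_hat) / dt;
    double a_d = smoothing_alpha(f->d_cutoff, dt);
    f->dx_hat += a_d * (dx - f->dx_hat);

    /* Faster motion raises the cutoff: less smoothing, less lag. */
    double cutoff = f->min_cutoff + f->beta * fabs(f->dx_hat);
    double a = smoothing_alpha(cutoff, dt);
    f->x_hat += a * (x - f->x_hat);
    return f->x_hat;
}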
Communications with the touch controllers happens through USB radio command packets sent to the headset. The main use of radio commands in OpenHMD is to read the JSON configuration block for each controller that is programmed in at the factory. The configuration block provides the 3D model of LED positions as well as initial IMU bias values.
Unfortunately, reading the configuration block takes a couple of seconds on startup, and blocks everything while it’s happening. Oculus saw that problem and added a checksum in the controller firmware. You can read the checksum first and if it hasn’t changed use a local cache of the configuration block. Eventually, I’ll implement that caching mechanism for OpenHMD but in the meantime it still reads the configuration blocks on each startup.
As an interim improvement I rewrote the radio communication logic to use a state machine that is checked in the update loop – allowing radio communications to be interleaved without blocking the regularly processing of events. It still interferes a bit, but no longer causes a full multi-second stall as each hand controller turns on.
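The state machine is roughly this shape (a sketch with hypothetical transport hooks standing in for the real USB calls):

#include <stdbool.h>

typedef enum {
    RADIO_IDLE,
    RADIO_CMD_SENT,      /* command written, waiting for an ack */
    RADIO_READ_PENDING,  /* ack received, waiting for response data */
} radio_state;

typedef struct { radio_state state; } radio_channel;

/* Hypothetical non-blocking transport hooks. */
bool radio_try_send_next_command(radio_channel *c);
bool radio_ack_received(radio_channel *c);
bool radio_response_complete(radio_channel *c);

/* Called once per update-loop iteration. Each call does at most one
 * small non-blocking step, so reading a configuration block no longer
 * stalls the rest of the processing for seconds at a time. */
void radio_update(radio_channel *c)
{
    switch (c->state) {
    case RADIO_IDLE:
        if (radio_try_send_next_command(c))
            c->state = RADIO_CMD_SENT;
        break;
    case RADIO_CMD_SENT:
        if (radio_ack_received(c))
            c->state = RADIO_READ_PENDING;
        break;
    case RADIO_READ_PENDING:
        if (radio_response_complete(c))
            c->state = RADIO_IDLE;
        break;
    }
}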
The hand controllers have haptic feedback ‘rumble’ motors that really add to the immersiveness of VR by letting you sense collisions with objects. Until now, OpenHMD hasn’t had any support for applications to trigger haptic events. I spent a bit of time looking at USB packet traces with Philipp Zabel and we figured out the radio commands to turn the rumble motors on and off.
In the Rift CV1, the haptic motors have a mode where you schedule feedback events into a ringbuffer – effectively they operate like a low frequency audio device. However, that mode was removed for the Rift S (and presumably in the Quest devices) – and deprecated for the CV1.
With that in mind, I aimed for implementing the unbuffered mode, with explicit ‘motor on + frequency + amplitude’ and ‘motor off’ commands sent as needed. Thanks to already having rewritten the radio communications to use a state machine, adding haptic commands was fairly easy.
The big question mark is around what API OpenHMD should provide for haptic feedback. I’ve implemented something simple for now, to get some discussion going. It works really well and adds hugely to the experience. That code is in the https://github.com/thaytan/OpenHMD/tree/rift-haptics branch, with a SteamVR-OpenHMD branch that uses it in https://github.com/thaytan/SteamVR-OpenHMD/tree/controller-haptics-wip
I’d say the biggest problem right now is unexpected tracking loss and incorrect pose extractions. My right controller especially will suddenly glitch and start jumping around. Looking at a video of the debug feed, it’s not obvious why that’s happening:
To fix cases like those, I plan to add code to log the raw video feed and the IMU information together so that I can replay the video analysis frame-by-frame and investigate glitches systematically. Those recordings will also work as a regression suite to test future changes.
The Kalman filter I have implemented works really nicely – it does the latency compensation, predicts motion and extracts sensor biases all in one place… but it has a big downside of being quite expensive in CPU. The Unscented Kalman Filter CPU cost grows at O(n^3) with the size of the state, and the state in this case is 43 dimensional – 22 base dimensions, and 7 per latency-compensation slot. Running 1000 updates per second for the HMD and 500 for each of the hand controllers adds up quickly.
At some point, I want to find a better / cheaper approach to the problem that still provides low-latency motion predictions for the user, along with the same benefits around latency compensation and bias extraction.
To generate a convincing illusion of objects at a distance in a headset that’s only a few centimetres deep, VR headsets use some interesting optics. The output from the LCD/OLED panels gets distorted heavily before it reaches the user’s eyes. What the software generates needs to compensate by applying the right inverse distortion to the output video.
Everyone that tests the CV1 notices that the distortion is not quite correct. As you look around, the world warps and shifts annoyingly. Sooner or later that needs fixing. That’s done by taking photos of calibration patterns through the headset lenses and generating a distortion model.
The camera feeds are captured using a custom user-space UVC driver implementation that knows how to set up the special synchronisation settings of the CV1 and DK2 cameras, and then repeatedly schedules isochronous USB packet transfers to receive the video.
Occasionally, some people experience failure to re-schedule those transfers. The kernel rejects them with an out-of-memory error failing to set aside DMA memory (even though it may have been running fine for quite some time). It’s not clear why that happens – but the end result at the moment is that the USB traffic for that camera dies completely and there’ll be no more tracking from that camera until the application is restarted.
Often once it starts happening, it will keep happening until the PC is rebooted and the kernel memory state is reset.
Tracking generally works well when the cameras get a clear shot of each device, but there are cases like sighting down the barrel of a gun where we expect that the user will line up the controllers in front of one another, and in front of the headset. In that case, even though we probably have a good idea where each device is, it can be hard to figure out which LEDs belong to which device.
If we already have a good tracking lock on the devices, I think it should be possible to keep tracking even down to 1 or 2 LEDs being visible – but the pose assessment code will have to be aware that’s what is happening.
April 14th marks 2 years since I first branched off OpenHMD master to start working on CV1 tracking. How hard can it be, I thought? I’ll knock this over in a few months.
Since then I’ve accumulated over 300 commits on top of OpenHMD master that eventually all need upstreaming in some way.
One thing people have expressed as a prerequisite for upstreaming is to try and remove the OpenCV dependency. The tracking relies on OpenCV to do camera distortion calculations, and for their PnP implementation. It should be possible to reimplement both of those directly in OpenHMD with a bit of work – possibly using the fast LambdaTwist P3P algorithm that Philipp Zabel wrote, that I’m already using for pose extraction in the brute-force search.
I’ve picked the top issues to highlight here. https://github.com/thaytan/OpenHMD/issues has a list of all the other things that are still on the radar for fixing eventually.
At some point soon, I plan to put a pin in the CV1 tracking and look at adapting it to more recent inside-out headsets like the Rift S and WMR headsets. I implemented 3DOF support for the Rift S last year, but getting to full positional tracking for that and other inside-out headsets means implementing a SLAM/VIO tracking algorithm to track the headset position.
Once the headset is tracking, the code I’m developing here for CV1 to find and track controllers will hopefully transfer across – the difference with inside-out tracking is that the cameras move around with the headset. Finding the controllers in the actual video feed should work much the same.
This development happens mostly in my spare time and partly as open source contribution time at work at Centricular. I am accepting funding through Github Sponsorships to help me spend more time on it – I’d really like to keep helping Linux have top-notch support for VR/AR applications. Big thanks to the people that have helped get this far.
Today, 30 March, is World Bipolar Day.
Why that particular date? It’s Vincent van Gogh’s birthday (1853), and there is a fairly strong argument that the Dutch painter suffered from bipolar (among other things).
The image on the side is Vincent’s drawing “Worn Out” (from 1882), and it seems to capture the feeling rather well – whether (hypo)manic, depressed, or mixed. It’s exhausting.
Bipolar is complicated, often undiagnosed or misdiagnosed, and when only treated with anti-depressants, it can trigger the (hypo)mania – essentially dragging that person into that state near-permanently.
Hypo-mania is the “lesser” form of mania that distinguishes Bipolar I (the classic “manic depressive” syndrome) from Bipolar II. It’s “lesser” only in the sense that, rather than someone going so hyper they may think they can fly (Bipolar I is often identified when someone in a manic state gets admitted to hospital – good catch!), with Bipolar II the hypo-mania may actually exhibit as anger. Anger in general, against nothing in particular but potentially everyone and everything around them. Or, if it’s a mixed episode, anger combined with strong negative thoughts. Either way, it does not look like classic mania. It is, however, exhausting and can be very debilitating.
Bipolar II people often present to a doctor while in a depressed state, and GPs (not being psychiatrists) may not do a full diagnosis. Note that D.A.S. and similar test sheets are screening tools; they are not diagnostic. A proper diagnosis is more complex than filling in a form with some questions (who would have thought!)
If you have a diagnosis of depression from a GP only, and are on medication for it, I would strongly recommend you also get a referral to a psychiatrist to confirm that diagnosis.
Our friends at the awesome Black Dog Institute have excellent information on bipolar, as well as a quick self-test – if that shows some likelihood of bipolar, go get that referral and follow up ASAP.
I will be writing more about the topic in the coming time.
This post documented an older method of building SteamVR-OpenHMD. I moved the instructions to a page here; that version will be kept up to date with any future changes, so go there instead.
I’ve had a few people ask how to test my OpenHMD development branch of Rift CV1 positional tracking in SteamVR. Here’s what I do:
git clone --recursive https://github.com/ChristophHaag/SteamVR-OpenHMD.git
cd SteamVR-OpenHMD
cd subprojects/openhmd
git remote add thaytan-github https://github.com/thaytan/OpenHMD.git
git fetch thaytan-github
git checkout -b rift-kalman-filter thaytan-github/rift-kalman-filter
cd ../../
Use meson to build and register the SteamVR-OpenHMD binaries. You may need to install meson first (see below):

meson -Dbuildtype=release build
ninja -C build
./install_files_to_build.sh
./register.sh

To check that tracking is working, you can run the simple example directly:

./build/subprojects/openhmd/openhmd_simple_example
I prefer the Meson build system here. There’s also a cmake build for SteamVR-OpenHMD you can use instead, but I haven’t tested it in a while and it sometimes breaks as I work on my development branch.
If you need to install meson, there are instructions at https://mesonbuild.com/Getting-meson.html summarising the various methods. I use a copy installed in my home directory, but you need to make sure ~/.local/bin is in your PATH:

pip3 install --user meson
I spent some time this weekend implementing a couple of my ideas for improving the way the tracking code in OpenHMD filters and rejects (or accepts) possible poses when trying to match visible LEDs to the 3D models for each device.
In general, the tracking proceeds in several steps (in parallel for each of the 3 devices being tracked):
The goal is to always assign the correct LEDs to the correct device (so you don’t end up with the right controller in your left hand), and to avoid going back to the expensive brute-force search to re-acquire devices as much as possible.
What I’ve been working on this week is steps 1 and 3 – initial acquisition of correct poses, and fast validation / refinement of the pose in each video frame, and I’ve implemented two new strategies for that.
The first new strategy is to reject candidate poses that don’t closely match the known direction of gravity for each device. I had a previous implementation of that idea which turned out to be wrong, so I’ve re-worked it and it helps a lot with device acquisition.
The IMU accelerometer and gyro can usually tell us which way up the device is (roll and pitch) but not which way it is facing (yaw). The measure for ‘known gravity’ comes from the fusion Kalman filter covariance matrix – how certain the filter is about the orientation of the device. If that variance is small, this new strategy is used to reject possible poses that don’t have the same idea of gravity (while permitting rotations around the Y axis), with the filter variance as a tolerance.
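Here is a rough C sketch of that gravity-consistency test. The quaternion convention and names are mine, and in the real strategy the tolerance would be derived from the filter’s orientation variance rather than passed as a constant:

#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, z, w; } quat;
typedef struct { double x, y, z; } vec3;

/* Rotate the local "up" axis (0,1,0) by quaternion q. */
static vec3 quat_rotate_up(const quat *q)
{
    vec3 v;
    v.x = 2.0 * (q->x * q->y - q->w * q->z);
    v.y = 1.0 - 2.0 * (q->x * q->x + q->z * q->z);
    v.z = 2.0 * (q->y * q->z + q->w * q->x);
    return v;
}

/* Accept a candidate pose only if its idea of "up" is within tolerance
 * of the filter's gravity estimate. Rotation purely about the gravity
 * axis leaves "up" unchanged, so yaw remains unconstrained. */
static bool pose_matches_gravity(const quat *candidate,
                                 const vec3 *gravity_up,
                                 double tolerance_rad)
{
    vec3 up = quat_rotate_up(candidate);
    double dot = up.x * gravity_up->x + up.y * gravity_up->y +
                 up.z * gravity_up->z;
    if (dot > 1.0) dot = 1.0;
    if (dot < -1.0) dot = -1.0;
    return acos(dot) <= tolerance_rad;
}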
The 2nd strategy is based around tracking with fewer LED correspondences once a tracking lock is acquired. Initial acquisition of the device pose relies on some heuristics for how many LEDs must match the 3D model. The general heuristic threshold I settled on for now is that 2/3rds of the expected LEDs must be visible to acquire a cold lock.
With the new strategy, if the pose prior has a good idea where the device is and which way it’s facing, it allows matching on far fewer LED correspondences. The idea is to keep tracking a device even down to just a couple of LEDs, and hope that more become visible soon.
While this definitely seems to help, I think the approach can use more work.
With these two new approaches, tracking is improved but still quite erratic. Tracking of the headset itself is quite good now and for me rarely loses tracking lock. The controllers are better, but have a tendency to “fly off my hands” unexpectedly, especially after fast motions.
I have ideas for more tracking heuristics to implement, and I expect a continuous cycle of refinement on the existing strategies and new ones for some time to come.
For now, here’s a video of me playing Beat Saber using tonight’s code. The video shows the debug stream that OpenHMD can generate via PipeWire, showing the camera feed plus overlays of device predictions, LED device assignments and tracked device positions. Red is the headset, Green is the right controller, Blue is the left controller.
Initial tracking is completely wrong – I see some things to fix there. When the controllers go offline due to inactivity, the code keeps trying to match LEDs to them, for example, and there are some things wrong with how it relabels LEDs when they get incorrect assignments.
After that, there are periods of good tracking with random tracking losses on the controllers – those show the problem cases to concentrate on.
This lack of updates is also likely because I’ve been quite caught up with stuff.
Monday I had a steak from Bay Leaf Steakhouse for dinner. It was kind of weird eating it from packs, but then I’m reminded you could do this in economy class. Tuesday I wanted to attempt to go vegetarian and by the time I was done with a workout, the only place was a chap fan shop (Leong Heng) where I had a mixture of Chinese and Indian chap fan. The Indian stall is run by an ex-Hyatt staff member who immediately recognised me! Wednesday, Alice came to visit, so we got to Hanks, got some alcohol, and managed a smorgasbord of food from Pickers/Sate Zul/Lila Wadi. Night ended very late, and on Thursday, visited Hai Tian for their famous salted egg squid and prawns in a coconut shell. Friday was back to being normal, so I grabbed a pizza from Mint Pizza (this time I tried their Aussie variant). Saturday, today, I hit up Rasa Sayang for some matcha latte, but grabbed food from Classic Pilot Cafe, which Faeeza owns! It was the famous salted egg chicken, double portion, half rice.
As for workouts, I did sign up for Mantas but found it pretty hard to do, timezone wise. I did spend a lot of time jogging on the beach (this has been almost a daily affair). Monday I also did 2 MD workouts, Tuesday 1 MD workout, Wednesday half a MD workout, Thursday I did a Ping workout at Pwrhouse (so good!), Friday 1 MD workout, and Saturday an Audrey workout at Pwrhouse and 1 MD workout.
Wednesday I also found out that Rasmus passed away. Frankly, there are no words.
Thursday, my Raspberry Pi 400 arrived. I set it up in under ten minutes, connecting it to the TV here. It “just works”. I made a video, which I should probably figure out how to upload to YouTube after I stitch it together. I have to work on using it a lot more.
COVID-19 cases are through the roof in Malaysia. This weekend we’ve seen two days of record-breaking case numbers, with today being 5,728 (yesterday was something close). Nutty. Singapore suspended the reciprocal green lane (RGL) agreement with Malaysia for the next 3 months.
I’ve managed to finish Bridgerton. I like the score. Finding something on Netflix is proving to be more difficult, regardless of having a VPN. Honestly, this is why Cable TV wins… linear programming that you’re just fed.
Stock market wise, I’ve been following the GameStop short squeeze, and even funnier is the Top Glove one, that they’re trying to repeat in Malaysia. Bitcoin seems to be doing “reasonably well” and I have to say, I think people are starting to realise decentralised services have a future. How do we get there?
What an interesting week, I look forward to more productive time. I’m still writing in my Hobonichi Techo, so at least that’s where most personal stuff ends up, I guess?
I hit an important OpenHMD milestone tonight – I completed a Beat Saber level using my Oculus Rift CV1!
I’ve been continuing to work on integrating Kalman filtering into OpenHMD, and on improving the computer vision that matches and tracks device LEDs. While I suspect no one will be completing Expert levels just yet, it’s working well enough that I was able to play through a complete level of Beat Saber. For a long time this has been my mental benchmark for tracking performance, and I’m really happy.
Check it out:
I should admit at this point that completing this level took me multiple attempts. The tracking still has quite a tendency to lose track of controllers, or to get them confused and swap hands suddenly.
I have a list of more things to work on. See you at the next update!
What an unplanned day. I woke up in time to do an MD workout, despite feeling a little sore. So maybe I was about 10 minutes late and I missed the first set, but his workouts are so long, and I think there were seven sets anyway. Had a good brunch shortly thereafter.
Did a bit of reading, and then I decided to do a beach boardwalk walk… turns out they were policing the place, and you can’t hit the boardwalk. But the beach is fair game? So I went back to the hotel, dropped off my slippers, and went for a beach jog. Pretty nutty.
Came back to read a little more and figured I might as well do another MD workout. Then I headed out for dinner, trying out a new place — Mint Pizza. Opened 20.12.2020, and they’re empty, and their pizza is actually pretty good. Lamb and BBQ chicken, they did half-and-half.
Twitter was discussing Raspberry Pi’s, and all I could see is a lot of misinformation, which is truly shocking. The irony is that open source has been running the Internet for so long, and progressive web apps have come such a long way…
Back in the day when I did OpenOffice.org or Linux training even, we always did say you should learn concepts and not tools. From the time we ran Linux installfests in the late-90s in Sunway Pyramid (back then, yes, Linux was hard, and you had winmodems), but I had forgotten that I even did stuff for school teachers and NGOs back in 2002… I won’t forget PC Gemilang either…
Anyway, I placed an order again for another Raspberry Pi 400. I am certain that most people talk so much crap, without realising that Malaysia isn’t a developed nation and most people can’t afford a Mac let alone a PC. Laptops aren’t cheap. And there are so many other issues…. Saying Windows is still required in 2021 is the nuttiest thing I’ve heard in a long time. Easy to tweet, much harder to think about TCO, and realise where in the journey Malaysia is.
Maybe the best thing was that Malaysian Twitter learned about technology. I doubt many realised the difference between a Pi board vs the 400, but hey, the fact that they talked about tech is still a win (misinformed, but a win).
Today is the first day that in the state of Pahang, we have to encounter what many Malaysians are referring to as the Movement Control Order 2.0 (MCO 2.0). I think everyone finally agrees with the terminology that this is a lockdown now, because I remember back in the day when I was calling it that, I’d definitely offend a handful of journalists.
This is one interesting change for me compared to when I last wrote Life with Rona — Day 56 of being indoors and not even leaving my household, in Kuala Lumpur. I am now not in the state, I am living in a hotel, and I am obviously moving around a little more since we have access to the beach.
KL/Selangor and several other states have already been under the MCO 2.0 since January 13 2021, and while it was supposed to end on January 26, it seems like they’ve extended and harmonised the dates for Peninsular Malaysia to end on February 4 2021. I guess everyone got the “good news” yesterday. The Prime Minister announced some kind of aid last week, but it is still mostly a joke.
Today was the 2nd day I woke up at around 2.30pm, because I went to bed at around 8am. The first day I had a 23.5 hour uptime, and today was less brutal, but working from 1-8am on the PST timezone is pretty brutal. Consequently, I barely got much done, and had one meal, vegetarian, two packs that included rice. I did get to walk by the beach (between Teluk Cempedak and Teluk Cempedak 2), did quite a bit of exercise there, and I think even the monkeys are getting hungry… lots of stray cats and monkeys. Starbucks closes at 7pm, and I rocked up at 7.10pm (this was just like yesterday, when I arrived at 9.55pm and was told they wouldn’t grant me a coffee!).
While writing this entry, I did manage to get into a long video call with some friends and I guess it was good catching up with people in various states. It also is what prevented me from publishing this entry!
I did wake up reasonable early today because I had pre-ordered room service to arrive at 9am. There is a fixed menu at the hotel for various cuisines (RM48/pax, thankfully gratis for me) and I told them I prefer not having to waste, so just give me what I want which is off menu items anyway. Roti telur double telur (yes, I know it is a roti jantan) with some banjir dhal and sambal and a bit of fruit on the side with two teh tariks. They delivered as requested. I did forget to ask for a jar of honey but that is OK, there is always tomorrow.
I spent most of the day vacillating, and wouldn’t consider it productive by any measure. Just chit chats and napping. It did rain today after a long time, so the day seemed fairly dreary.
When I finally did awaken from my nap, I went for a run on the beach. I did it barefoot. I have no idea if this is how it is supposed to be done, or if you are to run nearer the water or further up above, but I did move around between the two quite often. The beach is still pretty dead, but it is expected since no one is allowed to go unless you’re a hotel guest.
The hotel has closed 3/4 of their villages (blocks) and moved everyone to the village I’m staying in (for long stay guests…). I’m thankful I have a pretty large suite, it is a little over 980sqft, and the ample space, while smaller than my home, is still welcome.
Post beach run, I did a workout with MD via Instagram. It was strength/HIIT based, and I burnt a tonne, because he gave us one of his signature 1.5h classes. It was longer than the 80 minute class he normally charges RM50 for (I still think this is undervaluing his service, but he really does care and does it for the love of seeing his students grow!).
Post-workout I decided to head downtown to find some dinner. Everything at the Teluk Cemepdak block of shops was closed, so they’re not even bothered with doing takeaway. Sg. Lembing steakhouse seemed to have cars parked, Vanggey was empty (Crocodile Rock was open, can’t say if there was a crowd, because the shared parking lot was empty), there was a modest queue at Sate Zul, and further down, Lena was closed, Pickers was open for takeaway but looked pretty closed, Tjantek was open surprisingly, and then I thought I’d give Nusantara a try again, this time for food, but their chef had just gone home at about 8pm. Oops. So I drove to LAN burger, initially ordering just one chicken double special; however they looked like they could use the business so I added on a beef double special. They now accept Boost payments so have joined the e-wallet era. One less place to use cash, which is also why I really like Kuantan. On the drive back, Classic Pilot Cafe was also open and I guess I’ll be heading there too during this lockdown.
Came back to the room to finish both burgers in probably under 15 minutes. While watching the first episode of Bridgerton on Netflix. I’m not sure what really captivates, but I will continue on (I still haven’t finished the first episode). I need to figure out how to use the 2 TVs that I have in this room — HDMI cable? Apple TV? Not normally using a TV, all this is clearly more complex than I care to admit.
I soaked longer than expected, ended up a prune, but I’m sure it will give me good rest!
One thought to leave with:
“Learn to enjoy every minute of your life. Be happy now. Don’t wait for something outside of yourself to make you happy in the future.” — Earl Nightingale
In my experience, the C programming language is still hard to beat, even 50 years after it was first developed (and I feel the same way about UNIX). When it comes to general-purpose utility, low-level systems programming, performance, and portability (even to tiny embedded systems), I would choose C over most modern or fashionable alternatives. In some cases, it is almost the only choice.
Many developers believe that it is difficult to write secure and reliable software in C, due to its free pointers, the lack of enforced memory integrity, and the lack of automatic memory management; however, in my opinion it is possible to overcome these risks with discipline and a more secure system of libraries constructed on top of C and libc. Daniel J. Bernstein and Wietse Venema are two developers who have been able to write highly secure, stable, reliable software in C.
My other favourite language is Python. Although Python has numerous desirable features, my favourite is the light-weight syntax: in Python, block structure is indicated by indentation, and braces and semicolons are not required. Apart from the pleasure and relief of reading and writing such light and clear code, which almost appears to be executable pseudo-code, there are many other benefits. In C or JavaScript, if you omit a trailing brace somewhere in the code, or insert an extra brace somewhere, the compiler may tell you that there is a syntax error at the end of the file. These errors can be annoying to track down, and cannot occur in Python. Python not only looks better, the clear syntax helps to avoid errors.
The obvious disadvantage of Python, and other dynamic interpreted languages, is that most programs run much more slowly than C programs. This limits the scope and generality of Python. No AAA or performance-oriented video game engines are programmed in Python. The language is not suitable for low-level systems programming, such as operating system development, device drivers, filesystems, performance-critical networking servers, or real-time systems.
C is a great all-purpose language, but the code is uglier than Python code. Once upon a time, when I was experimenting with the Plan 9 operating system (which is built on C, but lacks Python), I missed Python’s syntax, so I decided to do something about it and write a little preprocessor for C. This converts from a “Pythonesque” indented syntax to regular C with the braces and semicolons. Having forked a little dialect of my own, I continued from there adding other modules and features (which might have been a mistake, but it has been fun and rewarding).
At first I called this translator Brace, because it added in the braces for me. I now call the language CZ. It sounds like “C-easy”. Ease-of-use for developers (DX) is the primary goal. CZ has all of the features of C, and translates cleanly into C, which is then compiled to machine code as normal (using any C compiler; I didn’t write one); and so CZ has the same features and performance as C, but enjoys a more pleasing syntax.
CZ is now self-hosted, in that the translator is written in the language CZ. I confess that originally I wrote most of it in Perl; I’m proficient at Perl, but I consider it to be a fairly ugly language, and overly complicated.
I intend for CZ’s new syntax to be “optional”; ideally, a developer will be able to choose to use the normal C syntax when editing CZ, if they prefer it. For this, I need a tool to convert C back to CZ, which I have not fully implemented yet. I am aware that, in addition to traditionalists, some vision-impaired developers prefer to use braces and semicolons, as screen readers might not clearly indicate indentation. A C to CZ translator would of course also be valuable when porting an existing C program to CZ.
CZ has a number of useful features that are not found in standard C, but I did not go so far as C++, a language which has been described as “an octopus made by nailing extra legs onto a dog”. I do not consider C to be a dog, at least not in a negative sense; but I think that C++ is not an improvement over plain C. I am creating CZ because I think that it is possible to improve on C, without losing any of its advantages or making it too complex.
One of the most interesting features I added is a simple syntax for fast, light coroutines. I based this on Simon Tatham’s approach to Coroutines in C, which may seem hacky at first glance, but is very efficient and can work very well in practice. I implemented a very fast web server with very clean code using these coroutines. The cost of switching coroutines with this method is little more than the cost of a function call.
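The article doesn’t show CZ’s coroutine syntax, but the underlying trick from Tatham’s essay can be demonstrated in plain C. This is a minimal sketch using his crBegin/crReturn/crFinish macros; CZ wraps the same idea in nicer syntax:

#include <stdio.h>

#define crBegin static int cr_state = 0; switch (cr_state) { case 0:
#define crReturn(x) do { cr_state = __LINE__; return x; \
                         case __LINE__:; } while (0)
#define crFinish }

/* A generator that yields 1, 2, 3, 1, 2, 3, ... across calls, resuming
 * after the crReturn it last left from. */
static int counter(void)
{
    crBegin;
    for (;;) {
        crReturn(1);
        crReturn(2);
        crReturn(3);
    }
    crFinish;
    return 0; /* not reached */
}

int main(void)
{
    for (int i = 0; i < 6; i++)
        printf("%d\n", counter());
    return 0;
}

Resuming costs little more than a function call plus the switch dispatch, which is why the method is so cheap in practice.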
CZ has hygienic macros. The regular cpp (C preprocessor) macros are not hygienic, and many people consider them hacky and unsafe to use. My CZ macros are safe, and somewhat more powerful than standard C macros. They can be used to neatly add new program control structures. I have plans to further develop the macro system in interesting ways.
I added automatic prototype and header generation, as I do not like having to repeat myself when copying prototypes to separate header files. I added support for the UNIX #! scripting syntax, and for cached executables, which means that CZ can be used like a scripting language without having to use a separate compile or make command, but the programs are only recompiled when something has been changed.
For CZ, I invented a neat approach to portability without conditional compilation directives. Platform-specific library fragments are automatically included from directories having the name of that platform or platform-category. This can work very well in practice, and helps to avoid the nightmare of conditional compilation, feature detection, and Autotools. Using this method, I was able easily to implement portable interfaces to features such as asynchronous IO multiplexing (aka select / poll).
The CZ library includes flexible error handling wrappers, inspired by W. Richard Stevens’ wrappers in his books on Unix Network Programming. If these wrappers are used, there is no need to check return values for error codes, and this makes the code much safer, as an error cannot accidentally be ignored.
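To illustrate the style in plain C (this is the classic Stevens pattern, not CZ’s actual library code; the capitalised Socket is the conventional wrapper name from his books):

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

/* Same name as the underlying call, capitalised. It either succeeds or
 * terminates the program, so a call site cannot silently ignore errors. */
int Socket(int family, int type, int protocol)
{
    int fd = socket(family, type, protocol);
    if (fd < 0) {
        perror("socket");
        exit(EXIT_FAILURE);
    }
    return fd;
}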
CZ has several major faults, which I intend to correct at some point. Some of the syntax is poorly thought out, and I need to revisit it. I developed a fairly rich library to go with the language, including safer data structures, IO, networking, graphics, and sound. There are many nice features, but my CZ library is more prototype than finished product; there are major omissions, and some features are misconceived or poorly implemented. The misfeatures should be weeded out for the time being, or moved to an experimental section of the library.
I think that a good software library should come in two parts, the essential low-level APIs with the minimum necessary functionality, and a rich set of high-level convenience functions built on top of the minimal API. I need to clearly separate these two parts in order to avoid polluting the namespaces with all sorts of nonsense!
CZ is lacking a good modern system of symbol namespaces. I can look to Python for a great example. I need to maintain compatibility with C, and avoid ugly symbol encodings. I think I can come up with something that will alleviate the need to type anything like gtk_window_set_default_size, and yet maintain compatibility with the library in question. I want all the power of C, but it should be easy to use, even for children. It should be as easy as BASIC or Processing; a child should be able to write short graphical demos and the like, without stumbling over tricky syntax or obscure compile errors.
Here is an example of a simple CZ program which plots the Mandelbrot set fractal. I think that the program is fairly clear and easy to understand, although there is still some potential to improve and clarify the code.
#!/usr/local/bin/cz --
use b
use ccomplex

Main:
    num outside = 16, ox = -0.5, oy = 0, r = 1.5
    long i, max_i = 50, rb_i = 30
    space()
    uint32_t *px = pixel()  # CONFIGURE!
    num d = 2*r/h, x0 = ox-d*w_2, y0 = oy+d*h_2
    for(y, 0, h):
        cmplx c = x0 + (y0-d*y)*I
        repeat(w):
            cmplx w = c
            for i=0; i < max_i && cabs(w) < outside; ++i
                w = w*w + c
            *px++ = i < max_i ? rainbow(i*359 / rb_i % 360) : black
            c += d
I wrote a more elaborate variant of this program, which generates images like the one shown below. There are a few tricks used: continuous colouring, rainbow colours, and plotting the logarithm of the iteration count, which makes the plot appear less busy close to the black fractal proper. I sell some T-shirts and other products with these fractal designs online.
I am interested in graph programming, and have been for three decades since I was a teenager. By graph programming, I mean programming and modelling based on mathematical graphs or diagrams. I avoid the term visual programming, because there is no necessary reason that vision impaired folks could not use a graph programming language; a graph or diagram may be perceived, understood, and manipulated without having to see it.
Mathematics is something that naturally exists, outside time and independent of our universe. We humans discover mathematics, we do not invent or create it. One of my main ideas for graph programming is to represent a mathematical (or software) model in the simplest and most natural way, using relational operators. Elementary mathematics can be reduced to just a few such operators:
+ | add, subtract, disjoint union, zero
× | multiply, divide, cartesian product, one
^ | power, root, logarithm
◢ | sin, cos, sin⁻¹, cos⁻¹, hypot, atan2
δ | differential, integral
I think that a language and notation based on these few operators (and similar) can be considerably simpler and more expressive than conventional math or programming languages.
CZ is for me a stepping-stone toward this goal of an expressive relational graph language. It is more pleasant for me to develop software tools in CZ than in C or another language.
Thanks for reading. I wrote this article during the process of applying to join Toptal, which appears to be a freelancing portal for top developers; and in response to this article on toptal: After All These Years, the World is Still Powered by C Programming.
My CZ project has been stalled for quite some time. I foolishly became discouraged after receiving some negative feedback. I now know that honest negative feedback should be valued as an opportunity to improve, and I intend to continue the project until it lacks glaring faults, and is useful for other people. If this project or this article interests you, please contact me and let me know. It is much more enjoyable to work on a project when other people are actively interested in it!
The uBITX uses an Arduino internally. This article describes how to update its software.
The connector on the back is a Mini-B USB connector, so you'll need a "Mini-B to A" USB cable. This is not the same cable as used with older Android smartphones. The Mini-B connector was used with a lot of cameras a decade ago.
You'll also need a computer. I use a laptop with Fedora Linux installed.
In Fedora all the required software is installed with sudo dnf install arduino git. Add yourself to the users and lock groups with sudo usermod -a -G users,lock $USER (on Debian-style systems use sudo usermod -a -G dialout,lock $USER). You'll need to log out and log in again for that to have an effect (if you want to see which groups you are already in, then use the id command).
Run arduino as your ordinary non-root user to create the directories used by the Arduino IDE. You can quit the IDE once it starts.
$ cd ~/Arduino
$ git clone https://github.com/afarhan/ubitxv6.git ubitx_v6.1_code
Plug in the USB cable and turn on the radio. Running dmesg will show the Arduino appearing as a "USB serial" device:
usb 1-1: new full-speed USB device number 6 using xhci_hcd
usb 1-1: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.64
usb 1-1: New USB device strings: Mfr=0, Product=2, SerialNumber=0
usb 1-1: Product: USB Serial
usbcore: registered new interface driver ch341
usbserial: USB Serial support registered for ch341-uart
ch341 1-1:1.0: ch341-uart converter detected
usb 1-1: ch341-uart converter now attached to ttyUSB1
If you want more information about the USB device then use:
$ lsusb -d 1a86:7523
Bus 001 Device 006: ID 1a86:7523 QinHeng Electronics CH340 serial converter
In the last post I had started implementing an Unscented Kalman Filter for position and orientation tracking in OpenHMD. Over the Christmas break, I continued that work.
When reading below, keep in mind that the goal of the filtering code I’m writing is to combine 2 sources of information for tracking the headset and controllers.
The first piece of information is acceleration and rotation data from the IMU on each device, and the second is observations of the device position and orientation from 1 or more camera sensors.
The IMU motion data drifts quickly (at least for position tracking) and can’t tell which way the device is facing (it can sense gravity and derive pitch/roll, but not yaw).
The camera observations can tell exactly where each device is, but arrive at a much lower rate (52Hz vs 500/1000Hz) and can take a long time (hundreds of milliseconds) to analyse in order to acquire or re-acquire a lock on the tracked device(s).
The goal is to acquire tracking lock, then use the motion data to predict the motion closely enough that we always hit the ‘fast path’ of vision analysis. The key here is closely enough – the more closely the filter can track and predict the motion of devices between camera frames, the better.
When I wrote the last post, I had the filter running as a standalone application, processing motion trace data collected by instrumenting a running OpenHMD app and moving my headset and controllers around. That’s a really good way to work, because it lets me run modifications on the same data set and see what changed.
However, the motion traces were captured using the current fusion/prediction code, which frequently loses tracking lock when the devices move – leading to big gaps in the camera observations and more interpolation for the filter.
By integrating the Kalman filter into OpenHMD, the predictions are improved, leading to generally much better results. Here’s one trace of me moving the headset around reasonably vigorously with no tracking loss at all.
If it worked this well all the time, I’d be ecstatic! The predicted position matched the observed position closely enough for every frame for the computer vision to match poses and track perfectly. Unfortunately, this doesn’t happen every time yet, and definitely not with the controllers – although I think the latter largely comes down to the current computer vision having more trouble matching controller poses. They have fewer LEDs to match against compared to the headset, and the LEDs are generally more side-on to a front-facing camera.
Taking a closer look at a portion of that trace, the drift between camera frames when the position is interpolated using the IMU readings is clear.
This is really good. Most of the time, the drift between frames is within 1-2mm. The computer vision can only match the pose of the devices to within a pixel or two – so the observed jitter can also come from the pose extraction, not the filtering.
The worst tracking is again on the Z axis – distance from the camera in this case. Again, that makes sense – with a single camera matching LED blobs, distance is the most uncertain part of the extracted pose.
The trace above is good – the computer vision spots the headset and then the filtering + computer vision track it at all times. That isn’t always the case – the prediction goes wrong, or the computer vision fails to match (it’s definitely still far from perfect). When that happens, it needs to do a full pose search to reacquire the device, and there’s a big gap until the next pose report is available.
That looks more like this:
This trace has 2 kinds of errors – gaps in the observed position timeline during full pose searches and erroneous position reports where the computer vision matched things incorrectly.
Fixing the errors in position reports will require improving the computer vision algorithm and would fix most of the plot above. Outlier rejection is one approach to investigate on that front.
There is inherent delay involved in processing of the camera observations. Every 19.2ms, the headset emits a radio signal that triggers each camera to capture a frame. At the same time, the headset and controller IR LEDs light up brightly to create the light constellation being tracked. After the frame is captured, it is delivered over USB over the next 18ms or so and then submitted for vision analysis. In the fast case where we’re already tracking the device, the computer vision is complete in a millisecond or so. In the slow case, it’s much longer.
Overall, that means that there’s at least a 20ms offset between when the devices are observed and when the position information is available for use. In the plot above, this delay is ignored and position reports are fed into the filter when they are available. In the worst case, that means the filter is being told where the headset was hundreds of milliseconds earlier.
To compensate for that delay, I implemented a mechanism in the filter where it keeps extra position and orientation entries in the state that can be used to retroactively apply the position observations.
The way that works is to make a prediction of the position and orientation of the device at the moment the camera frame is captured, and copy that prediction into the extra state variable. After that, it continues integrating IMU data as it becomes available, while keeping the auxiliary state constant.
When the camera frame analysis is complete, that delayed measurement is matched against the stored position and orientation prediction in the state, and the error is used to correct the overall filter. The cool thing is that in the intervening time, the filter covariance matrix has been building up the right correction terms to adjust the current position and orientation.
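Here’s a rough C sketch of that bookkeeping, with illustrative names. The real filter also augments the covariance matrix with matching rows and columns for each slot, which is where those correction terms accumulate:

#include <stdbool.h>
#include <stdint.h>

#define NUM_LAG_SLOTS 3

typedef struct {
    bool in_use;
    uint64_t frame_timestamp;
    /* Snapshot of the predicted pose at the moment of camera capture:
     * 3 position + 3 orientation (error-state) components per slot. */
    double position[3];
    double orientation[3];
} lag_slot;

typedef struct { lag_slot slots[NUM_LAG_SLOTS]; } lag_tracker;

/* On camera trigger: capture the current prediction into a free slot,
 * so the measurement can be applied against it when analysis finishes. */
static lag_slot *lag_capture(lag_tracker *t, uint64_t ts,
                             const double pos[3], const double ori[3])
{
    for (int i = 0; i < NUM_LAG_SLOTS; i++) {
        lag_slot *s = &t->slots[i];
        if (!s->in_use) {
            s->in_use = true;
            s->frame_timestamp = ts;
            for (int j = 0; j < 3; j++) {
                s->position[j] = pos[j];
                s->orientation[j] = ori[j];
            }
            return s;
        }
    }
    return 0; /* no free slot: apply without delay compensation */
}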
Here’s a good example of the difference:
Notice how most of the disconnected segments have now slotted back into position in the timeline. The ones that haven’t can either be attributed to incorrect pose extraction in the computer vision, or to not having enough auxiliary state slots for all the concurrent frames.
At any given moment, there can be a camera frame being analysed, one arriving over USB, and one awaiting “long term” analysis. The filter needs to track an auxiliary state variable for each frame that we expect to get pose information from later, so I implemented a slot allocation system with multiple slots.
The downside is that each slot adds 6 variables (3 position and 3 orientation) to the covariance matrix on top of the 18 base variables. Because the covariance matrix is square, the size grows quadratically with new variables. 5 new slots means 30 new variables – leading to a 48 x 48 covariance matrix instead of 18 x 18. That is a 7-fold increase in the size of the matrix (48 x 48 = 2304 vs 18 x 18 = 324) and unfortunately about a 10x slow-down in the filter run-time.
At that point, even after some optimisation and vectorisation on the matrix operations, the filter can only run about 3x real-time, which is too slow. Using fewer slots is quicker, but allows for fewer outstanding frames. With 3 slots, the slow-down is only about 2x.
There are some other possible approaches to this problem:
So far in this post, I’ve only talked about the headset tracking and not mentioned controllers. The controllers are considerably harder to track right now, but most of the blame for that is in the computer vision part. Each controller has fewer LEDs than the headset, fewer are visible at any given moment, and they often aren’t pointing at the camera front-on.
This screenshot is a prime example. The controller is the cluster of lights at the top of the image, and the headset is lower left. The computer vision has gotten confused and thinks the controller is the ring of random blue crosses near the headset. It corrected itself a moment later, but those false readings make life very hard for the filtering.
Here’s a typical example of the controller tracking right now. There are some very promising portions of good tracking, but they are interspersed with bursts of tracking losses, and wild drifting where the computer vision gives wrong poses – leading to the filter predicting incorrect acceleration and hence cascading tracking losses, particularly (again) on the Z axis.
One of the problems I was looking at in my last post is variability in the arrival timing of the various USB streams (headset reports, controller reports, camera frames). I improved things on that front in OpenHMD, using timestamps from the devices everywhere (removing USB timing jitter from the inter-sample time).
There are still potential problems with when controller IMU reports get applied in the filters relative to the camera frames – on the order of 2-4ms of jitter. Time will tell how big a problem that will be, once the other, bigger tracking problems are resolved.
All the work that I’m doing implementing this positional tracking is a combination of my free time, hours contributed by my employer Centricular, and contributions from people via GitHub Sponsorships. If you’d like to help me spend more hours on this and fewer on other paying work, I appreciate any contributions immensely!
The next things on my todo list are:
udev can be used to block a USB device (or even an entire class of devices, such as USB storage). Add a file /etc/udev/rules.d/99-local-blacklist.rules containing:
SUBSYSTEM=="usb", ATTRS{idVendor}=="0123", ATTRS{idProduct}=="4567", ATTR{authorized}="0"
While I hope to update this site again soon, here’s a photo I captured over the weekend in my back yard. The red flowering plant is attracting wattlebirds and honey-eaters. This wattlebird stayed still long enough for me to take this shot. After a little bit of editing, I think it has turned out rather well.
Photo taken with: Canon 7D Mark II & Canon 55-250mm lens.
Edited in Lightroom and Photoshop (to remove a sun-glare spot from the eye).
Digital TV uses MPEG Transport Stream (MPEG-TS), a container format designed for lossy transmission channels, such as over-the-air broadcast. To save CPU cycles, Personal Video Recorders often save the MPEG-TS stream directly to disk. The more usual MPEG file is technically MPEG Program Stream (MPEG-PS), which is designed for reliable media, such as storage on a disk.
Since both are container formats around the same underlying streams, it should be possible to losslessly and quickly remux from MPEG-TS to MPEG-PS.
ffmpeg -ss "${STARTTIME}" -t "${DURATION}" -i "${FILENAME}" -ignore_unknown -map 0 -map -0:2 -c copy "${FILENAME}.mpeg"
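To unpack the flags: -ss and -t select the start time and duration of the cut, -map 0 keeps every stream from the input, -map -0:2 then excludes stream index 2 (adjust or drop this for your own recordings), -ignore_unknown skips stream types ffmpeg can’t identify, and -c copy remuxes without re-encoding – which is what makes this lossless and fast.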
I gave the talk Practicality Beats Purity: The Zen of Python’s Escape Hatch as part of PyConline AU 2020, the very online replacement for PyCon AU this year. In that talk, I included a few interesting links and code samples which you may be interested in:
@apply
def apply(transform):
    def __decorator__(using_this):
        return transform(using_this)
    return __decorator__

numbers = [1, 2, 3, 4, 5]

@apply(lambda f: list(map(f, numbers)))
def squares(i):
    return i * i

print(list(squares))
# prints: [1, 4, 9, 16, 25]
Init.java
public class Init {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
@switch and @case
__NOT_A_MATCHER__ = object()
__MATCHER_SORT_KEY__ = 0

def switch(cls):
    inst = cls()
    methods = []
    for attr in dir(inst):
        method = getattr(inst, attr)
        matcher = getattr(method, "__matcher__", __NOT_A_MATCHER__)
        if matcher == __NOT_A_MATCHER__:
            continue
        methods.append(method)
    methods.sort(key = lambda i: i.__matcher_sort_key__)
    for method in methods:
        matches = method.__matcher__()
        if matches:
            return method()
    raise ValueError("No matcher matched")

def case(matcher):
    def __decorator__(f):
        global __MATCHER_SORT_KEY__
        f.__matcher__ = matcher
        f.__matcher_sort_key__ = __MATCHER_SORT_KEY__
        __MATCHER_SORT_KEY__ += 1
        return f
    return __decorator__

if __name__ == "__main__":
    for i in range(100):
        @switch
        class FizzBuzz:
            @case(lambda: i % 15 == 0)
            def fizzbuzz(self):
                return "fizzbuzz"

            @case(lambda: i % 3 == 0)
            def fizz(self):
                return "fizz"

            @case(lambda: i % 5 == 0)
            def buzz(self):
                return "buzz"

            @case(lambda: True)
            def default(self):
                return "-"

        print(f"{i} {FizzBuzz}")
fuck grey text on white backgrounds
fuck grey text on black backgrounds
fuck thin, spindly fonts
fuck 10px text
fuck any size of anything in px
fuck font-weight 300
fuck unreadable web pages
fuck themes that implement this unreadable idiocy
fuck sites that don’t work without javascript
fuck reactjs and everything like it
thank fuck for Stylus. and uBlock Origin. and uMatrix.
Earlier today I launched this site. It is the result of a lot of work over the past few weeks. It began as an idea to publicise some of my photos, and morphed into the site you see now, including a store and blog that I’ve named “Photekgraddft”.
In the weirdly named blog, I want to talk about photography, the stories behind some of my more interesting shots, the gear and software I use, my technology career, my recent ADHD diagnosis and many other things.
This scares me quite a lot. I’ve never really put myself out onto the internet before. If you Google me, you’re not going to find anything much. Google Images has no photos of me. I’ve always liked it that way. Until now.
ADHD’ers are sometimes known for “oversharing”, one of the side-effects of the inability to regulate emotions well. I’ve always been the opposite, hiding, because I knew I was different, but didn’t understand why.
The combination of the COVID-19 pandemic and my recent ADHD diagnosis has given me a different perspective. I now know why I hid. And now I want to engage, and be engaged, in the world.
If I can be a force for positive change, around people’s knowledge and opinion of ADHD, then I will.
If talking about Business Analysis (my day job), and sharing my ideas for optimising organisations helps anyone at all, then I will.
If I can show my photos and brighten someone’s day by allowing them to enjoy a sunset, or a flying bird, then I will.
And if anyone buys any of my photos, then I will be shocked!
So welcome to my little vanity project. I hope it can be something positive, for me, if for no one else, in this new, odd world in which we now find ourselves living together.
Photo: Rain on leaves
Video: A Foggy Autumn Morning
Photo: Walking the dog on a cold Autumn morning
Some time ago, I wrote “floats, bits, and constant expressions” about converting a floating point number into its representative ones and zeros as a C++ constant expression – constructing the IEEE 754 representation without being able to examine the bits directly.
I’ve been playing around with Rust recently, and rewrote that conversion code as a bit of a learning exercise for myself, with a thoroughly contrived set of constraints: using integer and single-precision floating point math, at compile time, without unsafe blocks, while using as few unstable features as possible.
I’ve included the listing below, for your bemusement and/or head-shaking, and you can play with the code in the Rust Playground and rust.godbolt.org.
// Jonathan Adamczewski 2020-05-12
//
// Constructing the bit-representation of an IEEE 754 single precision floating
// point number, using integer and single-precision floating point math, at
// compile time, in rust, without unsafe blocks, while using as few unstable
// features as I can.
//
// or "What if this silly C++ thing https://brnz.org/hbr/?p=1518 but in Rust?"

// Q. Why? What is this good for?
// A. To the best of my knowledge, this code serves no useful purpose.
//    But I did learn a thing or two while writing it :)

// This is needed to be able to perform floating point operations in a const
// function:
#![feature(const_fn)]

// bits_transmute(): Returns the bits representing a floating point value, by
// way of std::mem::transmute()
//
// For completeness (and validation), and to make clear the fundamentally
// unnecessary nature of the exercise :D - here's a short, straightforward,
// library-based version. But it needs the const_transmute flag and an unsafe
// block.
#![feature(const_transmute)]
const fn bits_transmute(f: f32) -> u32 {
    unsafe { std::mem::transmute::<f32, u32>(f) }
}

// get_if_u32(predicate: bool, if_true: u32, if_false: u32):
// Returns if_true if predicate is true, else if_false
//
// If and match are not able to be used in const functions (at least, not
// without #![feature(const_if_match)]) - so here's a branch-free select
// function for u32s
const fn get_if_u32(predicate: bool, if_true: u32, if_false: u32) -> u32 {
    let pred_mask = (-1 * (predicate as i32)) as u32;
    let true_val = if_true & pred_mask;
    let false_val = if_false & !pred_mask;
    true_val | false_val
}

// get_if_f32(predicate, if_true, if_false):
// Returns if_true if predicate is true, else if_false
//
// A branch-free select function for f32s.
//
// If either if_true or if_false is NaN or an infinity, the result will be NaN,
// which is not ideal. I don't know of a better way to implement this function
// within the arbitrary limitations of this silly little side quest.
const fn get_if_f32(predicate: bool, if_true: f32, if_false: f32) -> f32 {
    // can't convert bool to f32 - but can convert bool to i32 to f32
    let pred_sel = (predicate as i32) as f32;
    let pred_not_sel = ((!predicate) as i32) as f32;
    let true_val = if_true * pred_sel;
    let false_val = if_false * pred_not_sel;
    true_val + false_val
}

// bits(): Returns the bits representing a floating point value.
const fn bits(f: f32) -> u32 {
    // the result value, initialized to a NaN value that will otherwise not be
    // produced by this function.
    let mut r = 0xffff_ffff;

    // These floating point operations (and others) cause the following error:
    //     only int, `bool` and `char` operations are stable in const fn
    // hence #![feature(const_fn)] at the top of the file

    // Identify special cases
    let is_zero    = f == 0_f32;
    let is_inf     = f == f32::INFINITY;
    let is_neg_inf = f == f32::NEG_INFINITY;
    let is_nan     = f != f;

    // Writing this as !(is_zero || is_inf || ...) causes the following error:
    //     Loops and conditional expressions are not stable in const fn
    // so instead write this as type conversions and bitwise operations
    //
    // "normalish" here means that f is a normal or subnormal value
    let is_normalish = 0 == ((is_zero as u32) | (is_inf as u32) |
                             (is_neg_inf as u32) | (is_nan as u32));

    // set the result value for each of the special cases
    r = get_if_u32(is_zero,    0,           r); // if (is_zero)    { r = 0; }
    r = get_if_u32(is_inf,     0x7f80_0000, r); // if (is_inf)     { r = 0x7f80_0000; }
    r = get_if_u32(is_neg_inf, 0xff80_0000, r); // if (is_neg_inf) { r = 0xff80_0000; }
    r = get_if_u32(is_nan,     0x7fc0_0000, r); // if (is_nan)     { r = 0x7fc0_0000; }

    // It was tempting at this point to try setting f to a "normalish"
    // placeholder value so that special cases do not have to be handled in the
    // code that follows, like so:
    //     f = get_if_f32(is_normalish, f, 1_f32);
    //
    // Unfortunately, get_if_f32() returns NaN if either input is NaN or
    // infinite. Instead of switching the value, we work around the
    // non-normalish cases later.
    //
    // (This whole function is branch-free, so all of it is executed regardless
    // of the input value)

    // extract the sign bit
    let sign_bit = get_if_u32(f < 0_f32, 1, 0);

    // compute the absolute value of f
    let mut abs_f = get_if_f32(f < 0_f32, -f, f);

    // This part is a little complicated. The algorithm is functionally the
    // same as the C++ version linked from the top of the file.
    //
    // Because of the various contrived constraints on this problem, we compute
    // the exponent and significand, rather than extract the bits directly.
    //
    // The idea is this:
    // Every finite single precision floating point number can be represented
    // as a series of (at most) 24 significant digits as a 128.149 fixed point
    // number (128: 126 exponent values >= 0, plus one for the implicit leading
    // 1, plus one more so that the decimal point falls on a power-of-two
    // boundary :) 149: 126 negative exponent values, plus 23 for the bits of
    // precision in the significand.)
    //
    // If we are able to scale the number such that all of the precision bits
    // fall in the upper-most 64 bits of that fixed-point representation (while
    // tracking our effective manipulation of the exponent), we can then
    // predictably and simply scale that computed value back to a range that
    // can be converted safely to a u64, count the leading zeros to determine
    // the exact exponent, and then shift the result into position for the
    // final u32 representation.

    // Start with the largest possible exponent - subsequent steps will reduce
    // this number as appropriate
    let mut exponent: u32 = 254;
    {
        // Hex float literals are really nice. I miss them.

        // The threshold is 2^87 (think: 64+23 bits) to ensure that the number
        // will be large enough that, when scaled down by 2^64, all the
        // precision will fit nicely in a u64
        const THRESHOLD: f32 = 154742504910672534362390528_f32; // 0x1p87f == 2^87

        // The scaling factor is 2^41 (think: 64-23 bits) to ensure that a
        // number between 2^87 and 2^64 will not overflow in a single scaling
        // step.
        const SCALE_UP: f32 = 2199023255552_f32; // 0x1p41f == 2^41

        // Because loops are not available (no #![feature(const_loops)]), and
        // 'if' is not available (no #![feature(const_if_match)]), perform
        // repeated branch-free conditional multiplication of abs_f.

        // use a macro, because why not :D It's the most compact, simplest
        // option I could find.
        macro_rules! maybe_scale {
            () => {{
                // care is needed: if abs_f is above the threshold, multiplying
                // by 2^41 will cause it to overflow (INFINITY) which will cause
                // get_if_f32() to return NaN, which will destroy the value in
                // abs_f. So compute a safe scaling factor for each iteration.
                //
                // Roughly equivalent to:
                //     if (abs_f < THRESHOLD) {
                //         exponent -= 41;
                //         abs_f *= SCALE_UP;
                //     }
                let scale = get_if_f32(abs_f < THRESHOLD, SCALE_UP,      1_f32);
                exponent  = get_if_u32(abs_f < THRESHOLD, exponent - 41, exponent);
                abs_f     = get_if_f32(abs_f < THRESHOLD, abs_f * scale, abs_f);
            }}
        }
        // 41 bits per iteration means up to 246 bits shifted.
        // Even the smallest subnormal value will end up in the desired range.
        maybe_scale!(); maybe_scale!(); maybe_scale!();
        maybe_scale!(); maybe_scale!(); maybe_scale!();
    }

    // Now that we know that abs_f is in the desired range
    // (2^87 <= abs_f < 2^128), scale it down to be in the range
    // (2^23 <= _ < 2^64), and convert without loss of precision to u64.
    const INV_2_64: f32 = 5.42101086242752217003726400434970855712890625e-20_f32; // 0x1p-64f == 2^-64
    let a = (abs_f * INV_2_64) as u64;

    // Count the leading zeros.
    // (C++ doesn't provide a compile-time constant function for this. It's
    // nice that rust does :)
    let mut lz = a.leading_zeros();

    // if the number isn't normalish, lz is meaningless: we stomp it with
    // something that will not cause problems in the computation that follows -
    // the result of which is meaningless, and will be ignored in the end for
    // non-normalish values.
    lz = get_if_u32(!is_normalish, 0, lz); // if (!is_normalish) { lz = 0; }

    {
        // This step accounts for subnormal numbers, where there are more
        // leading zeros than can be accounted for in a valid exponent value,
        // and leading zeros that must remain in the final significand.
        //
        // If lz < exponent, reduce exponent to its final correct value - lz
        // will be used to remove all of the leading zeros.
        //
        // Otherwise, clamp exponent to zero, and adjust lz to ensure that the
        // correct number of bits will remain (after multiplying by 2^41 six
        // times - 2^246 - there are 7 leading zeros ahead of the original
        // subnormal's computed significand of 0.sss...)
        //
        // The following is roughly equivalent to:
        //     if (lz < exponent) {
        //         exponent = exponent - lz;
        //     } else {
        //         exponent = 0;
        //         lz = 7;
        //     }

        // we're about to mess with lz and exponent - compute and store the
        // relative value of the two
        let lz_is_less_than_exponent = lz < exponent;

        lz       = get_if_u32(!lz_is_less_than_exponent, 7,             lz);
        exponent = get_if_u32( lz_is_less_than_exponent, exponent - lz, 0);
    }

    // compute the final significand.
    // + 1 shifts away a leading 1-bit for normal, and 0-bit for subnormal
    // values. Shifts are done in u64 (that leading bit is shifted into the
    // void), then the resulting bits are shifted back to their final resting
    // place.
    let significand = ((a << (lz + 1)) >> (64 - 23)) as u32;

    // combine the bits
    let computed_bits = (sign_bit << 31) | (exponent << 23) | significand;

    // return the normalish result, or the non-normalish result, as appropriate
    get_if_u32(is_normalish, computed_bits, r)
}

// Compile-time validation - able to be examined in rust.godbolt.org output
pub static BITS_BIGNUM: u32 = bits(std::f32::MAX);
pub static TBITS_BIGNUM: u32 = bits_transmute(std::f32::MAX);
pub static BITS_LOWER_THAN_MIN: u32 = bits(7.0064923217e-46_f32);
pub static TBITS_LOWER_THAN_MIN: u32 = bits_transmute(7.0064923217e-46_f32);
pub static BITS_ZERO: u32 = bits(0.0f32);
pub static TBITS_ZERO: u32 = bits_transmute(0.0f32);
pub static BITS_ONE: u32 = bits(1.0f32);
pub static TBITS_ONE: u32 = bits_transmute(1.0f32);
pub static BITS_NEG_ONE: u32 = bits(-1.0f32);
pub static TBITS_NEG_ONE: u32 = bits_transmute(-1.0f32);
pub static BITS_INF: u32 = bits(std::f32::INFINITY);
pub static TBITS_INF: u32 = bits_transmute(std::f32::INFINITY);
pub static BITS_NEG_INF: u32 = bits(std::f32::NEG_INFINITY);
pub static TBITS_NEG_INF: u32 = bits_transmute(std::f32::NEG_INFINITY);
pub static BITS_NAN: u32 = bits(std::f32::NAN);
pub static TBITS_NAN: u32 = bits_transmute(std::f32::NAN);
pub static BITS_COMPUTED_NAN: u32 = bits(std::f32::INFINITY/std::f32::INFINITY);
pub static TBITS_COMPUTED_NAN: u32 = bits_transmute(std::f32::INFINITY/std::f32::INFINITY);

// Run-time validation of many more values
fn main() {
    let end: usize = 0xffff_ffff;
    let count = 9_876_543; // number of values to test
    let step = end / count;
    for u in (0..=end).step_by(step) {
        let v = u as u32;

        // reference
        let f = unsafe { std::mem::transmute::<u32, f32>(v) };

        // compute
        let c = bits(f);

        // validation
        if c != v &&
           !(f.is_nan() && c == 0x7fc0_0000) && // nans
           !(v == 0x8000_0000 && c == 0) {      // negative 0
            println!("{:x?} {:x?}", v, c);
        }
    }
}
Over the weekend, the boredom of COVID-19 isolation motivated me to move my personal website from WordPress on a self-managed 10-year-old virtual private server to a generated static site on a static site hosting platform with a content delivery network.
This decision was overdue. WordPress never fit my brain particularly well, and it was definitely getting to a point where I wasn’t updating my website at all (my last post was two weeks before I moved from Hobart; I’ve been living in Petaluma for more than three years now).
Settling on a website framework wasn’t a terribly difficult choice (I chose Jekyll; everyone else seems to be using it), and I’ve had friends who’ve had success moving their blogs over. The difficulty I ended up facing was that the standard exporter everyone uses to move from WordPress to Jekyll does not expect Debian’s package layout.
Backing up a bit: I made a choice, 10 years ago, to deploy WordPress on a machine that I ran myself, using the Debian system wordpress package, a simple aptitude install wordpress away. That decision was not particularly consequential then, but it chewed up 3 hours of my time on Saturday.
Why? The exporter plugin assumes that it will be able to find all of the standard WordPress files in the usual WordPress places, and when it couldn’t, it broke in unexpected ways. And why couldn’t it find them?
Debian makes packaging choices that prioritise all the software on a system living side-by-side with minimal difficulty. It sets strict permissions. It separates application code from configuration from user data (which, in the case of WordPress, includes plugins), in a way that is consistent between applications. These choices make it easy for Debian admins to understand how to find the bits of an application. They also minimise the chance of one PHP application clobbering another.
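Concretely for WordPress – from memory, so treat the exact paths as approximate – that means application code under /usr/share/wordpress, configuration under /etc/wordpress, and user data such as wp-content (plugins included) under /var/lib/wordpress.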
10 years later, the install that I had set up was still working, having survived 3-4 Debian versions, and so 3-4 new WordPress versions. I don’t recall the last time I had to think about keeping my WordPress instance secure and updated. That’s quite a good run. I’ve had a working website despite not caring about keeping it updated for at least three years.
The same decisions that meant I spent 3 hours on Saturday doing a simple WordPress export saved me a bunch of time that I didn’t incrementally spend over the course of a decade. Am I even? I have no idea.
Anyway, the least I can do is provide some help to people who might run into this same problem, so here’s a 5-step howto.
Should you find the Jekyll exporter not working on your Debian WordPress install:
Basically, the plugin works with a stock WordPress install. If you don’t have one of those, it’s easy to move it over.