Planet Linux Australia


Silvia Pfeiffer: SWAY at RFWS using Coviu

A SWAY session by Joanne of Royal Far West School (http://sway.org.au/), via https://coviu.com/. SWAY is an oral language and literacy program based on Aboriginal knowledge, culture and stories. It has been developed by Educators, Aboriginal Education Officers and Speech Pathologists at the Royal Far West School in Manly, NSW.

Uploaded by: Silvia Pfeiffer
Hosted: youtube


Silvia Pfeiffer: Silvia Pfeiffer Live Stream

Silvia Pfeiffer: PARADISEC catalog for Users

This screencast shows how a user of the PARADISEC catalog logs in and explores the collections, items and files that the archive contains.

Uploaded by: Silvia Pfeiffer
Hosted: youtube


Silvia Pfeiffer: PARADISEC catalog for Collectors

Screencast of how to use the PARADISEC catalog for managing and publishing collections.

Uploaded by: Silvia Pfeiffer
Hosted: youtube


Silvia Pfeiffer: PARADISEC catalog for Administrators

Screencast of how a PARADISEC administrator uses the PARADISEC catalog for managing the consistency of metadata and staying on top of uploaded files.

Uploaded by: Silvia Pfeiffer
Hosted: youtube


Lev Lafayette: Spartan Finally Receives Its Laurels

Spartan HPC certificate

Way back in 2015 the University of Melbourne had a general-purpose high performance computer system called "Edward", which itself replaced an even smaller system called "Alfred", both named after Kings of Wessex. Edward was a fairly typical machine for its vintage and, as is normal when a system is being retired, the main researchers were asked what should be different in the new system. What was also normal was their answers: more cores, faster CPUs, etc. Consideration was given to not having an HPC system at all, potentially offloading the demand to a national facility. But cooler heads that possibly understood network throughput and the advantages of fine-tuning a local system to the needs of local researchers prevailed.

One of the interesting things about the review of Edward's utilisation was how it differed from what many researchers thought they needed. Rather than a system with more cores etc, what was really needed was faster throughput. Researchers simply didn't like their jobs sitting in the queue. Coupled with the fact that finances to fund the system weren't great (the naming of Spartan was a laconic reference to its lean cost-efficiency), necessity became the mother of invention. The Nectar research cloud had plenty of cores and, according to the metrics, the overwhelming majority of Edward's jobs were being run for capacity, rather than capability; over 75% were single-core jobs and over 90% were single-node jobs. Rather than spend a lot of money on high-speed interconnect, which is typical in HPC systems, a decision was made to have a smaller traditional HPC partition ("physical") and use a partition of virtual machines ("cloud") with a slow interconnect for those single-node jobs.
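
As a rough illustration of the kind of job that design targets, here is a minimal sketch of a single-core batch script aimed at the "cloud" partition. It assumes a Slurm-style scheduler (the scheduler is not named in this post) and uses made-up script and file names:

#!/bin/bash
# Hypothetical single-core job for the virtual-machine ("cloud") partition.
#SBATCH --partition=cloud     # slow interconnect is fine for single-node work
#SBATCH --nodes=1             # over 90% of Edward's jobs were single-node
#SBATCH --ntasks=1            # over 75% were single-core
#SBATCH --time=01:00:00

./my_analysis --input data.csv    # placeholder for the actual workload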

It was an innovative design and received a well-deserved initial launch, followed by a world tour explaining the architecture to various conferences and HPC centres, including Multicore World, Wellington, 2016 and 2017; eResearchAustralasia 2016; the Center for Scientific Computing (CSC), Goethe University Frankfurt, 2016; the High Performance Computing Center (HLRS), University of Stuttgart, 2016; the High Performance Computing Centre, Albert-Ludwigs-University Freiburg, 2016; the European Organization for Nuclear Research (CERN), 2016; the Centre Informatique National de l’Enseignement Supérieur, Montpellier, 2016; the Centro Nacional de Supercomputación, Barcelona, 2016; and the OpenStack Summit, Barcelona, 2016. The architecture was also featured in "OpenStack and HPC Workload Management" in Stig Telfer (ed.), "The Crossroads of Cloud and HPC: OpenStack for Scientific Research" (OpenStack, 2016).

The success of Spartan's architecture soon became apparent. Whilst Edward had completed just over 375,000 jobs in 2015, Spartan completed more than a million in its first year from launch. The system expanded with additional compute nodes from specialist projects, departments, and research agencies that had purchased their own hardware. But the most significant expansion was the addition of a substantial GPGPU partition of 68 nodes and 272 NVIDIA P100 GPGPU cards, funded by a Linkage Infrastructure, Equipment and Facilities (LIEF) grant. Later, Spartan also introduced FastX for interactive remote desktops, and interactive sessions through Open OnDemand for Jupyter notebooks, RStudio, and Cryosparc.

The introduction of the GPGPU partition really transformed Spartan. It was what changed Spartan from being a small, experimental, but extremely successful system to a world-class computing system. At the time we estimated that it would have entered at around 200 on the top500.org list. However, running the tests to enter that celebrated list requires a lot of fine-tuning and, of course, it means that users, who have priority on our system, won't be able to use the nodes. On Spartan it is typical that 100% of worker nodes are fully allocated, so for literally years there was little opportunity for the tests to be conducted.

Recently, however, Spartan finally took the leap to change from running RedHat 7.x, which we had been doing since 2015, gradually working our way up the point releases, to RedHat 9.x. This provided a well-advertised two-week window of opportunity and, whilst many other changes occurred to the operating system, the hardware, and the recompilation of hundreds of applications, a work colleague, Naren Chinnam (with necessary coordination with the rest of the HPC, Network and DC teams in getting the cluster stable enough for the benchmarks to finish), completed the LINPACK test for part of the system. As a result, Spartan now has a nice certificate, rated at 454 in the world (and third in Australia, after NCI/Gadi and Pawsey), with a benchmark score of 2.14 PetaFlops, representing the performance of the GPU partitions alone. It has already been noted that we actually have 88 A100 GPU nodes, not the 72 that were tested, which would have brought us up to 337 in the world, and another third of our performance could have come from the CPU-only partitions.

At the time of writing, Spartan has run 53,881,908 HPC jobs. There are 6,134 users from the University of Melbourne and around the world, across 2,097 projects. The original architecture (with our friends at the University of Freiburg and their alternative cluster-cloud combination) was also featured at the IEEE 13th International Conference on e-Science in 2017 and in the Science, Technology and Engineering Systems Journal in 2019, with other presentations on Spartan including use of the GPGPU partition at eResearch 2018, its development path at eResearchAU 2020, and interactive HPC at eResearchNZ 2021. Over 200 papers cite Spartan as a contributing factor in their research. Spartan continues to grow in users, usage, performance and, most importantly, research outcomes. Spartan may have finally received its laurels, but we are not resting on them.

Attachment: 2023spartan.png (367.83 KB)


Francois Marier: Automatically rebooting for kernel updates

I use reboot-notifier on most of my servers to let me know when I need to reboot them for kernel updates since I want to decide exactly when those machines go down. On the other hand, my home backup server has very predictable usage patterns and so I decided to go one step further there and automate these necessary reboots.

To do that, I first installed reboot-notifier which puts the following script in /etc/kernel/postinst.d/reboot-notifier to detect when a new kernel was installed:

#!/bin/sh

if [ "$0" = "/etc/kernel/postinst.d/reboot-notifier" ]; then
    DPKG_MAINTSCRIPT_PACKAGE=linux-base
fi

echo "*** System restart required ***" > /var/run/reboot-required
echo "$DPKG_MAINTSCRIPT_PACKAGE" >> /var/run/reboot-required.pkgs

Note that unattended-upgrades puts a similar script in /etc/kernel/postinst.d/unattended-upgrades:

#!/bin/sh

case "$DPKG_MAINTSCRIPT_PACKAGE::$DPKG_MAINTSCRIPT_NAME" in
   linux-image-extra*::postrm)
      exit 0;;
esac

if [ -d /var/run ]; then
    touch /var/run/reboot-required
    if ! grep -q "^$DPKG_MAINTSCRIPT_PACKAGE$" /var/run/reboot-required.pkgs 2> /dev/null ; then
        echo "$DPKG_MAINTSCRIPT_PACKAGE" >> /var/run/reboot-required.pkgs
    fi
fi

and so you only need one of them to be installed since they both write to /var/run/reboot-required. It doesn't hurt to have both of them though.

Then I created the following cron job (/etc/cron.daily/reboot-local) to actually reboot the server:

#!/bin/bash

REBOOT_REQUIRED=/var/run/reboot-required

if [ -s $REBOOT_REQUIRED ] ; then
    cat "$REBOOT_REQUIRED" | /usr/bin/mail -s "Rebooting $HOSTNAME" root
    /bin/systemctl reboot
fi

With that in place, my server will send me an email and then automatically reboot itself.

This is a work in progress because I'd like to add some checks later on to make sure that no backup is in progress during that time (maybe by looking for active ssh connections?), but it works well enough for now. Feel free to leave a comment if you've got a smarter script you'd like to share.
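
For what it's worth, a minimal sketch of that kind of check, added near the top of the cron job, might look like the following; the assumption that an in-progress backup always shows up as a logged-in user or an established ssh connection is just that, an assumption:

#!/bin/bash

# Skip tonight's reboot if anyone is logged in or an ssh session is
# still established, as a rough proxy for "a backup might be running".
if [ -n "$(who)" ]; then
    exit 0
fi
if ss -Hnt state established '( sport = :22 )' | grep -q . ; then
    exit 0
fi

# ... the existing reboot logic from above goes here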


Linux Australia: Council Meeting November 22, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Sae Ra Germaine  (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)
  • Neill Cox (Secretary)
  • Marcus Herstik (Council)

 

Apologies

  • None received

 

Not present

 

Meeting opened at 20:32 AEST by Joel and quorum was achieved.

Minutes taken by Neill.

 

2. Log of correspondence

  • To Russell Coker: Report for Linux Australia grant: Pinephone development

No response yet.

 

  • To Elena Williams: Report for Linux Australia grant: Girls Canberra #1

Elena has responded and will send in a report.

 

  • ASIC: Successful renewal: OPEN SOURCE AUSTRALIA
  • Michael Richardson Re:  Stripe Deposits – Russell responded
  • To Russell Coker: Pinephone activity talk submission for EO2024 – Russell promised a submission by 11 Nov
  • Katie McLaughlin Re: Intent to establish Independent Subcommittee for PyCon AU – latest response from Jonathan informing PyCon AU of the motion passed at the last council meeting. We’re now waiting on budgets.
  • Enquiry from Alex Simmons via the Linux Australia Website Contact Form – Jonathan has responded with the EO dates.
  • Drupal South: Payment approval request – Russell has responded and Michael has sorted out his identification issues with Westpac
  • Enquiry from Cornelius du Preez via the Linux Australia Website Contact Form – Wil has responded indicating that the account has been deleted as requested.
  • EO2024 CFP on lwn
  • Enquiry from Lauryn Westwood via the Linux Australia Website Contact Form – Jonathan has responded. Back with Lauryn now. She seems to have had some problems with password reset emails, but Jonathan hasn’t heard back after the last attempt to help.

 

We may have an issue with the contact form, as emails don’t seem to be reliably received. Sae Ra and Steve will check the server logs for problems, but the last four inquiries have been received.

 

3. Items for discussion

  • Canberra Linux Kernel conference – Russell was invited to attend tonight’s meeting, and was able to attend.

We had a general discussion about the logistics and funding of running an LA auspiced conference.

 

We can offer support with registration systems, websites, payments and insurance. The usual way of doing this is to form a subcommittee to run the conference and hold a financial induction to provide access to Xero and a bank account.

 

Russell suggested a date later in the year, after Everything Open. LA would like a date that does not conflict with other conferences: Drupal South (March), Everything Open (April), PyCon AU (November) and WordCamp Sydney (November).

 

There is a surprisingly large community of kernel developers in Canberra, spread across a number of employers and even government agencies. A Canberra-focused event should be able to draw around 50 people; a larger event would be possible if not locally focused, but would be more expensive and complicated to run.

 

Starting with a smaller event and then building up if successful seems like a good idea.

 

The first event, at least, would be a single-day affair.

 

Still brainstorming names – no decision yet.

 

Next steps are to find at least one more person on the organising team. Apart from anything else this will make payments/bank account access easier.

 

Joel will share a sample budget with Russell, which will probably be overkill but at least a good starting point.

 

Russell will send through a proposal soon.

 

  • We will aim to hold the AGM on 20 Jan 2024. Election dates will be calculated accordingly.

4. Items for noting

  • Sae Ra and Kathy Reid will be running a session on Sunday to help with writing Everything  Open proposals.

5. Other business



Pia Andrews: The case for adaptive and end-to-end policy management

TL;DR: Better policy design and evaluation won’t save us 🙂

The APS Reform agenda provides a rare window of opportunity to address structural and systemic issues in the APS, so why not explore how we might transform the way policy is designed, delivered and managed end to end?

Why should we reform how we do policy? Simple. Because the gap between policy design and delivery has become the biggest barrier to delivering good public services and policy outcomes, and is a challenge most public servants experience daily, directly or indirectly. This gap wasn’t always there: policy design and delivery were separated as part of the New Public Management reforms in the 90s. When you also consider the accelerating rate of change, increasing cadence of emergencies, and the massive speed and scale of new technologies, you could argue that end-to-end policy reform is our most urgent problem to solve.

Policy teams globally have been exploring new design methods like human-centred design, test-driven iteration (agile), and multi-disciplinary teams that get policy end users in the room (eg, NSW Policy Lab). There has also been an increased focus on improving policy evaluation across the world (eg, the Australian Centre for Evaluation). In both cases, I’m delighted to see innovative approaches being normalised across the policy profession, but it has become obvious that improving design and/or evaluation is still far from sufficient to drive better (or more humane) policy outcomes in an ever changing world. It is not only the current systemic inability to detect and respond to unintended consequences that emerge, but the lack of policy agility that perpetuates issues even long after they might be identified. 

Below I outline four current challenges for policy management and a couple of potential solutions, as something of a discussion starter 🙂

Current policy problems

Problem 1) The separation of (and mutual incomprehension between) policy design, delivery and the public

The lack of multi-disciplinary policy design, combined with a set-and-forget approach to policy, combined with delivery teams being left to interpret policy instructions without support, combined with a gap and interpretation inconsistency between policy modeling systems and policy delivery systems, all combined with a lack of feedback loops into improving policy over time, has led to a series of black holes throughout the process. Tweaking the process as it currently stands will not fix the black holes. We need a more holistic model for policy design, delivery and management.

A cartoon of a policy team celebrating because they completed their policy, and handed over the policy instructions to a picture of a black hole, all the while wondering what it would be like to see it through. The policy instructions are caught by an implementation team who know the policy design team have moved on, so do their best to interpret and implement their understanding. The impacts of the delivery are lost in a black hole as well, where the people affected by the policy can have their lives literally ruined, and eventually an evaluation team asks “why didn’t they just evaluate earlier?”.

CC-BY: Pia Andrews, 2023

There is also a significant gap with the public. From the start, there is usually a lack of diversity in expertise and experience in shaping a policy, and once an intervention is decided and rolled out, the people affected by policies have limited means to give feedback. Engaging the public early and often, and then providing clear feedback loops would help policies be better designed and improved over time.

Problem 2) The lack of real time monitoring of intended AND unintended impacts

The laudable efforts to improve policy evaluation are great, but formal evaluations usually have two limitations that could be better addressed with other mechanisms. Firstly, formal evaluations often tend to be positivist, in that they look for “has this initiative delivered what it said it would”, and aren’t often driven or set up to explore and understand unintended impacts, such as human or environmental patterns that emerged as a result of a new policy interacting in a complex domain.

Secondly, formal evaluations are usually a point in time assessment, rather than real time monitoring of policy impacts. Evaluation teams are not connected to the day to day delivery of policy interventions, creating a timeliness challenge in mitigating issues that are identified. Evolving and improving policy evaluation methods will create greater understanding, but perhaps too little, too late for those affected in between. Real time monitoring of intended and unintended impacts would nicely complement formal evaluation methods, while also providing a timely trigger if anything trended in the wrong direction.

Problem 3) A systemic inability to iterate policy in response to impact, feedback or change

Policies are often designed by a policy team, and then handed over to implementation, so that policy team can move on to the next policy priority, creating a systemic inability to iterate policies as the real impacts are felt in delivery. It doesn’t matter how collaborative or inclusive you are in designing a policy, there will always be perpetual change in the environment, and unintended impacts to mitigate. We need to take the lessons from the creation of “Continuous Integration and Continuous Delivery” (CI/CD) pipelines in service delivery, to create a “CI/CD Policy” approach which would manage policy design and delivery as part of the one continuum, drawing upon continuous feedback loops, monitoring and measurement of policy and human impacts to inform and iterate policies and the respective interventions. This would not only help policies to maximise the realisation of policy intent in a rapidly changing world, but would also provide the means to proactively identify and manage policy impacts (positive and negative) as they emerge.

Problem 4) Inconsistency in policy literacy and practice across the sector

Last, but not least, is the inconsistent definition, context and practice of “policy” across the sector, creating confusion and real issues of authority, decision making and accountability. Unfortunately today, many of the “policy guides” currently available limit themselves to Government Policy development, which has led to the common but dangerous assumption that Government Policies are the highest authority, and that the peak of good public service is to simply advise the Government.

To my mind, there are three highest level and fundamental categories of “policy”:

  1. Foundational Policies: the constitution, legislation and regulations which provide the context, framing and highest legal authorities and accountabilities of a department;
  2. Government Policies: the directions of the Government of the day via the respective Ministers, which is subject to foundational policy limitations; and
  3. Operational Policies: which covers all the operational, whole-of-government, department-defined rules and delegated policies, which are subject to both the government and foundational policy directions, but are the authoritative domain of Secretaries.

The diagram below provides a useful reference on the hierarchy of authority of different policy types, as well as a guide to decision making involved in each. This should help public servants realise that different actors are needed for change to different policy types, and that even Ministerial directions are constrained by the Foundational Policies above. It also should provide public servants more understanding as to what decision making is actually within their delegated authority, such as operational policies. 

This diagram shows 5 types of policies, starting at the highest authority with the Constitution, which is only changed by the people (public) via referenda, then Legislation (inc regulations) which is changed by the Parliament via Bills/Acts, followed by Gov Policy (Big P) which is changed by the Government via Ministerial directives, then Operational policies followed by the Department Secretary via departmental approvals, and finally internal operational decision making (implementation, program planning, delivery, etc) which is determined by department executives via internal delegations and processes.

CC-BY: Pia Andrews, 2023

Potential solutions

Solution #1: Adaptive policy management

So what might adaptive policy management look like? Well, let’s start with what the characteristics for delivering great policy and human outcomes might look like, and then we can reverse engineer an ideal policy operating model we could work towards.

From: Narrowly informed, largely driven by generalist policy professionals, with occasional expertise or end user input.
To: Multidisciplinary and diverse expertise and experience informing the whole process, including early testing of several interventions with representatives of those affected.

From: Static policies are defined, the policy team moves on, policy change is slow and difficult, often principles-based and subject to varied interpretation in delivery.
To: Dynamic policies, with policy expertise present in policy interventions end to end (leg, services, reg, programs, grants, etc) with continuous, evidence-based policy iteration.

From: Reactive to issues, as they are identified. Constantly looking backwards, mitigating symptoms, without time to look forwards or address causes.
To: Responsive to change as it happens, monitoring for impact (intended and unintended) and constantly adaptive to change in a forward looking way.

From: Assumptions driven, policy interventions are based on past or current assumptions, without testing, exploring or co-designing a range of approaches.
To: Test driven, a diverse range of potential policy interventions are explored, with a range of stakeholders, with feasible options tested prior to finalising policy options or ratifications.

From: Culturally exclusive, policies are developed without culturally diverse experience or expertise.
To: Culturally inclusive, policies are developed in a culturally inclusive way, embracing diverse knowledge systems and methods.

From: Split policy infrastructure, where policy design and modeling happen in one place, but policy delivery happens in a different place, leading to inconsistencies in implementation assumptions, and the inability for policy owners to monitor the reality of policy implementation. Modeling is often limited in scope and domain, so policy conflicts are only identified in delivery, too late to inform design.
To: Shared policy infrastructure, common and shared digital policy models are used for both modeling/design and delivery, such that there is no gap between the two. Policy owners can have higher confidence in the likely impacts of change, whilst also keeping a finger on the pulse of actual policy impacts. Policy intent and impact are monitored alongside performance and CX measures, and feedback loops loop back to policy.

From: Policy realisation is slow, as the whole lifecycle requires policy options, legislation/regulation, operational policy development, with several opportunities for misinterpretation. Policy intent can take years to even start to be realised.
To: Policy realisation is fast, policies are developed in a faster way with reference implementations resulting from rapid and test driven drafting of human and machine readable policy. This results in better rules & dramatically speeds up implementation.

From: Community engagement, engaging the public in research or testing ideas is currently ad hoc and inconsistent.
To: Community empowerment, could refer to both the ability for communities to generate new policy ideas with government, but also that public sectors attempt to devolve more decision making on policy or investment to communities.

Perhaps policy making could be more of a team sport:

All teams involved in policy work together to co-design the policy intent, instructions, success criteria and to pre-test some interventions. Then all teams work together to design, deliver and continuously improve all policy interventions, using shared policy tools, data, systems and methods, with impact monitored (intended and unintended) and evaluations triggered as required.

CC-BY: Pia Andrews, 2023

Below is a high level potential approach to the policy lifecycle, where policies are designed and delivered collaboratively, with shared policy infrastructure, and real impacts monitored,  escalated and fed into policy improvements over time, with formal evaluations able to be triggered when things go terribly wrong, not years later. Policy makers could, for instance, establish a theory of change between the vision / outcomes and the actions being taken, to ensure the indicators and measures are connected to and represented in delivery from the start. If all policies required a purpose statement, it would help implementers to ensure the delivery was aligned to the purpose and intent of the policies.

A diagram of a policy journey, from defining purpose, then outreach, definition of success, options, trials to decision point, followed by establishment of interventions, then a cycle of test/design-deliver-management, which continues till close or policy change.

CC-BY: Pia Andrews, 2023

In this model, there are only two phases in the policy lifecycle:

  • Policy purpose and authority – collaboratively developing the overarching policy purpose/intent, definition of success, and exploring options with a wide range of stakeholders, experts and those affected by the policy, including options testing, with clear definition of the measurable change(s) that should result, and the problem or opportunity the policy is trying to address. This all leads to a decision point, which varies according to the policy type above.
  • Policy interventions design & delivery – this includes the end to end co-design and management of all related policy interventions, including the program(s), services, grants, rules/legislation/regulation, or operational policy development. Policy interventions are continuously monitored individually and at a portfolio level for intended and unintended impacts, constantly improved and iterated based on feedback loops, and improvements are fed where relevant back into iterating overarching policies based on evidence and expertise.

Any form of policy could follow this model. Whether Constitutional reform, legislation/regulation reform, advice/options to Government, whole of government policies or operational policies, the intended outcome can be better realised through being a little more test-driven, participatory, multidisciplinary, iterative and through managing the whole policy lifecycle as an end to end approach with real time and continuous improvements to interventions (like services, regulations, etc), while also continuously monitoring for policy impact that can feed into policy improvements.

Proposals for reforming how policy is done are often – understandably – met with concerns about “slowing things down”. But if you look at the full journey of policy today, policy intent realisation is already quite slow. If we had a more end to end and test driven approach, we’d get better policies designed that are easier and faster to implement, which would dramatically shorten the time to realise policy intent, even if it means a little more time up front.

Solution #2: A focus and expansion of policy professionalism in the APS

We need to not only teach what all types of “good” public policy look like, but create a culture of continuous learning and improvement for policy professionals. Perhaps we could start by complementing the excellent digital, data, HR and strategy professions coordinated by the APSC with a “Policy Profession”? 🙂

But we also need to teach public service craft to all public servants, including what a healthy, politically neutral and evidence-based approach to public administration looks like, and why we aren’t achieving it as a norm across the sector. For instance, we need to have clear and consistent guidance on how to engage with Ministerial offices appropriately, so that everyone can maintain the integrity, dignity and trustworthiness expected of our public institutions. We also need clear guidance on how to promote an open APS that engages appropriately and regularly with the community, something which will hopefully be addressed in the APS Reform Agenda proposed Charter of Partnerships and Engagement.

All public servants should be confident to maintain real and long term stewardship of public good, above and beyond day to day pressures or policy objectives, and also be knowledgeable of their foundational policy accountabilities, which are found in the constitution and relevant legislation and regulations. For instance, I have been surprised and somewhat horrified to hear people talk about how AI is a problem in government because it isn’t regulated, seemingly unaware that all government systems, regardless of the technology, are subject to Administrative Law, the Privacy Act, PGPA and many other foundational policies (leg/reg). We have many checks and balances we can use to ensure good governance, we just need to be aware of and apply them more consistently across the whole sector. For example, here is a paper where I documented the “special context of government” and then applied that special context to the use of AI in government. It resulted in a holistic approach that is complementary to the concept and practice of responsible government. When everyone has a shared and common understanding of the special context and responsibilities of the public service, we have a good chance to get shared and high integrity approaches to everything we design, deliver and administer in the public sector.

Solution #3: Shared and end to end “Policy Infrastructure”

Given how long this post has become, I’ll share more on this concept in a subsequent post, but here’s a teaser 😉 Basically, whilst different teams have different tools, including distinct and separate interpretations of policy, we’ll continue to see an interpretation gap and a lack of end to end policy visibility, which impedes end to end policy management.

CC-BY: Pia Andrews, 2023

The model above includes the following elements, aligned to the broad temporal phases of policy delivery:
  • To support test-driven policy ideation and announcements (pink):
      ◦ Public engagement tools to explore, co-design & test policy options, both initially (new policies) & ongoing (continuous improvement to policies and policy interventions).
      ◦ Linked and integrated admin data for research, policy modelling & patterns monitoring, best hosted by an independent, highly trusted entity, like the ABS.
      ◦ Case law and gazettes as a utility to use for analysis and to test new ideas.
      ◦ Publicly available modeling tools for testing and exploring policy change.
  • To support test-driven policy design, development & drafting (purple):
      ◦ Consistently applied Human Impact Measurement Framework used across government, including for new policy proposals and for monitoring.
      ◦ Public repository to share policy tools, government models, measurement frameworks, synthetic population data, etc.
  • To support Parliamentary publishing and visibility (aqua):
      ◦ A linked data representation of the administrative orders to automate reporting, accountability, auditing, security, access & to streamline MOGs.
      ◦ Publicly available Policy as code (intended outcomes, legislation, models, defined target group) available at api.legislation.gov.au
      ◦ Policy catalogue where all operational and Government policies can be discovered, along with measures and transparent reporting of progress.
  • To support policy implementation (green):
      ◦ A “Citizen’s ledger” to record all decisions with traceable explanations, for auditing & citizen access.
      ◦ Policy test suite to validate legality of system outputs in gov services & regulated entities.
  • To support policy compliance, iteration & improvement over time (yellow):
      ◦ Open feedback loops for public and staff about policies & services, to drive continuous improvement and to identify and mitigate harm.
      ◦ Continuous monitoring of policy & human impacts, including dark patterns & quality of life indicators, alongside usual systems monitoring, to ensure adverse impacts are identified early and often.
      ◦ Escalation and policy iteration mechanisms to ensure issues detected are acted upon at portfolio and whole of gov levels.

What do you think? 

What are the challenges you see, and what do you think needs to be done to improve policy management end to end? How might the APS Reform agenda help drive change, and how can we all do our part to improve things? How could we better deliver policy outcomes, and better public and community outcomes? How can we close the gap between policy and delivery? Would love to hear your thoughts and examples!


David Rowe: FreeDV blog activated

As our FreeDV project work ramps up, we’ve started blogging over at freedv.org. There’s a list of posts in the News section. I’ll be posting my monthly FreeDV updates there, and there will be posts from other team members as well.


Pia Andrews: Building agile and adaptive public institutions: insights and observations

Last week, I had the delightful opportunity to host some discussion tables at the 9th Annual FSTGov Government Summit in Canberra. It was an event designed just for public servants, to explore challenges and opportunities for reform and how to better serve the public. I hosted four groups in discussions about “how to build agile and adaptive public institutions”, which included 40 public servants from around 30 departments and agencies. The challenges, insights and highlights are captured below, for broader sharing and learning 🙂

What does agile and adaptive mean?

Challenge: The groups reflected that agile and adaptive still sound a bit buzzwordy, so we explored and documented roughly what they could and should mean in the context of public institutions. 

Insights: Several participants talked about the use of Agile in their orgs (usually in the IT departments) as a development methodology, which helped others to understand that context. When we discussed how to build agile and adaptive public institutions more broadly, we identified a few useful characteristics, which made it more broadly practical:

  • Evidence-based and purpose-led at every step – too many people think agile just means fast and iterative, but you can’t iterate towards a destination that is undefined, and if you aren’t using evidence, testing and experimentation to validate and invalidate along the way, then you end up building a lot of unnecessary or even counterproductive things.
  • Actively monitored and measured – we discussed how you can’t adapt to change if you don’t have the means and mechanisms to detect change in the first place. Active monitoring is already usually done for system performance (uptime, downtime, latency, etc) and for CX (user satisfaction, etc), but if we don’t also actively monitor for measurable policy impacts and for unintended quality of life impacts, then how can you adapt the policy interventions (which includes services) to ensure policy intent is being met in a humane way? In other words, how can you do no harm if you are unable to detect it? Active monitoring is critical to being adaptive, and to prioritising decision making about investment and efforts (including in the backlog).
  • Operationally enabled for continuous change – it’s not enough to just detect change. You need to be able to respond in a timely manner. This is the heart of being both agile and adaptive. If you have a hard and unmovable plan for what you are doing, then you systemically remove the ability to naturally change or adapt according to the results of user testing, to new evidence or to new extrinsic pressures. This means change only tends to happen under the pressures of urgency, which leads to a lot of short term and techno-centric prioritisation, rather than policy or user-centric prioritisation. Modern organisations need to be operationally enabled for continuous and evidence based change. This includes delegating actual decision making as far down as possible, so that the people closest to impact and expertise are able to respond quickly to change, with oversight but also trust from their managers, a serious culture change for many teams.
  • Active feedback loops – Monitoring gives you quantitative data, but feedback gives you qualitative data, which can be key to identifying when things are not going well where the monitored measures might be missing something. Open and continuous mechanisms for feedback from end users AND from staff are key to keeping a finger on the pulse, to be adaptive to early indicators of problems before they snowball.
  • Staff capacity: necessary to experiment, innovate and think – most public servants are working 100% on the most urgent thing, with no time to stop, think, plan or try something new. Under such conditions, it is hardly surprising that public institutions are generally slow to detect and respond to change, and that it is challenging for individuals to innovate. All public servants, at all levels, could choose to free up 5% or 10% capacity to experiment, innovate and even just to think and plan. For those horrified at the idea who are looking at the exponentially growing backlogs of work, I would suggest that throwing more resources at doing the same thing (a linear response) will only continue to fail at addressing the backlog, because we need exponential solutions to exponential problems. A little time to innovate, to re-engineer, to address causal issues and to work smarter would create the conditions for more agility and adaptation at a grassroots level. The best way to scale is to support all public servants at all levels to improve their impactfulness, which can’t be done without a little capacity.
  • Operational transparency – when you have easy to access visibility of your work program (what is currently being worked on, what’s done, what’s on the roadmap, etc) as well as visibility of the outputs of your work (eg, sprint/code/policy reviews, showcases, blogs, research papers, etc), then you achieve two things that help with institutional agility and adaptability. Firstly, you create an environment where anyone can offer peer review, expertise, experience and serendipitous networks of similarly motivated collaborators, providing the ability to deliver the best possible outcomes. Secondly, you build confidence in what you are doing, which speeds up delivery and adoption because we all work essentially at the speed of trust.
  • Financial agility – all tables spoke about the challenges of “waterfall” budgeting and having to define every cent and deliverable years in advance of starting the work (through business cases, NPPs and the like). But even the Finance people in the discussion talked about wanting to shift to outcomes-based budgeting, and encouraging smaller investments in delivering an MVP rather than high risk big-bang launches of new systems at the end of the program plan. There are certainly opportunities to do small things within the cadence of budget planning that can help bring financial agility. For instance, choosing to use outcomes or epics as milestones in delivery roadmaps (rather than functionality or platform based milestones) ensures you deliver something that works, with flexibility on how to get there. We discussed sprints-based procurement, which I first saw at Dept Finance (APS), but none of the APS knew about it, so I dobbed in the incredible Sharyn Clarkson, from whom I learned about it 🙂

“We already have adopted agile in IT, what’s next?”

Challenge: We discussed how agile adoption in IT has helped, but not solved, the big challenges facing service and policy delivery in government. When IT/dev teams adopt agile methods, they are usually still in the position of receiving “business requirements” from other parts of the department who are themselves disengaged from the process of designing or delivering the service/system. We identified the fact that many departments have maintained functionally segmented structures, with multiple “product owners” emerging (eg, a business PO and a tech PO), which defeats the purpose and undermines the benefits of product management as a methodology. We also discussed how product management, where it has been adopted, still usually relates to managing platforms rather than services, so decision making, prioritisation and risk are analysed at a platform level, not at the service level, leading to cannibalistic resourcing behaviours across “product teams” that are actually part of the same service.

Insights: Ensuring each “product” being managed is at the service level provides a more realistic and impactful way to prioritise, manage risk, maximise intended policy impact and actively manage the end user experience. Funding a diverse product team, with the design, dev, ops, business and policy expertise all represented (even if only part time, such as 3 hours a week for a policy person), dramatically helps to ensure the benefits, cadences and agility of a proper, agile, test driven and continuously improved product management approach. Explicitly adopting an MVP deployment strategy is also necessary for product management to work, otherwise the team is still driven to build and deploy everything all at once, which never works.

How do we balance risk and agility?

Challenge: Some of the group discussions reflected on what real risk is and isn’t, and we determined that the public sector’s reputation for being risk averse has created a mythology that taking no action avoids risk, when the reality is that a culture of taking no action actually creates risk in a world that is continuously and unexpectedly changing around us.

Insights: Analysis of the risk of non-action should always be included in risk assessments, as well as risk to the public and those affected by the proposal. The SES reforms underway could include KPIs for executives to ensure that personal risk is well aligned to the portfolio and policy objectives, as well as aligned to the impact on the public, so that we avoid a situation where personal risk aversion can create risk for the public institutions and/or communities we serve. Risk needs to be assessed in the context of stewardship, looking at long term and short term implications, to ensure a balanced approach, that should also then be prioritised based on measurable policy and public impact.

These are just a few of the insights and observations from our discussion, what are your thoughts? How can you contribute to creating a more agile and adaptive organisation where you work? 🙂


Francois Marier: Upgrading from Debian 11 bullseye to 12 bookworm

Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful, but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.

Logcheck

I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:

apt purge rsyslog

Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:

#/var/log/syslog
#/var/log/auth.log

I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:

--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+]\ IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9. ]+]\ )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+

Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:

rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*

Finally, I had to fix any unescaped | characters in my local rules. For example error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.

Networking

After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:

apt purge isc-dhcp-client

This however meant that I needed to ensure that my network configuration software does not depend on the now-deprecated DHCP client.

On my laptop, I was already using NetworkManager for my main network interfaces and that has built-in DHCP support.

Migration to systemd-networkd

On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:

apt purge ifupdown
rm /etc/network/interfaces

putting the following in /etc/systemd/network/20-wired.network:

[Match]
Name=eno1

[Network]
DHCP=yes
MulticastDNS=yes

and then enabling/starting systemd-networkd:

systemctl enable systemd-networkd
systemctl start systemd-networkd

I also needed to install polkit:

apt install --no-install-recommends policykit-1

in order to allow systemd-networkd to set the hostname.

In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.
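
A minimal sketch of one way to do that with networkd-dispatcher (this is not necessarily the script referred to above; it assumes the networkd-dispatcher package is installed and that the rules live in the iptables.up.rules files mentioned later in this post) would be an executable file such as /etc/networkd-dispatcher/routable.d/50-firewall:

#!/bin/sh
# Hypothetical networkd-dispatcher hook: reload the saved firewall rules
# whenever an interface becomes routable.
set -e

iptables-restore < /etc/network/iptables.up.rules
ip6tables-restore < /etc/network/ip6tables.up.rules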

Migration to predictable network interface names

On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:

[Match]
Name=enp0s4

[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1

and switched to predictable network interface names by deleting these two files:

  • /etc/systemd/network/50-virtio-kernel-names.link
  • /etc/systemd/network/99-default.link

and then changing eth0 to enp0s4 in:

  • /etc/network/iptables.up.rules
  • /etc/network/ip6tables.up.rules
  • /etc/rc.local (for OpenVPN)
  • /etc/logcheck/ignore.d.*/*

Then I regenerated all initramfs:

update-initramfs -u -k all

and rebooted the virtual machine.

Giving systemd-resolved control of /etc/resolv.conf

After reading this history of DNS resolution on Linux, I decided to modernize my resolv.conf setup and let systemd-resolved handle /etc/resolv.conf.

I installed the package:

apt install systemd-resolved

and then removed no-longer-needed packages:

apt purge resolvconf avahi-daemon

I also disabled support for Link-Local Multicast Name Resolution (LLMNR) after reading this person's reasoning by putting the following in /etc/systemd/resolved.conf.d/llmnr.conf:

[Resolve]
LLMNR=no

I verified that mDNS is enabled and LLMNR is disabled:

$ resolvectl mdns
Global: yes
Link 2 (enp0s25): yes
Link 3 (wlp3s0): yes
$ resolvectl llmnr
Global: no
Link 2 (enp0s25): no
Link 3 (wlp3s0): no

Note that if you want auto-discovery of local printers using CUPS, you need to keep avahi-daemon since cups-browsed doesn't support systemd-resolved. You can verify that it works using:

sudo lpinfo --include-schemes dnssd -v

Dynamic DNS

I replaced ddclient with inadyn since it doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.
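
For reference, a minimal /etc/inadyn.conf for a no-ip.com host might look something like this sketch; the credentials and hostname are placeholders, and the provider and option names should be double-checked against the inadyn documentation rather than taken from here:

# /etc/inadyn.conf (sketch only)
period = 300

provider no-ip.com {
    username = example-user
    password = example-password
    hostname = example-host.ddns.net
}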

chkrootkit

I moved my customizations in /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:

WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.

ssh

As mentioned in Debian bug#1018106, to silence the following warnings:

sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled

I changed the following in /etc/pam.d/sshd:

--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@ session    required     pam_limits.so
 session    required     pam_env.so # [1]
 # In Debian 4.0 (etch), locale-related environment variables were moved to
 # /etc/default/locale, so read that as well.
-session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale
+session    required     pam_env.so envfile=/etc/default/locale

 # SELinux needs to intervene at login time to ensure that the process starts
 # in the proper default security context.  Only sessions which are intended

I also made the following changes to /etc/ssh/sshd_config.d/local.conf based on the advice of ssh-audit 2.9.0:

-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

Tim Serong: Still Going With The Flow

It’s time for a review of the second year of operation of our Redflow ZCell battery and Victron Energy inverter/charger system. To understand what follows it will help to read the earlier posts in this series:

In case ~12,000 words of background reading seem daunting, I’ll try to summarise the most important details here:

  • We have a 5.94kW solar array hooked up to a Victron MPPT RS solar charge controller, two Victron 5kW Multi-Plus II inverter/chargers, a Victron Cerbo GX console, and a single 10kWh Redflow ZCell battery. It works really well. We’re using most of our generated power locally, and it’s enabled us to blissfully coast through several grid power outages and various other minor glitches. The Victron gear and the ZCell were installed by Lifestyle Electrical Services.
  • Redflow batteries are excellent because you can 100% cycle them every day, and they aren’t a giant lump of lithium strapped to your house that’s impossible to put out if it bursts into flames. The catch is that they need to undergo periodic maintenance where they are completely discharged for a few hours at least every three days. If you have more than one, that’s fine because the maintenance cycles interleave (it’s all automatic). If you only have one, you can’t survive grid outages if you’re in a maintenance period, and you can’t ordinarily use the Cerbo’s Minimum State of Charge (MinSoC) setting to perpetually keep a small charge in the battery in case of emergencies. As we still only have one battery, I’ve spent a fair bit of time experimenting to mitigate this as much as I can.
  • The system itself requires a certain amount of power to run. Think of the pumps and fans in the battery, and the power used directly by the inverters and the console. On top of that a certain amount of power is simply lost to AC/DC conversion and charge/discharge inefficiencies. That’s power that comes into your house from the grid and from the sun that your loads, i.e. the things you care about running, don’t get to use. This is true of all solar PV and battery storage systems to a greater or lesser degree, but it’s not something that people always think about.

With the background out of the way we can get on to the fun stuff, including a roof replacement, an unexpected fault after a power outage followed by some mains switchboard rewiring, a small electrolyte leak, further hackery to keep a bit of charge in the battery most of the time, and finally some numbers.

The big job we did this year was replacing our concrete tile roof with colorbond steel. When we bought the house – which is in a rural area and thus a bushfire risk – we thought: “concrete brick exterior, concrete tile roof – sweet, that’s not flammable”. Unfortunately it turns out that while a tile roof works just fine to keep water out, it won’t keep embers out. There’s a gadzillion little gaps where the tiles overlap each other, and in an ember attack, embers will get up in there and ignite the fantastic amount of dust and other stuff that’s accumulated inside the ceiling over several decades, and then your house will burn down. This could be avoided by installing roof blanket insulation under the tiles, but in order to do that you have to first remove all the tiles and put them down somewhere without breaking them, then later put them all back on again. It’s a lot of work. Alternately, you can just rip them all off and replace the whole lot with nice new steel, with roof blanket insulation underneath.

The colour is called Bluegum.

Of course, you need good weather to replace a roof, and you need to take your solar panels down while it's happening. This meant we had twenty-two solar panels stacked on our back porch for three weeks of prime PV time from February 17 – March 9, 2023, which I suspect lost us a good 500kWh of power generation. Also, the roof job meant we didn't have the budget to get a second ZCell this year – for the cost of the roof replacement, we could have had three new ZCells installed – but as my wife rightly pointed out, all the battery storage in the world won't do you any good if your house burns down.

We had at least five grid power outages during the year. A few were brief, the grid being down for only a couple of minutes, but there were two longer ones in September (one for 30 minutes, one for about an hour and a half). We got through the long ones just fine with either the sun high in the sky, or charge in the battery, or both. One of the earlier short outages, though, uncovered a problem. On the morning of May 30, my wife woke up to discover there was no power, and thus no running water. Not a good thing to wake up to. This happened while I was away, because of course something like this would happen while I was away. It turns out there had been a grid outage at about 02:10, then the grid power had come back, but our system had not. The Multis ended up in some sort of fault state and were refusing to power our loads. On the console was an alarm message: "#8 – Ground relay test failed".

That doesn’t look good.

Note the times in the console messages are about 08:00. I confirmed via the logs from the VRM portal that the grid really did go out some time between 02:10 and 02:15, but after that there was nothing in the logs until 07:59, which is when my wife used the manual changeover switch to shift all our loads back to direct grid power, bypassing the Victron kit. That brought our internet connection back, along with the running water. I contacted Murray Roberts from Lifestyle Electrical and Simon Hackett for assistance, Murray logged in remotely and reset the Multis, my wife flicked the changeover switch back and everything was fine. But the question remained, what had gone wrong?

The ground relay in the Multis is there to connect neutral to ground when the grid fails. Neutral and ground are already physically connected on the grid (AC input) side of the Multis in the main switchboard, but when the grid power goes out, the Multis disconnect their inputs, which means the loads on the AC output side no longer have that fixed connection from neutral to ground. The ground relay activates in this case to provide that connection, which is necessary for correct operation of the safety switches on the power circuits in the house.

The ground relay is tested automatically by the Multis. Looking up Error 8 – Ground relay test failed on Victron’s web site indicated that either the ground relay really was faulty, or possibly there was a wiring fault or an issue with one of the loads in our house. So I did some testing. First, with the battery at 50% State of Charge (SoC), I did the following:

  1. Disconnected all loads (i.e. flipped the breaker on the output side of the Multis)
  2. Killed the mains (i.e. flipped the breaker on the input side of the Multis)
  3. Verified the system switched to inverting mode (i.e. running off the battery)
  4. Restored mains power
  5. Verified there was no error

This demonstrated that the ground relay and the Multis in general were fine. Had there been a problem at that level we would have seen an error when I restored mains power. I then reconnected the loads and repeated steps 2-5 above. Again, there was no error which indicated the problem wasn’t due to a wiring defect or short in any of the power or lighting circuits. I also re-tested with the heater on and the water pump running just in case there may have been an issue specifically with either of those devices. Again, there was no error.

The only difference between my test above and the power outage in the middle of the night was that in the middle of the night there was no charge in the battery (it was right after a maintenance cycle) and no power from the sun. So in the evening I turned off the DC isolators for the PV and deactivated my overnight scheduled grid charge so there’d be no backup power of any form in the morning. Then I repeated the test:

  1. Disconnected all loads
  2. Killed the mains.
  3. Checked the console which showed the system as “off”, as opposed to “inverting”, as there was no battery power or solar generation
  4. Restored mains power
  5. Shortly thereafter, I got the ground relay test failed error

The underlying detailed error message was “PE2 Closed”, which meant that it was seeing the relay as closed when it’s meant to be open. Our best guess is that we’d somehow hit an edge case in the Multi’s ground relay test, where they maybe tried to switch to inverting mode and activated the ground relay, then just died in that state because there was no backup power, and got confused when mains power returned. I got things running again by simply power cycling the Multis.

So it kinda wasn’t a big deal, except that if the grid went out briefly with no backup power, our loads would remain without power until one of us manually reset the system. This was arguably worse than not having the system at all, especially if it happened in the middle of the night, or when we were away from home. The fact that we didn’t hit this problem in the first year of operation is a testament to how unlikely this event is, but the fact that it could happen at all remained a problem.

One fix would have been to get a second battery, because then we’d be able to keep at least a tiny bit of backup power at all times regardless of maintenance cycles, but we’re not there yet. Happily, Simon found another fix, which was to physically connect the neutral together between the AC input and AC output sides of the Multis, then reconfigure them to use the grid code “AS4777.2:2015 AC Neutral Path externally joined”. That physical link means the load (output) side picks up the ground connection from the grid (input) side in the switchboard, and changing the grid code setting in the Multis disables the ground relay and thus the test which isn’t necessary anymore.

Murray needed to come out anyway to replace the carbon sock in the ZCell (a small item of annual maintenance) and was able to do that little bit of rewiring and configuration at the same time. I repeated my tests both with and without backup power and everything worked perfectly, i.e. the system came back immediately by itself after a grid outage with no backup power, and of course switched over to inverting just fine when there was backup power available.

This leads to the next little bit of fun. The carbon sock is a thing that sits inside the zinc electrolyte tank and helps to keep the electrolyte pH in the correct operating range. Unfortunately I didn’t manage to get a photo of one, but they look a bit like door snakes. Replacing the carbon sock means opening the case, popping one side of the Gas Handling Unit (GHU) off the tank, pulling out the old sock and putting in a new one. Here’s a picture of the ZCell with the back of the case off, indicating where the carbon sock goes:

The tank on the left (with the cooling fan) is for zinc electrolyte. The tank on the right is for bromine electrolyte. The blocky assembly of pipes going into both tanks is the GHU. The rectangular box behind that contains the electrode stacks.

When Murray popped the GHU off, he noticed that one of the larger pipes on one side had perished slightly. Thankfully he happened to have a spare GHU with him so was able to replace the assembly immediately. All was well until later that afternoon, when the battery indicated hardware failure due to “Leak 1 Trip” and shut itself down out of an abundance of caution. Upon further investigation the next day, Murray and I discovered there was a tiny split in one of the little hoses going into the GHU which was letting the electrolyte drip out.

Drip… Drip… Drip…

This small electrolyte leak was caught lower down in the battery, where the leak sensor is. Murray sucked the leaked electrolyte out of there, re-terminated that little hose and we were back in business. I was happy to learn that Redflow had obviously thought about the possibility of this type of failure and handled it. As I said to Murray at the time, we’d rather have a battery that leaks then turns itself off than a battery that catches fire!

Aside from those two interesting events, the rest of the year of operation was largely quite boring, which is exactly what one wants from a power system. As before I kept a small overnight scheduled charge and a larger late afternoon scheduled charge active on weekdays to ensure there was some power in the battery to use at peak (i.e. expensive) grid times. In spring and summer the afternoon charge is largely superfluous because the battery has usually been well filled up from the solar by then anyway, but there’s no harm in leaving it turned on. The one hack I did do during the year was to figure out a way to keep a small (I went with 15%) MinSoC in the battery at all times except for maintenance cycle evenings, and the morning after. This is more than enough to smooth out minor grid outages of a few minutes, and given our general load levels should be enough to run the house for more than an hour overnight if necessary, provided the hot water system and heating don’t decide to come on at the same time.

My earlier experiment along these lines involved a script that ran on the Cerbo twice a day to adjust scheduled charge settings in order to keep the battery at 100% SoC at all times except for peak electricity hours and maintenance cycle evenings. As mentioned in TANSTAAFL I ran that for all of July, August and most of September 2022. It worked fine, but ultimately I decided it was largely a waste of energy and money, especially when run during the winter months when there’s not much sun and you end up doing a lot of grid charging. This is a horribly inefficient way of getting power into the battery (AC to DC) versus charging the battery direct from solar PV. We did still use those scripts in the second year, but rather more judiciously, i.e. we kept an eye on the BOM forecasts as we always do, then occasionally activated the 100% charge when we knew severe weather and/or thunderstorms were on the way, those being the things most likely to cause extended grid outages. I also manually triggered maintenance on the battery earlier than strictly necessary several times when we expected severe weather in the coming days, to avoid having a maintenance cycle (and thus empty battery) coincide with potential outages. On most of those occasions this effort proved to be unnecessary. Bearing all that in mind, my general advice to anyone else with a single ZCell system (aside from maybe adding scheduled charges to time-shift expensive peak electricity) is to just leave it alone and let it do its thing. You’ll use most of your locally generated electricity onsite, you’ll save some money on your power bills, and you’ll avoid some, but not all, grid outages. This is a pretty good position to be in.

That said, I couldn’t resist messing around some more, hence my MinSoC experiment. Simon’s installation guide points out that “for correct system operation, the Settings->ESS menu ‘Min SoC’ value must be set to 0% in single-ZCell systems”. The issue here is that if MinSoC is greater than 0%, the Victron gear will try to charge the battery while the battery is simultaneously trying to empty itself during maintenance, which of course just isn’t going to work. My solution to this is the following script, which I run from a cron job on the Cerbo twice a day, once at midnight UTC and again at 06:00 UTC with the --check-maintenance flag set:
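
A rough sketch of what such a script can look like follows; this is not the actual script, and the file name, the dbus settings path and the maintenance-day check are all assumptions:

#!/bin/sh
# Hypothetical helper, e.g. /data/set-minsoc.sh - illustrative only.
MINSOC=15
# Assumed Venus OS settings path for the ESS minimum SoC:
SETTING=/Settings/CGwacs/BatteryLife/MinimumSocLimit

maintenance_today() {
  # Placeholder: the real script compares the time since the last ZCell
  # maintenance cycle against the configured maintenance time limit.
  return 1
}

if [ "$1" = "--check-maintenance" ] && maintenance_today; then
  MINSOC=0
fi

dbus -y com.victronenergy.settings "$SETTING" SetValue "$MINSOC"

And the matching cron schedule on the Cerbo (the path is again just an example):

0 0 * * * /data/set-minsoc.sh
0 6 * * * /data/set-minsoc.sh --check-maintenance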

Midnight UTC corresponds to the end of our morning peak electricity time, and 06:00 UTC corresponds to the start of our afternoon peak. What this means is that after the morning peak finishes, the MinSoC setting will cause the system to automatically charge the battery to the value specified if it’s not up there already. Given it’s after the morning peak (10:00 AEST / 11:00 AEDT) this charge will likely come from solar PV, not the grid. When the script runs again just before the afternoon peak (16:00 AEST / 17:00 AEDT), MinSoC is set to either the value specified (effectively a no-op), or zero if it’s a maintenance day. This allows the battery to be discharged correctly in the evening on maintenance days, while keeping some charge every other day in case of emergencies. Unlike the script that tries for 100% SoC, this arrangement results in far less grid charging, while still giving protection from minor outages most of the time.

In case Simon is reading this now and is thinking “FFS, I wrote ‘MinSoC must be set to 0% in single-ZCell systems’ for a reason!” I should also add a note of caution. The script above detects ZCell maintenance cycles based solely on the configured maintenance time limit and the duration since last maintenance. It does not – and cannot – take into account occasions when the user manually forces maintenance, or situations in which a ZCell for whatever reason hypothetically decides to go into maintenance of its own accord. The latter shouldn’t generally happen, but it can. The point is, if you’re running this MinSoC script from a cron job, you really do still want to keep an eye on what the battery is doing each day, in case you need to turn that setting off and disable the cron job. If you’re not up for that I will reiterate my general advice from earlier: just leave the system alone – let it do its thing and you’ll (almost always) be perfectly fine. Or, get a second ZCell and you can ignore the last several paragraphs entirely.

Now, finally, let’s look at some numbers. The year periods here are a little sloppy for irritating historical reasons. 2018-2019, 2019-2020 and 2020-2021 are all August-based due to Aurora Energy’s previous quarterly billing cycle. The 2021-2022 year starts in late September partly because I had to wait until our new electricity meter was installed in September 2021, and partly because it let me include some nice screenshots when I started writing TANSTAAFL on September 25, 2022. I’ve chosen to make this year (2022-2023) mostly sane, in that it runs from October 1, 2022 through September 30, 2023 inclusive. This is only six days offset from the previous year, but notably makes it much easier to accurately correlate data from the VRM portal with our bills from Aurora. Overall we have five consecutive non-overlapping 12 month periods that are pretty close together. It’s not perfect, but I think it’s good enough to work with for our purposes here.

Year        Grid In   Solar In   Total In   Loads    Export
2018-2019   9,031     6,682      15,713     11,827   3,886
2019-2020   9,324     6,468      15,792     12,255   3,537
2020-2021   7,582     6,347      13,929     10,358   3,571
2021-2022   8,531     5,640      14,171     10,849   754
2022-2023   8,936     5,744      14,680     11,534   799

Overall, 2022-2023 had a similar shape to 2021-2022, including the fact that in both these years we missed three weeks of solar generation in late summer. In 2022 this was due to replacing the MPPT, and in 2023 it was because we replaced the roof. In both cases our PV generation was lower than it should have been by an estimated 500-600kWh. Hopefully nothing like this happens again in future years.

All of our numbers in 2022-2023 were a bit higher than in 2021-2022. We pulled 4.75% more power from the grid, generated 1.84% more solar, the total power going into the system (grid + solar) was 3.59% higher, our loads used 6.31% more power, and we exported 5.97% more power than the previous year.

I honestly don’t know why our loads used more power this year. Here’s a table showing our consumption for both years, and the differences each month (note that September 2022 is only approximate because of how the years don’t quite line up):

Month       2022    2023    Diff
October     988     873     -115
November    866     805     -61
December    767     965     198
January     822     775     -47
February    638     721     83
March       813     911     98
April       775     1,115   340
May         953     1,098   145
June        1,073   1,149   76
July        1,118   1,103   -15
August      966     1,065   99
September   1,070   964     -116

Here’s a graph:

WTF happened in December and April?!?

Did we use more cooling this December? Did we use more heating this April and May? I dug the nearest weather station’s monthly mean minimum and maximum temperatures out of the BOM Climate Data Online tool and found that there’s maybe a degree or so variance one way or the other each month year to year, so I don’t know what I can infer from that. All I can say is that something happened in December and April, but I don’t know what.

Another interesting thing is that what I referred to as “the energy cost of the system” in TANSTAAFL has gone down. That’s the kWh figure below in the “what?” column, which is the difference between grid in + solar in – loads – export, i.e. the power consumed by the system itself. In 2021-2022, that was 2,568kWh, or about 18% of the total power that went into the system. In 2022-2023 it was only 1,838kWh, or 12.5%:

Year        Grid In   Solar In   Total In   Loads    Export   Total Out   what?
2021-2022   8,531     5,640      14,171     10,849   754      11,603      2,568
2022-2023   8,936     5,744      14,680     11,534   799      12,333      1,838

The cause of this reduction is almost certainly that we didn’t spend two and a half months doing lots of grid charging of the battery in 2022-2023. This again points to the advisability of just letting the system do its thing and not messing with it too much unless you really know you need to.

The last set of numbers I have involve actual money. Here’s what our electricity bills looked like over the past five years:

Year        From Grid   Total Bill   Cost/kWh
2018-2019   9,031       $2,278.33    $0.25
2019-2020   9,324       $2,384.79    $0.26
2020-2021   7,582       $1,921.77    $0.25
2021-2022   8,531       $1,731.40    $0.20
2022-2023   8,936       $1,989.12    $0.22

Note that cost/kWh as I have it here is simply the total dollar amount of our bills divided by the total power drawn from the grid (I’m deliberately ignoring the additional power we use that comes from the sun in this calculation). The bills themselves say “peak power costs $X, off-peak costs $Y, you get $Z back for power exported and there’s a daily supply charge of $SUCKS_TO_BE_YOU”, but that’s all noise. What ultimately matters in my opinion is what I call the effective cost per kilowatt hour, which is why those things are all smooshed together here. The important point is that with our existing solar array we were previously effectively paying about $0.25 per kWh for grid power. After getting the battery and switching to Peak & Off-Peak billing, that went down to $0.20/kWh – a reduction of 20%. Now we’ve inched back up to $0.22/kWh, but it turns out that’s just because power prices have increased. As far as I can tell Aurora Energy don’t publish historical pricing data, so as a public service, I’ll include what I’ve been able to glean from our prior bills here:

  • July 2023 onwards:
    • Daily supply charge: $1.26389
    • Peak: $0.36198/kWh
    • Off-Peak: $0.16855/kWh
    • Feed-In Tariff: $0.10869/kWh
  • July 2022 – July 2023
    • Daily supply charge: $1.09903
    • Peak: $0.33399/kWh
    • Off-Peak: $0.15551/kWh
    • Feed-In Tariff: $0.08883/kWh
  • Before July 2022:
    • Daily supply charge: $0.98
    • Peak: $0.29852
    • Off-Peak: $0.139
    • Feed-In Tariff: $0.06501

It’s nice that the feed-in tariff (i.e. what you get credited when you export power) has gone up quite a bit, but unless you’re somehow able to export 2-3x more power than you import, you’ll never get ahead of the ~20% increase in power prices over the last two years.

Having calculated the effective cost/kWh for grid power, I’m now going to do one more thing which I didn’t think to do during last year’s analysis, and that’s calculate the effective cost/kWh of running our loads, bearing in mind that they’re partially powered from the grid, and partially from the sun. I’ve managed to dig up some old Aurora bills from 2016-2017, back before we put the solar panels on. This should make for an interesting comparison.

Year        From Grid   Total Bill   Grid $/kWh   Loads    Loads $/kWh
2016-2017   17,026      $4,485.45    $0.26        17,026   $0.26
2018-2019   9,031       $2,278.33    $0.25        11,827   $0.19
2019-2020   9,324       $2,384.79    $0.26        12,255   $0.19
2020-2021   7,582       $1,921.77    $0.25        10,358   $0.19
2021-2022   8,531       $1,731.40    $0.20        10,849   $0.16
2022-2023   8,936       $1,989.12    $0.22        11,534   $0.17

The first thing to note is the horrifying 17 megawatt-hours we pulled in 2016-2017. Given the hot water and lounge room heat pump were on a separate tariff, I was able to determine that four of those megawatt-hours (i.e. about 24% of our power usage) went on heating that year. Replacing the crusty old conventional electric hot water system with a Sanden heat pump hot water service cut that in half – subsequent years showed the heating/hot water tariff using about 2MWh/year. We obviously also somehow reduced our loads by another ~3MWh/year on top of that, but I can’t find the Aurora bills for 2017-2018 so I’m not sure exactly when that drop happened. My best guess is that I probably got rid of some old, always-on computer equipment.

The second thing to note is how the cost of running the loads drops. In 2016-2017 the grid cost/kWh is the same as the loads cost/kWh, because grid power is all we had. From 2018-2021 though, the load cost/kWh drops to $0.19, a saving of about 26%. It remains there until 2021-2022 when we got the battery and it dropped again to $0.16 (another 15% or so). So the big win was certainly putting the solar panels on and swapping the hot water system, with the battery being a decent improvement on top of that.

Further wins are going to come from decreasing our power consumption. In previous posts I had mentioned the need to replace panel heaters with heat pumps, and also that some of our aging computer equipment needed upgrading. We did finally get a heat pump installed in the master bedroom this year, and we replaced the old undersized lounge room heat pump with a new correctly sized unit. This happened on June 30 though, so will have had minimal impact on this year’s figures. Likewise an always-on computer that previously pulled ~100W is now better, stronger and faster in all respects, while only pulling ~50W. That will save us ~438kWh of power per year, but given the upgrade happened in mid August, again we won’t see the full effects until later.

I’m looking forward to doing another one of these posts in a year’s time. Hopefully I will have nothing at all interesting to report.

,

Linux AustraliaCouncil Meeting November 8, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Sae Ra Germaine  (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)
  • Neill Cox (Secretary)

 

Apologies

  • None received

 

Not present

  • Marcus Herstik (Council)

 

Meeting opened at 20:03 AEST by Joel and quorum was achieved.

Minutes taken by Jonathan and Neill.

 

2. Log of correspondence

  • 26 Oct 2023 Christopher Neugebauer re Intent to establish Independent Subcommittee for PyCon AU – Joel has responded
  • 28 Oct 2023 Russell Curry re Canberra Linux kernel conference – Joel has responded
  • 31 Oct 2023 Andrew Ruthven re NZ GST – Sae Ra has responded
  • 31 Oct 2023 Dave Sparks re Physical address in Australia for Visa
  • 01 Nov 2023 Kathy Reid re Change to auDA domain name rules – request to make submission under LA banner – Joel has responded
  • 01 Nov 2023 Les Kitchen – Possible donation to Linux Australia from disincorporation of Melbourne Functional Programming Association Incorporated
  • 6 Nov 2023 Membership enquiry from Julia Topliss via the Linux Australia Website Contact Form – Jonathan has responded

 

3. Items for discussion

  • PyCon AU Steering Committee

Motion: That Linux Australia establish PyCon AU as a Steering Committee subcommittee.

Seconded: Neill Cox

Motion passed unanimously

 

  • Drupal South request for a physical address in Australia for use with visa applications
  • Canberra Linux kernel conference

Need to ensure timing fits with other conferences, particularly Everything Open.

  • Melbourne Functional Programming Association Incorporated

Jonathan will speak to Les Kitchen to discuss the process. Perhaps we can use the funds for a functional programming award, or some other activity that relates to the work of the association.

 

4. Items for noting

  • Open Source Australia business name renewal

Motion: That the Open Source Australia business name be renewed for 3 years.

Seconded: Russell Stuart

Vote was held on the mailing list. For: 6, No: 0. Motion passed.

Russell will now renew the business name.

 

  • Feedback was provided to auDA on their proposed changes to the .au Domain Administration Rules: Licensing (.au Licensing Rules). Thanks to Kathy Reid for providing feedback to LA and assisting with this process.

 

  • Russell plans to not produce the same glossy annual report as in previous years. Will produce the same reports, but in a simpler format. Sae Ra will apply LA’s corporate branding to the text. Neill will request submissions from the subcommittees.

5. Other business

 

  • Drupal subcommittee update

Canberra Community day in two weeks. Has come together well. 100 delegates attending and 10 on a wait list. Follows on/attached to the GovCMS meetup. Attendance for government employees is free, but 40% of attendees have purchased tickets, 7 of 6(!) sponsorship slots were taken up, and the budget is healthy.

 

CFP not as strong as they would normally expect, but there are good presenters.

 

Sydney in March next year. Tickets are on sale and have sold 11 early birds already without much promotion. A track chair is in place. CFP has gone out. Hoping to get the schedule up before Christmas, but people aren’t very focussed on next year.

 

Sydney sponsorships are selling well, but still early. Hoping to get as much finalised before Christmas as time pressure will be ramping up after the end of the year.

 

There is a Drupal Asia conference coming up that Drupal South would like to participate in. There are scattered Drupal communities across Asia. It would be good to have an event that they could all participate in. Dates and venue are being considered, and there are discussions with the Drupal Association about support. If the Drupal Association is not available then they will look for help from some other group (probably not LA). How interested would LA be in supporting the conference? Can Drupal South be a sponsor?

 

LA response: Interested in helping, but we only have insurance in Australia, so would have to look at feasibility and cost. LA has Australian and New Zealand bank accounts, and a Wise account. We would need more information before we could commit anything.

 

Visa address: Who for? What liability for LA if they overstay their visa? Needs to be a physical address, not a PO box. Could possibly use the LA Secretary’s address or perhaps the conference chair’s.

 

  • Admin team update

The archives of old PyCon AU websites have been extracted and sent through to the team, so this is now complete.

 

Planning an update for the LA website, want to have the email migration done before then.

 

Some thought about policy controls for managing mail under Fastmail for things like Everything Open. 

 

Existing mail will be moved to an archive namespace as part of the migration.

 

Shipping of banners and other LA gear to Rob Thomas for Everything Open? Steve can ship a pallet to Gladstone if needed.

 

Steve will submit reimbursement and a budget for next year soon.

 

Some old servers to sell; probably won’t get much for them, but Steve will make a best effort to get a reasonable return.

 

Current servers are due for disk replacement. That will be part of next year’s budget, but Steve will provide an estimate so it can be provided as part of the reporting for the upcoming AGM.

 

  • Joomla subcommittee update

No response received.

 

  • PyCon AU subcommittee update

The PyCon AU team are unable to attend in person. Richard Jones has supplied this update:

 

We’ve met with council, and unanimously decided to form a Steering Committee (rather than Independent), waiting on next steps.

We’ve heard back from MCEC, pending exact quote, and sending a budget to council, for our 2024 event.

We are actively looking for someone to step in to chair the event.

 

  • Flounder subcommittee update

Russell did respond on the night, but we could not get him online in time.

 

  • LUV  subcommittee update

No response received to invitation.

 

  • WordPress subcommittee update

The conference will be called WordCamp Sydney

UTS looks like the best venue.

Trying to form an organising committee. There are 26 volunteers to help with the organising. Roles need to be defined.

 

  • Everything Open 2024

First meeting of the organising committee has happened.

The post Council Meeting November 8, 2023 – Minutes appeared first on Linux Australia.

,

Matt PalmerPostgreSQL Encryption: The Available Options

On an episode of Postgres FM, the hosts had a (very brief) discussion of data encryption in PostgreSQL. While Postgres FM is a podcast well worth a subscribe, the hosts aren’t data security experts, and so as someone who builds a queryable database encryption system, I found the coverage to be somewhat… lacking. I figured I’d provide a more complete survey of the available options for PostgreSQL-related data encryption.

The Status Quo

By default, when you install PostgreSQL, there is no data encryption at all. That means that anyone who gets access to any part of the system can read all the data they have access to.

This is, of course, not peculiar to PostgreSQL: basically everything works much the same way.

What’s stopping an attacker from nicking off with all your data is the fact that they can’t access the database at all. The things that are acting as protection are “perimeter” defences, like putting the physical equipment running the server in a secure datacenter, firewalls to prevent internet randos connecting to the database, and strong passwords.

This is referred to as “tortoise” security – it’s tough on the outside, but soft on the inside. Once that outer shell is cracked, the delicious, delicious data is ripe for the picking, and there’s absolutely nothing to stop a miscreant from going to town and making off with everything.

It’s a good idea to plan your defenses on the assumption you’re going to get breached sooner or later. Having good defence-in-depth includes denying the attacker access to your data even if they compromise the database. This is where encryption comes in.

Storage-Layer Defences: Disk / Volume Encryption

To protect against the compromise of the storage that your database uses (physical disks, EBS volumes, and the like), it’s common to employ encryption-at-rest, such as full-disk encryption, or volume encryption. These mechanisms protect against “offline” attacks, but provide no protection while the system is actually running. And therein lies the rub: your database is always running, so encryption at rest typically doesn’t provide much value.

If you’re running physical systems, disk encryption is essential, but more to prevent accidental data loss, due to things like failing to wipe drives before disposing of them, rather than physical theft. In systems where volume encryption is only a tickbox away, it’s also worth enabling, if only to prevent inane questions from your security auditors. Relying solely on storage-layer defences, though, is very unlikely to provide any appreciable value in preventing data loss.

Database-Layer Defences: Transparent Database Encryption

If you’ve used proprietary database systems in high-security environments, you might have come across Transparent Database Encryption (TDE). There are also a couple of proprietary extensions for PostgreSQL that provide this functionality.

TDE is essentially encryption-at-rest implemented inside the database server. As such, it has much the same drawbacks as disk encryption: few real-world attacks are thwarted by it. There is a very small amount of additional protection, in that “physical” level backups (as produced by pg_basebackup) are protected, but the vast majority of attacks aren’t stopped by TDE. Any attacker who can access the database while it’s running can just ask for an SQL-level dump of the stored data, and they’ll get the unencrypted data quick as you like.

Application-Layer Defences: Field Encryption

If you want to take the database out of the threat landscape, you really need to encrypt sensitive data before it even gets near the database. This is the realm of field encryption, more commonly known as application-level encryption.

This technique involves encrypting each field of data before it is sent to be stored in the database, and then decrypting it again after it’s retrieved from the database. Anyone who gets the data from the database directly, whether via a backup or a direct connection, is out of luck: they can’t decrypt the data, and therefore it’s worthless.

There are, of course, some limitations of this technique.

For starters, every ORM and data mapper out there has rolled their own encryption format, meaning that there’s basically zero interoperability. This isn’t a problem if you build everything that accesses the database using a single framework, but if you ever feel the need to migrate, or use the database from multiple codebases, you’re likely in for a rough time.

The other big problem of traditional application-level encryption is that, when the database can’t understand what data it’s storing, it can’t run queries against that data. So if you want to encrypt, say, your users’ dates of birth, but you also need to be able to query on that field, you need to choose between one or the other: you can’t have both at the same time.

You may think to yourself, “but this isn’t any good, an attacker that breaks into my application can still steal all my data!”. That is true, but security is never binary. The name of the game is reducing the attack surface, making it harder for an attacker to succeed. If you leave all the data unencrypted in the database, an attacker can steal all your data by breaking into the database or by breaking into the application. Encrypting the data reduces the attacker’s options, and allows you to focus your resources on hardening the application against attack, safe in the knowledge that an attacker who gets into the database directly isn’t going to get anything valuable.

Sidenote: The Curious Case of pgcrypto

PostgreSQL ships a “contrib” module called pgcrypto, which provides encryption and decryption functions. This sounds ideal to use for encrypting data within our applications, as it’s available no matter what we’re using to write our application. It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you’re using, which produces the same output.

However, I don’t recommend ever using pgcrypto’s data encryption functions, and I doubt you will find many other cryptographic engineers who will, either.

First up, and most horrifyingly, it requires you to pass the long-term keys to the database server. If there’s an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed. Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result.

Less scary, but still very concerning, is that pgcrypto’s available cryptography is, to put it mildly, antiquated. We have a lot of newer, safer, and faster techniques for data encryption that aren’t available in pgcrypto. This means that if you do use it, you’re leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls.

In short: friends don’t let friends use pgcrypto.

The Future: Enquo

All this brings us to the project I run: Enquo. It takes application-layer encryption to a new level, by providing a language- and framework-agnostic cryptosystem that also enables encrypted data to be efficiently queried by the database.

So, you can encrypt your users’ dates of birth, in such a way that anyone with the appropriate keys can query the database to return, say, all users over the age of 18, but an attacker just sees unintelligible gibberish. This should greatly increase the amount of data that can be encrypted, and as the Enquo project expands its available data types and supported languages, the coverage of encrypted data will grow and grow. My eventual goal is to encrypt all data, all the time.

If this appeals to you, visit enquo.org to use or contribute to the open source project, or EnquoDB.com for commercial support and hosted database options.

,

Simon LyallAudiobooks – October 2023

Paved Paradise: How Parking Explains the World by Henry Grabar

Parking, it’s history and economics, land use and zoning. A fun, accessible book that might be good introduction to those new to the topic. 3/5

American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer by Kai Bird and Martin J. Sherwin

The source of the recent film. Comprehensive although not straying far from the subject and an easy read. 4/5

How 1954 changed History by Michael Flamm

A short series of lectures about major (mainly US) events during 1954 from medicine to politics to popular culture. A nice quick read. 3/5

Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration by Ed Catmull with Amy Wallace

A combination memoir, company history and management advice book. Works well for all 3. 4/5

My Rating System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all


,

yifeiMonitor Upstream Updates for OpenBSD Packages

As an OpenBSD package maintainer, I often need to watch for updates on packages I maintain. I used to do this using repology.org, which has the benefit of tracking package updates in many distros, but it can be unreliable for OpenBSD packages due to network delay and parsing problems.

A better way to watch for upstream updates is OpenBSD’s portroach service: it monitors new upstream releases and provides a JSON API that can be combined with jq(1) to produce clear information.

Querying portroach #

To find all packages that can be updated for a given maintainer, first find the maintainer page on portroach; you can search by maintainer name, and the page’s URL should be similar to the following:

https://portroach.openbsd.org/yifei%20zhan%20%3Copenbsd@zhan.science%3E.html

Now to get JSON output, add /json/ to the URL and change the suffix from .html to .json:

https://portroach.openbsd.org/json/yifei%20zhan%20%3Copenbsd@zhan.science%3E.json 

This endpoint will return all the packages maintained by a given maintainer, regardless of whether they have an update or not. To show only the packages that can be updated, jq(1) can be used as a powerful filter and formatter:

$ ftp -Vo - https://portroach.openbsd.org/json/yifei%20zhan%20%3Copenbsd@zhan.science%3E.json\
| jq -r '.[] | select(.newver!=null) | (.fullpkgpath)+": "+(.ver)+" -> "+(.newver)'

Which prints a nice list of the packages I need to work on:

converters/opencc: 1.1.6 -> er.1.1.7
inputmethods/fcitx: 5.0.23 -> 5.1.1
inputmethods/fcitx-chinese-addons: 5.0.17 -> 5.1.1
inputmethods/fcitx-config-qt: 5.0.17 -> 5.1.1
inputmethods/fcitx-gtk: 5.0.23 -> 5.1.0
inputmethods/fcitx-lua: 5.0.10 -> 5.0.11
inputmethods/fcitx-qt: 5.0.17 -> 5.1.1
inputmethods/fcitx-table-extra: 5.0.13 -> 5.1.0
inputmethods/libime: 1.0.17 -> 1.1.2
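
For a periodic reminder rather than an on-demand check, the same pipeline can be wrapped in a small script and run from cron(8). This is only an illustrative convenience wrapper (the script name and mail recipient are placeholders), and since the hosted portroach is a community resource, running it weekly is plenty:

#!/bin/sh
# check-port-updates.sh (example name): mail myself any pending updates
url='https://portroach.openbsd.org/json/yifei%20zhan%20%3Copenbsd@zhan.science%3E.json'
out=$(ftp -Vo - "$url" | \
    jq -r '.[] | select(.newver!=null) | (.fullpkgpath)+": "+(.ver)+" -> "+(.newver)')
[ -n "$out" ] && printf '%s\n' "$out" | mail -s 'ports with upstream updates' me@example.org
# run weekly from crontab(5), e.g.: 0 8 * * 1 /path/to/check-port-updates.sh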

Closing note #

Please be mindful that portroach is not infallible; it may produce inaccurate results for some upstreams. The hosted version is a community resource, so please don’t abuse it. If you want, you can self-host it with the source code from its GitHub repository.

,

Russell CokerLinks October 2023

The Daily Kos has an interesting article about a new more effective method of desalination [1].

Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award?

A Hacker News comment has an interesting explanation of Unix signals [3].

Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores.

Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6].

This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones.

Tara Barnett’s Everything Open lecture Swiss Army GLAM had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development.

Actually Hardcore Overclocking has an interesting youtube video about the differences between x8 and x14 DDR4 DIMMs [9].

Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won’t be easy or pleasant.

,

Russell CokerHello Kitty

I’ve just discovered a new xterm replacement named Kitty [1]. It boasts about being faster due to threading and using the GPU and it does appear faster on some of my systems but that’s not why I like it.

A trend in terminal programs in recent years has been tabbed operation so you can have multiple sessions in one OS window; this is something I’ve never liked, just as I’ve never liked using Screen to switch between sessions when I had the option of just having multiple sessions on screen. The feature that I like most about Kitty is the ability to have a grid-based layout of sessions in one OS window. Instead of having 16 OS windows on my workstation, or 4 OS windows on a laptop, with different entries in the window list and the possibility of them getting messed up if the OS momentarily gets confused about the screen size (a common issue with laptop use), I can just have 1 Kitty window that has all the sessions running.

Kitty has “Kitten” processes that can do various things; one is icat, which displays an image file to the terminal and leaves it in the scroll-back buffer. I put the following shell code in one of the scripts called from .bashrc to set up an alias for icat.

if [ "$TERM" == "xterm-kitty" ]; then
  alias icat='kitty +kitten icat'
fi
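
With that in place, displaying an image inline is as simple as the following (the file name is just an example); kitty +kitten icat also works directly without the alias:

icat ~/Pictures/photo.jpg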

The kitten interface can be supported by other programs. The version of the mpv video player in Debian/Unstable has a --vo=kitty option which is an interesting feature. However playing a video in a Kitty window that takes up 1/4 of the screen on my laptop takes a bit over 100% of a CPU core for mpv and about 10% to 20% for Kitty which gives a total of about 120% CPU use on my i5-6300U compared to about 20% for mpv using wayland directly. The option to make it talk to Kitty via shared memory doesn’t improve things.

Using this effectively requires installing the kitty-terminfo package on every system you might ssh to. But you can set the term type to xterm-256color when logged in to a system without the kitty terminfo installed. The fact that icat and presumably other advanced terminal functions work over ssh by default is a security concern, but this also works with Konsole and will presumably be added to other terminal emulators so it’s a widespread problem that needs attention.
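
For example, either of the following works around a missing terminfo entry: the first is the manual fallback mentioned above, and the second uses Kitty's bundled ssh kitten, which copies the terminfo across for you (remote-host is a placeholder):

# on the remote system that lacks the kitty-terminfo package
export TERM=xterm-256color

# or, from the local machine, let the ssh kitten handle it
kitty +kitten ssh remote-host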

There is support for desktop notifications in the Kitty terminal encoding [2]. One of the things I’m interested in at the moment is how to best manage notifications on converged systems (phone and desktop) so this is something I’ll have to investigate.

Overall Kitty has some great features and definitely has the potential to improve productivity for some work patterns. There are some security concerns that it raises through closer integration between systems and between programs, but many of them aren’t exclusive to Kitty.

,

Russell CokerBluetooth Versions and PineTime

I’ve done some tests with the PineTime [1] on different Android phones. On a Huawei Mate 10 Pro (from 2017 with Bluetooth 4.2) it has very slow transfer speeds for updating the firmware (less than 1KB/s) and unreliable connection to the phone. On a Huawei Nova 7i (from 2020 with Bluetooth 4.2) it has slow transfer speeds (about 2KB/s) and a more reliable connection to the phone. On a Pixel 4 XL (from 2019 with Bluetooth 5.0) it has very fast speeds for updating the firmware and also a reliable link.

Version 5 of the Bluetooth standard [2] was released in 2016 so it’s a little disappointing that the Mate 10 Pro doesn’t support it and very disappointing that the Nova 7i doesn’t support it either. Bluetooth 5 adds higher speeds and longer range for LE (Low Energy) modes which are used for things like smart watches.

It’s extremely disappointing that the PinePhonePro [3] only supports Bluetooth 4.1. It’s a phone released in 2021 that doesn’t even have Bluetooth 4.2 which was released in 2014.

For laptops the Thinkpad X1 Carbon 7th Gen released in 2019 [4] was the first in the X1 Carbon series to have Bluetooth 5. So I will probably be limited in my ability to use my personal laptop or PinePhone for testing Linux software that talks to the PineTime, and I’ll have to use a laptop borrowed from work.

,

Russell CokerBrother MFC-J4440DW Printer

I just had to set up a Brother MFC-J4440DW for a relative. They were replacing an old HP laser printer that mysteriously stopped printing as dark as it should; I don’t know whether the HP printer had worn out or if the HP firmware decided to hobble it to make them buy a new printer. In either case HP is well known for shady behaviour with their printer firmware and should be avoided.

The new Brother printer has problems when using wifi and auto DNS. I don’t know how much of that was due to the printer itself and how much was due to the wifi AP provided by Foxtel. Anyway it works better with Ethernet and a fixed address (the wifi AP didn’t allow me to set a fixed address). I think the main thing was configuring CUPS to connect via the IP address and not use Avahi etc.
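
For reference, a CUPS queue pointed straight at the printer's IP address can be created along these lines, using driverless IPP rather than Avahi discovery; the queue name and the ipp path are assumptions, and 10.0.0.3 is just the address used for scanning later in this post:

lpadmin -p MFCJ4440DW -E -v ipp://10.0.0.3/ipp/print -m everywhere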

One problem I had with printing was that programs like Chrome and LibreOffice would hang for about a minute before printing, that turned out to be due to /etc/cups/lpoptions having the old printer (which had been removed) listed as the default. It would be nice if the web configuration for cups would change that when I set the default printer.
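
If you hit the same hang, checking for and clearing a stale default looks something like this (the queue name is an example, and lpoptions should be run as root so it updates /etc/cups/lpoptions rather than a per-user file):

lpstat -d                             # show the current default destination
grep -i default /etc/cups/lpoptions   # look for a removed printer still set as default
lpoptions -d MFCJ4440DW               # point the default at the new queue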

CUPS doesn’t seem to support USB printing. If it is possible to get this printer to print via USB then I welcome a comment describing how to do it.

Scanning only seems to work on Ethernet, not on USB; the command for scanning that I ended up with was “scanimage -d escl:http://10.0.0.3:80“. Again I welcome comments from anyone who has had success in scanning via USB. There are probably some Linux users who would find it really inconvenient to set up a network interface specifically for printing. It’s easy for me as I have a pile of spare ethernet cards and a box of cables, but some people would have to buy this. Also it’s disappointing that Brother didn’t include an Ethernet cable or a USB cable in the box. But if that makes it cheaper I can deal with that. The resolution for scanning is only 832*1163 and it’s black and white. I think that generally scanning in printers is a bad idea; taking a photo with a phone is a better way of scanning documents.

Generally this printer works well and is cheap at only $299, a price for disposable hardware by today’s standards.

There are Debian packages from Brother for the printer. The scanner package looks like it just configures scanimage, and I’m not sure whether the stock version of CUPS in Debian will do it without the Brother package. One thing I found interesting is that the package mfcj4440dwpdrv has the following shell code in the postinst to label for SE Linux:

if [ "$(which semanage 2> /dev/null)" != '' ];then
semanage fcontext -a -t cupsd_rw_etc_t '/opt/brother/Printers/mfcj4440dw/inf(/.*)?'
semanage fcontext -a -t bin_t          '/opt/brother/Printers/mfcj4440dw/lpd(/.*)?'
semanage fcontext -a -t bin_t          '/opt/brother/Printers/mfcj4440dw/cupswrapper(/.*)?'
if [ "$(which restorecon 2> /dev/null)" != '' ];then
restorecon -R /opt/brother/Printers/mfcj4440dw
fi
fi

This is the first time I’ve seen a Debian package from a hardware vendor with SE Linux specific code. I can’t just add those rules to the Debian policy, as that would make the semanage commands fail (you can’t add an identical context spec twice), which would break the postinst.

In the latest policy I’m uploading to Debian/Unstable (version 2.20231010-1) there are the following 3 lines to deal with this; the first was already there for some time and the other 2 I just added:

/opt/brother/Printers/([^/]+/)?inf(/.*)?        gen_context(system_u:object_r:cupsd_rw_etc_t,s0)
/opt/brother/Printers/[^/]+/lpd(/.*)?   gen_context(system_u:object_r:bin_t,s0)
/opt/brother/Printers/[^/]+/cupswrapper(/.*)?   gen_context(system_u:object_r:bin_t,s0)

The Brother employee(s) who added the SE Linux code to their package are welcome to connect to me on LinkedIn.

,

Russell CokerMore About the PineTime

Since my initial review of the PineTime 10 days ago [1] I’ve used it in more situations. My initial tests were done connecting to a Huawei Nova 7i [2], I am now using it with a Huawei Mate 10 Pro. I’ve also upgraded the PineTime from version 1.11 (from memory) of the Infinitime software that runs on the watch to version 1.13 [3]. To upgrade it I had to download the file pinetime-mcuboot-app-dfu-1.13.0.zip to the Android phone and then use the File Installer option of the GadgetBridge Android app to upload it. The zip file does NOT need to be extracted first, I don’t know if GadgetBridge extracts it before upload or if the PineTime firmware has a copy of unzip, but it just works.

Version 1.13 is purported to take less battery, I haven’t directly verified this as I turned on the new feature of measuring my pulse 24*7 which significantly increases battery use. The end result is that the battery is being used up at about the same rate as before, overall adding a new battery-hungry feature while reducing battery use for other things to compensate is a good thing and strongly suggests that battery use has decreased overall.

I have noticed that now with a different phone and different version of the firmware it doesn’t reconnect as reliably. Sometimes I need to turn bluetooth on the watch off and on before it works (which indicates an issue with the firmware) and sometimes I need to turn bluetooth off and on on the phone which indicates a phone issue. Also I often unlock my phone to find the GadgetBridge notification saying that it’s disconnected and it usually connects fine, but I get the impression it’s often disconnected. Does the Mate 10 Pro have a problem that triggers a bug in the PineTime? Does the 1.13 version of InfiniTime have a problem that triggers a bug in the Mate 10 Pro? Are they both independently buggy? Is the new version of InfiniTime just disconnecting when it’s not doing stuff to save battery and triggering bugs that weren’t obvious before?

I’ve tested the media control which basically works, sometimes it gets out of sync and displays the name of the previous track which is annoying. The PineTime is IP67 rated and there are reports on Reddit of people wearing it in the shower and swimming pool. I wouldn’t recommend those things although it should work OK. It might be an option for controlling music when in the bath or when having a pool party.

When the watch is running normally and displays a new notification it’s not possible to swipe it away. You have to go to the notifications menu afterwards to swipe them which I find annoying. Also the notification of an inbound call remains in the notification list indefinitely while I think a more appropriate action is to have it disappear in an amount of time where it’s already been answered or gone to voicemail. Voicemail timeouts are as low as 15 seconds so having the notification disappear after 1 minute would be reasonable.

I have configured my PineTime to take 2 taps on the screen to wake up. I previously had it set to 1 tap and had problems with accidentally doing something it registered as a tap while in bed and waking me up. Also I found that if I want to turn the screen on when my hands are dirty so I don’t want to touch it with a finger then tapping it on my nose works well. Apparently it is programmed to ignore taps on large areas so I can’t wake it with my elbow.

I’ve set up a PineTime for an elderly relative who is greatly enjoying it. I don’t expect them to flash new firmware or do any other complex things, but they are doing well with using the device. They are considering getting a different band as they don’t like rubber. I’m sure their local jeweler has some leather and metal bands that could fit. There is a design on Thingiverse for a PineTime case [4], which could be used for making an adaptor to fit a PineTime to a greatly different type of band, an instrument console, etc.

Generally I think the PineTime is an OK smart watch for someone who’s not into FOSS for its own sake. My relative could have been happy with a slightly cheaper watch, but it’s still significantly cheaper than the Samsung and Apple options so it’s not particularly expensive. A benefit for them is that, having the same type of smart watch as me, they will get better tech support.

yifeiEncrypted and Version Controlled File Sync with git-annex(1)

git-annex(1) is a versatile and cross-platform tool built on top of git. It can sync, back up, and archive files, and it provides many useful primitives for building customized workflows and storage systems; for example, by combining git-annex with gcrypt, it’s possible to fully encrypt data stored on a remote.

Partially due to its versatility, it has a steeper learning curve than some other tools in this field, and it took me some time to figure out how to make it work for me. Here is a quick guide that documents my journey.

Prerequisite and Installation #

git-annex and git-remote-gcrypt are available from many package managers; to install them on Debian:

# apt-get install git-annex git-remote-gcrypt

git-annex supports multiple encryption modes; I will be going with the default hybrid mode since it allows more keys to be added in future. In this mode, data is encrypted with gpg using a symmetric key generated during remote initialization; that symmetric key is in turn encrypted with a gpg public key specified during initremote, and the encrypted key is checked into the git repository. This is useful when multiple users wish to access the same encrypted repository, but doing so is outside the scope of this post; for that and other advanced operations, read git-annex’s gcrypt guide for more details.

I opt to create a new key for this use case, but any gpg key will do.
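
If you do want a dedicated key, something like the following will do; the key name is only an example, and --quick-generate-key needs a reasonably recent GnuPG:

laptop$ gpg --quick-generate-key "myrepo annex key"
laptop$ gpg --list-keys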

Setup Local Repository #

The first step is to create a local repository as base, which will then be synced to remotes:

laptop$ git init myrepo
laptop$ cd myrepo
laptop$ git annex init

To check in and commit some files into it:

laptop$ touch example
laptop$ git annex add .
laptop$ git commit -a -m 'test'

Setup Encrypted Remote #

First, create a bare repository on the server; it will hold the encrypted data later:

server$ git init --bare myrepo-remote

Then, on the local machine, add the newly created repository on the server as an encrypted remote; it’s good practice to give it a descriptive name:

(To find the KEYID, run gpg --list-key)

laptop$ git annex initremote homeserver type=gcrypt gitrepo=rsync://server_hostname/path/to/myrepo-remote keyid=$KEYID
gcrypt: Repository not found: rsync://server_hostname/path/to/myrepo-remote
gcrypt: Setting up new repository
gcrypt: Remote ID is :id:XXX
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Compressing objects: 100% (3/3), done.
Total 5 (delta 0), reused 0 (delta 0), pack-reused 0
gcrypt: Encrypting to:  -r XXX
gcrypt: Requesting manifest signature
To gcrypt::rsync://server_hostname/path/to/myrepo-remote
 * [new branch]      git-annex -> git-annex
ok
(recording state in git...)

With this done, it should now be possible to sync local repository to the remote:

laptop$ git annex sync --content
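
To confirm that the encrypted remote now holds a copy of the content, git-annex can list the known locations of a file:

laptop$ git annex whereis example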

Work with Multiple Local Machines #

To access this encrypted repository from another machine (e.g. a desktop PC), first set up the gpg key on that machine, then clone and decrypt the repository:

desktop$ git clone gcrypt::rsync://server_hostname/path/to/myrepo-remote myrepo
Cloning into 'myrepo'...
gcrypt: Decrypting manifest
gpg: Good signature from "omnirepo (annex)" [unknown]
gcrypt: Remote ID is :id:XXX
Receiving objects: 100% (5/5), done.

The sync command will also work on the new machine for sending modified files to the remote:

desktop$ git annex sync --content
commit 
[master cec51a4] git-annex in XXX
 1 file changed, 1 insertion(+)
ok
pull origin 
gcrypt: Decrypting manifest
ok
push origin 
gcrypt: Decrypting manifest
Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Compressing objects: 100% (4/4), done.
Total 6 (delta 0), reused 0 (delta 0), pack-reused 0
gcrypt: Encrypting to: --throw-keyids --default-recipient-self
gcrypt: Requesting manifest signature
To gcrypt::rsync://server_hostname/path/to/myrepo-remote
   bbed528..cec51a4  master -> synced/master
   c387409..575869a  git-annex -> synced/git-annex
ok

Troubleshooting #

Cannot write to annex file #

Annexed files are set to read-only (locked) to prevent accidental modification; run git annex unlock locked_file to unlock the file first.

Remove Unwanted Remote #

git-annex manages its remotes via git; to delete a remote, run git remote remove oldremote

,

Tim RileyOpen source status update, September 2023

With the two big PRs introducing our next generation of asset support merged (here and here), September was a month for rapid iteration and working towards getting assets out in a 2.1 beta release.

The pace was lively! Towards the end of the month, Luca and I were trading PRs and code reviews on almost a daily basis. Thanks to our opposing timezones, Hanami was being written nearly 24h a day!

Assorted small things

Most of the work was fairly minor: an error logging fix, some test updates for the new assets, error handling around asset manifests, and a bit of zeitwerkin’.

Making our better errors better

There was one interesting piece though. Earlier in this release cycle (back in June!), I overhauled our user-facing error handling. I added a middleware to catch errors and render static error pages intended for display in production. As part of this change, I adjusted our router to raise exceptions for not found routes: doing this would allow the error to be caught and a proper 404 page displayed. So that was production sorted. For development, we integrated the venerable better_errors, wrapped by our own hanami-webconsole gem.

It was only some months later that we realised 404s in development were being returned as 500s. This turned out to be because better_errors defaults to a 500 response code at all times. In its middleware:

status_code = 500
# ...
response = Rack::Response.new(content, status_code, headers)

Well, maybe not quite at all times. The lines right beneath status_code = 500:

status_code = 500
if defined?(ActionDispatch::ExceptionWrapper) && exception
  status_code = ActionDispatch::ExceptionWrapper.new(env, exception).status_code
end

Looks like Ruby on Rails gets its own little exception carved out here, via some hard-coded constant checks that reach deep inside Rails internals. This will allow better_errors to return a 404 for a not found error in Rails, but not in any other Ruby framework.

This is not a new change. It arrived over ten years ago, and I can hardly blame the authors for wanting a way to make this work nicely with the predominant Ruby application framework of the day.

Today, however, is a different day! We’re here to change the Ruby framework balance. So we needed a way to make this work for Hanami. What didn’t feel feasible at this point was a significant change to better_errors: our time was limited and at best we had the appetite only for a minor tactical fix.

Our resulting fix in webconsole (along with this counterpart in hanami) does monkey patch better_errors, but I was very pleased with how gently we could do it. The patch is tiny:

module BetterErrorsExtension
  # The BetterErrors middleware always returns a 500 status when rescuing an exception
  # (outside of Rails). This is not always appropriate, such as for a
  # `Hanami::Router::NotFoundError`, which should be a 404.
  #
  # To account for this, gently patch `BetterErrors::Middleware#show_error_page` (which is
  # called only when an exception has been rescued) to pass that rescued exception to a proc
  # we inject into the rack env here in our own middleware. This allows our middleware to know
  # about the exception class and provide the correct status code after BetterErrors is done
  # with its job.
  #
  # @see Webconsole::Middleware#call
  def show_error_page(env, exception = nil)
    if (capture_proc = env[CAPTURE_EXCEPTION_PROC_KEY])
      capture_proc.call(exception)
    end

    super
  end
end
BetterErrors::Middleware.prepend(BetterErrorsExtension)

In order to know which response code to use for the page, we need access to the exception that better_errors is catching. Right now it provides no hooks to expose that. So instead we prepend some behaviour in front of their #show_error_page, which is only called once an error is about to be rendered. We look for a proc on the rack env, and if one is there, we pass the exception to it, and then let better_errors get on with the rest of its normal work.

Then, in our own webconsole middleware, we set that proc to capture the exception, using Ruby closure semantics to assign that exception directly to a local variable:

def call(env)
  rescued_exception = nil
  env[CAPTURE_EXCEPTION_PROC_KEY] = -> ex { rescued_exception = ex }

  # ...
end

After that, we call the better_errors middleware, letting it do its own thing:

def call(env)
  rescued_exception = nil
  env[CAPTURE_EXCEPTION_PROC_KEY] = -> ex { rescued_exception = ex }

  status, headers, body = @better_errors.call(env)
end

And then once that is done, we can use the exception (if we have one) to fetch an appropriate response code from the Hanami app config, and then override better_errors’ response code with our own:

def call(env)
  rescued_exception = nil
  env[CAPTURE_EXCEPTION_PROC_KEY] = -> ex { rescued_exception = ex }

  status, headers, body = @better_errors.call(env)

  # Replace the BetterErrors status with a properly configured one for the Hanami app
  if rescued_exception
    status = Rack::Utils.status_code(
      @config.render_error_responses[rescued_exception.class.name]
    )
  end

  [status, headers, body]
end

That’s it! Given how light touch this is, and how stable better_errors is, I’m confident this will serve our purposes quite well for now.

We don’t want to live with this forever, however. In our future I see a fit-for-purpose developer errors reporter that is fully integrated with Hanami’s developer experience. Given current timelines, this probably won’t come for at least 12 months, so if this is something you’re interested in helping with, please reach out!

Kickstarting dry-operation!

While the work on Hanami continued, I also helped kickstart work on a new dry-rb gem: dry-operation! Serving as the successor to dry-transaction, with dry-operation we’ll introduce significant new flexibility to modelling composable business operations, while still keeping a high-level API that presents their key flows in an easy to follow way.

Much of the month was spent ideating on various approaches with Marc Busqué and Brooke Kuhlmann, and then by the end of the month, Marc was already underway with the development work. Go check out Marc’s September update for a little more of the background on this.

I’m excited we’re finally providing a bridge to the future for dry-transaction, and at the same time building one of the final pieces of the puzzle for full stack Hanami apps. This is an interesting one for me personally, too, since I’m acting more as a “product manager” for this effort, with Marc doing most of the direct development work. Marc’s been in the dry-rb/Hanami orbit for a while now, and I’m excited for this opportunity for him to step up his contributions. More on this in the future!

Releasing Hanami 2.1.0.beta2!

After all of this, we capped the month off with the release of Hanami 2.1.0.beta2! This was a big step: our first beta to include both views and assets together. In the time since this release we’ve already learnt a ton and found ways to take things to another level… but more on that next month. 😉 See you then!

,

Russell CokerThe PineTime

I have just got a PineTime smart watch [1] from Pine64. They cost $US27 each, which ended up as $144.63 Australian for three including postage when I ordered on the 16th of September. It’s annoying that you can’t order more than three at a time to reduce postage costs.

The Australian online store Kogan has smart watches starting at about $15 [2] with Bluetooth and support for phone notifications, so the $48.21 for a PineTime doesn’t compare well on just price and features. The watches Kogan sells start getting into high resolution at around the $25 price point, and many of them have features like 24*7 heart monitoring that the PineTime lacks (it just measures when you request it). No-one would order a PineTime for being cheap or having lots of features; you order it because you want open hardware that allows you to do things your way. Also, the PineTime isn’t going to be orphaned, while it’s likely that in a few years most of the cheap watches sold by Kogan etc won’t support new phones running the latest version of Android.

The screen of the PineTime is 240*240 resolution (about 260dpi) with 64k colors. The screen resolution is lower than some high-end smart watches but higher than most phones and almost all monitors. I doubt that much benefit could be gained from higher resolution. Even on minimum brightness the screen is easy to read on all but the brightest sunny days. The compute capabilities are 4.5MB of flash storage, 64k of RAM, and a 64MHz CPU – this can’t run Linux and nothing like it will run Linux for a long time.

I’ve had the PineTime for 6 days now, I charged it once and it’s now at 55% battery. It looks like it will last close to 2 weeks on a single charge and it’s claimed that a newer firmware will make the battery last longer.

Software

The main Android app for using with the PineTime is GadgetBridge which I installed from the f-droid repository. It had lots of click-through menus for allowing access to various Android features (contacts, bluetooth, draw over foreground, location, and more) but after that it was easy to setup. It was the first bluetooth device I’ve used which had a 6 digit PIN for connecting to a phone.

Initially I used the PineTime with my Huawei Nova 7i [3]. The aim is to eventually have it run from my PinePhonePro but my test of the PinePhonePro didn’t go as well as hoped [4]. Now I’m using it on my Huawei Mate 10 Pro.

It comes with InfiniTime [5] installed as the default firmware; mine had 1.11.0, which is a fairly recent version. I will probably upgrade it soon to get the better power optimisation and weather alerts in the watch face. I don’t have any plans to use different watch firmware and I don’t have any plans to contribute to firmware development – I just can’t hack on every FOSS project around; it’s better to make big contributions to a small number of projects.

For people who don’t want the default firmware the Wasp-OS project seems interesting as it’s written in Python [6], I don’t like Python but it’s very popular. Python is particularly popular in ML development, it will be interesting to see if Wasp-OS becomes a preferred platform for smart watches that talk to GPT servers.

Generally the software works well, one annoyance is that when a notification goes away on the phone it remains on the PineTime and has to be manually dismissed. It would be nice if clearing notifications on the phone would clear them on the PineTime too.

The music control works with RocketPlayer on Android, it displays the track name and has options for pause/play and skipping forward and backward one track. Annoyingly the current firmware doesn’t allow configuring the main screens, from the primary screen you swipe down for notifications, right for settings, up for menus, and there’s nothing defined for swipe left. I’d like to make swipe left the command to get to music control.

Hardware

It has a detachable band that appears to be within the common range of watch bands. According to the PineTime Wiki page [7] there is a selection of alternative bands that will fit it, but some don’t because the band is recessed into the watch.

It is IP67 rated, which means you can probably wear it while swimming. The charging contacts are exposed on the bottom of the case, which means that any chemicals left by pool water can be cleaned off, and as they are apparently not expected to be harmed by sweat and skin oil there shouldn’t be a problem charging it. I have significant experience using a Samsung Galaxy S5 Mini, which is rated at IP67, in swimming pools. I had two problems with the S5 Mini when getting out of the pool: firstly, water in the headphone socket made the phone consider that it was in headphone mode and turn off the speakers; secondly, it took hours to become dry enough to charge, and after many swims the charge rate dropped, presumably due to oxide on the contacts. There are reports of success when swimming with a PineTime.

Generally it feels well made and appears more solid than the cheapest Kogan devices.

Conclusion

If I wanted monitoring for medical reasons then I would choose a different smart watch. I’ve read about people doing things like tracking their body stats 24*7 and trying to discover useful things, the PineTime is not a good option for BioHacking type use. However if I did have a need for such things I’d probably just buy a second smart watch and have one on each wrist.

The PineTime generally works well. It’s a pity it has fewer hardware features than closed devices that are cheaper. But having a firmware that can be continually improved by the community is good.

The continually expanding use of mobile phone technology devices for custom use in corporations (such as mobile phone in custom case for scanning prices etc in a supermarket) has some potential for use with this. I can imagine someone adding some custom features to a PineTime for such use. When a supermarket chain has 200,000 employees (as Woolworths in Australia does) then paying for a few months of software development work to make a smart watch do specific things for that company could provide significant value. There are probably some business opportunities for FOSS developers to hack on extra hardware on a PineTime and write software to support it.

I recommend that everyone who’s into FOSS buy one of these. Preferably make a deal with two friends to get the minimum postage cost.

Linux AustraliaCouncil Meeting October 11, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Sae Ra Germaine  (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)
  • Neill Cox (Secretary)

 

Apologies

  • None received

 

Not present

  • Marcus Herstik (Council)

 

Meeting opened at 19:48 AEST by Joel and quorum was achieved.

Minutes taken by Jonathan and Neill.

 

2. Log of correspondence

  • 20 Sep – Enquiry from Matthew Sherborne querying whether there will be a LA conference in 2024 – Jonathan responded 21 Sep
  • 22 Sep – Fwd: [FOSS4G SoTM Oceania] Sponsorship update – Forwarded by Jonathan
  • 22 Sep – Fwd: Your account has been updated – Forwarded by Russell from Westpac
  • 1 Oct – Fwd: 2023 Pacific GIS and Remote Sensing User Conference Invitation and Sponsorship – Forwarded by Jonathan
  • 2 Oct – Why was I not accepted as a member – Originally from Lauryn Westwood, sent to Josh Hesketh, council cc’d into his response

Lauryn has now been approved as a member.

  • 7 Oct – Treasurer questions for Joel / Sae Ra – From Russell 
  • 9 Oct – Is it time to shut down the IWS LA Subcommittee? – From Russell, notifying IWS that the subcommittee has now been shut down.
  • 9 Oct – Lodge Linux Australia Activity Statement July..September 2023
  • 10 Oct – Change to auDA domain name rules – request to make submission under LA banner – from Kathy Reid
  • 11 Oct –  Your Job Listing has been approved. From Kirin van der Veer. 

Despite the subject, his job listing has not appeared. Joel will approve it shortly.

  • 10 Oct – Announcing Everything Open 2024 and 2025 – Yay! (Note: the announcement went earlier, but this is the date the email to the council list was approved)

 

3. Items for discussion

  • FOSS4G SoTM Oceania: LA is entitled to a complimentary ticket as part of our sponsorship. Offer this to Andrew Ruthven since he’s in Auckland. Jonathan to follow up.
  • Pacific GIS: A brief discussion was held. The consensus was that there was little direct overlap with LA’s activities. Motion by Joel:

That LA sponsor 2023 Pacific GIS and Remote Sensing User Conference being held in Fiji.

Seconded by Sae Ra.
The motion failed, with 0 for, 4 against, 1 abstention.

Jonathan will communicate this to the Pacific GIS and Remote Sensing Council (PGRSC).

  • Treasurer questions: Russell wanted to know what the EO2024 Treasurer situation was. It was agreed that for now Russell is acting Treasurer, and in that role he will arrange payment of a venue invoice which is due soon. Russell will ask Rob’s partner (Tharyn) if they might be interested to take this on since they are an accountant.
  •  Change to auDA domain name rules – request to make submission under LA banner 

Kathy has got on to this promptly – auDA have only just released these changes. The consultation finishes on 30 October, which means there is one more council meeting before submissions are due. As a result the council will review the document suggested by Kathy before approving the submission. Joel will send Kathy an email to inform her of the plan.

 

4. Items for noting

  • With daylight saving (DLS) now active, LA Council meetings will be shifted 30 minutes later (20:00 UT+1100). This is to make it easier for members in Queensland, which doesn’t do DLS.

 

5. Other business

I (Neill) managed to confuse the timezones for this meeting, which is probably why we didn’t get many responses from the subcommittees.

 

  • Drupal subcommittee update

No update: Dave sent an apology. He will email an update tomorrow.

 

  • Admin team update

No update: Steve sent an apology.

 

  • Joomla subcommittee update

No response received to invitation.

 

  • PyCon AU subcommittee update

The PyCon AU team are unable to attend in person. Richard Jones has supplied this update:

 

  • Several venues in Melbourne have now been evaluated. The process continues.
  • No team leadership for 2024 has been chosen yet.
  • Still waiting to have the meeting with Joel to discuss changing the terms of the subcommittee.

 

  • Flounder subcommittee update

No response received to invitation.

 

  • LUV  subcommittee update

No response received to invitation.

 

  • WordPress subcommittee update

Wil indicated that there was nothing more to report. The next actions are expected in December 2023 when further event planning will be commenced.

The post Council Meeting October 11, 2023 – Minutes appeared first on Linux Australia.

,

Simon LyallAudiobooks – September 2023

When the heavens went on sale: The Misfits and Geniuses Racing to Put Space Within Reach by Ashlee Vance

Covers four rocket companies trying to follow SpaceX: Astra, Firefly, Planet Labs, and Rocket Lab. A good overview of the companies and their founders. 4/5

A Crime in Holland by Georges Simenon

Inspector Maigret travels to the Netherlands to assist a French professor who is suspected of murder. He is hampered by language barriers & lack of jurisdiction. 3/5

The Ultimate Engineer: The Remarkable Life of NASA’s Visionary Leader George M. Low by Richard Jurek

The biography of a senior NASA administrator during the Apollo era. Interesting, although a bit less technical than most NASA books. 3/5

Five Came Back: A Story of Hollywood and the Second World War by Mark Harris

The story of five legendary Hollywood directors who joined the US military in World War 2 to make films for the armed services. Great book, definitely recommend. 4/5

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all

Share

,

Simon LyallSeptember 2023 Update

I thought I’d do an update on my current status and what I’m up to.

Work

Unfortunately I got made redundant from my job at Sharesies in March. This was part of company-wide layoffs that saw about 30% of all staff and 50% of my team get made redundant. I was very sorry to leave, it was a great company with great culture and I was working with a great team.

I’m still using their product for my share investments (mostly Smartshares Exchange Traded Funds) and I have a small number of shares/options in the company.

After a job search I started at a new company in April. It is a fairly large company with a complex internal system, so I’ve spent the last 5 months getting my head around their internal systems and the tools we have to use.

My team is part of a global “follow the Sun” operations department, so we get a handoff from the US when we start and hand over to Ireland at the end of the day. Unfortunately timezones mean I start at around noon and finish at 8pm, and I also have to work one in four weekends.

The new company has a pretty good culture, although since it is large there is a lot of corporate overhead. My first month was spent doing something like 40 training courses and that didn’t even cover much of my day-to-day.

Covid / Getting out

My job at Sharesies was 100% work from home, so I only spent one week in the office in the whole 18 months I was working there. My new job, however, is fairly strict about being 50% in the office, so I go in 2 or 3 days each week. They do have free food in the office.

Outside of work I still mask on public transport and for most shopping trips. Currently the covid numbers are fairly low, so I do about one cafe visit a week. I’ve also been going to Auckland Thursday Night Curry, although my new shifts make this difficult.

New Zealand has eliminated all anti-covid measures (such as mask requirements) and we are currently between waves. However there are still a steady number of hospitalisations and deaths so I’m not in a hurry to increase my exposure, especially in places like the supermarket where there isn’t a lot of upside.

I haven’t yet caught covid, but I did catch a cold and a persistent cough in mid-2023.

Weight Loss and Exercise

Between July 2022 and March 2023 I was on a fairly strict diet to lose weight. I was consuming around 1000 Calories/day by just having a couple of small meals of potatoes each day, plus some cheat meals etc.

Peak was losing around 1kg/week, but eventually the diet petered out with my new job etc. Overall I went from 107kg to 79kg, but I have put on around 6kg in the 6 months since. I might do a blog post at some point on my diet experience.

I was also using a rowing machine and doing lots of walking. This has been reduced since starting my new job and the cold winter of 2023.

I am planning to try and restart my diet and do more exercise.

Hobbies

Chess

I have not played any in-person Chess since late 2021. Unfortunately Chess is a high-risk activity for Covid: lots of kids, and you are in a crowded room for hours at a time.

My new job also involves me working evenings so it will be difficult to play evening Club chess.

Tolkien

I am trying to get more involved with Tolkien fandom. I’ve joined The Tolkien Society and subscribed to their magazines, and I’m trying to work through them as well as back issues.

I attended the Ausmoot conference online in 2023 and am considering attending Ausmoot 2024 in person. I took an online course on the Silmarillion and am also listening to various podcasts.

Linux.conf.au

Unfortunately my regular Linux.conf.au conference was last held online in 2022 and it looks like it will not be run in the future. This means the Sysadmin Miniconf I’ve helped run since 2006 will probably not be held again.

Linux Australia has created Everything Open as its new flagship conference. I didn’t attend in 2023, although I may in future.

Other

I’m still interested in Public Transport, especially Greater Auckland. I’m working on a new article or two on the subject although switching jobs has delayed things.

I’m still working to improve my programming skills.

My proposed Business idea hasn’t progressed beyond the planning stages. I have things mapped out but the main gap is getting my programming skills up to being able to create a Django website to host it.

I’m still listening to Audiobooks and also doing around 30 minutes a day of reading books.

I’m still using Twitter, but I’ve also joined Bluesky Social (login required to see my account).

Share

,

Lev LafayetteThe Voice and Set Theory

The expression "Not all members of set N have characteristic r, but all elements with characteristic r are in set N" can be represented with standard set notation as follows.

1. There exist some elements in N that do not have characteristic r. Let R be the set of all elements that have characteristic r. We use the "∃" symbol to denote "there exists" and the "∉" symbol to denote "not an element of":

∃x ∈ N : x ∉ R

This reads as "There exists an element x in N such that x is not an element of R."

2. For the second part, all elements with characteristic r are in set N:

This means that every element in set R is also in set N. This can be represented using the subset symbol "⊆."
R ⊆ N

This reads as "R is a subset of N," meaning every element in R is also an element in N.

3. So, combining both statements:

∃x ∈ N : x ∉ R and R ⊆ N

This expresses that not all members of set N ("No voters") have characteristic r ("racism"), but all elements with characteristic r ("racism") are in set N ("No voters").
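
As a concrete illustration (the elements here are just placeholders): if N = {a, b, c} and R = {a, b}, then R ⊆ N holds, and c is an element of N with c ∉ R, so both conditions are satisfied.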

Not all "no" voters are racist, but all racists are "no" voters.

,

Simon LyallAudiobooks – August 2023

America, Empire Of Liberty by David Reynolds

Ninety 15-minute episodes covering US history. A fun listen, although obviously not a huge amount of detail. 3/5

The Night at the Crossroads by Georges Simenon

When a perplexing murder occurs outside Paris, Inspector Maigret arrives at an isolated intersection marked only by two houses and a dingy garage. 3/5

Outlive꞉ The Science and Art of Longevity by Peter Attia

Lots of advice on how to extend your [healthy] years well above the average. Plenty of good advice even if you can’t follow it all. 3/5

Tehanu by Ursula K. Le Guin

The fourth Earthsea book; it follows Tenar (from The Tombs of Atuan) with Ged as a secondary character. Less fantasy and action than the previous books, but still interesting. 3/5

My Rating System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average, in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all

Share

,

Linux AustraliaCouncil Meeting September 13, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Neill Cox (Secretary)
  • Sae Ra Germaine  (Council)
  • Marcus Herstik (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)

 

Apologies

 

Meeting opened at 19:33 AEST by xx and quorum was achieved.

Minutes taken by Neill Cox

 

2. Log of correspondence

  • PyCon AU server exports – Steve Walsh has responded
  • DrupalSouth Sydney 2024 draft budget and venue contract – invoices have been paid
  • Credit Reference Check Request for Linux Australia Inc T/As Linux Australia for a Debtor Account with Gladstone Regional Council – Russell has responded 
  • Two more PyCon AU payments for approval – invoices have been paid

3. Items for discussion

4. Items for noting

5. Other business

  • Drupal sub committee update
    • Canberra community day is progressing well. Venue booked for 23 November, and the call for regos has opened. GovCMS megameetup. 23 registrations so far; 50-60 is viable for a good minimum. 3 of the 6 sponsors have been locked in. It’s all low key and lower budget. Need to send out a more targeted sponsor update email to fill the remaining 3 spots. Draft agenda and call for volunteers to go out this week as well.
    • Sydney – March, save the date has gone out, opening regos soon. Going to the sponsor list soon. The platinum sponsors have already come back and are coming together. Made contact with the local agencies to provide volunteers. 

 

  • Admin team update

Fastmail testing is complete and Steve is mostly happy, with the exception of an issue when creating a new domain which is a bit clunky. Chasing up with Fastmail to see if there is an easier process.

 

Auto renewal of the application for opensource.au is in place and working. Sadly that means it is also working for the other applicant.

 

Anchor are making changes to their domain registration business. It may be time to find a new provider. Perhaps VentraIP or even Fastmail (although that would mean all our eggs would be in one basket)

 

May only be able to provide DB backups to PyConAu leaving them to recover the venv/webroot.

 

2024 budget is in progress and will have a large expenditure to rotate the disks in the servers. Disks will be bought in batches to avoid buying the same build of disks and having them all die at once.

 

Question from Russell about bounces on Russell Coker’s email address. Steve thinks this was caused by Russell Coker only returning IPv6 addresses. This problem will go away when we move to Fastmail.

 

  • Joomla sub committee update

 

  • PyCon AU sub committee update

 

None of the PyconAU subcommittee can attend. Richard Jones provided this update:

 

  • actively looking for a venue in Melbourne for the 2024 conference (due to it being simpler for steering committee to inspect prospective venues), to be held around October/November (in general better weather allowing for a nicer outdoor experience for attendees looking for external catering, which allows us to significantly reduce cost risk, and also to further separate our conference from Kiwi PyCon)
  •  have a lot of volunteers for 2024 express interest but no chair as yet
  •  are awaiting a response to our Intent to establish Independent Subcommittee for PyCon AU (Joel has sent an initial response)
  • are in the preliminary motions of compiling the 2023 conference report for the LA annual report
  • are still working on completing our archive of previous conference websites

 

  • Flounder sub committee update

No update.

 

  • LUV  sub committee update

No update

 

  • WordPress sub committee update

No update. Decision on venue due in November.  Plan to form the organising team in late Nov and submit the WordCamp proposal to WordCamp Central and also ask to form the LA sub committee. Likely to choose UTS as the budget is the lowest. Looking to talk to WCC to increase ticket price from $60 to $70.

The post Council Meeting September 13, 2023 – Minutes appeared first on Linux Australia.

,

Tim RileyOpen source status update, August 2023

After last month’s omnibus update, I’m back again, so soon!

August turned out to bring a lot of forward motion for our work on Hanami’s front end assets support. While Luca was taking his summer break, I carried on his work preparing hanami-assets 2.1 and its integration into the Hanami framework. Last week we caught up for a quick chat about these, and now both are merged!

Personally, I think this was an exciting evolution of how Luca and I work together. While previously we each took care of fairly distinct lines of work (there was enough to do, after all!), here we literally worked in tandem on one specific area, and it came out great!

Luca and I also hopped on another video call during August, this time with Seb Wilgosz of Hanami Mastery to record a special core team interview for the site’s 50th episode! I really enjoyed the chance to answer community questions about Hanami, and personally, it was a moment of reassurance that we’re still on the right track and are delivering useful things to people.

The episode isn’t published yet, but one thing that did arise from it is a new Hanami 2.1 GitHub project that I put together for tracking our remaining work for the release. Previously, this was in Trello, and with the move to GitHub I hope it will not only make our remaining work more visible, but also create clearer opportunities for potential contributors.

Now, with those big two PRs merged and our remaining work more clearly listed, the pace is picking up! We’re now at the point where we can focus on the direct user experience of working with assets within a full Hanami app. I expect a lot will shake out from this in quick order. But more on that next month!

,

Stewart SmithPersonal Finance Apps

I (relatively) recently went down the rabbit hole of trying out personal finance apps to help get a better grip on, well, the things you’d expect (personal finances and planning around them).

In the past, I’ve had an off-again-on-again relationship with GNUCash. I did give it a solid go for a few months in 2004/2005 it seems (I found my old files) and I even had the OFX exports of transactions for a limited amount of time for a limited number of bank accounts! Amazingly, there’s a GNUCash port to macOS, and it’ll happily open up this file from what is alarmingly close to 20 years ago.

Back in those times, running Linux on the desktop was even more of an adventure than it has been since then, and I always found GNUCash to be strange (possibly a theme with me and personal finance software), but generally fine. It doesn’t seem to have changed a great deal in the years since. You still have to manually import data from your bank unless you happen to be lucky enough to live in the very limited number of places where there’s some kind of automation for it.

So, going back to GNUCash was an option. But I wanted to survey the land of what was available, and if it was possible to exchange money for convenience. I am not big on the motivation to go and spend a lot of time on this kind of thing anyway, so it had to be easy for me to do so.

For my requirements, I basically had:

  • Support multiple currencies
  • Be able to import data from my banks, even if manually
  • Some kind of reporting and planning tools
  • Be easy enough to use for me, and not leave me struggling with unknown concepts
  • The ability to export data. No vendor lock-in

I viewed a mobile app (iOS) as a Nice to Have rather than essential. Given that, my shortlist was:

GNUCash

I’ve used it before, its web site at https://www.gnucash.org/ looks much the same as it always has. It’s Free and Open Source Software, and is thus well aligned with my values, and that’s a big step towards not having vendor lock-in.

I honestly could probably make it work. I wish it had the ability to import transactions from banks for anywhere I have ever lived or banked with. I also wish the UI got to be a bit more consistent and modern, and even remotely Mac like on the Mac version.

Honestly, if the deal was that a web service would pull bank transactions in exchange for ~$10/month and also fund GNUCash development… I’d struggle to say no.

Quicken

Here’s an option that has been around forever – https://www.quicken.com/ – and one that I figured I should solidly look at. It’s actually one I even spent money on…. before requesting a refund. Its Import/Export is so broken it’s an insult to broken software everywhere.

Did you know that Quicken doesn’t import the Quicken Interchange Format (QIF), and hasn’t since 2005?

Me, incredulously, when trying out Quicken

I don’t understand why you wouldn’t support as many as possible of the formats that banks export your transaction data in. It cannot possibly be that hard to parse these things, nor can it possibly be code that requires a lot of maintenance.

This basically meant that I couldn’t import data from my Australian Banks. Urgh. This alone ruled it out.

It really didn’t build confidence in ever getting my data out. At every turn it seemed to be really keen on locking you into Quicken rather than having a good experience all-up.

Moneywiz

This one was new to me – https://www.wiz.money/ – and had a fancy URL and everything. I spent a bunch of time trying MoneyWiz, and I concluded that it is pretty, but buggy. I had managed to create a report where it said I’d earned $0, but when you click into it, it gives actual numbers. Not being self-consistent and getting the numbers wrong, when this is literally the only function of said app (to get the numbers right), took this out of the running.

It did sync from my US and Australian banks though, so points there.

Intuit Mint

Intuit used to own Quicken until it sold it to H.I.G. Capital in 2016 (according to Wikipedia). I have no idea if that has had an impact as to the feature set / usability of Quicken, but they now have this Cloud-only product called Mint.

The big issue I had with Mint was that there didn’t seem to be any way to get your data out of it. It seemed to exemplify vendor lock-in. This seems to have changed a bit since I was originally looking, which is good (maybe I just couldn’t find it?). But with the cloud-only approach I wasn’t hugely comfortable with having everything there. It also seemed to be lacking a few features that I was beginning to find useful in other places.

It is the only product that links with the Apple Card though. No idea why that is the case.

The price tag of $0 was pretty unbeatable, which does make me wonder where the money is made from to fund its development and maintenance. My guess is that it’s through commission on the various financial products advertised through it, and I dearly hope it is not through selling data on its users (I have no reason to believe it is, there’s just the popular habit of companies doing this).

Banktivity

This is what I’ve settled on. It seemed to be easy enough for me to figure out how to use, sync with an iPhone App, be a reasonable price, and be able to import and sync things from accounts that I have. Oddly enough, nothing can connect and pull things from the Apple Card – which is really weird. That isn’t a Banktivity thing though, that’s just universal (except for Intuit’s Mint).

I’ve been using it for a bit more than a year now, and am still pretty happy. I wish there was the ability to attach a PDF of a statement to the Statement that you reconcile. I wish I could better tune the auto match/classification rules, and a few other relatively minor things.

,

Stewart SmithFitness watches and my descent into madness

Periodically in life I’ve had the desire to be somewhat fit, or at least have the benefits that come with that such as not dying early and being able to navigate a mountain (or just the city of Seattle) on foot without collapsing. I have also found that holding myself accountable via data is pretty vital to me actually going and repeatedly doing something.

So, at some point I got myself a Garmin watch. The year was 2012 and it was a Garmin Forerunner 410. It had a standard black/grey LCD screen, GPS (where getting a GPS lock could be utterly infuriatingly slow), a sensor you attached to your foot, a sensor you strap to your chest for Heart Rate monitoring, and an ANT+ dongle for connecting to a PC to download your activities. There was even some open source software that someone wrote so I could actually get data off my watch on my Linux laptops. This wasn’t a smart watch – it was exclusively for wearing while exercising and tracking an activity, otherwise it was just a watch.

However, as I was ramping up to marathon distance running, one huge flaw emerged: I was not fast enough to run a marathon in the time that the battery in my Garmin lasted. IIRC it would end up dying around 3hr30min into something, which at the time was increasingly something I’d describe as “not going for too long of a run”. So, the search for a replacement began!

The year was 2017, and the Garmin fenix 5x attracted me for two big reasons: a battery life to be respected, and turn-by-turn navigation. At the time, I seldom went running with a phone, preferring a tiny SanDisk media player (RIP, they made a new version that completely sucked) and a watch. The attraction of being able to get better maps back to where I started (e.g. a hotel in some strange city where I didn’t speak the language) was very appealing. It also had (what I would now describe as) rudimentary smart-watch features. It didn’t have even remotely everything the Pebble had, but it was enough.

So, a (non-trivial) pile of money later (even with discounts), I had myself a shiny and virtually indestructible new Garmin. I didn’t even need a dongle to sync it anywhere – it could just upload via its own WiFi connection, or through Bluetooth to the Garmin Connect app to my phone. I could also (if I ever remembered to), plug in the USB cable to it and download the activities to my computer.

One problem: my skin rebelled against the Garmin fenix 5x after a while. Like, properly rebelled. If it wasn’t coming off, I wanted to rip it off. I tried all of the tricks that are posted anywhere online. Didn’t help. I even got tested for what was the most likely culprit (a Nickel allergy), and didn’t have one of them, so I (still) have no idea what I’m actually allergic to in it. It’s just that I cannot wear it constantly. Urgh. I was enjoying the daily smart watch uses too!

So, that’s one rather expensive watch that is special purpose only, and even then started to get to be a bit of an issue around longer activities. Urgh.

So the hunt began for a smart watch that I could wear constantly. This usually ends in frustration, as anything I wanted was hundreds of dollars and pretty much nobody listed what materials were in it apart from “stainless steel”, “may contain”, and some disclaimer about “other materials”, which wasn’t a particularly useful starting point for “it is one of these things that my skin doesn’t like”. At least if the next one also turned out to cause me problems, I could have a list of things to narrow down to what I needed to avoid.

So that was all annoying, with the end result being that I went a long time without really wearing a watch. Why? The search resumed periodically and ended up either with nothing, or totally nothing. That was except if I wanted to get further into some vendor lock-in.

Honestly, the only manufacturer of anything smartwatch-like which actually listed everything and had some options was Apple. Bizarre. Well, since I had already got on the iPhone bandwagon, this was possible. Rather annoyingly, they are very tied together, and thus it becomes a bit of a vendor lock-in if you alternate phone and watch replacements and at any point wish to switch platforms.

That being said though, it does work well and doesn’t irritate my skin. So that’s a bonus! If I get back into marathon level distance running, we’ll see how well it goes. But for more common distances that I’ve run or cycled with it… the accuracy seems decent, the HR monitor never just randomly decides I’m not exerting myself, and the GPS actually gets a lock in reasonable time. Plus it can pair with headphones and be the only thing I take out with me.

,

Linux AustraliaCouncil Meeting August 30, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Neill Cox (Secretary)
  • Marcus Herstik (Council)
  • Russell Stuart (Treasurer)
  • Sae Ra Germaine  (Council)
  • Jonathan Woithe (Council)

 

Apologies 

  • Wil Brown (Vice-President)

 

Not Present

 

Meeting opened at 19:38 AEDT by Joel  and quorum was achieved.

Minutes taken by Neill

 

2. Log of correspondence

  • Katie McLaughlin – Intent to establish Independent Subcommittee for PyCon AU – to be discussed
  • Katie McLaughlin – PyCon AU server exports – Sae Ra has responded
  • Michael Richardson – Domestic Payment Request (Drupal South) – Paid by Russell
  • Michael Richardson – Batch payment for AV in Wellington – Paid by Russell
  • DrupalSouth Sydney 2024 draft budget and venue contract – Under review, but lots of discussion has taken place.

 

3. Items for discussion

  • Can the treasurer remove Denise Teal’s access to WP-Aust’s bank account and Xero?

To be discussed with Wil later.

  • We didn’t formally approve the Drupal South subcommittees:
    • Motion: LA accepts the establishment of the Drupal South (Sydney) 2024 conference subcommittee.
      • Moved by: Russell Stuart
      • Seconded: Neill Cox
      • Outcome: Passed unanimously
    • Motion: LA accepts the establishment of the Drupal Community 2023 conference subcommittee.
      • Moved by: Russell Stuart
      • Seconded: Sae Ra Germaine
      • Outcome: Passed unanimously
  • PyCon AU Subcommittee

See subcommittee policy v3

 

Independent: Is a subcommittee that brought their own funds when they became part of Linux Australia. They are financially independent of Linux Australia and are expected to remain so. Profits and losses remain with the subcommittee. An Independent subcommittee uses a Linux Australia bank to manage their funds. Their activities must be part of the annual Linux Australian audit; thus they use Linux Australia’s accounting systems and follow its accounting procedures. (Example: a LUG that charges membership fees.)

 

Joel will organise a meeting with PyCon AU to have an initial discussion. We will try to keep this discussion separate from the discussion about this year’s event scheduled for the next LA Council meeting.

 

  • Everything Open 2024 & 2025
    • Gladstone being confirmed for Tuesday 16 – Thursday 18 April 2024
      • Rob will be conference chair
      • Budget Gladstone Conference Budget
    • Adelaide tentatively booked for Tuesday 21 – Friday 24 January 2025
      • Mike will be conference chair
    • Motion: LA accepts the establishment of the Everything Open 2024 (Gladstone) conference subcommittee
      • Moved by: Joel Addison
      • Seconded: Neill Cox
      • Outcome: Passed unanimously
    • Motion: LA provisionally accepts Adelaide as the host of Everything Open 2025.
      • Moved by: Joel Addison
      • Seconded: Sae Ra Germaine
      • Outcome: Passed unanimously

 

4. Items for noting

  • Emails have been sent to Alexar and Russell Coker re Flounder and LUV attendance at subcommittee meetings.

5. Other business

  • None

The post Council Meeting August 30, 2023 – Minutes appeared first on Linux Australia.

,

Stewart SmithRandom useful macOS things for Linux developers

A few random notes about things that can make life on macOS (the modern one, as in, circa 2023) better for those coming from Linux.

For various reasons you may end up with Mac hardware with macOS on the metal rather than Linux. This could be anything from battery life of the Apple Silicon machines (and not quite being ready to jump on the Asahi Linux bandwagon), to being able to run the corporate suite of Enterprise Software (arguably a bug more than a feature), to some other reason that is also fine.

My approach to most of my development is to have a remote more powerful Linux machine to do the heavy lifting, or do Linux development on Linux, and not bank on messing around with a bunch of software on macOS that would approximate something on Linux. This also means I can move my GUI environment (the Mac) easily forward without worrying about whatever weird workarounds I needed to do in order to get things going for whatever development work I’m doing, and vice-versa.

Terminal emulator? iTerm2. The built in Terminal.app is fine, but there’s more than a few nice things in iTerm2, including tmux integration which can end up making it feel a lot more like a regular Linux machine. I should probably go read the tmux integration best practices before I complain about some random bugs I think I’ve hit, so let’s pretend I did that and everything is perfect.

I tend to use the Mac for SSHing to bigger Linux machines for most of my work. At work, that’s mostly to a Graviton 2 EC2 Instance running Amazon Linux with all my development environments on it. At home, it’s mostly a Raptor Blackbird POWER9 system running Fedora.

Running Linux locally? For all the use cases of containers, Podman Desktop or finch. There’s a GUI part of Podman which is nice, and finch I know about because of the relatively nearby team that works on it, and its relationship to lima. Lima positions itself as WSL2-like but for Mac. There’s UTM for a full virtual machine / qemu environment, although I rarely end up using this and am more commonly using a container or just SSHing to a bigger Linux box.
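
If all you want is a quick Linux shell on the Mac, lima makes that fairly painless. A minimal sketch, assuming Homebrew is installed (the exact prompts and default template vary between lima versions):

$ brew install lima
$ limactl start          # create and boot the default instance
$ lima uname -a          # run a command inside the default instance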

There’s Xcode for any macOS development that may be needed (e.g. when you want that extra feature in UTM or something). I do use Homebrew to install a few things locally.

Have a read of Andrew‘s blog post on OpenBMC Development on an Apple M1 MacBook Pro too.

,

Michael StillUsing the openstacksdk with authentication arguments

I wanted to authenticate against OpenStack recently, and had a lot of trouble finding documentation about how to authenticate just by passing arguments (as opposed to by using clouds.yaml or environment variables). Now that I have a working incantation, I figure I should write it down so I can find it again. It’s also disappointing that the OpenStack documentation doesn’t appear to cover this particularly well…

from keystoneauth1.identity import v3
from keystoneauth1 import session
from openstack import connection


auth = v3.Password(
    auth_url='http://kolla.home.stillhq.com:5000',
    username='admin',
    password='...',
    project_name='admin',
    user_domain_id='default',
    project_domain_id='default')
sess = session.Session(auth=auth)

conn = connection.Connection(session=sess)

print([x.name for x in conn.list_servers()])

This code will authenticate using the arguments provided, and then list all the servers (instances) visible to that user. You’re welcome.
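
For a quick sanity check from a shell, the same credentials can also be passed as flags to the openstack CLI (python-openstackclient, a separate package from the SDK). A sketch, assuming that client is installed and keeping the password redacted as above:

$ openstack --os-auth-url http://kolla.home.stillhq.com:5000 \
    --os-identity-api-version 3 \
    --os-username admin --os-password ... \
    --os-project-name admin \
    --os-user-domain-id default \
    --os-project-domain-id default \
    server list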

,

Michael StillFetching the most recent GitHub actions runner version

One of the struggles I have with running self-hosted GitHub actions runners is that GitHub releases new versions of the runner quite often and I don’t notice. That’s fine as long as you ignore the scary warnings on action output, until they drop support for whatever random old runner you’re using. They did just that to me this week. The best bit was that the “old runner” was only a month old!

I was left wondering if I could automate this. The answer is thankfully yes.

Specifically, I wanted to automate it with a GitHub action which downloads the runner and puts it into the self-hosted runner image. That looks like this:

- name: Install the github command line
  run: |
    sudo apt update
    sudo apt install -y curl

    curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
    sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
    sudo apt update
    sudo apt install -y gh

- name: Lookup latest version of the GitHub actions runner
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    actions_url=$(gh release view --repo actions/runner --json assets | \
        jq -r '.assets[].url | select (contains("linux-x64-2")) | select (test("[0-9].tar.gz$"))')
    echo "GITHUB_ACTIONS_URL=$actions_url" >> $GITHUB_ENV

- name: Cache github actions runner
  run: |
    curl -o /srv/ci/github-actions-runner.tar.gz ${GITHUB_ACTIONS_URL}

For my setup, this runs in an action which builds a new virtual machine image for the github runners each night. That third step downloads the runner tarball and caches it to the virtual machine’s disk. The virtual machine then installs the github actions runner on boot for each ephemeral worker.
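
For completeness, the boot-time install on each ephemeral worker is then mostly a matter of unpacking the cached tarball and registering the runner. This is only a rough sketch — the target directory, repository URL, and registration token handling below are my own assumptions, not something the workflow above prescribes:

# Unpack the runner tarball cached by the nightly image build
mkdir -p /srv/ci/actions-runner
tar xzf /srv/ci/github-actions-runner.tar.gz -C /srv/ci/actions-runner
cd /srv/ci/actions-runner

# Register as an ephemeral runner and start work
./config.sh --unattended --ephemeral \
    --url https://github.com/example-org/example-repo \
    --token "${RUNNER_REGISTRATION_TOKEN}"
./run.sh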

,

yifeiMake your own 3.5mm serial cable

Doing anything close to the kernel/bootloader on the PinePhone almost always requires a serial cable. The Pine64 store has a premade serial cable available for US$7, but making your own can be both cheaper and more flexible, as a DIY cable can support multiple logic levels and pinout configurations.

Parts Overview #

You will need:

  • A 3.5mm audio cable; I got mine from a pair of broken headphones
  • A multimeter for continuity test
  • A USB-serial adapter; you can get one online for around US$3. Make sure it supports 3.3V logic levels if you want to use it with the PinePhone
  • 3 jump wires, for TX/RX/GND. Make sure those wires have female ends for connecting to the serial adapter
  • (Optional) A soldering iron, some flux-core solder and heat shrink tubing for making proper connections. You can skip this and instead use twisted wires and electrical tape to make the connections

Make Connection #

The serial pinout of the PinePhone is available from the Pine64 wiki; to put it simply:

If your 3.5mm plug has 3 rings:

|=|=|=|)   <-Plug Tip 
 | | |_RX
 | |_Tx
 GND

Tip Ring (rightmost): Rx
Middle Ring: Tx
Last (Leftmost): GND

If your 3.5mm plug has 4 rings:

|=|=|=|=|)  <-Plug Tip
 | | | |_RX
 | | |_Tx
 | -GND
 ^---- Not used

Tip Ring (rightmost): Rx
Middle Ring: Tx
Second Middle Ring: GND
Last (leftmost): Unused

With the pinout in mind, cut the headphone cable open and split the wires inside. For a cable with a 3-ring plug there should be 3 separate wires, and 4 if it’s a 4-ring plug.

Next, remove about 1cm of the insulation layer from each wire, then use the multimeter’s continuity test mode to find out which wire corresponds to which serial pin; it’s a good idea to label each wire with its pin name at this stage.

Then, cut a jump wire open, strip about 1cm of the insulation layer like with the headphone cable, and twist it together with a wire from the headphone cable. Repeat this process 3 times, for Tx/Rx/GND (there are many videos on YouTube on this topic). You can also use a soldering iron to make the joints stronger.

After finishing, test continuity again with the multimeter to ensure every wire is properly connected, then protect the joints with electrical tape or heat shrink tubing (which needs to be slid on before making the connections).

Now the only step left is connecting the jump wires to the serial adapter. Since the PinePhone and the serial adapter are both considered host devices, a cross-over connection is required, so what is transmitted can be received on the other side:

---------------      ----------------
serial    Tx  |------| Rx   headphone
adapter   Rx  |------| Tx   cable
side      GND |------| GND  side          
---------------      ----------------

Connect to serial console #

Flip DIP switch 6 (the rightmost, labeled Headphone) on the PinePhone to enable serial access, connect the newly made cable to the PinePhone and a computer, then use any serial console tool to open a session. The following example uses cu(1) on OpenBSD, but screen(1) and minicom(1) should also work.

$ doas cu -s 115200 -l /dev/cuaU0
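
On Linux the adapter usually shows up as something like /dev/ttyUSB0 (the device name here is an assumption and may differ on your system), so the screen(1) or minicom(1) equivalents would look something like:

$ sudo screen /dev/ttyUSB0 115200
$ sudo minicom -D /dev/ttyUSB0 -b 115200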

,

Linux AustraliaCouncil Meeting August 16, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Neill Cox (Secretary)
  • Sae Ra Germaine  (Council)
  • Marcus Herstik (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)

 

Apologies

 

Meeting opened at 19:35 AEST by Joel and quorum was achieved.

Minutes taken by Neill Cox

 

2. Log of correspondence

  • DrupalSouth Sydney 2024 draft budget and venue contract – Russell has responded
  • Russell Coker re Grant
  • E-textiles 2022 grant follow up report
  • Russell Keith McGee re PyCon AU pre-event invoice – Russell (Stuart) has responded and the invoice is paid
  • Kathy Reid re DjangoGirls event social media posts for reference – Jonathan has responded
  • Jonathan re Fwd: Django Girls Canberra #1 reaction and photos

3. Items for discussion

  • IWS LA Subcommittee

Current plan is to wind them up in September as we’ve received no response.

  • LUV Subcommittee – Neill to contact LUV to discuss attendance at subcommittee meetings
  • FLOUNDER Subcommittee – Neill to contact FLOUNDER to discuss attendance at subcommittee meetings
  • Everything Open

Joel has spoken to Mike indicating that we intend to accept both bids, one for 2024 and one for 2025.

Mike has asked Joel to talk to the Adelaide Convention Centre. 

The Adelaide bid has found a dinner option at the Adelaide Zoo which looks interesting.

Joel will also arrange a meeting with Rob to discuss the Gladstone bid.

 

  • DrupalSouth Community Day 2023 Budget
    • DrupalSouth operating budget – Canberra 2023 – Original DO NOT EDIT
    • Would like a separate bank account for Community Day 2023 and DrupalSouth 2024

      The budget is approved by Council and Russell will inform the Drupal South subcommittee.

4. Items for noting

5. Other business

  • Drupal sub committee update

Sydney

The contract is ready to sign for Drupal South in Sydney. The LA council has approved signing the Sydney contract.

 

A/V – is included in the venue quote. Recording will be an additional cost, and handled by the conference organisers. Will not use the venue’s hardware for video.

 

The actual A/V process will probably involve recording to a GoPro onto an SD card which will then be processed and uploaded. The cost for the Wellington conference was in the order of $7500. Wellington was recorded using phones. This is not live streaming. There was a conscious decision not to live stream once face to face conferences became possible.

 

In Wellington the slides were recorded via Google Meet.

 

Call for volunteers for Sydney coming soon. Looking to find a local organising committee, but with help and oversight from the Drupal South Subcommittee.

 

Canberra

The Canberra budget will hopefully be kept under $10,000

 

Non Conference operating costs

Between events: monthly retainer, Mailchimp, and other ongoing infra. Generally pushed into the most recent event, even though that event has been closed. When the Sydney account is opened 

 

Can there be a separate operational account and budget to pay for costs not directly related to running a conference?

 

Yes, but. There is already a steering committee account. It should have a budget and LA will transfer money across to that account.

 

Some initiatives to promote Drupal that aren’t events. For example sponsoring a booth at an event to promote Drupal. No revenue will be collected. How should these be paid?

 

LA Answer: Again, pay for this from the Steering Committee account. Provide a budget to LA, but then the Steering Committee can allocate funds as necessary.

 

Other

LA asks: Do we know what sort of overlap of attendees between EO and DS

 

DS Answer: More than a handful, but less than a lot.

 

The budgets sent to LA are not live, but represent the budget that DS have committed to. There is a separate live budget that DS will supply a link to for the LA committee.

 

DrupalSouth Video link: https://www.youtube.com/@DrupalSouth 

 

  • Admin team update

No update as Steve is unable to attend.

 

  • Joomla sub committee update

No update.

 

  • PyCon AU sub committee update

Event is looking to get just shy of 450 attendees. 40 online. People are arriving onsite to help.

 

Looking at a $20k profit after the LA costs. This is after a change to catering arrangements.

 

Numbers are just shy of the 2016 Melbourne event.

 

LA Question: Is there a pipeline of future events?

 

Answer: Not yet, this is something that will be looked at after the event.

 

  • Flounder sub committee update

No update.

 

  • LUV 

No update.

 

  • WordPress sub committee

Quote received from the Sydney Masonic Lodge. Attendance estimated between 350 and 450. Estimated cost is $100k. If the Grand Hall was used it could seat up to 600 people and would not be much more expensive, except for the A/V which would be $6,000 per day.

 

Waiting on some other quotes. Not expecting much more movement until about December.

 

Some concerns about calling the conference WordCamp Australia. Much of the push back has been from sponsors, about whether this would make it harder to run WordCamps in other cities. So may change the name back to WordCamp Sydney so as not to overshadow the other cities.

The post Council Meeting August 16, 2023 – Minutes appeared first on Linux Australia.

,

yifeiOpenBSD on PinePhone Pro: First Impression

Disclaimer #

OpenBSD does not support the PinePhone Pro yet and there are real risks involved in running it on your PinePhone Pro now; as such, I do not recommend anyone do this. You might fry your device due to the unsupported power management IC, and in the worst case the battery might catch fire due to unconfigured/untested charging safety features.

The purpose of this post is to document how to install OpenBSD on arm64 platforms not fully supported by OpenBSD, and much of it is not PinePhone-specific. If you intend to follow what is documented here, please be mindful of the risks and apply common sense.

Overview #

  • The OpenBSD installer cannot be used on bare metal if you want to install OpenBSD to an SD card, because of insufficient hardware support. However, it’s possible to install OpenBSD to a virtual machine and then transfer the installed system to an SD card to boot from

  • This post assumes you have a PinePhone Pro running Mobian with KVM properly configured, and an SD card to transfer the installed system to

  • As of now, the only way to interact with the running system is via a serial console cable; wired and wireless networking are not supported, and the same goes for the screen, keyboard, and USB host mode

  • Jump to Support Status to see what works (not much)

Prepare Disk Image #

To make full use of the SD card, we will create a disk image with a size equal to our SD card. We can find the precise size of the SD card with fdisk on Mobian:

mobian$ echo p | sudo fdisk /dev/mmcblk1

A line similar to the following should appear, showing the size of the SD card in bytes:

Disk /dev/mmcblk1: 29.72 GiB, 31914983424 bytes, 62333952 sectors

We can now create our disk image:

mobian$ qemu-img create -f qcow2 openbsd.vm.qcow2 31914983424

Bootstrap via virtual machine #

Installing OpenBSD in a VM is relatively straightforward: get the minirootXX.img from an OpenBSD mirror (at the moment I’m using miniroot73.img), and follow the instructions from my other post.
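For reference, a minimal sketch of the kind of qemu invocation this involves on Mobian, mirroring the commands in my KVM post (the AAVMF firmware comes from the qemu-efi-aarch64 package, and taskset pins qemu to the A72 cores as described there):

mobian$ cp /usr/share/AAVMF/AAVMF_CODE.fd ./aavmf_code.fd
mobian$ taskset -c 4,5 qemu-system-aarch64 \
        -enable-kvm -m 1024 -cpu host -M virt -nographic \
        -drive if=pflash,file=aavmf_code.fd,format=raw \
        -drive if=virtio,file=miniroot73.img,format=raw \
        -drive if=virtio,file=openbsd.vm.qcow2,format=qcow2 \
        -netdev user,id=obsd -device virtio-net,netdev=obsd \
        -smp 2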

Add support files #

A freshly installed OpenBSD/arm64 VM is not bootable on bare metal. To make it bootable, we will need:

  • Device Tree Blob (DTB) for PinePhone Pro, which describes the hardware environment
    • OpenBSD’s dtb package is compiled from Linux source tree, you can see how it is compiled here
  • Support files for uboot, extracted from installer image

(I’m not sure if all the uboot files are needed, but it’s easy to extract them all)

This can be done from the VM we prepared:

Create mount point for operating on disk image #

vm# mkdir /mnt/{img,disk}

Prepare dtb and installer image #

vm# pkg_add dtb
vm# ftp https://cdn.openbsd.org/pub/OpenBSD/snapshots/arm64/miniroot73.img

Prepare and mount boot partition of installer image #

vm# vnconfig vnd0 miniroot73.img
vm# mount /dev/vnd0i /mnt/img/

Mount VM boot partition #

vm# mount /dev/sd0i /mnt/disk/

Copy files from installer boot partition to VM boot partition #

vm# cp -r /mnt/img/* /mnt/disk/

Copy DTB #

vm# cp /usr/local/share/dtb/arm64/rockchip/rk3399-pinephone-pro.dtb /mnt/disk/

Clean up #

vm# umount /mnt/disk/
vm# umount /mnt/img/
vm# vnconfig -u vnd0

Disable ohci #

The ohci controller is not yet supported by OpenBSD on this device, and the existing driver can prevent the kernel from booting. Until the root problem is addressed, we can disable the ohci driver in the kernel to work around this.

vm# config -ef /bsd                                                                                                         
ukc> find ohci                                                                                                              
167 ohci* at pci* dev -1 function -1 flags 0x0                                                                              
236 ohci* at apldc*|agintc*|ampintc*|qcdwusb*|imxsrc*|imxdwusb*|mvmdio*|rktcphy*|rkpinctrl*|rkgrf*|rkdwusb*|hidwusb*|amldwus
b*|syscon*|sxisyscon*|simplebus*|mainbus0 early 0 flags 0x0                                                                 
413 ohci* at acpi0 addr -1 flags 0x0                                                                                        
ukc> disable 236                                                                                                            
236 ohci* disabled                                                                                                          
ukc> quit                                                     
Saving modified kernel.               

vm# shutdown -hp now

Write image to SD card #

Make sure your VM is properly shut down and your SD card is at /dev/mmcblk1, then write the VM image to the SD card.

mobian$ sudo qemu-img dd -f qcow2 -O raw if=openbsd.vm.qcow2 of=/dev/mmcblk1 bs=20M
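# optional sanity check: list the card's partition table again; it should now
# show the OpenBSD layout written from the image
mobian$ sudo fdisk -l /dev/mmcblk1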

Boot OpenBSD from Tow-boot #

To boot OpenBSD from the SD card with Tow-boot:

  • Insert the SD card into the PinePhone Pro
  • Flip DIP switch 6 (the rightmost, labeled Headphone) to enable serial access
  • Connect a serial cable and open a console session. The example uses cu(1) since I’m using OpenBSD, but minicom also works; Tow-boot uses 115200 as the baud rate but other u-boot builds might differ
$ doas cu -s 115200 -l /dev/cuaU0

Something similar to the following output can help you confirm your serial connection is working:

U-Boot TPL 2021.10 (Oct 04 2021 - 15:09:26)                                                         
Channel 0: LPDDR4, 50MHz                                                                            
BW=32 Col=10 Bk=8 CS0 Row=15 CS1 Row=15 CS=2 Die BW=16 Size=2048MB                                  
Channel 1: LPDDR4, 50MHz                                                                            
BW=32 Col=10 Bk=8 CS0 Row=15 CS1 Row=15 CS=2 Die BW=16 Size=2048MB                                  
256B stride                                                                                         
lpddr4_set_rate: change freq to 400000000 mhz 0, 1                                                  
lpddr4_set_rate: change freq to 800000000 mhz 1, 0                                                  
Trying to boot from BOOTROM                                                                         
Returning to boot ROM...   
  • Repeatedly press ESC to trigger Tow-boot’s boot menu, then select Boot from SD
                          Boot from eMMC                                        
                          Boot from SD                                          
                          Boot from USB                                         
                          Boot from PXE                                         
                          Boot from DHCP                                        
                          Boot from (sf0)                                       
                                                                                
                          Rescan USB                                            
                          Firmware Console                                      
                                                                                
                          Reboot                                                
                          Shutdown                                              
                         _          
  • Something similar to the following should indicate OpenBSD is booting, and a login prompt will appear soon
boot>                                                                                               
booting sd0a:/bsd: 10625552+2504232+292520+843464 [792195+91+1216848+729496]=0x13b2240
[ using 2739408 bytes of bsd ELF symbol table ]                                                     
Copyright (c) 1982, 1986, 1989, 1991, 1993                                                          
        The Regents of the University of California.  All rights reserved.                          
Copyright (c) 1995-2023 OpenBSD. All rights reserved.  https://www.OpenBSD.org                      
                                                  
OpenBSD 7.3-current (GENERIC.MP) #2182: Thu Jul  6 15:02:37 MDT 2023                                
    deraadt@arm64.openbsd.org:/usr/src/sys/arch/arm64/compile/GENERIC.MP                            
real mem  = 4088885248 (3899MB)                                                                     
avail mem = 3883520000 (3703MB)

Support status #

| Feature                 | State   | Note                                                                                                      |
|-------------------------|---------|-----------------------------------------------------------------------------------------------------------|
| Screen                  | No      | Screen lights up but no signal                                                                            |
| USB Host                | No      | USB port is not powered                                                                                   |
| Built-in EMMC           | Yes     | sd1 at scsibus1                                                                                           |
| SD Card                 | Yes     | sd0 at scsibus0                                                                                           |
| WIFI                    | No      | bwfm0 at sdmmc0 needs brcmfmac43455-sdio.pine64,pinephone-pro.bin, loading this can lead to kernel crash   |
| Sensors                 | Partial | GPU/CPU temperature is reported by rktemp(4), no other sensor detected                                    |
| CPU                     | Yes     | All 6 CPU cores are detected and run fine with MP kernel                                                  |
| Power off               | No      | Cannot power down system                                                                                  |
| Reboot                  | Yes     | Reboot from OpenBSD works                                                                                 |
| Modem/other usb devices | No      | Internal USB bus doesn’t seem to work                                                                     |

dmesg #

The full dmesg and other hardware info are available in the PinePhone Pro installation report

,

Simon LyallAudiobooks – July 2023

Hollywood: The Oral History by Jeanine Basinger, Sam Wasson

Extracts from hundreds of Interviews by the American Film Institute. Great coverage of the Studio System especially. 4/5

Maigret and the Yellow Dog by Georges Simenon.

The 6th Maigret book. The leading citizens of a village are being attacked; Maigret must determine why and by whom. 3/5

Beyond Blue Skies: The Rocket Plane Programs That Led to the Space Age by Chris Petty

An account of the US Rocket Plane programs including the X-1 and X-15. Emphasizes the people, politics and stories 4/5

My Rating System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average. in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all

Share

,

Linux AustraliaCouncil Meeting August 02, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Neill Cox (Secretary)
  • Sae Ra Germaine  (Council)
  • Marcus Herstik (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)

 

Apologies

 

Meeting opened at 19:35 AEST by Joel and quorum was achieved.

Minutes taken by Neill Cox

 

2. Log of correspondence

  • Enquiry from Fheba Shibu re Linux conference

Wil has responded.

  • Russel Stuart: Is it time to shut down the IWS LA Subcommittee?
  • Lachlan de Waard: Help me understand Android development costs

 

3. Items for discussion

  • DrupalSouth 2024 Budget
    • Dave has sent through a proposed budget for review. 
    • A/V costs seem very low, and even so are not included in the P/L calculations. Possibly because it’s included in the venue cost? Is recording planned and is there a quote/budget for this?
    • Confirm with DrupalSouth who the team members are for the conference
    • Russell to respond following meeting.

4. Items for noting

  • Everything Open 2024
    • Received updated numbers from Gladstone today, still need to work through them. Will aim to do this by the end of the week and provide an update to everyone over the weekend. Looks like it should be financially viable, but a more detailed evaluation needs to be done.
    • The Gladstone dates are the same as the Drupal South dates which is not ideal.
    •  No further update from Adelaide.

5. Other business

  • Sae Ra and Neill are currently storing a large amount of gear for Linux Australia. It would be good to find a better home for it. The A/V equipment should be shipped to Joel.

 

Meeting closed at 20:14 AEST

Next meeting is scheduled for 2023-08-16

The post Council Meeting August 02, 2023 – Minutes appeared first on Linux Australia.

,

Simon LyallCan I Retire at 55 ?

I’ve recently been doing a review of my investments and retirement goals. I was made redundant in early 2023 and made an estimate of how long my savings would last. While it wasn’t enough to retire on, it was a good percentage of the way there.

I got a new job after a few weeks but I decided to make some more detailed calculations to see how much I would really need and if I was on track.

Note: All numbers in this blog post are 2023 New Zealand dollars and are assumed to be inflation adjusted.

My Situation

I am a New Zealand citizen living in Auckland, New Zealand. I work in IT and have a stay-at-home partner and no children. We rent and don’t own property. We have Investments in Managed Funds and Term Investments plus Kiwisaver Retirement accounts. I am not including any inheritance.

Our total expenditure is around $50,000 per year. About half this is rent. This doesn’t include major purchases ( eg a replacement car ) or travel.

Why retire early?

The big reason to retire early is declining health and life expectancy. At 55 I can expect to live until around 85, which probably means I’ll die in my 80s. If I’m lucky I’ll be able to be fairly active till 70, but probably not past that. Almost certainly either I or my partner will be unable to do active activities (eg walking around a city all day or tramping) by 70.

This means if I retire at 65 I might get 5 years of active retirement, whereas retiring at 55 could give me 15 years, 3 times as much. If I get sick at 67 then the difference is even greater: 12 years vs 2.

Retirement scenario

My working scenario is that I will retire at 55. We will then spend $70,000/year for 5 years on extra travel etc. Then $60,000/year in our 60s followed by $50,000/year from 70 onwards.

New Zealand Superannuation will kick in when we each turn 65. This provides around $15,000 after tax for each person.

Running the numbers

So to test this out I’ve been using a free app/site called ficalc.app. It lets you plug in your retirement length, portfolio and spending, and it will run them against every starting year (in the US) since 1871. It will then show you the success rate, including the “nearly failed” years.

A hard year to check against is 1973. A falling stock market and high inflation wipe out a lot of your savings at the start so you need a good initial amount to keep ahead of your later withdrawals.

1973 starting date.

I found I would need a starting amount of around $1,350,000 for every year to be successful, with no near failures, over a 30 year retirement. The numbers were virtually the same for 40 years.

However, if I adopted the Guyton-Klinger Guardrails strategy and spent up to $5,000/year less when my portfolio was down, I could get away with just $1,200,000 saved.

The result

It appears that we will need around $1.2 to $1.35m (in 2023 $NZ) to retire at 55 with my assumed spending patterns. At my current saving rate there is a good chance I could reach this.

Delaying retirement beyond 55 to save more money loses healthy years of retirement with not a lot of upside in risk reduction. However a delay of a year or two greatly improves the expected outcome so it is an option if things look tight.

There will always be some risk: a stockmarket crash, financial loss, cost increases (eg rent) or a health event could cause problems, and I would no longer be working to adjust to them.

We also won’t have a lot of spare money to voluntarily spend on things; eg $40,000 on an extended holiday wouldn’t be in the budget and would be hard to save for.

I ran the numbers assuming I buy rather than rent. However since Auckland housing prices are so high compared to rents it doesn’t seem to be significantly worse than paying rent out of savings.

Resources

Share

,

Tim SerongThe wrong way to debug CrashLoopBackOff

Last week I had occasion to test deploying ceph-csi on a k3s cluster, so that Kubernetes workloads could access block storage provided by an external Ceph cluster. I went with the upstream Ceph documentation, because assuming everything worked it’d then be really easy for me to say to others “just go do this”.

Everything did not work.

I’d gone through all the instructions, inserting my own Ceph cluster’s FSID and MON IP addresses in the right places, applied the YAML to deploy the provisioner and node plugins, and all the provisioner bits were running just fine, but the csi-rbdplugin pods were stuck in CrashLoopBackOff:

> kubectl get pods
NAME                                        READY   STATUS             RESTARTS          AGE
csi-rbdplugin-22zjr                         1/3     CrashLoopBackOff   107 (3m55s ago)   2d
csi-rbdplugin-pbtc2                         1/3     CrashLoopBackOff   104 (3m33s ago)   2d
csi-rbdplugin-provisioner-9dcfd56d7-c8s72   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-provisioner-9dcfd56d7-hcztz   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-provisioner-9dcfd56d7-w2ctc   7/7     Running            28 (35m ago)      8d
csi-rbdplugin-r2rzr                         1/3     CrashLoopBackOff   106 (3m39s ago)   2d

The csi-rbdplugin pod consists of three containers – driver-registrar, csi-rbdplugin, liveness-prometheus – and csi-rbdplugin wasn’t able to load the rbd kernel module:

> kubectl logs csi-rbdplugin-22zjr --container csi-rbdplugin
I0726 10:25:12.862125    7628 cephcsi.go:199] Driver version: canary and Git version: d432421a88238a878a470d54cbf2c50f2e61cdda
I0726 10:25:12.862452    7628 cephcsi.go:231] Starting driver type: rbd with name: rbd.csi.ceph.com
I0726 10:25:12.865907    7628 mount_linux.go:284] Detected umount with safe 'not mounted' behavior
E0726 10:25:12.872477    7628 rbd_util.go:303] modprobe failed (an error (exit status 1) occurred while running modprobe args: [rbd]): "modprobe: ERROR: could not insert 'rbd': Key was rejected by service\n"
F0726 10:25:12.872702    7628 driver.go:150] an error (exit status 1) occurred while running modprobe args: [rbd] 

Matching “modprobe: ERROR: could not insert ‘rbd’: Key was rejected by service” in the above was an error on each host’s console: “Loading of unsigned module is rejected”. These hosts all have secure boot enabled, so I figured it had to be something to do with that. So I logged into one of the hosts and ran modprobe rbd as root, but that worked just fine. No key errors, no unsigned module errors. And once I’d run modprobe rbd (and later modprobe nbd) on the host, the csi-rbdplugin container restarted and worked just fine.

So why wouldn’t modprobe work inside the container? /lib/modules from the host is mounted inside the container, the container has the right extra privileges… Clearly I needed to run a shell in the failing container to poke around inside when it was in CrashLoopBackOff state, but I realised I had no idea how to do that. I knew I could kubectl exec -it csi-rbdplugin-22zjr --container csi-rbdplugin -- /bin/bash but of course that only works if the container is actually running. My container wouldn’t even start because of that modprobe error.

Having previously spent a reasonable amount of time with podman, which has podman run, I wondered if there were a kubectl run that would let me start a new container using the upstream cephcsi image, but running a shell, instead of its default command. Happily, there is a kubectl run, so I tried it:

> kubectl run -it cephcsi --image=quay.io/cephcsi/cephcsi:canary --rm=true --command=true -- /bin/bash
If you don't see a command prompt, try pressing enter.
[root@cephcsi /]# modprobe rbd
modprobe: FATAL: Module rbd not found in directory /lib/modules/5.14.21-150400.24.66-default
[root@cephcsi /]# ls /lib/modules/
[root@cephcsi /]#  

Ohhh, right, of course, that doesn’t have the host’s /lib/modules mounted. podman run lets me add volume mounts using -v options, so surely kubectl run will let me do that too.

At this point in the story, the notes I wrote last week include an awful lot of swearing.

See, kubectl run doesn’t have a -v option to add mounts, but what it does have is an --overrides option to let you add a chunk of JSON to override the generated pod. So I went back to the relevant YAML and teased out the bits I needed to come up with this monstrosity:

> kubectl run -it cephcsi-test \
  --image=quay.io/cephcsi/cephcsi:canary --rm=true \
  --overrides='{
    "apiVersion": "v1",
    "spec": {
      "containers": [ {
        "name": "cephcsi",
        "command": ["/bin/bash"],
        "stdin": true, "tty": true,
        "image": "quay.io/cephcsi/cephcsi:canary",
        "volumeMounts": [ {
          "mountPath": "/lib/modules", "name": "lib-modules" }],
        "securityContext": {
          "allowPrivilegeEscalation": true,
          "capabilities": { "add": [ "SYS_ADMIN" ] },
          "privileged": true }
      } ],
      "volumes": [ {
        "name": "lib-modules",
        "hostPath": { "path": "/lib/modules", "type": "" }
      } ]
    } }'

But at least I could get a shell and reproduce the problem:

> kubectl run -it cephcsi-test [honking great horrible chunk of JSON]
[root@cephcsi-test /]# ls /lib/modules/
5.14.21-150400.24.66-default
[root@cephcsi-test /]# modprobe rbd
modprobe: ERROR: could not insert 'rbd': Key was rejected by service

A certain amount more screwing around looking at the source for modprobe and bits of the kernel confirmed that the kernel really didn’t think the module was signed for some reason (mod_verify_sig() was returning -ENODATA), but I knew these modules were fine, because I could load them on the host. Eventually I hit on this:

[root@cephcsi-test /]# ls /lib/modules/*/kernel/drivers/block/rbd*
/lib/modules/5.14.21-150400.24.66-default/kernel/drivers/block/rbd.ko.zst

Wait, what’s that .zst extension? It turns out we (SUSE) have been shipping zstd-compressed kernel modules since – as best as I can tell – some time in 2021. modprobe on my SLE Micro 5.3 host of course supports this:

# grep PRETTY /etc/os-release
PRETTY_NAME="SUSE Linux Enterprise Micro for Rancher 5.3"
# modprobe --version
kmod version 29
+ZSTD +XZ +ZLIB +LIBCRYPTO -EXPERIMENTAL

modprobe in the CentOS Stream 8 upstream cephcsi container does not:

[root@cephcsi-test /]# grep PRETTY /etc/os-release 
PRETTY_NAME="CentOS Stream 8"
[root@cephcsi-test /]# modprobe --version
kmod version 25
+XZ +ZLIB +OPENSSL -EXPERIMENTAL

Mystery solved, but I have to say the error messages presented were spectacularly misleading. I later tried with secure boot disabled, and got something marginally better – in that case modprobe failed with “modprobe: ERROR: could not insert ‘rbd’: Exec format error”, and dmesg on the host gave me “Invalid ELF header magic: != \x7fELF”. If I’d seen messaging like that in the first place I might have been quicker to twig to the compression thing.

Anyway, the point of this post wasn’t to rant about inscrutable kernel errors, it was to rant about how there’s no way anyone could be reasonably expected to figure out how to do that --overrides thing with the JSON to debug a container stuck in CrashLoopBackOff. Assuming I couldn’t possibly be the first person to need to debug containers in this state, I told my story to some colleagues, a couple of whom said (approximately) “Oh, I edit the pod YAML and change the container’s command to tail -f /dev/null or sleep 1d. Then it starts up just fine and I can kubectl exec into it and mess around”. Those things totally work, and I wish I’d thought to do that myself. The best answer I got though was to use kubectl debug to make a copy of the existing pod but with the command changed. I didn’t even know kubectl debug existed, which I guess is my reward for not reading the entire manual 😉
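For completeness, here’s a rough sketch of that “swap the command out” approach my colleagues described (the copy’s name here is made up, and you’d need to strip the server-populated fields like status, ownerReferences and resourceVersion before re-applying):

> kubectl get pod csi-rbdplugin-22zjr -o yaml > debug-pod.yaml
# edit debug-pod.yaml: give the pod a new name (say csi-sleepy), remove the
# server-populated fields, and set the csi-rbdplugin container's command to
#   command: ["sleep", "1d"]
> kubectl apply -f debug-pod.yaml
> kubectl exec -it csi-sleepy --container csi-rbdplugin -- /bin/bash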

So, finally, here’s the right way to do what I was trying to do:

> kubectl debug csi-rbdplugin-22zjr -it \
    --copy-to=csi-debug --container=csi-rbdplugin -- /bin/bash
[root@... /]# modprobe rbd
modprobe: ERROR: could not insert 'rbd': Key was rejected by service

(...do whatever other messing around you need to do, then...)

[root@... /]# exit
Session ended, resume using 'kubectl attach csi-debug -c csi-rbdplugin -i -t' command when the pod is running
> kubectl delete pod csi-debug
pod "csi-debug" deleted 

In the above kubectl debug invocation, csi-rbdplugin-22zjr is the existing pod that’s stuck in CrashLoopBackOff, csi-debug is the name of the new pod being created, and csi-rbdplugin is the container in that pod that has its command replaced with /bin/bash, so you can mess around inside it.

,

Tim RileyOpen source status update, October 2022–July 2023

It’s been a hot minute since my last open source status update! Let’s get caught up, and hopefully we can resume the monthly cadence from here.

Released Hanami 2.0

In November we released Hanami 2.0.0! This was a huge milestone! Both for the Hanami project and the Ruby community, but also for us as a development team: we’d spent a long time in the wilderness.

All of this took some doing. It was a mad scramble to get here. The team and I worked non-stop over the preceding couple of months to get this release ready (including me during the mornings of a family trip to Perth).

Anyway, if you’ve followed me here for a while, most of the Hanami 2 features should hopefully feel familiar to you, but if you’d like a refresher, check out the Highlights of Hanami 2.0 that I wrote to accompany the release announcement.

Spoke at RubyConf Thailand

Just two weeks after the 2.0 release, I spoke at RubyConf Thailand 2022!

Given I was 100% focused on Hanami dev work until the release, this is probably the least amount of time I’ve had for conference talk preparation, but I was happy with the result. I found a good hook (“new framework, new you”, given the new year approaching) and put together a streamlined introduction to Hanami that fit within the ~20 minutes allotted to the talks (in this case, it was a boon that we hadn’t yet released our view or persistence layers 😆).

Check it out here:


Overhauled hanami-view internals and improved performance

With the 2.0 release done, we decided to release our view and persistence layers progressively, as 2.1 and 2.2 respectively. This would allow us to keep our focus on one thing at a time and improve the timeliness of the upcoming releases.

So over the Christmas break (including several nights on a family trip to the coast), I started work on the first big blocker for our view layer: hanami-view performance. We were slower than Rails, and that just doesn’t cut the mustard for a framework that advertises itself as fast and light.

Finding the right approach here took several goes, and it was finally ready for this pull request at the end of February. I managed to find a >2x performance boost while simplifying our internals, improving the ergonomics of Hanami::View::Context and our part and scope builders, and still retaining all existing features.

Spoke at RubyConf Australia

Also in February, I spoke at RubyConf Australia 2023! After a 3 year hiatus, this was a wonderful reunion for the Ruby community across Australia and New Zealand. It looked like we lost no appetite for these events, so I’m encouraged for next year and beyond.

To fit the homecoming theme, I brought a strong tinge of Australiana to my talk, and expanded it to include a preview of the upcoming view and persistence layers. Check it out:


Created Hanami::View::ERB, a new ERB engine

After performance, the next big issue for hanami-view was having our particular needs met by our template rendering engines, as well as making auto-escaping the default for our “first party supported” engines (ERB, Haml, Slim) that output HTML.

ERB support was an interesting combination of all these issues. For hanami-view, we don’t expect any rendering engine to require explicit capturing of block content. This is what allows methods on parts and scopes simply to yield and have the returned value match content provided to the block from within the template.

To support this with ERB, we previously had to require our users install and use the erbse gem, a little-used and incomplete ERB implementation that provided this implicit block capturing behaviour by default (but did not support auto-escaping of HTML-unsafe values). For a long while we also had to require users use hamlit-block for the same reasons, and as such we had to build a compatibility check between ourselves and Tilt to ensure the right engines were available. This arrangement was awkward and untenable for the kind of developer experience we want for Hanami 2.

So to fix all of this, I wrote our own ERB engine! This provides everything we need from ERB (implicit block capture as well as auto-escaping) and also allows for hanami-view to be used out of the box without requiring manual installation of other gems.

Meanwhile, in the years since my formative work on hanami-view (aka dry-view), Haml and Slim evolved to both use Temple and provide configuration hooks for all the behaviour we require, so this allowed me to drop our template engine compatibility checks and instead just automatically configure Haml or Slim to match our needs if they’re installed.

To support our auto-escaping of HTML-unsafe values, we’ve adopted the Object and String #html_safe? patches that are prevalent across relevant libraries in the Ruby ecosystem. This gives us the broadest possible compatibility, as well as a streamlined and unsurprising user experience. While you might see folks decry monkey patches in general, this is one example where it makes sense for Hanami to take a pragmatic approach, and I’m very pleased with the outcome.

Implemented helpers for hanami-view

After performance and rendering/HTML safety, the last remaining pre-release item for hanami-view was support for helpers. This needed a bit of thinking to sort out, since the new hanami-view provides a significantly different set of view abstractions compared to the 1.x edition.

Here’s how I managed to sort it out:

After this, all helpers should appear wherever you need them in your views, whether in templates, part classes or scope classes. Each slice will also generate a Views::Helpers module to serve as the starting point for your own collection of helpers, too.

With hanami-view providing parts and scopes, the idea is that you can and should use available-everywhere helpers less than before, but they can still be valuable from time to time, and with their introduction, now you have every possible option available for building your views.

Added friendly error pages

While focused on views, I also took the chance to make our error views friendly too. Now we:

Worked on integrating hanami-assets

Alongside all of this, Luca has been working hard on our support for front end assets via an esbuild plugin and its integration with the framework. This has been nothing short of heroic: he’s been beset by numerous roadblocks but overcome each one, and now we’re getting really close.

Back in June, Luca and I had our first ever pairing session on this work! We got a long way in just a couple of hours. I’m looking forward to pitching in with this as my next focus.

Prepared the Hanami 2.1.0.beta1 release

With all the views work largely squared away, I figured it was time to make a beta release and get this stuff out there for people to test, so we released it as 2.1.0.beta1 at the end of June.

Spoke at Brighton Ruby!

Also at the end of June I spoke at Brighton Ruby! I’ve wanted to attend this event for the longest time, and it did not disappoint. I had a wonderful day at the conference and enjoyed meeting a bunch of new Ruby friends.

For my talk I further evolved the content from the previous iterations, and this time included a look at how we might grow a Hanami app into a more real thing, as well as reflections on what Hanami 2’s release might mean for the Ruby community. I also experimented with a fun new theme and narrative device, which you shall be able to see once the video is out 😜

Thank you so much to Andy for the invitation and the support.

Took a holiday

After all of that, I took a break! You might’ve noticed my mentions of all the Hanami work I was doing while ostensibly on family trips. Well, after Brighton Ruby, I was all the way in Europe with the family, and made sure to have a good proper 4 weeks of (bonus summer) holiday. It was fantastic, and I didn’t look at Ruby code one bit.

What’s next

Now that I’m back, I’ll focus on doing whatever is necessary to complete our front end assets integration and get that out as a 2.1 beta2 release. Our new assets stuff is completely new, so some time for testing and bug fixing will be useful.

Over the rest of the beta period I hope to complete a few smaller general framework improvements and fixes, and from there we can head towards 2.1.0 final.

I suspect it will take at least one more OSS status update before that all happens, so I can check in with you about it all then!

,

Linux AustraliaCouncil Meeting July 19, 2023 – Minutes

1. Meeting overview and key information

Present

  • Joel Addison (President)
  • Wil Brown (Vice-President)
  • Neill Cox (Secretary)
  • Sae Ra Germaine  (Council)
  • Marcus Herstik (Council)
  • Russell Stuart (Treasurer)
  • Jonathan Woithe (Council)

 

Apologies

 

Meeting opened at 19:35 AEST by Joel and quorum was achieved.

Minutes taken by Neill Cox

 

2. Log of correspondence

  • Kathy Reid: Does LA have an official position on the Voice to Parliament?

Council discussed this issue. The consensus was that while representation and inclusion are important to us as an organisation, this is not an issue where taking a position offers any clear benefit to the organisation, and it runs the risk of causing division for no good reason.

 

Wil to  communicate with Kathy directly, explaining that we will not be taking an official position.

 

We will focus on trying to increase participation by indigenous people in our events and community.

 

  • Russell Coker: Fwd: Membership status?

 

This was specifically for Yifei, some necessary information was not provided. Joel has asked for the missing details.

 

  • FOSS4G SoTM Oceania 2023 Sponsorship Opportunity

 

The invoice has been paid.

 

3. Items for discussion

  • IWS Subcommittee

No recent activity, so we should contact them to see if they are actually going to be active, and otherwise disband it. There is a balance of $664.23 in their account. Russell will contact them to discuss.

 

Motion: That we disband the IWS subcommittee unless they are about to become active.

Moved: Russell

Seconded: Sae Ra

Carried unanimously.

4. Items for noting

  •  VALA TechCamp

The diversity scholarship went to a male, which is appropriate as men are underrepresented in the library sector.

 

Linux Australia banners will be displayed at the event and they are grateful for the support.

 

A large number of Everything Open volunteers were also present.

 

  • EverythingOpen

Joel is talking to both Gladstone and Adelaide about the final details of the bids. Hopefully by next meeting we should be able to vote on the bids.

 

  • Membership Backlog

Joel has been working through the backlog. 

 

There have been some duplicates, and also some problems with email bounces. Some applications have been missing required data which means chasing up the applicants.

5. Other business

  • Drupal sub committee update

Venue and date for Sydney in 2024. Contract from the Masonic centre. Nicole has done a site visit. Costs will be a bit higher than in previous years. Venue, A/V and catering are all included, at $105K. May increase ticket prices. Sydney is usually the best attended location. Sponsors are also keen and sponsorship amounts can probably be raised. There will be four tracks at the main conference.

 

Dave will send the contract to Russel and Joel for a check.

 

Michael (the Drupal South treasurer) had planned to present budgets but has been unwell this week, so Dave will email them later.

 

There is a single day event linked to the GovCMS event in Canberra in November. The plan is to have a mix of both presentations and workshops.

 

Discussions are ongoing about whether the event will be free or have a nominal ticket charge. Charging may make approval difficult for government attendees, but may also make people more likely to turn up. Possibly having a cost might actually make it easier for some agencies to send people?

 

  • Admin team update

Working through email setup with FastMail. Steve now has a reseller account. Investigating the possibility of moving the mailing lists behind FastMail’s filter.

 

A budget is being prepared, but there are no new big ticket items expected.

 

Fairfax are still blocking the linux.au domain.

 

Steve will not be available for the next meeting.

 

  • Joomla sub committee update

Little to report. The July meeting had to be postponed, due to other commitments of the committee.

 

Joomla 5 is coming out later in the year, so hopefully the August meeting will focus on that.

 

Meetup groups have been combined, so meetup is the best place for people to seek information on Joomla.

 

The Joomla committee should be able to come back to council with paperwork in time for the next meeting.

 

  • PyCon AU sub committee update

Richard is unable to attend tonight’s meeting but has provided the following update:

 

– Ticket sales are sluggish but enough to get us to break even.

– Currently in the red by ~$60k. If we sell ~70 professional and ~40 enthusiast tickets in the next month, we should be in the black. Assuming ticket sales continue to be linear, that seems achievable.

 

No substantive changes have been made to the budget or planning at this point, everything is on track.

 

  • Flounder sub committee update

No update. Neill will contact Russell directly.

 

  • LUV 

No update. Neill will contact LUV.

 

  • WordPress sub committee

No movement recently. Probably no updates for a while. Waiting on quotes from venues.

The post Council Meeting July 19, 2023 – Minutes appeared first on Linux Australia.

,

FLOSS Down Under - online free software meetingsJuly 2023 Meeting

Meeting Report

The July 2023 meeting sparked multiple new topics including Linux security architecture, Debian ports of LoongArch and Risc-V as well as hardware design of PinePhone backplates.

On the practical side, Russell Coker demonstrated running different applications in an isolated environment with the bubblewrap sandbox, as well as other hardening techniques and the way they interact with the host system. Russell also discussed some possible pathways for hardening desktop Linux to reach the security level of modern Android. Yifei Zhan demonstrated sending and receiving messages with the PineDio USB LoRa adapter and how to inspect LoRa signals with an off-the-shelf software-defined radio receiver, and discussed how the driver situation for LoRa on Linux might be improved. Yifei then gave a demonstration of utilizing KVM on the PinePhone Pro to run NetBSD and OpenBSD virtual machines; more details on running VMs on the PinePhone Pro can be found in this blog post from Yifei.

We also had some discussion of the current state of the Mobian and Debian ecosystems, along with how to contribute to different parts of Mobian, with a Mobian developer who joined us.

Simon LyallAudiobooks – June 2023

The Player of Games by Iain M. Banks

A Culture novel about an expert game player who goes on a mission to an Empire built on a complex game. Interesting and recommended. 4/5

The Only Plane in the Sky: An Oral History of 9/11 by Garrett M. Graff

Interwoven accounts of the day from participants. 4/5

Last Man Standing by Craig A. Falconer

Lone man tries to survive a space emergency. Tries to be the next “The Martian” but doesn’t succeed; the science is flaky. Many people like it but I gave up 30% of the way through. 2/5

My Scoring System

  • 5/5 = Brilliant, top 5 book of the year
  • 4/5 = Above average, strongly recommend
  • 3/5 = Average. in the middle 70% of books I read
  • 2/5 = Disappointing
  • 1/5 = Did not like at all

Share

,

Simon LyallPrometheus node_exporter crashed my server

I am in the middle of upgrading my home monitoring setup. I collect metrics via prometheus and query them with grafana. More details later but yesterday I ran into a little problem that crashed one of my computers.

Part of the prometheus ecosystem is node_exporter. This is a program that runs on every computer and exports cpu, ram, disk, network and other stats of the local machine back to prometheus.

One of my servers is a little HP Microserver gen7 I bought in late-2014 and installed Centos 7 on. It has a boot drive and 4 hard drives with data on it.

An HP Microserver gen7

I noticed this machine wasn’t showing up in the prometheus stats correctly. I logged in and checked, and the version of node_exporter was very old and formatting its data in an obsolete way. So I downloaded the latest version, copied it over the existing binary and restarted the service…

…and my server promptly crashes. So I reboot the server and it crashes a few seconds after the kernel starts.

Obviously the problem is with the new version of node_exporter. However node_exporter is set to start immediately after boot. So what I have to do is start Linux in “single user mode” (which doesn’t run any services), edit the file that starts node_exporter, and then reboot again to get the server up normally without it. I followed this guide for getting into single user mode.

After a bit of googling I came across node_exporter bug 903 (“node_exporter creating ACPI Error with Kernel error log”), which seems similar to what I was seeing. The main difference is that my machine crashed rather than just giving an error. I put that down to my machine running fairly old hardware, firmware and operating systems.

The problem seems to be a bug in HP’s hardware/firmware around some stats that the hardware exports. Since node_exporter is trying to get lots of stats from the hardware including temperature, cpu, clock and power usage it is hitting one of the dodgy interfaces and causing a crash.

The bug suggests disabling the “hwmon” check in node_exporter. I tried this but I was still getting a slightly different crash that looked like clock or cpu frequency. Rather than trying to trace further I disabled all the collectors and then enabled the ones I needed one by one until the stats I wanted were populated (except for uptime, because it turns out the time stats via --collector.time were one thing that killed it).

So I ended up with the following command line

node_exporter --collector.disable-defaults
              --collector.filesystem
              --collector.uname
              --collector.vmstat
              --collector.meminfo
              --collector.loadavg
              --collector.diskstats
              --collector.cpu
              --collector.netstat
              --collector.netdev

which appears to work reliably.
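For what it’s worth, if node_exporter is started by a systemd unit (the unit name and binary path below are just placeholders for whatever your install uses), those flags can live in a drop-in override rather than editing the unit file directly:

sudo systemctl edit node_exporter
# then in the editor add something like:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/local/bin/node_exporter --collector.disable-defaults --collector.cpu ...
sudo systemctl restart node_exporter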

Share

,

Stewart SmithGetting your photos out of Shotwell

Somewhat a while ago now, I wrote about how every time I return to write some software for the Mac, the preferred language has changed. The purpose of this adventure was to get my photos out of the aging Shotwell and onto my (then new) Mac and the Apple Photos App.

I’ve had a pretty varied experience with photo management on Linux over the past couple of decades. For a while I used f-spot as it was the new hotness. At some point this became…. slow and crashy enough that it was unusable. Today, it appears that the GitHub project warns that current bugs include “Not starting”.

At some point (and via a method I have long since forgotten), I did manage to finally get my photos over to Shotwell, which was the new hotness at the time. That data migration was so long ago now I actually forget what features I was missing from f-spot that I was grumbling about. I remember the import being annoying though. At some point in time Shotwell was no longer the new hotness and now there is GNOME Photos. I remember looking at GNOME Photos, and seeing no method of importing photos from Shotwell, so I put it aside. Hopefully that situation has improved somewhere.

At some point Shotwell was becoming rather stagnant, and I noticed more things stopping working rather than features and performance being added. The good news is that there has been some more development activity on Shotwell, so hopefully my issues with it end up being resolved.

One recommendation for Linux photo management was digiKam, and one that I never ended up using full time. One of the reasons behind that was that I couldn’t really see any non manual way to import photos from Shotwell into it.

With tens of thousands of photos (~58k at the time of writing), doing things manually didn’t seem like much fun at all.

As I postponed my decision, I ended up moving my main machine over to a Mac for a variety of random reasons, and one quite motivating thing was the ability to have Photos from my iPhone magically sync over to my photo library without having to plug it into my computer and copy things across.

So…. how to get photos across from Shotwell on Linux to Photos on a Mac/iPhone (and also keep a very keen eye on how to do it the other way around, because, well, vendor lock-in isn’t great).

It would be kind of neat if I could just run Shotwell on the Mac and have some kind of import button, but seeing as there wasn’t already a native Mac port, and that Shotwell is written in Vala rather than something I know has a working toolchain on macOS…. this seemed like more work than I’d really like to take on.

Luckily, I remembered that Shotwell’s database is actually just a SQLite database pointing to all the files on disk. So, if I could work out how to read it accurately, and how to import all the relevant metadata (such as what Albums a photo is in, tags, title, and description) into Apple Photos, I’d be able to make it work.

So… is there any useful documentation as to how the database is structured?

Semi annoyingly, Shotwell is written in Vala, a rather niche programming language that while integrating with all the GObject stuff that GNOME uses, is largely unheard of. Luckily, the database code in Shotwell isn’t too hard to read, so was a useful fallback for when the documentation proves inadequate.
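If you want to poke at your own library before writing any code, the sqlite3 command line tool is enough to get a feel for the schema. A rough sketch (the database path and the PhotoTable column names here are from my memory of Shotwell’s layout, so check with .schema first):

# database location varies between Shotwell versions (older ones used ~/.shotwell/data/)
sqlite3 ~/.local/share/shotwell/data/photo.db ".tables"
sqlite3 ~/.local/share/shotwell/data/photo.db \
  "SELECT filename, title FROM PhotoTable LIMIT 5;"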

So, I armed myself with the following resources:

Programming the Mac side of things, it was a good excuse to start looking at Swift, so knowing I’d also need to read a SQLite database directly (rather than use any higher level abstraction), I armed myself with the following resources:

From here, I could work on getting the first half going, the ability to view my Shotwell database on the Mac (which is what I posted a screenshot of back in Feb 2022).

But also, I had to work out what I was doing on the other end of things, how would I import photos? It turns out there’s an API!

A bit of SwiftUI code:

import SwiftUI
import AppKit
import Photos

struct ContentView: View {
    @State var favorite_checked : Bool = false
    @State var hidden_checked : Bool = false
    var body: some View {
        VStack() {
            Text("Select a photo for import")
            Toggle("Favorite", isOn: $favorite_checked)
            Toggle("Hidden", isOn: $hidden_checked)
            Button("Import Photo")
            {
                let panel = NSOpenPanel()
                panel.allowsMultipleSelection = false
                panel.canChooseDirectories = false
                if panel.runModal() == .OK {
                    let photo_url = panel.url!
                    print("selected: " + String(photo_url.absoluteString))
                    addAsset(url: photo_url, isFavorite: favorite_checked, isHidden: hidden_checked)
                }
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

Combined with a bit of code to do the import (which does look a bunch like the examples in the docs):

import SwiftUI
import Photos
import AppKit

@main
struct SinglePhotoImporterApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

func addAsset(url: URL, isFavorite: Bool, isHidden: Bool) {
    // Add the asset to the photo library.
    let path = "/Users/stewart/Pictures/1970/01/01/1415446258647.jpg"
    let url = URL(fileURLWithPath: path)
    PHPhotoLibrary.shared().performChanges({
        let addedImage = PHAssetChangeRequest.creationRequestForAssetFromImage(atFileURL: url)
        addedImage?.isHidden = isHidden
        addedImage?.isFavorite = isFavorite
    }, completionHandler: {success, error in
        if !success { print("Error creating the asset: \(String(describing: error))") } else
        {
            print("Imported!")
        }
    })
}

This all meant I could import a single photo. However, there were some limitations.

There’s the PHAssetCollectionChangeRequest to do things to Albums, so it would solve that problem, but I couldn’t for the life of me work out how to add/edit Titles and Descriptions.

It was so close!

So what did I need to do in order to import Titles and Descriptions? It turns out you can do that via AppleScript. Yes, that thing that launched in 1993 and has somehow survived the transition of m68k based Macs to PowerPC based Macs to Intel based Macs to ARM based Macs.

The Photos dictionary for AppleScript

So, just to make it easier to debug what was going on, I started adding code to my ShotwellImporter tool that would generate snippets of AppleScript I could run and check that it was doing the right thing…. but then very quickly ran into a problem…. it appears that the AppleScript language interpreter on modern macOS has limits that you’d be more familiar with in 1993 than 2023, and I very quickly hit limits where the script would just error out before running (I was out of dictionary size allegedly).

But there’s a new option! Everything you can do with AppleScript you can now do with JavaScript – it’s just even less documented than AppleScript is! But it does work! I got to the point where I could generate JavaScript that imported photos, into all the relevant albums, and set title and descriptions.
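To give a flavour, a hand-wavy one-liner along those lines looks something like this (the Photos scripting names are as I remember them from its scripting dictionary, so verify in Script Editor before trusting it):

osascript -l JavaScript -e '
  const photos = Application("Photos");
  // import a single file; a second argument like {into: someAlbum} targets an album
  photos.import([Path("/tmp/example.jpg")]);
'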

A useful write up of using JavaScript rather than AppleScript to do things with Photos: https://mudge.name/2019/11/13/scripting-photos-for-macos-with-javascript/

More recent than when I was doing my hacking, https://alexwlchan.net/2023/managing-albums-in-photos/ is a good read.

With luck I’ll find some time to write up a bit of a walkthrough of my code, and push it up somewhere.

,

yifeiVirtualization with KVM on the PinePhone Pro

Basic Setup #

All the tools we need for running VM are already packaged on Mobian, to install them, run:

sudo apt install virt-manager

then add your user to the libvirt group:

sudo adduser mobian libvirt

Reboot and then run virt-host-validate, it should indicate /dev/kvm exists and is accessible.

Trouble with Heterogeneous Architecture #

Trying to start qemu-system-aarch64 with the -enable-kvm flag can yield the following, rather unhelpfully worded error:

qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument

Turns out the RK3399S SoC used on this device is built around Arm’s heterogeneous big.LITTLE architecture, and contains 4 slower Cortex-A53 cores and 2 faster Cortex-A72 cores; this allows the kernel to dynamically schedule tasks on different types of cores to improve performance and save energy. However, this configuration is not yet supported by KVM, and when the expected CPU type differs from the scheduled type (e.g. expecting an A72 but the kernel scheduled the process on an A53 core), it will panic.

Until KVM is able to work with this setup, we can work around it by manually setting the CPU affinity of qemu, launching it with taskset. To only use the A72 cores:

taskset -c 4,5 qemu-system-aarch64 <qemu options>

To only use the slower A53 cores:

taskset -c 0,1,2,3 qemu-system-aarch64 <qemu options>

To apply this workaround globally, we need a wrapper.

dpkg-divert #

Simply replacing the qemu-system-aarch64 binary with a wrapper is not a great idea because the upstream Debian package can overwrite our wrapper when upgrading qemu. To ensure Debian will not override it, we can divert the package’s version of the binary to another location with dpkg-divert:

sudo dpkg-divert --rename /usr/bin/qemu-system-aarch64

The --rename option ensures the existing binary will be moved to a new name, which by default is qemu-system-aarch64.distrib. Finally, create the wrapper at /usr/bin/qemu-system-aarch64 (I decided to only use the faster cores; the A53 cores are too slow for most workloads):

#!/usr/bin/env sh
taskset -c 4,5 /usr/bin/qemu-system-aarch64.distrib "$@"

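Remember to make the wrapper executable; afterwards both the wrapper and the diverted original should be present:

sudo chmod +x /usr/bin/qemu-system-aarch64
ls -l /usr/bin/qemu-system-aarch64 /usr/bin/qemu-system-aarch64.distrib
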
Launching VM #

The following scripts will launch VMs of different BSD OSes; doing the same for Linux distros is similar. I’m using user networking (SLIRP) as the network backend, which does not require root privileges. This backend has the drawback of lower performance compared to TAP or VDE, but is still fast enough for me.

OpenBSD #

Setup

mkdir openbsd.vm
cd openbsd.vm
# create disk image
qemu-img create -f qcow2 openbsd.vm.qcow2 32G
# use arm64 uefi firmware from package qemu-efi-aarch64
cp /usr/share/AAVMF/AAVMF_CODE.fd ./aavmf_code.fd

Boot to installer

Assuming miniroot73.img is used as the installer; -smp is needed for the installer to enable the MP kernel.

qemu-system-aarch64 \
        -enable-kvm \
        -m 1024 \
        -cpu host -M virt \
        -nographic \
        -drive if=pflash,file=aavmf_code.fd,format=raw \
        -drive if=virtio,file=miniroot73.img,format=raw \
        -drive if=virtio,file=openbsd.vm.qcow2,format=qcow2 \
        -netdev user,id=obsd \
        -device virtio-net,netdev=obsd \
        -smp 2

Launch VM

qemu-system-aarch64 \
        -enable-kvm \
        -m 1024 \
        -cpu host -M virt \
        -nographic \
        -drive if=pflash,file=aavmf_code.fd,format=raw \
        -drive if=virtio,file=openbsd.vm.qcow2,format=qcow2 \
        -netdev user,id=obsd \
        -device virtio-net,netdev=obsd \
        -smp 2

NetBSD #

Setup

mkdir netbsd.vm
cd netbsd.vm
# use arm64 uefi firmware from package qemu-efi-aarch64
cp /usr/share/AAVMF/AAVMF_CODE.fd ./aavmf_code.fd

Launch VM

NetBSD provides a ready-to-boot image for arm64; the daily snapshot is available at:

https://nycdn.netbsd.org/pub/NetBSD-daily/HEAD/latest/evbarm-aarch64/binary/gzimg/arm64mbr.img.gz
qemu-system-aarch64 \
        -enable-kvm \
        -m 1024 \
        -cpu host -M virt \
        -nographic \
        -drive if=pflash,file=aavmf_code.fd,format=raw \
        -drive if=virtio,file=arm64mbr.img,format=raw \
        -netdev user,id=nbsd \
        -device virtio-net,netdev=nbsd \
        -smp 2

virt-manager and arm64 UEFI secure boot #

Virt-manager seems to use secure-boot-enabled firmware by default when creating a new VM; this might not work for your preferred system (it certainly does not work with OpenBSD) and will yield a Script Error Status: Access Denied error for unsupported install media. To disable secure boot, select Customize configuration before install during the last step of creating the new VM, go to the Overview section, and change the firmware from AAVMF_CODE.ms.fd to UEFI aarch64: /usr/share/AAVMF/AAVMF_CODE.fd. This cannot be changed easily after the VM is created.

,

yifeiA week with Mobian on PinePhone Pro

It’s been a bit more than a week since I started daily driving the PinePhone Pro with Mobian; some parts of my journey are documented here.

IME and Keyboard #

Both Phosh and Plasma provide their own workflow for setting up an IME and adding extra language support, but so far I’m unable to get Phosh’s ibus-based input system to work with PinYin when using the on-screen keyboard. I’m able to install PinYin and Anthy from Phosh’s software center, but those methods only work with an external keyboard; switching to either of them from the on-screen keyboard makes no difference when typing.

Plasma Mobile uses the Maliit framework for its on-screen keyboard. A small set of additional input methods, including Chinese (PinYin), can be configured from Mobile Plasma Settings -> On-Screen Keyboard -> Configure Languages; then run im-config to make sure Maliit is selected. After doing so it works mostly as expected.

Battery Life #

A fully charged battery provides around 1.5 hours of use consisting of light web browsing over the LTE network and messaging; anything intense, like watching YouTube in Firefox, can drain the battery within 30 minutes. In order to daily drive it, I always attach it to the PinePhone Keyboard, which triples the battery life. Combined with power-saving tweaks (lowering screen brightness, disabling wireless when not in use…), it’s possible to get 8 hours of run time, which is good enough for me.

At the time of writing, Pine64 does not sell a spare battery pack, but someone on Reddit found that it’s possible to use Samsung’s EB-BJ700BBC battery pack (designed for the Galaxy J7) in the PinePhone. As the PinePhone Pro uses the same battery as the PinePhone, it should also work in my device, but I haven’t tested it. Pine64 was also said to be exploring a case with an extended battery back in 2020, but I haven’t heard any update on that.

I also experienced a few cases of the battery not charging despite being connected to the power supply; in such cases the phone displays a very small current draw from the battery. Maybe that’s due to a bug between the OS and the RK818 PMIC chip, but I haven’t dug deep enough to find the root cause.

Scale #

The default display scale is set to 200% under Phosh and similarly high under Plasma Mobile, which might be optimal for touch-focused usage, but is certainly not usable with most desktop applications in landscape mode. For example, Firefox won’t display its application menu unless I lower the scale to 125%. Many applications (e.g. Mumble) do not function correctly with anything higher than 125%, with most controls outside the display area and text overflowing. As such, I set both Phosh and Plasma Mobile to use a 125% scale.

Messaging #

Install fluffychat via Flatpak #

I hardly use instant messaging, even less so on mobile devices, because I find a laptop performs much better for reading and writing, but maybe the existence of Matrix can change this. For now, I use fluffychat as my Matrix client. It’s not packaged for Debian yet, so I decided to install it via Flatpak, which seems to be the least intrusive method:

# setup flatpak
apt install flatpak

# optional: use flatpak plugin for Gnome Software manager
apt install gnome-software-plugin-flatpak

# setup flatpak repository
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# install fluffychat
flatpak install im.fluffychat.Fluffychat
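
Once installed, it can be started from the app grid, or from a terminal using the same application ID:

# run fluffychat
flatpak run im.fluffychat.Fluffychat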

After using it for a few days, I think it’s by far the most usable messaging software I’ve tried on mobile.

SMS and MMS APN Setting #

SMS works out of the box: bidirectional messaging is possible and no configuration is required. On the other hand, Mobian doesn’t seem to autoconfigure the APN (Access Point Name) for MMS (Multimedia Messaging Service) globally, but both Spacebar (the chat client from Plasma Mobile) and Gnome Chatty allow users to set a custom MMS APN manually. Most mobile service providers publish this information on their website; I found mine by searching for my provider’s name plus “MMS APN”.

With the same correct APN configured in both Spacebar and Gnome Chatty, only Chatty works properly for bidirectional image transfer; Spacebar attempts to download the media file and then fails instantly.

Network Sharing #

The mobile LTE connection can be shared to other devices either wirelessly or via USB cable.

Setup Wireless Hotspot from GUI #

Hotspot management lives under Settings -> Hotspot in Plasma Mobile, and under Settings -> WiFi -> “Turn on WiFi Hotspot” via the dots menu. Plasma’s wizard defaults to WEP encryption with no way of changing it to the more secure WPA2/3, but Gnome’s wizard does the right thing and defaults to WPA2. You might want to turn off auto-suspend under Phosh’s Settings -> Power -> Automatic Suspend, otherwise your hotspot will be turned off after a 5 minute timeout.

Setup Wireless Hotspot with nmcli(1) #

Doing things from the GUI is not always desirable; if you prefer the CLI, there is nmcli(1):

# setup hotspot
nmcli device wifi hotspot

# show password and SSID
nmcli dev wifi show-password
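
The bare nmcli device wifi hotspot call picks an autogenerated SSID and password; the same command also accepts them explicitly (the interface name here is an assumption, check yours with nmcli device):

# hotspot with a chosen SSID and password
nmcli device wifi hotspot ifname wlan0 ssid pinephone password "use-a-long-passphrase"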

USB Ethernet #

The PinePhone supports USB host-to-host bridging with the ethernet subclass; it attaches to my OpenBSD laptop as a cdce(4) device and to my Debian laptop as a usb0 network interface using the cdc_ether driver. To enable routing through it:

# enable IPv4 forwarding
sysctl net.ipv4.ip_forward=1

# install and enable nftables for forwarding
apt install nftables
systemctl enable nftables.service

# create a table
nft add table nat

# add the prerouting and postrouting chains
# this is required by the nftables framework for NAT
nft -- add chain nat prerouting { type nat hook prerouting priority -100 \; }
nft add chain nat postrouting { type nat hook postrouting priority 100 \; }

# enable masquerade NAT with upstream being wwan0
nft add rule nat postrouting oifname "wwan0" masquerade

You can replace wwan0 with whatever upstream interface you wish to use (e.g. wg0).
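
To sanity-check the result, the ruleset can be printed back out:

# show the NAT table we just populated
nft list table nat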

Then set the default gateway of your client device to the PinePhone’s usb0 IP address and traffic should start to flow; in my case:

ip route add default via 10.66.0.1

See this article from Red Hat for setting up different types of NAT with nftables.

For ad-hoc network sharing, a SOCKS proxy over SSH might be simpler to set up than NAT.
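
For example, a single dynamic forward from the client gives you a local SOCKS proxy that tunnels through the phone (the user, address and port here are just examples based on my setup):

ssh -D 1080 mobian@10.66.0.1

Then point the client applications at localhost:1080 as a SOCKS proxy.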

Bluetooth Audio #

Bluetooth audio connections mostly work fine from Phosh’s settings panel, with the exception of the Bluetooth Low Energy protocol, which doesn’t seem to pair properly; I’m not sure if this is a hardware issue or a software one. After an audio device is connected, it’s necessary to manually select it as the default output device, otherwise audio will continue to play through the internal speaker. Multiple codecs/profiles exist and can be switched on the fly; SBC/SBC-XQ/LDAC all work reasonably well with no difference in sound quality compared to Android devices as far as I can tell. However, any profile making use of the microphone causes extremely bad audio playback quality.

If you cannot, or don’t want to, run a Bluetooth stack, it’s also possible to use a Bluetooth audio adapter (like the Creative BT-W3 I use on my OpenBSD laptop, since OpenBSD doesn’t support Bluetooth). Such an adapter handles the Bluetooth codec logic in dedicated hardware and presents a generic audio output device to the OS, which also seems to help with audio jitter under high system load.

Epilogue #

There are many things still to be explored and written about, from virtualization (KVM for aarch64 should just work) to the LoRa backplate. I’m not sure what the future of this device will look like, but it’s certainly an interesting one.

,

Michael StillTurnover of Companies in OpenStack: Prevalence and Rationale

This paper examines the withdrawal behaviour of corporate contributors to OpenStack, which seems particularly relevant given most contributions in OpenStack are corporately supported, and corporate engagement is declining over time. It’s also directly relevant to my own experiences contributing to the project, so it seemed like a thing I should read.

One interesting aspect of the study is how they define withdrawal from contributions. For each company, they calculate an individual frequency of contribution, and then use that to determine whether the company is still making contributions. That is, if a company only ever contributed once a year, we must wait at least a year to know that it has indeed stopped contributing.

The paper finds that in more recent OpenStack releases, more companies are ceasing contributions than are joining. The authors assert that, in general, engaged developers are now less experienced than previously, which presents risks in terms of developer effectiveness as well as code quality. The paper does note that companies with smaller contributions are more likely to disengage than “sustaining companies”, although that’s largely because there is a huge number of companies contributing only one developer who makes a small number of commits.

Unsurprisingly, the paper notes that companies which contribute more are more likely to remain contributors — both because of momentum, but also because they’re more likely to have a say in the roadmap direction of the project and therefore whether it fits their needs or priorities. They use some loaded words like “dominated by a small number of contributors”, but I don’t think that’s really helpful given that other companies could choose to contribute if they wanted to. I think some of this behaviour is what I would call “rent seeking” — players who contribute little but think that the project somehow owes them changes to make their commercialisation successful. The researchers also note an additional factor here — OpenStack isn’t well suited to small environments, so larger organizations are more likely to have a successful deployment and therefore stay as contributors.

Overall I’d describe this paper as not particularly groundbreaking, but perhaps useful when trying to decide what behaviour to encourage in an Open Source community in order to make a project sustainable.

,

Matt PalmerPrivate Key Redaction: Redux

[Note: the original version of this post named the author of the referenced blog post, and the tone of my writing could be construed to be mocking or otherwise belittling them. While that was not my intention, I recognise that was a possible interpretation, and I have revised this post to remove identifying information and try to neutralise the tone. On the other hand, I have kept the identifying details of the domain involved, as there are entirely legitimate security concerns that result from the issues discussed in this post.]

I have spoken before about why it is tricky to redact private keys. Although that post demonstrated a real-world, presumably-used-in-the-wild private key, I’ve been made aware of commentary along the lines of this representative sample:

I find it hard to believe that anyone would take their actual production key and redact it for documentation. Does the author have evidence of this in practice, or did they see example keys and assume they were redacted production keys?

Well, buckle up, because today’s post is another real-world case study, with rather higher stakes than the previous example.

When Helping Hurts

Today’s case study begins with someone who attempted to do a very good thing: they wrote a blog post about using HashiCorp Vault to store certificates and their private keys. In the post, they included some “test” data, a certificate and a private key, which they redacted.

Unfortunately, they did not redact these very well. Each base64 “blob” has had one line replaced with all xs. Based on the steps I explained previously, it is relatively straightforward to retrieve the entire, intact private key.

From Bad to OMFG

Now, if this post author had, say, generated a fresh private key (after all, there’s no shortage of possible keys), that would not be worthy of a blog post. As you may surmise, that is not what happened.

After reconstructing the insufficiently-redacted private key, you end up with a key that has a SHA256 fingerprint (in hex) of:

72bef096997ec59a671d540d75bd1926363b2097eb9fe10220b2654b1f665b54
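
(For reference, one common convention for this kind of fingerprint is the SHA256 of the key’s public-key (SPKI) DER encoding; whether that is exactly the convention used here is an assumption on my part. With openssl, and the reconstructed key sitting in key.pem, computing it would look something like this:)

openssl pkey -in key.pem -pubout -outform DER | openssl dgst -sha256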

Searching for certificates which use that key fingerprint, we find one result: a certificate for hiltonhotels.jp (and a bunch of other, related, domains, as subjectAltNames). As of the time of writing, that certificate is not marked as revoked, and appears to be the same certificate that is currently presented to visitors of that site.

This is, shall we say, not great.

Anyone in possession of this private key – which, I should emphasise, has presumably been public information since the post’s publication date of February 2023 – has the ability to completely transparently impersonate the sites listed in that certificate. That would provide an attacker with the ability to capture any data a user entered, such as personal information, passwords, or payment details, and also modify what the user’s browser received, including injecting malware or other unpleasantness.

In short, no good deed goes unpunished, and this attempt to educate the world at large about the benefits of secure key storage has instead published private key material. Remember, kids: friends don’t let friends post redacted private keys to the Internet.

,

Michael StillOn-demand Container Loading in AWS Lambda

My team at work now has a daily personal learning time called “egg time” — it’s a slightly silly story involving a manager who was good at taking some time to learn things each day, and an egg shaped chair.

Today I decided that I should read this paper about container image loading in AWS Lambda, as recommended by Robert Collins on LinkedIn. The paper details the work they had to do to transition from all Lambda functions being packaged as relatively small zip files (250MB) to relatively large Docker containers (10GB+), while maintaining their aggressive target cold-start time of 50ms.

The paper starts by making some relatively obvious points: that Docker images are very cacheable, and that they contain often-reused layers. It also throws out the statement, initially surprising to me, that only 6.4% of a container image’s data is ever actually used — this paper is referenced as a source for that number and definitely deserves a read later. They refer to this property as “sparsity”.

It then moves on to explore how AWS was able to exploit the sparsity of images in a different manner than previous implementations (slacker and starlight specifically). Instead of creating a filesystem oriented interface that uses overlayfs to mount each layer one on top of the other, they pre-render container images into ext4 filesystem block devices. This is a concept I’ve played with a little with Occy Strap, although not as much as I had intended to. Specifically, Occy Strap is capable of rendering a container image to a filesystem rendition without using overlayfs, but it does that more so you can inspect the image contents than to avoid IO entirely. The pre-rendering is definitely an interesting idea, and conceptually similar to Shaken Fist’s idea of cached transcodes for virtual machine images. I should note that AWS has modified the ext4 implementation used to be deterministic about the filesystem created, so that the differences between different versions of a container image are also minimised.

The next part is really interesting to me as well — the ext4 block device images are then split into chunks, and those chunks named for their content (think named with a hash of their content), so that chunks which are shared between container images are only stored once. This is exactly what Blockstash is doing with virtual machine images, except I picked large values for the chunk size to reduce HTTP requests, and AWS picked 512KiB to ensure storage efficiency.
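
To make the idea concrete, here’s a toy shell sketch of content-addressed chunking (purely an illustration of the concept, not AWS’s implementation; the file names are made up):

# split a pre-rendered ext4 image into 512 KiB chunks...
split --bytes=512K image.ext4 chunk.
# ...then name each chunk after the SHA256 of its content, so identical
# chunks from different images collapse to a single stored object
for c in chunk.*; do
    mv "$c" "$(sha256sum "$c" | awk '{print $1}')"
done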

The chunks are then routed into the firecracker micro VM which runs a Lambda function by way of virtio, FUSE, and a local agent, which on-demand loads chunks as they are read.

AWS throws in a wild statistic at this point — 80% of uploaded Lambda functions contain zero unique chunks! That is, they’re re-uploads of previously seen container images. AWS points the finger at CI/CD systems for this behaviour, which seems reasonable to me. Of the remaining 20% of uploaded functions, the mean number of unique chunks is 4.3%, with a median of 2.5%. AWS is also quite clever, and stores their chunks encrypted, so that a given hypervisor only has access to the chunks it needs to run current workloads, and the content of container images is still confidential, whilst still being able to deduplicate those chunks across images from different customers. They do this via convergent encryption, as defined by FARSITE.

This raises two questions I don’t have answers to right now: how deterministic are the filesystems created by diskimage-builder, which is the source of the images for Blockstash; and how much of those images is actually used in the average runtime of a virtual machine? I suspect a custom virtio block driver for qemu / KVM virtual machines would be an interesting way to waste a few weeks one day. AWS’ initial implementation for their custom virtio driver was with FUSE, but they report performance problems because of the context switches required. I wonder if nbd would work reasonably?

Overall this paper was excellent and well worth the time to read if you’re interested in the performance of containerized systems.

,

yifeiMobian and Plasma Mobile on the PinePhone Pro

Setup Tow-boot #

Mobian currently requires the Tow-Boot bootloader to be installed first; u-boot is no longer supported. To install Tow-Boot, see this document; I find it easier to plug in a USB cable to start the phone while holding down the RE button. Be mindful that there will be no graphical boot menu after installation; at the moment the Tow-Boot menu is only available via a serial connection.

It’s also possible you can skip this step, according to the PinePhone Pro wiki:

The batches bought after July 2022 come with Tow-Boot flashed to the SPI, which offers additional functionality over U-Boot as bootloader.

Setup Mobian #

Since I want full disk encryption (FDE) for all my devices, including this one, I went with the Mobian installer image that gives me the option to enable FDE. The installation is fairly simple and smooth, taking only around 20 minutes start to finish with very few configuration options. If you want to know what the process is like, the Mobian wiki has an article with an overview of the installation process as well as links to the different images.

Install KDE Plasma Mobile #

By default Mobian ships with Phosh, a Wayland shell for GNOME designed for mobile devices. It works OK, but I prefer KDE Plasma Mobile. Fortunately, Plasma Mobile is packaged for Debian and can be installed via:

sudo apt update
sudo apt install plasma-mobile plasma-mobile-tweaks plasma-settings plasma-phonebook plasma-dialer spacebar angelfish okular-mobile

The password for the user mobian is the same as the screen unlock password. After apt has done its job, log out of the current session and there should be an option to log in again using Plasma.

Other Shell #

Apart from Phosh and Plasma, Swmo, Lomiri (from Ubuntu Touch), desktop GNOME as well as LXDE are all available.

Current state: #

I’ve been daily driving it for a few days; it most certainly has a long way to go, but I can live with it as is.

  • Call: Yes. Poor audio quality heard by the other side.
  • Mobile Data: Yes. Plasma’s modem settings page cannot enable the modem; after enabling it from a Phosh session it seems to work, and the APN is autoconfigured.
  • SMS: Yes. There is no unified way of storing chat history; history does not sync between different applications.
  • Camera: Partial. Extremely high latency between frames (3s+), inaccurate colour.
  • WiFi: Yes. Drop-offs seem to be more common than on other phones, but that might just be an isolated case.
  • WiFi Hotspot: Yes. Plasma’s wizard defaults to WEP, which is insecure and which I cannot authenticate to from other devices, but an unprotected hotspot works.
  • Bluetooth: No. Unable to connect to any paired device; this seems to be a known issue.

The PinePhone Pro Wiki page also has a list for hardware/software state.

,

yifeiThings I read this month

RetroBSD: a port of 2.11BSD Unix intended for embedded systems with fixed memory mapping.


DarkRiscV: a BSD-licensed RISC-V cpu core implemented in Verilog


,

Tim SerongLonghorn in a Sandbox

In my last post, I wrote about how I taught sesdev (originally a tool for deploying Ceph clusters on virtual machines) to deploy k3s, because I wanted a little sandbox in which I could break things and learn more about Kubernetes. It’s nice to be able to do a toy deployment locally, on a bunch of VMs, on my own hardware, in my home office, rather than paying to do it on someone else’s computer. Given the k3s thing worked, I figured the next step was to teach sesdev how to deploy Longhorn so I could break that and learn more about it too.

Teaching sesdev to deploy Longhorn meant asking it to:

  • Install nfs-client, open-iscsi and e2fsprogs packages on all nodes.
  • Make an ext4 filesystem on /dev/vdb on all the nodes that have extra disks, then mount that on /var/lib/longhorn.
  • Use kubectl label node -l 'node-role.kubernetes.io/master!=true' node.longhorn.io/create-default-disk=true to ensure Longhorn does its storage thing only on the nodes that aren’t the k3s master.
  • Install Longhorn with Helm, because that will install the latest version by default vs. using kubectl where you always explicitly need to specify the version.
  • Create an ingress so the UI is exposed… from all nodes, via HTTP, with no authentication. Remember: this is a sandbox – please don’t do this sort of thing in production!

So, now I can do this:

> sesdev create k3s --deploy-longhorn
=== Creating deployment "k3s-longhorn" with the following configuration === 

Deployment-wide parameters (applicable to all VMs in deployment):

- deployment ID:    k3s-longhorn
- number of VMs:    5
- version:          k3s
- OS:               tumbleweed
- public network:   10.20.78.0/24 

Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y

=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider…
Bringing machine 'node1' up with 'libvirt' provider…
Bringing machine 'node2' up with 'libvirt' provider…
Bringing machine 'node3' up with 'libvirt' provider…
Bringing machine 'node4' up with 'libvirt' provider…

[... lots more log noise here - this takes several minutes... ]

=== Deployment Finished ===

You can login into the cluster with:

  $ sesdev ssh k3s-longhorn

Longhorn will now be deploying, which may take some time.
After logging into the cluster, try these:

  # kubectl get pods -n longhorn-system --watch
  # kubectl get pods -n longhorn-system

The Longhorn UI will be accessible via any cluster IP address
(see the kubectl -n longhorn-system get ingress output above).
Note that no authentication is required.

…and, after another minute or two, I can access the Longhorn UI and try creating some volumes. There’s a brief period while the UI pod is still starting where it just says “404 page not found”, and later after the UI is up, there’s still other pods coming online, so on the Volume screen in the Longhorn UI an error appears: “failed to get the parameters: failed to get target node ID: cannot find a node that is ready and has the default engine image longhornio/longhorn-engine:v1.4.1 deployed“. Rest assured this goes away in due course (it’s not impossible I’m suffering here from rural Tasmanian internet lag pulling container images). Anyway, with my five nodes – four of which have an 8GB virtual disk for use by Longhorn – I end up with a bit less than 22GB storage available:

21.5 GiB isn’t much, but remember this is a toy deployment running in VMs on my desktop Linux box

Now for the fun part. Longhorn is a distributed storage solution, so I thought it would be interesting to see how it handled a couple of types of failure. The following tests are somewhat arbitrary (I’m really just kicking the tyres randomly at this stage) but Longhorn did, I think, behave pretty well given what I did to it.

Volumes in Longhorn consist of replicas stored as sparse files on a regular filesystem on each storage node. The Longhorn documentation recommends using a dedicated disk rather than just having /var/lib/longhorn backed by the root filesystem, so that’s what sesdev does: /var/lib/longhorn is an ext4 filesystem mounted on /dev/vdb. Now, what happens to Longhorn if that underlying block device suffers some kind of horrible failure? To test that, I used the Longhorn UI to create a 2GB volume, then attached that to the master node:

The Longhorn UI helpfully tells me the volume replicas are on node3, node4 and node1

Then, I ssh’d to the master node and with my 2GB Longhorn volume attached, made a filesystem on it and created a little file:

> sesdev ssh k3s-longhorn
Have a lot of fun...

master:~ # cat /proc/partitions 
major minor  #blocks  name 
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
   8        0    2097152 sda

master:~ # mkfs /dev/sda
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done                            
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: 3709b21c-b9a2-41c1-a6dd-e449bdeb275b
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912
Allocating group tables: done                            
Writing inode tables: done                            
Writing superblocks and filesystem accounting information: done 

master:~ # mount /dev/sda /mnt
master:~ # echo foo > /mnt/foo
master:~ # cat /mnt/foo
foo

Then I went and trashed the block device backing one of the replicas:

> sesdev ssh k3s-longhorn node3
Have a lot of fun...

node3:~ # ls /var/lib/longhorn
engine-binaries  longhorn-disk.cfg  lost+found  replicas  unix-domain-socket

node3:~ # dd if=/dev/urandom of=/dev/vdb bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.486205 s, 216 MB/s

node3:~ # ls /var/lib/longhorn

node3:~ # dmesg|tail -n1
[ 6544.197183] EXT4-fs error (device vdb): ext4_map_blocks:607: inode #393220: block 1607168: comm longhorn: lblock 0 mapped to illegal pblock 1607168 (length 1) 

At this point, the Longhorn UI still showed the volume as green (healthy, ready, scheduled). Then, back on the master node, I tried creating another file:

master:~ # echo bar > /mnt/bar
master:~ # cat /mnt/bar
bar

That’s fine so far, but suddenly the Longhorn UI noticed that something very bad had happened:

The volume is still usable, but one of the replicas has failed

Ultimately node3 was rebooted and ended up stalled with the console requesting the root password for maintenance:

Failed to mount /var/lib/longhorn – Can’t find ext4 filesystem

Meanwhile, Longhorn went and rebuilt a third replica on node2:

All green again!

…and the volume remained usable the entire time:

master:~ # echo baz > /mnt/baz
master:~ # ls /mnt
bar  baz  foo  lost+found

That’s perfect!

Looking at the Node screen we could see that node3 was still down:

There may be disk size errors with down nodes (4.87 TiB looks a lot like integer overflow to me)

That’s OK, I was able to fix node3. I logged in on the console and ran mkfs.ext4 /dev/vdb then brought the node back up again. The disk remained unschedulable, because Longhorn was still expecting the ‘old’ disk to be there (I assume based on the UUID stored in /var/lib/longhorn/longhorn-disk.cfg) and of course the ‘new’ disk is empty. So I used the Longhorn UI to disable scheduling for that ‘old’ disk, then deleted it. Shortly after, Longhorn recognised the ‘new’ disk mounted at /var/lib/longhorn and everything was back to green across the board.

So Longhorn recovered well from the backing store of one replica going bad. Next I thought I’d try to break it from the other end by running a volume out of space. What follows is possibly not a fair test, because what I did was create a single Longhorn volume larger than the underlying disks, then filled that up. In normal usage, I assume one would ensure there’s plenty of backing storage available to service multiple volumes, that individual volumes wouldn’t generally be expected to get more than a certain percentage full, and that some sort of monitoring and/or alerting would be in place to warn of disk pressure.

With four nodes, each with a single 8GB disk, and Longhorn apparently reserving 2.33GB by default on each disk, that means no Longhorn volume can physically store more than a bit over 5.5GB of data (see the Size column in the previous screenshot). Given that the default setting for Storage Over Provisioning Percentage is 200, we’re actually allowed to allocate up to a bit under 11GB.

So I went and created a 10GB volume, attached that to the master node, created a filesystem on it, and wrote a whole lot of zeros to it:

master:~ # mkfs.ext4 /dev/sda
mke2fs 1.46.5 (30-Dec-2021)
[...]

master:~ # mount /dev/sda /mnt
master:~ # df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        9.8G   24K  9.3G   1% /mnt

master:~ # dd if=/dev/zero of=/mnt/big-lot-of-zeros bs=1M status=progress
2357198848 bytes (2.4 GB, 2.2 GiB) copied, 107 s, 22.0 MB/s

While that dd was running, I was able to see the used space of the replicas increasing in the Longhorn UI:

Those little green bars eventually turn yellow as the disks approach full

After a few more minutes, the dd stalled…

master:~ # dd if=/dev/zero of=/mnt/big-lot-of-zeros bs=1M status=progress
9039773696 bytes (9.0 GB, 8.4 GiB) copied, 478 s, 18.9 MB/s

…there was a lot of unpleasantness on the master node’s console…

So many I/O errors!

…the replicas became unschedulable due to lack of space…

This doesn’t look good

…and finally the volume faulted:

This really doesn’t look good

Now what?

It turns out that Longhorn will actually recover if we’re able to somehow expand the disks that store the replicas. This is probably a good argument for backing Longhorn with an LVM volume on each node in real world deployments, because then you could just add another disk and extend the volume onto it. In my case though, given it’s all VMs and virtual block devices, I can actually just enlarge those devices. For each node then, I:

  1. Shut it down
  2. Ran qemu-img resize /var/lib/libvirt/images/k3s-longhorn_$NODE-vdb.qcow2 +8G
  3. Started it back up again and ran resize2fs /dev/vdb to take advantage of the extra disk space (a rough command sketch for one node follows this list).
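
Roughly, for one node that looks like the following (the libvirt domain naming is an assumption based on the image path above):

# on the host, with the node shut down
qemu-img resize /var/lib/libvirt/images/k3s-longhorn_node1-vdb.qcow2 +8G
# then boot the node again and, on the node itself, grow the filesystem
resize2fs /dev/vdb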

After doing that to node1, Longhorn realised there was enough space there and brought node1’s replica of my 10GB volume back online. It also summarily discarded the other two replicas from the still-full disks on node2 and node3, which didn’t yet have enough free space to be useful:

One usable replica is better than three unusable replicas

As I repeated the virtual disk expansion on the other nodes, Longhorn happily went off and recreated the missing replicas:

Finally I could re-attach the volume to the master node, and have a look to see how many of my zeros were actually written to the volume:

master:~ # cat /proc/partitions 
major minor  #blocks  name
 254        0   44040192 vda
 254        1       2048 vda1
 254        2      20480 vda2
 254        3   44016623 vda3
   8        0   10485760 sda

master:~ # mount /dev/sda /mnt
master:~ # ls -l /mnt
total 7839764
-rw-r--r-- 1 root root 8027897856 May  3 04:41 big-lot-of-zeros
drwx------ 2 root root      16384 May  3 04:34 lost+found

Recall that dd claimed to have written 9039773696 bytes before it stalled when the volume faulted, so I guess that last gigabyte of zeros is lost in the aether. But, recall also that this isn’t really a fair test – one overprovisioned volume deliberately being quickly and deliberately filled to breaking point vs. a production deployment with (presumably) multiple volumes that don’t fill quite so fast, and where one is hopefully paying at least a little bit of attention to disk pressure as time goes by.

It’s worth noting that in a situation where there are multiple Longhorn volumes, assuming one disk or LVM volume per node, the replicas will all share the same underlying disks, and once those disks are full it seems all the Longhorn volumes backed by them will fault. Given multiple Longhorn volumes, one solution – rather than expanding the underlying disks – is simply to delete a volume or two if you can stand to lose the data, or maybe delete some snapshots (I didn’t try the latter yet). Once there’s enough free space, the remaining volumes will come back online. If you’re really worried about this failure mode, you could always just disable overprovisioning in the first place – whether this makes sense or not will really depend on your workloads and their data usage patterns.

All in all, like I said earlier, I think Longhorn behaved pretty well given what I did to it. Some more information in the event log could perhaps be beneficial though. In the UI I can see warnings from longhorn-node-controller e.g. “the disk default-disk-1cdbc4e904539d26(/var/lib/longhorn/) on the node node1 has 3879731200 available, but requires reserved 2505089433, minimal 25% to schedule more replicas” and warnings from longhorn-engine-controller e.g. “Detected replica overprovisioned-r-73d18ad6 (10.42.3.19:10000) in error“, but I couldn’t find anything really obvious like “Dude, your disks are totally full!”

Later, I found more detail in the engine manager logs after generating a support bundle ([…] level=error msg=”I/O error” error=”tcp://10.42.4.34:10000: write /host/var/lib/longhorn/replicas/overprovisioned-c3b9b547/volume-head-003.img: no space left on device”) so the error information is available – maybe it’s just a matter of learning where to look for it.

,

Matt Palmerdev-dependencies and Rust's unused_crate_dependencies lint

I’m in the process of getting super-strict about the code quality of cretrit, the comparison-revealing encryption library that underlies the queryable encryption of the Enquo project. While I’m going to write a whole big thing about Rust linting in the future, I bumped across a rather gnarly problem that I thought was worth sharing separately. The problem, in short, is that the unused_crate_dependencies lint interacts badly with crates that are only needed for benchmarking, such as (in my case) criterion.

Rust has a whole bucketload of “lints” that can help your codebase adhere to certain standards, by warning (or exploding) if the problem is detected. The unused_crate_dependencies lint, as the name suggests, gets snippy when there’s a crate listed in your Cargo.toml that doesn’t appear to be used anywhere.
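
(In case you’re wondering how the lint gets switched on in the first place: the post doesn’t show it, so this is just my assumption about a typical setup, but it’s an allow-by-default lint, so you opt in either with a crate-level #![warn(unused_crate_dependencies)] attribute or from the command line:)

RUSTFLAGS="-W unused-crate-dependencies" cargo build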

All well and good so far. However, while Rust has the ability to specify “crates needed for running the testsuite” (the [dev-dependencies] section of Cargo.toml) separately from “crates needed for actually using this thing” ([dependencies]), it doesn’t have a way to specify “crates needed for running the benchmarks”. That is a problem when you’re using something like criterion for benchmarking, because it doesn’t get referred to at all in your project code – it only gets used in the benchmarks.

When building your codebase for running the test suite, the compiler sees that you’ve specified criterion as one of your “testsuite dependencies”, but it never gets used in your testsuite. This causes the unused_crate_dependencies lint to lose its tiny mind, and make your build look ugly (or fail).

Thankfully, the solution is very simple, once you know the trick (this could be the unofficial theme song of the entire Rust ecosystem). You need to refer to the criterion crate somewhere in the code that gets built during the testsuite. The lint tells you most of what you need to do (like most things Rust, it tries hard to be helpful), but since it’s a development dependency, you need a little extra secret sauce.

All you need to do is add these two lines to the bottom of your src/lib.rs (or src/main.rs for a binary crate):

#[cfg(test)]
use criterion as _;

For the less Rust-literate, this means “when the build-time configuration flag test is set, import the criterion crate, but don’t, like, allow it to actually be referred to”. This is enough to make the lint believe that the dependency is being used, and making it only happen when the test build-time config flag is set avoids the ugliness of it trying to refer to the crate during regular builds (which would fail, because criterion is only a dev-dependency).

Simple? Yes. Did it take me a lot of skull-sweat to figure out? You betcha. That’s why I’m writing it down – even if everyone else already knows this, at least future Matt will find this post next time he trips over this problem.

,

Matt PalmerRutie and Magnus, Two Good Ways to Build Ruby Extensions in Rust

I wrote the Ruby bindings for the Enquo Project, my attempt to bring queryable encryption to all databases, using the Rutie library. Recently, I’ve rewritten the bindings to use Magnus instead, and I thought I’d put down my thoughts about the whole situation.

The Story So Far

The Enquo Project core cryptography is all written in Rust, as seems to be the vogue these days. Rust is fast, safe, and easily interoperable with most of the rest of the modern software development ecosystem, making it a good choice as a language to implement the cryptographic primitives that Enquo needs, like Order-Revealing Encryption.

Of course, since not everyone writes their applications in Rust, we need to provide the functionality of the Enquo client in the languages that people do use, such as Ruby, Python, and so on. Since re-writing all that cryptographic code in a myriad of languages would be tedious and error-prone, we instead provide bindings to the “core” Rust code. These are just thin shims of code that translate the data types and function calls between Rust and the target language.

Shim in a Can
Wrong sort of shim, but canned language bindings would be handy

As I’m most familiar with Ruby and its development ecosystem (particularly Ruby on Rails), it was natural that I’d make Ruby bindings for Enquo as my first target. Rummaging around, it seemed that Rutie was a good library to use, so off I went.

What are Rutie and Magnus, Anyway?

Both libraries share the same goal: provide the ability to write some Rust code, run that through a compiler, and produce something that can be loaded by the Ruby interpreter and used just like any other Ruby class. They’re both fairly “high level” interfaces, trying to abstract away much of the gory details, and do a lot of the common “heavy lifting” that can make writing bindings fiddly and annoying. Things like mapping data types (like strings and integers) between Rust data types and the closest equivalents in Ruby.

This mapping never goes perfectly smoothly. For example, Ruby integers don’t have a fixed range of values they can represent – you can store a huge number like 2^256 more-or-less as easily as you can the number 12. But Rust, being a lower-level language, only has a set of integer types that have fixed boundaries, like the u32 type, which can only store integers between zero and about four billion (2^32 - 1, to be precise).

There’s also lots of little things that need to be just right, also, like translating the different memory management approaches of the languages, and dealing with a myriad of fiddly little issues like passing arguments and return values in and out of method calls, helpers for defining classes and methods (and pointing to the correct Rust functions), and so on.

A mass of tangled pipes and valves
This is what I imagine it looks like inside these libraries
(Hervé Cozanet / Wikimedia Commons, CC-BY-SA)

All in all, these libraries are fairly significant pieces of work, and I’m mighty glad that someone else has taken on the job of building (and maintaining!) them.

So Why the Change?

Good question.

It’s important to say at the outset that there’s nothing particularly wrong with Rutie. I found using Rutie to be very straightforward, and the Ruby bindings came together very quickly and easily. If someone chose to use Rutie for their project, I’m sure they’d have a good experience.

What made me take the time to rewrite using Magnus was a set of a few tiny things, which together gave me enough of a shove to do the work.

Firstly, I’d had a hiccup with Rutie’s support of newer versions of Ruby, particularly 3.2 (PR). Also, I’d hit a couple of segfault issues, which were ultimately caused by Ruby garbage-collecting data out from underneath me. These were ultimately my fault, of course, but Rutie wasn’t helping me out in avoiding the problems in the first place.

Finally, while Rutie helped translate data types, there was still a bit of boilerplate and ugliness that needed to be included. This wasn’t a showstopper, but I’m appreciating the extra smoothness that Magnus provides here.

As an example, here’s what’s required in Rutie to get “native” Rust data types from Ruby method parameters (and the self reference to the current object):

fn enquo_field_decrypt_text(ciphertext_obj: RString, context_obj: RString) -> RString {
    let ciphertext = ciphertext_obj.to_str_unchecked();
    let context = context_obj.to_vec_u8_unchecked();

    let field = rbself.get_data(&*FIELD_WRAPPER);
    // etc etc etc

The equivalent in Magnus is just the function signature:

fn decrypt_text(&self, ciphertext: String, context: String) -> Result<String, magnus::Error> {

You can also see there that Magnus signals an exception via the Result return value, while Rutie’s approach to raising an exception involves poking the Ruby VM directly, which always struck me as a bit ugly.

There are several other minor things in Magnus (like its cleaner approach to wrapping structs so they can be stored in Ruby objects) that I’m appreciating, too. Never discount the power of ergonomics for making a happy developer.

The End Result

I spent a bit over half of last weekend doing the rewrite – maybe ten hours of so. Since Magnus did more type checking and data validation, and its approach to error handling was smoother, I took the opportunity to rewrite a bunch of Ruby “wrapper” code I’d written (which just existed to check things like ranges of values and string encodings) into Rust, as well.

To make sure that the conversion was accurate, I added a heap more unit tests to the bindings. I also took the opportunity to restructure the codebase to split the code for the different Ruby classes into separate files, which I hadn’t done initially as the code had originally accreted, rather than being purposefully written.

All up, though, my rewrite ended up removing over 60 lines (excluding the extra specs I added):

$ git diff --stat -- lib ext/enquo/src
 ruby/ext/enquo/src/field.rs       | 342 ++++++++++++++++++++++++++++++++++++++
 ruby/ext/enquo/src/lib.rs         | 338 ++++---------------------------------
 ruby/ext/enquo/src/root.rs        |  39 +++++
 ruby/ext/enquo/src/root_key.rs    |  67 ++++++++
 ruby/lib/enquo.rb                 |   6 +-
 ruby/lib/enquo/field.rb           | 173 -------------------
 ruby/lib/enquo/root.rb            |  28 ----
 ruby/lib/enquo/root_key.rb        |   1 -
 ruby/lib/enquo/root_key/static.rb |  27 ---
 9 files changed, 479 insertions(+), 542 deletions(-)

Considering that I was translating from a “higher level” language into a “lower level” one, the removal of so much code is quite remarkable. Magnus was able to automagically replace rather a lot of raise ArgumentError if something.isnt_right code in those .rb files.

So, in conclusion, if you, too, are building Ruby extensions in Rust, while Rutie is a solid choice (and you probably should stick with it if you’re already using it), I highly recommend giving Magnus a look for your next extension.

,

Matt PalmerDatabase Encryption: If It's So Good, Why Isn't Everyone Doing It?

a wordcloud of organisations who have been reported to have had data breaches in 2022
Just some of the organisations that leaked data in 2022

It seems like just about every day there’s another report of another company getting “hacked” and having its sensitive data (or, worse, the sensitive data of its customers) stolen. Sometimes, people’s most intimate information gets dumped for the world to see. Other times it’s “just” used for identity theft, extortion, and other crimes. In the least worst case, the attacker gets cold feet, but people suffer stress and inconvenience from having to replace identity documents.

A great way to protect information from being leaked is to encrypt it. We encrypt data while it’s being sent over the Internet (with TLS), and we encrypt it when it’s “at rest” (with disk or volume encryption). Yet, everyone’s data seems to still get stolen on a regular basis. Why?

Because the data is kept online in an unencrypted form, sitting in the database while its being used. This means that attackers can just connect to the database, or trick the application into dumping the database, and all the data is just lying there, waiting to be misused.

It’s Not the Devs’ Fault, Though

You may be thinking that leaving an entire database full of sensitive data unencrypted seems like a terrible idea. And you’re right: it is a terrible idea. But it’s seemingly unavoidable.

The problem is that in order to do what a database does best (query, sort, and aggregate data), it needs to be able to know what the data is. When you encrypt data, however, all the database sees is a locked box.

a locked box
Not very useful for a database

The database can’t tell what’s in the locked box – whether it’s a number equal to 42, or a date that’s less than 2023-01-01, or a string that contains the substring “foo”. Every value is just an opaque blob of “stuff”, and the database is rendered completely useless.

Since modern applications usually rely pretty heavily on their database, it’s essentially impossible to build an application if you’ve turned your database into a glorified flat-file by encrypting everything in it. Thus, it’s hardly surprising that developers have to leave the data laying around unencrypted, for anyone to come along and take.

Introducing Enquo

I said before that having data unencrypted in a database is seemingly unavoidable. That’s because there are some innovative cryptographic techniques that can make it possible to query encrypted data.

Andy Dwyer being amazed
Indeed

The purpose of the Enquo project is to provide a common set of cryptographic primitives that implement ENcrypted QUery Operations (ie “Enquo”), and integrate those operations into databases, ORMs, and anywhere else that could benefit. The end goal is to provide the ability to encrypt all the data stored in any database server, while still allowing the data to be queried and aggregated.

So far, the project consists of these components:

  • the enquo-core library, that implements queryable encrypted integers, dates, and text in Rust and Ruby;
  • a PostgreSQL extension, pg_enquo, that allows PostgreSQL to query encrypted data; and
  • a Rails ActiveRecord extension, ActiveEnquo, that augments ActiveRecord to do the encryption/decryption required.

Support for other languages and ORMs is designed to be as straightforward as possible, and integration with other databases is mostly dependent on their own extensibility.

The project’s core tenets emphasise both uncompromising security, and a friendly developer experience.

Naturally, all Enquo code is open source, released under the MIT licence.

Would You Like To Know More?

Desire to know more intensifies
Everyone who uses a database...

If all this sounds relevant to your interests:

  1. If you use Ruby on Rails and PostgreSQL, you’re halfway home already. Follow the ActiveEnquo getting started tutorial and see how much of your data Enquo can already protect. When you find data you want to encrypt but can’t, tell me about it.

    • If you use Ruby and PostgreSQL with another ORM, such as Sequel, writing a plugin to support Enquo shouldn’t be too difficult. The ActiveEnquo code should give you a good start. If you get stuck, get in touch.
  2. If you use PostgreSQL with another programming language, tell me what language you use and we’ll work together to get bindings for that library created.

  3. If you use another database server, support is coming for your database of choice eventually, but at present there’s no timeline on support. On the off chance that you happen to be a hard-core database hacking expert, and would like to work on getting Enquo support in your preferred database server, I’d love to talk to you.

,

Lev LafayetteCOMP90024: Cluster and Cloud Computing For 2023

For the past few years, I have delivered some guest lectures and training for the University of Melbourne master's level course Cluster and Cloud Computing. This year's contribution has been expanded, which is not surprising as the course is apparently required for data science students as well as computer science students. Thus, for 2023 four presentations were given, with the workshop repeated three times! The first two presentations were an introduction to the Linux command line, followed by slightly more advanced content which included an introduction to shell scripting. The third presentation was the main lecture on supercomputing and the Spartan HPC system in particular. The fourth was a workshop on HPC job submission and an introduction to OpenMP and MPI programming, with a concentration on using MPI4Py.

,

Michael StillHolman CLXRGB60 RGB WiFi garden light controllers and tasmota

Today I went forth to Bunnings in the rain to purchase a Holman CLXRGB60 RGB garden light controller so that I too could have fancy lighting in my garden and impress all those guests I never have over. I had been given hope by the Blakadder site that I would be able to flash tasmota onto the controller so it integrated with my Home Assistant home automation.

Unfortunately, it was not to be. Despite the device being TYWE3L based, the warning on the Blakadder site was correct, and this is a next-gen Tuya device where the crypto hasn’t been broken yet. Then again, I couldn’t even get this device to pair in the Holman app, so it clearly hates me.

This unfortunately means the excellent instructions from Jon Oxer were not helpful today. I think there is a theoretical option here to flash using the serial pins on the board, ala this guide. Also, it means my hair got wet for nothing.

So as to take revenge for my wet hair I have decided to pivot. The Holman lights seem quite well made, but they’re just 12 volt RGB PWM devices. So I can use their lights and build my own controller — although I need to ponder how to drive high current PWM I suppose. I thought it would therefore be useful to document the pinout on the Holman connector before I return the controller.

Labelled RGB Holman Garden Light 12v pinout

These pins map to the following colours of conductor on the cable I chopped up:

      • +12v: black
      • Red: brown
      • Green: earth (green and yellow)
      • Blue: white

I hope this is helpful to someone else as well. I wonder what this connector is called?

,

Tim SerongTeaching an odd dog new tricks

We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.

I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight version of Kubernetes. Turns out that wasn’t too difficult, so now I can do this:

> sesdev create k3s
=== Creating deployment "k3s" with the following configuration === 
Deployment-wide parameters (applicable to all VMs in deployment):
deployment ID:    k3s
number of VMs:    5
version:          k3s
OS:               tumbleweed
public network:   10.20.190.0/24 
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[...
  wait a few minutes
  (there's lots more log information output here in real life)
...]

=== Deployment Finished ===
 You can login into the cluster with:
 $ sesdev ssh k3s

…and then I can do this:

> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…

master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1

master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s

…and then I can make a mess with kubectl apply, helm, etc.
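
For example, a quick smoke test of the new cluster might look something like this (the image and names are arbitrary):

master:~ # kubectl create deployment nginx --image=nginx
master:~ # kubectl expose deployment nginx --port=80 --type=NodePort
master:~ # kubectl get svc nginx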

One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:

> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc

As you can see this gives all the worker nodes an extra two 8GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes based storage systems such as Longhorn, but I haven’t tried that yet.

,

Michael StillMinor questions in Linux file semantics

I’ve known for a long time that if you delete a file on Unix / Linux but that file is open somewhere, the blocks used by the file aren’t freed until that process closes the file (or is terminated), but I was left wondering about some other edge cases.

Shaken Fist has a distributed blob store. It also has a cache of images that virtual machines are using. If the blob store and the image cache are on the same filesystem, sometimes the image cache entry can be a hard link to an entry in the blob store (for example, if the entry in the blob store doesn’t need to be transcoded before use by the virtual machine). However, if they are on different file systems, I instead use a symbolic link.

This raises questions — what happens if you rename a file which is open for writing in a program? What happens if you change a symbolic link to point somewhere else while it is open? I suspect in both cases the right thing happens, but I decided I should test these theories out.

First off, let’s cover the moving a file which is being written to case. Specifically, moving the file on the same filesystem. I wrote this little test program:

#!/usr/bin/python3

import datetime
import time

with open('a', 'w') as f:
    try:
        while True:
            f.write('%s\n' % datetime.datetime.now())
            time.sleep(1)

    except KeyboardInterrupt:
        f.close()

In one terminal I set it running. In another I then renamed ‘a’ to ‘b’ and waited a bit. The short answer? The newer writes from my script ended up in ‘b’ correctly. This makes sense when you remember that files don’t have names in most Unix filesystems — a directory has dirents with names, and they point to an inode. The open program is changing the content of an inode and associated blocks, and that’s quite separate from changing the dirent that points to that inode.
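To make the inode argument concrete, here’s a minimal sketch (an illustration of the same idea, not the original test) that renames a file while it is open and checks that the open file descriptor and the new name still refer to the same inode:

#!/usr/bin/python3

import os

# Open a file and note the inode behind the open file descriptor.
with open('a', 'w') as f:
    f.write('hello\n')
    f.flush()
    inode_open = os.fstat(f.fileno()).st_ino

    # Rename the file on the same filesystem while it is still open.
    os.rename('a', 'b')

    # Writes keep landing in the same inode, now reachable as 'b'.
    f.write('world\n')

# The dirent 'b' points at the inode the writer had open all along.
assert inode_open == os.stat('b').st_ino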

Secondly, what happens if I have a symlink to a different filesystem, move the file on that other filesystem and then update the symlink? All of course while the file is in use?

Unsurprisingly it works just like the previous example — the open file continues to be updated regardless of the move and the change of symlink.
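For completeness, here’s a minimal sketch of that second case (an illustration only, assuming a hypothetical second filesystem mounted at /mnt/other):

#!/usr/bin/python3

import os

# Hypothetical second filesystem; adjust to suit.
other_fs = '/mnt/other'
target = os.path.join(other_fs, 'target_a')

# Create the target on the other filesystem and symlink to it locally.
with open(target, 'w') as f:
    f.write('first\n')
os.symlink(target, 'link')

# Open via the symlink, then move the target and repoint the symlink
# while the file is still open.
f = open('link', 'a')
moved = os.path.join(other_fs, 'target_b')
os.rename(target, moved)
os.remove('link')
os.symlink(moved, 'link')

# The open file descriptor still refers to the original inode, so this
# write lands in what is now 'target_b'.
f.write('second\n')
f.close()

print(open(moved).read())   # 'first', then 'second'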

This is good, because it makes re-sharding the blob store in Shaken Fist much easier. So there you go.

,

Michael StillMalware Analyst’s Cookbook and DVD

Another technical book, this time because my employer lets me buy random technical books as long as I pinky swear to read them and this one sounded interesting and got good reviews.

First off, the book is a bit dated given it’s from 2011 — there are lots of references to Ubuntu 10.10 for example, and they say to avoid Python 3, which has its historical charm. This is unfortunate given the first section of the book talks about setting up honeypots to collect malware to examine, but Dionaea for example had its last commit in 2021. I am left wondering if there are more modern honeypot systems that people use these days.

Secondly, the book is definitely a cookbook, and that’s on me for not noticing this about the book before buying it — it’s a series of recipes / scripts that do interesting things with malware. That said, it isn’t really teaching a cohesive set of skills, it’s more of a series of stepping stones along the path you might follow. I think that’s an unintended piece of important learning — books with “cookbook” or “recipes” in their title probably aren’t very good as an overview of a topic area. My bad.

That said, some parts of the book are very good — the discussion of whois, DNS, and Real Time Black Lists (RTBLs) is helpful and less focussed on providing scripts you could run. The discussion of how to log changes to a Windows system, detect attempts to hide files in NTFS filesystems, and detect changes to registry hives were interesting in an abstract way, but perhaps obvious to someone who actually uses Windows.

Overall, I’m a bit disappointed in this book and it will be exiled to a shelf at the office as a punishment.

Malware Analyst's Cookbook and DVD
Michael Ligh, Steven Adair, Blake Hartstein, Matthew Richard
John Wiley & Sons, November 2, 2010, 747 pages

A computer forensics "how-to" for fighting malicious code and analyzing incidents. With our ever-increasing reliance on computers comes an ever-growing risk of malware. Security professionals will find plenty of solutions in this book to the problems posed by viruses, Trojan horses, worms, spyware, rootkits, adware, and other invasive software. Written by well-known malware experts, this guide reveals solutions to numerous problems and includes a DVD of custom programs and tools that illustrate the concepts, enhancing your skills. Security professionals face a constant battle against malicious software; this practical manual will improve your analytical capabilities and provide dozens of valuable and innovative solutions Covers classifying malware, packing and unpacking, dynamic malware analysis, decoding and decrypting, rootkit detection, memory forensics, open source malware research, and much more.

,

Michael StillThe BeyondCorp papers

Google’s BeyondCorp effort would probably be what we would now call Zero Trust, although I am surprised by how little name recognition BeyondCorp has when I talk to security people about Zero Trust. Perhaps there are subtle differences between the two, but if there are they aren’t obvious to me. I find myself reading the relevant Usenix papers for BeyondCorp, so I figure I’ll post a summary of what I got from each paper here.

The earliest of these papers are quite old now (2014), especially for something the rest of the industry is only starting to talk a lot about at the moment. I wonder if there is a viable business model in watching what papers megacorps like Google publish, and then implementing them as commercialized products before the rest of the market catches on?

Either way, here’s a summary of the various papers from the perspective of an interested bystander…

BeyondCorp: a new approach to enterprise security is an introductory paper that introduces the idea of what we would now call Zero Trust networks. That is, that the internal corporate network is not categorized as especially trusted, but instead serves as an access mechanism to services which define their own trust of an end user. This trust is enforced by access gateways, and derived from metrics such as how recently OS updates have been installed on the requesting device. This is a good introduction to the concept, especially given its age.

BeyondCorp: design to deployment at Google — unfortunately, I found this paper less useful. It is higher level than the first paper, and provides fewer actionable insights for someone thinking of implementing Zero Trust.

BeyondCorp: the access proxy describes the high level architecture of the access proxy, which is the frontend which takes requests from clients and authenticates / authorizes them before passing them onto the protected services. There aren’t a lot of surprises here, but it is a good overview of what you might encounter along the way (non-HTTP protocols requiring a client side helper for example).

Migrating to BeyondCorp: Maintaining Productivity While Improving Security is a discussion of the process of transitioning the Google network to the new zero trust access methodology while not breaking users’ ability to get things done. This was implemented by partitioning the problem space into smaller, more tractable problems, and then transitioning clients to the new non-privileged VLAN as these problems were solved. A key component of this was an enterprise-wide rollout of 802.1x to ensure device identity was well understood. This paper is largely descriptive — while it might provide inspiration to other implementations, it does not provide a complete roadmap, largely because every organization’s legacy applications will differ.

That said, one interesting idea is that the network rules to control traffic were implemented in two places — in the network layer for the new VLAN, but also in an iptables implementation on client machines. This meant that it was easy to add clients in test mode (with the local implementation), but turn it off again if things didn’t work out. It also meant that they could add enforcement in locations where the new VLAN had not yet been deployed.

Another interesting idea is the provisioning of micro-VPNs for harder to convert applications such as those requiring non-HTTP access to network resources. This looks to my modern eyes as a lot like what tailscale does — exposing a single application via a micro-VPN accessed from the client routing table.

BeyondCorp: The User Experience details the gradual reduction in the demand for “traditional” VPN connectivity as users were moved to BeyondCorp, even as users initially expected a more traditional approach. It covers other user support scenarios as well, but most of them are quite Google-specific (for example their loaner laptop program).

BeyondCorp: Building a Healthy Fleet is the final paper in the series and discusses defining the threats you are mitigating by undertaking a Zero Trust approach to network security. In the case of BeyondCorp a large amount of the benefit is derived from enforcing regular updates on the user endpoint fleet, as well as controlling who can access what service based on their business needs.

,

Paul WayperThe Energica Experia

I recently bought an Energica Experia - the latest, largest and longest distance of Energica's electric motorbike models.

The decision to do this rather than build my own was complicated, and I'm going to mostly skip over the detail of that. At some time I might put it in another blog post. But for now it's enough to say that I'd accidentally cooked the motor in my Mark I, the work on the Mark II was going to take ages, and I was in the relatively fortunate situation of being able to afford the Experia if I sold my existing Triumph Tiger Sport and the parts for the Mark II.

For other complicated reasons I was planning to be in Sydney after the weekend that Bruce at Zen Motorcycles told me the bike would be arriving. Rather than have it freighted down, and since I would have room for my riding gear in our car, I decided to pick it up and ride it back on the Monday. In reconnoitering the route, we discovered that by pure coincidence Zen Motorcycles is on Euston Road in Alexandria, only 200 metres away from the entrance to WestConnex and the M8. So with one traffic light I could be out of Sydney.

I will admit to being more than a little excited that morning. Electric vehicles are still, in 2023, a rare enough commodity that waiting lists can be months long; I ordered this bike in October 2022 and it arrived in March 2023. So I'd had plenty of time to build my expectations. And likewise the thought of riding a brand new bike - literally one of the first of its kind in the country (it is the thirty-second Experia ever made!) - was a little daunting. I obtained PDF copies of the manual and familiarised myself with turning the cruise control on and off, as well as checking and setting the regen braking levels. Didn't want to stuff anything up on the way home.

There is that weird feeling in those situations of things being both very ordinary and completely unique. I met Bruce, we chatted, I saw the other Experia models in the store, met Ed - who had come down to chat with Bruce, and just happened to be the guy who rode a Harley Davidson Livewire from Perth to Sydney and then from Sydney to Cape Tribulation and back. He shared stories from his trip and tips on hypermiling. I signed paperwork, picked up the keys, put on my gear, prepared myself.

Even now I still get a bit choked up just thinking of that moment. Seeing that bike there, physically real, in front of me - after those months of anticipation - made the excitement real as well.

So finally, after making sure I wasn't floating, and making sure I had my ear plugs in and helmet on the right way round, I got on. Felt the bike's weight. Turned it on. Prepared myself. Took off. My partner followed behind, through the lights, onto the M8 toward Canberra. I gave her the thumbs up.

We planned to stop for lunch at Mittagong, while the NRMA still offers the free charger at the RSL there. One lady was charging her Nissan Leaf on the CHAdeMO side; shortly after I plugged in a guy arrived in his Volvo XC40 Recharge. He had the bigger battery and would take longer; I just needed a ten minute top up to get me to Marulan.

I got to Marulan and plugged in; a guy came thinking he needed to tell the petrol motorbike not to park in the electric vehicle bay, but then realised that the plug was going into my bike. Kate headed off, having charged up as well, and I waited another ten minutes or so to get a bit more charge. Then I rode back.

I stopped, only once more - at Mac's Reef Road. I turned off and did a U turn, then waited for the traffic to clear before trying the bike's acceleration. Believe me when I say this bike will absolutely do a 0-100km/hr in under four seconds! It is not a light bike, but when you pull on the power it gets up and goes.

Here is my basic review, given that experience and then having ridden it for about ten weeks around town.

The absolute best feature of the Energica Experia is that it is perfectly comfortable riding around town. Ease on the throttle and it gently takes off at the traffic lights and keeps pace with the traffic. Ease off, and it gently comes to rest with regenerative braking and a light touch on the rear brake after stopping to hold it still. If you want to take off faster, wind the throttle on more. It is not temperamental or twitchy, and you have no annoying gears and clutch to balance.

In fact, I feel much more confident lane filtering, because before I would have to have the clutch ready and be prepared to give the Tiger Sport lots of throttle lest I accidentally stall it in front of an irate line of traffic. With the Experia, I can simply wait peacefully - using no power - and then when the light goes green I simply twist on the throttle and I am away ahead of even the most aggressive car driver.

It is amazingly empowering.

I'm not going to bore you with the stats - you can probably look them up yourself if you care. The main thing to me is that it has DC fast charging, and watching 75kW go into a 22.5kWh battery is just a little bit terrifying as well as incredibly cool. The stated range of 250km on a charge at highway speeds is absolutely correct, from my experience riding it down from Sydney. And that plus the fast charging means that I think it is going to be quite reasonable to tour on this bike, stopping off at fast or even mid-level chargers - even a boring 22kW charger can fill the battery up in an hour. The touring group I travel with stops often enough that if those stops can be top ups, I will not hold anyone up.

Some time in the near future I hope to have a nice fine day where I can take it out on the Cotter Loop. This is an 80km stretch of road that goes west of Canberra into the foothills of the Brindabella Ranges, out past the Deep Space Tracking Station and Tidbinbilla Nature Reserve. It's a great combination of curving country roads and hilly terrain, and reasonably well maintained as well. I did that on the Tiger Sport, with a GoPro, before I sold it - and if I can ever convince PiTiVi to actually compile the video from it I will put that hour's ride up on a platform somewhere.

I want to do that as much to show off Canberra's scenery as to show off the bike.

And if the CATL battery capacity improvement comes through to the rest of the industry, and we get bikes that can do 400km to 500km on a charge, then electric motorbike touring really will be no different to petrol motorbike touring. The Experia is definitely at the forefront of that change, and that kind of touring is already possible on this bike.

,

Robert CollinsRustup CI / test suite performance

Rustup (the community package manager for the Rust language) was starting to really suffer: CI times were up at around one hour.

We’ve made some strides in bringing this down.

Caching factory for test scenarios

The first thing, which achieved about a 30% reduction in test time, was to stop recreating all the test context every time.

Rustup tests the download/installation/upgrade of distributions of Rust. To avoid downloading gigabytes in the test suite, the suite creates mocks of the published Rust artifacts. These mocks are GPG signed and compressed with multiple compression methods, both of which are quite heavyweight operations to perform – and not actually the interesting code under test to execute.

Previously, every test was entirely hermetic, and usually the server state was also unmodified.

There were two cases where the state was modified. One, a small number of tests testing error conditions such as GPG signature failures. And two, quite a number of tests that were testing temporal behaviour: for instance, install nightly at time A, then with a newer server state, perform a rustup update and check a new version is downloaded and installed.

We’re partway through this migration, but compare these two tests:

fn check_updates_some() {
    check_update_setup(&|config| {
        set_current_dist_date(config, "2015-01-01");
        config.expect_ok(&["rustup", "update", "stable"]);
        config.expect_ok(&["rustup", "update", "beta"]);
        config.expect_ok(&["rustup", "update", "nightly"]);
        set_current_dist_date(config, "2015-01-02");
        config.expect_stdout_ok(
            &["rustup", "check"],
            for_host!(
                r"stable-{0} - Update available : 1.0.0 (hash-stable-1.0.0) -> 1.1.0 (hash-stable-1.1.0)
beta-{0} - Update available : 1.1.0 (hash-beta-1.1.0) -> 1.2.0 (hash-beta-1.2.0)
nightly-{0} - Update available : 1.2.0 (hash-nightly-1) -> 1.3.0 (hash-nightly-2)
"
            ),
        );
    })
}

fn check_updates_some() {
    test(&|config| {
        config.with_scenario(Scenario::ArchivesV2_2015_01_01, &|config| {
            config.expect_ok(&["rustup", "toolchain", "add", "stable", "beta", "nightly"]);
        });
        config.with_scenario(Scenario::SimpleV2, &|config| {
            config.expect_stdout_ok(
                &["rustup", "check"],
                for_host!(
                    r"stable-{0} - Update available : 1.0.0 (hash-stable-1.0.0) -> 1.1.0 (hash-stable-1.1.0)
beta-{0} - Update available : 1.1.0 (hash-beta-1.1.0) -> 1.2.0 (hash-beta-1.2.0)
nightly-{0} - Update available : 1.2.0 (hash-nightly-1) -> 1.3.0 (hash-nightly-2)
"
                ),
            );
        })
    })
}

The former version mutates the date with set_current_dist_date; the new version uses two scenarios, one for the earlier time, and one for the later time. This permits the server state to be constructed only once. On a per-test basis it can move as much as 50% of the time out of the test.

Single binary for the integration test suite

The next major gain was moving from having 14 separate integration test binaries to just one. This reduces the cost of linking the test binaries, all of which link in the same library. It also permits us to see unused functions in our test support library, which helps with cleaning up cruft rather than having it accumulate.

Hard linking rather than copying ‘rustup-init’

Part of the test suite for each test is setting up an installed rustup environment. Why not start from scratch every time? Well, we obviously have tests that do that, but most tests are focused on steps beyond the new-user case. Setting up an installed rustup environment has a few steps, but particular ones are copying a binary of rustup into the test sandbox, and hard linking it under various names: cargo, rustc, rustup etc.

A debug build of rustup is ~20MB. Running 400 tests means about 8GB of IO; on some platforms most of that IO won’t hit disk, on others it will.

In review now is a PR that changes the initial copy to a hardlink: we hardlink the rustup-init built by cargo into each test, and then hardlink that to the various binaries. That saves 8GB of IO, which isn’t much from some perspectives, but it adds pressure on the page cache, and is wasted work. One wrinkle is a very low max-links limit on NTFS of 1023; to mitigate that we count the links made to rustup-init and generate a new inode for the original to avoid failures happening.
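The real change is in rustup’s Rust test support code, but the shape of the workaround is easy to sketch. Here’s a rough Python illustration (a hypothetical helper, not the actual implementation) of the idea: hard link where possible, and mint a fresh inode for the original once its link count gets close to the limit.

import os
import shutil

MAX_LINKS = 1023   # NTFS per-inode hard link limit


def link_or_refresh(src: str, dst: str) -> None:
    """Hard link src to dst; if src already carries too many links,
    replace src with a fresh copy (a new inode) and link against that."""
    if os.stat(src).st_nlink >= MAX_LINKS:
        tmp = src + '.new'
        shutil.copy2(src, tmp)    # same contents, brand new inode
        os.replace(tmp, src)      # atomically swap the fresh copy in
    os.link(src, dst)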

Future work

In GitHub actions this lowers our test time to 19m for Linux, 24m for Windows, which is a lot better but not great.

I plan on experimenting with separate actions for building release artifacts and doing CI tests – at the moment we have the same action do both, but they don’t share artifacts in the cache in any meaningful way, so we can probably gain parallelism there, as well as turning off release builds entirely for CI.

We should finish the cached test context work and use it everywhere.

We’re also looking at having fewer integration tests and more narrow, close-to-the-code tests.

,

Lev Lafayette2022 HPC Training Utilisation and Results

Unique identifiers for 263 users who received HPC training in 2022 were determined from collected attendee records. Note that users may enrol in multiple courses (e.g., Introduction to Spartan, Advanced Spartan, Parallel Processing, etc.) and may return for revision. All these users are counted once only.

From these unique users, a total of 212 usernames could be determined from email addresses. When enrolling for training, users do not include their Spartan username or their university ID; sometimes they don't even use a university email address, despite requests.

There were 97 users who established an account but did not use Spartan (compute hours = 0). For the remaining 115 users, the total job hours run after they received training was 6280454. This calculation ensured that jobs run on Spartan prior to receiving training were not counted, e.g.,

$ sreport cluster AccountUtilizationByUser cluster=spartan user=$username start=2022-11-01 end=2022-12-31 -t hours

The total allocated hours of cluster utilisation was 11597951, from the command:

$ sreport cluster Utilization cluster=spartan start=2022-11-01 end=2022-12-31 -t hours

This means that at least 54.14% of cluster utilisation in 2022 was conducted by users after receiving training.
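For the record, the per-user aggregation can be scripted along the following lines (a rough Python sketch only, assuming a trained_users.txt file of "username training-date" pairs and the default sreport column layout; not the exact process used):

#!/usr/bin/env python3
# Sum post-training Spartan usage for a list of trained users.
import subprocess

total_hours = 0
with open('trained_users.txt') as fh:       # lines of: username YYYY-MM-DD
    for line in fh:
        username, trained = line.split()
        out = subprocess.run(
            ['sreport', 'cluster', 'AccountUtilizationByUser',
             'cluster=spartan', 'user=' + username,
             'start=' + trained, 'end=2022-12-31',
             '-t', 'hours', '-n', '-P'],
            capture_output=True, text=True, check=True).stdout
        # Assumed column layout: Cluster|Account|Login|Proper Name|Used|Energy
        for row in out.splitlines():
            fields = row.split('|')
            if len(fields) >= 5 and fields[2] == username and fields[4].isdigit():
                total_hours += int(fields[4])

print(total_hours)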

The following steps are recommended to improve record-keeping and utilisation.

1) Emphasising the need for enrollees to use University of Melbourne email addresses only, and rejecting applications that do not do this.

2) Contacting those who attended training but did not use Spartan to ascertain why this was the case.

,

Tim SerongHack Week 22: An Art Project

Back in 2012, I received a box of eight hundred openSUSE 12.1 promo DVDs, which I then set out to distribute to local Linux users’ groups, tech conferences, other SUSE crew in Australia, and so forth. I didn’t manage to shift all 800 DVDs at the time, and I recently rediscovered the remaining three hundred and eighty four while installing some new shelves. As openSUSE 12.1 went end of life in May 2013, it seemed likely the DVDs were now useless, but I couldn’t bring myself to toss them in landfill. Instead, given last week was Hack Week, I decided to use them for an art project. Here’s the end result:

Geeko mosaic made of cut up openSUSE DVDs, on a 900mm x 600mm piece of plywood

Making that mosaic was extremely fiddly. It’s possibly the most annoying Hack Week project I’ve ever done, but I’m very happy with the outcome 🙂

The backing is a piece of 900mm x 600mm x 6mm plywood, primed with some leftover kitchen and bathroom undercoat, then spray painted black. I’d forgotten how bad spray paint smells, but it makes for a nice finish. To get the Geeko shape, I took the official openSUSE logo, then turned it into an outline in Inkscape, saved that as a PNG, opened it in GIMP, and cut it into nine 300mm x 200mm pieces which I then printed on A4 paper, stuck together with tape, and cut out to make a stencil. Of course, the first time I did that, nothing quite lined up, so I had to reprint it but with “Ignore page margins” turned off and “Draw crop marks” turned on, then cut the pages down along the crop marks before sticking them together the second time. Then I placed the stencil on the backing, glued the eye down (that just had to be made from the centre of a DVD!) and started laying out cut up DVD shards.

Geeko mosaic work in progress

I initially tried cutting the DVDs with tin snips, which is easy on the hands, but had a tendency to sometimes warp the DVD pieces and/or cause them to delaminate, so I reverted to a large pair of scissors which was more effort but ultimately less problematic.

After placing the pieces that made up the head, tail, feet and spine, and deciding I was happy with how they looked, I glued each piece down with superglue. Think: carefully pick up DVD shard without moving too many other shards, turn over, dab on a few tiny globs of superglue, lower into place, press for a few seconds, move to next piece. Do not get any superglue on your fingers, or you’ll risk sticking your fingers together and/or make a gluey mess on the shiny visible side of the DVD shards.

It was another three sessions of layout-then-glue-down to fill in the body. I think I stuck my fingers together about six, or eight, or maybe twenty times. Also, despite my best efforts to get superglue absolutely nowhere near the stencil at all, when I removed the stencil, it had stuck to the backing in several places. I managed to scrape/cut that off with a combination of fingernails, tweezers, and the very sharp knife in my SLE 12 commemorative Leatherman tool, then touched up the remaining white bits with a fine point black Sharpie.

SLE 12 commemorative Leatherman tool (it seemed appropriate to use this)

Judging from the leftover DVD centre pieces, this mosaic used about 12 DVDs in all, which isn’t very many considering my initial stash. I had a few other ideas for the remainder, mostly involving hanging them up somehow, which I messed around with earlier on while waiting for the paint to dry on the plywood.

One (failed) idea was to use a cutting wheel on my Dremel tool to slice half way through a few DVDs, then slot them into each other to make a hanging thingy that would spin in the wind. I was unable to make a smooth/straight enough cut for this to work, and superglue doesn’t bridge gaps. You can maybe get an idea of what I was aiming at from this photo:

Four DVDs slotted into each other vertically, kinda, one with nasty superglue smear

My wife had an idea for a better way to do this, which is to take a piece of dowel, cut slots in the sides, and glue DVD halves into the slots using Araldite (that’s an epoxy resin, in case you didn’t grow up with that brand name). I didn’t get around to trying this, but I reckon she’s onto something. Next time I’m at the hardware store, I’ll try to remember to pick up some suitably sized dowel.

I did make one somewhat simpler hanging thingy, which I call “Geeko’s Tail (Uncurled)”. It’s just DVDs superglued together on the flat, hanging from fishing line, but I think it’s kinda cool:

No, it’s not an upside down question mark, it’s “Geeko’s Tail (Uncurled)”

Also, I’ve discovered that Officeworks has an e-waste recycling program, so any DVDs I don’t use in future projects needn’t go to landfill.

Update 2023-02-20: For photos of the mosaic, plus wallpapers made from the photos, see https://github.com/tserong/hackweek22

,

Lev LafayetteThe Importance of Supercomputing

Most people use their computers (which includes mobile phones) for communication, social media, games, entertainment, office applications, and the like. Most of the time these activities are not particularly onerous in terms of computing, nor do they lead to enormous benefits in productivity, invention, and discovery. There is one field, however, rarely discussed, that does do this - and that is supercomputing. It is through supercomputing that we are witnessing the most important technological advances of our day, including astronomy, weather and climate forecasting, materials science and engineering, molecular modeling, genomics, neurology, geoscience, and finance - all with numerous success stories.

Usually, I draw a distinction between supercomputing and high-performance computing. Specifically, a supercomputer is any computer system that has exceptional computational power at a particular point in time, many (but not all) of which are measured in the twice-yearly Top500 list. Once upon a time dominated by monolithic mainframes, supercomputers, in a contemporary sense, are a subset of high-performance computing, which is typically arranged as a cluster of commodity-grade servers with a high-speed interconnect and message-passing software that allows the entire unit to be treated as a whole. One can even put together a "supercomputer" from Raspberry Pi systems, as the University of Southampton illustrates.

How important is this? For many years now we've known that there is a strong association between research output and access to such systems. Macroeconomic analysis shows that for every dollar invested in supercomputing, there is a return of forty-four dollars in profits or cost-savings. Both these metrics are almost certainly going to increase in time; datasets and problem complexity are growing at a rate greater than the computational performance of personal systems. More researchers need access to supercomputers.

However, researchers do require training to use such systems. The environment, the interface, the use of schedulers on a shared system, and the location of data are all things that need to be learned. This is a big part of my life; in the last week, I spent three days teaching researchers everything from the basics of using a supercomputer system, to scripting jobs, to using Australia's most powerful system, Gadi at NCI, along with contributions at a board meeting of the international HPC Certification Forum. It is often a challenging vocation, but I feel confident that it is making a real difference to our shared lives. For that, I am very grateful.

,

Colin CharlesLong Malaysians, Short Malaysia

I have long said “Long Malaysians, Short Malaysia” in conversation to many. Maybe it took me a while to tweet it, but this was the first example: Dec 29, 2021. I’ve tweeted it a lot more since.

Malaysia has a 10th Prime Minister, but in general, it is a very precarious partnership. Consider it, same shit, different day?

I just have to get off the Malaysian news diet. Malaysians elsewhere, are generally very successful. Malaysians suffering by their daily doldrums, well, they just need to wake up, see the light, and succeed.

In the end, as much as people paraphrase, ask not what the country can do for you, legitimately, this is your life, and you should be taking good care of yourself and your loved ones. You succeed, despite of. Politics and the state happens, regardless of.

Me, personally? Ideas are abound for how to get Malaysians who see the light, to succeed elsewhere. And if I read, and get angry at something (tweet rage?), I’m going to pop RM50 into an investment account, which should help me get off this poor habit. I’ll probably also just cut subscriptions to Malaysian news things… Less exposure, is actually better for you. I can’t believe that it has taken me this long to realise this.

Time to build.

Colin CharlesHello 2023

I did poorly blogging last year. Oops. I think to myself when I read, This Thing Still On?, I really have to do better in 2023. Maybe the catalyst is the fact that Twitter is becoming a shit show. I doubt people will leave the platform in droves, per se, but I think we are coming back to the need for decentralised blogs again.

I have 477 days to becoming 40. I ditched the Hobonichi Techo sometime in 2022, and just focused on the Field Notes, and this year, I’ve got a Monocle x Leuchtturm1917 + Field Notes combo (though it seems my subscription lapsed Winter 2022, I should really burn down the existing collection, and resubscribe).

2022 was pretty amazing. Lots of work. Lots of fun. 256 days on the road (what a number), 339,551km travelled, 49 cities, 20 countries.

The getting back into doing, and not being afraid of experimenting in public is what 2023 is all about. The Year of The Rabbit is upon us tomorrow, hence why I don’t mind a little later Hello 2023 :)

Get back into the habit of doing. And publishing by learning and doing. No fear. Not that I wasn’t doing, but it’s time to be prolific with what’s been going on.

I better remember that.

,

Lev LafayetteInstalling VASP 6.x on x86_64 RHEL 7.9 Linux

In the past I have posted two sets of instructions for installing VASP (Vienna Ab-initio Simulation Package for quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane wave basis set), each for VASP 5.X on an Opteron system. Now, many years later, I find myself in the position of having to install VASP once again.

The installation approach is still pretty horrible but it has improved a great deal. Previously there was a small mountain of makefiles for different architectures and one had to find a file that was "close enough" and modify as required. This process is still required, but the quantity of makefiles is dramatically reduced with improved abstraction, directory management, and a test suite.

The structure (once extracted) is as follows:

                   vasp.X.X.X (root directory)
                                |
          ------------------------------------------------
         |        |        |         |          |         |
        arch     bin     build      src     testsuite   tools

* `root/`. Holds the high-level makefile and several subdirectories.
* `root/src`. Holds the source files of VASP and a low-level makefile.
* `root/arch`. Holds a collection of `makefile.include.*` files.
* `root/build`. The different versions of VASP, i.e., the standard, gamma-only, non-collinear, and CUDA-GPU versions will be built in separate subdirectories of this directory.
* `root/bin`. Here make will store the binaries.
* `root/testsuite`. Holds a suite of correctness tests to check your build.
* `root/tools`. Holds several python scripts related to the (optional) use of HDF5 input/output files.

Installation involves copying one of the makefile.include.xxx files to the root directory as makefile.include, modifying it, and running make. The most straightforward, used in this example, is makefile.include.linux_gnu.

Within the makefile one will have to enter the values of the Fortran library directory, the LIBDIR, for BLAS, LAPACK, SCALAPACK, and FFTW, c.f.,

# LIBDIR     = /opt/gfortran/libs/
LIBDIR     = /usr/local/easybuild-2019/easybuild/software/core/gcccore/10.2.0/lib64/lib
BLAS       = -L$(LIBDIR) -lrefblas
LAPACK     = -L$(LIBDIR) -ltmglib -llapack
BLACS      = 
SCALAPACK  = -L$(LIBDIR) -lscalapack $(BLACS)

LLIBS      = $(SCALAPACK) $(LAPACK) $(BLAS)

In this particular installation, the EasyBuild foss/2020b toolchain is used, which consists of GCC/10.2.0 and OpenMPI 4.0.5. Once that toolchain is loaded one can also load FFTW/3.3.8, scalapack/2.1.0, and openblas/0.3.12. Note that the loaded modules will not be read by the VASP makefile; the paths still have to be hard-coded. Loading them is, however, convenient when checking the PATH to the libraries.

The above code snippet from the makefile is a little deceptive. Something like the following is recommended instead:

# LIBDIR     = /opt/gfortran/libs/
LIBDIR     = /usr/local/easybuild-2019/easybuild/software/core/gcccore/10.2.0/lib64/lib
# BLAS       = -L$(LIBDIR) -lrefblas
# LAPACK     = -L$(LIBDIR) -ltmglib -llapack
BLACS      = 
# SCALAPACK  = -L$(LIBDIR) -lscalapack $(BLACS)

OPENBLAS_ROOT ?= /usr/local/easybuild-2019/easybuild/software/compiler/gcc/10.2.0/openblas/0.3.12/
BLASPACK    = -L$(OPENBLAS_ROOT)/lib -lopenblas

SCALAPACK_ROOT ?= /usr/local/easybuild-2019/easybuild/software/mpi/gcc/10.2.0/openmpi/4.0.5/scalapack/2.1.0
SCALAPACK   = -L$(SCALAPACK_ROOT)/lib -lscalapack

LLIBS      += $(SCALAPACK) $(BLASPACK)

# FFTW       ?= /opt/gfortran/fftw-3.3.6-GCC-5.4.1
FFTW       ?= /usr/local/easybuild-2019/easybuild/software/mpi/gcc/10.2.0/openmpi/4.0.5/fftw/3.3.8
LLIBS      += -L$(FFTW)/lib -lfftw3 -lfftw3_omp
INCS       = -I$(FFTW)/include

There is a further issue. If one is using GCC 10.x or greater there will be an argument mismatch. An error will occur like the following:

Error: Rank mismatch between actual argument at (1) and actual argument at (2) (rank-1 and scalar)

To get around this an additional Fortran flag must be added, resulting in:

#  For gcc-10 and higher require -fallow
FFLAGS     = -w -march=native -fallow-argument-mismatch

Another interesting error is that VASP has only been built for compilers up to GCC 7. The use of GCC 10 and MPI will result in an error, and the reader_base.F file needs to be patched. This has been discussed on the VASP forums, which also have a copy of the patchfile. Modify the headers if necessary and apply the patch, e.g.,

patch < reader.patch 
patching file reader_base.F

Following this, an incremental build of the three core VASP binaries should work.

make std
make gam
make ncl

,

Dave HallUpgrading to AWS Lambda Powertools for Python v2

Learn how easy it is to upgrade AWS Lambda Powertools to version 2.

,

Lev LafayetteThe End of Duolingo?

In late 2015 I started using Duolingo, and have completed sixteen skill trees across ten languages since then. These were mainly, but not exclusively, European languages (including that pan-European auxiliary language, Esperanto). All of these got some practical use on what were then annual trips. In addition to the completed skill trees I made reasonable progress in Dutch, some progress in Catalan (from Spanish) and Czech, and even the trio of Norwegian, Swedish, and Danish (on account of someone saying they were so similar). To say that I am a consistently active user of the application is fair; last year I was rated in the top 0.1% of users in the world.

For several years I have been a paid subscriber to the service at the princely sum of c$100 per year. However, a few days ago, I cancelled my subscription. The reason for this is quite simple: utterly horrendous changes to the user interface. Ignoring the principle of "if it ain't broke, don't fix it", the powers that be at Duolingo have foisted these changes upon a user community that is less than pleased. The YouTube video explaining the changes, at the time of writing, has 116K views; a mere 536 have liked the video and 1.8K have voted it down.

The problems are well-explained by many of the comments on the video: the loss of the self-paced and self-directed learning path in favour of a one-path-only approach reminiscent of the boardgame "Candyland", the substantial loss of screen real-estate, the frustration of trying to reach a lesson of choice, the enforced combination of "lessons" and "stories" (I don't mind this), and the unnecessary animations (I always turned these off in the past). Of the close to 700 comments, almost every single one is negative. It would seem that others too will be ending their subscriptions, not because of the content of the application, but because of the changes made to how people use it.

Certainly, Duolingo has lost people in the past - closing down their forums, ending the language incubators, etc. Those policy changes were annoying, but the comments suggest this is different; this is a visceral hatred, "nerd rage quit" level of disappointment.

This is, of course, not the first time that one has witnessed a mass exodus from an application following radical changes to a user interface. Three years ago Niantic did the same to the game Ingress. When Ubuntu introduced the Unity Desktop, there was a significant switch by users to alternatives. Even when the Luna style was introduced with Windows XP, many users continued to use Windows Classic. One could even cite the controversy when Dungeons & Dragons 4th edition was released compared to the 3/3.5 editions - alternative companies, still operating today, made a small fortune by continuing the old system.

The lesson to learn here is that even when usability experts point out numerous benefits to a new interface (e.g., Unity Desktop), or there is an improvement to content, or the marketing people think that making a system similar to other popular products is a good idea (e.g., D&D 4th edition), a radical change to a user interface is experienced as a hostile attack on the existing user community. The reason for this is deeply part of educational psychology; people engage primarily with content. The user interface is a system to get them to the content. When a user learns a system the process becomes unconscious. Only when the system actively gets in the way of the user accessing the content is there a problem, and when that is the case small and incremental changes generate popularity, not rejection.

This is why radical changes in the interface invariably frustrate existing users. What was an unconscious process has to be relearned. If the design actually is more accessible, the learning curve is short, and the access to content is easier, then the period of frustration will be less if users remain with the product. Niantic managed not to bleed to zero Ingress users by allowing users the opportunity to tone down the resource-intensive and overwhelming graphics, for example. The Unity Desktop environment eventually became acceptable to Ubuntu users, as did the Luna style for MS Windows. Duolingo has made a new interface where the design is less accessible, and whilst with a short learning curve (you can only follow one path), access to content is significantly harder.

Duolingo's CEO, Luis von Ahn, has made it clear that the "simpler" interface will not be changed and new users must use it now and (almost) all users by the end of October. He is betting that Duolingo's new interface can grow the number of users and operating income of the company. This is probably not going to happen; whilst Duolingo's revenues increased in 2022, the operating income and profits are in the territory of a $60 million USD loss. Massive investment has been made in the changes, with the hope to reverse a decline in operating income and monthly users.

This is a crash-through or crash approach when a principle of sunk costs should be applied. Perhaps if the backlash from the user community is strong enough and they vote with their feet (and their wallets), the company will revert back to the more popular environment. In the meantime, and as a little hack, there is one way users can keep the old interface; set up a "school" (left-hand column), give it a name, go to "settings", select "older version" and whatever other options you desire (e.g., "multiple languages" taught), and then go back to Duolingo via the left-hand column. Apparently, this will remain in place until the end of the year. After that, there's a golden opportunity for different language applications.

,

Andrew RuthvenLet's Encrypt with Octavia in OpenStack

I like using Catalyst Cloud to host some of my personal sites. In the past I used CAcert for my TLS certificates, but more recently I've been using Let's Encrypt for my TLS certificates as they're trusted in all browsers. Currently the LoadBalancer as a Service (LBaaS) in Catalyst Cloud doesn't have built-in support for Let's Encrypt. I could use an apache2/nginx proxy and handle the TLS termination there and have that manage the Let's Encrypt lifecycle, but really, I'd rather use LBaaS.

So I thought I'd set about working out how to get Dehydrated (the Let's Encrypt client I've been using) to drive LBaaS (known as Octavia). I figured this would be of interest to other people using Octavia with OpenStack in general, not just Catalyst Cloud.

There's a few things you need to do. These instructions are specific to Debian:

  1. Install and configure Dehydrated to create the certificates for the domain(s) you want.
    • apt install dehydrated
  2. Create the LoadBalancer (use the API, ClickOps, whatever), just forward port 80 for now (see sample Apache configs below).
  3. Save the sample hook.sh below to /etc/dehydrated/hook.sh, you'll probably need to customise it, mine is a bit more complicated!
  4. Insert the UUID of your LoadBalancer in hook.sh where LB_LISTENER is set.
  5. Create /etc/dehydrated/catalystcloud/password as described in hook.sh
  6. Save OpenRC file from the Catalyst Cloud dashboard as /etc/dehydrated/catalystcloud/openrc.sh
  7. Install jq, openssl and the openstack tools, on Debian this is:
    • apt install jq openssl python3-openstackclient python3-barbicanclient python3-octaviaclient
  8. Add TLS termination to your LoadBalancer
  9. You should be able to rename the latest certs /var/lib/dehydrated/certs/$DOMAIN and then run dehydrated -c to have it reissue and then deploy a cert.

As we're using HTTP-01 Challenge Type here, you need to have the LoadBalancer forwarding port 80 to your website to allow for the challenge response. It is good practice to have a redirect to HTTPS, here's an example virtual host for Apache:

<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com

    RewriteEngine On
    RewriteRule ^/.well-known/ - [L]
    RewriteRule ^/(.*)$ https://www.example.com/$1 [R=301,L]

    <Location />
        Require all granted
    </Location>
</VirtualHost>
You will also need this in /etc/apache2/conf-enabled/letsencrypt.conf:
Alias /.well-known/acme-challenge /var/lib/dehydrated/acme-challenges

<Directory /var/lib/dehydrated/acme-challenges>
        Options None
        AllowOverride None

        # Apache 2.x
        <IfModule !mod_authz_core.c>
                Order allow,deny
                Allow from all
        </IfModule>

        # Apache 2.4
        <IfModule mod_authz_core.c>
                Require all granted
        </IfModule>
</Directory>

And that should be all that you need to do. Now, when Dehydrated updates your certificate, it should update your LoadBalancer as well!

Sample hook.sh:
deploy_cert() {
    local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" \
          CHAINFILE="${5}" TIMESTAMP="${6}"
    shift 6

    # File contents should be:
    #   export OS_PASSWORD='your password in here'
    . /etc/dehydrated/catalystcloud/password

    # OpenRC file from the Catalyst Cloud dashboard
    . /etc/dehydrated/catalystcloud/openrc.sh --no-token

    # UUID of the LoadBalancer to be managed
    LB_LISTENER='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

    # Barbican uses P12 files, we need to make one.
    P12=$(readlink -f $KEYFILE \
        | sed -E 's/privkey-([0-9]+)\.pem/barbican-\1.p12/')
    openssl pkcs12 -export -inkey $KEYFILE -in $CERTFILE -certfile \
        $FULLCHAINFILE -passout pass: -out $P12

    # Keep track of existing certs for this domain (hopefully no more than 100)
    EXISTING_URIS=$(openstack secret list --limit 100 \
        -c Name -c 'Secret href' -f json \
        | jq -r ".[]|select(.Name | startswith(\"$DOMAIN\"))|.\"Secret href\"")

    # Upload the new cert
    NOW=$(date +"%s")
    openstack secret store --name $DOMAIN-$TIMESTAMP-$NOW -e base64 \
        -t "application/octet-stream" --payload="$(base64 < $P12)"

    NEW_URI=$(openstack secret list --name $DOMAIN-$TIMESTAMP-$NOW \
        -c 'Secret href' -f value) \
        || unset NEW_URI

    # Change LoadBalancer to use new cert - if the old one was the default,
    # change the default. If the old one was in the SNI list, update the
    # SNI list.
    if [ -n "$EXISTING_URIS" ]; then
        DEFAULT_CONTAINER=$(openstack loadbalancer listener show $LB_LISTENER \
            -c default_tls_container_ref -f value)

        for URI in $EXISTING_URIS; do
            if [ "x$URI" = "x$DEFAULT_CONTAINER" ]; then
                openstack loadbalancer listener set $LB_LISTENER \
                    --default-tls-container-ref $NEW_URI
            fi
        done

        SNI_CONTAINERS=$(openstack loadbalancer listener show $LB_LISTENER \
            -c sni_container_refs -f value | sed "s/'//g" | sed 's/^\[//' \
            | sed 's/\]$//' | sed "s/,//g")

        for URI in $EXISTING_URIS; do
            if echo $SNI_CONTAINERS | grep -q $URI; then
                SNI_CONTAINERS=$(echo $SNI_CONTAINERS | sed "s,$URI,$NEW_URI,")
                openstack loadbalancer listener set $LB_LISTENER \
                    --sni-container-refs $SNI_CONTAINERS
            fi
        done

        # Remove old certs
        for URI in $EXISTING_URIS; do
            openstack secret delete $URI
        done
    fi
}

HANDLER="$1"; shift
#if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|sync_cert|deploy_cert|deploy_ocsp|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then
if [[ "${HANDLER}" =~ ^(deploy_cert)$ ]]; then
    "$HANDLER" "$@"
fi

,

Dave HallTracking Infrastructure with SSM and Terraform

Use AWS SSM Parameter Store to share resource references with other teams.

,

Tim RileyOpen source status update, September 2022

Hello there, friends! This is going to be a short update from me because I’m deep in the throes of Hanami 2.0 release preparation right now. Even still, I didn’t want to let September pass without an update, so let’s take a look.

A story about Hanami::Action memory usage

September started and ended with me looking at the r10k memory usage charts for hanami-controller versus Rails. The results were surprising!

Initial memory usage for Hanami::Action vs Rails

We’d been running some of these checks as part of our 2.0 release prep, the idea being that it’d help us shake out any obvious performance improvements we’d need to make. And it certainly did in this case! Hanami (just like its dry-rb underpinnings) is meant to be the smaller and lighter framework; why were we being outperformed by Rails?

To address this I wrote a simple memory profile script for Hanami::Action inheritance (now checked in here) and started digging.

Here were the initial results:

Total allocated: 184912288 bytes (1360036 objects)
Total retained:  104910880 bytes (780031 objects)

allocated memory by gem
-----------------------------------
  56242240  concurrent-ruby-1.1.10
  53282480  dry-configurable-0.15.0
  34120000  utils-8585be837309
  30547488  other
  10720080  controller/lib

That’s 185MB allocated for 10k subclasses, with concurrent-ruby, dry-configurable and hanami-utils being the top three gems allocating memory.

This led me straight to dry-configurable, and after a couple of weeks of work, I arrived at this PR, separating our storage of setting definitions from their configured values, among other things. This change allows us to copy less data at the moment of class inheritance, and in the case of a dry-configurable-focused memory profile, cut the allocated memory by more than half.

From there, I moved back into hanami-controller and updated it to use dry-configurable for all of its inheritable attributes (some were handled separately), also taking advantage of the support for custom config classes that Piotr added so we could preserve Hanami::Action’s existing configuration API.

This considerably improved our benchmark! Behold:

Total allocated: 32766232 bytes (90004 objects)
Total retained:  32766232 bytes (90004 objects)

allocated memory by gem
-----------------------------------
  21486072  other
  10880120  dry-configurable-0.16.1
    400040  3.1.2/lib

Yes, we brought 185MB allocated memory down to 33MB! This also brought us on par with Rails in the extreme end of the r10k memory usage benchmark:

Updated memory usage for Hanami::Action vs Rails

Here’s a thing though: the way r10k generates actions for its Rails benchmark is to create a single controller class with a method per action. So for the point on the far right of that chart, that’s a single class with 10k methods. Hardly realistic.

So I made a quick tweak to see how things would look if the r10k Rails benchmark generated a class per endpoint like we do with Hanami::Action:

Hanami::Action vs Rails with a separate controller class per action

That’s more like it. This is another extreme, however: more realistically, we’d see Rails apps with somewhere between 5-10 actions per controller class, which would lower its dot a little in that graph. In my opinion this would be a useful thing to upstream into r10k. It’s already a contrived benchmark, yes, but it’d be more useful if it at least mimicked realistic application structures.

Either way, we finished the month much more confident that we’ll be delivering on our promise of Hanami as the lighter, faster framework alternative. A good outcome!

Along the way, however, things did feel bleak at times. I wasn’t confident that I’d be able to make things right, and it didn’t feel great to think we might’ve spent years putting something together that wasn’t going to be able to deliver on some of those core promises. Luckily, I found all the wins we needed, and learnt a few things along the way.

Hanami 2.0, here we come

What else happened in September? Possibly the biggest thing is that we organised ourselves for the runway towards the final Hanami 2.0.0 release.

We want to do everything possible to make sure the release happens this year, so I spent some time organising the remaining tasks on our Trello board into date-based lists, aiming for a release towards the end of November. It looked achievable! The three of us in the core team re-committed ourselves to doing everything we could to complete these tasks in our estimated timeframes.

So far, things have gone very well!

Hanami 2.0.0 release progress on Trello

We’ve all been working tremendously hard, and so far, this has let us keep everything to the schedule. I’ll have a lot to share about our work across October, but that’s all for next month’s update. So in the meantime, I have to put my head back down and get back to shipping a framework. See you all again soon!

Lev LafayetteBorderline Personality Disorder: A Summary

This is a summary of what I have learned over the past three years, after my first direct encounter with what is called Borderline Personality Disorder (BPD). Whilst I do not have BPD (although everyone is a little bit on every mental health continuum), I do endeavour to be a loyal and committed ally of people with BPD (pwBPD). In a very real sense, I wish I knew then what I do now; but at least I have made the effort to learn. I hope that these notes are useful to others. For anyone who wishes to be a sincere ally (a catch-all term that should include partners, family, and friends) of a person with BPD it is absolutely necessary to make the effort to listen to the pwBPD and to educate yourself using scholarly sources. Not making the effort means that you're not an ally, regardless of how close you think you are to the person, and not using scholarly sources will cause more harm and prejudice than good.

This document was initially written at the end of BPD Awareness Week 2022 in Australia and for World Mental Health Day, and will be updated as new information comes to hand. Throughout all the content here it is emphasised that (a) quantifiers are always required (many, most, some, etc) and every BPD person is unique and will not show all characteristics and (b) always see the person. Please note that I am not a psychologist, although I am a student of the subject and have completed a Graduate Diploma of Applied Psychology at the University of Auckland. I encourage people to donate to the Australian BPD Foundation.

Warning: This article mentions suicide, self-harm, and abuse.

Last update: November 03, 2023

Definition and Prevalence

"Borderline Personality Disorder" is a mental health condition marked by a long-term pattern of intense emotional reactions, divergent moods, unstable interpersonal relations, impulsivity, and issues in self-identity and self-direction. The term itself was coined when the condition of behaviours was deemed to be on the borderline of psychosis (difficulty in determining what is real) and neuroticism (disorders that cause constant distress), where a neurotic person in a time of stress would show signs of psychosis. Whilst neither 'psychosis' nor 'neuroticism' are used as formal mental health descriptors, the term "borderline" has stuck. The term was included in DSM-III (1980) where it remains to the current edition. An alternative, and more intuitive term, is "Emotionally unstable personality disorder" (EUPD).

The median prevalence of BPD is c1% (Ellison et al, 2018). In clinical settings, BPD prevalence is around 10-12% in outpatient psychiatric clinics and 20-22% among inpatient clinics. Prevalence is notably higher among incarcerated individuals and notably lower among the elderly. There is a pronounced gender distinction with women diagnosed over men at a 3:1 ratio (Skodol, Bender, 2003). Underdiagnosis and misdiagnosis are unfortunately common, with over 40% of pwBPD having been previously misdiagnosed with other disorders like bipolar disorder or major depressive disorder (Ruggero et al, 2010).

Diagnosis and Symptoms

The DSM-5 (p663, 2013) gives the following diagnostic criteria. Formal diagnosis requires satisfying five or more of the criteria.

1. Frantic efforts to avoid real or imagined abandonment (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5)
2. A pattern of unstable and intense interpersonal relationships characterised by alternating between extremes of idealisation and devaluation
3. Identity disturbance: markedly and persistently unstable self-image or sense of self
4. Impulsivity in at least two areas that are potentially self-damaging (e.g. spending, sex, substance abuse, reckless driving, binge eating) (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5)
5. Recurrent suicidal behaviour, gestures, or threats, or self-mutilating behaviour
6. Affective instability due to a marked reactivity of mood (e.g. intense episodic dysphoria, irritability or anxiety usually lasting a few hours and only rarely more than a few days)
7. Chronic feelings of emptiness
8. Inappropriate, intense anger or difficulty controlling anger (e.g. frequent displays of temper, constant anger, recurrent physical fights)
9. Transient, stress-related paranoid ideation or severe dissociative symptoms

There are similar criteria for the International Classification of Diseases (11th Revision) which describes "the borderline pattern descriptor" as follows:

A pervasive pattern of instability of interpersonal relationships, self-image, and affects, and marked impulsivity, as indicated by many of the following:

1. Frantic efforts to avoid real or imagined abandonment
2. A pattern of unstable and intense interpersonal relationships
3. Identity disturbance, manifested in markedly and persistently unstable self-image or sense of self
4. A tendency to act rashly in states of high negative affect, leading to potentially self-damaging behaviours
5. Recurrent episodes of self-harm
6. Emotional instability due to marked reactivity of mood
7. Chronic feelings of emptiness
8. Inappropriate intense anger or difficulty controlling anger
9. Transient dissociative symptoms or psychotic-like features in situations of high affective arousal

Such rating scales are either/or in many of their assessments. A more nuanced version, which recognises that borderline traits are continuous, has been developed: the Zanarini Rating Scale (Zanarini et al, 2003).

If one thinks that they fit the criteria for BPD it is essential to seek a professional diagnosis. Without professional treatment, one is taking an enormous risk of harm to themselves and others. Likewise, if one thinks that another person fits the criteria, raise the matter very gently and delicately with a motivation of care and with recognition and self-awareness that you are not a professional.

Causes and Neurology

Borderline personality disorder often begins in adolescence or early adulthood. It is characterized by problems with interpersonal relationships (they are intense, alternating between idealization and devaluation), mood (depression and especially inappropriate, intense anger), and unstable self-image. Current estimates of the general population prevalence of borderline personality disorder range up to 5.9 percent, and recent studies of college students suggest that up to 17 percent struggle with significant borderline traits. Borderline personality disorder is associated with psychiatric disability, substance abuse, eating disorders, and medical problems. BPD patients showed significantly higher scores on both primary and secondary global rates of psychopathic behaviour associated with patterns of executive dysfunction (López-Villatoro et al, 2020)

The heritability of BPD is between 37% and 69%, a rather wide range (Gunderson et al, 2011), with indications that it is one of the most heritable disorders (Torgersen et al, 2000). However, even when researchers do note specific genetic linkages, the variation between genetic and environmental factors is balanced at roughly 42%/58% (Distel et al, 2008). The environmental factors are commonly associated with childhood trauma such as neglect and abuse; there is little doubt that a person who has experienced childhood trauma is at an increased risk of developing BPD and PTSD (Cattane et al 2017).

Real-time brain imaging scans have established that pwBPD are physically unable to regulate emotions (Nauert, 2017). Neuroimaging shows that pwBPD typically have a reduction in the brain regions that regulate stress responses, emotions, and decision-making, including the amygdala, the hippocampus, and the orbitofrontal cortex (O'Neill, Frodl, 2012). There is dysregulation of the hypothalamic-pituitary-adrenal axis, responsible for the production of cortisol, released during times of stress; pwBPD have abnormal levels of cortisol production (Cattane, et al 2017), reflected in erosion of the very areas of the brain responsible for stress regulation and decision-making. Amygdala damage is associated with impulsive behaviour, a lessened aversion to risk and loss (Gupta et al 2011), and also with hypervigilance (Terburg et al, 2012). Damage to the amygdala (emotional processing) and the hippocampus (declarative and episodic recollection) also reduces the capacity for memory (Yang, Wang, 2017). These all contribute to BPD being described as the mental illness with the highest level of psychological pain.

Comorbidities

There are a number of comorbidities with BPD. The following are a few words on the most common, including Eating Disorders, Attention Deficit Hyperactivity Disorder, (complex and chronic) Post-Traumatic Stress Disorder, Narcissistic Personality Disorder, and Bipolar Disorders.

Eating disorders and BPD are frequently comorbid, with one extensive study finding 53.8% co-occurrence (Zanarini, et al 2010), compared to 24.6% among patients with other personality disorders; more specifically, 21.7% of patients with BPD met criteria for anorexia nervosa and 24.1% for bulimia nervosa. Like other comorbidities, an association has been drawn between eating disorders, BPD, and the environmental factor of childhood trauma, whether in the form of neglect or abuse (Sansone, Sansone, 2007).

Attention deficit hyperactivity disorder (ADHD) is another frequent comorbidity of BPD, found in 16.1% to 38% of BPD patients in clinical settings (Weiner et al, 2019). This comorbidity raises the question of whether it is appropriate to view either as an entirely early-onset neurological disorder (ADHD) or a later-onset environmental disorder (BPD). As with many other comorbidities, the expression of characteristics is more severe: people with both ADHD and BPD are even more impulsive than those with BPD alone, and have a higher level of emotional dysregulation than those with ADHD alone.

Post-traumatic stress disorder (PTSD), including complex PTSD, and borderline personality disorder commonly co-occur, approximately 25-30% of the time (Pagura et al 2010, Frías and Palma, 2015). Whilst PTSD is characterised by (a) a sense of threat, (b) avoidance, and (c) re-experiencing, complex PTSD has, in addition, (d) interpersonal avoidance and difficult interpersonal relationships, (e) negative self-concept, and (f) affective instability. BPD does not have (a), (b), and (c), but does have (d), (e), (f) and, in addition, (g) anger, (h) chronic emptiness, (i) self-injury behaviours, (j) transient psychosis and dissociation, and (k) fear of abandonment. Individuals with comorbid PTSD-BPD have a poorer quality of life on average, with higher levels of self-harm.

Narcissistic Personality Disorder (NPD) and Borderline Personality Disorder (BPD) are both "cluster B" disorders, characterised by dramatic, intense (at least to observers), and impulsive behaviour. This cluster includes NPD, BPD, anti-social personality disorder, histrionic personality disorder, etc. In addition to this general overlap, the co-occurrence of BPD and NPD has been assessed at anywhere from 13% (Hörz-Sagstetter et al 2018) to 39% (Grant et al, 2008). There is a possibility that it is particularly associated with "vulnerable narcissism", whose traits include hypersensitivity, defensiveness, and low self-esteem. People with both NPD and BPD are less likely to see a remission of BPD, as people with NPD have a lower motivation to seek therapy, and NPD is very difficult to treat (Caligor et al, 2015).

Bipolar Disorders and BPD also co-occur, in approximately 20% of cases (Zimmerman, 2019), and there is an ongoing discussion on whether BPD should be part of the bipolar spectrum, although most recent literature suggests that they are distinct, and the debate has arguably been a sidetrack from the substantive issue. Like other comorbid states, people with "borderpolar" have higher levels of impaired functioning, substance abuse disorders, and self-harm (Patel et al, 2019). Further, people with BPD and a Bipolar Disorder are more likely to have PTSD as well, generating an especially challenging combination that has been insufficiently researched and is likely underdiagnosed.

Prognosis

BPD conditions remain throughout the lifespan, although with variations in symptoms (Biskin, 2015). In some cases BPD symptoms can be observed in childhood; however, there is an absence of evidence regarding the course of development of those who do not meet the full criteria. Adolescence is usually when BPD is recognised, and follow-up studies show remission rates ranging from 40% to 65%, although residual symptoms are not always predictable. Adult BPD longitudinal studies also suggest a gradual decline in symptoms, with periods of remission and recurrence. The decline of symptoms was mainly in the behavioural aspects of impulsivity; self-harm and suicide remained a factor, with one large study indicating a 10% suicide rate after 27 years of follow-up, mainly among patients in their 30s with multiple failed treatments. Even with a decline in symptoms over time, functional recovery - defined as remission along with full-time vocational or educational activity and at least one stable and supportive relationship with a close friend or partner - occurred in only just over 50% of patients (Zanarini et al, 2012).

People with BPD have a life expectancy reduced by between 14 and 27.5 years, with a median value of 20 (Castle, 2019). Most of the early mortality is due to cardiovascular deaths, with major risk factors (e.g., obesity, smoking, poor diet, and lack of exercise) significantly greater among people with BPD. Other notable risk factors include arteriosclerosis, hypertension, hepatic disease, arthritis, gastrointestinal disease, cardiovascular disease, and sexually transmitted diseases. These can be attributed to maladaptive lifestyle choices (smoking, drugs, alcohol, diet) as well as iatrogenic causes (prescription medicines). This is hardly helped by chronic sleep issues (Selby, 2013). The problems are often compounded when a person with BPD has a comorbidity, and also by the stigma attached to BPD, even in the responses of health professionals. Suicide rates run up to 10% of cases in follow-back research, or 3-6% in prospectively followed cohorts, and most suicides occur later in life (mean age of 37, standard deviation of 10) (Paris, 2019).

Treatment

There is no cure for BPD, but recovery and management are possible. There has only been very modest evidence of neurogenesis of the amygdala (Jhaveri, 2018) and mood disorders are known to weaken the prospect of neurogenesis of the hippocampus. In other words, the very experience of having BPD reduces the possibility of recovery from BPD (Toda et al, 2019). There is evidence that deep brain stimulation can help relieve some psychological and behavioral side effects, such as hypervigilance (Langevin, 2012). There are some regularly prescribed medications for pwBPD, typically antipsychotics (Grootens, Verkes, 2005) and mood stabilisers (Lieb et al, 2010).

Psychotherapy, however, has been shown to be particularly beneficial, with Dialectical Behaviour Therapy (DBT) offering the greatest rates of success (Choi-Kain, 2017). It is, of course, not something that necessarily works for everybody with BPD, and other therapies may be more appropriate depending on the individual (e.g., schema therapy, mentalisation-based treatment, transference-focussed psychotherapy). A particular warning is raised for matters of misdiagnosis, especially with common co-morbidities such as PTSD. In many cases, a treatment that is very effective for PTSD can aggravate BPD and vice-versa, e.g., around trauma history, mood swings, and alienation from others (Hammond, 2020).

Unfortunately, people with borderline personality disorder (BPD) leave treatment programs about 70 percent of the time. The personality disorder includes commitment instability, and whilst pwBPD are known to open up to a therapist, they are also prone to "splitting" on the therapist and on therapy in general, and to experiencing their own sense of failure (Dingfelder, 2004).

Community

The BPD Community consists of people with lived experience of Borderline Personality Disorder. This can include people with the illness (diagnosed, undiagnosed, treated, and in remission), their close friends, family, partners, and allies. The following are a few short comments on the lived experiences of both pwBPD and those in their life. This section of the summary is somewhat more informal than what has preceded.

Availability, Understanding, Solutions

A common error allies make when a pwBPD is having an episode of extreme emotions (anger, sadness, anxiety, etc) is to seek to provide rational solutions to what is perceived as a problem. This may be a genuine response motivated by care and love, but it is not the appropriate approach. A person in such a situation is experiencing an emotional disturbance, and the experience must also be dealt with emotionally. People with BPD have at least equal and often heightened levels of emotional empathy, but their emotional cognition and performance are quite poor (Niedtfeld, 2017). This is often referred to as "the Borderline Empathy Paradox": while it is common for pwBPD to detect even subtle emotional states in others, they also typically have serious deficits of cognitive and behavioural empathy (Salgado et al 2020).

The following steps - availability, understanding, and solutions - must be carried out in order with each step depending on the preceding. An alternative name for these is "SET theory" (Kreisman, 2018), standing for "Support, Empathy, Truth", although that will get confusing for people with an interest in discrete mathematics (such as the author).

Availability: One should recall that a pwBPD suffers a chronic fear of abandonment; thus availability must be the first priority. Simply being present can help alleviate the fear. Statements of support and engagement are also valuable: "I am here for you", "I care about you", "I want to help", etc.

Understanding: Once availability is established, the pwBPD is likely to express their feelings. Their ally must display empathy and understanding at this point. The pwBPD may seek to ground their feelings in events or interpretations that might be completely erroneous, conflated, etc. The ally should not seek to correct them or downplay the real or imagined causes, but rather validate the emotions. This requires some attention on the part of the ally to listen to the feelings as well as the words being used. Feelings are ALWAYS valid, even if the reasons are not, and the pwBPD feels their feelings more viscerally than anyone else. The ally should give statements that validate the feelings: "This must be very frustrating for you".

Solutions: Only once the pwBPD has an assurance of an ally's availability, and the empathic rapport of understanding and validating their emotional state is established, should potential solutions be offered. These need to be factual or based on the ally's commitments (and the ally had better follow through): "This is what I can do to help", "If you do x, then y will happen. Perhaps consider z", etc.

Shame, Guilt, and Remorse

It is virtually a given that pwBPD will engage in words or actions that are very hurtful and damaging to those close to them. Unlike people who have limited emotional range or capacity, a pwBPD feels emotions, including shame and guilt, intensely. The coping mechanisms and responses of pwBPD, however, are typically very poor, and they will often hold on to shame and guilt in a manner that damages their self-esteem (through self-loathing), leads to despair (avoiding commitment through fear of hurting people in the future), or even results in various forms of self-harm. For pwBPD it is essential that they learn to turn shame into guilt and guilt into remorse, otherwise the pain will be ongoing.

Shame is more prevalent among pwBPD than guilt (Peters, Geiger 2016). Shame reflects the individual's negative self-concept and self-loathing, and represents the accumulated negative beliefs that the individual holds toward themselves. With pwBPD it is an important contributor to anger, unstable mood, instability of interpersonal relationships, externalisation of blame, and self-harm. A pwBPD can be triggered into shame by many events, including the results of their own impulsivity and other behaviours. It is necessary, however, for a pwBPD to develop guilt about actions rather than accumulating further shame about them. Guilt at least focuses on the event and identifies the need to change behaviour, rather than adding to the negative self-image of shame, which is an internalised and private pain.

The difference between guilt and remorse involves taking ownership of what a person has done that has hurt another. Contact with those who have been impacted, even indirect contact if necessary, is suggested. Remorse means informing the wronged party that one feels one has wronged them, and that one is changing oneself so it doesn't happen again. Further, asking if there's anything that can be done to make amends is a full acceptance of responsibility. There is no onus on the wronged party to give forgiveness or to accept any offer of amends. However, in most cases, people are forgiving when they see a genuine attempt in a person to change.

Mirroring, Splitting, Discarding, Reconnection

Borderline
Feels like I'm going to lose my mind
You just keep on pushing my love
Over the borderline
-- Madonna, Borderline (1983)

The famous Madonna song, for what it's worth, is not actually about BPD, but for those in a close relationship with a pwBPD, the experience of having one's love "pushed", and the sense that the close ally may lose their own mind, is very common. A common descriptor used by both pwBPD and their loved ones is that the experience is like being on an emotional roller-coaster. Many of those who have experienced a relationship with a pwBPD describe a cycle of behaviour that constitutes manipulative abuse (Brüne, 2016). Partners of a pwBPD are significantly more likely to experience intimate partner violence (Jackson et al, 2015). The actions carried out by a pwBPD are unconsciously driven, as they desperately fear abandonment whilst at the same time having high levels of chronic mistrust, a belief that they are unlovable, and a lack of object constancy with their loved ones (Matejko, 2022).

A typical cycle will consist of an initial and often incredible connection between the pwBPD and their loved one, their "favourite person" as described in the culture. The pwBPD will engage in "mirroring", elevating their loved one, affirming their beliefs, dreams, and activities, and will present themselves as exciting and adventurous in the process. The loved one will often describe the experience in highly romantic terms, such as finally meeting their soulmate. This experience, however, does not last; the inevitable flaws of the loved one and the affective instability of the pwBPD will usually mean that the pwBPD will engage in "splitting" against their loved one. Where once they were exalted, they are now treated with equivalent disdain (often with rage and vitriol), and will soon be discarded. During the negative side of the split, the pwBPD, with far greater frequency than others, will establish a new love interest (Michael, 2021). "Splitting" itself is a malformed defense mechanism on the part of the person with BPD (Fertuck et al, 2018) where they convince themselves of the validity of the impending discard.

With the new love interest, the same process is very likely to repeat itself. Often enough, the pwBPD will then re-establish connection with their original partner with a similar level of elevation to the original, and the cycle will repeat, or they will find an entirely new relationship; perhaps unsurprisingly, pwBPD tend to have a larger number of romantic relationships over their lifetime (Navarro-Gómez et al, 2017). Assuming a return to the original interest, it is not uncommon for loved ones of a pwBPD to describe how, over a number of years, their partners have discarded them several times or more. Whilst patience and commitment are admirable in any relationship, they will be insufficient in this situation. Therapy for the pwBPD and couples therapy for the pwBPD and their partner are also required for success. Establishing clear boundaries and agreed consequences for particular actions should also assist.

Lying, Gaslighting, Lovebombing

The perception of reality for a pwBPD is driven by their current emotional state, which is subject to heightened levels of intensity and instability. As a direct consequence, a pwBPD engages in activities that, to an outsider, look like lying or gaslighting but are driven by fearful states rather than malicious deception - it is more an act of desperation than malice. For example, pwBPD often have a weakened level of promissory commitment to the expressions that they provide. Their reality is very much in the "here and now", rather than in the longer term, even when expressed in those terms. In the moment, a pwBPD will quite sincerely and wholeheartedly believe what they are saying, but will either forget the content entirely or have a radical change in affective orientation. Reminding the pwBPD of their prior commitments is important, but even more can be gained with a reminder of the emotional content of the commitment.

Another result of this emotional, rather than factual, perception is that pwBPD present statements that seem like gaslighting. Emotionally healthy people will develop feelings based on facts. However, pwBPD may unconsciously revise the facts to fit their current feelings or invent facts to fill in memory gaps. Tragically, this behaviour also weakens the ability of the pwBPD to develop a coherent autobiographical sense of self or firm memories. This can also be very confronting to an ally, whose immediate reaction will be to correct the factual error; this is a mistake and instead, the same SET principles described previously should be applied; the facts are secondary; empathy and understanding of the feeling must have priority.

Another experience that loved ones of a pwBPD report is "lovebombing": overwhelming displays of affection and attachment. As the Oxford English Dictionary states: "the action or practice of lavishing someone with attention or affection, especially in order to influence or manipulate them". For a pwBPD the lavishing is real, in the moment. They are not consciously trying to manipulate their loved one. They are, in fact, both terrified of losing their loved one (thus the overwhelming display of affection) and, at the same time, ready to engage in a "protective discard" on the assumption that their loved one will leave them, and equally fearful of engulfment. Love-bombing can be seen as a symptom of an insecure attachment style, which matches 90%+ of pwBPD to the point that it is considered almost tautological (Kaurin et al, 2020), and of the disorganised insecure attachment style in particular (Agrawal et al, 2004).

Concluding Remarks

This summary is a compilation of my own notes and research over the past two to three years or so. It really is a personal essay, albeit written with my own tendency to an academic style, to make sense of what is a common and often debilitating mental illness. Despite the various difficulties, emphasis is again placed on the importance of individual variation among pwBPD, the legitimacy of their voice in explaining the lived experience of the condition, and the fact that the person who has BPD is also so much more than the illness that they carry. There is a terrible stigma (Aviram, et al, 2006) attached to pwBPD in popular culture and the media, and prejudices abound, most surprisingly in the professions that should be the most helpful. Of course, pwBPD are just as prone to engaging in consciously hurtful acts toward others as anyone else, but in the main they are incredibly empathic and caring, although often unable to fully control their impulses. Genuine sympathy, understanding, and treatment all will help make life much better for them and us.

References

Agrawal, H. R., Gunderson, J., Holmes, B. M., & Lyons-Ruth, K. (2004). Attachment studies with borderline patients: a review. Harvard review of psychiatry, 12(2), 94–104. https://doi.org/10.1080/10673220490447218

Aviram RB, Brodsky BS, Stanley B (2006). "Borderline personality disorder, stigma, and treatment implications". Harvard Review of Psychiatry. 14 (5): 249–256. doi:10.1080/10673220600975121

Biskin RS. (2015). The Lifetime Course of Borderline Personality Disorder. Can J Psychiatry. 2015 Jul;60(7):303-8. doi: 10.1177/070674371506000702. PMID: 26175388; PMCID: PMC4500179.

Brüne M. Borderline Personality Disorder: Why 'fast and furious'?. (2016) Evol Med Public Health;2016(1):52–66. doi:10.1093/emph/eow002

Caligor E, Levy KN, Yeomans FE. (2015). Narcissistic personality disorder: Diagnostic and clinical challenges. AJP. 2015;172(5):415-422. doi:10.1176/appi.ajp.2014.14060723

Castle, D. J. (2019). The complexities of the borderline patient: how much more complex when considering physical health?. Australasian Psychiatry, 27(6), 552-555.

Cattane N, Rossi R, Lanfredi M, Cattaneo A. (2017). Borderline personality disorder and childhood trauma: exploring the affected biological systems and mechanisms. BMC Psychiatry. 2017;17(1):221. doi:10.1186/s12888-017-1383-2

Choi-Kain LW, Finch EF, Masland SR, Jenkins JA, Unruh BT. (2017). What works in the treatment of borderline personality disorder. Curr Behav Neurosci Rep. 2017;4(1):21-30. doi:10.1007/s40473-017-0103-z

Dingfelder, S. F. (2004, March 1). Personality disorders--Treatment for the 'untreatable'. Monitor on Psychology, 35(3).

Distel et al. Heritability of borderline personality disorder features is similar across three countries. Psychological Medicine, 2008; 38 (9): DOI: 10.1017/S0033291707002024

Ellison WD, Rosenstein LK, Morgan TA, Zimmerman M. (2018). Community and Clinical Epidemiology of Borderline Personality Disorder. Psychiatr Clin North Am. 2018 Dec;41(4):561-573. doi: 10.1016/j.psc.2018.07.008. Epub 2018 Oct 16. PMID: 30447724.

Fertuck EA, Fischer S, Beeney J. (2018). Social cognition and borderline personality disorder: Splitting and trust impairment findings. Psychiatr Clin North Am. 2018;41(4):613-632 doi:10.1016/j.psc.2018.07.003

Frías Á, Palma C. Comorbidity between post-traumatic stress disorder and borderline personality disorder: A review. Psychopathology. 2015;48(1):1-10. doi:10.1159/000363145
https://pubmed.ncbi.nlm.nih.gov/25227722/

Grant BF, Chou SP, Goldstein RB, et al. Prevalence, correlates, disability, and comorbidity of DSM-IV borderline personality disorder: Results from the Wave 2 National Epidemiologic Survey on Alcohol and Related Conditions. J Clin Psychiatry. 2008;69(4):533-545. doi:10.4088/jcp.v69n0404

Grootens KP, Verkes RJ. (2005). "Emerging evidence for the use of atypical antipsychotics in borderline personality disorder". Pharmacopsychiatry. 38 (1): 20–3. doi:10.1055/s-2005-837767.

Gunderson JG, Zanarini MC, Choi-Kain LW, Mitchell KS, Jang KL, Hudson JI (August 2011). "Family Study of Borderline Personality Disorder and Its Sectors of Psychopathology". JAMA: The Journal of the American Medical Association. 68 (7): 753–762. doi:10.1001/archgenpsychiatry.2011.65.

Gupta R, Koscik TR, Bechara A, Tranel D. The amygdala and decision-making. (2011). Neuropsychologia. 2011 Mar;49(4):760-6. doi: 10.1016/j.neuropsychologia.2010.09.029. Epub 2010 Oct 8. PMID: 20920513; PMCID: PMC3032808.

Hammond, C. (rev) (2020). How PTSD can look like Borderline Personality Disorder. Psych Central.

Hörz-Sagstetter S, Diamond D, Clarkin JF, et al. (2018) Clinical characteristics of comorbid narcissistic personality disorder in patients with borderline personality disorder. J Pers Disord. 2018;32(4):562-575. doi:10.1521/pedi_2017_31_306

Jackson, M. A., Sippel, L. M., Mota, N., Whalen, D., & Schumacher, J. A. (2015). Borderline personality disorder and related constructs as risk factors for intimate partner violence perpetration. Aggression and violent behavior, 24, 95-106.

Jhaveri, D. (2018). Neurogenesis in the emotion-processing centre of the brain. Australasian Science, 39(1), 24-26.

Kaurin, A., Beeney, J. E., Stepp, S. D., Scott, L. N., Woods, W. C., Pilkonis, P. A., & Wright, A. G. C. (2020). Attachment and Borderline Personality Disorder: Differential Effects on Situational Socio-Affective Processes. Affective science, 1(3), 117–127. https://doi.org/10.1007/s42761-020-00017-7

Kreisman JJ. (2018). Talking to a Loved One with Borderline Personality Disorder, Communication Skills to Manage Intense Emotions, Set Boundaries, and Reduce Conflict. New Harbinger Publications.

Langevin JP. (2012). The amygdala as a target for behavior surgery. Surg Neurol Int. 2012;3(Suppl 1):S40-6. doi: 10.4103/2152-7806.91609. Epub 2012 Jan 14. PMID: 22826810; PMCID: PMC3400485.

Lieb, Klaus; Völlm, Birgit; Rücker, Gerta; Timmer, Antje; Stoffers, Jutta M. (2010). "Pharmacotherapy for borderline personality disorder: Cochrane systematic review of randomised trials". The British Journal of Psychiatry. 196 (1): 4–12. doi:10.1192/bjp.bp.108.062984

López-Villatoro, J. M., Diaz-Marsá, M., Mellor-Marsá, B., De la Vega, I., & Carrasco, J. L. (2020). Executive dysfunction associated with the primary psychopathic features of borderline personality disorder. Frontiers in Psychiatry, 11, 514905.

Matejko, S. (2022). Understanding Object Constancy in Borderline Personality Disorder and Narcissism, PsychCentral, 2022
https://psychcentral.com/disorders/borderline-personality-disorder/objec...

Michael J, Chennells M, Nolte T, et al (2021). Probing commitment in individuals with borderline personality disorder. J Psychiatric Res. 2021;137:335-341. doi:10.1016/j.jpsychires.2021.02.062

Nauert, R., (2017). Brain Scans Clarify Borderline Personality Disorder. PsychCentral
https://psychcentral.com/news/2017/09/04/brain-scans-clarify-borderline-...

Navarro-Gómez S, Frías Á, Palma C. Romantic relationships of people with borderline personality: A narrative review. (2017) PSP. 2017;50(3):175-187. doi:10.1159/000474950

Niedtfeld I. (2017) Experimental investigation of cognitive and affective empathy in borderline personality disorder: effects of ambiguity in multimodal social information processing. Psychiatry Res 253:58–63. doi: 10.1016/j.psychres.2017.03.037

O'Neill A, Frodl T (October 2012). "Brain structure and function in borderline personality disorder". Brain Structure & Function. 217 (4): 767–782. doi:10.1007/s00429-012-0379-4

Pagura, J., Stein, M. B., Bolton, J. M., Cox, B. J., Grant, B., & Sareen, J. (2010). Comorbidity of borderline personality disorder and posttraumatic stress disorder in the US population. Journal of psychiatric research, 44(16), 1190-1198.

Paris J. (2019). Suicidality in Borderline Personality Disorder. Medicina (Kaunas). 2019 May 28;55(6):223. doi: 10.3390/medicina55060223. PMID: 31142033; PMCID: PMC6632023.

Patel RS, Manikkara G, Chopra A. Bipolar Disorder and Comorbid Borderline Personality Disorder: Patient Characteristics and Outcomes in US Hospitals. (2019) Medicina (Kaunas). 2019 Jan 14;55(1):13. doi: 10.3390/medicina55010013. PMID: 30646620; PMCID: PMC6358827.

Peters JR, Geiger PJ. (2016). Borderline personality disorder and self-conscious affect: Too much shame but not enough guilt? Personal Disord. 2016 Jul;7(3):303-8. doi: 10.1037/per0000176. Epub 2016 Feb 11. PMID: 26866901; PMCID: PMC4929016.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4929016/

Ruggero CJ, Zimmerman M, Chelminski I, Young D. Borderline personality disorder and the misdiagnosis of bipolar disorder. (2010). J Psychiatr Res. 2010;44(6):405–408. doi:10.1016/j.jpsychires.2009.09.011

Salgado, R. M., Pedrosa, R., & Bastos-Leite, A. J. (2020). Dysfunction of Empathy and Related Processes in Borderline Personality Disorder: A Systematic Review. Harvard review of psychiatry, 28(4), 238–254. https://doi.org/10.1097/HRP.0000000000000260

Sansone RA, Sansone LA. Childhood trauma, borderline personality, and eating disorders: a developmental cascade. Eat Disord. 2007;15(4):333-46. doi:10.1080/10640260701454345

Selby E. A. (2013). Chronic sleep disturbances and borderline personality disorder symptoms. Journal of consulting and clinical psychology, 81(5), 941–947. https://doi.org/10.1037/a0033201

Skodol AE, Bender DS. (2003) Why are women diagnosed borderline more than men? Psychiatr Q. 2003 Winter;74(4):349-60. doi: 10.1023/a:1026087410516. PMID: 14686459.

Terburg D, Morgan BE, Montoya ER, Hooge IT, Thornton HB, Hariri AR, Panksepp J, Stein DJ, van Honk J. Hypervigilance for fear after basolateral amygdala damage in humans. (2012). Transl Psychiatry. 2012 May 15;2(5):e115. doi: 10.1038/tp.2012.46. PMID: 22832959; PMCID: PMC3365265.

Toda, T., Parylak, S.L., Linker, S.B. et al. (2019). The role of adult hippocampal neurogenesis in brain health and disease. Mol Psychiatry 24, 67–87. https://doi.org/10.1038/s41380-018-0036-2

Torgersen S, Lygren S, Oien PA, Skre I, Onstad S, Edvardsen J, Tambs K, Kringlen E (2000). "A twin study of personality disorders". Comprehensive Psychiatry. 41 (6): 416–425. doi:10.1053/comp.2000.16560

Weiner L, Perroud N, Weibel S. (2019). Attention Deficit Hyperactivity Disorder And Borderline Personality Disorder In Adults: A Review Of Their Links And Risks. Neuropsychiatr Dis Treat. 2019 Nov 8;15:3115-3129. doi: 10.2147/NDT.S192871. PMID: 31806978; PMCID: PMC6850677.

Yang Y, Wang JZ. From Structure to Behavior in Basolateral Amygdala-Hippocampus Circuits. (2017). Front Neural Circuits. 2017 Oct 31;11:86. doi: 10.3389/fncir.2017.00086. PMID: 29163066; PMCID: PMC5671506.

Zanarini, M. C., Vujanovic, A. A., Parachini, E. A., Boulanger, J. L., Frankenburg, F. R., & Hennen, J. (2003). Zanarini Rating Scale for Borderline Personality Disorder (ZAN-BPD): a continuous measure of DSM-IV borderline psychopathology. Journal of personality disorders, 17(3), 233–242.

Zanarini MC, Reichman CA, Frankenburg FR, Reich DB, Fitzmaurice G. (2010) The course of eating disorders in patients with borderline personality disorder: a 10-year follow-up study. Int J Eat Disord. 2010;43(3):226-32. doi:10.1002/eat.20689

Zanarini MC, Frankenburg FR, Reich DB, et al. (2012) Attainment and stability of sustained symptomatic remission and recovery among patients with borderline personality disorder and Axis II comparison subjects: a 16-year prospective follow-up study. Am J Psychiatry. 2012;169(5):476–483.

Zimmerman, M., (2019). Borderpolar: Patients with Borderline Personality Disorder and Bipolar Disorder. Psychiatric Times, Vol 36, Issue 12.

,

Tim SerongTANSTAAFL

It’s been a little over a year since our Redflow ZCell battery and Victron Energy inverter/charger kit were installed on our existing 5.94kW solar array. Now that we’re past the Southern Hemisphere spring equinox it seems like an opportune time to review the numbers and try to see exactly how the system has performed over its first full year. For background information on what all the pieces are and what they do, see my earlier post, Go With The Flow.

As we look at the figures for the year, it’s worth keeping in mind what we’re using the battery for, and how we’re doing it. Naturally we’re using it to store PV generated electricity for later use when the sun’s not shining. We are also charging the battery from the grid at certain times so it can be drawn down if necessary during peak times, for example I set up a small overnight charge to ensure there was power for the weekday morning peak, when the sun isn’t really happening yet, but grid power is more than twice as expensive. More recently in the winter months, I experimented with keeping the battery full with scheduled charges during most non-peak times. This involved quite a bit more grid charging, but got us through a couple of three hour grid outages without a hitch during some severe weather in August.

I spent some time going through data from the VRM portal for the last year, and correlating that with current bills from Aurora Energy, and then I tried to compare our last year of usage with a battery, to the previous three years of usage without a battery. For reasons that will become apparent later, this turned out to be a massive pain in the ass, so I’m going to start by looking only at what we can see in the VRM portal for the past year.

The VRM portal has three summary views: System Overview, Consumption and Solar. System Overview tells us overall how much total power was pulled from the grid, how much was exported to the grid, how much was produced locally, and how much was consumed by our loads. The Consumption view (which I wish they’d named “Loads”, because I think that would be clearer) gives us the same consumption figure, but tells us how much of that came from the grid, vs. what came from the battery vs. what came from solar. The Solar view tells us how much PV generation went to the grid, how much went to the battery, and how much was used directly. There is some overlap in the figures from these three views, but there are also some interesting discrepancies, notably: the “From Grid” and “To Grid” figures shown under System Overview are higher than what’s shown in the Consumption and Solar views. But, let’s start by looking at the Consumption and Solar views, because those tell us what the system gives us, and what we’re using. I’ll come back after that to the System Overview, which is where things start to get weird and we discover what the system costs to run.

The VRM portal lets you choose any date range you like to get historical figures and bar charts. It also gives you pie charts of the last 24 hours, 7 days, 30 days and 365 days. To make the figures and bar charts match the pie charts, the year we’re analysing starts at 4pm on September 25, 2021 and ends at 4pm on September 25, 2022, because that’s exactly when I took the following screenshots. This means we get a partial September at each end of the bar chart. I’m sorry about that.

Here’s the Consumption view:

Consumption view from VRM portal, 2021-09-25 16:00 – 2022-09-25 16:00

This shows us that in the last 12 months, our loads consumed 10,849kWh of electricity. Of that, 54% (5,848kWh) came from the grid, 23% (2,506kWh) came direct from solar PV and the final 23% (2,494kWh) came from the battery.

From the rough curve of the bar chart we can see that our consumption is lower in the summer months and higher in the winter months. I can’t say for certain, but I have to assume that’s largely due to heating. The low in February was 638kWh (an average of 22.8kWh/day). The high in July was 1,118kWh (average 36kWh/day).

Now let’s look at the Solar view:

Solar view from VRM portal, 2021-09-25 16:00 – 2022-09-25 16:00

In that same time period we generated 5,640kWh with our solar array, of which 44% (2,506kWh) was used directly by our loads, 43% (2,418kWh) went into the battery and 13% (716kWh) was exported to the grid.

Unsurprisingly our generation is significantly higher in summer than in winter. We got 956kWh (average 30kWh/day) in December but only 161kWh (5.3kWh/day) in June. Peak summer figures like that mean we’ll theoretically be able to do without grid power at all during that period once we get a second ZCell (note that we’re still exporting to the grid in December – that’s because we’ve got more generation capacity than storage). The winter figures clearly indicate that there’s no way we can provide anywhere near all our own power at that time of year with our current generation capacity and loads.

Now look closely at the summer months (December, January and February). There should be a nice curve evident there from December to March, but instead January and February form a weird dip. This is because we were without solar generation for three weeks from January 20 – February 11 due to replacing a faulty MPPT. Based on figures from previous years, I suspect we lost 500-600kWh of potential generation in that period.

Another interesting thing is that if we compare “To Battery” on the Solar view (2,418kWh) with “From Battery” on the Consumption view (2,494kWh), we see that our loads consumed 76kWh more from the battery than we actually put into it with solar generation. This discrepancy is due to the fact that in addition to charging the battery from solar, we’ve also been charging it from the grid at certain times, but the amount of power sent to the battery from the grid isn’t broken out explicitly anywhere in the VRM portal.

Now let’s look at the System Overview:

System Overview view from VRM portal, 2021-09-25 16:00 – 2022-09-25 16:00

Here we see the same figures for “Production” (5,640kWh) and “Consumption” (10,849kWh) as were in the Consumption and Solar views, and the bar chart shows the same consumption and generation curves (ignore the blue overlay and line which indicate battery minimum/maximum and average state of charge – that information is largely meaningless at this scale, given we cycle the battery completely every day).

Now look at “To Grid” and “From Grid”. “To Grid” is 754 kWh, i.e. we somehow sent 38kWh more to the grid than came from solar. “From Grid”, at 8,531kWh, is a whopping 2,683kWh more than the 5,848kWh grid power consumed by our loads (i.e. close to half as much again).

So, what’s going on here?

One factor is that we’re charging the battery from the grid at certain times. Initially that was a few hours overnight and a few hours in the afternoon on weekdays, although the afternoon charge is obviously also provided by the solar if the sun is shining. For all of July, August and most of September though I was using a charge schedule to keep the battery full except for peak times and maintenance cycle nights, which meant quite a bit more grid charging overnight than earlier in the year, as well as grid charging most of the day during days with no or minimal sunshine. Grid power sent to the battery isn’t visible in the “From Grid” figure on the Consumption view – that view shows only our loads, i.e. the equipment the system is powering – but it is part of the “From Grid” figure in the System Overview.

Similarly, some of the power we export to the grid is actually exported from the battery, as opposed to being exported from solar generation. That usually only happens during maintenance cycles when our loads aren’t enough to draw the battery down at the desired discharge rate. But again, same thing, that figure is present here on the system overview page as part of “To Grid”, but of course is not part of the “To Grid” figure on the Solar view.

Another factor is that the system itself needs some amount of power to operate. The Victron kit (the MultiPlus II Inverter/Chargers, the Cerbo GX, the MPPT) use some small amount of power themselves. The ZCell battery also requires power to operate its pumps and fans. When the sun is out this power can of course come from solar. When solar power is not available, power to run the system needs to come from some combination of the remaining charge in the battery, and the grid.

On that note, I did a little experiment to see how much power the system uses just to operate. On July 9 (which happened to be a maintenance cycle day), I disabled all scheduled battery charges, and I shut off the DC isolators for the solar PV, so the battery would remain online (pumps and fans running) but empty for all of July 10. The following day I went and checked the figures on the System Overview, which showed we drew 35kWh, but that our consumption was 33kWh. So, together, the battery doing nothing other than running its pumps and fans, plus the Multis doing nothing other than passing grid power through, used 2kWh of power in 24 hours. Over a year, that’s 730kWh. As mentioned above, ordinarily some of that will be sourced from mains and some from solar, but if we look at the total power that came into the system as a whole (5,640kWh from solar + 8,531kWh from the grid = 14,171kWh), 730kWh is just slightly over 5% of that.

The final factor in play is that a certain amount of power is naturally lost due to conversion at various points. The ZCell has a maximum 80% DC-DC stack efficiency, meaning in the absolute best case if you want to get 10kW out of it, you have to put 12.5kW in. In reality you’ll never hit the best case: the lifetime charge and discharge figures the BMS currently shows for our ZCell are 4,423 and 3,336kWh respectively, which is a bit over 75%. The Multis have a maximum efficiency of 96% when doing their invert/charge dance, so if we grid charge the battery, we lose at least 4% on the way in, and at least 4% on the way out as well, going to and from AC/DC. Again, in reality that loss will be higher than 4% each way, because 96% is the maximum efficiency.
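To get a feel for how those losses compound, here’s a rough sketch of the arithmetic in Ruby, using the figures above. It assumes the observed ~75% DC-DC efficiency and the best-case 96% conversion efficiency, so real-world numbers will be a little worse.

# Rough sketch of battery round-trip losses, using the figures quoted above.
# The 96% Multi efficiency is best case, so real losses will be higher.
lifetime_charge    = 4_423.0 # kWh into the ZCell, per the BMS
lifetime_discharge = 3_336.0 # kWh out of the ZCell, per the BMS
dc_dc_efficiency   = lifetime_discharge / lifetime_charge # ~0.754

multi_efficiency = 0.96 # best-case AC<->DC conversion in the MultiPlus IIs

# Grid kWh needed to deliver 1kWh of AC to the loads via the battery:
ac_delivered    = 1.0
dc_from_battery = ac_delivered / multi_efficiency    # invert DC back to AC
dc_into_battery = dc_from_battery / dc_dc_efficiency # battery stack losses
grid_ac_drawn   = dc_into_battery / multi_efficiency # charge AC to DC

puts format("DC-DC efficiency: %.1f%%", dc_dc_efficiency * 100)
puts format("Grid kWh per kWh delivered via battery: %.2f", grid_ac_drawn)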

A bunch of the stuff above just doesn’t apply to the previous system with the ABB inverter and no battery. I also don’t have anything like as much detailed data to go on for the old system, which makes comparing performance with the new system fiendishly difficult. The best comparison I’ve been able to come up with so far involves looking at total power input to the system (power from grid plus solar generation), total consumption by loads (i.e. actual locally usable power), and total power exported.

Prior to the Victron gear and Redflow battery installation, I had grid import and export figures from my Aurora Energy bills, and I had total generation figures from the ABB inverter. From this I can synthesise what are hopefully reasonably accurate load consumption figures by adding grid input to total PV generation minus grid export.
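As a quick sanity check of that synthesis, here’s the sum for the 2018-2019 year in Ruby, using the figures that appear in the first table below:

# Synthesised load consumption = grid input + PV generation - grid export.
# Figures for the 2018-2019 pre-battery year, in kWh.
grid_in  = 9_031
solar_in = 6_682
export   = 3_886

loads = grid_in + solar_in - export
puts loads # => 11827, the "Loads" figure for 2018-2019 in the table below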

I had hoped to do this analysis on a quarterly basis to line up with Aurora bills, because then I would also be able to see how seasonal solar generation and usage went up and down. Unfortunately the billing for 2020 and 2021 was totally screwed up by the COVID-19 pandemic, because there were two quarters during which nobody was coming out to read the electricity meter. The bills for those quarters stated estimated usage (i.e. were wrong, especially given they estimated grid export as zero), with subsequent quarters correcting the figures. I have no way to reliably correlate that mess with my PV generation figures, except on an annual basis. Also, using billing periods from pre-battery years, the closest I can get to the September 25 based 2021-2022 year I’m looking at now is billing periods starting and ending in mid-August. But, that’s close enough. We’ve still got four pretty much back-to-back 12 month periods to look at.

Year        Grid In   Solar In   Total In   Loads    Export
2018-2019     9,031      6,682     15,713   11,827    3,886
2019-2020     9,324      6,468     15,792   12,255    3,537
2020-2021     7,582      6,347     13,929   10,358    3,571
2021-2022     8,531      5,640     14,171   10,849      754

One thing of note here is that in the 2018-2019 and 2019-2020 years, our annual consumption was pretty close to 12MWh, whereas in 2020-2021 and 2021-2022 it was closer to 10.5MWh. If I had to guess, I’d say that ~1.5MWh/year drop is due to a couple of pieces of computer equipment that were previously always on, now mostly running in standby mode except when actually needed. A couple of hundred watts constant draw is a fair whack of power over the course of a year. Another thing to note is the big drop in power exported in 2021-2022, because most of our solar generation is now used locally.

The thing that freaked me out when looking at these figures is that in the battery year, while our loads consumed 491kWh more than in the previous non-battery year, we pulled 949kWh more power in from the grid! This is the opposite of what I had expected to see, especially having previously written:

In the eight months the system has been running we’ve generated 4631kWh of electricity and “only” sent 588kWh to the grid, which means we’ve used 87% of what we generated locally – much better than the pre-battery figure of 45%. I suspect we’ve reduced the amount of power we pull from the grid by about 30% too, but I’ll have to wait until we have a full year’s worth of data to be sure.

– by me at the end of Go With The Flow

When I wrote that, I was looking at August 31, 2021 through April 27, 2022, and comparing that to the August 2020 to May 2021 grid power figures from my old Aurora bills. The mistake I must have made back then was to look at “From Grid” on the Consumption view, rather than “From Grid” on the System Overview. I’ve just done this exercise again, and the total grid draw from our Aurora bills from August 2020 to May 2021 is 4,980kWh. “From Grid” on the Consumption view for August 2021 to May 2022 is 3,575kWh, which is about 30% less, but “From Grid” on the System Overview is 4,754kWh, which is only about 5% less. So our loads pulled about 30% less from the grid than the same time the year before, but our system as a whole didn’t.

Now let’s break our ridiculous September-based year down further into months, to see if we can see more detail. I’ve highlighted some interesting periods in bold.

Month           Grid In   Solar In   Total In   Loads   Export
Sep 21 (part)       153        101        254     213        6
Oct 21              636        629      1,265     988       55
Nov 21              430        747      1,177     866       97
Dec 21              232        956      1,188     767      176
Jan 22              652        450      1,102     822       74
Feb 22              470        430        900     638       83
Mar 22              498        568      1,066     813       64
Apr 22              609        377        986     775       27
May 22              910        238      1,148     953        3
Jun 22            1,114        161      1,275   1,073        2
Jul 22            1,163        223      1,386   1,118       11
Aug 22              910        375      1,285     966       64
Sep 22 (part)       754        385      1,139     857       92
Total             8,531      5,640     14,171  10,849      754

December is great. We generated about 25% more power than our loads use (956/767=1.25), and our grid input was only about 30% of the total of our loads (232/767=0.30).

January and February show the effects of missing three weeks of potential generation. I mean, just look at December through February 2021-2022 versus the previous three summers.

PV Generation December through February, 2018-2022

Month       2018-2019   2019-2020   2020-2021   2021-2022
December          919         882         767         956
January           936         797         818         450
February          699         656         711         430

June and July are terrible. They’re our highest load months, with the lowest solar generation and we pulled 3-4% more power from the grid than our loads actually consumed. I’m going to attribute the latter largely to grid charging the battery.

If I dig a couple of interesting figures out for June and July I see “To Battery” on the Solar view shows 205kWh, and “From Battery” on the Consumption view shows 558kWh. Total consumption in that period was 2,191kWh, with the total “From Grid” reported in System Overview of 2,277kWh. Let’s mess with that a bit.

Bearing in mind the efficiency numbers mentioned earlier, if 205kWh went to the battery from PV, that means no more than 154kWh of what we got out of the battery was from PV generation (remember: real world DC-DC stack efficiency of about 75%). The remaining 404kWh out of the battery is power that went into it from the grid. And that means at least 538kWh in (404/0.75). Note that total from grid for these two months was 86kWh more than the 2,191kWh used by our loads. If I hadn’t been keeping the battery topped up from the grid, I’d’ve saved at least 134kWh of grid power, which would have brought our grid input figure back down below our consumption figure. Note also that this number will actually be higher in reality because I haven’t factored in AC/DC conversion losses from the Multis.
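Here’s that June/July estimate spelled out as a little Ruby sketch; like the paragraph above, it assumes roughly 75% DC-DC efficiency and ignores the additional AC/DC conversion losses in the Multis.

# June/July battery arithmetic, in kWh. Assumes ~75% DC-DC efficiency and
# ignores AC/DC conversion losses, so the real grid figures would be higher.
to_battery_from_pv = 205.0 # "To Battery" on the Solar view
from_battery       = 558.0 # "From Battery" on the Consumption view
dc_dc_efficiency   = 0.75

pv_sourced_output   = (to_battery_from_pv * dc_dc_efficiency).round  # => 154
grid_sourced_output = (from_battery - pv_sourced_output).round       # => 404
grid_input_needed   = (grid_sourced_output / dc_dc_efficiency).floor # => 538 ("at least")
grid_charging_loss  = grid_input_needed - grid_sourced_output        # => 134

puts [pv_sourced_output, grid_sourced_output,
      grid_input_needed, grid_charging_loss].inspect # => [154, 404, 538, 134]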

Now let’s look at some costs. When I started trying to compare the new system to the previous system, I went in thinking to look at it in terms of total power input to the system, total consumption by loads, and total power exported. There’s one piece missing there, so let’s add another couple of columns to an earlier table:

Year        Grid In   Solar In   Total In   Loads    Export   Total Out   what?
2021-2022     8,531      5,640     14,171   10,849      754      11,603   2,568

The total usable output of the system was 11,603kWh for 14,171kWh input. The difference between these two figures – 2,568kWh, or about 18% – went somewhere else. Per my earlier experiment, 5% is power that went to actually operate the system components, including the battery. That means about 13% of the power input to the system over the course of the year must have gone to some combination of charge/discharge and AC/DC conversion (in)efficiencies. We can consider this the energy cost of the system. To have the ability to time-shift expensive peak grid electricity, and to run the house without the grid if the sun is out, or from the battery when it has charge, costs us 18% of the total available energy input.

Grid power has energy costs too, but we’re not usually aware of this because it happens somewhere else. I haven’t yet found Tasmanian figures, but this 2021 Transmission Annual Planning Report PDF from Powerlink in Queensland has historical figures showing that about 7% of generation there went to auxiliaries, i.e. fans and pumps and things running at the power stations. And according to the Australian Energy Market Operator (AEMO), 10% of grid power generated is lost during transmission and distribution. Stanwell (a power company in Queensland) have a neat explainer of all this on their What’s Watt site.

Finally, speaking of expensive grid electricity, let’s look at how much we paid Aurora Energy over the past four years for our power. The bills are broken out into different tariffs, for which you’re charged different amounts per kilowatt hour and then there’s an additional daily supply charge, and also credits for power exported. We can simplify that by just taking the total dollar value of all the power bills and dividing that by the total power drawn from the grid to arrive at an effective cost per kilowatt hour for the entire year. Here it is:

Year        From Grid   Total Bill   Cost/kWh
2018-2019       9,031    $2,278.33      $0.25
2019-2020       9,324    $2,384.79      $0.26
2020-2021       7,582    $1,921.77      $0.25
2021-2022       8,531    $1,731.40      $0.20
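In other words, the effective rate is just the year’s total bills divided by the year’s total grid draw; for 2021-2022, using the figures from the table above, that works out like so:

# Effective cost per kWh for 2021-2022: total bills / total grid draw.
total_bills = 1_731.40 # dollars paid to Aurora over the year
grid_kwh    = 8_531    # kWh drawn from the grid over the year

puts format("$%.2f/kWh", total_bills / grid_kwh) # => $0.20/kWh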

So, the combination of the battery plus the switch from Flat Rate to Peak & Off-Peak billing has reduced the cost of our grid power by about 20%. I call that a win.

Going forwards it will be interesting to see how the next twelve months go, and, in particular, what we can do to reduce our power consumption. A significant portion of our power is used by a bunch of always-on computer equipment. Some of that I need for my work, and some of that provides internet access, file storage and email for us personally. Altogether, according to the UPSes, this kit pulls 200-250 watts continuously, but will pull more than that during the day when it’s being used interactively. If we call it 250W continuous, that’s a minimum of 6kWh/day, which is 2,190kWh/year, or about 20% of the 2021-2022 consumption. Some of that equipment should be replaced with newer, more power efficient kit. Some of it could possibly even be turned off or put into standby mode some of the time.
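For what it’s worth, the arithmetic behind that 20% figure is simply:

# Always-on computer equipment, assuming ~250W continuous draw.
continuous_kw = 0.25
daily_kwh     = continuous_kw * 24    # => 6.0 kWh/day
annual_kwh    = daily_kwh * 365       # => 2190 kWh/year
share         = annual_kwh / 10_849.0 # 2021-2022 total consumption

puts format("%.0f kWh/year, about %.0f%% of consumption", annual_kwh, share * 100)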

We still need to get a heat pump to replace the 2400W panel heater in our bedroom. That should save a huge amount of power in winter. We’re also slowly working our way through the house installing excellent double glazed windows from Elite Double Glazing, which will save on power for heating and cooling year round.

And of course, we still need to get that second ZCell.

,

Tim RileyOpen source status update, August 2022

August’s OSS work landed one of the last big Hanami features, saw another Hanami release out the door, began some thinking about memory usage, and kicked off a fun little personal initiative. Let’s dive in!

Conditional slice loading in Hanami

At the beginning of the month I merged support for conditional slice loading in Hanami. I’d wanted this feature for a long time, and in fact I’d hacked in workarounds to achieve the same more than 2 years ago, so I was very pleased to finally get this done, and for the implementation work to be as smooth as it was.

The feature provides a new config.slices setting on your app class, which you can configure like so:

module MyApp
  class App < Hanami::App
    config.slices = %w[admin]
  end
end

For an app consisting of both Admin and Main slices and for the config above, when the app is booted, only the Admin slice will be loaded:

require "hanami/prepare"

Hanami.app.slices.keys # => [:admin]

Admin::Slice # exists, as expected
Main         # raises NameError, since it was never loaded

As we see from Main above, slices absent from this list will not have their namespace defined, nor their slice class loaded, nor any of their Ruby source files. Within that Ruby process, they effectively do not exist.

Specifying slices to load can be very helpful to improve boot time and minimize memory usage for specific deployed workloads of your app.

Imagine you have a subset of background jobs that run via a dedicated job runner, but whose logic is otherwise unneeded for the rest of your app to function. In this case, you could organize those jobs into their own slice, and then load only that slice for the job runner’s process. This arrangement would see the job runner boot as quickly as possible (no extraneous code to load) as well as save all the memory otherwise needed by all those classes. You could also do the inverse for your main deployed process: specify all slices except this jobs slice, and you gain savings there too.

Organising code into slices to promote operational efficiency like this also gives you the benefit of greater clarity in the separation of responsibilities between those slices: when a single slice of code is loaded and the rest of your app is made to disappear, that will quickly surface any insidious dependencies from that slice to the rest of your code (they’ll be raised as exceptions!). Cleaning these up will help ensure your slices remain useful as abstractions for reasoning about and maintaining your app.

To make it easy to tune the list of slices to load, I also introduced a new HANAMI_SLICES env var that sets this config without you having to write code inside your app class. In this way, you could use them in your Procfile or other similar deployment code:

web: HANAMI_SLICES=main,admin bundle exec puma -C config/puma.rb
feed_worker: HANAMI_SLICES=feed bundle exec rake jobs:work

This effort was also another example of why I’m so happy to be working alongside the Hanami core team. After initially proposing a more complex arrangement including separate lists for including or excluding slices, Luca jumped in and helped me dial this back to the much simpler arrangement of the single list only. For an Hanami release in which we’re going to be introducing so many new ideas, the more we can keep simple around them, the better, and I’m glad to have people who can remind me of this.

Fixed how slice config is applied to component classes

Our action and view integration code relies on their classes detecting when they’re defined inside a slice’s namespace, then applying relevant config from the slice to their own class-level config object. It turned out our code for doing this broke a little when we adjusted our default class hierarchies. Thanks to some of our wonderful early adopters, we picked this up quickly and I fixed it. Now things just work like you expect however you choose to configure your action classes, whether through the app-level config.actions object, or by directly updating config in a base action class.

In doing this work, I became convinced we need an API on dry-configurable to determine whether any config value has been assigned or mutated by the user, since it would help so much in reliably detecting whether or not we should ignore config values at particular levels. For now, we could work around it, but I hope to bring this to dry-configurable at some point in the future.

Released Hanami 2.0.0.beta2

Another month passed, so it was time for another release! With my European colleagues mostly enjoying some breaks over their summer, I hunkered down in chilly Canberra and took care of the 2.0.0.beta2 release. Along with the improvements above, this release also included slice and action generators (hanami generate slice and hanami generate action, thank you Luca!), plus a very handy CLI middlewares inspector (thank you Marc!):

$ hanami middlewares

/    Dry::Monitor::Rack::Middleware (instance)
/    Rack::Session::Cookie

The list of things to do over the beta phase is getting smaller. I don’t expect we’ll need too many more of these releases!

Created memory usage benchmarks for dry-configurable

As the final 2.0 release gets closer, we’ve been doing various performance tests just to make sure the house is in order. One thing we discovered is that Hanami::Action is not as memory efficient as we’d like it to be. One of the biggest opportunities to improve this looked to be in dry-configurable, since that’s what is used to manage the per-class action configuration.

I suspected any effort here would turn out to be involved (and no surprise, it turned out to be involved 😆), so I thought it would be useful as a first step to establish a memory benchmark to revisit over the course of any work. This was also a great way to get my head in this space, which turned out to take over most of my September (but more on that next month).

Quietly relaunched Decaf Sucks

Decaf Sucks was once a thriving little independent online café review community, with its own web site (starting from humble beginnings as a Rails Rumble entry in 2009) and even native iOS app (two iterations, in fact).

I was immensely proud of what Decaf Sucks became, and of the collaboration with Max Wheeler in building it.

Unfortunately, as various internet APIs changed, the site atrophied, eventually became dysfunctional, and we had to take it down. I still have the database, however, and I want to bring it back!

This time around, my plan is to do it as a fully open source Hanami 2 example application. Max is even on board to bring back all the UI goodness. For now, you can follow along with the early steps on GitHub. Right now the app is little more than the basic Hanami skeleton with added database integration and a CI setup (Hello Buildkite!), but I plan to grow it bit by bit. Perhaps I’ll try to have something small that I can share with each of these monthly OSS updates.

After Hanami 2 ships, hopefully this will serve as a useful resource for people wanting to see how it plays out in a real working app. And beyond that, I look forward to it serving once again as a place for me to commemorate my coffee travels!

,

Tim SerongAn S3 Storage Experiment

My team at SUSE is working on a new S3-compatible storage solution for Kubernetes, based on Ceph’s RADOS Gateway (RGW), except without any of the RADOS bits. The idea is that you can deploy our s3gw container on top of Longhorn (which provides the underlying replicated storage), and all this is running in your Kubernetes cluster, along with your applications which thus have convenient access to a local S3-compatible object store.

We’ve done this by adding a new storage backend to RGW. The approach we’ve taken is to use SQLite for metadata, with object data stored as files in a regular filesystem. This works quite neatly in a Kubernetes cluster with Longhorn, because Longhorn can provide a persistent volume (think: an ext4 filesystem), on which s3gw can store its SQLite database and object data files. If you’d like to kick the tyres, check out Giuseppe’s deployment tutorial for the 0.2.0 release, but bear in mind that as I’m writing this we’re all the way up to 0.4.0 so some details may have changed.

While s3gw on Longhorn on Kubernetes remains our primary focus for this project, the fact that this thing only needs a filesystem for backing storage means it can be run on top of just about anything. Given “just about anything” includes an old school two node Pacemaker cluster with DRBD for replicated storage, why not give that a try? I kinda like the idea of a good solid highly available S3-compatible storage solution that you could shove into the bottom of a rack somewhere without too much difficulty.

It’s probably eight years since I last deployed Pacemaker and DRBD, so to refresh my memory I ran with SUSE’s latest Highly Available NFS Storage with DRBD and Pacemaker document, but skipped all the NFS bits. That gives a filesystem mounted on one node, which will fail over to the other node if something breaks. On top of that, we need to run the s3gw container, the s3gw-ui container, an nginx HTTPS reverse proxy to smoosh those two together, and a virtual/floating IP, so the whole lot is accessible to the outside world.

Here’s the interesting parts of my Pacemaker configuration:

# crm configure show
[...]
primitive drbd_s3 ocf:linbit:drbd \
        params drbd_resource=s3 drbdconf="/etc/drbd.conf" \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
primitive fs_s3 Filesystem \
        params device="/dev/drbd0" directory="/data" fstype=ext4 \
        meta target-role=Started \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0 \
        op monitor interval=20s timeout=40s
primitive https nginx \
        op start timeout=40s interval=0 \
        op stop timeout=60s interval=0 \
        op monitor timeout=30s interval=10s \
        op monitor timeout=30s interval=30s \
        op monitor timeout=60s interval=20s
primitive s3-ip IPaddr2 \
        params ip=192.168.100.50 \
        op monitor interval=10 timeout=20
primitive s3gw podman \
        params image="ghcr.io/aquarist-labs/s3gw:latest" run_opts="-p 7480:7480 -v/data:/data" \
        op start interval=0 timeout=90s \
        op stop interval=0 timeout=90s \
        op monitor interval=30s timeout=30s
primitive s3gw-ui podman \
        params image="ghcr.io/aquarist-labs/s3gw-ui:latest" run_opts="-p 8080:8080 -e RGW_SERVICE_URL=https://s3gw.sleha.test" \
        op start interval=0 timeout=90s \
        op stop interval=0 timeout=90s \
        op monitor interval=30s timeout=30s
group g-s3 fs_s3 s3gw s3gw-ui https s3-ip
ms ms-drbd_s3 drbd_s3 \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation col-s3_on_drbd inf: g-s3 ms-drbd_s3:Promoted
order o-drbd_before_fs Mandatory: ms-drbd_s3:promote g-s3:start
[...]

The g-s3 group ensures that the ext4 filesystem (fs_s3), s3gw container (s3gw), s3gw-ui container (s3gw-ui), nginx instance (https) and virtual IP (s3-ip) all run on the same node, and start one after another. The colocation and ordering constraints ensure that g-s3 runs on whichever node is currently the DRBD (ms-drbd_s3) primary.

The important pieces of glue here are:

  • The fs_s3 resource mounts /dev/drbd0 on /data
  • The s3gw resource passes -p 7480:7480 -v/data:/data to podman, so the container can write to /data on the host, and the S3 service is accessible via HTTP on port 7480.
  • The s3gw-ui resource passes -p 8080:8080 -e RGW_SERVICE_URL=https://s3gw.sleha.test to podman, so the UI is accessible via HTTP on port 8080, and it expects the S3 service to be externally available via https://s3gw.sleha.test.
  • nginx is configured to reverse proxy https://s3gw.sleha.test to http://localhost:7480, and https://s3gw-ui.sleha.test to http://localhost:8080.
  • I’ve got an entry in /etc/hosts to point s3gw.sleha.test and s3gw-ui.sleha.test at the virtual IP (192.168.100.50).
  • I’m using self-signed certificates (openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.pem) for s3gw and s3gw-ui, so I had to go visit both https://s3gw.sleha.test and https://s3gw-ui.sleha.test in my browser and accept the SSL certificate before the UI would work.
  • The DRBD config, nginx config and SSL certificates and keys need to be present on all nodes. I used csync2 for this.
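
A minimal csync2 grouping for those shared files looks something like the following sketch (hostnames, key path and certificate locations here are placeholders, not necessarily what I used):

group s3gw
{
    host node-a;
    host node-b;
    key /etc/csync2.key_s3gw;
    include /etc/drbd.conf;
    include /etc/drbd.d;
    include /etc/nginx/nginx.conf;
    include /etc/nginx/cert.pem;
    include /etc/nginx/cert.key;
    include /etc/nginx/cert-ui.pem;
    include /etc/nginx/cert-ui.key;
}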

Here’s my /etc/nginx/nginx.conf. I’m not entirely convinced I’ve got everything 100% right here, but it seems to work (this is, incredibly, my first time doing anything with nginx, and my first time dealing with CORS):

worker_processes  1;

events {
    worker_connections  1024;
    use epoll;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        return       301 https://$host$request_uri; 
    }

    server {
        listen       443 ssl;
        server_name  s3gw.sleha.test;

        access_log /var/log/nginx/s3gw.access.log;

        location / {
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;

            add_header Access-Control-Allow-Origin 'https://s3gw-ui.sleha.test';
            add_header Access-Control-Allow-Methods 'GET,HEAD,PUT,POST,DELETE';
            add_header Access-Control-Allow-Headers '*';
            add_header 'Access-Control-Allow-Credentials' 'true';

            if ($request_method = 'OPTIONS') {
                add_header Access-Control-Allow-Origin 'https://s3gw-ui.sleha.test';
                add_header Access-Control-Allow-Methods 'GET,HEAD,PUT,POST,DELETE';
                add_header Access-Control-Allow-Headers '*';
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Content-Type' 'text/plain charset=UTF-8';
                add_header 'Content-Length' 0;
                return 204;
            }

            proxy_pass          http://localhost:7480;
            proxy_read_timeout  90;
            proxy_redirect      http://localhost:7480 https://s3gw.sleha.test;
        }

        ssl_certificate      cert.pem;
        ssl_certificate_key  cert.key;
        ssl_protocols        TLSv1.2;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
    }

    server {
        listen       443 ssl;
        server_name  s3gw-ui.sleha.test;

        access_log /var/log/nginx/s3gw-ui.access.log;

        location / {
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Forwarded-Proto $scheme;

            proxy_pass          http://localhost:8080;
            proxy_read_timeout  90;

            proxy_redirect      http://localhost:8080 https://s3gw-ui.sleha.test;
        }

        ssl_certificate      cert-ui.pem;
        ssl_certificate_key  cert-ui.key;
        ssl_protocols        TLSv1.2;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
    }
}

A couple of important points about Pacemaker’s support for running containers with podman:

So what was the end result? TL;DR: It pretty much All Just Worked™, which is exactly what you’d hope for when running a new application on a mature HA stack. I can use s3cmd to mess around with the S3 service, and use my web browser to play with the UI. Failover is nice and quick (think: a few seconds) if I kill a node. For the sake of convenience I did this experiment on a couple of VMs using the external/libvirt STONITH plugin, but I don’t expect a real deployment to be hugely different in behaviour. Also, I’d forgotten how good Pacemaker is at highlighting poorly behaved applications – prior to this experiment the s3gw-ui container didn’t stop well, but we weren’t aware of that until I tried a manual failover which took too long and resulted in an unexpected STONITH due to a stop timeout. Moritz has since fixed that.

One thing I tripped over when doing this deployment was the correct values to use for the access_key and secret_key of the default user when talking to the S3 service. These are actually settable for the s3gw container via the RGW_DEFAULT_USER_ACCESS_KEY and RGW_DEFAULT_USER_SECRET_KEY environment variables, but if left unset, they default to “test” and “test” respectively. The interesting bits of my s3cmd.cfg are thus:

access_key = test
secret_key = test
host_base = https://s3gw.sleha.test/
host_bucket = https://s3gw.sleha.test/%(bucket)
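
With that in place, the usual s3cmd operations work via the proxy – for example (bucket name invented; with the self-signed certificates you may also need check_ssl_certificate = False in s3cmd.cfg or --no-check-certificate on the command line):

s3cmd mb s3://test-bucket
s3cmd put some-file.txt s3://test-bucket
s3cmd ls s3://test-bucket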

In retrospect I probably should have added -e RGW_DEFAULT_USER_ACCESS_KEY=tserong -e RGW_DEFAULT_USER_SECRET_KEY=do_not_tell_anyone_this_is_your_password to the run_opts parameter of the s3gw resource in the Pacemaker config.

,

Tim RileyOpen source status update, May–July 2022

Hi there friends, it’s certainly been a while, and a lot has happened across May, June and July: I left my job, took some time off, and started a new job. I also managed to get a good deal of open source work done, so let’s take a look at that!

Released Hanami 2.0.0.alpha8

Since we’d skipped a month in our releases, I helped get Hanami 2.0.0.alpha8 out the door in May. The biggest change here was that we’d finished relocating the action and view integration code into the hanami gem itself, wrapped up in distinct “application” classes, like Hanami::Application::Action. In the end, this particular naming scheme turned out to be somewhat short lived! Read on for more :)

Resurrected work using dry-effects within hanami-view

As part of an effort to make it easy to use our conventional view “helpers” in all parts of our view layer, I resurrected my work from September 2020(!) on using dry-effects within hanami-view. The idea here was to achieve two things:

  1. To ensure we keep only a single context object for the entire view rendering, allowing its state to be preserved and accessed by all view components (i.e. allowing both templates, partials and parts all to access the very same context object)
  2. To enable access to the current template/partial’s #locals from within the context, which might help make our helpers feel a little more streamlined through implicit access to those locals

I got both of those working (here’s my work in progress), but I discovered the performance had worsened due to the cost of using an effect to access the locals. I took a few extra passes at this, reducing the number of effects to one, and memoizing it, leaving us with improved performance over the main branch, but with a slightly different stance: the single effect is for accessing the context object only, so any helpers, instead of expecting access to locals, will only have access to that context. The job from here will be to make sure that the context object we build for Hanami’s views has everything we need for an ergonomic experience working with our helpers. I’m feeling positive about the direction here, but it’ll be a little while before I get back to it. Read on for more on this (again!).

Unified application and slice

The biggest thing I did over this period was to unify Hanami’s Application and Slice. This one took some doing, and I was glad that I had a solid stretch of time to work on it between jobs.

I already wrote about this back in April’s update, noting that I’d settled on the approach of having a composed slice inside the Hanami::Application class to provide slice-like functionality at the application level. This was the approach I continued with, and as I went, I was able to move more and more functionality out of Hanami::Application and into Hanami::Slice, with that composed “application slice” being the thing that preserved the existing application behaviour. At some point, a pattern emerged: the application is a slice, and we could achieve everything we wanted (and more) by turning class Hanami::Application into class Hanami::Application < Hanami::Slice.

Turning the application into a slice sublcass is indeed how I finished the work, and I’m extremely pleased with how it turned out. It’s made slices so much more powerful. Now, each slice can have its own config, its own dedicated settings and routes, can be run on its own as a Rack application, and can even have its own set of child slices.

As a user of Hanami you won’t be required to use all of this per-slice power features, but they’ll be there if or when you want them. This is a great example of progressive disclosure, a principle I follow as much as possible when designing Hanami’s features: a user should be able to work with Hanami in a simple, straightforward way, and then as their needs grow, they can then find additional capabilities waiting to serve them.

Let’s explore this with a concrete example. If you’re building a simple Hanami app, you can start with a single top-level config/settings.rb that defines all of the app’s own settings. This settings object is made available as a "settings" component registration in both the app as well as all its slices. As the app grows and you add a slice or two, you start to add more slice-specific settings to this component. At this point you start to feel a little uncomfortable that settings specific to SliceA are also available inside SliceB and elsewhere. So you wonder, could you go into slices/slice_a/ and drop a dedicated config/settings.rb there? The answer to that is now yes! Create a config/settings.rb inside any slice directory and it will now become a dedicated settings component for that slice alone. This isn’t a detail you had to burden yourself with in order to get started, but it was ready for you when you needed it.
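
As a sketch (the setting name here is invented, and the exact shape may still shift before the final release), that slice-local settings file can be as simple as:

# slices/slice_a/config/settings.rb
module SliceA
  class Settings < Hanami::Settings
    # Registered as the "settings" component within SliceA only
    setting :image_service_url
  end
end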

Another big benefit of this code reorganisation is that the particular responsibilities of Hanami::Application are much clearer: its job is to provide the single entrypoint to the app and coordinate the overall boot process; everything else comes as part of it also being a slice. This distinction is made clear through the number of public methods that exist across the two classes: Application now has only 2 distinct public methods, whereas Slice currently brings 27.

There’s plenty more detail over in the pull request: go check it out!

The work here also led to changes across the ecosystem:

This is one of the reasons I’m excited about Hanami’s use of the dry-rb gems: it’s pushing them in directions no one has had to take them before. The result is not only the streamlined experience we want for Hanami, but also vastly more powerful underpinnings.

Devised a slimmed down core app structure

While I had my head down working on internal changes like the above, Luca had been thinking about Hanami 2 adoption and the first run user experience. As we had opted for a slices-only approach for the duration of our alpha releases, it meant a fairly bulky overall app structure: every slice came with multiple deeply nested files. This might be overwhelming to new users, as well as feeling like overkill for apps that are intended to start small and stay small.

To this end, we agreed upon a stripped back starter structure. Here’s how it looks at its core (ignoring tests and other general Ruby files):

├── app/
│   ├── action.rb
│   └── actions/
├── config/
│   ├── app.rb
│   ├── routes.rb
│   └── settings.rb
├── config.ru
└── lib/
    ├── my_app/
    │   └── types.rb
    └── tasks/

That’s it! Much more lightweight. This approach takes advantage of the Hanami app itself becoming a fully-featured slice, with app/ now as its source directory.

In fact, I took this opportunity to unify the code loading rules for both the app and slices, which makes for a much more intuitive experience. You can now drop any ruby source file into app/ or a slices/[slice_name]/ slice dir and it will be loaded in the same way: starting at the root of each directory, classes defined therein are expected to inhabit the namespace that the app or slice represents, so app/some_class.rb would be MyApp::SomeClass and slices/my_slice/some_class.rb would be MySlice::SomeClass. Hat tip to me of September 2021 for implementing the dry-system namespaces feature that enabled this! 😜
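
Concretely, those two files would look like this on disk:

# app/some_class.rb
module MyApp
  class SomeClass
  end
end

# slices/my_slice/some_class.rb
module MySlice
  class SomeClass
  end
end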

(Yet another little dry-system tweak came out of preparing this too, with Component#file_name now exposed for auto-registration rules).

This new initial structure for starter Hanami 2.0 apps is another example of progressive disclosure in our design. You can start with a simple all-in-one approach, everything inside an app/ directory, and then as various distinct concerns present themselves, you can extract them into dedicated slices as required.

Along with this, some of our names have become shorter! Yes, “application” has become “app” (and Hanami::Application has become Hanami::App, and so on). These shorter names are easier to type, as well as more reflective of the words we tend to use when verbally describing these structures.

We also tweaked our actions and views integration code so that it is automatically available when you inherit directly from Hanami::Action, so it will no longer be necessary to have the verbose Hanami::Application::Action as the superclass for the app’s actions. We also ditched that namespace for both routes and settings too, so now you can just inherit from Hanami::Settings and the like.

Devised a slimmed down release strategy

Any of you following my updates would know by now that the Hanami 2.0 release has been a long time coming. We have ambitious goals, we’re doing our best, and everything is slowly coming together. But as hard as it might’ve been for folks who’re waiting, it’s been doubly so for us, feeling the weight of both the work along with everyone’s expectations.

So to make sure we can focus our efforts and get something out the door sooner rather than later, we decided to stagger our 2.0 release. We’ll start off with an initial 2.0 release centred around hanami, hanami-cli, hanami-controller, and hanami-router (enough to write some very useful API applications, for example), then follow up with a “full stack” 2.1 release including database persistence, views, helpers, assets and everything else.

I’m already feeling empowered by this strategy: 2.0 feels actually achievable now! And all of the other release-related work like updated docs and a migration guide will become correspondingly easier too.

Released Hanami 2.0.0.beta1!

With greater release clarity as well as all the above improvements under our belt, it was time to usher in a new phase of Hanami 2.0 development, so we released 2.0.0.beta1 in July! This new version suffix represents just how close we feel we are to our final vision for 2.0. This is an exciting moment!

And a bunch more

This update is getting rather long, so let me list a bunch of other Hanami improvements I managed to get done:

Outside my Hanami development, a new job and a new computer meant I also took the chance to reboot my dotfiles, which are now powered by chezmoi. I can’t speak highly enough of chezmoi: it’s an extremely powerful tool and I’m loving the flexibility it affords!

That’s it from me for now. I’ll come back to you all in another month!

,

Ian BrownHigh Velocity Migrations with GCVE and HCX

What is HCX? VMware HCX is an application mobility platform designed for simplifying application migration, workload rebalancing and business continuity across datacenters and clouds. VMware HCX was formerly known as Hybrid Cloud Extension and NSX Hybrid Connect.

GCVE HCX: GCVE deploys the Enterprise version of HCX as part of the cost of the solution. HCX Enterprise has the following benefits:

  • Hybrid Interconnect
  • WAN Optimisation
  • Bulk Migration, Live Migration and HCX Replication Assisted vMotion
  • Cloud to cloud migration
  • Disaster Protection
  • KVM & Hyper-V to vSphere migrations
  • Traffic Engineering
  • Mobility Groups
  • Mobility Optimised Networking
  • Changeover scheduling

Definitions: Cold Migration

,

Ian BrownInfrastructure as Code with Terraform in GCVE

We have seen a lot of Google Cloud VMware Engine over the last few months, and for the entire time we have used click-ops to provision new infrastructure, networks and VMs. Now we are going to the next level: we will be using Terraform to manage our infrastructure as code so that it is version controlled and predictable.

Installing Terraform: The first part of getting this working is installing Terraform on your local machine.

,

Tim SerongHack Week 21: Keeping the Battery Full

As described in some detail in my last post, we have a single 10kWh Redflow ZCell zinc bromine flow battery hooked up to our solar PV via Victron inverter/chargers. This gives us the ability to:

  • Store almost all the excess energy we generate locally for later use.
  • When the sun isn’t shining, grid charge the battery at off-peak times then draw it down at peak times to save on our electricity bill (peak grid power is slightly more than twice as expensive as off-peak grid power).
  • Opportunistically survive grid outages, provided they don’t happen at the wrong time (i.e. when the sun is down and the battery is at 0% state of charge).

By their nature, ZCell flow batteries need to undergo a maintenance cycle at least every three days, where they are discharged completely for a few hours. That’s why the last point above reads “opportunistically survive grid outages”. With a single ZCell, we can’t use the “minimum state of charge” feature of the Victron kit to always keep some charge in the battery in case of outages, because doing so conflicts with the ZCell maintenance cycles. Once we eventually get a second battery, this problem will go away because the maintenance cycles automatically interleave. In the meantime though, as my project for Hack Week 21, I decided to see if I could somehow automate the Victron scheduled charge configuration based on the ZCell maintenance cycle timing, to always keep the battery as full as possible for as long as possible.

There are three goals somewhat in tension with each other here:

  • Keep the battery full, except during maintenance cycles.
  • Don’t let the battery get too full immediately before a maintenance cycle, lest the discharge take too long and maintenance still be active the following morning.
  • Don’t schedule charges during peak electricity times (we still want to draw the battery down then, to avoid using the expensive gold plated electrons the power company sends down the wire between 07:00-10:00 and 16:00-21:00).

Here’s the solution I came up with:

  • On non-maintenance cycle days, set two no-limit scheduled charges, one from 10:00 for 6 hours, the other from 21:00 for 10 hours. That means the battery will be charged from the grid and/or the sun continuously, except for peak electricity times, when it will be drawn down. Our loads aren’t high enough to completely deplete the battery during peak times, so there will always be some juice in case of a grid outage on non-maintenance cycle days.
  • On maintenance cycle days, set a 50% limit scheduled charge from 13:00 for 3 hours, so the battery won’t be too full before that evening’s maintenance cycle, which kicks in at sunset. The day after a maintenance cycle, set a no limit scheduled charge from 03:00 for 4 hours. At our site, maintenance has almost always finished before 03:00, so there’s no conflict here, and we still have time to get some charge into the battery to handle the next morning’s peak.

Now, how to automate that?

The ZCell Battery Management System (BMS) has a REST API which we can query to find out useful information about the battery. Unfortunately it won’t actually tell us for certain whether maintenance will be run on any given day, but we can get the maintenance time limit, and subtract from that the amount of time that’s passed since the last maintenance cycle. If the resultant figure is less than one day, we know that maintenance will happen today. It is possible for maintenance to happen at other times, e.g. I can force maintenance manually, and also it can happen more often than every three days if you mess with the allowed days setting in the BMS, so this solution arguably isn’t perfect, but I think it’s good enough under the circumstances, at least at our site.
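
That check boils down to something like this (the endpoint path and field names below are placeholders rather than the real BMS API – the point is just the arithmetic):

import json
import urllib.request
from datetime import timedelta

BMS_STATUS_URL = "http://zcell-bms.local/rest/status"  # placeholder URL, not the real endpoint

def maintenance_today():
    # Placeholder field names; the real BMS reports equivalent values.
    with urllib.request.urlopen(BMS_STATUS_URL, timeout=10) as resp:
        status = json.load(resp)
    remaining = timedelta(seconds=status["maintenance_time_limit"]) \
        - timedelta(seconds=status["seconds_since_last_maintenance"])
    # If less than a day remains before forced maintenance, it will run tonight.
    return remaining < timedelta(days=1)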

The Victron Cerbo GX (the little box that controls everything) runs Linux, and you can easily get root on it, so it’s possible to write scripts that run locally there. Here’s what I ended up with:

One important point about installing things on the Cerbo GX, is that the root partition is overwritten during firmware updates, but there’s a separate data partition which is preserved. The root user’s home directory is symlinked to /data/home/root, so my script lives at /data/home/root/sched.py to ensure it remains present. Then we need to get it into /etc/crontab, which doesn’t survive firmware updates. This is done by adding a /data/rc.local script which the Cerbo GX runs on boot:

After a few days of testing and observation, I can confirm that it all works perfectly! At least, at our site, right now, with our current loads and daylight hours. The whole thing will want revisiting (or probably just turning off) as we get into summer, when we’ll be able to rely on significantly more sunlight to keep the battery full than we get now. I may well just go back to a single 03:00-for-four-hours grid charge then, once the days are nice and long. See how we go…

,

Tim RileyJoining Buildkite, and sticking with Ruby

Last week I finished up at Culture Amp, and I’m excited to announce that I’ll be joining Buildkite as an engineer!

My time at Culture Amp was special. It was my first role after a decade of running Icelab with Max and Michael. Culture Amp hired everyone at Icelab after we decided to close the business, providing both a smooth transition and new opportunities to a singular group. I built a great working relationship with my manager, I was trusted to do big things, and I relished the chance to work with and learn from a large group of engineers. I’m deeply thankful for all of this.

Towards the end, I was serving as Culture Amp’s Director of Back End Engineering, and moving into engineering management. However, as any astute reader of this blog might attest, I am deeply motivated by hands on programming work, and all the learning and collaboration opportunities that go with it. I realised it was not the time to draw that chapter to a close (it might never!), and through that consideration I connected with Buildkite.

I’m excited to join Buildkite for many reasons! It’s a great Australian company with heart and personality. It brims with people I’ve long dreamt of working with. Developer tooling is an area close to my heart. And they’re growing a (majestic) Ruby app at the core of their tech. I can’t wait to dig in.

For me, this is also an intentional decision to stick with Ruby. The work I’m doing in Ruby OSS right now might be one of the biggest “dents in the universe” I get to make. I want to see this effort through, to complete our vision for Hanami 2.0, then learn from how it’s adopted by our community.

I have some time off between jobs, which I’ll use to give our Hanami work a real boost: I’ll be committing nearly 6 weeks of full-time work to Hanami! Based on previous experience, this should see me get through what otherwise might have taken 6 months of part-time effort. I’m hoping this will get us significantly closer to 2.0. I’ll likely start another tweet thread of my efforts, so find me on Twitter if you’d like to follow along!

,

Ian BrownGCVE Backup and Disaster Recovery

Picking up where we left off last month, let’s dive into disaster recovery and how to use Site Recovery Manager and Google Backup & Protect to DR into and within the cloud with GCVE. But before we do, a quick advertisement: if you are in Brisbane, Australia, I suggest coming to the awesome Google Infrastructure Group (GIG), which focuses on GCVE, where on 04 July 2022 I will be presenting on Terraform in GCVE.

,

Tim RileyOpen source status update, April 2022

April was a pretty decent month for my OSS work! Got some things wrapped up, kept a few things moving, and opened up a promising thing for investigation. What are these things, you say? Let’s take a look!

Finished centralisation of Hanami action and view integrations

I wrote about the need to centralise these integrations last month, and in April, I finally got the work done!

This was a relief to get out. As a task, while necessary, it felt like drudge work – I’d been on it since early March, after all! I was also conscious that this was blocking Luca’s work on helpers all the while.

My prolonged work on this (along with Easter holidays and other such Real Life matters) contributed to us missing April’s Hanami release. The good thing is that it’s done now, and I’m hopeful we can have this released via another Hanami alpha sometime very soon.

In terms of the change to Hanami apps, the biggest change from this is that your apps should use a new superclass for actions and views:

require "hanami/application/action"

module Main
  module Action
    # Used to inherit from Hanami::Action
    class Base < Hanami::Application::Action
    end
  end
end

Aside from the benefit to us as maintainers of having this integration code kept together, this distinct superclass should also help make it clearer where to look when learning about how actions and views work within full Hanami apps.

Enabled proper access to full locals in view templates

I wound up doing a little more work in actions and views this month. The first was a quickie to unblock some more of Luca’s helpers work: making access to the locals hash within templates work like we always expected it would.

This turned out to be a fun one. For a bit of background, the context for every template rendering in hanami-view (i.e. what self is for any given template) is an Hanami::View::Scope instance. This instance contains the template’s locals, makes the full locals hash available as #locals (and #_locals, for various reasons), and uses #method_missing to also make each local directly available via its own name.
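
To illustrate the mechanism (this is a deliberately simplified stand-in, not the real Hanami::View::Scope implementation):

# A toy scope: holds the locals hash and exposes each local as a method.
class ToyScope
  attr_reader :locals
  alias_method :_locals, :locals

  def initialize(locals)
    @locals = locals
  end

  private

  def method_missing(name, *args, &block)
    locals.key?(name) ? locals[name] : super
  end

  def respond_to_missing?(name, include_private = false)
    locals.key?(name) || super
  end
end

# ToyScope.new(title: "Hello").title # => "Hello"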

Luca found, however, that calling locals within the template didn’t work at all! After I took a look, it seemed that while locals didn’t work, self.locals or just plain _locals would work. Strange!

Turns out, this all came down to implementation details in Tilt, which we use as our low-level template renderer. The way Tilt works is that it will compile a template down into a single Ruby method that receives a locals param:

def compile_template_method(local_keys, scope_class=nil)
  source, offset = precompiled(local_keys)
  local_code = local_extraction(local_keys)

  # <...snip...>

  method_source << <<-RUBY
    TOPOBJECT.class_eval do
      def #{method_name}(locals)
        #{local_code}
  RUBY

Because of this, locals is actually a local variable in the context of that method execution, which will override any other methods also available on the scope object that Tilt turns into self for the rendering.

Here is how we were originally rendering with Tilt:

tilt(path).render(scope, &block)

My first instinct was simply to pass our locals hash as the (optional) second argument to Tilt’s #render:

tilt(path).render(scope, scope._locals)

But even that didn’t work! Because in generating that local_code above, Tilt will actually take the locals and explode it out into individual variable assignments:

def local_extraction(local_keys)
  local_keys.map do |k|
    if k.to_s =~ /\A[a-z_][a-zA-Z_0-9]*\z/
      "#{k} = locals[#{k.inspect}]"
    else
      raise "invalid locals key: #{k.inspect} (keys must be variable names)"
    end
  end.join("\n")
end

But we don’t need this at all, since hanami-view’s scope object is already making those locals available individually, and we want to ensure access to those locals continues to run through the scope object.

So the ultimate fix is to make locals of our locals. Yo dawg:

tilt(path).render(scope, {locals: scope._locals}, &block)

This gives us our desired access to the locals hash in templates (because that locals key is itself turned into a solitary local variable), while preserving the rest of our existing scope-based functionality.

It also shows me that I probably should’ve written an integration test back when I introduced access to a scope’s locals back in January 2019. 😬

Either way, I’m excited this came up and I could fix it, because it’s an encouraging sign of just how much of this view system we’ll be able to put to use in creating a streamlined and powerful view layer for our future Hanami users!

Merged a fix to stop unwanted view rendering of halted requests

Thanks to our extensive use of Hanami at Culture Amp, my friend and colleague Andrew discovered and fixed a bug with our automatic rendering of views within actions, which I was happy to merge in.

Shipped some long awaited dry-configurable features

After keeping poor ojab waiting way too long, I also merged a couple of nice enhancements he made to dry-configurable:

I then released these as dry-configurable 0.15.0.

Started work on unifying Hanami slices and application

Last but definitely not least, I started work on one of the last big efforts we need in place before 2.0: making Hanami slices act as much as possible like complete, miniature Hanami applications. I’m going to talk about this a lot more in future posts, but for now, I can point you to a few PRs:

  • Introducing Hanami::SliceName (a preliminary, minor refactoring to fix some slice and application name determination responsibilities that had somehow found their way into our configuration class).
  • A first, abandoned attempt at combining slices and applications, using a mixin for shared behaviour.
  • A much more promising attempt using a composed slice object within the application class, which is currently the base of my further work in this area.

Apart from opening up some really interesting possibilities around making slices fully a portable, mountable abstraction (imagine bringing in slices from gems!), even for our shorter-term needs, this work looks valuable, since I think it should provide a pathway for having application-wide settings kept on the application class, while still allowing per-slice customisation of those settings in whichever slices require them.

The overall slice structure is also something that’s barely changed since I put it in place way back in late 2019. Now it’s going to get the spit and polish it deserves. Hopefully I’ll be able to share more progress on this next month :) See you then!

,

Ian BrownGCVE Advanced Auto-Scaling

Let’s pick up where we left off from last month’s article and start setting up some of the features of GCVE, starting with Advanced Autoscaling. What is Advanced Auto-Scaling? Advanced Autoscaling automatically expands or shrinks a private cloud based on CPU, memory and storage utilisation metrics. GCVE monitors the cluster based on the metrics defined in the autoscale policy and decides to add or remove nodes automatically. Remember: GCVE is physical Dell Poweredge servers, not a container/VM running in Docker or on a hypervisor like VMware.

,

BlueHackersFree psychologist service at conferences: April 2022 update

We’ve done this a number of times over the last decade, from OSDC to LCA. The idea is to provide a free psychologist or counsellor at an in-person conference. Attendees can do an anonymous booking by taking a stickynote (with the timeslot) from a signup sheet, and thus get a free appointment.

Many people find it difficult taking the first (very important) step towards getting professional help, and we’ve received good feedback that this approach indeed assists.

So far we’ve always focused on open source conferences. Now we’re moving into information security! First BrisSEC 2022 (Friday 29 April at the Hilton in Brisbane, QLD) and then AusCERT 2022 (10-13 May at the Star Hotel, Gold Coast QLD). The awesome and geek friendly Dr Carla Rogers will be at both events.

How does this get funded? Well, we’ve crowdfunded some, nudged sponsors, most mostly it gets picked up by the conference organisers (aka indirectly by the sponsors, mostly).

If you’re a conference organiser, or would like a particular upcoming conference to offer this service, do drop us a line and we’re happy to chase it up for you and help the organisers to make it happen. We know how to run that now.

In-person is best. But for virtual conferences, sure contact us as well.

The post Free psychologist service at conferences: April 2022 update first appeared on BlueHackers.org.

,

FLOSS Down Under - online free software meetingsApril Hack Day Report

The hack day didn’t go as well as I hoped, but didn’t go too badly. There was smaller attendance than hoped and the discussion was mostly about things other than FLOSS. But everyone who attended had fun and learned interesting things so generally I think it counts as a success. There was discussion on topics including military hardware, viruses (particularly Covid), rocketry, and literature. During the discussion one error in a Wikipedia page was discussed and hopefully we can get that fixed.

I think that everyone who attended will be interested in more such meetings. Overall I think this is a reasonable start to the Hack Day meetings, when I previously ran such meetings they often ended up being more social events than serious hacking events and that’s OK too.

One conclusion that we came to regarding meetings is that they should always be well announced in email and that the iCal file isn’t useful for everyone. Discussion continues on the best methods of announcing meetings but I anticipate that better email will get more attendance.

,

Ian BrownIntroduction to GCVE

What is GCVE? Google Cloud VMware Engine, or GCVE, is a fully managed VMware hypervisor and associated management and networking components, (vSphere, NSX-T, vSAN and HCX) built on top of Google’s highly performant and scalable infrastructure with fully redundant and dedicated 100Gbps networking that provides 99.99% availability. The solution is integrated into Google Cloud Platform, so businesses benefit from having full access to GCP services, native VPC networking, Cloud VPN or Interconnect as well as all the normal security features you expect from GCP.

,

FLOSS Down Under - online free software meetingsMarch 2022 Meeting

Meeting Report

The March 2022 meeting went reasonably well. Everyone seemed to have fun and learn useful things about computers. After 2 hours my Internet connection dropped out which stopped the people who were using VMs from doing the tutorial. Fortunately most people seemed ready for a break so we ended the meeting. The early and abrupt ending of the meeting was a disappointment but it wasn’t too bad, the meeting would probably only have gone for another half hour otherwise.

The BigBlueButton system was shown to be effective for training when one person got confused with the Debian package configuration options for Postfix and they were able to share the window with everyone else to get advice. I was also confused by that stage.

Future Meetings

The main feature of the meeting was training in setting up a mailserver with Postfix, here are the lecture notes for it [1]. The consensus at the end of the meeting was that people wanted more of that for the April meeting. So for the April meeting I will add to the Postfix Training to include SpamAssassin, SPF, DKIM, and DMARC. For the start of the next meeting instead of providing bare Debian installations for the VMs I’ll provide a basic Postfix/Dovecot setup so people can get straight into SpamAssassin etc.

For the May meeting training on SE Linux was requested.

Social Media

Towards the end of the meeting we discussed Matrix and federated social media. LUV has a Matrix server and I can give accounts to anyone who’s involved in FOSS in the Australia and New Zealand area. For Mastodon the NZOSS Mastodon server [2] seems like a good option. I have an account there to try Mastodon, my Mastodon address is @etbe@mastodon.nzoss.nz .

We are going to make Matrix a primary communication method for the Flounder group, the room is #flounder:luv.asn.au . My Matrix address is @etbe:luv.asn.au .

,

FLOSS Down Under - online free software meetingsMailing List

We now have a mailing list; see https://lists.linux.org.au/mailman/listinfo/flounder for information. The address to post to the list is flounder@lists.linux.org.au.

We also have a new URL for the blog and events. See the right sidebar for the link to the iCal file which can be connected to Google Calendar and most online calendaring systems.

,

FLOSS Down Under - online free software meetingsFirst Meeting Success

We just had the first Flounder meeting, which went well. Had some interesting discussion of storage technology, and I learnt a few new things. Some people did the ZFS training and BTRFS training and we had lots of interesting discussion.

Andrew Pam gave a summary of new things in Linux and talked about the sites lwn.net, gamingonlinux.com, and cnx-software.com that he uses to find Linux news. One thing he talked about is the latest developments with SteamDeck which is driving Linux support in Steam games. The site protondb.com tracks Linux support in Steam games.

We had some discussion of BPF, for an introduction to that technology see the BPF lecture from LCA 2022.

Next Meeting

The next meeting (Saturday 5th of March 1PM Melbourne time) will focus on running your own mail server which is always of interest to people who are interested in system administration and which is probably of more interest than usual because of Google forcing companies with “a legacy G Suite subscription” to transition to a more expensive “Business family” offering.

,

Stewart SmithAdventures in the Apple Partition Map (Part 2 of the continuing adventures with the Apple Power Macintosh 7200/120 PC Compatible)

I “recently” wrote about obtaining a new (to me, actually quite old) computer over in The Apple Power Macintosh 7200/120 PC Compatible (Part 1). This post is a bit of a detour, but may help others understand why some images they download from the internet don’t work.

Disk partitioning is (of course) a way to divide up a single disk into multiple volumes (partitions) for different uses. While the idea is similar, computer platforms over the ages have done this in a variety of different ways, with varying formats on disk, and varying limitations. The ones that you’re most likely to be familiar with are the MBR partitioning scheme (from the IBM PC), and the GPT partitioning scheme (common for UEFI systems such as the modern PC and Mac). One you’re less likely to be familiar with is the Apple Partition Map scheme.

The way all IBM PCs and compatibles worked from the introduction of MS-DOS 2.0 in 1983 until some time after 2005 was the Master Boot Record partitioning scheme. It was outrageously simple: of the first 512 byte sector of a disk, the first 446 bytes was for the bootstrapping code (the “boot sector”), the last 2 bytes were for the magic two bytes telling the BIOS this disk was bootable, and the other 64 bytes were four entries of 16 bytes, each describing a disk partition. The Wikipedia page is a good overview of what it all looks like. Since “four partitions should be enough for anybody” wasn’t going to last, DOS 3.2 introduced “extended partitions” which was just using one of those 4 partitions as another similar data structure that could point to more partitions.
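
Sketched as C structs (illustrative only – note the packed attributes, since the on-disk layout has no padding):

#include <stdint.h>

/* One of the four 16-byte partition entries. */
struct __attribute__((packed)) mbr_partition_entry {
    uint8_t  status;        /* 0x80 = active/bootable */
    uint8_t  chs_first[3];  /* CHS address of first sector */
    uint8_t  type;          /* partition type code */
    uint8_t  chs_last[3];   /* CHS address of last sector */
    uint32_t lba_first;     /* LBA of first sector (little-endian) */
    uint32_t sector_count;  /* number of sectors */
};

/* The whole 512-byte sector 0. */
struct __attribute__((packed)) mbr {
    uint8_t  bootstrap[446];
    struct mbr_partition_entry partitions[4];
    uint16_t boot_signature;  /* 0xAA55 */
};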

In the 1980s (similar to today), the Macintosh was, of course, different. The Apple Partition Map is significantly more flexible than the MBR on PCs. For a start, you could have more than four partitions! You could actually have a lot more than four partitions, as the Apple Partition Map is a single 512-byte sector for each partition, and the partition map is itself a partition. Instead of being block 0 (like the MBR is), it actually starts at block 1, and is contiguous (The Driver Descriptor Record is what’s at block 0). So, once created, it’s hard to extend. Typically it’d be created as 64×512-byte entries, for 32kb… which turns out is actually about enough for anyone.
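
For comparison, here’s the start of an Apple Partition Map entry sketched the same way (abbreviated, and from memory – Inside Macintosh has the authoritative Pascal definition; note that everything is big-endian on disk):

#include <stdint.h>

/* One 512-byte partition map entry, starting at block 1. */
struct __attribute__((packed)) apple_partition_entry {
    uint16_t pmSig;          /* always 0x504D ("PM") */
    uint16_t pmSigPad;       /* reserved */
    uint32_t pmMapBlkCnt;    /* number of entries in the partition map */
    uint32_t pmPyPartStart;  /* first physical block of this partition */
    uint32_t pmPartBlkCnt;   /* number of blocks in this partition */
    char     pmPartName[32]; /* partition name */
    char     pmParType[32];  /* partition type, e.g. "Apple_HFS" */
    /* ...data-area, status and boot-related fields follow,
       padded out to the full 512-byte block... */
};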

The Inside Macintosh reference on the SCSI Manager goes through more detail as to these structures. If you’re wondering what language all the coding examples are in, it’s Pascal – which was fairly popular for writing Macintosh applications in back in the day.

But the actual partition map isn’t the “interesting” part of all this (and yes, the quotation marks are significant here), because Macs are pretty darn finicky about what disks to boot off, which gets to be interesting if you’re trying to find a CD-ROM image on the internet from which to boot, and then use to install an Operating System.

Stewart SmithEvery time I program a Mac…

… the preferred programming language changes.

I never programmed a 1980s Macintosh actually in the 1980s. It was sometime in the early 1990s that I first experienced Microsoft Basic for the Macintosh. I’d previously (unknowingly at the time as it was branded Commodore) experienced Microsoft BASIC on the Commodore 16, Commodore 64, and even the Apple ][, but the Macintosh version was something else. It let you do some pretty neat things such as construct a GUI with largely the same amount of effort as it took to construct a Text based UI on the micros I was familiar with.

Okay, to be fair, I’d also dabbled in Microsoft QBasic that came bundled with MS-DOS of the era, which let you do a whole bunch of graphics – so you could theoretically construct a GUI with it. Something I did attempt to do. Programming on the Mac made constructing a GUI so much easier.

Of course, Microsoft Basic wasn’t the preferred way to program on the Macintosh. At that time it was largely Pascal, with C being something that also existed – but you were going to see Pascal in Inside Macintosh. It was probably somewhat fortuitous that I’d poked at Pascal a bit as something alternate to look at in the high school computing classes. I can only remember using TurboPascal on DOS systems and never actually writing Pascal on the Macintosh.

By the middle part of the 1990s though, I was firmly incompetently writing C on the Mac. No doubt the quality of my code increased after I’d done some university courses actually covering the language rather than the only practical way I had to attempt to write anything useful being looking at Inside Macintosh examples in Pascal and “C for Dummies” which was very not-Macintosh. Writing C on UNIX/Linux was a lot easier – everything was made for it, including Actual Documentation!

Anyway, in the early 2000s I ran MacOS X for a bit on my white iBook G3, and did a (very) small amount of GUI / Project Builder (the precursor to Xcode) related development – instead largely focusing on command line / X11 things. The latest coolness being to use Objective-C to program applications (unless you were bringing over your Classic MacOS Carbon based application, then you could still write C). Enter some (incompetent) Objective-C coding!

Then Apple went to x86, so the hardware ceased being interesting, and I had no reason to poke at it even as a side effect of having hardware that could run the software stack. Enter a long-ass time of Debian, Ubuntu, and Fedora on laptops.

Come 2022 though, and (for reasons I should really write up), I’m poking at a Mac again and it’s now Swift as the preferred way to write apps. So, I’m (incompetently) hacking away at Swift code. I have to admit, it’s pretty nice. I’ve managed to be somewhat productive in a relative short amount of time, and all the affordances in the language gear towards the kind of safety that is a PITA when coding in C.

So this is my WIP utility to be able to import photos from a Shotwell database into the macOS Photos app:

There’s a lot of rough edges and unknowns left, including how to actually do the import (it looks like there’s going to be Swift code doing AppleScript things as the PhotoKit API is inadequate). But hey, some incompetent hacking in not too much time has a kind-of photo browser thing going on that feels pretty snappy.

,

Robert Collinshyper combinators in Rust

Recently I read Michael Snoyman’s post on combining Axum, Hyper, Tonic and Tower. While his solution worked, it irked me – it seemed like there should be a much tighter solution possible.

I can deep dive into the code in a later post perhaps, but I think there are four points of difference. One, since the post was written, Axum has started boxing its routes: so the enum dispatch approach taken, which delivers low overheads, actually has no benefits today.

Two, while writing out the entire type by hand has some benefits, async code is much more pithy.

Three, the code in the post is entirely generic, except for the routing function itself.

And four, the outer Service<AddrStream> is an unnecessary layer to abstract over: given the similar constraints – the inner Service must take Request<..> – it is possible to just not use a couple of helpers and instead work directly with Service<Request...>.

So, onto a pithier version.

First, the app server code itself.

use std::{convert::Infallible, net::SocketAddr};

use axum::routing::get;
use hyper::{server::conn::AddrStream, service::make_service_fn};
use hyper::{Body, Request};
use tonic::async_trait;

use demo::echo_server::{Echo, EchoServer};
use demo::{EchoReply, EchoRequest};

struct MyEcho;

#[async_trait]
impl Echo for MyEcho {
    async fn echo(
        &self,
        request: tonic::Request<EchoRequest>,
    ) -> Result<tonic::Response<EchoReply>, tonic::Status> {
        Ok(tonic::Response::new(EchoReply {
            message: format!("Echoing back: {}", request.get_ref().message),
        }))
    }
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));

    let axum_service = axum::Router::new().route("/", get(|| async { "Hello world!" }));

    let grpc_service = tonic::transport::Server::builder()
        .add_service(EchoServer::new(MyEcho))
        .into_service();

    let both_service =
        demo_router::Router::new(axum_service, grpc_service, |req: &Request<Body>| {
            Ok::<bool, Infallible>(
                req.headers().get("content-type").map(|x| x.as_bytes())
                    == Some(b"application/grpc"),
            )
        });

    let make_service = make_service_fn(move |_conn: &AddrStream| {
        let both_service = both_service.clone();
        async { Ok::<_, Infallible>(both_service) }
    });

    let server = hyper::Server::bind(&addr).serve(make_service);

    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}

Note the Router: it takes the two services and an Fn to determine which to use on any given request. Then we just drop that composed service into make_service_fn and we’re done.

Next up we have the Router implementation. This is generic across any two Service<Request<...>> types, as long as their body Data types are Into<Bytes> and their errors are Into<Box<dyn Error>>.

use std::{future::Future, pin::Pin, task::Poll};

use http_body::combinators::UnsyncBoxBody;
use hyper::{body::HttpBody, Body, Request, Response};
use tower::Service;

#[derive(Clone)]
pub struct Router<First, Second, F> {
    first: First,
    second: Second,
    discriminator: F,
}

impl<First, Second, F> Router<First, Second, F> {
    pub fn new(first: First, second: Second, discriminator: F) -> Self {
        Self {
            first,
            second,
            discriminator,
        }
    }
}

impl<First, Second, FirstBody, FirstBodyError, SecondBody, SecondBodyError, F, FErr>
    Service<Request<Body>> for Router<First, Second, F>
where
    First: Service<Request<Body>, Response = Response<FirstBody>>,
    First::Error: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
    First::Future: Send + 'static,
    First::Response: 'static,
    Second: Service<Request<Body>, Response = Response<SecondBody>>,
    Second::Error: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
    Second::Future: Send + 'static,
    Second::Response: 'static,
    F: Fn(&Request<Body>) -> Result<bool, FErr>,
    FErr: Into<Box<dyn std::error::Error + Send + Sync>> + Send + 'static,
    FirstBody: HttpBody<Error = FirstBodyError> + Send + 'static,
    FirstBody::Data: Into<bytes::Bytes>,
    FirstBodyError: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
    SecondBody: HttpBody<Error = SecondBodyError> + Send + 'static,
    SecondBody::Data: Into<bytes::Bytes>,
    SecondBodyError: Into<Box<dyn std::error::Error + Send + Sync>> + 'static,
{
    type Response = Response<
        UnsyncBoxBody<
            <hyper::Body as HttpBody>::Data,
            Box<dyn std::error::Error + Send + Sync + 'static>,
        >,
    >;
    type Error = Box<dyn std::error::Error + Send + Sync + 'static>;
    type Future =
        Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send + 'static>>;

    fn poll_ready(
        &mut self,
        cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Result<(), Self::Error>> {
        match self.first.poll_ready(cx) {
            Poll::Ready(Ok(())) => match self.second.poll_ready(cx) {
                Poll::Ready(Ok(())) => Poll::Ready(Ok(())),
                Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),
                Poll::Pending => Poll::Pending,
            },
            Poll::Ready(Err(e)) => Poll::Ready(Err(e.into())),
            Poll::Pending => Poll::Pending,
        }
    }

    fn call(&mut self, req: Request<Body>) -> Self::Future {
        let discriminant = { (self.discriminator)(&req) };
        let (first, second) = if matches!(discriminant, Ok(false)) {
            (Some(self.first.call(req)), None)
        } else if matches!(discriminant, Ok(true)) {
            (None, Some(self.second.call(req)))
        } else {
            (None, None)
        };
        let f = async {
            Ok(match discriminant.map_err(Into::into)? {
                true => second
                    .unwrap()
                    .await
                    .map_err(Into::into)?
                    .map(|b| b.map_data(Into::into).map_err(Into::into).boxed_unsync()),
                false => first
                    .unwrap()
                    .await
                    .map_err(Into::into)?
                    .map(|b| b.map_data(Into::into).map_err(Into::into).boxed_unsync()),
            })
        };
        Box::pin(f)
    }
}

Interesting things here – I use boxed_unsync to abstract over the body concrete type, and I implement the future using async code rather than as a separate struct. It becomes much smaller even after a few bits of extra type constraining.

One thing that flummoxed me for a little while was the need to capture the future for the underlying response outside of the async block. Failing to do so provokes a 'static requirement which was tricky to debug. Fortunately there is a bug on making this easier to diagnose in rustc already. The underlying problem is that if you create the async block and then dereference self inside it, the impl type behind .first has to live an arbitrary time. Whereas by capturing the future immediately, only the impl type of the future has to live an arbitrary time, and that doesn’t then require changing the signature of the function.

This is almost worth turning into a crate – I couldn’t see an existing one when I looked, though it does end up rather small – < 100 lines. What do you all think?

FLOSS Down Under - online free software meetingsFirst Meeting Agenda

The first meeting will start at 1PM Australian Eastern time (Melbourne/Sydney) which is +1100 on Saturday the 5th of February.

I will start the video chat an hour early in case someone makes a timezone mistake and gets there an hour before it starts. If anyone else joins early we will have random chat until the start time (deliberately avoiding topics worthy of the main meeting). The link http://b.coker.com.au will redirect to the meeting URL on the day.

The first scheduled talk is a summary and discussion of free software related news. Anyone who knows of something new that excites them is welcome to speak about it.

The main event is discussion of storage technology and hands-on training on BTRFS and ZFS for those who are interested. Here are the ZFS training notes and here are the BTRFS training notes. Feel free to do the training exercises on your own VM before the meeting if you wish.

Then discussion of the future of the group and the use of FOSS social media. While social media is never going to be compulsory some people will want to use it to communicate and we could run some servers for software that is considered good (lots of server capacity is available).

Finally we have to plan future meetings and decide on which communication methods are desired.

The BBB instance to be used for the video conference is sponsored by NZOSS and Catalyst Cloud.

,

FLOSS Down Under - online free software meetingsFlounder Overview

Flounder is a new free software users group based in the Australia/NZ area. Flounder stands for FLOSS (Free Libre Open Source Software) down under.

Here is my blog post describing the initial idea; the comment from d3Xt3r suggested the name. Flounder is a group of fish with species native to Australia and NZ.

The main aim is to provide educational benefits to free software users, via an online meeting with a scope larger than one country, that can’t be obtained by watching YouTube videos etc. When the pandemic ends we will keep running this, as there are benefits to be obtained from a meeting of a wide geographic scope that can’t be obtained by meetings in a single city. People from other countries are welcome to attend but they aren’t the focus of the meeting.

Until we get a better DNS name the address http://b.coker.com.au will redirect to the BBB instance used for online meetings (the meeting address isn’t yet setup so it redirects to the blog). The aim is that there will always be a short URL for the meeting, so anyone whose device loses contact can quickly type the URL into a backup device.

The first meeting will be on the 5th of Feb 2022 at 1PM Melbourne time +1100. When we get a proper domain I’ll publish a URL for an iCal file with entries for all meetings. I will also find some suitable way for meeting times to be localised (I’m sure there’s a WordPress plugin for that).

For the hands-on part of the meetings there will be virtual machine images you can download to run on your own system (tested with KVM, should work with other VM systems) and the possibility of logging in to a running VM. The demonstration VMs will have public IPv6 addresses and will also be available through different ports on a single IPv4 address; having IPv6 on your workstation will be convenient for you, but you can survive without it.

Linux Australia has a list of LUGs in Australia; is there a similar list for NZ? One thing I’d like to see is a list of links to iCal files for all the meetings, and also an iCal aggregator for all iCal feeds of online meetings. I’ll host it myself if necessary, but it’s probably best to do it via Linux Australia (Linux Australasia?) if possible.

,

Jan SchmidtPulling on a thread

I’m attending the https://linux.conf.au/ conference online this weekend, which is always a good opportunity for some sideline hacking.

I found something boneheaded doing that today.

There have been a few times while inventing the OpenHMD Rift driver where I’ve noticed something strange and followed the thread until it made sense. Sometimes that leads to improvements in the driver, sometimes not.

In this case, I wanted to generate a graph of how long the computer vision processing takes – from the moment each camera frame is captured until poses are generated for each device.

To do that, I have some logging branches that output JSON events to log files, and I write scripts to process those. I used that data and produced:

Pose recognition latency.
dt = interpose spacing, delay = frame to pose latency

Two things caught my eye in this graph. The first is the way the baseline latency (pink lines) increases from ~20ms to ~58ms. The 2nd is the quantisation effect, where pose latencies are clearly moving in discrete steps.

Neither of those should be happening.

Camera frames are being captured from the CV1 sensors every 19.2ms, and it takes 17-18ms for them to be delivered across the USB. Depending on how many IR sources the cameras can see, figuring out the device poses can take a different amount of time, but the baseline should always hover around 17-18ms because the fast “device tracking locked” case takes as little as 1ms.

Did you see me mention 19.2ms as the interframe period? Guess what the spacing of those quantisation levels in the graph is? I recognised it as implying that something in the processing is tied to frame timing when it should not be.

OpenHMD Rift CV1 tracking timing

This 2nd graph helped me pinpoint what exactly was going on. This graph is cut from the part of the session where the latency has jumped up. What it shows is a ~1 frame delay between when the frame is received (frame-arrival-finish-local-ts) and when the initial analysis even starts!

That could imply that the analysis thread is just busy processing the previous frame and doesn’t get to start working on the new one yet – but the graph says that fast analysis is typically done in 1-10ms at most. It should rarely be busy when the next frame arrives.

This is where I found the boneheaded code – a rookie mistake I wrote when putting in place the image analysis threads early on in the driver development and never noticed.

There are 3 threads involved:

  • USB service thread, reading video frame packets and assembling pixels in framebuffers
  • Fast analysis thread, that checks tracking lock is still acquired
  • Long analysis thread, which does brute-force pose searching to reacquire / match unknown IR sources to device LEDs

These 3 threads communicate using frame worker queues passing frames between each other. Each analysis thread does this pseudocode:

while driver_running:
    Pop a frame from the queue
    Process the frame
    Sleep for new frame notification

The problem is in the 3rd line. If the driver is ever still processing the frame in line 2 when a new frame arrives – say because the computer got really busy – the thread sleeps anyway and won’t wake up until the next frame arrives. At that point, there’ll be 2 frames in the queue, but it still only processes one – so the analysis gains a 1 frame latency from that point on. If it happens a second time, it falls behind by another frame! Any further and it starts reclaiming frames from the queues to keep the video capture thread fed – but it only reclaims one frame at a time, so the latency remains!

The fix is simple:

while driver_running:
   Pop a frame
   Process the frame
   if queue_is_empty():
     sleep for new frame notification
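For what it’s worth, the same pattern falls out almost for free with a blocking queue. Here’s a minimal Python sketch (the actual driver is C; process_frame here is a stand-in for the analysis work), where the wait only ever happens when the queue is genuinely empty:

import queue
import threading

frame_queue: queue.Queue = queue.Queue()   # filled by the USB capture thread
driver_running = True

def process_frame(frame):
    """Placeholder for the fast/long analysis work."""
    pass

def analysis_worker():
    while driver_running:
        try:
            # get() only waits while the queue is empty, so a backlog of
            # frames is drained one straight after another instead of one
            # per wakeup -- the equivalent of "sleep only if
            # queue_is_empty()" in the pseudocode above.
            frame = frame_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        process_frame(frame)

threading.Thread(target=analysis_worker, daemon=True).start()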

Doing that for both the fast and long analysis threads changed the profile of the pose latency graph completely.

Pose latency and inter-pose spacing after fix

This is a massive win! To be clear, this has been causing problems in the driver for at least 18 months but was never obvious from the logs alone. A single good graph is worth a thousand logs.

What does this mean in practice?

The way the fusion filter I’ve built works, in between pose updates from the cameras, the position and orientation of each device are predicted / updated using the accelerometer and gyro readings. Particularly for position, using the IMU for prediction drifts fairly quickly. The longer the driver spends ‘coasting’ on the IMU, the less accurate the position tracking is. So, the sooner the driver can get a correction from the camera to the fusion filter the less drift we’ll get – especially under fast motion. Particularly for the hand controllers that get waved around.

Before: Left Controller pose delays by sensor
After: Left Controller pose delays by sensor

Poses are now being updated up to 40ms earlier and the baseline is consistent with the USB transfer delay.

You can also visibly see the effect of the JPEG decoding support I added over Christmas. The ‘red’ camera is directly connected to USB3, while the ‘khaki’ camera is feeding JPEG frames over USB2 that then need to be decoded, adding a few ms delay.

The latency reduction is nicely visible in the pose graphs, where the ‘drop shadow’ effect of pose updates tailing fusion predictions largely disappears and there are fewer large gaps in the pose observations when long analysis happens (visible as straight lines jumping from point to point in the trace):

Before: Left Controller poses
After: Left Controller poses

,

Colin CharlesThis thing is still on?

Yes, the blog is still on. January 2004 I moved to WordPress, and it is still here January 2022. I didn’t write much last year (neither here, nor experimenting with the Hey blog). I didn’t post anything to Instagram last year either from what I can tell, just a lot of stories.

August 16 2021, I realised I was 1,000 days from May 12 2024, which is when I become 40. As of today, that leaves 850 days. Did I squander the last 150 days? I’m back to writing almost daily in the Hobonichi Techo (I think last year and the year before were mostly washouts; I barely scribbled anything offline).

I got a new Apple Watch Series 7 yesterday. I can say I used the Series 4 well (79% battery life), purchased in the UK when I broke my Series 0 in Edinburgh airport.

TripIt stats for last year claimed 95 days on the road. This is, of course, a massive joke, but I’m glad I did get to visit London, Lisbon, New York, San Francisco, Los Angeles without issue. I spent a lot of time in Kuantan, a bunch of Langkawi trips, and also, I stayed for many months at the Grand Hyatt Kuala Lumpur during the May lockdowns (I practically stayed there all lockdown).

With 850 days to go till I’m 40, I have plenty I would like to achieve. I think I’ll write a lot more here. And elsewhere. Get back into the habit of doing. And publishing by learning and doing. No fear. Not that I wasn’t doing, but it’s time to be prolific with what’s been going on.

,

,

,

Gary PendergastWordPress and web3

Blockchain. Cryptocurrency. Ethereum. NFTs. DAOs. Smart Contracts. web3. It’s impossible to avoid the blockchain hype machine these days, but it’s often just as difficult to decipher what it all means.

On top of that, discourse around web3 is extremely polarising: everyone involved is very keen to a) pick a team, and b) get you to join their team. If you haven’t picked a team, you must be secretly with the other team.

Max Read made a compelling argument that the web3 debate is in fact two different debates:

But, OK, what is the root disagreement, exactly? The way I read it there are two broad “is web3 bullshit?” debates, not just one, centered around the following questions:

Can the blockchain do anything that other currently existing technology cannot do and/or do anything better or more efficiently than other currently existing technology?

Will the blockchain form the architecture of the internet of the future (i.e. “web3”), and/or will blockchain-native companies and organizations become important and powerful?

Max Read — Is web3 bullshit?

I’m inclined to agree with Max’s analysis here: there’s a technical question, and there’s a business/cultural question. It’s hard to separate the two when every day sees new headlines about millions of dollars being stolen or scammed; or thousands of people putting millions of dollars into highly optimistic ventures. There are extreme positives and extreme negatives happening all the time in the web3 world.

With that in mind, I want to take a step back from the day-to-day excitement of cryptocurrency and web3, and look at some of the driving philosophies espoused by the movement.

Philosophies of web3

There are a lot of differing viewpoints on web3, every individual has a slightly different take on it. There are three broad themes that stand out, however.

Decentralised

Blockchain-based technology is inherently distributed (with some esoteric caveats, but we can safely ignore them for now). In a world where the web centres around a handful of major services, where we’ve seen the harm that the likes of Facebook and YouTube can inflict on society, it’s not surprising that decentralisation would be a powerful theme drawing in anyone looking for an alternative.

Decentralisation isn’t new to the Internet, of course: it’s right there in the name. This giant set of “interconnected networks” has been decentralised from the very beginning. It’s not perfect, of course: oppressive governments can take control of the borders of their portion of the Internet, and we’ve come to rely on a handful of web services to handle the trickier parts of using the web. But fundamentally, that decentralised architecture is still there. I can still set up a web site hosted on my home computer, which anyone in the world could access.

I don’t do that, however, for the same reason that web3 isn’t immune from centralised services: Centralisation is convenient. Just as we have Facebook, or Google, or Amazon as giant centralised services on the current web, we can already see similar services appearing for web3. For payments, Coinbase has established itself as a hugely popular place to exchange cryptocurrencies and traditional currencies. For NFTs, OpenSea is the service where you’ll find nearly every NFT collection. MetaMask keeps all of your crypto-based keys, tokens, and logins in a single “crypto wallet”.

Centralisation is convenient.

While web3 proponents give a lot of credence to the decentralised nature of cryptocurrency being a driver of popularity, I’m not so sure. At best, I’m inclined to think that decentralisation is table stakes these days: you can’t even get started as a global movement without a strong commitment to decentralisation.

But if decentralisation isn’t the key, what is?

Ownership

When we talk about ownership in web3, NFTs are clearly the flavour of the month, but recent research indicates that the entire NFT market is massively artificially inflated.

Rather than taking pot-shots at the NFT straw man, I think it’s more interesting to look at the idea of ownership in terms of attribution. The more powerful element of this philosophy isn’t about who owns something, it’s who created it. NFTs do something rather novel with attribution, allowing royalty payments to the original artist every time an NFT is resold. I love this aspect: royalties shouldn’t just be for movie stars, they should be for everyone.

Comparing that to the current web, take the 3 paragraphs written by Max Read that I quoted above. I was certainly under no technical obligation to show that it was a quote, to attribute it to him, or to link to the source. In fact, it would have been easier for me to just paste his words into this post, and pretend they were my own. I didn’t, of course, because I feel an ethical obligation to properly attribute the quote.

In a world where unethical actors will automatically copy/paste your content for SEO juice (indeed, I expect this blog post to show up on a bunch of these kinds of sites); where massive corporations will consume everything they can find about you, in order to advertise more effectively to you, it’s not at all surprising that people are looking for a technical solution for taking back control of their data, and for being properly attributed for their creations.

The interesting element of this philosophy isn’t about who owns something, it’s who created it.

That’s not to say that existing services discourage attribution: a core function of Twitter is retweets, a core function of Tumblr is reblogging. WordPress still supports trackbacks, even if many folks turn them off these days.

These are all blunt instruments, though, aimed at attributing an entire piece, rather than a more targeted approach. What I’d really like is a way to easily quote and attribute a small chunk of a post: 3 paragraphs (or blocks, if you want to see where I’m heading 😉), inserted into my post, linking back to where I got them from. If someone chooses to quote some of this post, I’d love to receive a pingback just for that quote, so it can be seen in the right context.

The functionality provided by Twitter and Tumblr is less a technologically-based enforcement of attribution, and more an example of paving the cow path: by and large, people want to properly attribute others; providing the tools to do so can easily become a fundamental part of how any software is used.

These tools only work so long as there’s an incentive to use them, however. web3 certainly provides the tools to attribute others, but much like SEO scammers copy/pasting blog posts, the economics of the NFT bubble is clearly a huge incentive to ignore those tools and ethical obligations, to the point that existing services have had to build additional features just to detect this abuse.

Monetisation

With every major blockchain also being a cryptocurrency, monetisation is at the heart of the entire web3 movement. Every level of the web3 tech stack involves a cryptocurrency-based protocol. This naturally permeates through the entire web3 ecosystem, where money becomes a major driving factor for every web3-based project.

And so, it’s impossible to look at web3 applications without also considering the financial aspect. When you have to pay just to participate, you have to ask whether every piece of content you create is “worth it”.

Again, let’s go back to the 3 paragraphs I quote above. In a theoretical web3 world, I’d publish this post on a blockchain in some form or another, and that act would also likely include noting that I’d quoted 3 blocks of text attributed to Max Read. I’d potentially pay some amount of money to Max, along with the fees that every blockchain charges in order to perform a transaction. While this process is potentially helpful to the original author at first glance, I suspect the second and third order effects will be problematic. Having only just clicked the Publish button a few seconds earlier, I’m already some indeterminate amount of money out of pocket. Which brings me back to the question, is this post “worth it”? Will enough people tip/quote/remix/whatever me, to cover the cost of publishing? When every creative work must be viewed through a lens of financial impact, it fundamentally alters that creative process.

When you have to pay just to participate, you have to ask whether every piece of content you create is “worth it”.

Ultimately, we live in a capitalist society, and everyone deserves the opportunity to profit off their work. But by baking monetisation into the underlying infrastructure of web3, it becomes impossible to opt-out. You either have the money to participate without being concerned about the cost, or you’re going to need to weigh up every interaction by whether or not you can afford it.

Web3 Philosophies in WordPress

After breaking it all down, we can see that it’s not all black-and-white. There are some positive parts of web3, and some negative parts. Not that different to the web of today, in fact. 🙂 That’s not to say that either approach is the correct one: instead, we should be looking to learn from both, and produce something better.

Decentralised

I’ve long been a proponent of leveraging the massive install base of WordPress to provide distributed services to anyone. Years ago, I spoke about an idea called “Connected WordPress” that would do exactly that. While the idea didn’t gain a huge amount of traction at the time, the DNA of the Connected WordPress concept shares a lot of similar traits to the decentralised nature of web3.

I’m a big fan of decentralised technologies as a way for individuals to claw back power over their own data from the governments and massive corporations that would prefer to keep it all centralised, and I absolutely think we should be exploring ways to make the existing web more resistant to censorship.

At the same time, we have to acknowledge that there are certainly benefits to centralisation. As long as people have the freedom to choose how and where they participate, and centralised services are required to play nicely with self hosted sites, is there a practical difference?

I quite like how Solid allows you to have it both ways, whilst maintaining control over your own data.

Ownership Attribution

Here’s the thing about attribution: you can’t enforce it with technology alone. Snapchat have indirectly demonstrated exactly this problem: in order to not lose a message, people would screenshot or record the message on their phone. In response, Snapchat implemented a feature to notify the other party when you screenshot a message from them. To avoid this, people will now use a second phone to take a photo or video of the message. While this example isn’t specifically about attribution, it demonstrates the problem that there’s no way to technologically restrict how someone interacts with content that you’ve published, once they’ve been granted access.

Instead of worrying about technical restrictions, then, we should be looking at how attribution can be made easier.

IndieWeb is a great example of how this can be done in a totally decentralised fashion.

Monetisation

I’m firmly of the opinion that monetisation of the things you create should be opt-in, rather than opt-out.

Modern society is currently obsessed with monetising everything, however. It comes in many different forms: hustle culture, side gigs, transforming hobbies into businesses, meme stocks, and cryptocurrencies: they’re all symptoms of this obsession.

I would argue that, rather than accepting as fait accompli that the next iteration of the web will be monetised to the core, we should be pushing back against this approach. Fundamentally, we should be looking to build for a post scarcity society, rather than trying to introduce scarcity where there previously was none.

While we work towards that future, we should certainly make it easier for folks to monetise their work, but the current raft of cryptocurrencies just aren’t up to the task of operating as… currencies.

What Should You Do?

Well, that depends on what your priorities are. The conversations around web3 are taking up a lot of air right now, so it’s possible to get the impression web3 will be imminently replacing everything. It’s important to keep perspective on this, though. While there’s a lot of money in the web3 ecosystem right now, it’s dwarfed by the sheer size of the existing web.

If you’re excited about the hot new tech, and feeling inspired by the ideas espoused in web3 circles? Jump right in! I’m certain you’ll find something interesting to work on.

Always wanted to get into currency speculation, but didn’t want to deal with all those pesky “regulations” and “safeguards”? Boy howdy, are cryptocurrencies or NFTs the place for you. (Please don’t pretend that this paragraph is investment advice, it is nothing of the sort.)

Want to continue building stuff on the web, and you’re willing to learn new things when you need them, but are otherwise happy with your trajectory? Just keep on doing what you’re doing. Even if web3 does manage to live up to the hype, it’ll take a long time for it to be adopted by the mainstream. You’ll have years to adapt.

Final Thoughts

There are some big promises associated with web3, many of which sound very similar to the promises that were made around web 2.0, particularly around open APIs, and global interoperability. We saw what happened when those kinds of tools go wrong, and web3 doesn’t really solve those problems. It may exacerbate them in some ways, since it’s impossible to delete your data from a blockchain.

That said, (and I say this as a WordPress Core developer), just because a particular piece of software is not the optimal technical solution doesn’t mean it won’t become the most popular. Market forces can be a far stronger factor than technical superiority. There are many legitimate complaints about blockchain (including performance, bloat, fit for purpose, and security) that have been levelled against WordPress in the past, but WordPress certainly isn’t slowing down. I’m not even close to convinced that blockchain is the right technology to base the web on, but I’ve been doing this for too long to bet everything against it.

Markets can remain irrational a lot longer than you and I can remain solvent.

—A. Gary Shilling

As for me, well… 😄

I remain sceptical of web3 as it’s currently defined, but I think there’s room to change it, and to adopt the best bits into the existing web. Web 1.0 didn’t magically disappear when Web 2.0 rolled in, it adapted. Maybe we’ll look back in 10 years and say this was a time when the web fundamentally changed. Or, maybe we’ll refer to blockchain in the same breath as pets.com, and other examples from the dotcom boom of the 1990’s.

The Net interprets censorship as damage and routes around it.

—John Gilmore

This quote was originally referring to Usenet, but it’s stayed highly relevant in the decades since. I think it applies here, too: if the artificial scarcity built into web3 behaves too much like censorship, preventing people from sharing what they want to share, the internet (or, more accurately, the billions of people who interact with the internet) will just… go around it. It won’t all be smooth sailing, but we’ll continue to experiment, evolve, and adapt as it changes.

Personally, I think now is a great time for us to be embracing the values and ideals of projects like Solid, and IndieWeb. Before web3 referred to blockchains, it was more commonly used in reference to the Semantic Web, which is far more in line with WordPress’ ideals, whilst also matching many of the values prioritised by the new web3. As a major driver of the Open Web, WordPress can help people own their content in a sustainable way, engage with others on their own terms, and build communities that don’t depend on massive corporations or hand-wavy magical tech solutions.

Don’t get too caught up in the drama of whatever is the flavour of the month. I’m optimistic about the long term resilience of the internet, and I think you should be, too. 🥳

,

Jan Schmidt2.5 years of Oculus Rift

Once again time has passed, and another update on Oculus Rift support feels due! As always, it feels like I’ve been busy with work and not found enough time for Rift CV1 hacking. Nevertheless, looking back over the history since I last wrote, there’s quite a lot to tell!

In general, the controller tracking is now really good most of the time. Like, wildly-swing-your-arms-and-not-lose-track levels (most of the time). The problems I’m hunting now are intermittent and hard to identify in the moment while using the headset – hence my enthusiasm over the last updates for implementing stream recording and a simulation setup. I’ll get back to that.

Outlier Detection

Since I last wrote, the tracking improvements have mostly come from identifying and rejecting incorrect measurements. That is, if I have 2 sensors active and 1 sensor says the left controller is in one place, but the 2nd sensor says it’s somewhere else, we’ll reject one of those – choosing the pose that best matches what we already know about the controller: the last known position, the gravity direction the IMU is detecting, and the last known orientation. The tracker will now also reject observations for a time if (for example) the reported orientation is outside the range we expect. The IMU gyroscope can track the orientation of a device for quite a while, so it can be relied on to identify strong pose priors once we’ve integrated a few camera observations to get the yaw correct.

It works really well, but I think improving this area is still where most future refinements will come from. That, and avoiding incorrect pose extractions in the first place.
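To make the idea concrete, here is a toy Python sketch of outlier rejection against the filter’s prior – this is not the OpenHMD code (which is C), and the thresholds and scoring are invented purely for illustration:

import numpy as np

MAX_POSITION_JUMP_M = 0.10      # assumed threshold: 10 cm between updates
MAX_ORIENTATION_ERR_RAD = 0.35  # assumed threshold: roughly 20 degrees

def quat_angle(q1, q2):
    """Angle in radians between two unit quaternions."""
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(min(1.0, d))

def select_pose(candidates, prior_pos, prior_orient):
    """Pick the camera-extracted pose closest to the prior, or None if all are outliers."""
    best, best_score = None, float("inf")
    for pos, orient in candidates:
        pos_err = np.linalg.norm(np.asarray(pos) - prior_pos)
        ang_err = quat_angle(np.asarray(orient), prior_orient)
        if pos_err > MAX_POSITION_JUMP_M or ang_err > MAX_ORIENTATION_ERR_RAD:
            continue  # reject: disagrees too much with what we already know
        score = pos_err + ang_err
        if score < best_score:
            best, best_score = (pos, orient), score
    return best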

Plot of headset tracking – orientation and position

The above plot is a sample of headset tracking, showing the extracted poses from the computer vision vs the pose priors / tracking from the Kalman filter. As you can see, there are excursions in both position and orientation detected from the video, but these are largely ignored by the filter, producing a steadier result.

Left Touch controller tracking – orientation and position

This plot shows the left controller being tracked during a Beat Saber session. The controller tracking plot is quite different, because controllers move a lot more than the headset, and have fewer LEDs to track against. There are larger gaps here in the timeline while the vision re-acquires the device – and in those gaps you can see the Kalman filter interpolating using IMU input only (sometimes well, sometimes less so).

Improved Pose Priors

Another nice change is in the way the search for a tracked device is made in a video frame. Before starting to look for a particular device, it now always gets the latest estimate of the previous device position from the fusion filter. Previously, it would use the estimate of the device pose as it was when the camera exposure happened – but between then and the moment we start analysis, more IMU observations and other camera observations might arrive and be integrated into the filter, which will have updated the estimate of where the device was in the frame.

This is the bit where I think the Kalman filter is particularly clever: Estimates of the device position at an earlier or later exposure can improve and refine the filter’s estimate of where the device was when the camera captured the frame we’re currently analysing! So clever. That mechanism (lagged state tracking) is what allows the filter to integrate past tracking observations once the analysis is done – so even if the video frame search takes 150ms (for example), it will correct the filter’s estimate of where the device was 150ms in the past, which ripples through and corrects the estimate of where the device is now.

LED visibility model

To improve the identification of devices, I measured the actual angle from which LEDs are visible (about 75 degrees off axis) and measured their size. The pose matching now has a better idea of which LEDs should be visible for a proposed orientation and what pixel size we expect them to have at a particular distance.

Better Smoothing

I fixed a bug in the output pose smoothing filter where it would glitch as you turned completely around and crossed the point where the angle jumps from +pi to -pi or vice versa.

Improved Display Distortion Correction

I got a wide-angle hi-res webcam and took photos of a checkerboard pattern through the lens of my headset, then used OpenCV and panotools to calculate new distortion and chromatic aberration parameters for the display. For me, this has greatly improved the result. I’m waiting to hear if that’s true for everyone, or if I’ve just fixed it for my headset.

Persistent Config Cache

Config blocks! A long time ago, I prototyped code to create a persistent OpenHMD configuration file store in ~/.config/openhmd. The rift-kalman-filter branch now uses that to store the configuration blocks that it reads from the controllers. The first time a controller is seen, it will load the JSON calibration block as before, but it will now store it in that directory – removing a multiple second radio read process on every subsequent startup.
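The caching pattern itself is simple. Here’s a minimal Python sketch of the idea – the real branch does this in C, and the per-serial file name below is my assumption, not the actual layout:

import json
from pathlib import Path

CONFIG_DIR = Path.home() / ".config" / "openhmd"

def load_calibration(serial: str, read_from_radio):
    """read_from_radio stands in for the slow multi-second device read."""
    cache_file = CONFIG_DIR / f"{serial}.json"   # hypothetical file naming
    if cache_file.exists():
        return json.loads(cache_file.read_text())
    calibration = read_from_radio()
    CONFIG_DIR.mkdir(parents=True, exist_ok=True)
    cache_file.write_text(json.dumps(calibration))
    return calibration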

Persistent Room Configuration

To go along with that, I have an experimental rift-room-config branch that creates a rift-room-config.json file and stores the camera positions after the first startup. I haven’t pushed that to the rift-kalman-filter branch yet, because I’m a bit worried it’ll cause surprising problems for people. If the initial estimate of the headset pose is wrong, the code will back-project the wrong positions for the cameras, which will get written to the file and cause every subsequent run of OpenHMD to generate bad tracking until the file is removed. The goal is to have a loop that monitors whether the camera positions seem stable based on the tracking reports, and to use averaging and resetting to correct them if not – or at least to warn the user that they should re-run some (non-existent) setup utility.

Video Capture + Processing

The final big ticket item was a rewrite of how the USB video frame capture thread collects pixels and passes them to the analysis threads. This now does less work in the USB thread, so misses fewer frames, and also I made it so that every frame is now searched for LEDs and blob identities tracked with motion vectors, even when no further analysis will be done on that frame. That means that when we’re running late, it better preserves LED blob identities until the analysis threads can catch up – increasing the chances of having known LEDs to directly find device positions and avoid searching. This rewrite also opened up a path to easily support JPEG decode – which is needed to support Rift Sensors connected on USB 2.0 ports.

Session Simulator

I mentioned the recording simulator continues to progress. Since the tracking problems are now getting really tricky to figure out, this tool is becoming increasingly important. So far, I have code in OpenHMD to record all video and tracking data to a .mkv file. Then, there’s a simulator tool that loads those recordings. Currently it is capable of extracting the data back out of the recording, parsing the JSON and decoding the video, and presenting it to a partially implemented simulator that then runs the same blob analysis and tracking OpenHMD does. The end goal is a Godot based visualiser for this simulation, and to be able to step back and forth through time examining what happened at critical moments so I can improve the tracking for those situations.

To make recordings, there’s the rift-debug-gstreamer-record branch of OpenHMD. If you have GStreamer and the right plugins (gst-plugins-good) installed, and you set env vars like this, each run of OpenHMD will generate a recording in the target directory (make sure the target dir exists):

export OHMD_TRACE_DIR=/home/user/openhmd-traces/
export OHMD_FULL_RECORDING=1

Up Next

The next things that are calling to me are to improve the room configuration estimation and storage as mentioned above – to detect when the poses a camera is reporting don’t make sense because it’s been bumped or moved.

I’d also like to add back in tracking of the LEDs on the back of the headset headband, to support 360 tracking. I disabled those because they cause me trouble – the headband is adjustable relative to the headset, so the LEDs don’t appear where the 3D model says they should be, and that causes jitter and pose mismatches. They need special handling.

One last thing I’m finding exciting is a new person taking an interest in Rift S and starting to look at inside-out tracking for that. That’s just happened in the last few days, so not much to report yet – but I’ll be happy to have someone looking at that while I’m still busy over here in CV1 land!

As always, if you have any questions, comments or testing feedback – hit me up at thaytan@noraisin.net or on @thaytan Twitter/IRC.

Thank you to the kind people signed up as Github Sponsors for this project!

,

Matt PalmerDiscovering AWS IAM accounts

Let’s say you’re someone who happens to discover an AWS account number, and would like to take a stab at guessing what IAM users might be valid in that account. Tricky problem, right? Not with this One Weird Trick!

In your own AWS account, create a KMS key and try to reference an ARN representing an IAM user in the other account as the principal. If the policy is accepted by PutKeyPolicy, then that IAM user exists, and if the error says “Policy contains a statement with one or more invalid principals” then the user doesn’t exist.

As an example, say you want to guess at IAM users in AWS account 111111111111. Then make sure this statement is in your key policy:

{
  "Sid": "Test existence of user",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:user/bob"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*"
}

If that policy is accepted, then the account has an IAM user named bob. Otherwise, the user doesn’t exist. Scripting this is left as an exercise for the reader.
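If you did want to script it, a rough boto3 sketch might look like the following. Treat it as an outline: KEY_ID, the policy skeleton, and the assumption that the rejection surfaces as a MalformedPolicyDocumentException error code are mine, not from the original write-up.

import json
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")
sts = boto3.client("sts")
KEY_ID = "your-kms-key-id"   # an existing KMS key in your own account
MY_ACCOUNT = sts.get_caller_identity()["Account"]

def iam_user_exists(account_id: str, username: str) -> bool:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # keep your own account as key admin so the key stays usable
                "Sid": "Key admin",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{MY_ACCOUNT}:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # the probe: the policy is only valid if this principal exists
                "Sid": "Test existence of user",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:user/{username}"},
                "Action": "kms:DescribeKey",
                "Resource": "*",
            },
        ],
    }
    try:
        kms.put_key_policy(KeyId=KEY_ID, PolicyName="default",
                           Policy=json.dumps(policy))
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "MalformedPolicyDocumentException":
            return False   # "one or more invalid principals"
        raise

# for name in ("bob", "alice", "admin"):
#     print(name, iam_user_exists("111111111111", name))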

Sadly, wildcards aren’t accepted in the username portion of the ARN, otherwise you could do some funky searching with ...:user/a*, ...:user/b*, etc. You can’t have everything; where would you put it all?

I did mention this to AWS as an account enumeration risk. They’re of the opinion that it’s a good thing you can know what users exist in random other AWS accounts. I guess that means this is a technique you can put in your toolbox safe in the knowledge it’ll work forever.

Given this is intended behaviour, I assume you don’t need to use a key policy for this, but that’s where I stumbled over it. Also, you can probably use it to enumerate roles and anything else that can be a principal, but since I don’t see as much use for that, I didn’t bother exploring it.

There you are, then. If you ever need to guess at IAM users in another AWS account, now you can!

,

Glen TurnerThe tyranny of product names

For a long time computer manufacturers have tried to differentiate themselves and their products from their competitors with fancy names with odd capitalisation and spelling. But as an author, using these names does a disservice to the reader: how are they to know that DEC is pronounced as if it was written Dec ("deck")?

It's time we pushed back, and wrote for our readers, not for corporations.

It's time to use standard English rules for these Corporate Fancy Names. Proper names begin with a capital, unlike "ciscoSystems®" (so bad that Cisco itself moved away from it). Words are separated by spaces, so "Cisco Systems". Abbreviations and acronyms are written in lower case if they are pronounced as a word, in upper case if each letter is pronounced: so "ram" and "IBM®".

So from here on in I'll be using the following:

  • Face Book. Formerly, "Facebook®".
  • Junos. Formerly JUNOS®.
  • ram. Formerly RAM.
  • Pan OS. Formerly PAN-OS®.
  • Unix. Formerly UNIX®.

I'd encourage you to try this in your own writing. It does look odd at first, but the result is undeniably more readable. If we are not writing to be understood by our audience then we are nothing more than an unpaid member of some corporation's marketing team.




,

Dave HallYour Terraform Module Needs an Opinion

Learn why your Terraform modules should be opinionated.

,

Chris NeugebauerTalk Notes: On The Use and Misuse of Decorators

I gave the talk On The Use and Misuse of Decorators as part of PyConline AU 2021, the second in an annoyingly long sequence of not-in-person PyCon AU events. Here’s some code samples that you might be interested in:

Simple @property implementation

This shows a demo of @property-style getters. Setters are left as an exercise :)


def demo_property(f):
    f.is_a_property = True
    return f


class HasProperties:

    def __getattribute__(self, name):
        ret = super().__getattribute__(name)
        if hasattr(ret, "is_a_property"):
            return ret()
        else:
            return ret

class Demo(HasProperties):

    @demo_property
    def is_a_property(self):
        return "I'm a property"

    def is_a_function(self):
        return "I'm a function"


a = Demo()
print(a.is_a_function())
print(a.is_a_property)
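The setter half really was left as an exercise in the talk; here’s one way you might sketch it, extending the same trick. This is my own code, not from the session, and the helper names (setter_func, setter) are invented for illustration:

def demo_property(f):
    f.is_a_property = True
    f.setter_func = None

    def setter(g):
        f.setter_func = g
        return f

    f.setter = setter
    return f


class HasProperties:

    def __getattribute__(self, name):
        ret = super().__getattribute__(name)
        if hasattr(ret, "is_a_property"):
            return ret()
        return ret

    def __setattr__(self, name, value):
        # Look the name up on the class: if it is one of our pseudo-properties
        # with a registered setter, call that instead of storing directly.
        prop = getattr(type(self), name, None)
        if getattr(prop, "setter_func", None) is not None:
            prop.setter_func(self, value)
        else:
            super().__setattr__(name, value)


class Demo(HasProperties):

    @demo_property
    def greeting(self):
        return self._greeting.upper()

    @greeting.setter
    def greeting(self, value):
        self._greeting = value


d = Demo()
d.greeting = "hello"   # routed through the registered setter
print(d.greeting)      # prints "HELLO" via the getter

The __setattr__ override mirrors the __getattribute__ one: assignment goes through the registered setter when there is one, and falls back to normal attribute storage otherwise.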

@run (The Scoped Block)

@run is a decorator that will run the body of the decorated function, and then store the result of that function in place of the function’s name. It makes it easier to assign the results of complex statements to a variable, and get the advantages of functions having less leaky scopes than if or loop blocks.

def run(f):
    return f()

@run
def hello_world():
    return "Hello, World!"

print(hello_world)

@apply (Multi-line stream transformers)

def apply(transformer, iterable_):

    def _applicator(f):

        return(transformer(f, iterable_))

    return _applicator

@apply(map, range(100))
def fizzbuzzed(i):
    if i % 3 == 0 and i % 5 == 0:
        return "fizzbuzz"
    if i % 3 == 0:
        return "fizz"
    elif i % 5 == 0:
        return "buzz"
    else:
        return str(i)
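One aside of my own, not from the talk: because map is lazy, fizzbuzzed ends up as a map object rather than a list, so you would consume it with something like:

print(list(fizzbuzzed))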

Builders


def html(f):
    builder = HtmlNodeBuilder("html")
    f(builder)
    return builder.build()


class HtmlNodeBuilder:
    def __init__(self, tag_name):
        self.tag_name = tag_name
        self.nodes = []

    def node(self, f):
        builder = HtmlNodeBuilder(f.__name__)
        f(builder)
        self.nodes.append(builder.build())

    def text(self, text):
        self.nodes.append(text)

    def build(self):
        nodes = "\n".join(self.nodes)
        return f"<{self.tag_name}>\n{nodes}\n</{self.tag_name}>"


@html
def document(b):
    @b.node
    def head(b):
        @b.node
        def title(b):
            b.text("Hello, World!")

    @b.node
    def body(b):
        for i in range(10, 0, -1):
            @b.node
            def p(b):
                b.text(f"{i}")

Code Registries

This is an incomplete implementation of a code registry for handling simple text processing tasks:

def register(self, input, output):

    def _register_code(f):
        self.registry[(input, output)] = f
        return f

    return _register_code


in_type = (iterable[str], (WILDCARD, ))
out_type = (Counter, (WILDCARD, frequency))

@registry.register(in_type, out_type)
def count_strings(strings):
    return Counter(strings)


@registry.register(
    (iterable[str], (WILDCARD, )),
    (iterable[str], (WILDCARD, lowercase))
)
def words_to_lowercase(words): …


@registry.register(
    (iterable[str], (WILDCARD, )),
    (iterable[str], (WILDCARD, no_punctuation))
)
def words_without_punctuation(words): …


def find_steps(
    self, input_type, input_attrs, output_type, output_attrs
):
    hand_wave()


def give_me(self, input, output_type, output_attrs):

    steps = self.find_steps(
        type(input), (), output_type, output_attrs
    )

    temp = input
    for step in steps:
        temp = step(temp)

    return temp

,

Jan SchmidtOpenHMD update

A while ago, I wrote a post about how to build and test my Oculus CV1 tracking code in SteamVR using the SteamVR-OpenHMD driver. I have updated those instructions and moved them to https://noraisin.net/diary/?page_id=1048 – so use those if you’d like to try things out.

The pandemic continues to sap my time for OpenHMD improvements. Since my last post, I have been working on various refinements. The biggest visible improvements are:

  • Adding velocity and acceleration API to OpenHMD.
  • Rewriting the pose transformation code that maps from the IMU-centric tracking space to the device pose needed by SteamVR / apps.

Adding velocity and acceleration reporting is needed in VR apps that support throwing things. It means that throwing objects and using gravity-grab to fetch objects works in Half-Life: Alyx, making it playable now.

The rewrite to the pose transformation code fixed problems where the rotation of controller models in VR didn’t match the rotation applied in the real world. Controllers would appear attached to the wrong part of the hand, and rotate around the wrong axis. Movements feel more natural now.

Ongoing work – record and replay

My focus going forward is on fixing glitches that are caused by tracking losses or outliers. Those problems happen when the computer vision code either fails to match what the cameras see to the device LED models, or when it matches incorrectly.

Tracking failure leads to the headset view or controllers ‘flying away’ suddenly. Incorrect matching leads to controllers jumping and jittering to the wrong pose, or swapping hands. Either condition is very annoying.

Unfortunately, as the tracking has improved the remaining problems get harder to understand and there is less low-hanging fruit for improvement. Further, when the computer vision runs at 52Hz, it’s impossible to diagnose the reasons for a glitch in real time.

I’ve built a branch of OpenHMD that uses GStreamer to record the CV1 camera video, plus IMU and tracking logs into a video file.

To go with those recordings, I’ve been working on a replay and simulation tool, that uses the Godot game engine to visualise the tracking session. The goal is to show, frame-by-frame, where OpenHMD thought the cameras, headset and controllers were at each point in the session, and to be able to step back and forth through the recording.

Right now, I’m working on the simulation portion of the replay, that will use the tracking logs to recreate all the poses.

Ian BrownNGINX Ingress Controller in GKE

GKE in Production - Part 2. This tutorial is part of a series on creating, running and managing Kubernetes on GCP the way I do in my day job. In this episode, we are covering how to set up an NGINX ingress controller to handle incoming requests. Note: there may be some things I have skimmed over; if so, or if you see a glaring hole in my configuration, please drop me a line via the contact page linked at the top of the site.

,

Craige McWhirterAn Open Letter to the Bishop of Rockhampton

An Open Letter to the Bishop of Rockhampton

Good evening Michael. How are you?

I had the pleasure this morning of reading your stance against the Voluntary Assisted Dying legislation being considered by the Queensland Government.

I was surprised however to see that such a learned disciple of the Lord Jesus Christ was advocating to prolong the agony, pain and suffering of the vulnerable in our community.

My wife has a form of cancer for which there is no cure. Fiona suffers greatly every day and yet is a tower of strength and inspiration to those who know her and a beacon of hope to cancer sufferers around the world.

Despite this, Fiona knows that at some point the pain is going to be too great, even for her, after many years of suffering the pain will be overwhelming.

Your position on this legislation, which directly advocates prolonged suffering for many, many beautiful people in our community, is directly at odds with Jesus' most valuable teaching - compassion.

We would like the opportunity to meet with you this week to both discuss this issue and invite you to come and live with us for a month in our home. This offer provides you with an opportunity to answer the question from 1 John 2:3-6 - how would Jesus walk in our shoes?

It's only a couple of blocks from the Cathedral, you could even walk daily with our young children to TCC - they would appreciate such a wonderful learning opportunity and they have many, many questions that need answering.

Our home is inviting and comfortable. We value compassion, music, learning, community volunteering and sport. Our library is well stocked with what I expect is the best personal library in Central Queensland.

Whilst staying with us you will not only gain an understanding of how people with incurable cancer suffer but you can join us, walking in Jesus' footsteps as we provide food, tea and coffee for the homeless on the waterfront.

Please let me know a time when the three of us can meet to discuss this.

Thank you in advance with love from Craige and Fiona.

,

Robert CollinsA moment of history

I’ve been asked more than once what it was like at the beginning of Ubuntu, before it was a company, when an email from someone I’d never heard of came into my mailbox.

We’re coming up on 20 years now since Ubuntu was founded, and I had cause to do some spelunking into IMAP archives recently… while there I took the opportunity to grab the very first email I received.

The Ubuntu long shot succeeded wildly. Of course, we liked to joke about how spammy those emails were: cold-calling a raft of Debian developers with job offers; some of them were closer to phishing attacks :). This very early one – I was the second employee (though I started at 4 days a week to transition my clients gradually) – was less so.

I think it’s interesting though to note how explicit a gamble this was framed as: a time-limited experiment, funded for a year. As the company scaled, this very rapidly became a hiring problem and the horizon had to be pushed out to 2 years to get folk to join.

And of course, while we started with arch in earnest, we rapidly hit significant usability problems, some of which were solvable with porcelain and shallow non-architectural changes, and we initially built patches, and then the bazaar VCS project, to tackle those. But others were not: for instance, I recall exceeding the 32K hard link limit on ext3 due to a single long history during a VCS conversion. The sum of these challenges led us to create the bzr project, a ground-up rethink of our version control needs, architecture, implementation and user-experience. While ultimately git has conquered all, bzr had – still has in fact – extremely loyal advocates, due to its laser-sharp focus on usability.

Anyhow, here it is: one of the original no-name-here-yet, aka Ubuntu, introductory emails (with permission from Mark, of course). When I clicked through to the website Mark provided there was a link there to a fantastical website about a space tourist… not what I had expected to be reading in Adelaide during LCA 2004.


From: Mark Shuttleworth <xxx@xxx>
To: Robert Collins <xxx@xxx>
Date: Thu, 15 Jan 2004, 04:30

Tom Lord gave me your email address, I believe he’s
already sent you the email that I sent him so I’m sure
you have some background.

In short, I am going to fund some open source
development for a year. This is part of a new project
that I will be getting off the ground in the coming
weeks. I don’t know where it will lead, it’s flying in
the face of a stiff breeze but I think at the end of
the day it will at least fund a few very good open
source developers for a full year to work on the
projects they like most.

One of the pieces of the puzzle is high end source
code management. I’ll be looking to build an
infrastructure that will manage source code for
between 100 and 8000 open source projects (yes,
there’s a big difference between the two, I don’t know
at which end of the spectrum we will be at the end of
the year but our infrastructure will have to at least
be capable of scaling to the latter within two years)
with upwards of 2000 developers, drawing code from a
variety of sources, playing with it and spitting it
out regularly in nice packages.

Arch and Subversion seem to be the two leading
contenders for “next generation open source sccm”. I’d
be interested in your thoughts on the two of them, and
how they stack up. I’m looking to hire one person who
will lead that part of the effort. They’ll work alone
from home, and be responsible for two things. First,
extending the tool (arch or svn) in ways that help the
project. Such extensions will be released under an
open source licence, and hopefully embraced by the
tools maintainers and included in the mainline code
for the tool. And second, they will be responsible for
our large-scale implementation of SCCM, using that
tool, and building the management scripts and other
infrastructure to support such a large, and hopefully
highly automated, set of repositories.

Would you be interested in this position? What
attributes and experience do you think would make you
a great person to have on the team? What would your
salary expectation be, as a monthly figure, for a one
year contract full time?

I’m currently on your continent, well, just off it. On
Lizard Island, up North. Am headed today for Brisbane,
then on the 17th to Launceston via Melbourne. If you
happen to be on any of those stops, would you be
interested in meeting up to discuss it further?

If you’re curious you can find out a bit more about me
at www.markshuttleworth.com. This project is much
lower key than some of what you’ll find there. It’s a
very long shot indeed. But if at worst all that
happens is a bunch of open source work gets funded at
my expense I’ll feel it was money well spent.

Cheers,
Mark

=====

“Good judgement comes from experience, and often experience
comes from bad judgement” – Rita Mae Brown


,

Arjen LentzClassic McEliece and the NIST search for post-quantum crypto

I have always liked cryptography, and public-key cryptography in particular. When Pretty Good Privacy (PGP) first came out in 1991, I not only started using it, but also looked at the documentation and the code to see how it worked. I created my own implementation in C using very small keys, just to understand it better.

Cryptography has been running a race against both faster and cheaper computing power. And these days, with banking and most other aspects of our lives entirely relying on secure communications, it’s a very juicy target for bad actors.

About 5 years ago, the US National Institute of Standards and Technology (NIST) initiated a search for cryptographic algorithms that should withstand a near-future world where quantum computers with a significant number of qubits are a reality. There have been a number of rounds; mid 2020 saw round 3 and the selection of the finalists.

This submission caught my eye some time ago: Classic McEliece, and out of the four finalists it’s the only one that is not lattice-based [wikipedia link].

For Public Key Encryption and Key Exchange Mechanism, Prof Bill Buchanan thinks that the winner will be lattice-based, but I am not convinced.

Robert McEliece at his retirement in 2007

A tiny side-track: you may wonder where the McEliece name comes from? From mathematician Robert McEliece (1942-2019). McEliece developed his cryptosystem in 1978, so it’s not just named after him, he designed it. For various reasons that have nothing to do with the mathematical solidity of the ideas, it didn’t get used at the time. He did plenty of other cool things, too. From his Caltech obituary:

He made fundamental contributions to the theory and design of channel codes for communication systems—including the interplanetary telecommunication systems that were used by the Voyager, Galileo, Mars Pathfinder, Cassini, and Mars Exploration Rover missions.

Back to lattices, there are both unknowns (aspects that have not been studied in exhaustive depth) and recent mathematical attacks, both of which create uncertainty – in the crypto sphere as well as for business and politics. Given how long it takes for crypto schemes to get widely adopted, the latter two are somewhat relevant, particularly since cyber security is a hot topic.

Lattices are definitely interesting, but given what we know so far, it is my feeling that systems based on lattices are more likely to be proven breakable than Classic McEliece, which comes to this finalists’ table with a 40+ year track record of in-depth analysis. Mind that all finalists are of course solid at this stage – but NIST’s thoughts on expected developments and breakthroughs are what is likely to decide the winner. NIST are not looking for shiny, they are looking for very, very solid in all possible ways.

Prof Buchanan recently published implementations for the finalists, and did some benchmarks where we can directly compare them against each other.

We can see that Classic McEliece’s key generation is CPU intensive, but is that really a problem? The large size of its public key may be more of a factor (disadvantage), however I think the small ciphertext more than offsets that disadvantage.

As we’re nearing the end of the NIST process, in my opinion, fast encryption/decryption and small ciphertext, combined with the long track record of in-depth analysis, may still see Classic McEliece come out the winner.

The post Classic McEliece and the NIST search for post-quantum crypto first appeared on Lentz family blog.

,

Ian BrownKubernetes Basic Setup

GKE in Production - Part 1

This tutorial is the first in a series on creating, running and managing Kubernetes on GCP the way I do in my day job. Note: there may be some things I have skimmed over; if so, or if you see a glaring hole in my configuration, please drop me a line via the contact page linked at the top of the site.

What we will build

In this first tutorial, we will build a standard GKE cluster on Google Cloud Platform and deploy the hello world container to confirm everything is working.

,

Dave HallA Rube Goldberg Machine for Container Workflows

Learn how can you securely copy container images from GHCR to ECR.

,

Craige McWhirterThe Consensus on Branch Names

Consensus: Decisions are reached in a dialogue between equals

There was some kerfuffle in 2020 over the use of the term master in git, the origins of the term were resolutely settled so I set about renaming my primary branches to other words.

The one that most people seemed to be using was main, so I started using it too. While main was conveniently brief, it still felt inadequate. Something was wrong and it kept bubbling away in the background.

The word that kept percolating through was consensus.

I kept dismissing it for all the obvious reasons, such as it was too long, too unwieldy, too obscure or just simply not used commonly enough to be familiar or well understood.

The word was persistent though and consensus kept coming back.

One morning recently, I was staring at a git tree when the realisation slapped me in the face that in a git workflow the primary / master / main branches reflected a consensus point in that workflow.

Consensus: Decisions are reached in a dialogue between equals

That realisation settled it pretty hard for me, consensus not only accurately reflected the point in the workflow but was also the most correct English word for what that branch represented.

Continue the conversation on Matrix.

,

Chris NeugebauerAdding a PurpleAir monitor to Home Assistant

Living in California, I’ve (sadly) grown accustomed to needing to keep track of our local air quality index (AQI) ratings, particularly as we live close to places where large wildfires happen every other year.

Last year, Josh and I bought a PurpleAir outdoor air quality meter, which has been great. We contribute our data to a collection of very local air quality meters, which is important, since the hilly nature of the North Bay means that the nearest government air quality ratings can be significantly different to what we experience here in Petaluma.

I recently went looking to pull my PurpleAir sensor data into my Home Assistant setup. Unfortunately, the PurpleAir API does not return the AQI metric for air quality, only the raw PM1.0/PM2.5/PM10 numbers. After some searching, I found a nice template sensor solution on the Home Assistant forums, which I’ve modernised by adding the AQI as a sub-sensor, and adding unique ID fields to each useful sensor, so that you can assign them to a location.

You’ll end up with sensors for raw PM2.5, the PM2.5 AQI value, the US EPA air quality category, temperature, relative humidity and air pressure.

How to use this

First up, visit the PurpleAir Map, find the sensor you care about, click “get this widget”, and then “JSON”. That will give you the URL to set as the resource key in purpleair.yaml.

Adding the configuration

In HomeAssistant, add the following line to your configuration.yaml:

sensor: !include purpleair.yaml

and then add the following contents to purpleair.yaml


 - platform: rest
   name: 'PurpleAir'

   # Substitute in the URL of the sensor you care about.  To find the URL, go
   # to purpleair.com/map, find your sensor, click on it, click on "Get This
   # Widget" then click on "JSON".
   resource: https://www.purpleair.com/json?key={KEY_GOES_HERE}&show={SENSOR_ID}

   # Only query once a minute to avoid rate limits:
   scan_interval: 60

   # Set this sensor to be the AQI value.
   #
   # Code translated from JavaScript found at:
   # https://docs.google.com/document/d/15ijz94dXJ-YAZLi9iZ_RaBwrZ4KtYeCy08goGBwnbCU/edit#
   value_template: >
     {{ value_json["results"][0]["Label"] }}
   unit_of_measurement: ""
   # The value of the sensor can't be longer than 255 characters, but the
   # attributes can.  Store away all the data for use by the templates below.
   json_attributes:
     - results

 - platform: template
   sensors:
     purpleair_aqi:
       unique_id: 'purpleair_SENSORID_aqi_pm25'
       friendly_name: 'PurpleAir PM2.5 AQI'
       value_template: >
         {% macro calcAQI(Cp, Ih, Il, BPh, BPl) -%}
           {{ (((Ih - Il)/(BPh - BPl)) * (Cp - BPl) + Il)|round|float }}
         {%- endmacro %}
         {% if (states('sensor.purpleair_pm25')|float) > 1000 %}
           invalid
         {% elif (states('sensor.purpleair_pm25')|float) > 350.5 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 500.0, 401.0, 500.0, 350.5) }}
         {% elif (states('sensor.purpleair_pm25')|float) > 250.5 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 400.0, 301.0, 350.4, 250.5) }}
         {% elif (states('sensor.purpleair_pm25')|float) > 150.5 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 300.0, 201.0, 250.4, 150.5) }}
         {% elif (states('sensor.purpleair_pm25')|float) > 55.5 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 200.0, 151.0, 150.4, 55.5) }}
         {% elif (states('sensor.purpleair_pm25')|float) > 35.5 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 150.0, 101.0, 55.4, 35.5) }}
         {% elif (states('sensor.purpleair_pm25')|float) > 12.1 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 100.0, 51.0, 35.4, 12.1) }}
         {% elif (states('sensor.purpleair_pm25')|float) >= 0.0 %}
           {{ calcAQI((states('sensor.purpleair_pm25')|float), 50.0, 0.0, 12.0, 0.0) }}
         {% else %}
           invalid
         {% endif %}
       unit_of_measurement: "bit"
     purpleair_description:
       unique_id: 'purpleair_SENSORID_description'
       friendly_name: 'PurpleAir AQI Description'
       value_template: >
         {% if (states('sensor.purpleair_aqi')|float) >= 401.0 %}
           Hazardous
         {% elif (states('sensor.purpleair_aqi')|float) >= 301.0 %}
           Hazardous
         {% elif (states('sensor.purpleair_aqi')|float) >= 201.0 %}
           Very Unhealthy
         {% elif (states('sensor.purpleair_aqi')|float) >= 151.0 %}
           Unhealthy
         {% elif (states('sensor.purpleair_aqi')|float) >= 101.0 %}
           Unhealthy for Sensitive Groups
         {% elif (states('sensor.purpleair_aqi')|float) >= 51.0 %}
           Moderate
         {% elif (states('sensor.purpleair_aqi')|float) >= 0.0 %}
           Good
         {% else %}
           undefined
         {% endif %}
       entity_id: sensor.purpleair
     purpleair_pm25:
       unique_id: 'purpleair_SENSORID_pm25'
       friendly_name: 'PurpleAir PM 2.5'
       value_template: "{{ state_attr('sensor.purpleair','results')[0]['PM2_5Value'] }}"
       unit_of_measurement: "μg/m3"
       entity_id: sensor.purpleair
     purpleair_temp:
       unique_id: 'purpleair_SENSORID_temperature'
       friendly_name: 'PurpleAir Temperature'
       value_template: "{{ state_attr('sensor.purpleair','results')[0]['temp_f'] }}"
       unit_of_measurement: "°F"
       entity_id: sensor.purpleair
     purpleair_humidity:
       unique_id: 'purpleair_SENSORID_humidity'
       friendly_name: 'PurpleAir Humidity'
       value_template: "{{ state_attr('sensor.purpleair','results')[0]['humidity'] }}"
       unit_of_measurement: "%"
       entity_id: sensor.purpleair
     purpleair_pressure:
       unique_id: 'purpleair_SENSORID_pressure'
       friendly_name: 'PurpleAir Pressure'
       value_template: "{{ state_attr('sensor.purpleair','results')[0]['pressure'] }}"
       unit_of_measurement: "hPa"
       entity_id: sensor.purpleair

Quirks

I had difficulty getting the AQI to display as a numeric graph when I didn’t set a unit. I went with bit, and that worked just fine. 🤷‍♂️

,

Stewart SmithAn Unearthly Child

So, this idea has been brewing for a while now… try and watch all of Doctor Who. All of it. All 38 seasons. Today(ish), we started. First up, from 1963 (first aired not quite when intended due to the Kennedy assassination): An Unearthly Child. The first episode of the first serial.

A lot of iconic things are there from the start: the music, the Police Box, embarrassing moments of not quite remembering what time one is in, and normal humans accidentally finding their way into the TARDIS.

I first saw this way back when I was a child, when it was repeated on ABC TV in Australia for some anniversary of Doctor Who (I forget which one). Well, I saw all but the first episode, as the train home was delayed and stopped outside Caulfield for no reason for ages. Some things never change.

Of course, being a show from the early 1960s, there’s some rougher spots. We’re not about to have the picture of diversity, and there’s going to be casual racism and sexism. What will be interesting is noticing these things today, and contrasting with my memory of them at the time (at least for episodes I’ve seen before), and what I know of the attitudes of the time.

“This year-ometer is not calculating properly” is a very 2020 line though (technically from the second episode).

,

Craige McWhirterRaising Free People: Unschooling as Liberation and Healing Work

by Akilah S. Richards

Raising Free People: Unschooling as Liberation and Healing Work

I'm making an effort to try and keep my reading more contemporary this year and this is the book I've started with - an insight into the Unschooling movement, a movement I was wholly unaware of.

Akilah's first-person writing uses her family's journey through unschooling to illustrate the traps, setbacks, successes and triumphs her family has experienced along the way.

Our family defines unschooling as a child-trusting, anti-oppression, liberatory, love-centered approach to parenting and caregiving. As unschoolers, the four of us operate with a core belief that children own themselves and that parents and other adults work with children to nurture their confident autonomy not their ability to obey adults’ directives.

-- Akilah S. Richards

There are plenty of parenting nuances I'd already picked up along the way but many I had not thought about deliberately or had collected as a considered approach which I found insightful. There was also a lot of completely new perspectives on parenting which I found refreshing and intuitive.

If we can accept any form of oppression, we are susceptible to all forms of oppression. That mindset is imperative in our efforts to raise free people, because we are retraining ourselves to spot the ways we participate in oppression

-- Akilah S. Richards

Both Akilah's journey, the lessons learned and the insights that she brings ring strongly of the Socratic notion that to change the world, we must start with ourselves. It's much easier to focus our energies externally at politicians or corporations but if we do not start with ourselves and those we raise, we are just perpetuating the problems, not removing them.

This is why raising free people work is revolutionary. It’s both pushback and buildup; it is protest but also pivoting. It’s getting mad and frustrated and deciding exactly what to do to feel better and to live better, to not just fight against oppression and injustice but to facilitate freedom and prioritize joy.

-- Akilah S. Richards

"Raising free people" has been and continues to be the over-arching ethos of my approach to parenting, which is what initially attracted me to this book. While I did certainly get a lot self-congratulatory moments where the author made some key points I was already all over, there were also plenty of times I felt rightly called out for having missed and where I need to do better.

This is not only a highly recommended book to read but also one I'll be keeping handy to re-read and use as an occasional reference and touchstone.

,

Jan SchmidtRift CV1 – Getting close now…

It’s been a while since my last post about tracking support for the Oculus Rift in February. There’s been big improvements since then – working really well a lot of the time. It’s gone from “If I don’t make any sudden moves, I can finish an easy Beat Saber level” to “You can’t hide from me!” quality.

Equally, there are still enough glitches and corner cases that I think I’ll still be at this a while.

Here’s a video from 3 weeks ago of (not me) playing Beat Saber on Expert+ setting showing just how good things can be now:

Beat Saber – Skunkynator playing Expert+, Mar 16 2021

Strap in. Here’s what I’ve worked on in the last 6 weeks:

Pose Matching improvements

Most of the biggest improvements have come from improving the computer vision algorithm that’s matching the observed LEDs (blobs) in the camera frames to the 3D models of the devices.

I split the brute-force search algorithm into 2 phases. It now does a first pass looking for ‘obvious’ matches. In that pass, it does a shallow graph search of blobs and their nearest few neighbours against LEDs and their nearest neighbours, looking for a match using a “Strong” match metric. A match is considered strong if expected LEDs match observed blobs to within 1.5 pixels.

Coupled with checks on the expected orientation (matching the Gravity vector detected by the IMU) and the pose prior (expected position and orientation are within predicted error bounds) this short-circuit on the search is hit a lot of the time, and often completes within 1 frame duration.

In the remaining tricky cases, where a deeper graph search is required in order to recover the pose, the initial search reduces the number of LEDs and blobs under consideration, speeding up the remaining search.

I also added an LED size model to the mix – for a candidate pose, it tries to work out how large (in pixels) each LED should appear, and use that as a bound on matching blobs to LEDs. This helps reduce mismatches as devices move further from the camera.
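For illustration only, here is a minimal sketch of that kind of size bound (my own example with made-up numbers, not the OpenHMD code): a simple pinhole projection gives the expected pixel size of an LED at a given distance, and blobs far outside that bound can be skipped.

#include <stdbool.h>
#include <stdio.h>

/* Pinhole model: projected diameter (in pixels) of an LED of diameter
   led_diameter_m, seen at distance_m by a camera with the given focal
   length expressed in pixels. */
static double expected_led_size_px(double led_diameter_m, double distance_m,
                                   double focal_length_px)
{
    return focal_length_px * led_diameter_m / distance_m;
}

/* Accept a blob only if its size is within a fractional tolerance of
   the size we expect for this candidate pose. */
static bool blob_size_plausible(double blob_size_px, double expected_px,
                                double tolerance)
{
    return blob_size_px >= expected_px * (1.0 - tolerance) &&
           blob_size_px <= expected_px * (1.0 + tolerance);
}

int main(void)
{
    double expected = expected_led_size_px(0.007, 1.5, 700.0); /* made-up values */
    printf("expected ~%.1f px; 10 px blob plausible: %d\n",
           expected, blob_size_plausible(10.0, expected, 0.5));
    return 0;
}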

LED labelling

When a brute-force search for pose recovery completes, the system now knows the identity of various blobs in the camera image. One way it avoids a search next time is to transfer the labels into future camera observations using optical-flow tracking on the visible blobs.

The problem is that, even sped up, the search can still take a few frame durations to complete. Previously LED labels would be transferred from frame to frame as they arrived, but there’s now a unique ID associated with each blob that allows the labels to be transferred even several frames later, once their identity is known.

IMU Gyro scale

One of the problems with reverse engineering is the guesswork around exactly what different values mean. I was looking into why the controller movement felt “swimmy” under fast motions, and one thing I found was that the interpretation of the gyroscope readings from the IMU was incorrect.

The touch controllers report IMU angular velocity readings directly as a 16-bit signed integer. Previously the code would take the reading and divide by 1024 and use the value as radians/second.

From teardowns of the controller, I know the IMU is an Invensense MPU-6500. From the datasheet, the reported value is actually in degrees per second and appears to be configured for the +/- 2000 °/s range. That yields a calculation of Gyro-rad/s = Gyro-°/s * (2000 / 32768) * (π/180) – or a divisor of 938.734.

The 1024 divisor was under-estimating rotation speed by about 10% – close enough to work until you start moving quickly.
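For illustration, here's a tiny sketch of the corrected conversion using the numbers quoted above (my own example, not code from the OpenHMD driver):

#include <stdint.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Convert a raw 16-bit gyro reading to radians/second, assuming the
   MPU-6500 is configured for the +/- 2000 deg/s full-scale range. */
static double gyro_raw_to_rad_per_sec(int16_t raw)
{
    /* Equivalent to dividing by ~938.734, rather than the old 1024. */
    return (double)raw * (2000.0 / 32768.0) * (PI / 180.0);
}

int main(void)
{
    int16_t raw = 9387; /* an arbitrary sample reading */
    printf("old: %.3f rad/s  corrected: %.3f rad/s\n",
           raw / 1024.0, gyro_raw_to_rad_per_sec(raw));
    return 0;
}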

Limited interpolation

If we don’t find a device in the camera views, the fusion filter predicts motion using the IMU readings – but that quickly becomes inaccurate. In the worst case, the controllers fly off into the distance. To avoid that, I added a limit of 500ms for ‘coasting’. If we haven’t recovered the device pose by then, the position is frozen in place and only rotation is updated until the cameras find it again.
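A minimal sketch of that kind of coasting cut-off (my own illustration; the names and exact mechanism in OpenHMD will differ):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COASTING_LIMIT_NS (500u * 1000u * 1000u) /* 500 ms */

typedef struct {
    uint64_t last_vision_lock_ns; /* when the cameras last confirmed the pose */
} device_state;

/* Trust the IMU-only position prediction only within the coasting window;
   after that, freeze position and keep updating orientation only. */
static bool position_prediction_valid(const device_state *dev, uint64_t now_ns)
{
    return (now_ns - dev->last_vision_lock_ns) < COASTING_LIMIT_NS;
}

int main(void)
{
    device_state dev = { .last_vision_lock_ns = 0 };
    printf("at 100ms: %d, at 600ms: %d\n",
           position_prediction_valid(&dev, 100u * 1000u * 1000u),
           position_prediction_valid(&dev, 600u * 1000u * 1000u));
    return 0;
}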

Exponential filtering

I implemented a 1-Euro exponential smoothing filter on the output poses for each device. This is an idea from the Project Esky driver for Project North Star/Deck-X AR headsets, and almost completely eliminates jitter in the headset view and hand controllers shown to the user. The tradeoff is against introducing lag when the user moves quickly – but there are some tunables in the exponential filter to play with for minimising that. For now I’ve picked some values that seem to work reasonably.
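For reference, here is a minimal sketch of a 1-Euro filter for a single scalar channel – my own illustration of the general technique, with made-up parameter values, not the code or tuning used in OpenHMD:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct {
    double min_cutoff;   /* Hz: lower means smoother when nearly still */
    double beta;         /* speed coefficient: higher means less lag in fast motion */
    double d_cutoff;     /* cutoff for the derivative estimate */
    double x_prev, dx_prev;
    int initialised;
} one_euro;

/* Exponential smoothing factor for a given cutoff frequency and timestep. */
static double smoothing_factor(double cutoff_hz, double dt)
{
    double tau = 1.0 / (2.0 * M_PI * cutoff_hz);
    return 1.0 / (1.0 + tau / dt);
}

static double one_euro_update(one_euro *f, double x, double dt)
{
    if (!f->initialised) {
        f->initialised = 1;
        f->x_prev = x;
        f->dx_prev = 0.0;
        return x;
    }
    /* Estimate (and smooth) the signal's rate of change. */
    double dx = (x - f->x_prev) / dt;
    double a_d = smoothing_factor(f->d_cutoff, dt);
    double dx_hat = a_d * dx + (1.0 - a_d) * f->dx_prev;

    /* Faster motion -> higher cutoff -> less smoothing (and less lag). */
    double cutoff = f->min_cutoff + f->beta * fabs(dx_hat);
    double a = smoothing_factor(cutoff, dt);
    double x_hat = a * x + (1.0 - a) * f->x_prev;

    f->x_prev = x_hat;
    f->dx_prev = dx_hat;
    return x_hat;
}

int main(void)
{
    one_euro f = { .min_cutoff = 1.0, .beta = 0.1, .d_cutoff = 1.0 };
    /* Filter a slightly noisy ramp sampled at 500 Hz. */
    for (int i = 0; i < 5; i++) {
        double noisy = i * 0.01 + ((i % 2) ? 0.002 : -0.002);
        printf("%f\n", one_euro_update(&f, noisy, 1.0 / 500.0));
    }
    return 0;
}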

Non-blocking radio

Communications with the touch controllers happens through USB radio command packets sent to the headset. The main use of radio commands in OpenHMD is to read the JSON configuration block for each controller that is programmed in at the factory. The configuration block provides the 3D model of LED positions as well as initial IMU bias values.

Unfortunately, reading the configuration block takes a couple of seconds on startup, and blocks everything while it’s happening. Oculus saw that problem and added a checksum in the controller firmware. You can read the checksum first and if it hasn’t changed use a local cache of the configuration block. Eventually, I’ll implement that caching mechanism for OpenHMD but in the meantime it still reads the configuration blocks on each startup.

As an interim improvement I rewrote the radio communication logic to use a state machine that is checked in the update loop – allowing radio communications to be interleaved without blocking the regular processing of events. It still interferes a bit, but no longer causes a full multi-second stall as each hand controller turns on.

Haptic feedback

The hand controllers have haptic feedback ‘rumble’ motors that really add to the immersiveness of VR by letting you sense collisions with objects. Until now, OpenHMD hasn’t had any support for applications to trigger haptic events. I spent a bit of time looking at USB packet traces with Philipp Zabel and we figured out the radio commands to turn the rumble motors on and off.

In the Rift CV1, the haptic motors have a mode where you schedule feedback events into a ringbuffer – effectively they operate like a low frequency audio device. However, that mode was removed for the Rift S (and presumably in the Quest devices) – and deprecated for the CV1.

With that in mind, I aimed for implementing the unbuffered mode, with explicit ‘motor on + frequency + amplitude’ and ‘motor off’ commands sent as needed. Thanks to already having rewritten the radio communications to use a state machine, adding haptic commands was fairly easy.

The big question mark is around what API OpenHMD should provide for haptic feedback. I’ve implemented something simple for now, to get some discussion going. It works really well and adds hugely to the experience. That code is in the https://github.com/thaytan/OpenHMD/tree/rift-haptics branch, with a SteamVR-OpenHMD branch that uses it in https://github.com/thaytan/SteamVR-OpenHMD/tree/controller-haptics-wip

Problem areas

Unexpected tracking losses

I’d say the biggest problem right now is unexpected tracking loss and incorrect pose extractions when I’m not expecting them. Especially my right controller will suddenly glitch and start jumping around. Looking at a video of the debug feed, it’s not obvious why that’s happening:

To fix cases like those, I plan to add code to log the raw video feed and the IMU information together so that I can replay the video analysis frame-by-frame and investigate glitches systematically. Those recordings will also work as a regression suite to test future changes.

Sensor fusion efficiency

The Kalman filter I have implemented works really nicely – it does the latency compensation, predicts motion and extracts sensor biases all in one place… but it has a big downside of being quite expensive in CPU. The Unscented Kalman Filter CPU cost grows at O(n^3) with the size of the state, and the state in this case is 43 dimensional – 22 base dimensions, and 7 per latency-compensation slot. Running 1000 updates per second for the HMD and 500 for each of the hand controllers adds up quickly.

At some point, I want to find a better / cheaper approach to the problem that still provides low-latency motion predictions for the user while still providing the same benefits around latency compensation and bias extraction.

Lens Distortion

To generate a convincing illusion of objects at a distance in a headset that’s only a few centimetres deep, VR headsets use some interesting optics. The LCD/OLED panels displaying the output get distorted heavily before they hit the user’s eyes. What the software generates needs to compensate by applying the right inverse distortion to the output video.

Everyone that tests the CV1 notices that the distortion is not quite correct. As you look around, the world warps and shifts annoyingly. Sooner or later that needs fixing. That’s done by taking photos of calibration patterns through the headset lenses and generating a distortion model.

Camera / USB failures

The camera feeds are captured using a custom user-space UVC driver implementation that knows how to set up the special synchronisation settings of the CV1 and DK2 cameras, and then repeatedly schedules isochronous USB packet transfers to receive the video.

Occasionally, some people experience failure to re-schedule those transfers. The kernel rejects them with an out-of-memory error failing to set aside DMA memory (even though it may have been running fine for quite some time). It’s not clear why that happens – but the end result at the moment is that the USB traffic for that camera dies completely and there’ll be no more tracking from that camera until the application is restarted.

Often once it starts happening, it will keep happening until the PC is rebooted and the kernel memory state is reset.

Occluded cases

Tracking generally works well when the cameras get a clear shot of each device, but there are cases like sighting down the barrel of a gun where we expect that the user will line up the controllers in front of one another, and in front of the headset. In that case, even though we probably have a good idea where each device is, it can be hard to figure out which LEDs belong to which device.

If we already have a good tracking lock on the devices, I think it should be possible to keep tracking even down to 1 or 2 LEDs being visible – but the pose assessment code will have to be aware that’s what is happening.

Upstreaming

April 14th marks 2 years since I first branched off OpenHMD master to start working on CV1 tracking. How hard can it be, I thought? I’ll knock this over in a few months.

Since then I’ve accumulated over 300 commits on top of OpenHMD master that eventually all need upstreaming in some way.

One thing people have expressed as a prerequisite for upstreaming is to try and remove the OpenCV dependency. The tracking relies on OpenCV to do camera distortion calculations, and for their PnP implementation. It should be possible to reimplement both of those directly in OpenHMD with a bit of work – possibly using the fast LambdaTwist P3P algorithm that Philipp Zabel wrote, that I’m already using for pose extraction in the brute-force search.

Others

I’ve picked the top issues to highlight here. https://github.com/thaytan/OpenHMD/issues has a list of all the other things that are still on the radar for fixing eventually.

Other Headsets

At some point soon, I plan to put a pin in the CV1 tracking and look at adapting it to more recent inside-out headsets like the Rift S and WMR headsets. I implemented 3DOF support for the Rift S last year, but getting to full positional tracking for that and other inside-out headsets means implementing a SLAM/VIO tracking algorithm to track the headset position.

Once the headset is tracking, the code I’m developing here for CV1 to find and track controllers will hopefully transfer across – the difference with inside-out tracking is that the cameras move around with the headset. Finding the controllers in the actual video feed should work much the same.

Sponsorship

This development happens mostly in my spare time and partly as open source contribution time at work at Centricular. I am accepting funding through Github Sponsorships to help me spend more time on it – I’d really like to keep helping Linux have top-notch support for VR/AR applications. Big thanks to the people that have helped get this far.

,

Stewart Smithlibeatmydata v129

Every so often, I release a new libeatmydata. This has not happened for a long time. This release is just some bug fixes, most of which have been in the Debian package for some time; I’ve just been lazy and not sat down and merged them.

git clone https://github.com/stewartsmith/libeatmydata.git

Download the source tarball from here: libeatmydata-129.tar.gz and GPG signature: libeatmydata-129.tar.gz.asc from my GPG key.

Or, feel free to grab some Fedora RPMs:

Releases published also in the usual places:

,

BlueHackersWorld bipolar day 2021

Today, 30 March, is World Bipolar Day.

Vincent van Gogh - Worn Out

Why that particular date? It’s Vincent van Gogh’s birthday (1853), and there is a fairly strong argument that the Dutch painter suffered from bipolar (among other things).

The image on the side is Vincent’s drawing “Worn Out” (from 1882), and it seems to capture the feeling rather well – whether (hypo)manic, depressed, or mixed. It’s exhausting.

Bipolar is complicated, often undiagnosed or misdiagnosed, and when only treated with anti-depressants, it can trigger the (hypo)mania – essentially dragging that person into that state near-permanently.

Have you heard of Bipolar II?

Hypo-mania is the “lesser” form of mania that distinguishes Bipolar I (the classic “manic depressive” syndrome) from Bipolar II. It’s “lesser” only in the sense that, rather than someone going so hyper they may think they can fly (Bipolar I is often identified when someone in a manic state gets admitted to hospital – good catch!), with Bipolar II the hypo-mania may actually exhibit as anger. Anger in general, against nothing in particular but potentially everyone and everything around them. Or, if it’s a mixed episode, anger combined with strong negative thoughts. Either way, it does not look like classic mania. It is, however, exhausting and can be very debilitating.

Bipolar II people often present to a doctor while in a depressed state, and GPs (not being psychiatrists) may not do a full diagnosis. Note that D.A.S. and similar test sheets are screening tools, they are not diagnostic. A proper diagnosis is more complex than filling in a form with some questions (who would have thought!)

Call to action

If you have a diagnosis of depression, only from a GP, and are on medication for this, I would strongly recommend you also get a referral to a psychiatrist to confirm that diagnosis.

Our friends at the awesome Black Dog Institute have excellent information on bipolar, as well as a quick self-test – if that shows some likelihood of bipolar, go get that referral and follow up ASAP.

I will be writing more about the topic in the coming time.

The post World bipolar day 2021 first appeared on BlueHackers.org.

,

Dave HallParameter Store vs Secrets Manager

Which AWS managed service is best for storing and managing your secrets?

,

Dave HallA Lost Parcel Results in a New Website

When Australia Post lost a parcel, we found a lot of problems with one of their websites.

,

Jan SchmidtRift CV1 – Testing SteamVR

Update:

This post documented an older method of building SteamVR-OpenHMD. I’ve moved those instructions to a page here. That version will be kept up to date with any future changes, so go there.


I’ve had a few people ask how to test my OpenHMD development branch of Rift CV1 positional tracking in SteamVR. Here’s what I do:

  • Make sure Steam + SteamVR are already installed.
  • Clone the SteamVR-OpenHMD repository:
git clone --recursive https://github.com/ChristophHaag/SteamVR-OpenHMD.git
  • Switch the internal copy of OpenHMD to the right branch:
cd subprojects/openhmd
git remote add thaytan-github https://github.com/thaytan/OpenHMD.git
git fetch thaytan-github
git checkout -b rift-kalman-filter thaytan-github/rift-kalman-filter
cd ../../
  • Use meson to build and register the SteamVR-OpenHMD binaries. You may need to install meson first (see below):
meson -Dbuildtype=release build
ninja -C build
./install_files_to_build.sh
./register.sh
  • It is important to configure in release mode, as the kalman filtering code is generally too slow for real-time in debug mode (it has to run 2000 times per second)
  • Make sure your USB devices are accessible to your user account by configuring udev. See the OpenHMD guide here: https://github.com/OpenHMD/OpenHMD/wiki/Udev-rules-list
  • Please note – only Rift sensors on USB 3.0 ports will work right now. Supporting cameras on USB 2.0 requires someone implementing JPEG format streaming and decoding.
  • It can be helpful to test OpenHMD is working by running the simple example. Check that it’s finding camera sensors at startup, and that the position seems to change when you move the headset:
./build/subprojects/openhmd/openhmd_simple_example
  • Calibrate your expectations for how well tracking is working right now! Hint: It’s very experimental 🙂
  • Start SteamVR. Hopefully it should detect your headset and the light(s) on your Rift Sensor(s) should power on.

Meson

I prefer the Meson build system here. There’s also a cmake build for SteamVR-OpenHMD you can use instead, but I haven’t tested it in a while and it sometimes breaks as I work on my development branch.

If you need to install meson, there are instructions here – https://mesonbuild.com/Getting-meson.html summarising the various methods.

I use a copy in my home directory, but you need to make sure ~/.local/bin is in your PATH

pip3 install --user meson

,

Jan SchmidtRift CV1 – Pose rejection

I spent some time this weekend implementing a couple of my ideas for improving the way the tracking code in OpenHMD filters and rejects (or accepts) possible poses when trying to match visible LEDs to the 3D models for each device.

In general, the tracking proceeds in several steps (in parallel for each of the 3 devices being tracked):

  1. Do a brute-force search to match LEDs to 3D models, then (if matched)
    1. Assign labels to each LED blob in the video frame saying what LED they are.
    2. Send an update to the fusion filter about the position / orientation of the device
  2. Then, as each video frame arrives:
    1. Use motion flow between video frames to track the movement of each visible LED
    2. Use the IMU + vision fusion filter to predict the position/orientation (pose) of each device, and calculate which LEDs are expected to be visible and where.
  3. Try and match up and refine the poses using the predicted pose prior and labelled LEDs. In the best case, the LEDs are exactly where the fusion predicts they’ll be. More often, the orientation is mostly correct, but the position has drifted and needs correcting. In the worst case, we send the frame back to step 1 and do a brute-force search to reacquire an object.

The goal is to always assign the correct LEDs to the correct device (so you don’t end up with the right controller in your left hand), and to avoid going back to the expensive brute-force search to re-acquire devices as much as possible

What I’ve been working on this week is steps 1 and 3 – initial acquisition of correct poses, and fast validation / refinement of the pose in each video frame, and I’ve implemented two new strategies for that.

Gravity Vector matching

The first new strategy is to reject candidate poses that don’t closely match the known direction of gravity for each device. I had a previous implementation of that idea which turned out to be wrong, so I’ve re-worked it and it helps a lot with device acquisition.

The IMU accelerometer and gyro can usually tell us which way up the device is (roll and pitch) but not which way they are facing (yaw). The measure for ‘known gravity’ comes from the fusion Kalman filter covariance matrix – how certain the filter is about the orientation of the device. If that variance is small this new strategy is used to reject possible poses that don’t have the same idea of gravity (while permitting rotations around the Y axis), with the filter variance as a tolerance.
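As a rough sketch of the idea (my own illustration, not the OpenHMD code), the check boils down to comparing the candidate pose's "down" direction against the filter's gravity estimate, with a tolerance derived from the filter variance:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static double vec3_dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 vec3_normalize(vec3 v)
{
    double len = sqrt(vec3_dot(v, v));
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* pose_down: world-space down vector implied by the candidate pose.
   imu_down:  gravity direction estimated by the fusion filter.
   tolerance_rad: allowed angular error, taken from the filter's
                  orientation variance. Rotation about the vertical (yaw)
                  is left unconstrained by this test. */
static bool gravity_matches(vec3 pose_down, vec3 imu_down, double tolerance_rad)
{
    double c = vec3_dot(vec3_normalize(pose_down), vec3_normalize(imu_down));
    if (c > 1.0) c = 1.0;
    if (c < -1.0) c = -1.0;
    return acos(c) <= tolerance_rad;
}

int main(void)
{
    vec3 pose_down = { 0.05, -1.0, 0.0 };  /* candidate pose's idea of "down" */
    vec3 imu_down  = { 0.0, -1.0, 0.0 };   /* filter's gravity estimate */
    printf("match: %d\n", gravity_matches(pose_down, imu_down, 0.1));
    return 0;
}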

Partial tracking matches

The 2nd strategy is based around tracking with fewer LED correspondences once a tracking lock is acquired. Initial acquisition of the device pose relies on some heuristics for how many LEDs must match the 3D model. The general heuristic threshold I settled on for now is that 2/3rds of the expected LEDs must be visible to acquire a cold lock.

With the new strategy, if the pose prior has a good idea where the device is and which way it’s facing, it allows matching on far fewer LED correspondences. The idea is to keep tracking a device even down to just a couple of LEDs, and hope that more become visible soon.

While this definitely seems to help, I think the approach can use more work.

Status

With these two new approaches, tracking is improved but still quite erratic. Tracking of the headset itself is quite good now and for me rarely loses tracking lock. The controllers are better, but have a tendency to “fly off my hands” unexpectedly, especially after fast motions.

I have ideas for more tracking heuristics to implement, and I expect a continuous cycle of refinement on the existing strategies and new ones for some time to come.

For now, here’s a video of me playing Beat Saber using tonight’s code. The video shows the debug stream that OpenHMD can generate via Pipewire, showing the camera feed plus overlays of device predictions, LED device assignments and tracked device positions. Red is the headset, Green is the right controller, Blue is the left controller.

Initial tracking is completely wrong – I see some things to fix there. When the controllers go offline due to inactivity, the code keeps trying to match LEDs to them for example, and then there are some things wrong with how it’s relabelling LEDs when they get incorrect assignments.

After that, there are periods of good tracking with random tracking losses on the controllers – those show the problem cases to concentrate on.

,

Colin CharlesLife with Rona 2.0 – Days 4, 5, 6, 7, 8 and 9

These lack of updates are also likely because I’ve been quite caught up with stuff.

Monday I had a steak from Bay Leaf Steakhouse for dinner. It was kind of weird eating it from packs, but then I’m reminded you could do this in economy class. Tuesday I wanted to attempt to go vegetarian and by the time I was done with a workout, the only place was a chap fan shop (Leong Heng) where I had a mixture of Chinese and Indian chap fan. The Indian stall is run by an ex-Hyatt staff member who immediately recognised me! Wednesday, Alice came to visit, so we got to Hanks, got some alcohol, and managed a smorgasbord of food from Pickers/Sate Zul/Lila Wadi. Night ended very late, and on Thursday, visited Hai Tian for their famous salted egg squid and prawns in a coconut shell. Friday was back to being normal, so I grabbed a pizza from Mint Pizza (this time I tried their Aussie variant). Saturday, today, I hit up Rasa Sayang for some matcha latte, but grabbed food from Classic Pilot Cafe, which Faeeza owns! It was the famous salted egg chicken, double portion, half rice.

As for workouts, I did sign up for Mantas but found it pretty hard to do, timezone wise. I did spend a lot of time jogging on the beach (this has been almost a daily affair). Monday I also did 2 MD workouts, Tuesday 1 MD workout, Wednesday half a MD workout, Thursday I did a Ping workout at Pwrhouse (so good!), Friday 1 MD workout, and Saturday an Audrey workout at Pwrhouse and 1 MD workout.

Wednesday I also found out that Rasmus passed away. Frankly, there are no words.

Thursday, my Raspberry Pi 400 arrived. I set it up in under ten minutes, connecting it to the TV here. It “just works”. I made a video, which I should probably figure out how to upload to YouTube after I stitch it together. I have to work on using it a lot more.

COVID-19 cases are through the roof in Malaysia. This weekend we’ve seen two days of case breaking records, with today being 5,728 (yesterday was something close). Nutty. Singapore suspended the reciprocal green lane (RGL) agreement with Malaysia for the next 3 months.

I’ve managed to finish Bridgerton. I like the score. Finding something on Netflix is proving to be more difficult, regardless of having a VPN. Honestly, this is why Cable TV wins… linear programming that you’re just fed.

Stock market wise, I’ve been following the GameStop short squeeze, and even funnier is the Top Glove one, that they’re trying to repeat in Malaysia. Bitcoin seems to be doing “reasonably well” and I have to say, I think people are starting to realise decentralised services have a future. How do we get there?

What an interesting week, I look forward to more productive time. I’m still writing in my Hobonichi Techo, so at least that’s where most personal stuff ends up, I guess?

,

Jan SchmidtHitting a milestone – Beat Saber!

I hit an important OpenHMD milestone tonight – I completed a Beat Saber level using my Oculus Rift CV1!

I’ve been continuing to work on integrating Kalman filtering into OpenHMD, and on improving the computer vision that matches and tracks device LEDs. While I suspect noone will be completing Expert levels just yet, it’s working well enough that I was able to play through a complete level of Beat Saber. For a long time this has been my mental benchmark for tracking performance, and I’m really happy 🙂

Check it out:

I should admit at this point that completing this level took me multiple attempts. The tracking still has quite a tendency to lose track of controllers, or to get them confused and swap hands suddenly.

I have a list of more things to work on. See you at the next update!

,

Colin CharlesLife with Rona 2.0 – Day 3

What an unplanned day. I woke up in time to do an MD workout, despite feeling a little sore. So maybe I was about 10 minutes late and I missed the first set, but his workouts are so long, and I think there were seven sets anyway. Had a good brunch shortly thereafter.

Did a bit of reading, and then I decided to do a beach boardwalk walk… turns out they were policing the place, and you can’t hit the boardwalk. But the beach is fair game? So I went back to the hotel, dropped off my slippers, and went for a beach jog. Pretty nutty.

Came back to read a little more and figured I might as well do another MD workout. Then I headed out for dinner, trying out a new place — Mint Pizza. Opened 20.12.2020, and they’re empty, and their pizza is actually pretty good. Lamb and BBQ chicken, they did half-and-half.

Twitter was discussing Raspberry Pi’s, and all I could see is a lot of misinformation, which is truly shocking. The irony is that open source has been running the Internet for so long, and progressive web apps have come such a long way…

Back in the day when I did OpenOffice.org or Linux training even, we always did say you should learn concepts and not tools. From the time we ran Linux installfests in the late-90s in Sunway Pyramid (back then, yes, Linux was hard, and you had winmodems), but I had forgotten that I even did stuff for school teachers and NGOs back in 2002… I won’t forget PC Gemilang either…

Anyway, I placed an order again for another Raspberry Pi 400. I am certain that most people talk so much crap, without realising that Malaysia isn’t a developed nation and most people can’t afford a Mac let alone a PC. Laptops aren’t cheap. And there are so many other issues…. Saying Windows is still required in 2021 is the nuttiest thing I’ve heard in a long time. Easy to tweet, much harder to think about TCO, and realise where in the journey Malaysia is.

Maybe the best thing was that Malaysian Twitter learned about technology. I doubt many realised the difference between a Pi board vs the 400, but hey, the fact that they talked about tech is still a win (misinformed, but a win).

,

Colin CharlesLife with Rona 2.0 – Days 1 & 2

Today is the first day that in the state of Pahang, we have to encounter what many Malaysians are referring to as the Movement Control Order 2.0 (MCO 2.0). I think everyone finally agrees with the terminology that this is a lockdown now, because I remember back in the day when I was calling it that, I’d definitely offend a handful of journalists.

This is one interesting change for me compared to when I last wrote Life with Rona – Day 56 of being indoors and not even leaving my household, in Kuala Lumpur. I am now not in the state, I am living in a hotel, and I am obviously moving around a little more since we have access to the beach.

KL/Selangor and several other states have already been under the MCO 2.0 since January 13 2021, and while it was supposed to end on January 26, it seems like they’ve extended and harmonised the dates for Peninsular Malaysia to end on February 4 2021. I guess everyone got the “good news” yesterday. The Prime Minister announced some kind of aid last week, but it is still mostly a joke.

Today was the 2nd day I woke up at around 2.30pm because I went to bed at around 8am. The first day I had a 23.5 hour uptime, and today was less brutal, but working from 1-8am in the PST timezone is pretty brutal. Consequently, I barely got much done, and had one meal, vegetarian, two packs that included rice. I did get to walk by the beach (between Teluk Cempedak and Teluk Cempedak 2), did quite a bit of exercise there and I think even the monkeys are getting hungry… lots of stray cats and monkeys. Starbucks closes at 7pm, and I rocked up at 7.10pm (this was just like yesterday, when I arrived at 9.55pm and was told they wouldn’t grant me a coffee!).

While writing this entry, I did manage to get into a long video call with some friends and I guess it was good catching up with people in various states. It also is what prevented me from publishing this entry!

Day 2

I did wake up reasonable early today because I had pre-ordered room service to arrive at 9am. There is a fixed menu at the hotel for various cuisines (RM48/pax, thankfully gratis for me) and I told them I prefer not having to waste, so just give me what I want which is off menu items anyway. Roti telur double telur (yes, I know it is a roti jantan) with some banjir dhal and sambal and a bit of fruit on the side with two teh tariks. They delivered as requested. I did forget to ask for a jar of honey but that is OK, there is always tomorrow.

I spent most of the day vacillating, and wouldn’t consider it productive by any measure. Just chit chats and napping. It did rain today after a long time, so the day seemed fairly dreary.

When I finally did awaken from my nap, I went for a run on the beach. I did it barefoot. I have no idea if this is how it is supposed to be done, or if you are to run nearer the water or further up above, but I did move around between the two quite often. The beach is still pretty dead, but it is expected since no one is allowed to go unless you’re a hotel guest.

The hotel has closed 3/4 of their villages (blocks) and moved everyone to the village I’m staying in (for long stay guests…). I’m thankful I have a pretty large suite, it is a little over 980sqft, and the ample space, while smaller than my home, is still welcome.

Post beach run, I did a workout with MD via Instagram. It was strength/HIIT based, and I burnt a tonne, because he gave us one of his signature 1.5h classes. It was longer than the 80 minute class he normally charges RM50 for (I still think this is undervaluing his service, but he really does care and does it for the love of seeing his students grow!).

Post-workout I decided to head downtown to find some dinner. Everything at the Teluk Cemepdak block of shops was closed, so they’re not even bothered with doing takeaway. Sg. Lembing steakhouse seemed to have cars parked, Vanggey was empty (Crocodile Rock was open, can’t say if there was a crowd, because the shared parking lot was empty), there was a modest queue at Sate Zul, and further down, Lena was closed, Pickers was open for takeaway but looked pretty closed, Tjantek was open surprisingly, and then I thought I’d give Nusantara a try again, this time for food, but their chef had just gone home at about 8pm. Oops. So I drove to LAN burger, initially ordering just one chicken double special; however they looked like they could use the business so I added on a beef double special. They now accept Boost payments so have joined the e-wallet era. One less place to use cash, which is also why I really like Kuantan. On the drive back, Classic Pilot Cafe was also open and I guess I’ll be heading there too during this lockdown.

Came back to the room to finish both burgers in probably under 15 minutes. While watching the first episode of Bridgerton on Netflix. I’m not sure what really captivates, but I will continue on (I still haven’t finished the first episode). I need to figure out how to use the 2 TVs that I have in this room — HDMI cable? Apple TV? Not normally using a TV, all this is clearly more complex than I care to admit.

I soaked longer than expected, ended up a prune, but I’m sure it will give me good rest!

One thought to leave with:

“Learn to enjoy every minute of your life. Be happy now. Don’t wait for something outside of yourself to make you happy in the future.” — Earl Nightingale

,

Sam WatkinsDeveloping CZ, a dialect of C that looks like Python

In my experience, the C programming language is still hard to beat, even 50 years after it was first developed (and I feel the same way about UNIX). When it comes to general-purpose utility, low-level systems programming, performance, and portability (even to tiny embedded systems), I would choose C over most modern or fashionable alternatives. In some cases, it is almost the only choice.

Many developers believe that it is difficult to write secure and reliable software in C, due to its free pointers, the lack of enforced memory integrity, and the lack of automatic memory management; however in my opinion it is possible to overcome these risks with discipline and a more secure system of libraries constructed on top of C and libc. Daniel J. Bernstein and Wietse Venema are two developers who have been able to write highly secure, stable, reliable software in C.

My other favourite language is Python. Although Python has numerous desirable features, my favourite is the light-weight syntax: in Python, block structure is indicated by indentation, and braces and semicolons are not required. Apart from the pleasure and relief of reading and writing such light and clear code, which almost appears to be executable pseudo-code, there are many other benefits. In C or JavaScript, if you omit a trailing brace somewhere in the code, or insert an extra brace somewhere, the compiler may tell you that there is a syntax error at the end of the file. These errors can be annoying to track down, and cannot occur in Python. Python not only looks better, the clear syntax helps to avoid errors.

The obvious disadvantage of Python, and other dynamic interpreted languages, is that most programs run far slower than C programs. This limits the scope and generality of Python. No AAA or performance-oriented video game engines are programmed in Python. The language is not suitable for low-level systems programming, such as operating system development, device drivers, filesystems, performance-critical networking servers, or real-time systems.

C is a great all-purpose language, but the code is uglier than Python code. Once upon a time, when I was experimenting with the Plan 9 operating system (which is built on C, but lacks Python), I missed Python’s syntax, so I decided to do something about it and write a little preprocessor for C. This converts from a “Pythonesque” indented syntax to regular C with the braces and semicolons. Having forked a little dialect of my own, I continued from there adding other modules and features (which might have been a mistake, but it has been fun and rewarding).

At first I called this translator Brace, because it added in the braces for me. I now call the language CZ. It sounds like “C-easy”. Ease-of-use for developers (DX) is the primary goal. CZ has all of the features of C, and translates cleanly into C, which is then compiled to machine code as normal (using any C compiler; I didn’t write one); and so CZ has the same features and performance as C, but enjoys a more pleasing syntax.

CZ is now self-hosted, in that the translator is written in the language CZ. I confess that originally I wrote most of it in Perl; I’m proficient at Perl, but I consider it to be a fairly ugly language, and overly complicated.

I intend for CZ’s new syntax to be “optional”, ideally a developer will be able to choose to use the normal C syntax when editing CZ, if they prefer it. For this, I need a tool to convert C back to CZ, which I have not fully implemented yet. I am aware that, in addition to traditionalists, some vision-impaired developers prefer to use braces and semicolons, as screen readers might not clearly indicate indentation. A C to CZ translator would of course also be valuable when porting an existing C program to CZ.

CZ has a number of useful features that are not found in standard C, but I did not go so far as C++, which language has been described as “an octopus made by nailing extra legs onto a dog”. I do not consider C to be a dog, at least not in a negative sense; but I think that C++ is not an improvement over plain C. I am creating CZ because I think that it is possible to improve on C, without losing any of its advantages or making it too complex.

One of the most interesting features I added is a simple syntax for fast, light coroutines. I based this on Simon Tatham’s approach to Coroutines in C, which may seem hacky at first glance, but is very efficient and can work very well in practice. I implemented a very fast web server with very clean code using these coroutines. The cost of switching coroutines with this method is little more than the cost of a function call.
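For readers unfamiliar with the trick, here is a minimal sketch of Tatham-style stackless coroutines in plain C – this is the underlying mechanism, not the CZ syntax that wraps it:

#include <stdio.h>

#define crBegin(state)      switch (state) { case 0:
#define crReturn(state, x)  do { state = __LINE__; return (x); \
                                 case __LINE__:; } while (0)
#define crFinish            }

/* Yields 1, 2, 3, ... one value per call, resuming where it left off. */
static int counter(void)
{
    static int state = 0;   /* holds the resume point between calls */
    static int i;
    crBegin(state);
    for (i = 1; ; i++)
        crReturn(state, i);
    crFinish;
    return -1;              /* not reached */
}

int main(void)
{
    for (int n = 0; n < 5; n++)
        printf("%d\n", counter());
    return 0;
}

The switch/case-on-__LINE__ trick is what keeps the cost of "switching" down to roughly a function call, since no stack needs to be saved or restored.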

CZ has hygienic macros. The regular cpp (C preprocessor) macros are not hygienic, and many people consider them hacky and unsafe to use. My CZ macros are safe, and somewhat more powerful than standard C macros. They can be used to neatly add new program control structures. I have plans to further develop the macro system in interesting ways.

I added automatic prototype and header generation, as I do not like having to repeat myself when copying prototypes to separate header files. I added support for the UNIX #! scripting syntax, and for cached executables, which means that CZ can be used like a scripting language without having to use a separate compile or make command, but the programs are only recompiled when something has been changed.

For CZ, I invented a neat approach to portability without conditional compilation directives. Platform-specific library fragments are automatically included from directories having the name of that platform or platform-category. This can work very well in practice, and helps to avoid the nightmare of conditional compilation, feature detection, and Autotools. Using this method, I was easily able to implement portable interfaces to features such as asynchronous IO multiplexing (aka select / poll).

The CZ library includes flexible error handling wrappers, inspired by W. Richard Stevens’ wrappers in his books on Unix Network Programming. If these wrappers are used, there is no need to check return values for error codes, and this makes the code much safer, as an error cannot accidentally be ignored.
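
As a minimal sketch of the idea in plain C (the wrapper name here is my own illustration, not the actual CZ library API), a Stevens-style wrapper looks something like this:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Capitalised wrapper: checks the result and exits on failure, so the
 * caller can't accidentally ignore an error. */
static void *Malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "malloc(%zu) failed: %s\n", size, strerror(errno));
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    char *buf = Malloc(64);     /* no error check needed at the call site */
    snprintf(buf, 64, "hello");
    puts(buf);
    free(buf);
    return 0;
}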

CZ has several major faults, which I intend to correct at some point. Some of the syntax is poorly thought out, and I need to revisit it. I developed a fairly rich library to go with the language, including safer data structures, IO, networking, graphics, and sound. There are many nice features, but my CZ library is more a prototype than a finished product; there are major omissions, and some features are misconceived or poorly implemented. The misfeatures should be weeded out for the time being, or moved to an experimental section of the library.

I think that a good software library should come in two parts, the essential low-level APIs with the minimum necessary functionality, and a rich set of high-level convenience functions built on top of the minimal API. I need to clearly separate these two parts in order to avoid polluting the namespaces with all sorts of nonsense!

CZ is lacking a good modern system of symbol namespaces. I can look to Python for a great example. I need to maintain compatibility with C, and avoid ugly symbol encodings. I think I can come up with something that will alleviate the need to type anything like gtk_window_set_default_size, and yet maintain compatibility with the library in question. I want all the power of C, but it should be easy to use, even for children. It should be as easy as BASIC or Processing: a child should be able to write short graphical demos and the like, without stumbling over tricky syntax or obscure compile errors.

Here is an example of a simple CZ program which plots the Mandelbrot set fractal. I think that the program is fairly clear and easy to understand, although there is still some potential to improve and clarify the code.

#!/usr/local/bin/cz --
use b
use ccomplex

Main:
	num outside = 16, ox = -0.5, oy = 0, r = 1.5
	long i, max_i = 50, rb_i = 30
	space()
	uint32_t *px = pixel()  # CONFIGURE!
	num d = 2*r/h, x0 = ox-d*w_2, y0 = oy+d*h_2
	for(y, 0, h):
		cmplx c = x0 + (y0-d*y)*I
		repeat(w):
			cmplx w = c
			for i=0; i < max_i && cabs(w) < outside; ++i
				w = w*w + c
			*px++ = i < max_i ? rainbow(i*359 / rb_i % 360) : black
			c += d

I wrote a more elaborate variant of this program, which generates images like the one shown below. There are a few tricks used: continuous colouring, rainbow colours, and plotting the logarithm of the iteration count, which makes the plot appear less busy close to the black fractal proper. I sell some T-shirts and other products with these fractal designs online.

An image from the Mandelbrot set, generated by a fairly simple CZ program.

I am interested in graph programming, and have been for three decades since I was a teenager. By graph programming, I mean programming and modelling based on mathematical graphs or diagrams. I avoid the term visual programming, because there is no necessary reason that vision impaired folks could not use a graph programming language; a graph or diagram may be perceived, understood, and manipulated without having to see it.

Mathematics is something that naturally exists, outside time and independent of our universe. We humans discover mathematics, we do not invent or create it. One of my main ideas for graph programming is to represent a mathematical (or software) model in the simplest and most natural way, using relational operators. Elementary mathematics can be reduced to just a few such operators:

  • +  add, subtract, disjoint union, zero
  • ×  multiply, divide, cartesian product, one
  • ^  power, root, logarithm
  • sin, cos, sin⁻¹, cos⁻¹, hypot, atan2
  • δ  differential, integral

A set of minimal relational operators for elementary math.

I think that a language and notation based on these few operators (and similar) can be considerably simpler and more expressive than conventional math or programming languages.

CZ is for me a stepping-stone toward this goal of an expressive relational graph language. It is more pleasant for me to develop software tools in CZ than in C or another language.

Thanks for reading. I wrote this article during the process of applying to join Toptal, which appears to be a freelancing portal for top developers; and in response to this article on toptal: After All These Years, the World is Still Powered by C Programming.

My CZ project has been stalled for quite some time. I foolishly became discouraged after receiving some negative feedback. I now know that honest negative feedback should be valued as an opportunity to improve, and I intend to continue the project until it lacks glaring faults, and is useful for other people. If this project or this article interests you, please contact me and let me know. It is much more enjoyable to work on a project when other people are actively interested in it!

Gary PendergastWordPress Importers: Free (as in Speech)

Back at the start of this series, I listed four problems within the scope of the WordPress Importers that we needed to address. Three of them are largely technical problems, which I covered in previous posts. In wrapping up this series, I want to focus exclusively on the fourth problem, which has a philosophical side as well as a technical one — but that does not mean we cannot tackle it!

Problem Number 4

Some services work against their customers, and actively prevent site owners from controlling their own content.

Some services are merely inconvenient: they provide exports, but it often involves downloading a bunch of different files. Your CMS content is in one export, your store products are in another, your orders are in another, and your mailing list is in yet another. It’s not ideal, but they at least let you get a copy of your data.

However, there’s another class of services that actively work against their customers. It’s these services I want to focus on: the services that don’t provide any ability to export your content — effectively locking people in to using their platform. We could offer these folks an escape! The aim isn’t to necessarily make them use WordPress, it’s to give them a way out, if they want it. Whether they choose to use WordPress or not after that is immaterial (though I certainly hope they would, of course). The important part is freedom of choice.

It’s worth acknowledging that this is a different approach to how WordPress has historically operated in relation to other CMSes. We provide importers for many CMSes, but we previously haven’t written exporters. However, I don’t think this is a particularly large step: for CMSes that already provide exports, we’d continue to use those export files. This is focussed on the few services that try to lock their customers in.

Why Should WordPress Take This On?

There are several aspects to why we should focus on this.

First of all, it’s the WordPress mission. Underpinning every part of WordPress is the simplest of statements:

Democratise Publishing

The freedom to build. The freedom to change. The freedom to share.

These freedoms are the pillars of a Free and Open Web, but they’re not invulnerable: at times, they need to be defended, and that needs people with the time and resources to offer a defence.

Which brings me to my second point: WordPress has the people who can offer that defence! The WordPress project has so many individuals working on it, from such a wide variety of backgrounds, we’re able to take on a vast array of projects that a smaller CMS just wouldn’t have the bandwidth for. That’s not to say that we can do everything, but when there’s a need to defend the entire ecosystem, we’re able to devote people to the cause.

Finally, it’s important to remember that WordPress doesn’t exist in a vacuum, we’re part of a broad ecosystem which can only exist through the web remaining open and free. By encouraging all CMSes to provide proper exports, and implementing them for those that don’t, we help keep our ecosystem healthy.

We have the ability to take on these challenges, but we have a responsibility that goes alongside. We can’t do it solely to benefit WordPress, we need to make that benefit available to the entire ecosystem. This is why it’s important to define a WordPress export schema, so that any CMS can make use of the export we produce, not just WordPress. If you’ll excuse the imagery for a moment, we can be the knight in shining armour that frees people — then gives them the choice of what they do with that freedom, without obligation.

How Can We Do It?

Moving on to the technical side of this problem, I can give you some good news: the answer is definitely not screen scraping. 😄 Scraping a site is fragile, impossible to transform into the full content, and provides an incomplete export of the site: anything that’s only available in the site dashboard can’t be obtained through scraping.

I’ve recently been experimenting with an alternative approach to solving this problem. Rather than trying to create something resembling a traditional exporter, it turns out that modern CMSes provide the tools we need, in the form of REST APIs. All we need to do is call the appropriate APIs, and collate the results. The fun part is that we can authenticate with these APIs as the site owner, by calling them from a browser extension! So, that’s what I’ve been experimenting with, and it’s showing a lot of promise.

If you’re interested in playing around with it, the experimental code is living in this repository. It’s a simple proof of concept, capable of exporting the text content of a blog on a Wix site, showing that we can make a smooth, comprehensive, easy-to-use exporter for any Wix site owner.

Screenshot of the "Free (as in Speech)" browser extension UI.

Clicking the export button starts a background script, which calls Wix’s REST APIs as the site owner, to get the original copy of the content. It then packages it up, and presents it as a WXR file to download.

Screenshot of a Firefox download dialog, showing a Wix site packaged up as a WXR file.

I’m really excited about how promising this experiment is. It can ultimately provide a full export of any Wix site, and we can add support for other CMS services that choose to artificially lock their customers in.

Where Can I Help?

If you’re a designer or developer who’s excited about working on something new, head on over to the repository and check out the open issues: if there’s something that isn’t already covered, feel free to open a new issue.

Since this is new ground for a WordPress project, both technically and philosophically, I’d love to hear more points of view. It’s being discussed in the WordPress Core Dev Chat this week, and you can also let me know what you think in the comments!

This post is part of a series, talking about the WordPress Importers, their history, where they are now, and where they could go in the future.

,

Gary PendergastWordPress Importers: Defining a Schema

While schemata are usually implemented using language-specific tools (eg, XML uses XML Schema, JSON uses JSON Schema), they largely use the same concepts when talking about data. This is rather helpful, we don’t need to make a decision on data formats before we can start thinking about how the data should be arranged.

Note: Since these concepts apply equally to all data formats, I’m using “WXR” in this post as shorthand for “the structured data section of whichever file format we ultimately use”, rather than specifically referring to the existing WXR format. 🙂

Why is a Schema Important?

It’s fair to ask: if the WordPress Importers have survived this entire time without a formal schema, why would we need one now?

There are two major reasons why we haven’t needed one in the past:

  • WXR has remained largely unchanged in the last 10 years: there have been small additions or tweaks, but nothing significant. There’s been no need to keep track of changes.
  • WXR is currently very simple, with just a handful of basic elements. In a recent experiment, I was able to implement a JavaScript-based WXR generator in just a few days, entirely by referencing the Core implementation.

These reasons are also why it would help to implement a schema for the future:

  • As work on WXR proceeds, there will likely need to be substantial changes to what data is included: adding new fields, modifying existing fields, and removing redundant fields. Tracking these changes helps ensure any WXR implementations can stay in sync.
  • These changes will result in a more complex schema: relying on the source to re-implement it will become increasingly difficult and error-prone. Following Gutenberg’s lead, it’s likely that we’d want to provide official libraries in both PHP and JavaScript: keeping them in sync is best done from a source schema, rather than having one implementation copy the other.

Taking the time to plan out a schema now gives us a solid base to work from, and it allows for future changes to happen in a reliable fashion.

WXR for all of WordPress

With a well defined schema, we can start to expand what data will be included in a WXR file.

Media

Interestingly, many of the challenges around media files are less to do with WXR, and more to do with importer capabilities. The biggest headache is retrieving the actual files, which the importer currently handles by trying to retrieve the file from the remote server, as defined in the wp:attachment_url node. In context, this behaviour is understandable: 10+ years ago, personal internet connections were too slow to be moving media around, it was better to have the servers talk to each other. It’s a useful mechanism that we should keep as a fallback, but the more reliable solution is to include the media file with the export.

Plugins and Themes

There are two parts to plugins and themes: the code, and the content. Modern WordPress sites require plugins to function, and most are customised to suit their particular theme.

For exporting the code, I wonder if a tiered solution could be applied:

  • Anything from WordPress.org would just need their slug, since they can be re-downloaded during import. Particularly as WordPress continues to move towards an auto-updated future, modified versions of plugins and themes are explicitly not supported.
  • Third party plugins and themes would be given a filter to use, where they can provide a download URL that can be included in the export file.
  • Third party plugins/themes that don’t provide a download URL would either need to be skipped, or zipped up and included in the export file.

For exporting the content, WXR already includes custom post types, but doesn’t include custom settings, or custom tables. The former should be included automatically, and the latter would likely be handled by an appropriate action for the plugin to hook into.

Settings

There are currently a handful of special settings that are exported, but (as I just noted, particularly with plugins and themes being exported) this would likely need to be expanded to include most items in wp_options.

Users

Currently, the bare minimum information about users who’ve authored a post is included in the export. This would need to be expanded to include more user information, as well as users who aren’t post authors.

WXR for parts of WordPress

The modern use case for importers isn’t just to handle a full site, but to handle keeping sites in sync. For example, most news organisations will have a staging site (or even several layers of staging!) which is synchronised to production.

While it’s well outside the scope of this project to directly handle every one of these use cases, we should be able to provide the framework for organisations to build reliable platforms on. Exports should be repeatable, objects in the export should have unique identifiers, and the importer should be able to handle any subset of WXR.

WXR Beyond WordPress

Up until this point, we’ve really been talking about WordPress→WordPress migrations, but I think WXR is a useful format beyond that. Instead of just containing direct exports of the data from particular plugins, we could also allow it to contain “types” of data. This turns WXR into an intermediary language: exports can be created from any source, and imported into WordPress.

Let’s consider an example. Say we create a tool that can export a Shopify, Wix, or GoDaddy site to WXR: how would we represent an online store in the WXR file? We don’t want to export in the format that any particular plugin would use, since a WordPress Core tool shouldn’t be advantaging one plugin over others.

Instead, it would be better if we could format the data in a platform-agnostic way, which plugins could then implement support for. As luck would have it, Schema.org provides exactly the kind of data structure we could use here. It’s been actively maintained for nearly nine years, it supports a wide variety of data types, and is intentionally platform-agnostic.

Gazing into my crystal ball for a moment, I can certainly imagine a future where plugins could implement and declare support for importing certain data types. When handling such an import (assuming one of those plugins wasn’t already installed), the WordPress Importer could offer them as options during the import process. This kind of seamless integration allows WordPress to show that it offers the same kind of fully-featured site building experience that modern CMS services do.

Of course, reality is never quite as simple as crystal balls and magic wands make them out to be. We have to contend with services that provide incomplete or fragmented exports, and there are even services that deliberately don’t provide exports at all. In the next post, I’ll be writing about why we should address this problem, and how we might be able to go about it.

This post is part of a series, talking about the WordPress Importers, their history, where they are now, and where they could go in the future.

,

Gary PendergastWordPress Importers: Getting Our House in Order

The previous post talked about the broad problems we need to tackle to bring our importers up to speed, making them available for everyone to use.

In this post, I’m going to focus on what we could do with the existing technology, in order to give us the best possible framework going forward.

A Reliable Base

Importers are an interesting technical problem. Much like you’d expect from any backup/restore code, importers need to be extremely reliable. They need to comfortably handle all sorts of unusual data, and they need to keep it all safe. Particularly considering their age, the WordPress Importers do a remarkably good job of handling most content you can throw at them.

However, modern development practices have evolved and improved since the importers were first written, and we should certainly be making use of such practices, when they fit with our requirements.

For building reliable software that we expect to largely run by itself, a variety of comprehensive automated testing is critical. This ensures we can confidently take on the broader issues, safe in the knowledge that we have a reliable base to work from.

Testing must be the first item on this list. A variety of automated testing gives us confidence that changes are safe, and that the code can continue to be maintained in the future.

Data formats must be well defined. While this is useful for ensuring data can be handled in a predictable fashion, it’s also a very clear demonstration of our commitment to data freedom.

APIs for creating or extending importers should be straightforward to hook into.

Performance Isn’t an Optional Extra

With sites constantly growing in size (and with the export files potentially gaining a heap of extra data), we need to care about the performance of the importers.

Luckily, there’s already been some substantial work done on this front:

There are other groups in the WordPress world who’ve made performance improvements in their own tools: gathering all of that experience is a relatively quick way to bring in production-tested improvements.

The WXR Format

It’s worth talking about the WXR format itself, and determining whether it’s the best option for handling exports into the future. XML-based formats are largely viewed as a relic of days gone past, so (if we were to completely ignore backwards compatibility for a moment) is there a modern data format that would work better?

The short answer… kind of. 🙂

XML is actually well suited to this use case, and (particularly when looking at performance improvements) is the only data format for which PHP comes with a built-in streaming parser.

That said, WXR is basically an extension of the RSS format: as we add more data to the file that clearly doesn’t belong in RSS, there is likely an argument for defining an entirely WordPress-focused schema.

Alternative Formats

It’s important to consider what the priorities are for our export format, which will help guide any decision we make. So, I’d like to suggest the following priorities (in approximate priority order):

  • PHP Support: The format should be natively supported in PHP, though it is still workable if we need to ship an additional library.
  • Performant: Particularly when looking at very large exports, it should be processed as quickly as possible, using minimal RAM.
  • Supports Binary Files: The first comments on my previous post asked about media support; we clearly should be treating it as a first-class citizen.
  • Standards Based: Is the format based on a documented standard? (Another way to ask this: are there multiple different implementations of the format? Do those implementations all function the same?)
  • Backward Compatible: Can the format be used by existing tools with no changes, or minimal changes?
  • Self Descriptive: Does the format include information about what data you’re currently looking at, or do you need to refer to a schema?
  • Human Readable: Can the file be opened and read in a text editor?

Given these priorities, what are some options?

WXR (XML-based)

Either the RSS-based schema that we already use, or a custom-defined XML schema, the arguments for this format are pretty well known.

One argument that hasn’t been well covered is how there’s a definite trade-off when it comes to supporting binary files. Currently, the importer tries to scrape the media file from the original source, which is not particularly reliable. So, if we were to look at including media files in the WXR file, the best option for storing them is to base64 encode them. Unfortunately, that would have a serious effect on performance, as well as readability: adding huge base64 strings would make even the smallest exports impossible to read.

Either way, this option would be mostly backwards compatible, though some tools may require a bit of reworking if we were to substantially change the schema.

WXR (ZIP-based)

To address the issues with media files, an alternative option might be to follow the path that Microsoft Word and OpenOffice use: put the text content in an XML file, put the binary content into folders, and compress the whole thing.

This addresses the performance and binary support problems, but is initially worse for readability: if you don’t know that it’s a ZIP file, you can’t read it in a text editor. Once you unzip it, however, it does become quite readable, and has the same level of backwards compatibility as the XML-based format.

JSON

JSON could work as a replacement for XML in both of the above formats, with one additional caveat: there is no streaming JSON parser built in to PHP. There are 3rd party libraries available, but given the documented differences between JSON parsers, I would be wary about using one library to produce the JSON, and another to parse it.

This format largely wouldn’t be backwards compatible, though tools which rely on the export file being plain text (eg, command line tools to do broad search-and-replaces on the file) can be modified relatively easily.

There are additional subjective arguments (both for and against) the readability of JSON vs XML, but I’m not sure there’s anything to them beyond personal preference.

SQLite

The SQLite team wrote an interesting (indirect) argument on this topic: OpenOffice uses a ZIP-based format for storing documents, and the SQLite team argued that there would be benefits (particularly around performance and reliability) for OpenOffice to switch to SQLite.

The key issues that I see are:

  • SQLite is included in PHP, but not enabled by default on Windows.
  • While the SQLite team have a strong commitment to providing long-term support, SQLite is not a standard, and the only implementation is the one provided by the SQLite team.
  • This option is not backwards compatible at all.

FlatBuffers

FlatBuffers is an interesting comparison, since it’s a data format focussed entirely on speed. The down side of this focus is that it requires a defined schema to read the data. Much like SQLite, the only standard for FlatBuffers is the implementation. Unlike SQLite, FlatBuffers has made no commitments to providing long-term support.

                        WXR (XML-based)   WXR (ZIP-based)   JSON     SQLite    FlatBuffers
Works in PHP?           ✅                ✅                ⚠        ⚠         ⚠
Performant?             ⚠                ✅                ⚠        ✅        ✅
Supports Binary Files?  ⚠                ✅                ⚠        ✅        ✅
Standards Based?        ✅                ✅                ✅       ⚠ / ❌    ❌
Backwards Compatible?   ⚠                ⚠                ❌       ❌        ❌
Self Descriptive?       ✅                ✅                ✅       ✅        ❌
Readable?               ✅                ⚠ / ❌            ✅       ❌        ❌

As with any decision, this is a matter of trade-offs. I’m certainly interested in hearing additional perspectives on these options, or thoughts on options that I haven’t considered.

Regardless of which particular format we choose for storing WordPress exports, every format should have (or in the case of FlatBuffers, requires) a schema. We can talk about schemata without going into implementation details, so I’ll be writing about that in the next post.

This post is part of a series, talking about the WordPress Importers, their history, where they are now, and where they could go in the future.

Gary PendergastWordPress Importers: Stating the Problem

It’s time to focus on the WordPress Importers.

I’m not talking about tidying them up, improving performance, or fixing some bugs, though these are certainly things that should happen. Instead, we need to consider their purpose, how they fit as a driver of WordPress’ commitment to Open Source, and how they can be a key element in helping to keep the Internet Open and Free.

The History

The WordPress Importers are arguably the key driver of WordPress’ early success. Before the importer plugins existed (before WordPress even supported plugins!) there were a handful of import-*.php scripts in the wp-admin directory that could be used to import blogs from other blogging platforms. When other platforms fell out of favour, WordPress already had an importer ready for people to move their site over. One of the most notable instances was in 2004, when Movable Type changed their license and prices, suddenly requiring personal blog authors to pay for something that had previously been free. WordPress was fortunate enough to be in the right place at the right time: many of WordPress’ earliest users came from Movable Type.

As time went on, WordPress became well known in its own right. Growth relied less on people wanting to switch from another provider, and more on people choosing to start their site with WordPress. For practical reasons, the importers were moved out of WordPress Core, and into their own plugins. Since then, they’ve largely been in maintenance mode: bugs are fixed when they come up, but since export formats rarely change, they’ve just continued to work for all these years.

An unfortunate side effect of this, however, is that new importers are rarely written. While a new breed of services have sprung up over the years, the WordPress importers haven’t kept up.

The New Services

There are many new CMS services that have cropped up in recent years, and we don’t have importers for any of them. WordPress.com has a few extra ones written, but they’ve been built on the WordPress.com infrastructure out of necessity.

You see, we’ve always assumed that other CMSes will provide some sort of export file that we can use to import into WordPress. That isn’t always the case, however. Some services (notably, Wix and GoDaddy Website Builder) deliberately don’t allow you to export your own content. Other services provide incomplete or fragmented exports, needlessly forcing stress upon site owners who want to use their own content outside of that service.

To work around this, WordPress.com has implemented importers that effectively scrape the site: while this has worked to some degree, it does require regular maintenance, and the importer has to do a lot of guessing about how the content should be transformed. This is clearly not a solution that would be maintainable as a plugin.

Problem Number 4

Some services work against their customers, and actively prevent site owners from controlling their own content.

This strikes at the heart of the WordPress Bill of Rights. WordPress is built with fundamental freedoms in mind: all of those freedoms point to owning your content, and being able to make use of it in any form you like. When a CMS actively works against providing such freedom to their community, I would argue that we have an obligation to help that community out.

A Variety of Content

It’s worth discussing how, when starting a modern CMS service, the bar for success is very high. You can’t get away with just providing a basic CMS: you need to provide all the options. Blogs, eCommerce, mailing lists, forums, themes, polls, statistics, contact forms, integrations, embeds, the list goes on. The closest comparison to modern CMS services is… the entire WordPress ecosystem: built on WordPress core, but with the myriad of plugins and themes available, along with the variety of services offered by a huge array of companies.

So, when we talk about the importers, we need to consider how they’ll be used.

Problem Number 3

To import from a modern CMS service into WordPress, your importer needs to map from service features to WordPress plugins.

Getting Our Own House In Order

Some of these problems don’t just apply to new services, however.

Out of the box, WordPress exports to WXR (WordPress eXtended RSS) files: an XML file that contains the content of the site. Back when WXR was first created, this was all you really needed, but much like the rest of the WordPress importers, it hasn’t kept up with the times. A modern WordPress site isn’t just the sum of its content: a WordPress site has plugins and themes. It has various options configured, it has huge quantities of media, it has masses of text content, far more than the first WordPress sites ever had.

Problem Number 2

WXR doesn’t contain a full export of a WordPress site.

In my view, WXR is a solid format for handling exports. An XML-based system is quite capable of containing all forms of content, so it’s reasonable that we could expand the WXR format to contain the entire site.

Built for the Future

If there’s one thing we can learn from the history of the WordPress importers, it’s that maintenance will potentially be sporadic. Importers are unlikely to receive the same attention that the broader WordPress Core project does, owners may come and go. An importer will get attention if it breaks, of course, but it otherwise may go months or years without changing.

Problem Number 1

We can’t depend on regular importer maintenance in the future.

It’s quite possible to build code that will be running in 10+ years: we see examples all across the WordPress ecosystem. Doing it in a reliable fashion needs to be a deliberate choice, however.

What’s Next?

Having worked our way down from the larger philosophical reasons for the importers, to some of the more technically-oriented implementation problems; I’d like to work our way back out again, focussing on each problem individually. In the following posts, I’ll start laying out how I think we can bring our importers up to speed, prepare them for the future, and make them available for everyone.

This post is part of a series, talking about the WordPress Importers, their history, where they are now, and where they could go in the future.

,

Glen TurnerCompiling and installing software for the uBITX v6 QRP amateur radio transceiver

The uBITX uses an Arduino internally. This article describes how to update its software.

Required hardware

The connector on the back is a Mini-B USB connector, so you'll need a "Mini-B to A" USB cable. This is not the same cable as used with older Android smartphones. The Mini-B connector was used with a lot of cameras a decade ago.

You'll also need a computer. I use a laptop with Fedora Linux installed.

Required software for software development

In Fedora all the required software is installed with sudo dnf install arduino git. Add yourself to the users and lock groups with sudo usermod -a -G users,lock $USER (on Debian-style systems use sudo usermod -a -G dialout,lock $USER). You'll need to log out and log in again for that to have an effect (if you want to see which groups you are already in, then use the id command).

Run arduino as your ordinary non-root user to create the directories used by the Arduino IDE. You can quit the IDE once it starts.

Obtain the uBITX software

$ cd ~/Arduino
$ git clone https://github.com/afarhan/ubitxv6.git ubitx_v6.1_code

Connect the uBITX to your computer

Plug in the USB cable and turn on the radio. Running dmesg will show the Arduino appearing as a "USB serial" device:

usb 1-1: new full-speed USB device number 6 using xhci_hcd
usb 1-1: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.64
usb 1-1: New USB device strings: Mfr=0, Product=2, SerialNumber=0
usb 1-1: Product: USB Serial
usbcore: registered new interface driver ch341
usbserial: USB Serial support registered for ch341-uart
ch341 1-1:1.0: ch341-uart converter detected
usb 1-1: ch341-uart converter now attached to ttyUSB1

If you want more information about the USB device then use:

$ lsusb -d 1a86:7523
Bus 001 Device 006: ID 1a86:7523 QinHeng Electronics CH340 serial converter



,

Craige McWhirterSober Living for the Revolution

by Gabriel Kuhn

Sober Living for the Revolution: Hardcore Punk, Straight Edge, and Radical Politics

This is not a new book, having been published in 2010, but it's a fairly recent discovery for me.

I was never part of the straight edge scene here in Australia but was certainly aware of some of the more prominent bands and music in the punk scene in general. I've always had an ear for music with a political edge.

When it came to the straight edge scene I knew sweet FA. So that aspect of this book was pure curiosity. What attracted me to this work was the subject of radical sobriety and its lived experience amongst politically active people.

In life, if you decide to forgo something that everybody else does, it gives you a perspective on society that you wouldn't have if you were just engaging. It teaches you a lot about the world.

-- Ian MacKaye

This was one of the first parts of the book to really pop out at me. This rang true for my lived experience in other parts of my life where I'd forgone things that everyone else does. There were costs in not engaging but Ian is otherwise correct.

While entirely clear eyed about the problems of inebriation amongst Australian activists and in wider society as a whole, the titular concept of sober living for the revolution had not previously resonated with me.

But then I realised that if you do not speak that language, you recognise that they are not talking to you... In short, if you don't speak the language of violence, you are released from violence. This was a very profound discovery for me.

-- Ian MacKaye

While my quotes are pretty heavily centered on one individual, there are about 20 contributors from Europe, the Middle East and both North and South America, providing reasonably diverse perspectives on the music but, more importantly, on the inspiration and positive impacts of radical sobriety on their communities.

As someone who was reading primarily for the sober living insights, the book's focus on the straight edge scene was quite heavy to wade through, but the insights gained were worth the musical history lessons.

The only strategy for sharing good ideas that succeeds unfailingly... is the power of example — if you put “ecstatic sobriety” into action in your life, and it works, those who sincerely want similar things will join in.

-- Crimethinc

Overall this book pulled together a number of threads I'd been pulling on myself over my adult life and brought them into one comical phrase: lucid bacchanalism.

I was also particularly embarrassed to have not previously identified alcohol consumption as not merely a recreation but yet another insidious form of consumerism.

Well worth a read.

,

Jan SchmidtRift CV1 – Adventures in Kalman filtering Part 2

In the last post I had started implementing an Unscented Kalman Filter for position and orientation tracking in OpenHMD. Over the Christmas break, I continued that work.

A Quick Recap

When reading below, keep in mind that the goal of the filtering code I’m writing is to combine 2 sources of information for tracking the headset and controllers.

The first piece of information is acceleration and rotation data from the IMU on each device, and the second is observations of the device position and orientation from 1 or more camera sensors.

The IMU motion data drifts quickly (at least for position tracking) and can’t tell which way the device is facing: it can detect gravity and so derive pitch and roll, but not yaw.

The camera observations can tell exactly where each device is, but arrive at a much lower rate (52Hz vs 500/1000Hz) and can take a long time to process (hundreds of milliseconds) to analyse to acquire or re-acquire a lock on the tracked device(s).

The goal is to acquire tracking lock, then use the motion data to predict the motion closely enough that we always hit the ‘fast path’ of vision analysis. The key here is closely enough – the more closely the filter can track and predict the motion of devices between camera frames, the better.

Integration in OpenHMD

When I wrote the last post, I had the filter running as a standalone application, processing motion trace data collected by instrumenting a running OpenHMD app and moving my headset and controllers around. That’s a really good way to work, because it lets me run modifications on the same data set and see what changed.

However, the motion traces were captured using the current fusion/prediction code, which frequently loses tracking lock when the devices move – leading to big gaps in the camera observations and more interpolation for the filter.

By integrating the Kalman filter into OpenHMD, the predictions are improved, leading to generally much better results. Here’s one trace of me moving the headset around reasonably vigorously with no tracking loss at all.

Headset motion capture trace

If it worked this well all the time, I’d be ecstatic! The predicted position matched the observed position closely enough for every frame for the computer vision to match poses and track perfectly. Unfortunately, this doesn’t happen every time yet, and definitely not with the controllers – although I think the latter largely comes down to the current computer vision having more trouble matching controller poses. They have fewer LEDs to match against compared to the headset, and the LEDs are generally more side-on to a front-facing camera.

Taking a closer look at a portion of that trace, the drift between camera frames when the position is interpolated using the IMU readings is clear.

Headset motion capture – zoomed in view

This is really good. Most of the time, the drift between frames is within 1-2mm. The computer vision can only match the pose of the devices to within a pixel or two – so the observed jitter can also come from the pose extraction, not the filtering.

The worst tracking is again on the Z axis – distance from the camera in this case. Again, that makes sense – with a single camera matching LED blobs, distance is the most uncertain part of the extracted pose.

Losing Track

The trace above is good – the computer vision spots the headset and then the filtering + computer vision track it at all times. That isn’t always the case – the prediction goes wrong, or the computer vision fails to match (it’s definitely still far from perfect). When that happens, it needs to do a full pose search to reacquire the device, and there’s a big gap until the next pose report is available.

That looks more like this

Headset motion capture trace with tracking errors

This trace has 2 kinds of errors – gaps in the observed position timeline during full pose searches and erroneous position reports where the computer vision matched things incorrectly.

Fixing the errors in position reports will require improving the computer vision algorithm and would fix most of the plot above. Outlier rejection is one approach to investigate on that front.

Latency Compensation

There is inherent delay involved in processing of the camera observations. Every 19.2ms, the headset emits a radio signal that triggers each camera to capture a frame. At the same time, the headset and controller IR LEDS light up brightly to create the light constellation being tracked. After the frame is captured, it is delivered over USB over the next 18ms or so and then submitted for vision analysis. In the fast case where we’re already tracking the device the computer vision is complete in a millisecond or so. In the slow case, it’s much longer.

Overall, that means that there’s at least a 20ms offset between when the devices are observed and when the position information is available for use. In the plot above, this delay is ignored and position reports are fed into the filter when they are available. In the worst case, that means the filter is being told where the headset was hundreds of milliseconds earlier.

To compensate for that delay, I implemented a mechanism in the filter where it keeps extra position and orientation entries in the state that can be used to retroactively apply the position observations.

The way that works is to make a prediction of the position and orientation of the device at the moment the camera frame is captured and copy that prediction into the extra state variable. After that, it continues integrating IMU data as it becomes available while keeping the auxiliary state constant.

When the camera frame analysis is complete, that delayed measurement is matched against the stored position and orientation prediction in the state, and the error is used to correct the overall filter. The cool thing is that in the intervening time, the filter covariance matrix has been building up the right correction terms to adjust the current position and orientation.
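
As a rough sketch of that bookkeeping (illustrative only; the real OpenHMD filter state is considerably more involved), the auxiliary slots amount to something like this:

#include <stdint.h>

typedef struct {
    double pos[3];
    double orient[4];          /* orientation quaternion */
} pose_t;

/* One auxiliary slot per in-flight camera frame. */
typedef struct {
    int      in_use;
    uint64_t frame_time;       /* when the camera exposure happened */
    pose_t   predicted;        /* the filter's predicted pose at that moment */
} delay_slot_t;

/* At exposure time: snapshot the current prediction into a free slot,
 * then carry on integrating IMU data as normal. */
static void slot_capture(delay_slot_t *slot, uint64_t now, const pose_t *predicted)
{
    slot->in_use = 1;
    slot->frame_time = now;
    slot->predicted = *predicted;
}

/* When the (late) vision result arrives: the error between the observed pose
 * and the stored prediction is what drives the filter correction, via
 * whatever update routine the filter provides. */
static void slot_apply(delay_slot_t *slot, const pose_t *observed,
                       void (*correct)(const pose_t *predicted,
                                       const pose_t *observed))
{
    if (!slot->in_use)
        return;
    correct(&slot->predicted, observed);
    slot->in_use = 0;
}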

Here’s a good example of the difference:

Before: Position filtering with no latency compensation
After: Latency-compensated position reports

Notice how most of the disconnected segments have now slotted back into position in the timeline. The ones that haven’t can either be attributed to incorrect pose extraction in the computer vision, or to not having enough auxiliary state slots for all the concurrent frames.

At any given moment, there can be a camera frame being analysed, one arriving over USB, and one awaiting “long term” analysis. The filter needs to track an auxiliary state variable for each frame that we expect to get pose information from later, so I implemented a slot allocation system and multiple slots.

The downside is that each slot adds 6 variables (3 position and 3 orientation) to the covariance matrix on top of the 18 base variables. Because the covariance matrix is square, the size grows quadratically with new variables. 5 new slots means 30 new variables – leading to a 48 x 48 covariance matrix instead of 18 x 18. That is a 7-fold increase in the size of the matrix (48 x 48 = 2304 vs 18 x 18 = 324) and unfortunately about a 10x slow-down in the filter run-time.

At that point, even after some optimisation and vectorisation on the matrix operations, the filter can only run about 3x real-time, which is too slow. Using fewer slots is quicker, but allows for fewer outstanding frames. With 3 slots, the slow-down is only about 2x.

There are some other possible approaches to this problem:

  • Running the filtering delayed, only integrating IMU reports once the camera report is available. This has the disadvantage of not reporting the most up-to-date estimate of the user pose, which isn’t great for an interactive VR system.
  • Keeping around IMU reports and rewinding / replaying the filter for late camera observations. This limits the overall increase in filter CPU usage to double (since we at most replay every observation twice), but potentially with large bursts when hundreds of IMU readings need replaying.
  • It might be possible to only keep 2 “full” delayed measurement slots with both position and orientation, and to keep some position-only slots for others. The orientation of the headset tends to drift much more slowly than position does, so when there’s a big gap in the tracking it would be more important to be able to correct the position estimate. Orientation is likely to still be close to correct.
  • Further optimisation in the filter implementation. I was hoping to keep everything dependency-free, so the filter implementation uses my own naive 2D matrix code, which only implements the features needed for the filter. A more sophisticated matrix library might perform better – but it’s hard to say without doing some testing on that front.

Controllers

So far in this post, I’ve only talked about the headset tracking and not mentioned controllers. The controllers are considerably harder to track right now, but most of the blame for that is in the computer vision part. Each controller has fewer LEDs than the headset, fewer are visible at any given moment, and they often aren’t pointing at the camera front-on.

Oculus Camera view of headset and left controller.

This screenshot is a prime example. The controller is the cluster of lights at the top of the image, and the headset is lower left. The computer vision has gotten confused and thinks the controller is the ring of random blue crosses near the headset. It corrected itself a moment later, but those false readings make life very hard for the filtering.

Position tracking of left controller with lots of tracking loss.

Here’s a typical example of the controller tracking right now. There are some very promising portions of good tracking, but they are interspersed with bursts of tracking losses, and wild drifting from the computer vision giving wrong poses – leading to the filter predicting incorrect acceleration and hence cascaded tracking losses. Particularly (again) on the Z axis.

Timing Improvements

One of the problems I was looking at in my last post is variability in the arrival timing of the various USB streams (Headset reports, Controller reports, camera frames). I improved things in OpenHMD on that front, to use timestamps from the devices everywhere (removing USB timing jitter from the inter-sample time).

There are still potential problems in when IMU reports from controllers get updated in the filters vs the camera frames. That can be on the order of 2-4ms jitter. Time will tell how big a problem that will be – after the other bigger tracking problems are resolved.

Sponsorships

All the work that I’m doing implementing this positional tracking is a combination of my free time, hours contributed by my employer Centricular and contributions from people via Github Sponsorships. If you’d like to help me spend more hours on this and fewer on other paying work, I appreciate any contributions immensely!

Next Steps

The next things on my todo list are:

  • Integrate the delayed-observation processing into OpenHMD (at the moment it is only in my standalone simulator).
  • Improve the filter code structure – this is my first kalman filter and there are some implementation decisions I’d like to revisit.
  • Publish the UKF branch for other people to try.
  • Circle back to the computer vision and look at ways to improve the pose extraction and better reject outlying / erroneous poses, especially for the controllers.
  • Think more about how to best handle / schedule analysis of frames from multiple cameras. At the moment each camera operates as a separate entity, capturing frames and analysing them in threads without considering what is happening in other cameras. That means any camera that can’t see a particular device starts doing full pose searches – which might be unnecessary if another camera still has a good view of the device. Coordinating those analyses across cameras could yield better CPU consumption, and let the filter retain fewer delayed observation slots.

,

Colin CharlesCiao, 2020

Another year comes to a close, and this is the 4th year running I’m in Kuala Lumpur — 2017, 2018, 2019, and 2020… Wow. Maybe the biggest difference is that I’ve been in Malaysia for 306 days, thanks to the novel coronavirus. I have never spent this much time in Malaysia, in my entire life… I want to say KL, but I’ve managed to zip my way around to Kuantan (a lot), Penang, and Malacca. I can’t believe I flew back on February 29 2020 from Tokyo, and never got on a plane again! What a grounded globalist I’ve become.

My travel stats are of course, pretty dismal. 39 days out of the country. Apparently I did a total of 13 trips, 92 days of travel (I don’t know if all my local trips are counted frankly), 60,766km, 17 cities, and still 7 countries :) I don’t even want to compare to what it was like in 2019.

I ended that by saying, “I welcome 2020 with arms wide open.”. I’m not so sure how I feel about 2020. There is life beyond travel. COVID and our reaction to it, really worries me.

KL has some pretty good food. Kuantan has some pretty good people. While in KL, I visited a spin studio at least once per day. I did a total of 272 spin classes over 366 days! Not to forget there was 56 days of complete lockdown, and studios didn’t open till about maybe mid-June… Sure I did do some spin in London and Paris too, but the bulk of all this happened while I was here in KL.

I became reasonably friendlier, I became vulnerable, and like every time you do that, your chances of happiness and getting hurt probably straddle 50:50. Madonna – The Power of Good-bye can be apt.

This is not to say I didn’t enjoy 2020. Glass half full. I really did. Carpe diem. Simplicity is best. If you can follow KISS principles in engineering, why would you pour your entire thought process out and overwhelm the other party?

Anyway, I still look forward to 2021, with wide open arms, and while I really do think the COVID mess isn’t going away and things are going to be worse for many, I will still be focused on the most positive aspects of 2021. And I’ll work on being my old self again ;-)

I also ended the year with a haircut (number 1/0.5 on the sides) on Monday 28 December 2020. Somewhat of an experiment (does CoQ10 help speed up hair growth?) but also somewhat of a reaction to saying goodbye to December 2020.

,

Glen TurnerBlocking a USB device

udev can be used to block a USB device (or even an entire class of devices, such as USB storage). Add a file /etc/udev/rules.d/99-local-blacklist.rules containing:

SUBSYSTEM=="usb", ATTRS{idVendor}=="0123", ATTRS{idProduct}=="4567", ATTR{authorized}="0"



,

Hamish TaylorWattlebird feeding

While I hope to update this site again soon, here’s a photo I captured over the weekend in my back yard. The red flowering plant is attracting wattlebirds and honey-eaters. This wattlebird stayed still long enough for me to take this shot. After a little bit of editing, I think it has turned out rather well.

Photo taken with: Canon 7D Mark II & Canon 55-250mm lens.

Edited in Lightroom and Photoshop (to remove a sun glare spot off the eye).

Wattlebird feeding

Gary PendergastMore than 280 characters

It’s hard to be nuanced in 280 characters.

The Twitter character limit is a major factor of what can make it so much fun to use: you can read, publish, and interact, in extremely short, digestible chunks. But it doesn’t fit every topic, every time. Sometimes you want to talk about complex topics, having honest, thoughtful discussions. In an environment that encourages hot takes, however, it’s often easier to just avoid having those discussions. I can’t blame people for doing that, either: I find myself taking extended breaks from Twitter, as it can easily become overwhelming.

For me, the exception is Twitter threads.

Twitter threads encourage nuance and creativity.

Creative masterpieces like this Choose Your Own Adventure are not just possible, they rely on Twitter threads being the way they are.

Publishing a short essay about your experiences in your job can bring attention to inequality.

And Tumblr screenshot threads are always fun to read, even when they take a turn for the epic (over 4000 tweets in this thread, and it isn’t slowing down!)

Everyone can think of threads that they’ve loved reading.

My point is, threads are wildly underused on Twitter. I think a big part of that is the UI for writing threads: while it’s suited to writing a thread as a series of related tweet-sized chunks, it doesn’t lend itself to writing, revising, and editing anything more complex.

To help make this easier, I’ve been working on a tool that will help you publish an entire post to Twitter from your WordPress site, as a thread. It takes care of transforming your post into Twitter-friendly content, you can just… write. 🙂

It doesn’t just handle the tweet embeds from earlier in the thread: it handles uploading and attaching any images and videos you’ve included in your post.

All sorts of embeds work, too. 😉

It’ll be coming in Jetpack 9.0 (due out October 6), but you can try it now in the latest Jetpack Beta! Check it out and tell me what you think. 🙂

This might not fix all of Twitter’s problems, but I hope it’ll help you enjoy reading and writing on Twitter a little more. 💖

,

Glen TurnerConverting MPEG-TS to, well, MPEG

Digital TV uses MPEG Transport Stream, which is a container format for video designed for lossy transmission, such as radio broadcast. To save CPU cycles, Personal Video Recorders often save the MPEG-TS stream directly to disk. The more usual MPEG is technically MPEG Program Stream, which is designed for lossless transmission, such as storage on a disk.

Since these are both container formats, it should be possible to losslessly and quickly re-code from MPEG-TS to MPEG-PS.

ffmpeg -ss "${STARTTIME}" -to "${DURATION}" -i "${FILENAME}" -ignore_unknown -map 0 -map -0:2 -c copy "${FILENAME}.mpeg"



,

Chris NeugebauerTalk Notes: Practicality Beats Purity: The Zen Of Python’s Escape Hatch?

I gave the talk Practicality Beats Purity: The Zen of Python’s Escape Hatch as part of PyConline AU 2020, the very online replacement for PyCon AU this year. In that talk, I included a few interesting links and code samples which you may be interested in:

@apply

def apply(transform):

    def __decorator__(using_this):
        return transform(using_this)

    return __decorator__


numbers = [1, 2, 3, 4, 5]

@apply(lambda f: list(map(f, numbers)))
def squares(i):
  return i * i

print(list(squares))

# prints: [1, 4, 9, 16, 25]

Init.java

public class Init {
  public static void main(String[] args) {
    System.out.println("Hello, World!");
  }
}

@switch and @case

__NOT_A_MATCHER__ = object()
__MATCHER_SORT_KEY__ = 0

def switch(cls):

    inst = cls()
    methods = []

    for attr in dir(inst):
        method = getattr(inst, attr)
        matcher = getattr(method, "__matcher__", __NOT_A_MATCHER__)

        if matcher == __NOT_A_MATCHER__:
            continue

        methods.append(method)

    methods.sort(key = lambda i: i.__matcher_sort_key__)

    for method in methods:
        matches = method.__matcher__()
        if matches:
            return method()

    raise ValueError("No matcher matched")

def case(matcher):

    def __decorator__(f):
        global __MATCHER_SORT_KEY__

        f.__matcher__ = matcher
        f.__matcher_sort_key__ = __MATCHER_SORT_KEY__
        __MATCHER_SORT_KEY__ += 1
        return f

    return __decorator__



if __name__ == "__main__":
    for i in range(100):

        @switch
        class FizzBuzz:

            @case(lambda: i % 15 == 0)
            def fizzbuzz(self):
                return "fizzbuzz"

            @case(lambda: i % 3 == 0)
            def fizz(self):
                return "fizz"

            @case(lambda: i % 5 == 0)
            def buzz(self):
                return "buzz"

            @case(lambda: True)
            def default(self):
                return "-"

        print(f"{i} {FizzBuzz}")