Reading iCalendar (.ics) files from Outlook on Linux

At $DAYJOB, email is handled through Microsoft’s Office 365, and with that I occasionally get event invitations in Microsoft’s internal format. As I am using an IMAP-based e-mail client (since I cannot stand Outlook Web Access), actually reading those invites can be a bit difficult.

With the default settings, the invitations are presented as a link into the Outlook Web Access client, with only the subject of the event readable (as the email subject). Everything else is completely hidden from the user. Thunderbird does have some built-in code that downloads the calendaring information and displays it to the user, but I am using a different email client and only get the web link.

In the Outlook Web Access settings there is an option to present invites as iCalendar files (MIME type text/calendar, extension .ics). Enabling this changes the emails so that the event text is presented in the message body, but all the important details, such as start time and location, are only present in the iCalendar file. And while the calendar is “readable” in the sense that it is a text file, it is not readable in the sense that it is easy to find out what it says.

I am running Linux on my desktop, and do not have any calendaring software installed, so nothing wants to pick up the .ics file. And reading it in a text editor isn’t easy. There are several timestamps, and it takes a while to figure out that it is the third DTSTART entry that contains the event start time:

$ grep DT attachment.ics
DTSTART:16010101T030000
DTSTART:16010101T020000
DTSTART;TZID=W. Europe Standard Time:20211103T100000
DTEND;TZID=W. Europe Standard Time:20211103T142500
DTSTAMP:20211102T150149Z
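
In hindsight, the two 16010101 entries are not the event at all; they are the recurring transition rules from the VTIMEZONE definition. Restricting the search to the VEVENT block, for example with awk’s range patterns, skips them:

$ awk '/BEGIN:VEVENT/,/END:VEVENT/' attachment.ics | grep DT
DTSTART;TZID=W. Europe Standard Time:20211103T100000
DTEND;TZID=W. Europe Standard Time:20211103T142500
DTSTAMP:20211102T150149Z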

Still, finding software that will just view an .ics file in a readable format isn’t easy. I don’t need calendaring software on my desktop (I do have a calendar app on my phone that I could use, though), but it would be nice to be able to display the invite.

After some intense web searching, I found mutt-ics, a plug-in for the text-mode Mutt e-mail client. I am not using Mutt, but running the script directly on the .ics file did produce readable output:

$ python ./mutt_ics/mutt_ics.py /tmp/attachment857.ics
[...]
Start: Wednesday, 03 November 2021, 10:00 CET
End: Wednesday, 03 November 2021, 14:25 CET

That’s a step forward. The next issue is that I am using a graphical e-mail client, and this is a text-mode script. The e-mail software runs “xdg-open” to open the file, so I had to create a few pieces to get it working. First, a wrapper script that runs the Python script and shows the output using “xmessage” (other software would also work; I have not yet found out how to get xmessage to display UTF-8 text properly, so I might need to replace it eventually):

#!/bin/bash
# Render the invite as text, transcode to Latin-1 (xmessage cannot
# handle UTF-8), and display the result in an xmessage window
python /home/peter/mutt-ics/mutt_ics/mutt_ics.py "$1" | iconv -c -f UTF-8 -t ISO8859-1 | xmessage -file -
exit 0

The next step was to create a .desktop file that registers the script as a handler for the text/calendar MIME type:

$ cat /home/peter/.local/share/applications/view-ics.desktop
[Desktop Entry]
Type=Application
Version=1.0
Name=View iCalendar
Exec=/home/peter/bin/view_ics
Terminal=false
MimeType=text/calendar;
StartupNotify=false

And to tie it all together, I registered it as the default handler for text/calendar by running xdg-mime:

xdg-mime default view-ics.desktop text/calendar
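
The association can be verified with xdg-mime as well:

$ xdg-mime query default text/calendar
view-ics.desktop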

There, now running “xdg-open file.ics” opens an xmessage dialog showing the calendar details in a new window. Managed to get it working just in time; the meeting starts in twenty minutes…

Running memtest86 on a Mac Mini

At $DAYJOB, we are having issues with a Mac Mini that is acting up. It crashed on boot, and re-installing macOS didn’t help: it complained about the file system being damaged no matter whether I reformatted (“erased” in Apple-speak) or repartitioned the disk. The built-in Apple Diagnostics tool crashed after about 16 minutes, so I thought I’d run memtest86+ on the machine. But without a working OS to boot, I was unable to get it up and running, and googling for information didn’t help.

To get it running, I had to create a bootable USB stick, which meant finding a Windows machine and running their USB Key installer. However, the stick did not show up in the list of boot options when booting the Mac Mini with the Option key held down. To find it, I had to install rEFInd on a second USB stick (they have a USB flash image ready for download, so no Windows machine needed).
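
For reference, writing the rEFInd flash image to a stick on Linux is a single dd invocation; the image file name varies with the version, and /dev/sdX is a placeholder for whatever device the stick shows up as:

$ sudo dd if=refind-flashdrive.img of=/dev/sdX bs=4M
$ sync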

With both USB sticks in the Mac, booting with the Option key let me select the rEFInd USB stick, which in turn found the memtest86+ stick as a “Legacy Windows” image. Now the test started fine.

Sound output from the wrong jack

Debian recently released an update to their stable release, version 8.7, and with it an update to a slightly more recent Linux kernel (from 3.2 to 3.16). Well, that would be nice to have, I thought, so I updated my office workstation and rebooted. Everything looked fine; it even picked up and updated the Nvidia graphics driver that I always have problems with. But then, when I tried to play radio over the Internet, the sound suddenly started blaring out from a speaker inside the chassis that I didn’t even know the machine had, instead of from my properly connected speakers.

At first I thought the driver was broken, so I rebooted back to the old kernel. Still the wrong output. Then I turned the power off and back on and started the old kernel: still the wrong output. Strange.

I have an HP Z220 Workstation (from 2013) at the office, with an “Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)” and a Realtek ALC221 codec (as per the output of lspci -v and /proc/asound/card0/codec#0). It took me an hour of intense googling to find the correct set of keywords to find anything useful, but apparently most English-language threads use “jack” for the outputs. I should have known that.

I eventually stumbled on an ArchLinux thread from 2014 which mentioned a tool called hdajackretask that can be used to rearrange the outputs from HDA cards. Debian distributes this utility in the alsa-tools-gui package. After installing the package and changing the output type, I managed to get sound playing through my speakers again.
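
For the record, getting at the tool was as simple as (applying an override later requires root privileges):

$ sudo apt-get install alsa-tools-gui
$ hdajackretask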

Screenshot from hdajackretask, setting “Green Line Out, Rear side” to “Line out (back)” to select the output device on the HDA audio card.

Now to actually get some work done. That is Mondays for you.

The futility of OSX parental control and web browsers

I have kids. Two of them, the youngest is five and the oldest is about to turn eight years old. Since they see me and my wife use a computer regularly, they of course also want to use it. The oldest has access to computers at school, and if they are going to be proficient with computers, they need to start using them at an early age. I have a MacBook Pro that they both have accounts on, both set up with OSX’s default “Parental Control” feature.

That works fairly well when they use the local applications (Photo Booth is a favourite; if I hadn’t blocked it, their little clips would probably have ended up on YouTube if they knew how to upload them). Well, before getting to the applications, there are all these pesky little pieces of software that phone home on every start-up under the guise of doing software updates. No matter how many times I block “Google Software Update” or “Paragon Updater” and the like, every time they log in to their accounts they get a message that they cannot run them. Well, they learn to click “OK” and go on with their lives. Using a web browser is a lot more hassle, though.

I had initially set up a whitelist in the Parental Control settings, to only allow them to access certain web sites. That doesn’t work, since every site in the universe now includes stuff from other places, be it CDNs, Google’s web tracking stuff or a JavaScript library that they are too bored to copy to their own domain. I can live with that, as a lot of it can be blocked with Ghostery or similar, but that is if you can even get to the site.

Trying to even run a web browser on an account that has Parental Control enabled is a chapter in itself. First there is the phone-home auto-update stuff that kicks in every few moments. Then there are the pre-installed shortcuts (at least in Opera) that want to download screenshots to display inside the Speed Dial screen (why can’t they just ship with default images?). Then even typing a web address keeps trying to send every single keystroke to Google, requiring me to close a dialog after every single letter in the URL. In Google Chrome, it seems utterly and completely impossible to disable this behaviour. Opera can disable it, hidden deep inside its configuration options, but then I have to enter a magic key combination to remove the Search field. And fight the blocked-URL pop-ups to remove the pre-installed Speed Dials.

I need to try out Vivaldi for the kids’ accounts. I know it can be configured to be less intrusive, and it doesn’t send all keystrokes to the search engine. When I set up the account for my oldest daughter there wasn’t a stable version around, but it should be fine now.

End of an era

The day had to come, I knew it; I just postponed it for as long as possible. But now it is time to move on, time to close down my Fidonet system for good, over twenty years after I set up my first one. My Fidonet history has gone through a lot of different setups: starting out reading off BBSes using Blue Wave, through a simple point setup with Terminate on MS-DOS, moving on to an OS/2-based system using SquishMail with timEd and Fleet Street as readers, even serving as the Swedish shareware registration agent for Fleet Street for a few years at the Fidonet peak in the late 1990s.

I then moved to a Linux-based system using CrashMail II (for a while running timEd through an MS-DOS emulator under Linux, before GoldEd was ported to Linux), and lately a Usenet news reader talking to the JAMNNTPd software. During my tenure as a Debian developer I had a lot of this stuff packaged for Debian, but I haven’t checked if the packages are still there. I have just been using the binaries I compiled several years ago, but lately they have simply stopped working. Maybe my message bases have broken completely, I don’t know, and considering how seldom I read them, I figured now was the time to shut the system down for good.

It is still a bit sad. I remember the peak around 1996–1998, when I moderated a chat area and had to enforce a limit of 50 posts per day per author, or it would overflow completely (remember, this was at a time when it could take a day or three for messages to propagate). Now I don’t know how many years it has been since anyone even posted a single message in any of the derelict Swedish areas. There is some activity in the international areas, but not enough to make me stay.

Good-bye, Fidonet!

OS X Time Machine recovery does not find my USB disk

Today the root file system on my MacBook developed an “Invalid index key” error that I was unable to fix by booting into recovery mode and using the Disk Utility, or even by booting into single-user mode and using the fsck_hfs tool, no matter what flags I threw at it. Paragon HFS for Windows could still read (and write) the partition from my Windows installation, so the file system was readable, but I could not boot from it.

After a few hours of trying to fix the problem, I simply gave up. I saw several mentions of a tool called Disk Warrior that supposedly can fix a lot of the problems fsck can’t, but I was a bit reluctant to throw over 100 US dollars at a tool without knowing whether it would make any difference.

I do have backups, even if the MacBook isn’t set up to do daily backups like most of my machines are (I never got the Time Machine interface in my Synology NAS to work with it), so the last backup I had was from December last year. Better than nothing, and I don’t really keep that many important files on the laptop – most of the important files are shared with other computers (using Git version control to synchronize), or in Dropbox.

So I booted from the recovery partition, selected Restore from Time Machine and … my backup didn’t appear.

So I rebooted. Still nothing.

Rebooting again, this time from the backup disk itself (which has a convenient OS image installed on it). Still no disk. The only thing listed was my (failed) attempt at a backup on the Synology NAS (and I was unable to connect to it, just like Time Machine itself was).

Meh.

Then it struck me: what if I power off the Synology and then open the recovery program? So that is what I tried, and there it was! The recovery tool finally let me select the disk that was physically connected to the machine, rather than the network share over WiFi. (Still, it is quite impressive that it found the network share at all when booting from the recovery partition on the backup disk; I must say Apple are rather good at making these things just work, even if it failed at what I really wanted to do.)

Now the backup is finally restoring. The clock is approaching half past midnight and it is at 7.5 % restored, so I guess I will have to wait until morning to see if it actually worked, but at least it is trying now…

Time to go to sleep.

Making OVF images using Packer

At my $DAYJOB, the need recently arose to not only make our software available as an installer that end-users can install on their machines, but also to provide pre-built OVF (Open Virtualization Format) images, mainly targeted towards customers running VMware vSphere who would rather not run the software on bare metal. They can of course run the regular installer, but providing a pre-installed image cuts deployment time considerably and eliminates many of the mistakes that can be made while performing the installation.

Hunting around for ways to actually generate these images, using some kind of automated procedure since we will regenerate the images several times and in slightly different configurations, I eventually landed on Packer. Packer lets me drive VMware Workstation from a configuration file that lists an ISO image to install from and gives the commands necessary to run the installation automatically.
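
A heavily trimmed sketch of what such a template can look like; every file name and credential here is a made-up placeholder, the boot command depends entirely on the installer being automated, and older Packer versions want the checksum type in a separate iso_checksum_type field:

{
  "builders": [{
    "type": "vmware-iso",
    "iso_url": "product-install.iso",
    "iso_checksum": "sha256:0123...",
    "ssh_username": "root",
    "ssh_password": "secret",
    "http_directory": "http",
    "boot_command": ["<esc> auto url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<enter>"],
    "shutdown_command": "poweroff"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "cleanup.sh"
  }]
}

The whole thing is then driven by a single “packer build template.json”.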

One of the issues with doing this is that most installations add unique identifiers to the image, and we do not want that. For instance, SSH host keys are generated, as are MAC addresses for the network cards, and other unique state is dropped in various places. Fortunately, I was not the first one to face this problem, so it was fairly easy to find a solution that cleans up the generated image. In addition to that, I had the post-install script install VMware Tools in the virtual image, and then go on to remove various UUIDs and MAC addresses from the generated VMware configuration file.
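
Our actual cleanup script is specific to our product, but the general shape, assuming a Debian-style guest, is something like this:

#!/bin/sh
# Remove the generated SSH host keys; a first-boot hook has to
# regenerate them on the deployed machine
rm -f /etc/ssh/ssh_host_*
# Drop the udev rule that has pinned the MAC address to eth0
rm -f /etc/udev/rules.d/70-persistent-net.rules
# Clear logs and shell history so the image starts out clean
find /var/log -type f -delete
rm -f /root/.bash_history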

The result of running Packer is, however, still a VMware image. Packer does have a builder that can output OVF, but that one uses Oracle VirtualBox instead. OVF is supposed to be platform-independent, but there are enough differences between how the images are built to create trouble if we use the wrong build platform. Instead we landed on running VMware OVF Tool on the generated VMware image, converting it into an OVF archive (.ova). This is the part that takes the longest in our build process, which starts by generating the ISO to install from on the fly. But in the end, we have an OVA file that can be imported into VMware (vSphere, Workstation and Player all work fine) and be up and running in under two minutes.
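
The conversion itself is a single ovftool invocation on the .vmx file that Packer leaves in its output directory (paths here are placeholders):

$ ovftool output-vmware-iso/product.vmx product.ova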

A replacement for the Opera IRC client

After transitioning from Opera to Vivaldi as my primary browser, one of the features I have been missing is the IRC client. Granted, I am not a heavy IRC user, but there is this one channel I monitor where some of my friends and former Opera colleagues hang out. I liked the simplicity of the Opera IRC client, and I am not much of a fan of the terminal-based ones.

One of my friends pointed me towards WeeChat, which is an extensible chat client. In its basic configuration it runs in a terminal and looks like any old IRC client. However, it has support for plug-ins that let it connect to many different systems (although I have as yet only set up IRC), and also for relays, making it possible to use other front-ends.

One such front-end is Glowing Bear, which is web-based. It connects to a WeeChat instance that has a relay set up. By default that relay is unencrypted, which is not very safe, but it does support SSL, and I found a wonderful guide describing how to set that up with a proper certificate. I configured that, and dropped a copy of the Glowing Bear files onto a website of my own (not really necessary, since the connection goes directly from the browser to WeeChat, but it is nice to know exactly what I am connecting to). With the certificate I got by following that guide, I could also serve my copy over HTTPS.
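
For reference, the relay setup inside WeeChat boils down to a handful of commands; the certificate path is WeeChat’s default location, the port is arbitrary, and the password is of course a placeholder:

/set relay.network.password "secret"
/set relay.network.ssl_cert_key "%h/ssl/relay.pem"
/relay sslcertkey
/relay add ssl.weechat 9001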

Now I have a replacement for the IRC client. Now I just need to replace the mail client…

Just a simple mail server installation

So, the mail server at work died on Wednesday. It was running Microsoft Exchange and died so utterly and completely that even with several hours of premium support from Microsoft, it could not be brought back up again. Being one who comes in fairly early in the morning, and who already manages a few internal servers, I was asked to set up a new box using Linux or whatever.

Can’t be too difficult, huh?

Well, that depends. In this case, it needed to authenticate users against an Active Directory server and support the mail aliases set up in its user database. After a fair amount of googling, I found a few guides that helped me along the way. I started out with iRedMail and continued by configuring it to talk to the Active Directory server. Never having worked with AD or Kerberos before, it took me quite some time to get Kerberos working (tip: have a look at what the DNS thinks the domain name of the KDC is; in our case it was “BT.LOCAL” in all uppercase, and using anything else as the Kerberos realm gave me nothing but cryptic error messages).

I had some hurdles to overcome. Getting Postfix to authenticate against Active Directory’s LDAP server was fairly easy once I a) had an unprivileged account that could do LDAP lookups (using the “Administrator” account for that does not work), and b) reduced the LDAP query so that it would actually find the users I was looking for (tip: make a dump of the LDAP directory and look for the lowest common denominator among the lookup keys).
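
For illustration, a Postfix LDAP lookup table against AD ends up looking roughly like this (host name, bind account and password are placeholders); the proxyAddresses attribute is where Active Directory keeps the extra aliases:

$ cat /etc/postfix/ldap-aliases.cf
server_host = ldap://dc1.bt.local
search_base = dc=bt,dc=local
version = 3
bind = yes
bind_dn = cn=maillookup,cn=Users,dc=bt,dc=local
bind_pw = secret
query_filter = (|(mail=%s)(proxyAddresses=smtp:%s))
result_attribute = sAMAccountName

It is then hooked into Postfix with something like “virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf” in main.cf.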

Then I had the problem that Dovecot, which handles local mail delivery and IMAP/POP, could not read the mail it had stored in the mailboxes. It turned out that since I had set up Kerberos so that the AD users were available as Unix users, and had the recipient domain (“bt.local” from above) in “mydestination”, Postfix would always run the LDA setuid as the local user. I had to remove the domain from “mydestination” and add it to the list of virtual domains for that to work.
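
In main.cf terms the fix amounts to something like this (a sketch, with the “bt.local” domain from above):

mydestination = $myhostname, localhost.$mydomain, localhost
virtual_mailbox_domains = bt.local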

All in all, it took me about a day and a half to get the thing set up. Not bad for a first time. I did set up Git to version-control all the important configuration files, so that I can track my future mistakes and revert to a working configuration.

Now to get the SMTP SASL configuration working…