Lineage 15.1


In my previous post about replacing Windows Phone I eventually installed Lineage 14.1 (Android 7.1.2) on my Motorola G4 Plus.  It’s been a while and things have been going well with it, but Lineage released 15.1 (Android 8.1) a few weeks ago and the G4 didn’t make the list of first devices supported by 15.1.

Of course, the G4 is a very popular phone for Lineage and there is no doubt that it will eventually be supported, but I picked it up in the first place to give Lineage a try, and after almost a year of use it was time to move on.

Samsung Galaxy Tab S2

Before I get to that though, about six months ago I picked up a Samsung Galaxy Tab S2 to replace my Surface 3 as my primary tablet.  The S2 is well supported by Lineage and was included in the initial list of devices for 15.1, so it was the first device I upgraded.

Lineage has a nice built-in auto-updater and it has worked well for me over the last year, but the upgrade to 15.1 is a manual process: basically you need to wipe the system partition and then re-flash Lineage.
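For anyone curious, the manual process from a PC looks roughly like this (a sketch assuming adb is installed and you’re sideloading from recovery; the zip name is a placeholder for whatever build you download):

# reboot the tablet into recovery (TWRP in my case)
adb reboot recovery
# in TWRP: Wipe->Advanced Wipe->System to wipe the system partition,
# then Advanced->ADB Sideload to accept the new build:
adb sideload lineage-15.1-<date>-nightly-<device>-signed.zip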

That all went fine and the tablet booted up without a problem, right up until the launcher tried to load and failed.

I poked around a bit to see if I could download a different launcher and run it, but that failed as well.

I then tried to downgrade to 14.1 again, but that simply never got past the Lineage boot screen.

That turned out to be more of a problem than you might think, as the S2 doesn’t do a hard power off by holding down the power button.  I looked around the net and there didn’t seem to be any info on how to do it; the few suggestions I found basically said to wait for the battery to run down.

That didn’t seem like something I wanted to wait on, so I tried a few key combos and found that holding the volume down and power buttons for 10 seconds did the trick.

Once it was back into recovery mode, I did a clean wipe of the whole device and reinstalled 15.1.

After rebooting, everything worked as expected, including the launcher.

Lineage 15.1

Lineage is pretty faithful to the stock Android look and feel, but there are some enhancements that Lineage has added over the years.  A few of these are missing in 15.1 but are being worked on and should arrive in upcoming releases.

The most notable changes I’ve noticed so far are:

  • Trebuchet (the Lineage launcher) has had a pretty major overhaul, with the new “dots” feature fully supported.
  • The notification pull-down is now semi-transparent (I’m not a big fan of this).
  • The settings icon in the notification area has been moved to below the quick actions instead of above them.
  • All-new icons… again, not sure if I like them yet.
  • Icon styles: don’t like them round?  No problem, though I find it odd that “square” isn’t an option.

OnePlus 5T

To replace my G4 I decided on the OnePlus 5T, for a few reasons:

  • Reasonably priced.
  • High end specs.
  • Nice “almost” full-body OLED screen; I really don’t like the idea of notches.
  • Carrier support.
  • Unlocking the bootloader is quick and easy.
  • True dual SIM support.

The phone is incredibly well built; it feels solid and comfortable in my hand, and so far it’s been great.

I went with the 8GB/128GB model, as I’ve always preferred to have lots of local storage.  It would have been nice to have an SD card slot, but it’s not a deal breaker.

The only thing that it’s really missing is wireless charging.

Installing Lineage was pretty straightforward: simply follow the Lineage wiki instructions, with two minor points:

  • Before unlocking the bootloader, you have to enable developer options and enable the unlock function.
  • After unlocking the bootloader, the 5T leaves the data partition unformatted; if you try to boot Lineage with it unformatted, you’ll get punted back to recovery mode without an explanation (a rough sketch of both steps follows below).
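For reference, those two steps looked roughly like this on my end (a sketch; menu names paraphrased from memory):

# enable Developer options (tap Build number 7 times under Settings->About phone),
# then turn on "OEM unlocking" in Developer options before running:
adb reboot bootloader
fastboot oem unlock
# after flashing a recovery, format the data partition from TWRP
# (Wipe->Format Data, type "yes" to confirm), then boot Lineage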

After that it was just a matter of setting up the phone with apps.

There was one other thing to do: since I don’t use Google services on my phone, I used SMS Backup and Restore to move my SMS messages and call log across from the G4.

I’ve been using the phone for a few days and all is good. Next week I’ll give an update on the software that I’m currently using on my phone.

ProxMox Updates


As mentioned in my previous article, ProxMox uses a subscription model to pay for development; however, they do support home/lab usage via nagware.

However, by default the base install assumes you’re going to have a subscription, which makes for a recurring error each time you try to update the server with “apt-get”:

W: The repository 'https://enterprise.proxmox.com/debian/pve stretch Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/stretch/pve-enterprise/binary-amd64/Packages 401 Unauthorized
E: Some index files failed to download. They have been ignored, or old ones used instead.

This happens because, without a subscription, the standard ProxMox update repo is unavailable.

Fortunately, they do have a public repo, with instructions given on this support page.

The basic steps are as follows:

  1. Open a shell on your ProxMox Server.
  2. Go into “/etc/apt/sources.list.d”.
  3. Copy “pve-enterprise.list” to “pve-enterprise-no-sub.list”.
  4. Edit “pve-enterprise-no-sub.list” and replace the contents with the lines from the support page (shown below).
  5. Edit the “pve-enterprise.list” file and comment out the repo line (use a #); alternatively, you can just delete this file.
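For reference, at the time of writing (ProxMox VE 5.x on Debian stretch, matching the error above) the no-subscription line from the support page looks like this, though you should grab the current version from the page itself:

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription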

Then re-run “apt-get update && apt-get upgrade” and you’re off to the races.

ProxMox Mail Config


After getting ProxMox all set up, there was one last thing to investigate: mail was not flowing from the ProxMox host.  Taking a quick look, two changes were required.

  1. The Postfix aliases database wasn’t initialized, so it was throwing an error; simply running “newaliases” on both nodes resolved it.
  2. As port 25 is blocked by my host, all mail has to go through my ISP’s mail servers.  Since I already have a Postfix server configured to handle this, setting the “relayhost” value in “/etc/postfix/main.cf” resolved the issue easily enough (see the snippet below).
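For what it’s worth, the two fixes boil down to something like this (the relay host is a placeholder; use your ISP’s server or your own relay):

# 1. initialize the aliases database (run on each node)
newaliases

# 2. in /etc/postfix/main.cf, route outbound mail through the relay:
relayhost = [smtp.example.net]

# reload Postfix to pick up the change
postfix reload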

Backups in ProxMox


In Hyper-V, taking backups was, well, not very pleasant.  It was fine if you had a dedicated backup solution, but for a small set of VMs like mine, it basically required taking snapshots of each individual VM and exporting them.

Microsoft didn’t make the process very easy, but with a bit of PowerShell scripting I had a pretty good system in place that kept the VMs down only for as long as it took to export the disk images.

ProxMox, on the other hand, has a built-in VM backup system that looks like it will automate most of what I had scripted.

However, the first thing that becomes a problem is that by default, ProxMox stores the backup images on the boot drive, which is only 80GB in size.  I installed a 1TB SSD to host my VMs, so that math doesn’t work very well 😉

I upgraded my main data drives a while ago from 2TB HDDs to 6TB HDDs, and those old drives have just been lying around, so I decided to add them to the host servers as backup space.

My first attempt was to create a new LVM storage group and add it to the ProxMox server as new storage, but it turns out that doesn’t work, as you can’t use an LVM disk for backups.  Instead I had to simply create an ext4 file system on the disks and mount them on the ProxMox host server.  I found this article useful, with the exception that it’s a little old and still suggests ext3.
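The drive prep itself is standard Linux fare; assuming the old drive shows up as /dev/sdb (check with “lsblk” first, the device name will vary), it goes roughly like this:

mkfs.ext4 /dev/sdb1                # create the file system (partition the disk first if needed)
mkdir -p /mnt/backups              # mount point on the ProxMox host
mount /dev/sdb1 /mnt/backups
# add a line to /etc/fstab so the mount survives reboots, e.g.:
# /dev/sdb1  /mnt/backups  ext4  defaults  0  2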

Once the drive was mounted, going into the ProxMox web interface and selecting Datacenter->Storage->Add->Directory brought up the standard dialog for adding storage to the system.  One item to note: since I have two nodes in my cluster, you have to connect to the node that has the local drive on it.

There are some standard fields to fill in, like the ID, which I labeled “backups”.  Note that creating a directory store in ProxMox assumes it is available on all nodes, not just the current one.  Make sure to select the right content type; I only selected “VZDump backup files”, but you might also want to support other things like ISO images or container templates.

The other item to note is the “Max Backups” field, which sets the maximum number of backup images allowed on the disk for each VM.  Once this number is exceeded, the old backup files are removed automatically (at least if I’m reading the documentation correctly).  I set this to 3.
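If you like to double-check your work from the shell, the resulting entry in “/etc/pve/storage.cfg” should look roughly like this (assuming the mount point above and my “backups” ID):

dir: backups
        path /mnt/backups
        content backup
        maxfiles 3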

The final step is to set up a backup schedule.  To do this, select Datacenter->Backups->Add.

Since this is backing up to local storage, the first option, “Node”, should be set to the physical node you’re backing VMs up from.  Then you can simply select your newly added backup storage, the time you want the backups to start, and which VMs to back up.  The rest of the options are straightforward, and now you have a weekly backup of all your VMs.
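The schedule drives ProxMox’s “vzdump” tool under the hood, so you can also kick off a one-off backup from the shell; something like this backs up a single VM to the new storage:

vzdump 102 --storage backups --mode snapshot --compress lzo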

I also keep a copy of my backups offsite, which is easy to do: you can just sftp a copy of the backups from the /mnt/backups/dump directory and move them to a USB drive or other storage to take with you.
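For example, something like this run from another machine pulls the dump files down to a USB drive (the host name and paths are placeholders; vzdump names the files after the VM ID and a timestamp):

scp root@pve1:/mnt/backups/dump/vzdump-qemu-102-\*.vma.lzo /media/usb-drive/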

One last point: as mentioned above, my main file server has a 6TB drive, which of course can’t be backed up to a 2TB disk.  I have a separate set of backup disks for it, so I don’t want ProxMox to back up that volume.  Fortunately, if you go to [Your VM]->Hardware->[Disk you want to exclude]->Edit, you can select the “No Backup” checkbox and it will not be included in the ProxMox backups.

Moving VMs to ProxMox


In my previous post I installed ProxMox as my hypervisor, but I didn’t mention anything about moving VMs from Hyper-V over to it.  Well, here’s how I went about the process.

First, I tracked down this article, which has a pretty good detailed guide of the process.  Basically it comes down to the following:

  • Export the VM from Hyper-V to get a clean snapshot.
  • Transfer the exported .vhd files up to the ProxMox host.
  • Create a new VM in ProxMox with the same CPU/memory/disks.
  • Convert the .vhd files to a format KVM can use.
  • Start the VM and make any other changes, like IP, DNS, etc., that are required if the OS doesn’t detect the changed hardware properly.

There are a few caveats:

  • While exporting from Hyper-V, the completion percentage is displayed in the “Status” column of the Hyper-V Manager.  However, that column is not always visible, depending on the display resolution and how large the window is, so make sure to expand the Hyper-V Manager until you can see it.
  • In the article, the disk image is converted to a .qcow2 file; however, for me the default format of ProxMox disks is actually raw, so replace “-O qcow2” with “-O raw”.
  • Similarly, the location of the disks in the article is incorrect; it’s actually /dev/pve, though that’s itself a link back to /dev/dm-?, and you can use either.

That makes the conversion command something like:

qemu-img convert -O raw ~/BootDisk.vhd /dev/pve/vm-102-disk-1

Overall the migration process is quite easy, if a little slow, between transferring the .vhd files and converting them to the new format.