Taking Notes.

I’ve always taken a keen interest in note-taking in general and recently I’ve been obsessed with how other people do it.

There tends to be a push toward avoiding information silos while at the same time making your content available, or at least putting it online where it can be found (even if you aren’t specifically writing for mass consumption).

As for me, I keep see-sawing between the convenience and ubiquity of online silos and the “privacy” of self-hosting.

I’ve gone from keeping all my notes in a single text file (not quite as ambitious as this, but close) to a private DokuWiki, back to individual text files, to moving them all online to my Google Drive.

Currently, I am migrating most of my notes out of Google and into a single folder of Markdown files managed by Joplin, an open-source note solution that allows you to sync your notes to a back-end you control (Dropbox or, better yet, Nextcloud).

I like self-hosting because

1. There’s a matter of some pride in saying “I built/maintain this” (even if it is an open-source project from GitHub) and

2. There’s no chance of a big company closing up shop or switching from free to a paid or subscription model on you.

I’ve discovered that while I’d love to say I’ve been managing my notes in a single location for years, the truth is that what keeps my note-taking process interesting and useful is migrating it periodically. Migrating allows me to update notes, combine notes and prune old notes. With each iteration of my notes system I have left a little pile of notes behind: a collection of notes that no longer serve me and can be discarded.

But make no mistake: so far, self-hosting is still a lot of work. Maintaining your own backups, updating your platforms, patching bugs and working around limitations all take up a lot of time. But in the end – is it all worth it?

I’m still trying to answer that question.

Until the next system comes along.

Read My Feed

Two weeks ago I stopped using TinyTinyRSS as my RSS feed reader. I love having self-hosted apps and the privacy and control they provide, but honestly I wasn’t really worried about someone snooping my RSS feeds or contaminating my OPML.

I’ve been making a move back to some cloud-based apps recently to avoid the maintenance and backup space that my self-hosted apps require. I decided to try two popular feed readers and see how they compared. I left myself a reminder in Google Keep (see? NOT everything is self-hosted) to give it two weeks to decide which feed reader I was going to stick with. I disabled my TTRSS instance and opened two new tabs with Feedly and Inoreader.

The first thing I noticed was how similar they both looked to my default layout in TTRSS. This was going to be easy – they look almost the same! I imported my OPML file and away I went.

While both apps performed admirably, I was struck by how much MORE they both wanted to do for me.

Feedly wanted to help me find new feeds in which I might be interested. It offered a secure area for my Private Business Content. It allowed me to perform power searches for keywords and let me mark articles to be read later. The free account limited me to 100 feeds, but I’ve only got about 75, so I was OK.

Inoreader let me do most of the same things and offered unlimited feeds. I could also automatically tag and organize items as they came in. Inoreader also keeps my old articles (Feedly stops indexing articles older than 30 days).

Both of these apps offer a paid version with even more features. Features I didn’t investigate because I wasn’t interested in paying for something that was working just fine on my self-hosted app.

So today my reminder popped up. I was to decide which app I was going to stick with.

What did I decide?

I decided to re-enable my self-hosted TTRSS instance and go back to something that was never broken in the first place.

While TTRSS may not offer all the bells and whistles of the competition (maybe? – TTRSS does have a fair amount of plug-ins), it does exactly what I want for the price I want to pay. It’s performed admirably for three years or more and I can’t think of a better reason to change. I’ve checked out some options and I highly recommend each of the above to anyone who doesn’t have the desire or server space to self-host.

But for me, it’s back to TTRSS. Another open-source success story.

How to install WebDAV under Apache on Ubuntu (16.04)

HOW-TO DISCLAIMER: Most of the “how-to” posts are copy/pasted from a personal notes blog I keep for myself, and as such they sometimes presume familiarity with concepts that might not be obvious. I’d like to make my notes and how-tos more informative and accessible to a wider audience, so please let me know if something is not clear (or just plain wrong).

Enabling modules

The first thing you must do is enable the necessary modules. Open a terminal window and issue the following commands:

sudo a2enmod dav
sudo a2enmod dav_fs

Restart the Apache server with this command:

sudo systemctl restart apache2
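To confirm the modules actually loaded, you can list Apache’s active modules – a quick sanity check, assuming a standard Ubuntu Apache install:

```shell
# List loaded modules and filter for the DAV ones;
# dav_module and dav_fs_module should both appear
apache2ctl -M | grep dav
```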

Virtual host configuration

The next step is to create everything necessary for the virtual host. This will require creating specific directories (with specific permissions) and editing an Apache .conf file.

Let’s create the necessary directories. For simplicity, I’ll create a folder called webdav. From the terminal window, issue this command:

sudo mkdir -p /var/www/html/dav

Now we’ll change the owner of that directory to www-data with this command:

sudo chown www-data /var/www/html/dav

The next step is to create a .conf file that will help make Apache aware of the virtual host. For this, we’ll create a new .conf file called /etc/apache2/sites-available/webdav.conf. The contents of this file will be:

<VirtualHost *:80>

ServerAdmin chuck@planethawleywood.com
ServerName webdav.planethawleywood.com
DocumentRoot /var/www/html/dav/

<Directory /var/www/html/dav/>
    Options Indexes MultiViews
    AllowOverride None
    DAV On
    AuthType Basic
    AuthName "webdav"
    AuthUserFile /var/www/html/dav/passwd.dav
    Require valid-user
</Directory>

</VirtualHost>

Save and close the file.

Now we enable the webdav.conf file with:

sudo a2ensite webdav.conf
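Before restarting, it doesn’t hurt to let Apache check the configuration for syntax errors:

```shell
# Should report "Syntax OK" if webdav.conf parses cleanly
sudo apache2ctl configtest
```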

Before restarting Apache, we need to create the WebDAV password file with this command (replace USER with a valid username on your system):

sudo htpasswd -c /var/www/html/dav/passwd.dav USER

NOTE: if you need to update or recreate a user’s WebDAV password, run the same command without the -c option (-c creates a new file, overwriting any existing entries).

When prompted enter the password for USER.

Next we must change the permissions of the newly created passwd.dav file so that only root and members of the www-data group have access to it. You’ll do so with the following commands:

sudo chown root:www-data /var/www/html/dav/passwd.dav
sudo chmod 640 /var/www/html/dav/passwd.dav

Restart apache with this command:

sudo systemctl restart apache2

The WebDAV system is ready to test.

Testing your setup

There’s an easy-to-use tool for testing WebDAV. Install the tool with this command:

sudo apt-get install cadaver

Once installed, issue this command (replace SERVER with the hostname or IP address of your server):

cadaver http://SERVER

You should be prompted for a username/password. Enter the USER used when setting up WebDAV and the associated password. If the cadaver command succeeds, you’ll land at the cadaver prompt (dav:/>). Type exit to leave cadaver.

Congratulations, WebDAV is working on your Ubuntu server. You can now use whatever tool you need (connect via file managers, web browsers, etc.) to connect to your WebDAV server.
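If you’d rather not install cadaver, a plain curl round-trip works too. SERVER and USER below are placeholders for your own values:

```shell
# Upload a small file to the DAV share (curl prompts for the password)
curl -u USER -T test.txt http://SERVER/test.txt
# Fetch it back to confirm the round trip
curl -u USER http://SERVER/test.txt
```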



Cherry Music Server on Ubuntu

I have a ton of ripped music. After setting up and getting used to MPD, today I discovered CherryMusic Server.

This is what I was looking for originally. I’m sure I’ll use MPD for what it’s good at – but this is a better interface than Plex for listening to streaming music from home (so far).

Get CherryMusic

CherryMusic depends on Python. Although it also runs with Python 2, Python 3 is recommended for best performance and all features.

sudo apt-get install python3

CherryMusic has several optional dependencies, which should be installed for a seamless user experience:

sudo apt-get install mpg123 faad vorbis-tools flac imagemagick lame python3-unidecode

Optionally, you can replace the packages mpg123, faad, vorbis-tools, flac and lame with ffmpeg if you like. The advantage with ffmpeg is that you can also decode WMA files. If you are not running a headless server, consider installing “python3-gi”, which allows you to use CherryMusic’s GTK system tray icon.

Configuration and setup

For security reasons it is highly recommended to run CherryMusic under a dedicated Linux user. First, create the user “cherrymusic”:

sudo adduser cherrymusic

Now, switch to the newly created user:

su cherrymusic

There are two branches of CherryMusic: the stable main release (“master”) and the development version, called “devel”. I highly recommend the development branch, as it often is several steps ahead of the master release and provides all the new features. In this guide I also chose the devel branch. However, if you insist on using the master release, simply replace all occurrences of devel with master.

Now, get CherryMusic:

git clone --branch devel https://github.com/devsnd/cherrymusic.git ~/cherrymusic-devel

This command will download the devel branch of CherryMusic and place it in your home directory.

Due to a shortcoming in Debian, the repositories do not provide a recent version of the package cherrypy, and the package stagger is not available in the Debian repositories at all. However, they can be fetched locally and simply put into the CherryMusic directory. CherryMusic has a built-in function that checks whether those two packages are available on the operating system and, if necessary, offers to automatically download and store them locally in the CherryMusic directory – without installing them on your system. This provides a clean way to get CherryMusic running on Debian. Simply change to the CherryMusic directory and start the server application with the --help switch (you will be prompted then):

cd cherrymusic-devel
python3 ./cherrymusic --help

Now, do the initial start-up to generate the configuration and data files in your home directory:

python3 ./cherrymusic

This creates the configuration file ~/.config/cherrymusic/cherrymusic.conf and the directory ~/.local/share/cherrymusic/, where the user data is stored.

Before you head on, edit the configuration file to point to your music library and make any other changes.
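For reference, the setting to look for is the media base directory in cherrymusic.conf; as I recall it lives in the [media] section, and the path below is just an example:

```ini
[media]
basedir = /home/cherrymusic/music
```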

CherryMusic uses a database to search and access files in your music collection. Before you can use CherryMusic, you need to do an initial file database update:

python3 ./cherrymusic --update

To reflect changes in your music collection, you need to repeat this step every time you make changes to your music collection. On a standard computer, even very large music collections should not take longer than a few minutes.
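If you don’t want to remember to do this by hand, a crontab entry under the cherrymusic user can refresh the database on a schedule. The time and paths here are just an example:

```
# refresh the CherryMusic file database nightly at 03:30
30 3 * * * cd /home/cherrymusic/cherrymusic-devel && python3 ./cherrymusic --update
```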

Create a systemd service

CherryMusic doesn’t ship with a daemon, but we can fake it with a systemd unit to start the service.

Make sure you are logged in as someone with root or sudo rights (exit the cherrymusic user account if you’re still connected) and create the file /etc/systemd/system/cherrymusic@.service with the following contents:

[Unit]
Description=CherryMusic server
Requires=network.target
After=network.target

[Service]
User=%I
Type=simple
ExecStart=/home/cherrymusic/cherrymusic-devel/cherrymusic
StandardOutput=null
PrivateTmp=true
Restart=always

[Install]
WantedBy=multi-user.target

Enable and start CherryMusic

Enable the systemd service to start on each boot:

sudo systemctl enable cherrymusic@cherrymusic

Start the service

sudo systemctl start cherrymusic@cherrymusic
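To confirm the service came up (and to see why if it didn’t), systemd’s usual tools apply:

```shell
# Check the unit's state; "active (running)" is what we want
sudo systemctl status cherrymusic@cherrymusic
# Recent log output from the unit
sudo journalctl -u cherrymusic@cherrymusic --since today
```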

Finishing up

Open a web browser on a computer connected to the same LAN the CherryMusic server is in and go to http://<ip>:<port>, where ip is the IP of the server and port the port specified in the CherryMusic configuration file (defaults to “8080”).

Create an admin user and the basic setup is done.

Postscript

Good news! The above instructions seem to work just fine on Ubuntu’s latest LTS.



Zoho and Postfix

I’ve recently migrated to Zoho to host mail on a domain I own. I also have a couple of VPSs that I’d like to send email from. The sensible thing would be to relay through my Zoho account, right? Well, it’s not that easy. This one took a while…

Pre-requisites

I am configuring this on a Fedora 23 server, but the dependencies should be the same on any Linux system.

dnf install postfix postfix-pcre cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain

Configuration

First of all you need the Zoho email address you want to use when relaying emails through Zoho.

Let’s say that this email address is app@planethawleywood.com

It will have a password as well, say apppassword

When configuring postfix, you edit many files. Let’s see them one by one.

Generic

The file /etc/postfix/generic maps local users to email addresses.

If email is sent to a local user such as root, the address will be replaced with the one you specify.

In my case I have a single line like:

root app@planethawleywood.com

After editing this file, remember to hash it by using the command:

postmap /etc/postfix/generic

Password

The file /etc/postfix/password contains the passwords postfix has to use to connect to the smtp server.

Its content will be something like:

smtp.zoho.com:587 app@planethawleywood.com:apppassword

You also need to hash this file:

postmap /etc/postfix/password

tls_policy

The file /etc/postfix/tls_policy contains the policies to be used when sending encrypted emails by using the TLS protocol, the one I’m using in this case. Create this file if it doesn’t exist.

The file contains just this line:

smtp.zoho.com:587 encrypt

By doing so we force the use of TLS every time we send an email.

And then hash the file:

postmap /etc/postfix/tls_policy

smtp_header_checks

This is the most important file in our case.

The file /etc/postfix/smtp_header_checks contains rules to be used to rewrite the headers of the emails about to be sent. Create this file too, if needed.

It rewrites the sender so that it always matches our Zoho account, app@planethawleywood.com

No more ‘Relaying disallowed’ errors!

Put this in the file, replacing the address with your own valid email address:

/^From:.*/ REPLACE From: <app@planethawleywood.com>

No need for postmap here.
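You can still dry-run a pcre map with postmap -q to confirm the rule matches. The From: value here is just an example header:

```shell
# Should print the REPLACE action with the Zoho address
postmap -q "From: root@myserver" pcre:/etc/postfix/smtp_header_checks
```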

main.cf

This is the main configuration file postfix uses.

Replace yourhostname with the hostname of your server – the one postfix is installed on and that is sending emails through Zoho.

# TLS parameters
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_header_checks = pcre:/etc/postfix/smtp_header_checks
 
myhostname = yourhostname
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = yourhostname, localhost.com, localhost
relayhost = smtp.zoho.com:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/password
smtp_sasl_security_options =
smtp_generic_maps = hash:/etc/postfix/generic

master.cf

In the file /etc/postfix/master.cf I uncommented this line:

smtps     inet  n       -       n       -       -       smtpd

Apply the changes

Reload postfix by typing

postfix reload

Or by restarting the service

systemctl restart postfix

Test

Try sending an email from the command line:

echo "Test" | mail -s "Postfix Zoho Email test" email@domain.com
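If the message doesn’t arrive, the mail log and the queue are the first places to look (the log path varies by distro):

```shell
# Watch the relay conversation with smtp.zoho.com as it happens
tail -f /var/log/maillog        # Fedora; Debian/Ubuntu use /var/log/mail.log
# Any deferred or bounced messages sit in the queue
mailq
```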
