Nitin Khanna

I was once described as a philosopher programmer. I think I'd like to describe myself as a lifelong student.

Running Compass on Vultr

Intro

Recently, I came across a tweet by Aaron Parecki, where he talked about a lifelogging app he built (and recently released) that constantly tracks your location.

I’ve been using Moves on and off over the years, but I’ve never been satisfied with it – partly because it’s now owned by Facebook, and partly because it’s a very crashy app (the first run works fine, then it won’t open again and soon stops tracking properly; I assume the developer is now working on some darker features for the Facebook apps and so doesn’t spend as much time on his own creation).

So, I downloaded Aaron’s Overland GPS Tracker app (free!) and set it up. The app is rather bare and the functionality is not well explained within it. But it’s free, open source, a one-man job, and in line with the vision for indie dev, so it’s up to us to figure things out. I asked a few questions and got pointed to the settings explainer here. Well worth a read if you download the app.

The next step was to install a remote server which ingests the data and makes it human-readable and useful. As Aaron explains, the quest is to answer the question – “where was I at blah date at blah time?” The app’s official homepage recommends one of two servers to send the data to – a service called Icecondor and a server Aaron wrote called Compass. Compass looks nicer than Icecondor, is self-hosted, and I’ve been itching to play with Vultr.com‘s SSD Cloud, which competes with DigitalOcean in pricing and resources. So, here’s a walk-through for getting yourself set up with Vultr, installing Compass, and setting it up with Overland GPS to start tracking your location as creepily as Facebook and Google do it! 🙂

Vultr

Vultr is a nice competitor to DigitalOcean. At $2.50/mo for their cheapest VPS, it’s half the price of what DigitalOcean offers ($5/mo for the same RAM, storage, and CPU, but DO offers twice the bandwidth and, well, is trusted more). There had to be a caveat, right?

I signed up and the first thing I was told to do was add money to the account. I had the option of not adding any cash and just attaching my credit card, but I’m going to end up using Vultr for something or other, so I threw $10 at them (shut-up-and-take-my-money style!).

Then, they told me I could deploy a new server! I picked Seattle as my server location and Ubuntu 17.10 as my poison (which was probably a bad idea; more on that later), and scrolled down to the server pricing. The $10/mo server was pre-selected for me and the $2.50 option was grayed out! (Seriously though, they should give names to these tiers. It’s silly to keep referring to them by price.)

I googled around a bit and found out that they keep disabling the cheapest tier (they call it “Temporarily Sold Out”) as a sort of bait-and-switch to drive new users to the more expensive options. But that sounds like bullshit. If that were truly the behavior, I’d want my money back. But, and I’m glad I did this, I went back and started clicking around to look for solutions. The solution came in the form of New York! Turns out, they try to drive users to lesser-used data centers, while everyone who’s trying to set things up goes straight for the “Silicon Valley” data center (seriously? Who the heck put a data center there???)

New York and Miami currently have open $2.50/mo tiers (ugh, that naming is so needed! I guess I’ll call it the Micro tier and the next one Mini), and networking is not a problem for me (who cares if a little more bandwidth is needed to get this non-time-sensitive data to New York and back), so I picked New York and threw my hat in the ring.

The server came up within… minutes? (Seriously, it was fast!) and I had an IP address to point to! Yay! But what’s the password? The usual Ubuntu password didn’t work and I looked around their docs, and there wasn’t much to go by. (Vultr’s docs aren’t as awesome as DigitalOcean’s. They’re good, just not there yet. They have a documentation bounty program if you’re interested, dear reader.) Then I checked the email I’d received on server activation. It said that the password is on the dashboard (silly me!).

As I said before, Vultr’s documentation isn’t great, so I followed a mix of Vultr’s LEMP install here and DO’s LEMP stack installation instructions here. I installed PHP 7.1 with FPM (which, I must admit, was a little leap of faith because I wasn’t sure Aaron’s code would work without throwing up legacy issues – thankfully, it didn’t) and skipped most of the tweaking that Vultr recommends (YMMV).
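For the record, the core of that install boils down to something like this. Treat it as a sketch – Ubuntu 17.10 carries PHP 7.1 in its default repos, and the tutorials linked above cover the MySQL hardening details –

apt update
apt install nginx mysql-server php7.1-fpm php7.1-mysql
mysql_secure_installation   # set the MySQL root password, drop the test DB, etc.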

Compass

Then, I copied over the Compass files (from here) and started following the Setup. The first issue was the .env file. There are a few settings in there which are confusing, so here’s what I did –

BASE_URL -> This is your website. It uses HTTPS. More on that below.

STORAGE_DIR -> This is the data directory which is supposed to store your incoming data. Oddly enough, it doesn’t. When you use the application, the GUI prompts you to make a ‘database’ (it should be called a ‘project’, Aaron). This database makes its own folder in the Compass directory, so this variable invariably doesn’t get used. Set it anyway.

APP_KEY -> This confused me a bit. I don’t think this is a password, but I set it to something like a password. It’s a 32-character string, so have fun setting it up.

DB_CONNECTION -> Set this all up as you would any other MySQL application. Use the WordPress tutorial by DigitalOcean as a hint of what to do.

DEFAULT_AUTH_ENDPOINT -> This was one of the more confusing things I saw. Was the idea that this was some generic authorization? To figure it out, I found Aaron’s own Compass website and tried to log in. Turns out Aaron uses a very neat authorization process. There’s no password. All you do is tell it which IndieAuth website you want to use to authenticate who you are, and it’ll let you log in. Specifying this URL means that if you can log in to that other website, you can log in to this website. The default is set to ‘https://indieauth.com/auth’. If you leave it as-is, anyone who has an IndieAuth login anywhere will be able to create an account on your Compass server and potentially use it for their own data. So, I authenticated myself into Aaron’s server and now I have an account there! Of course, I don’t recommend this. I changed this endpoint to my withKnown.com site. That way, only people who can log in to my withKnown site can log in to my Compass server. Who can log in to my withKnown server? Only me. 🙂

There’s one more piece of the puzzle which needs addressing. APP_DEBUG is set to true right now, so whenever there’s an error, Compass spits out the entire MySQL connection string, including the password, as well as very sensitive system information, for anyone to see. I suspect that once you’re done setting up this server and you trust it, you should follow the Laravel process of ‘migrating’ the application from dev mode to production. This will help secure your application.
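Putting all of that together, here’s the shape my .env ended up taking. Treat it as a sketch – every value below is a placeholder, and the DB_HOST/DB_DATABASE/DB_USERNAME/DB_PASSWORD names are the standard Laravel-style ones, which I’m assuming here rather than quoting from Compass’s docs –

BASE_URL=https://compass.example.com/
STORAGE_DIR=/var/www/example/html/compass/data
APP_KEY=abcdef0123456789abcdef0123456789   # any 32-character string
APP_DEBUG=true   # flip to false once everything works, as noted above
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=compass
DB_USERNAME=compass
DB_PASSWORD=use-a-real-password-here
DEFAULT_AUTH_ENDPOINT=https://yoursite.example.com/auth   # point at your own IndieAuth-capable site, not the default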


After this, I moved on to running Composer to install all the dependencies Compass needs. Here are all the issues I faced –

“Composer not installed” – install using:

apt install composer

“danielstjules/stringy 1.10.0 requires ext-mbstring” –

apt install php7.1-mbstring

“phpunit/phpunit 4.8.21 requires ext-dom” –

apt install phpunit

(presumably this works because the phpunit package pulls in php-xml, which provides ext-dom)

“zip extension and unzip command are both missing” –

apt install zip unzip

Now, you can run ‘composer install’ and it’ll work.
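Or, to skip the whack-a-mole entirely, install the whole pile up front and then run Composer (this is just the union of the fixes above):

apt install composer php7.1-mbstring phpunit zip unzip
composer install   # run from inside the compass directory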


nginx

I recommend using nginx. You’ve got a small server and you don’t want Apache hogging the memory, so just use nginx.

Aaron’s nginx config was clear but not helpful, because it doesn’t match the usual nginx configs floating around in tutorials. So here’s mine (relevant portions only) –

index index.php index.html index.htm;
# Note: the root is Compass's public folder, not the repo folder itself (see below)
root /var/www/nitinkhanna/html/compass/public;

# Anything that isn't a real file goes through the front controller
location / {
    try_files $uri /index.php?$args;
}
# Hand index.php to PHP-FPM
location /index.php {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
# Catch any other .php files the same way
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
}

At this point, I thought I was done. But then, when I tried to open the site, I ran into some very nice errors in the application. First of all, notice the root. The root of the application is not the compass folder itself, but the public folder inside it. This is not mentioned anywhere in the documentation and was well worth twenty minutes of “what the heck?” and then some. But you have it on good authority that this is what you’re supposed to do.

Secondly, the application wasn’t done making me install stuff. So I also had to install curl –

apt install php-curl

Then, I wanted to digress a little and make my life a little more difficult (or easy, depending on who you ask). Aaron’s own Compass server uses Let’s Encrypt based SSL. I’ve always wanted to secure my own sites using SSL, but I’m lazy. For this, I thought, why not!

I found the CertBot instructions for installing with nginx and Ubuntu here. They’re pretty straightforward, with one snag that I ran into – Cloudflare. I use Cloudflare as my DNS, security, load balancer, God of Small Things. Cloudflare provides SSL. It’s literally one click: when you add a new A record to your domain (such as compass.p3k.io), it adds DNS and security itself by routing traffic through Cloudflare’s network. CertBot doesn’t work with that; CertBot needs direct access to the server. So, I had to disable Cloudflare’s lovely protection for my subdomain and let CertBot do its job. It did so. It automatically modified the nginx config to accept HTTPS-only connections and to route all traffic to HTTPS. I was even able to set up a crontab to auto-renew certs –

43 6 * * * certbot renew --post-hook "service nginx restart"
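For completeness, the CertBot steps themselves were roughly these (lifted from the instructions linked above, as they stood for Ubuntu + nginx at the time; swap in your own domain):

apt install software-properties-common
add-apt-repository ppa:certbot/certbot
apt update
apt install python-certbot-nginx
certbot --nginx -d compass.example.com   # grey-cloud the record in Cloudflare first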

After this, you run the job queue commands as listed by Aaron, and you should technically have a running website. But there’s a catch, as there always is. This server that I’ve got is not a ‘Mini’. It’s a ‘Micro’. 512 MB of RAM is not enough to run MySQL, Ubuntu 17.10, nginx, and php-fpm, and then actually run an application on top of that. So, I ran into a very cryptic error –

[PDOException]                                    
SQLSTATE[HY000] [2002] No such file or directory 

At this point, I had the application running and I was able to visit the site and all, but trying to log in threw this error. The php artisan command also started throwing this error. (By the way, you’re supposed to run the ‘php artisan queue:listen’ command in the background for this server. Follow the instructions here to set up supervisord to do so; there’s a sketch at the end of this section.) Most people on StackOverflow seemed to think that if you replace ‘localhost’ with ‘127.0.0.1’ in the app’s settings, it’ll start working again. That didn’t help. Finally, someone recommended (not in real time – I’ve only once ever in my life used StackOverflow in real time to get answers to a question) restarting MySQL. Well, duh.

Oh? MySQL won’t restart. Why???

It was this community question on DigitalOcean that gave me the answer I was looking for – I had run out of RAM. Turns out, 512 MB is just enough to play with a server, but not enough to run it for reals. Nonsense. Let’s just add a swap!

I used this excellent and very easy DO tutorial to add swap to my VPS. Notice the shade it throws at you for trying to use swap on SSDs. They specifically say that they don’t recommend using swap on DO “or any other provider that utilizes SSD storage”, and that it degrades hardware performance for you and “your neighbors”. DO recommends upgrading your instance so it has more RAM instead of using swap. We don’t listen.
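The gist of that tutorial, for the impatient (1 GB was plenty for my Micro; size to taste):

fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # so swap survives a reboot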

I added swap and voila! It’s working! MySQL fires up and the app stops throwing silly errors! I ran htop all night on the instance to monitor memory and swap use, and it works just fine! At last, we can log in!
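One loose end from earlier – the ‘php artisan queue:listen’ process that has to keep running in the background. Here’s a minimal sketch of the supervisord setup I mean (the program name and paths here are mine; adjust them to wherever you put Compass):

apt install supervisor
cat > /etc/supervisor/conf.d/compass-queue.conf <<'EOF'
[program:compass-queue]
command=php /var/www/nitinkhanna/html/compass/artisan queue:listen
directory=/var/www/nitinkhanna/html/compass
user=www-data
autostart=true
autorestart=true
EOF
supervisorctl reread
supervisorctl update   # supervisord now keeps the queue listener alive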


Overland

OK, we’ve logged in using our designated IndieAuth website! Now what? You’re staring at the blank screen that recommends you create a database. Do it. You give it a fancy name and it spits out a bunch of configuration. Now what? First of all, change the Timezone in the settings to where you are. It’s set to UTC right now, but for me, it’s PST. Also, use

dpkg-reconfigure tzdata

in your Ubuntu command line to change the timezone of your server to where you are. Remember, my server is in New York, but I told it that its timezone is America/Los_Angeles. Because.
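(If you’d rather not click through dpkg-reconfigure’s interactive menus, timedatectl does the same thing in one line – note the underscore in the zone name:)

timedatectl set-timezone America/Los_Angeles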

OK! You’re good to go! You can throw some data at this server! Head over to the Overland GPS app and add this endpoint to it. Only, what’s the endpoint? I added just my Compass server’s URL and that didn’t seem to work. Then I looked at the app screenshots and there it was –

https://compass.p3k.io/api/input?token=E6ncEYWxT...

That’s your Receiver endpoint! But where do you find it? In your Compass ‘database’ settings, you’ve got a read token and a write token. Next to the write token is a link which says “show API endpoint”. Click it and out pops another line which shows you the above. Simply copy this and magically move it to your phone (I WhatsApp myself these things) and you can plug it into the app and start sending data! The first time you plug it in, the app will collect all the data you’ve accumulated till then (I had some 25,000 points of data to transmit) and smoothly move everything to the server (Aaron really has done a great job with the app). After that, it’ll move the data in batches whose size you can specify (God knows why).
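A quick way to sanity-check the endpoint before involving your phone is to POST a single point at it with curl. The payload below is my reading of Overland’s batched GeoJSON format – a ‘locations’ array of Point features with a timestamp property – so treat the exact shape as an assumption and defer to Aaron’s docs:

curl -X POST "https://compass.example.com/api/input?token=YOUR-WRITE-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"locations":[{"type":"Feature","geometry":{"type":"Point","coordinates":[-122.33,47.61]},"properties":{"timestamp":"2017-09-28T12:00:00Z"}}]}'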

But. You’ll see some odd things. For example, in the afternoon, the server’s map changed the date over to the next date (I suspect this is because my server was still on UTC time; running the tzdata command above should solve this). Also, whenever there’s no data (or the data hasn’t loaded yet), the map points to Portland. I get that Aaron is from there, but I think we should be able to configure this (Seattle, woooo!) because it’s a little jarring. Finally, this will teach you how bad your GPS data is anyway. Most of the time, the map has me squarely in the water, or swimming out and coming back, or has me cross the I-90 bridge by, well, not crossing the bridge but swimming along it. But that’s just the world we live in.


Questions/Issues
  1. Why does this server need MySQL? The Compass documentation says that the data is stored in flat files. Then is the MySQL database only used for temporary storage of data before it’s processed and saved to flat files?
  2. Is HTTPS a requirement of the server or a nice-to-have? I am not sure about this and I just took the safer route.
  3. The app, in debug mode, spits out way too much information which it shouldn’t. I’d like clear instructions on migrating it off debug mode.
  4. Did I decipher the meaning of DEFAULT_AUTH_ENDPOINT correctly? Not sure. Also, Aaron, if you’re reading this – what do I do with my login on your Compass server? Could you allow people to store their data on there, just for visualization (wiped every night so as not to flood your server)?
  5. I still don’t know what the best configuration is for the app (battery-use to tracking). If you’ve got pointers, throw them in the comments below!

Thoughts on a required reading page for blogs

I’ve been following Colin Walker’s thoughts on a ‘required reading’ page since Monday and have been thinking about it myself. His own thoughts were based on Dave Winer talking about the idea.

What is a required reading page to me?

Dave Winer seems to suggest a page which would link to articles that deeply affect the blogger, or explain their motivations and give context. Colin took the idea further and talked about old posts which the blogger would want to highlight. There could be external links which the blogger would want the reader to get acquainted with before weighing in on the subject the blogger talks about.

Why are we talking about it?

Two years ago, Derek Sivers and company introduced the idea of the /now page. It’s an easy way for bloggers to talk about what they’re doing right now. There was a marked effort to explain that this page would not be automated, so that the blogger frequently updates it and nurtures it as a window into their life. You can read my /now page here.

These ideas – a /now page and a required reading page – are extensions of a blog and a way to empower bloggers to build a blog as an extension of their lives. Sure, you can post what you’re working on on Instagram, and rant about it on twitter. But when it lives on your blog, you care about it more, and so does your reader.

When I was thinking about it, I felt that the required reading page is better implemented by the blogger simply choosing to write about the topic they care about. If you want people to notice a certain article, just blog about it, quote it, and explain your take on it. Ask the reader for their takeaway too. Perhaps, in that sense, a required reading page is every page on your blog. If you care about it enough to write about it, I’ll know that you recommend that I read it too. That is how it works right now, and that’s why I read Dave Winer’s post – because Colin Walker was talking about it.

But when you look at the way people blog now, a lot of bloggers have long maintained a page of reading which they want to highlight. Famous bloggers often have a page which lists their most popular blog posts (a great example is this page by Leo Babauta) and others often point to external reading that they value. It’s time this too was formalized into a format and a ‘named’ page, so as to guide future (and current) bloggers and help leapfrog the blog from a stream of thoughts and articles to a centerpiece of activity and a deeper reflection of the blogger’s life.

p.s. Named pages are useful in both kickstarting a blog and maintaining it for your readers. Examples are the About page, the /now page, the Colophon page (which talks about your tools, your blog’s history; sort of an extended About page; here’s an example), and now, hopefully, the required reading page. As Colin says in his post about the Required Reading page –

I’m going to spend some time considering what I might have on mine.

Twitter backlash, again

Every time twitter changes something, it faces backlash. That’s not new, that’s common to all consumer tech companies. There’s a set of users around the world used to things the way they are and some who are looking forward to the change. Whenever Facebook makes a change, they get floods of comments attacking them. Perhaps that’s why Facebook has gone shadowy about all the changes they make to their algorithm. They’ll talk to the public about it, but not really explain anything.

When twitter changed the look of their app and service a few weeks ago, many liked the change, but I got upset about it. It’s not that I use the official app or site a lot – I don’t care for it. But it seemed like such a useless change when there are so many other things to deal with.

The same happened yesterday, when twitter announced that they’re increasing the tweet character limit to 280 for some users, as a test. I’m not part of that small group, but I’m one who’s been saying for a long time that this should happen.

Others, like John Saddington over here on his blog, believe this betrays the fact that twitter is ending up as the next Yahoo!, having lost its soul a long time ago in the many quagmires it failed to defuse in its long march to infamy.

While I agree that the service is, in general, in shambles and begging for someone to whip some sense into it, this change is not anything which will destroy twitter. If anything, it’ll give users such as me some breathing space.

Yes, I can be pithy and reword my tweets to be exactly 140 chars or less. But who wants to? “Brevity is the soul of wit”, said the man who wrote 884,647 words in his career. Well, I don’t want to be witty all the time. I want to get my point across, and if it takes 145 characters to do so, so be it.

Yesterday, while discussing this change, twitter made a feeble attempt at giving a technical reason for it. They explained on their blog that some languages, like Japanese, Korean, and Chinese, are more visual in nature and can express a lot more in one character than others, such as English, Spanish, and French. Therefore, they are making this change available for the latter languages. They also explained that only about 9% of all tweets reach the 140-character limit.

That last bit surprised me. I thought about it a bit and realized that this number, 9%, is probably flawed for various reasons –

  1. Twitter has a lot of spam. Like, a lot. So, when someone talks about 9% of all English language tweets, they’re probably counting a lot of bots, crap, spammers, and the general noise you see on twitter. Remove all of those and the number might actually jump into double digits.
  2. Most people who want to express themselves better have gotten used to the idea of tweetstorming. They know that twitter’s not going to fix this issue, so they use tools, or just hit reply manually to post things in a better way. If twitter counts tweetstorms as one tweet, I’m sure, again, that they’ll tip over to north of 10% easily.
  3. Everyone else who ever faced the predicament of having a red negative number blocking their tweet just went ahead and reworded the tweet to fix things. Had they any method to tweetstorm from the official app, they would have done so, and, again, twitter’s numbers would be more truthful.

Given these factors, there’s not less reason to implement 280 chars, but more.

But, here’s a prediction, if I could make one –

When twitter revisits these numbers in a year or two, they’ll see that the number hasn’t really shifted a lot. If they see 9% now, they’ll maybe see 10-11% a year from now. The reason is simple – when we’re given an arbitrary limit, our thoughts go towards meeting that limit instead of finding ways around it to fully express ourselves. Now that twitter has gotten everyone used to 140 chars, when the noise settles, those who need a few extra chars to express themselves will take them and use the space. Others will not. Simple.

This new limit is nothing to fret about. It’s not going to destroy twitter. That’s already the job of twitter’s bad advertising, political hand-wringing, and spam. All this is going to do is give some breathing space to people like me, who need a few extra chars once in a while.

Side note – I noticed that tweetbot has taken the 280 char limit and presented it beautifully since yesterday; I haven’t had to update the app for it. No good twitter app has a hard limit of 140 chars built into the stream display. That just shows that there’s no real reason for twitter to go back to 140 chars. The endless stream of tweets that people are used to will work exactly the same way as before. Don’t worry about it. Worry about everything else that’s still wrong with twitter.

p.s. – My thoughts are partly explained by Colin Walker here, funnily enough, in fewer words.

Photo by mkhmarketing

No updates please

I was an avid software updater. I would read the update notes, hit the update button, and watch the download happen. I enjoyed doing this manually because it’s a fun process to acknowledge all the work someone has put into the update I’m downloading. In that sense, websites are no fun – they change suddenly and have no changelog to describe what has changed and what new features are available.

But then I got bitten. First, on my iPad Mini (Series 1). iOS 9 slowed everything to a crawl. I still have use for the iPad, but it’s limited to two apps – Scrivener and Kindle. Everything else is basically unusable. I don’t even browse the web on it. It’s just easier to bring out my iPhone 7 Plus for that.

Then went my MacBook Pro. The main reason is under-use. When I’m developing something, I’ll update the packages, update Xcode, get the latest and greatest of iTunes. But when I’m browsing or reading on it, Safari suffices. Chrome is a crybaby on OSX, so I dumped it and never looked back. Perhaps the lack of Chrome Sync is what drove my usage down? Not sure. All I know is that my Mac cries for updates and I deny it. I don’t even know what version of the OS I have. It’s a pain to find out and keep track. I don’t have Siri on it. APFS, you ask? Not gonna do it.

Finally, the iPhone. Oh, the iPhone. I still enjoyed downloading and updating apps on it for the longest time. It’s the most used device I have (and I have the Apple Watch strapped to my wrist most of the day. It’s just not used in the same way). I have truly enjoyed watching app updates change the way I use my iPhone and what I keep on my main home screen.

Then, the inevitable happened. I got bitten. The app update didn’t mention that Terminology 3 was going to change one of the main features of the app – opening on the search view. I thought the cries of a thousand users would make the developer reconsider. I don’t even know where that debate went.

Then, I updated an app I was just trying and the developer put an ad at the beginning of the app, destroying the experience completely. I gave my first ever App Store review – a 1 star with a few choice bad words. I calmed down after a day and updated the 1 to 4 stars. But I made the developer notice. I made sure they understood that not mentioning the ads in the app update is the reason why they got the bad review. They changed the update text to include mention of the ads.

I don’t mind change. I’d just like to have it mentioned to me. Today, browsing the app updates page, I saw that Delta Dental had updated their app. I opened the details and all it said was “bug fixes”. There’s more effort made to inform users of what’s changing in Snapchat than what’s changing in an insurance company’s app. There’s technology for you.

Twitter changed. Instagram changed. Facebook changed. I see more ads and more crap ‘features’ in these apps than anyone around me. Maybe they’ve labelled me a guinea pig?

One day, I updated Google Search’s app. There was a time I used it as my main search app. The app team had added Cards to the app. The feature destroyed the app. It slowed to a crawl, wouldn’t even load the cards properly, and wouldn’t let me jump right into a search. Google eventually fixed the cards and made the thing faster, but the app’s main focus is still ‘showing information’ instead of letting me ‘search for information’. My main ‘search’ now happens through Safari – it’s got adblocking, it’s got session retention (the Google Search app is crap at that), and it’s just nicer to use.

I’d like to remember what exactly it was that broke the camel’s back, but there’s just a very long list to look through. One day, I was just not updating apps with the same zeal and the same frequency. I realized that the release notes were a joke, and features were going to keep changing at whatever terrible pace the developers decided was right. I’m a developer, I know that it’s very easy to decide to change something (and very difficult to implement it). So I respect the devs who put hours into these updates. But I’m just not going to update apps (and OS versions) as frequently as they come out with them.

For the last few days, we’ve been talking about iOS 11. My wife has been asking me to back up her phone and update it. She’s never been this excited about an OS update. But I couldn’t be farther from it. I’m not excited about HEIF/HEVC. I’m not interested in iOS 11 ‘degrading’ my phone. I’m not even excited about all the bugs they’ll eventually iron out with a point release in a month or two.

But I’ve readied my phone for it. I’ve deleted about thirteen thousand photos from my phone, primarily because I was tired of keeping them around (is it true that less storage used translates to better battery life?). I’ve taken a backup or two. Maybe I’ll update my phone today. Maybe I’ll update my wife’s phone first and see how that goes.

But app updates? No, thank you.

Unraveling the future of Day One Sync

Thursday was an important milestone for Day One and its users. The launch of the Day One browser extensions marks a time when the Day One team is ready to launch API-based products outside of their default apps – something of a return to the time when Day One 1.0 was a beautiful, open garden of apps and services that could plug in (and out) without much trouble. Day One 2.0 robbed a lot of people of those options, and these browser extensions allow us to come back into the fold.

I’m not under the impression that this means that Day One will suddenly be as open and accepting as it once was. No, the walled garden that the team has created will remain. Their promises to end-to-end encrypt all data (while allowing complete access through their API), their wish to remain free of third-party sync services such as Dropbox, and their interest in keeping their company growing, mean that Day One is never headed back to the old days.

But that doesn’t mean things can’t move forward to a good place. Of course, with the launch of Day One Premium, what that good place is, is a little unclear. Yesterday, while launching the extensions, the Day One team answered a few questions on Instagram, and that gives us a hint of how things are going to work from now on.

Let’s first summarize what we understand of the customer ‘levels’ for the Day One service –

  1. Basic – This is a new tier. If you download the Day One app today (or are a Day One Classic user updating to Day One 2.0 today), on iOS or Mac, you’ll be a free user. All your data will be saved locally on the device which you use and any time you want to a. Create multiple journals, or b. Sync your data to the Day One Sync service, you’ll be prompted to pony up and become a Premium member.
  2. Premium – This too is a new tier. If you want to sync your data across devices, get access to the encrypted journals feature, support Day One in their awesome venture, and get 25% off print book orders, you get to buy into the Day One subscription service. It’s currently $35/year for new users and $25/year for older users, as explained in the FAQ.
  3. Plus – This is the new name for the old tier. If you downloaded Day One 2.0 on any platform before the Premium tier was introduced, this is where you stand. You get access to Day One Sync, get to make up to 10 journals, get access to data encryption, use cloud services such as IFTTT, etc.

Here’s what you don’t get with the Plus subscription –

  1. If you bought Day One on one platform (iOS or Mac) before Premium was launched, and bought it (for free) on the other platform after, you don’t get Sync between devices. You can still export your Day One journal and import it at the other end, but that’s just too cumbersome.
  2. Similarly, you don’t get access to more than 10 journals, and can have no more than 10 images per post.

But yesterday’s release taught me something interesting –

Day One is still a company that cares for its users. So, it seems that if you’re a Plus member, many future features and launches will work for you. The Day One browser extensions currently work only with unencrypted journals. However, since Plus members do get access to encryption and Sync, it’s possible that support for end-to-end encryption will be added to the extensions in the future, and as a Plus member, you’ll still be able to use them.

Similarly, right now IFTTT is the only third-party sync service allowed to plug into Day One. You can use it in a lot of ways – saving your Instagram posts to Day One, emailing an entry into Day One, stashing away your tweets, your weight (using Withings), the day’s weather, your Instapaper Likes, and your Evernote entries.

But I suspect that when Day One launches their API, Plus members will definitely get access to it. They’ll get access to it for both encrypted and unencrypted journals, and will be able to use a lot of the tools and services they were using with Day One 1.0, updated to work with the API, of course. This seems not just likely but definite, given the way the Day One team launched the browser extension.

Why am I even talking about Plus? It would seem that most future users of Day One will be either Basic or Premium members, right? But most of their current users are Plus members. On top of that, I believe that a large percentage of Day One users fall into one of two categories – they either have only one Apple device (iOS or Mac) and so don’t care for a lot of Premium features, or they went ahead and bought both the Mac and iOS apps, and so didn’t get affected by being pushed into the Plus tier either.

However, I am part of a significant minority which wants Day One access on Windows or Android. This is why understanding the Day One team’s motives behind every move they make is important to me. From what I’ve understood, they’ve got nothing but good intentions when it comes to treating Plus users with fairness, even if it comes at the cost of Premium subscriptions in the short term.

Future Day One apps (for Windows and Android) will be free and siloed the way the new versions of the iOS and Mac apps are. You’ll have to be a Premium user in order to sync between these devices. But the devices on which you’re a Plus member right now will give you a pretty premium experience, and any third party tie-ins and API based features should be available to Plus members without having to move to the subscription model.

Of course, with the launch of the browser extensions, the Day One team has solved a very big problem – getting journal entries in on Windows (and on Mac, for iOS-only users) for Plus members. That saves people like me a lot of time and effort!

p.s. According to the Day One team, Day One Classic is still around, just not under active development. Most of us (especially if you’ve read till the end) have moved on to Day One 2.0, but if you download the Day One Classic app (or still have it installed on your system), Day One Sync is still working and syncing to it. So if you have that, keep using it!

The mountain is a stone bud, always ready to open, but destined never to.

#colchucklake #pnw #northwestforest #lakelife #hikingadventures

A note about Indian restaurants in the US

So, we were at an Indian restaurant again last night, and as usual, for a table of four, it got crowded really fast. Indian eating joints have this exquisite property of always seating you at tables not quite big enough for all the food you’ll order.

But it’s not their fault. Indian food is community food: a central platter of dishes, and then our individual plates. Compare that to, say, American food, where everyone orders their own entree and all the food is contained within individual plates. That saves on space and consequently allows for smaller tables. That’s space saved per table, which allows for a roomier restaurant or more tables per eating joint – especially useful for fast food joints.

What’s the solution for Indian restaurants? How can they provide for the right amount of space for patrons? Well, they can swallow the cost of having fewer tables and just provide bigger tables – seating four at a table meant for six and two at a table meant for four. But we know they won’t do that. 

What can we as customers do? We can order thalis instead of entrees. Thalis have all of the food on the same plate, in small portions, providing variety and a more complete meal. They’re also individual, so everyone can get the dishes they want. But there are two problems there –

  1. Most restaurants don’t have a lot of variety in thalis. They’ll have a maximum of two options. So even if we as consumers make this change in our eating habits, it’ll end up only hurting our choices. There are, of course, some restaurants which specialize in thalis, and those are definitely worth visiting, but they’re few and far between.
  2. As a North Indian, I am geared towards larger portions of fewer dishes. That’s not going to change. 

There’s one more thing we need to address – naan (or as it’s affectionately called, ‘naan bread’). Naans are usually cooked individually and tossed into a metal bread basket which consumes an inordinate amount of space on the table. If you’re ordering a few different types for the table, those baskets quickly take up too much space, often spilling over and causing a great deal of wrangling to place everything on the table. The solution often ends up being that you consume your naan partially and then stack the baskets until someone comes along to take them away. This whole business is messy, and it’s commanded by the idea that if someone orders a garlic naan, a butter naan, and a parantha, they need to come in separate baskets so as not to intermingle their aromas, even though most people end up sharing naans. The situation is further exacerbated by the difference in naan sizes between restaurants. Some make their naans huge, so that people have to share their ‘breads’, while others serve smaller portions, making it difficult to know from the get-go whether we’ll be sharing naans or not.

I believe the solution is midway – a new kind of offering that is a cross between a thali and an entree. This offering would let you pick your entree and naan but offer smaller portions of each, specifically catering to a single person. Some restaurants would choose to offer some options with it – raita or plain rice (which, to my utter amazement, is considered a freebie in most Indian restaurants in the US). This complete package would be constructed in such a way as to fit within a single plate, taking the right amount of space to allow for a comfortable dining experience.

There is only one place where I’ve seen this kind of offering – Azitra in Broomfield, Colorado. Their lunch options were wonderful and the portions were filling. They too made the mistake of tossing the naan into a separate basket, but by saving space on the dish (the curry came in a beautiful boat-shaped dish), they allowed for a much cleaner and more spacious table. I would like more restaurants to pick up this kind of offering and improve our dining experience.