Comment on – Net neutrality, we hardly knew ye – Marginal REVOLUTION

Internet experts Tim Wu, Cory Doctorow, Farhad Manjoo and many others were just plain, flat out wrong about this, mostly due to their anti-capitalist mentality.

Source: Net neutrality, we hardly knew ye – Marginal REVOLUTION

This sort of conclusion shows that it really, really matters where you get your information from. In this case, the author has concluded, after reading a Bloomberg article, that general supply and demand handled the lack of net neutrality without government intervention.

Cool. Cool.

Except, you seem to forget that in 2014, “the average speed of Netflix streaming video content delivered to Comcast subscribers has declined by more than 25%, according to Netflix” – that’s from this Time magazine article, and from Netflix’s own data, published by Quartz and shown in the image below.

[Image: Netflix’s speed data, from the Quartz article “The inside story of how Netflix came to pay Comcast for internet traffic”]

So what did Netflix do? They bought speed.

The exact details of the deal were kept private, so everyone built their own estimates. One estimate from that time (2014, before the net neutrality rules) was that although Netflix could nominally be liable for about $400M per year, in reality it would be paying about $25-50M per year on a multi-year deal.

Note that every website pays for access. After all, they are the ones in demand and the ISPs know this. Your monthly home internet bill is just one source of funds for ISPs. They charge a much larger chunk to large companies like Google, Meta, Netflix, AWS, etc. for the amount of data those companies push onto the ISP’s network. This includes general websites like this blog, but also anyone in the video streaming or game server business.

That’s normal. What’s not normal is that Comcast was knowingly (or unknowingly, for my CYA) throttling Netflix’s speed, giving Netflix customers a much worse streaming experience. Instead of a technical fix to the issue, the two parties struck a deal whereby Netflix bought a Direct Interconnection with Comcast and started uploading straight onto Comcast’s network.

Later, net neutrality rules prohibited such behavior, but I suspect this was a multi-year deal structured as a Direct Interconnect, so it survived the FCC’s Open Internet Order and probably continues to this day.

Also, Bloomberg claims that –

Bandwidth has expanded, and Netflix transmissions do not interfere with Facebook, or vice versa. There is plenty of access to go around.

This is a flat-out lie and a very bad way of thinking about the Internet.

First, of course Netflix transmissions “interfere” with Facebook (and Instagram, and YouTube, and Comcast’s own streaming service Peacock). Everyone is a video streaming behemoth. They are all uploading a lot more than when they were web 2.0 darlings way back when.

Second, Net Neutrality may not defend big players like Netflix and Facebook, but it sure can protect smaller businesses or independent website owners.

Let’s say tomorrow I post a wildly popular video on my site. Suddenly, there’s a spike in streaming traffic to my site. My server vendor, DigitalOcean, may not want to charge me for the spike, because it’s a one-time thing, or maybe I’m already paying for the bandwidth and am within my limits. But an ISP like CenturyLink or Comcast can easily go to DigitalOcean and ask for a bigger payout for supporting this sudden but consistent burst of video traffic. They can threaten to reduce streaming speeds for traffic to my site, so that anyone coming to my site is forced to watch the video in 480p or lower, instead of the 1080p or 4K I shot it in.

This increases transcoding costs for me, making me spend time, money, and energy converting the video to multiple formats, hosting them on my site, and so on. DigitalOcean may also decide to pass the costs on to me, so now I’m on the hook for Comcast’s lack of net neutrality. Suddenly, I have to figure out a monetization strategy to pay for video streaming at proper speeds. A fleeting moment of Internet stardom then becomes either a hole in my pocket or a forced conversion to an ad-supported or paid website. All because Comcast realized I’m streaming video from my blog.

You might think this scenario is far-fetched, and maybe it is. But that’s how many web services start. Someone has an idea, they try it online, and in the 15 minutes of fame they get, they have to scramble for funds to cover the costs of just being a nice netizen.

Then folks like… Tyler Cowen (an Econ professor at George Mason, no less) of Marginal Revolution read a tainted Bloomberg opinion piece and think they know how the Internet works.

The Open Web can learn comment moderation from Instagram


Starting today, you can protect your account from unwanted interactions with a new feature called Restrict. Bullying is a complex issue, and we know that young people face a disproportionate amount…

Source: Empowering Our Community to Stand up to Bullying – Instagram

 

Bullying is about power and perception. When someone cyberbullies you, the idea that other people can see the comments and choose to ignore them – treating the bullying as banal, or even as someone else’s comedy – is sometimes more hurtful than the comments themselves.

What’s interesting to me is that Restrict is a rehash of a system that has existed forever on the Open Web – comment moderation. Blogs have always been able to hold back a person’s comments; companies like Facebook and Google, with their lack of transparency and user feedback, have largely ignored that capability until they finally got around to it.

However, Restrict is an improvement, depending on how they’ve implemented it. In blog comment moderation, the bully/poster sees and knows that their comment is under moderation. This gives them cause to go and continue their bullying on some other platform.

Restrict seems to make it so that the bully will not find out they are under review. This is a powerful tool, because the perception for the bully will be that other people saw their comment and ignored it, thereby removing the feedback loop that pushes them to bully more. Simultaneously, for the bullied, it will tell their subconscious that their community has not abandoned them in favor of the bully, because the community can’t even see the bully’s comments.

If this is how it’s implemented, and if it is successful, I’d say this is a good thing for the Open Web and for comment systems like Disqus and WordPress to also implement. Taking power from the bully means letting them think that their ‘hot takes’ have been ignored by bystanders. In this case, perception is power, and the bullied should be able to wield it.
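
For a sense of how little code the idea needs, here is a minimal sketch of a Restrict-style filter for a self-hosted WordPress blog. To be clear, this is my own illustration, not an existing WordPress or Disqus feature: it assumes a hypothetical option named restricted_commenter_emails holding the email addresses of restricted commenters, and it hides their comments from everyone except themselves.

<?php
// Hypothetical Restrict-style moderation for WordPress comments (drop into a
// tiny plugin). Assumption: restricted commenters' emails are stored in an
// option named 'restricted_commenter_emails' (not a real WordPress setting).
add_filter( 'comments_array', function ( $comments, $post_id ) {
    $restricted = array_map( 'strtolower', (array) get_option( 'restricted_commenter_emails', array() ) );
    if ( empty( $restricted ) ) {
        return $comments;
    }

    // Identify the viewer: logged-in users and returning commenters expose an
    // email address; anonymous bystanders do not.
    $viewer       = wp_get_current_commenter();
    $viewer_email = strtolower( $viewer['comment_author_email'] );

    return array_filter( $comments, function ( $comment ) use ( $restricted, $viewer_email ) {
        $author_email = strtolower( $comment->comment_author_email );

        // Comments from non-restricted authors are visible to everyone.
        if ( ! in_array( $author_email, $restricted, true ) ) {
            return true;
        }

        // A restricted author still sees their own comments, so they never
        // learn they have been restricted; everyone else sees nothing.
        return '' !== $author_email && $author_email === $viewer_email;
    } );
}, 10, 2 );

The crucial property is the one Restrict seems to rely on: the restricted commenter gets no signal that anything changed, because their comments still render normally for them. (This only filters the public comment template; a site admin would still see everything in wp-admin.)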

Security vs Usability

I’ve come to a point where I do **not** update apps, plugins, software in general. I know that’s a regressive approach to safety, but safety can’t keep trumping usability all the time.

Source: My comment on Stephen’s Notebook

 

Every few days, I have a conversation about security vs usability somewhere. With my iPad Mini, I blindly trusted Apple to do the right thing and they’ve screwed me over. It’s a beloved device, destroyed completely by iOS 9.

So I’ve basically given up on this bullshit ‘security’ tune that companies keep harping on to shove software updates down our throats. Sometimes it’s their stupidity, and sometimes it’s just them being sinister. The new Microsoft is the old Microsoft. The benevolent Apple is an insidious Apple. Don’t get me started on Facebook, Twitter, and Google. Gmail is just the latest casualty of our overzealous overlords.

Yes, security is a big problem. Yes, it needs constant vigilance. But just as with national defense budgets, one key phrase shouldn’t let organizations completely railroad people’s expectations, asks, hopes, and, in this case, UX.

If you’re concerned that by not updating software you’re living on the edge, restrict the things you do on that device, while keeping other devices that are completely updated and secured. Use only frequently updated third-party browsers instead of the default options. Read up on the latest security scares on the Internet and just be aware of the situations you can get into. But most importantly – back up. Make frequent backups of the things you care about. I don’t care if it’s just letting iCloud run its course every night and Google Photos siphon off your pics. Just do it, so that if you brick your device or get hacked, you’re not set back a hundred years.

99% of security is just keeping your eyes open.

Fixing Jetpack’s Stats module

Despite the hate that Jetpack gets for being a bloatware plugin, it is one of my favorites and the first thing I install whenever I set up a new WordPress site. However, Jetpack does have a few irritating habits that I cannot overlook. One of these is the Stats module. The module actually works pretty well, posting data to the wordpress.com dashboard and making it easy for me to quickly glance at the number of visitors I’ve had for the day.

However, every so often the module craps out and logs a large number of visits from crawlers, bots, and spiders as legitimate hits, because those agents are not in the official list of crawlers, bots, and spiders to look out for. To fix this, I went looking for the list so I could add to it. One quick GitHub code search later, I found that the file class.jetpack-user-agent.php is responsible for hosting the list of non-humans to watch for. What I found inside was actually a pretty comprehensive list of software, but one that definitely needed extending.

If you want to do what I did, find the file in your WP installation at –
/wp-content/plugins/jetpack/class.jetpack-user-agent.php

Inside the file, look for the following array variable –
$bot_agents

You’ll see that the array already contains common bots like alexa, googlebot, baiduspider, and so on. However, I deep-dived (meaning I did a Sublime Text search) into my access.log files and found some more. To extend the array, simply look for the last element (which should be yammybot) and extend it as follows –
'yammybot', 'ahrefsbot', 'pingdom.com_bot', 'kraken', 'yandexbot', 'twitterbot', 'tweetmemebot', 'openhosebot', 'queryseekerspider', 'linkdexbot', 'grokkit-crawler', 'livelapbot', 'germcrawler', 'domaintunocrawler', 'grapeshotcrawler', 'cloudflare-alwaysonline',
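
For context, after the edit the tail end of the array should look roughly like this (Jetpack’s other built-in entries elided here; the exact list varies by Jetpack version):

$bot_agents = array(
    'alexa', 'googlebot', 'baiduspider', /* ...Jetpack's remaining built-in entries... */
    'yammybot', 'ahrefsbot', 'pingdom.com_bot', 'kraken', 'yandexbot', 'twitterbot',
    'tweetmemebot', 'openhosebot', 'queryseekerspider', 'linkdexbot', 'grokkit-crawler',
    'livelapbot', 'germcrawler', 'domaintunocrawler', 'grapeshotcrawler', 'cloudflare-alwaysonline',
);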

Note that you want to leave in the last comma, and you want all the entries in lower case. The lower casing doesn’t strictly matter, because the PHP function that does the string comparison is case-insensitive, but it keeps the list looking neat. You’ll also notice that I’ve added the precise names of the bots, like 'grokkit-crawler' and 'cloudflare-alwaysonline', but you can be less specific and save yourself some pain. Being less specific will, however, affect your final stats outcome, since a broader pattern will match more user agents.
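
If you’re curious how the matching works, it’s essentially a case-insensitive substring check of each visit’s User-Agent header against every entry in the array. The sketch below is a paraphrase of that idea, not Jetpack’s exact code, and the function name looks_like_a_bot is my own:

// Paraphrased sketch of the bot check, not Jetpack's verbatim code.
// Each visit's User-Agent string is compared against every entry with a
// case-insensitive substring match, so 'Twitterbot/1.0' matches 'twitterbot'.
function looks_like_a_bot( $ua, $bot_agents ) {
    foreach ( $bot_agents as $bot_agent ) {
        if ( false !== stripos( $ua, $bot_agent ) ) {
            return true; // matched an entry: don't count this hit in stats
        }
    }
    return false;
}

// Example: looks_like_a_bot( $_SERVER['HTTP_USER_AGENT'], $bot_agents );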

Notes –

  1. Some of the bots are pretty interesting. I saw tweetmemebot, which is from a company called DataSift that seems to be in the business of trawling social networks for interesting links and providing meaningful insights into them. Another was twitterbot. Why the heck does Twitter need to send out a bot? We submit our links to it willingly! Also interesting were livelapbot, germcrawler, and kraken. I have no idea why they’re looking at my site.
  2. Although Jetpack does not have a comprehensive list of bots, it still does a pretty good job. I found the main culprit of the stats mess in my case. It turns out CloudFlare, in order to provide its AlwaysOnline service (which is enabled for my site), crawls all our pages frequently, and that doesn’t sit well with Jetpack. I hope this tweak fixes it.
  3. Although this fix is currently in place, every time the Jetpack plugin gets updated, all these entries will disappear. That’s why this blog post is both a tutorial for you all and a reminder and diary entry for me to make this change every time I run a Jetpack update. However, if someone can tell me a way to permanently extend Jetpack, or if someone can reach out to the Jetpack team (hey Nitin, why don’t you file a GitHub issue against this?), it’ll be awesome and I’ll be super thankful!

Update – I was trying to be hip, so I forked Jetpack on GitHub, made the changes, and then tried to open a pull request. Turns out I don’t know how to do that, so I opened an issue instead. It sits here.