Released in 1991, Gopher was a protocol designed for retrieving documents over the internet and a direct competitor/predecessor to the World Wide Web. Its system of hierarchical menus and documents made it excellent for organising information, enabling digital libraries and interconnected directories that bridged gaps between research campuses. Its simple protocol was also well suited to text-only output devices, but the rise of graphical user interfaces, alongside the lurid layout and formatting capabilities of HTML, probably assisted in its slow demise.
For my game project, I want multiple Docker containers started up at once, all linked together by easy-to-use hostnames (rather than terrible auto-generated Docker hashes).
The solution to this came to me pretty easily thanks to my friend @will2bill on Twitter, who pointed out that Docker Compose is a thing! By declaring a YAML manifest file called docker-compose.yml and filling it in appropriately, you can bring a series of linked containers up easily with a single command: sudo docker-compose up -d.
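As a sketch of what that manifest might look like (the service names and images here are hypothetical stand-ins for the game project, not the actual setup):

```yaml
# docker-compose.yml - each service name below becomes a resolvable
# hostname for the other containers, so no auto-generated hashes needed.
version: "2"
services:
  gameserver:
    image: my-game-server   # placeholder image name
    depends_on:
      - db
  db:
    image: postgres:9.6
```

With this in place, sudo docker-compose up -d brings both containers up, and gameserver can reach the database simply as db over the default Compose network.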
One of the main drawbacks I can think of with a static blog is the lack of an easy search function. Because all the files are pre-generated HTML, CSS, and JS, there's no server-side interpreted language that can perform actions and no database of posts that can be filtered. I decided to change this and put together a little proof of concept on my local machine for how it would work.
If you’re running a Hugo blog, you can repeat my little experiment yourself!
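One hedged starting point (this assumes you supply a matching index.json template — the output format is a stock Hugo feature, but the template contents are up to you): have Hugo emit a JSON index of your posts alongside the HTML, which a small client-side script can then fetch and filter.

```toml
# config.toml - ask Hugo to render the home page in JSON as well,
# producing a search index at /index.json next to the static pages.
[outputs]
  home = ["HTML", "RSS", "JSON"]
```

Pair this with a layouts/index.json template that loops over the site's regular pages and emits their titles, permalinks, and plain-text content, and the "database of posts" problem above disappears without any server-side code.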
When you're setting up SSH on a new Linux install you need to be aware of the right permissions for your files - especially if you're manually importing files from another installation. Below you'll find a list of the permissions required for each file and directory.
To set these permissions, just run chmod XXX FILENAME where you replace XXX with the permission number and FILENAME with the file/path.
.ssh directory - 700 - this makes it readable, writable, and searchable only by you, and inaccessible to other users.
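The whole set can be applied in one go with a short script. This is a minimal sketch assuming the default ~/.ssh layout (key names like id_rsa, plus authorized_keys and config); adjust the paths if yours differ:

```shell
# Apply the standard SSH permission set. Uses ~/.ssh unless SSH_DIR is set.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                      # directory: accessible by the owner only
for f in "$SSH_DIR"/id_* "$SSH_DIR"/authorized_keys "$SSH_DIR"/config; do
  if [ -f "$f" ]; then chmod 600 "$f"; fi # private keys and config: owner read/write
done
for f in "$SSH_DIR"/*.pub; do
  if [ -f "$f" ]; then chmod 644 "$f"; fi # public keys can stay world-readable
done
```

Note the ordering: the id_* glob also matches id_rsa.pub and friends, so the final loop runs afterwards to relax the public keys back to 644.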
To the 53 people who've watched A Christmas Prince every day for the past 18 days: Who hurt you?
— Netflix US (@netflix) December 11, 2017
This tweet from Netflix, and some similar stuff from Spotify, is some really cute marketing, but I've seen a few people worried, calling it a privacy violation and claiming that anyone at these companies can just access all your personal data.
So I’m going to try and put those worries/fears to rest.
I’ve previously discussed hosting my site on AWS using a combination of Simple Storage Service (S3), CloudFront, and Route 53. I’m still doing that now and it’s been amazingly responsive and great for a static website (still using Hugo like I posted about previously).
However, one of the ultimate goals of the modern web is security, and projects like Let's Encrypt have helped democratise access to SSL certificates, providing a free alternative that lets everyone secure their website without huge costs or slow verification procedures.
My much-loved NAS from 2015 died quite painfully at the end of 2016, thanks to its ASRock C2550D4I motherboard. This isn't ASRock's fault in particular, but rather that of the integrated Intel Avoton C2550, which has some show-stopping issues causing problems in everything from Synology storage devices to Cisco routers.
There was very little available in the low-end server market that compared favourably with what I bought, so I ended up splashing out extra for the following:
Situation: I recently ran into some issues at work while building some new Mapserver servers. Using an UbuntuGIS repo we had a perfect Mapserver and GDAL build for 99% of the products we deliver on a day-to-day basis (with greater reliability than our old system); however, GDAL doesn't handle the proprietary ECW format out of the box. Support can be brought in, but there are some strict licensing terms and we didn't want to risk running afoul of them, nor did we want to run the arbitrary, closed-source blobs in the current ECW SDK version.
So it came to pass, that all hardware was doomed to fail in time for it was crafted by men who were afflicted by the same curse.
My fileserver has served (giggle) me well for the past 4-5 years, but it has been showing its age. I recently had to remove one old drive that failed, and I knew at least one other drive had some sector issues.
While this incarnation of my personal website has been set up and served using only static components for some time, it was still limited to the capabilities of the virtual server it was hosted on. So I finally took the plunge and (along with the ScorpInc website) moved all the files onto various Amazon Web Services (AWS) platforms for ultra-responsive static sites and assets. This means hosting the files in a Simple Storage Service (S3) bucket, serving DNS through Route 53, and using the CloudFront Content Delivery Network (CDN) to distribute my site from edge locations around the world for low-latency response times, regardless of where a request comes from.
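Day-to-day deployment with that stack boils down to two AWS CLI calls. A hedged sketch - the bucket name, output directory, and distribution ID below are placeholders, not my actual values:

```shell
# Hypothetical deploy script: push the generated site to S3, then tell
# CloudFront to drop its cached copies at the edge locations.
BUCKET="s3://example-bucket"          # placeholder bucket
DISTRIBUTION_ID="EDFDVBD6EXAMPLE"     # placeholder CloudFront distribution

# Upload changed files and delete anything no longer in the local build.
aws s3 sync public/ "$BUCKET" --delete

# Invalidate everything so the CDN fetches the fresh copies from S3.
aws cloudfront create-invalidation \
    --distribution-id "$DISTRIBUTION_ID" \
    --paths "/*"
```

The blanket "/*" invalidation is the lazy option; for a busy site you'd invalidate only the paths that actually changed.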