While performing some networking work (as detailed in my Anycast article and
an upcoming followup) I hit an issue where machines with bad networking setups
would take up to 5 minutes to boot. As I was testing to ensure a machine came
up reliably, this drastically increased the time required to test. Luckily the
fix for this is fairly simple.
I have been looking at Anycast as a way to reduce the management overhead on
my network and allow me to use IPs to refer to services on the network instead
of locations. As I rolled these services out, machines on my network would
automatically make use of the best available resource and I could worry less
about site-specific setups and just use the same setup at all sites.
It is not often I have an issue with Postgres. Sure, I don't like some of the
naming or how it goes about certain things, but the one thing I do love about
it is that from release to release you are generally not going to have
problems. Imagine my surprise when a "tested as working" setup in staging
failed when deployed to production.
With Linux 4.13 just released, I decided to write up my experiences getting
its newly expanded Thunderbolt 3 support to work. Most news outlets have
focused on the new TLS support in the Linux kernel; however, recent changes to
the Thunderbolt module allow it to work on non-Apple devices, letting many
laptops attach high-performance rendering and IO devices such as GPUs and
high-speed network cards.
For a number of years I have been working on what I call
Project, an attempt to build a collection of cooperating tools to run small
scale container systems. It was mainly intended as a stepping stone from
existing server/VM based approaches to an IaaS or PaaS/serverless model.
How I watch videos at home is a very different affair to how many people watch
videos. Most people would watch a video on their TV or their PC and interact
with it directly. In my particular case I have a bunch of machines running an
X server and SSH, connected to the network. Centrally at home there is a
storage server/grid (though this is slowly becoming a mesh, with data being
moved to/from devices on demand).
This post is a draft of information that is being prepared for a rewrite of
doger.io. It attempts to categorize the different types of
container solutions in a coarse-grained manner in order to quickly convey the
scope of management that each container solution provides.
Few bugs have stumped me like this before. I knew all the parts, I correctly
guessed the cause (at least for part of it), and yet due to the nature of the
bug it had me scratching my head for days exclaiming 'that's not possible'.
This is a deep dive into what occurred, how I solved it, and the nature of
Python and its different implementations (PyPy and CPython).
One thing that has always amazed people has been the setup of my home network,
which has helped me land at least 2 jobs. After letting it
fall into disrepair I finally found the energy to rebuild it and reassess some
of the decisions I made 8 years ago. This, coupled with my container knowledge
from maintaining doger.io, allowed me to uncover and use
various Linux features in ways you may not have thought of before and may not
have known existed.
I noticed some odd behavior the other day when transferring a large number of files to my NFS server. Watching the transfer, the file copies happened in large 'bursts': reads from the disk occurring, then stopping, followed by long transfers on the network (saturating the gigabit uplink).
Installing a root CA certificate on your servers is an appealing option if you host many services and do not wish to pay the widely varying costs for certificates from "trusted" 3rd parties. Alternatively, not having to deal with 3rd parties, or the ability to include custom extensions, can pay significant dividends when administering your systems. In Debian this is a relatively straightforward affair; however, there is a right and a wrong way to do it.
As the title promises and the previous blog post shows, there is more coming to this site, along with future site improvements (just check out that working human-readable timestamp on the index pages; it only took 2 years). RSS support should be coming soon, so stay tuned for this and more articles.
While I was working on butter to add
TimerFD support I noticed something odd. The TimerFD struct allowed specifying
intervals down to one nanosecond. Not being one to pass up an opportunity to
break my own machine by doing something stupid, I decided to try this out and
see if it would hard-lock my machine in a flurry of IRQ attempts. What happened
next turned out to be quite interesting.
As may be apparent from the theme of this site and its currently broken CSS
for articles, I am an avid console user; most of my machines don't have X11
installed. I thought I would take the time to document how I do things and
provide some tips and hints for anyone else looking to make the switch to what
I have refined into a highly productive working environment.
A year ago I gave myself a challenge: can I go a full month without X11? As
there are not enough articles on how one may go about this, I thought I would
start a multi-part series talking about my program choices, how to wire
everything together, and how to restore some of the functionality that 'goes
missing' when moving from a traditional GUI environment to a text-based one.
I have been playing with tulip in Python
3.3 while developing archangel and have
come across a couple of interesting problems that I thought were worth
documenting, mainly for my own personal usage, but also with the hope that
they may be useful to others.
This time we only play with Linux-specific virtual interfaces: MAC VLANs and
virtual Ethernet pipes. These are mainly used in containerization
but can be useful for other things, such as emulating network topologies in
conjunction with bridges.
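As a taste of what the article covers, both device types are created with a couple of iproute2 commands. A minimal sketch (interface names here are examples; requires root):

```shell
# Create a MAC VLAN on top of eth0 in bridge mode
ip link add link eth0 name macvlan0 type macvlan mode bridge

# Create a pair of connected virtual Ethernet interfaces
ip link add veth0 type veth peer name veth1

# Bring the new interfaces up
ip link set macvlan0 up
ip link set veth0 up
ip link set veth1 up
```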
In this part of the 'An introduction to Virtual Networking on Linux' series we
talk about bridging networks and our first Linux-specific virtual network
device, the dummy network interface.
Sometimes it's handy to be able to simulate a high-latency environment for
testing web services under Linux. Luckily for us this is fairly easy to do
and even easier to automate. Included is a script to build a virtual network
with 100 ms of latency for testing.
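The core of such a script is the kernel's netem queueing discipline; the kind of command being automated looks like this (assuming an interface named veth0; requires root):

```shell
# Add 100 ms of latency to all traffic leaving veth0
tc qdisc add dev veth0 root netem delay 100ms

# Verify the queueing discipline is in place
tc qdisc show dev veth0

# Remove it again when done
tc qdisc del dev veth0 root
```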
Part 1 of an introduction to virtual networking under Linux. This series will
cover bonding, VLANs, bridging, dummy interfaces, MAC VLANs, virtual Ethernet
pipes, and VXLAN, and will finally finish up with a brief introduction to
OpenFlow. Throughout the guide the iproute2 tools will be used instead of the
old /sbin/ifconfig commands, whose usage, despite deprecation, has still not
been fully supplanted.
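For readers used to ifconfig, the iproute2 equivalents of the most common operations look like this (eth0 and the address are example values):

```shell
# ifconfig                  -> show all interfaces and addresses
ip addr show

# ifconfig eth0 up          -> bring an interface up
ip link set eth0 up

# ifconfig eth0 192.0.2.1 netmask 255.255.255.0 -> assign an address
ip addr add 192.0.2.1/24 dev eth0

# route -n                  -> show the routing table
ip route show
```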
After doing a presentation for asylum with a local company, I was presented
with the question 'do you have anyone using this project or is it just a
personal project?'. After saying 'no', I realized that a couple of people had
used my code as reference material, but unfortunately I had no users. Not
being one to seek the approval of others for my tinkering, I dismissed this
question as nothing more than a future TODO item.
Welcome to the new website