Welcome to the technical ramblings that go on in my brain. I will post information about my homelab, projects I undertake and tutorials, as I find similar blogs on the internet very inspiring and informative; I can only hope that the same comes across here. Without further ado, enjoy.
This post will be a brief intro to VMware’s distributed switch technology and how to get your feet wet with the setup. It will not cover the advanced features or in-depth configuration, but a follow-up may come in the future.
I was rather confused by the concept myself at first and failed to see its use in a lab setting; however, since taking the plunge I can honestly say I’m unsure how I survived before, and I can only hope this post helps those of you new to the concept give it a go.
What Are Distributed Switches?
Distributed switches, referred to in this post as vDS, are a feature of VMware vCenter Enterprise Plus allowing centralised provisioning and management of host networking spanning multiple VMware hosts and clusters. A vDS lets you make one config change to your virtual networking environment and have that change propagate across the participating hosts, alleviating the need to manually create networks on each host.
Following on from this, a vDS provides consistent network connectivity when migrating VMs across hosts.
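To give you an idea of how little is involved, here’s roughly what creating a vDS and joining hosts to it looks like in PowerCLI. This is a sketch only; the vCenter address, switch, datacenter, host and portgroup names are placeholders for your own environment:

```powershell
# Connect to vCenter (not to an individual host; vDS live in vCenter)
Connect-VIServer vcenter.lab.local

# Create the distributed switch in a datacenter
$vds = New-VDSwitch -Name "vDS-Lab" -Location (Get-Datacenter "Lab")

# Add each participating host, then define a portgroup once --
# it appears on every host on the switch automatically
Add-VDSwitchVMHost -VDSwitch $vds -VMHost "esxi01.lab.local"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost "esxi02.lab.local"
New-VDPortgroup -VDSwitch $vds -Name "VM-Network" -VlanId 10
```

That last line is the whole point: one portgroup definition, propagated to every participating host.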
For a while now I’ve wanted a compact, ‘at a glance’ look at some of my lab statistics and other things in my flat, since my lab is elsewhere. Originally I started looking for 4K monitors that could be rotated vertically, then at small hobby boards that could output in 4K, and eventually I just put the project to one side and spent my money elsewhere.
Recently, I got an old monitor from work that I’d been using for the odd bit of VGA input, as my TV has no VGA, but eventually decided that this would be a cool project to revisit using this monitor, even if it’s not as flashy as I first envisaged.
So it’s that time of year again when my girlfriend and I decided we wanted to move. After a few months of searching we found a very cosy flat (with a not-so-cosy rent price to go with it) in Zone 1/2, London. Without boring you with a life story, the aim of this move was to travel extremely light, only moving the bare essentials, so that when it came to moving back out there weren’t masses of furniture and servers to shift. Moving servers is not fun.
This blog post will be about a build I wanted to do for this move: a small, low-powered host that would live in the new flat as a local VM host/storage server for when accessing things from the lab would be inefficient.
It’s a pretty cool project if I do say so myself, and this type of build would be ideal for a lot of the people I see on the internet who want something ‘all in one’ that fits the requirements I set. I’m hoping people in similar situations will find some inspiration in this build and either copy it or use it as a stepping stone for something similar.
So, let’s get to it.
So a lot of you read and enjoyed my colocation post a while back and since then things have been going well.
I’ve used the colo’d host pretty heavily and it’s served me well for what I needed it for; my public-facing applications have never been happier. I have since deployed cool things like an entire VDI infrastructure there, which has been awesome. I’ve pretty much managed to ditch my desktop and use my MacBook Pro along with my Windows and Linux VDI machines with minimal lag; it’s been great.
Recently, however, I have been running dangerously low on storage, and upgrades need to be done to accommodate the new use cases I’ve brought to the host. The following post covers my adventures in kitting the host out as best I can for the workloads I’m throwing at it, in the ‘muffin fashion’ of hobbling shit together until it works.
How to pfSense.
So, you’ve decided to ditch that POS ISP-provided router, or just about anything marketed towards consumers, and have installed pfSense… so, what now?
The following will be a guide on how to create, manage and understand both firewall rules and NAT in pfSense. I get asked a lot of questions daily, and I thought this would be useful for those who are either new to pfSense or want to understand what they’re doing when they create rules.
This guide isn’t just for pfSense; it’s simply what I use, and it’s extremely popular, hence the post. A lot of the fundamentals and methodology will carry over to many other devices and software.
In this post, I will try to explain why these steps are being taken, and add some networking 101 into the mix as well.
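As a tiny preview of where we’re heading: under the hood, the rules you build in the pfSense UI compile down to plain pf rules. Here is a hand-written sketch of the classic ‘allow LAN out, default deny’ pattern; the interface macro and subnet are illustrative examples, not taken from my config:

```
# stateful: the first packet is checked against the rules, replies
# ride the state table and never hit the ruleset again
pass in quick on $LAN from 192.168.1.0/24 to any keep state

# anything the rules above didn't match gets dropped (pfSense does
# this for you implicitly -- there is always a default deny)
block in on $LAN all
```

If you only remember one thing from this post, make it that default deny.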
Intro: a look back into the past.
So a while ago, I posted this rundown of my Plex server and storage setup for media, and it got some traction, which is nice. In that post I mentioned that my storage setup left a lot to be desired and that I had issues with it; let me explain.
I started collecting media when I was about 13, starting off with a small custom-built HTPC with 2 hard drives. Over time this grew, but I was still just adding disks randomly to create a pool, which is why I used FlexRAID at the time: the ability to add random disks at will and still have a protected pool was ideal for teenage muffin and his limited funds. Over the years I have continued this setup for the price benefits, and it’s pinched me in the ass a few times.
FlexRAID is awful. I’m dubious as to whether it’s working at all, to be honest. Long story short, I’ve come close to total data loss a few times, and these days I am constantly losing data and having it randomly pop up again because the software doesn’t work how it should, so I decided it was finally time to drop some dough and build a proper storage server to migrate everything over to.
This post will be about that build, and how I managed to talk myself into buying a motherfucking 60 bay chassis.
HP, you motherfuckers.
So there I was, moving VMs off of my main host (DL380 G7, 2x X5690s, 192GB RDIMM), getting ready to replace the 8x 300GB RAID10 array I’ve been using for a while with some 1TB disks and SSDs. Awesome, right?
So the host was powered down, ready for an upgrade, and being the logical guy I am, I decided to do some software upgrades while I was at it.
I used the latest SPP and ran that through (for some reason I was running a BIOS from 2010?). Once that was done I had updated firmware, so I moved on to ESXi. I’ve been running 6.0.0 U2 for the longest time and thought this would be the ideal time to upgrade.
I got the HPE official ISO, ran through the upgrade, waited for it to boot annnddd….
After some digging, it would appear this is actually a known issue with most G7 servers (unsure about G6).
It would appear that one driver in the 6.5 image is causing this: “hpe-smx-provider” (650.03.11.00.17-4240417). Installing the standard ESXi 6.5 ISO does allow the server to boot, but it’s missing a lot of drivers and doesn’t give the pretty all-inclusive system stats that the HPE ISO does.
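If you want to try the HPE image anyway, the workaround floating around is to remove that VIB from the existing install before running the upgrade. A sketch, assuming an SSH session on the host; check the exact VIB name on your own build first:

```
# list installed VIBs and confirm the offender is present
esxcli software vib list | grep smx

# remove it, then reboot before attempting the 6.5 upgrade
esxcli software vib remove -n hpe-smx-provider
reboot
```

As always with esxcli surgery: have your config backed up and your VMs off the host first.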
It’s colo time baby!
So for a while now I have been toying with the idea of putting my own hardware up in ‘the cloud’, but due to the enormous prices for a homelabber I decided against it many times. Recently, however, I found a deal that was too good to pass up (considering UK/EUR pricing) and pulled the trigger.
Muffin, why on earth do you need to colo?
So, as happy as I am running all my services at home in my lab, I have at times had difficulty with uptime for the things I actually care about. Take this blog, for instance: at the time of writing, it lives on a borrowed server, because moving house meant I had to take everything offline, move, and make sure everything works as it did before. Having services like this blog in a colo alleviates a lot of this. (By the time you guys are reading this, all those 1s and 0s should be coming from my colo!)
Here we are again! One whole year after my initial homelab rundown and I’m back with another one. I’ve made a lot of changes in the past year: some significant, some not so much, and some of the main ones still a WIP.
I was originally supposed to have a rundown out in March, but life got the better of me and I couldn’t finish a decent write-up. Alas, here we are.
Without further ado, here is the new, cleaner and up-to-date diagram for MuffNet:
As you can see when comparing this to my previous rundown, there is a lot more happening at a glance. I’ll try to go through it step by step for those of you interested in the internal workings of my lab.
Virtualization is awesome. It’s been the standard in enterprises for many years, and although containers are gaining interest, virtual machines remain the go-to for any business; you’d have to be either crazy or incompetent not to go down this route in most scenarios. This post will go through the very basics of setting up and using ESXi.
This got me thinking: my automated downloads crunch through terabytes of data every month on a home connection, and if my ISP were to look into this it would not show me in a good light, due to the amount of P2P going on in my household. With my flatmate constantly having torrent connections open and Sonarr + CouchPotato downloading via torrents and NZBs, there is a lot of data I would like to mask from my ISP. Thanks to that awesome spreadsheet I managed to find a service that looked perfect for me: vpn.ac.
Surprisingly, I don’t have any backups of my stuff. Apart from my MicroServer Gen8 FreeNAS replication project, which was necessary for my photos and personal media, I don’t have any backups of my VM hosts, laptops or the machines I use every day, which is very, very bad. A lot of my stuff is stored on RAID or RAID-like systems, which I know are by no means a backup, but it’s kept me going, and the sheer thought of having to invest in another storage system to store things that are already on storage servers makes my stomach sink a little. I’m only (currently) 20 and have just started my career, so I don’t really have the money to keep spending on hardware, unfortunately.
Apart from exceptions, of course…
I’ve gotten a few requests from various places to show my Plex server to the world. This is by no means as good as it gets, but it is mine and it works phenomenally. I’ve had a lot of concurrent streams on this thing and it’s yet to break a sweat.
I have a lot of 9211-8i’s in the lab, probably about 12, all flashed to IT mode. These are a godsend, as they just work. They’re easy to flash to IT mode (which rids them of any RAID logic, turning them into plain HBAs) and they’re 6Gb/s, which is awesome.
One of the things I didn’t learn until quite a bit later is that these cards by default (on Windows at least) don’t honour spin-down calls from the OS, meaning my disks were running 24/7. That wasn’t a problem until I had about 20 disks spinning and realised how much power that actually was for no good reason. After a little searching I found forum posts from people who had modified the card’s INF driver file and got spin-down working; after putting it off for a while I finally did it too, and I have noticed both a drop in power draw and a small temperature drop, which is nice.
Welcome to Muffin’s first Homelab rundown! I’ll explain what’s going on along the way, just sit back and scroll.
Before getting too much into it I’ll just leave this diagram here, it may help you understand things a bit better:
So I started off my IT career as an intern for a rather large company in London. I was doing a lot of misc stuff, mostly desktop support, but I always pushed for as many networking bits as I could; networking is great. A year later, here I am as a junior network engineer on the path to my CCNA (almost there!)
This post is about VLANs: VLANs in the Cisco world, explained the way I wish someone had explained them to me. Please bear in mind that this will not be a very technical explanation (you can find that elsewhere); this is about helping you get to grips with VLANs, how they work and how to set them up. Once you have a better understanding of VLANs you can go and read up some more on Cisco’s website, perhaps.
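To show just how little configuration is involved once the concept clicks, here’s roughly what creating a VLAN and dropping a port into it looks like on a Cisco switch. The VLAN ID, name and interface here are examples, not from any real config:

```
Switch(config)# vlan 10
Switch(config-vlan)# name SERVERS
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
```

A handful of lines, and that port now only talks on VLAN 10. We’ll unpack what that actually means as we go.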
What is a VLAN?
So I need to create an IPsec point-to-point link between two sites so my two FreeNAS boxes can replicate to each other, as per this project. I already run my network on pfSense, have done for a few years now, and think it’s great, so slapping a pfSense box in my mother’s house seemed like the easiest thing to do. Once all the NAS business was set up, I dug out an old desktop machine (Dell OptiPlex 760), put a 2-port Intel gigabit card inside and installed pfSense. After bringing it to my old house and changing the config on their DD-WRT router to act as a switch + AP, I brought up the WAN connection and did some IP configuring. Once the interweb was set up and I’d confirmed the LAN was fully working (had to turn on static NAT for my lil’ bro’s PS4), I went ahead and configured the tunnel.
This plan didn’t work as intended, and I had to come back to the drawing board and rethink/simplify some things. I have left everything as-is up to the point of failure in case it’s important to anyone; it really makes no sense to delete it.
Below is what I wanted to do and a few of the steps I documented towards that goal; here is where I revisited this project with a much different approach. I would read this first anyway before reading the revisited version.
If you don’t try you’ll never know, right?
So I’ve had this problem for a while since moving out, but I excuse it because, well, she gave birth to me. My mother calls me constantly asking me to fix stuff or implement something new in my old home, which I’m fine with, but sometimes it feels extremely tedious when I could have sworn I fixed that same issue not one month ago…
The latest problem I’m facing is photo storage. My family have a few MacBooks with very limited onboard storage, which they seem to fill up quite fast. Upgrade the storage? Sure, but that’s short-term and not exactly safe, not in my eyes anyway. My solution? The following…
If you’re reading this revision first it’s probably better to read my initial plan as well before continuing as there are more details outlined there as to my overall goal and information about the hardware setup. You can read this here.
Right, so, at this point I was going to make a post about how I set up the storage under FreeNAS running on ESXi, with RDM on one machine and passthrough on the other with the upgraded CPU, but I ran into some problems. Pretty shitty ones.
After a long ordeal with Britain’s worst courier service (Yodel, for those of you unaware), I finally received most of my parts, which is good enough to make a start. So here it all is:
FreeNAS will install just fine on the MS without any modifications and everything seems to work, so I’ll skim over the install and add some detail on the configuration I used.
As of writing, the current stable version of FreeNAS is 9.3. All that needs to be done is to download the ISO from the FreeNAS website and copy this bootable ISO to a memory stick using your preferred method; I personally use my trusty ODD emulator (as seen in this archived post) and use iLO to get through the installation (again, detailed in this archived post).
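If you don’t have fancy toys like an ODD emulator, writing the ISO straight to a USB stick from a Linux or Mac box works too. A sketch only: double-check the device node with lsblk or diskutil first (dd to the wrong disk is unrecoverable), and the exact ISO filename will vary with the release you download.

```
# /dev/sdX is a placeholder for your USB stick's device node
dd if=FreeNAS-9.3-RELEASE-x64.iso of=/dev/sdX bs=1M
sync
```

The sync just makes sure the write actually hit the stick before you pull it out.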
Boot from your chosen installation media using ‘F11’ at bootup and let the FreeNAS installer do its thing.
BT is an alright ISP: I get good speeds (150D/40U), never get throttled, and have yet to hear anything about my internet activity, which involves a fair amount of P2P. Their hardware, however, is god awful, and the BT HomeHub 5 is no exception.
I won’t go into how bad it is, because if you’re reading this hoping to repurpose one, you already know. The only good thing about it is the 4-port gigabit switch attached to it, which is actually okay, so we are going to turn it into a dumb switch.
Doing this is actually very simple: all that’s needed is to turn off all the services on the HomeHub to make sure it doesn’t interfere with anything else on the network.