For a while now I’ve wanted a compact, ‘at a glance’ look at some of my lab statistics and other things in my flat, since my lab is elsewhere. Originally I started looking for 4K monitors that could be rotated vertically, then at small hobby boards that could output 4K, and eventually I just put the project aside and spent my money elsewhere.
Recently, I got an old monitor from work that I’d been using as a VGA display for bits and pieces from time to time, as my TV has no VGA input, but I eventually decided this would be a cool project to revisit using that monitor, even if it’s not as flashy as I first envisaged.
So it’s that time of year again when my girlfriend and I decided we wanted to move. After a few months of searching we found a very cosy flat (with a not-so-cosy rent price to go with it) in Zone 1/2, London. Without boring you with a life story, the aim of this move was to travel extremely light, only moving the bare essentials, so that when it came to moving back out there weren’t masses of furniture and servers to shift. Moving servers is not fun.
This blog post will be about a build I wanted to do for this move: a small, low-powered host that would live in this new flat as a local VM host/storage server for when accessing things from the lab would be inefficient.
It’s a pretty cool project if I do say so myself, and this type of build would be ideal for a lot of people I see on the internet who want something ‘all in one’ and have requirements similar to mine. I’m hoping people in similar situations will find some inspiration in this build and either copy it or use it as a stepping stone for something similar.
So a lot of you read and enjoyed my colocation post a while back and since then things have been going well.
I’ve used the colo’d host pretty heavily and it’s been serving me well for what I needed it for; my public-facing applications have never been happier. I have since deployed cool things like an entire VDI infrastructure there, which has been awesome. I’ve pretty much managed to ditch my desktop and use my MacBook Pro along with my Windows & Linux VDI machines with minimal lag. It’s been great.
Recently, however, I have been running dangerously low on storage, and upgrades need to be done to accommodate the new use cases I’ve brought to the host. The following post will be my adventures in kitting the host out as best I can for the workloads I am throwing at it, in the ‘muffin fashion’ of cobbling shit together until it works.
So a while ago, I posted this rundown of my Plex server and storage setup for media, and it got some traction, which is nice. In that post I mentioned that my storage setup left a lot to be desired and that I had issues with it; let me explain.
I started collecting media when I was about 13, starting off with a small custom HTPC build with two hard drives. Over time this grew, but I was still just adding disks randomly to create a pool, which is why I used FlexRAID at the time. The ability to add random disks at will and still have a protected pool was ideal for teenage muffin and his limited funds. Over the years I have continued this setup due to the price benefits, and it’s pinched me in the ass a few times.
FlexRAID is awful. I’m dubious as to whether it’s working at all, to be honest. Long story short, I’ve come close to total data loss a few times, and these days I am constantly losing data and having it randomly pop back up due to the software not working how it should, so I decided it was finally time to drop some dough and build a proper storage server to migrate this one over to.
This post will be about that build, and how I managed to talk myself into buying a motherfucking 60 bay chassis.
So for a while now I have been toying with the idea of putting my own hardware up in ‘the cloud’, but due to the enormous prices for a homelabber I decided against it many times. Now, however, I’ve found a deal that was too good to pass up (considering UK/EU pricing) and pulled the trigger.
Muffin, why on earth do you need to colo?
So as happy as I am running all my services at home in my lab I have had difficulty at times with uptime for things I actually care about. Take this blog for instance, at the time of writing this post, this blog lives on a borrowed server because moving house has meant I have had to completely take everything offline, move, and ensure everything works as it did before. Having services like this blog in a colo alleviates a lot of this. (By the time you guys are reading this all those 1s and 0s should be coming from my colo!)
Surprisingly, I don’t have any backups of my stuff. Apart from my MicroServer Gen8 FreeNAS replication project, which was necessary for my photos and personal media, I don’t have any backups of my VM hosts, laptops or the machines I use every day, which is very, very bad. A lot of my stuff is stored on RAID or RAID-like systems, which I know are by no means a substitute for backups, but it’s kept me going, and the sheer thought of having to invest in another storage system to store things that are already on storage servers makes my stomach sink a little. I’m only (currently) 20 and have just started my career, so I don’t really have the money to keep spending on hardware, unfortunately.
I’ve gotten a few requests from various places to show my Plex server to the world, this is not by any means as good as it gets but it is mine and it works phenomenally. I’ve had a lot of concurrent streams with this thing and it’s yet to break a sweat. Continue Reading
So I need to create an IPsec point-to-point link between two sites so my two FreeNAS boxes can replicate to each other as per this project. I already run my network on pfSense, have done for a few years now and think it’s great, so slapping a pfSense box at my mother’s house seemed like the easiest thing to do. Once all the NAS business was set up I dug out an old desktop machine (Dell OptiPlex 760), put a 2-port Intel gigabit card inside and installed pfSense. After bringing it to my old house and changing the config on their DD-WRT router to act as a switch+AP, I brought up the WAN connection and did some IP configuring. Once the interweb was set up and I confirmed the LAN was fully working (had to turn on static NAT for my lil’ bro’s PS4) I went ahead and configured the tunnel. Continue Reading
This plan didn’t work as intended. I had to come back to the drawing board and rethink/simplify some things. I have left everything as-is up to the point of failure in case it’s important to anyone, and it really makes no sense to delete it.
Below is what I wanted to do and a few of the steps I documented towards this goal; here is where I revisited this project with a much different approach. I would read this first anyway before reading the revisited version.
If you don’t try you’ll never know, right?
So I’ve had this problem for a while since moving out, but I excuse it because, well, she gave birth to me. My mother calls me constantly asking me to fix stuff or implement something new in my old home, which I am fine with, but sometimes it feels extremely tedious, as I could have sworn I fixed that same issue not one month ago…
The latest problem I’m facing is photo storage. My family have a few MacBooks with very limited storage onboard which they seem to fill up quite fast. Upgrade the storage? Sure, but that’s short term and not exactly safe, not in my eyes anyway. My solution? The following… Continue Reading
If you’re reading this revision first it’s probably better to read my initial plan as well before continuing as there are more details outlined there as to my overall goal and information about the hardware setup. You can read this here.
Right, so, at this point I was going to make a post about how I set up the storage under FreeNAS running on ESXi, with RDM on one machine and passthrough on the other with the upgraded CPU, but I ran into some problems. Pretty shitty ones. Continue Reading
FreeNAS will install just fine on the MS without any modifications and everything seems to work well, so I’ll skim over the install and add some detail to the configuration I used.
As of writing this the current stable version of FreeNAS is 9.3. All that needs to be done is to download the ISO from the FreeNAS website and copy this bootable ISO to a memory stick using your preferred method; I personally use my trusty ODD emulator (as seen in this archived post) and use iLO to get through the installation (again, detailed in this archived post).
Boot from your chosen installation media using ‘F11’ at bootup and let the FreeNAS installer do its thing. Continue Reading
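If you don’t have an ODD emulator to hand, writing the ISO to a USB stick with dd works too. A minimal sketch from a Linux box; the ISO filename and target device below are assumptions, so check lsblk or dmesg for your actual stick before running anything:

```shell
# The filename and device are examples; substitute your own.
# WARNING: dd will happily overwrite the wrong disk, so triple-check the target.
ISO=FreeNAS-9.3-RELEASE-x64.iso
USB=/dev/sdX

# Write the image raw to the stick, then flush buffers before unplugging
dd if="$ISO" of="$USB" bs=1M && sync
```

Once written, the stick behaves just like the bootable CD, so the ‘F11’ one-time boot described above works the same.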
Installing ESXi on most machines is relatively easy, and the MicroServer Gen8 is no different. I won’t go into too much detail on the installation/configuration as it’s child’s play.
Note: I used iLO to configure all of this as per my last post on this project. Standard iLO only allows you to control the machine over the network up to a certain point in the OS bootup and then requires a license to work. I have a license that I put on both of these machines, but you can also just get a 30-day trial of the license for this setup phase of the MS’s lifetime.
Firstly, get the HP-customised ESXi image from here. Once this is downloaded you can either use tools like UNetbootin to ‘burn’ the image to a USB drive, or, if you have a valid iLO license (even a trial one), you can mount the ISO through the iLO console on another machine and still boot from the ISO over the network. Do not try to mount the ISO with the standard iLO license, as it will boot and then cut off during the installer bootup.
I used my trusty external HDD/ODD emulator. This thing is my #1 tool, as I can just throw any number of ISOs onto it, mount one using the onboard screen and controls, and the device emulates a CD drive to any machine via USB. HellaFuckinUseful.
Turn the thing on, press ‘F11’ to bring up the boot override menu, select the one-time CD/USB boot option and let the ESXi installer boot up.
Go through the setup stages; I won’t go into that here as it’s just a next>next>next deal. Just ensure you select the correct install device, which in my case is the 8GB USB drive inside the system.
Once it’s done, remove your installation media and reboot the box. If ESXi doesn’t start to boot, go into the BIOS and change the boot order to allow the ESXi source to boot first; I did this in my initial BIOS setup so everything was hunky-dory.
Once booted up, press “F2” to configure ESXi, enter the password you chose at installation and set up networking. It should have gotten an IP from DHCP (if you have DHCP enabled), but I like static addresses for my hypervisors, so I used an address I had ready: 10.0.0.184. You can also change the DNS name of the box here. Exit out of the configuration options and save the config, then leave the box and go over to your main machine.
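For anyone who prefers a shell to the yellow DCUI menus, the same static addressing can be set with esxcli over SSH. A sketch assuming vmk0 is the management interface; the netmask, gateway and DNS server (10.0.0.1 is just a guess at a typical layout for that subnet) are assumptions to swap for your own:

```shell
# Set a static IPv4 address on the management vmkernel interface (vmk0 assumed)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.0.0.184 -N 255.255.255.0

# Default gateway and DNS server are assumptions; substitute your router's address
esxcli network ip route ipv4 add -n default -g 10.0.0.1
esxcli network ip dns server add -s 10.0.0.1
```

Handy if you’re scripting a rebuild later, though for a one-off box the F2 menu does the exact same job.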
If you’ve never used ESXi before you will need to download the vSphere Client; you can get this by navigating to the IP you chose (or the box was given) and installing it from there. If you already have the client, just open her up, slap in the IP, username (root) and chosen password, and connect.
I use the web client myself and everything went swimmingly.
The following screenshot was taken a little bit after this step so some things have already been configured, it doesn’t really matter but I’m mentioning it to avoid any confusion.
The last thing you want to do here is go over to the configuration options of the ESXi host and add the storage. I just added the SSD (or your chosen storage medium) as a VMFS5 datastore using all the space available; the name is completely up to you.
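The same datastore can be created from the ESXi shell with partedUtil and vmkfstools if the client isn’t handy. The device name and end sector below are placeholders for illustration, not values from my box:

```shell
# List attached disks to find your SSD's device identifier
ls /vmfs/devices/disks/

# Create a GPT label with one VMFS-typed partition spanning the disk.
# The long GUID marks the partition type as VMFS; the end sector here is an
# example, so compute yours first with: partedUtil getUsableSectors <device>
partedUtil setptbl /vmfs/devices/disks/<device> gpt \
  "1 2048 468862094 AA31E02A400F11DB9590000C2911D1B8 0"

# Format partition 1 as VMFS5 with a datastore name of your choosing
vmkfstools -C vmfs5 -S datastore1 /vmfs/devices/disks/<device>:1
```

The client does all of this for you behind the scenes, which is why the GUI route is the sane default; the CLI version is mostly useful for kickstart scripts or headless rebuilds.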
Oh, and add your license. I do this all in the vSphere web client, which applies my unlimited license, but you will want to apply it ASAP even though you have 60 days.
Right, so this is where things start to change quite a bit between these two boxes, and I’ll explain as best I can.
The box at my mother’s house, now known as MUFFHOST04, or MUFF04, needs to have pfSense running on it as well as FreeNAS. I can create a point-to-point link using DD-WRT (what she has right now) but I would much prefer pfSense, as I am certain of its performance and reliability. I will be setting up pfSense at my house and then simply plugging it all in offsite, which poses some configuration hurdles. Continue Reading