Whatcha talkin’ bout? FOO! Setting up proper failover in a cluster.

Here is what a two-node failover cluster should look like: double network redundancy on the back end, with each node and the SAN connecting to both back-end switches.

selection_046

In order to set up proper cluster failover, the server needs to be set to Fail Over Only (FOO). Remember that Windows has iSCSI volume size restrictions, so when you create volumes and LUNs on your SAN you need to limit their size. See the link at the end of the article.

To do this you need a couple of things. First, connect the iSCSI target to both servers. Some SAN manufacturers have their own DSM drivers, which are usually modified versions of the Microsoft DSM driver; HP actually recommends using the Microsoft DSM.

The proper DSM is required in order to set up MPIO (Multipath Input/Output) on a cluster.

In short, MPIO is the multipath interconnect necessary for failover, and it uses the DSM driver to achieve this. Generally the DSM driver is provided by the OS vendor, in this case Microsoft. This is also HP’s recommended method of connecting to the SAN from Windows, and most other manufacturers use the Microsoft DSM driver as well.

Map iSCSI connections

First we need to properly map the iSCSI connections. Be aware that you will be mapping the same connection multiple times; this is necessary for failover. In the above example each server has four connections: two for the .20 subnet and two for the .30 subnet. Open iSCSI Initiator and select the Discovery tab.

In the Discovery tab add all four IP destinations: x.x.20.110, x.x.30.110, x.x.20.111, x.x.30.111.

Click the Discover Portal… button and add each one of those connections.

Next select the Targets tab; you should see the inactive iSCSI connection here.

Highlight the connection and click Connect. The HP SAN is set up with a single iSCSI connector and multiple LUNs; some devices have multiple iSCSI connectors with a single LUN on each. Depending on the setup, you might have to do this for each connector.

A Connect To Target window will pop up; check off Enable multi-path and click Advanced…

Under Local adapter select Microsoft iSCSI Initiator; for the Initiator IP select the IP of the server, and for the Target portal IP select one of the two IPs on the same subnet as the Initiator IP. It should look like the following.

selection_047

Now repeat these steps: highlight the same connection, click Connect, check off Enable multi-path, click Advanced…, rinse and repeat. This will map the other three connections.

10.10.20.4 -> 10.10.20.111, 10.10.30.6 -> 10.10.30.110, 10.10.30.6 -> 10.10.30.111.

If you click the Favourite Targets tab you should see four similar targets. These are all the connections you just created for the one iSCSI target (IQN).
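If you prefer to script the mapping rather than click through the GUI, the same portals and sessions can be created with Microsoft’s iSCSI PowerShell cmdlets. This is only a sketch: the IQN lookup assumes a single target, and the IPs are the example addresses from this article, so substitute your own.

```powershell
# Sketch only: substitute your own portal IPs and target IQN.
# Add all four target portals from the Discovery step.
"x.x.20.110","x.x.30.110","x.x.20.111","x.x.30.111" |
    ForEach-Object { New-IscsiTargetPortal -TargetPortalAddress $_ }

# Connect the same target once per path, with multipath enabled,
# pairing each initiator IP with a portal on the same subnet.
$iqn = (Get-IscsiTarget).NodeAddress
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.10.20.4 -TargetPortalAddress 10.10.20.110
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.10.20.4 -TargetPortalAddress 10.10.20.111
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.10.30.6 -TargetPortalAddress 10.10.30.110
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true -IsPersistent $true `
    -InitiatorPortalAddress 10.10.30.6 -TargetPortalAddress 10.10.30.111
```

-IsPersistent $true is what makes the sessions appear as Favourite Targets so they reconnect at boot.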

Set up Connection Fail Over

Next start the MPIO applet: Start > Run > mpiocpl. Your vendor should be listed in the Devices window; if it is not, you will need to add it via the Others window on the Discover Multi-Paths tab. Highlight the Device Hardware ID and click Add. Say no to the reboot.

selection_048

Next, in the SPC-3 compliant window, check off Add support for iSCSI devices and click Add. You will again be prompted to reboot; this time do so.

selection_049

If you run the command mpclaim -s -d in an admin CMD session you should now see the connection.

selection_050

Back in the iSCSI Initiator applet, if you highlight the IQN connection on the Targets tab and click Properties > Devices > MPIO, you should see the Load Balance policy and all the paths this connection can fail over to.

Your Load Balance policy will initially default to Round Robin; change this to Fail Over Only. When you do, one connection should be set to Active and all others should go into Standby. Click Apply.

Don’t worry if the connections don’t go into Standby; just make sure that FOO is applied. This can sometimes happen with multiple mapped disks.

selection_051

Now if you run the same mpclaim command, your LB Policy should show FOO (Fail Over Only). You will need to do this for each mapped disk.

selection_052

To change the Load Balancing policy to FOO from the command line, run mpclaim with the -L and -M switches:

mpclaim.exe -L -M 1

The 1 at the end specifies the FOO LB policy: if a connection fails, it will immediately fail over to the next one. This is for always-on, high-demand systems.
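Putting the commands from this section together, a typical elevated CMD session looks something like the following. This is a sketch; disk numbers will vary on your system, so check the -s -d listing first.

```
:: List MPIO disks and their current Load Balance policy
mpclaim -s -d

:: Set Fail Over Only (policy 1) as the Load Balance policy
mpclaim -L -M 1

:: Show the individual paths for MPIO disk 0 and verify the policy
mpclaim -s -d 0
```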

Now if you run the -s -d switches you should see FOO under the LB policy.

selection_053

Now go into Control Panel > Administrative Tools > Computer Management, bring the iSCSI disks online, and format them to NTFS. I had an instance where a disk wouldn’t come online even after I brought it online. If this is the case, resize your LUN disks; they are too large.
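If the Disk Management GUI is being finicky, the same bring-online-and-format step can be done with diskpart from an admin prompt. This is a sketch: disk 1 is a placeholder, so check the list disk output for your actual iSCSI disk before selecting anything.

```
diskpart
rem Identify the iSCSI disk first
list disk
select disk 1
attributes disk clear readonly
online disk
create partition primary
format fs=ntfs quick
assign
```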

Mpclaim determines the policy for the iSCSI connection. For more information on mpclaim, use the following Microsoft reference: https://technet.microsoft.com/en-us/library/ee619743(v=ws.10).aspx

iSCSI and VHD/VHDX volume size restrictions: https://technet.microsoft.com/en-us/library/dd851699(v=ws.11).aspx

How to Create a Dell Server Update Utility (SUU) ISO

In this example we are going to walk through creating a Dell SUU ISO for 64-bit Windows. The SUU is crucial if you are building out Dell servers, as it updates firmware and drivers.

I find the Dell documentation isn’t overly helpful, so I’ve put together this quick tutorial on how to create a customized Dell SUU ISO. Keep in mind this tutorial creates a Windows-based installation ISO.

1. Go and download the latest Dell Repository Manager if you do not have it installed already.
http://en.community.dell.com/techcenter/systems-management/w/wiki/1767.dell-openmanage-repository-manager

2. Once installed find the icon on your Desktop and launch it.
icon

3. Once launched, you should be prompted to update some plugins, go ahead and do so. If you are prompted to update the Dell Online catalog do so as well.

4. Once the application has loaded, go to the menu bar and select Source > View Dell Online Catalog.
view_dell_catalog

5. If you have not updated the Dell Online Catalog, you should now be prompted to update, click Yes.
sync_db

6. Under DUP Format, check off Windows 64-bit to narrow down the bundles.
filter_catalog

7. Check off your System Bundles based on the models you’d like the ISO to support.

8. Once these are all selected, click Create Deployment Tools.
deployment_tools

9. A wizard will appear, select Create Server Update Utility (SUU) > SUU to ISO. Select Next.
create_suu

10. Accept the defaults on the Select Plug-ins page and select Next. You will be prompted for the SUU export location; select a folder and click OK.
create_suu_2

11. On the Summary and Finish page, review the Selected Bundles and confirm that all the appropriate models have been selected for export. Click Finish if everything looks okay. The job will be added to the Jobs Queue where the progress can be seen.
create_suu_3

Fix and Repair a Dead Hard Drive

Everyone’s got a story about losing important data one way or another, whether it’s from the accidental deletion of some files, a stolen computer, or more commonly a failed hard drive.

To be honest I’ve never been a casualty of lost data; I always kept backups… probably too many backups… like backups of backups. To others it’s “a lot of work”, probably because they don’t have a good process/mechanism in place or they are “limited” technologically, and that’s fair.

It’s never fun thinking about what you can’t get back when your hard drive goes belly up… but what if you could get it back, and fairly painlessly? Well, if your hard drive is dead, toast, kaput, it just might be salvageable, as I found out this week when a friend of my sister’s dropped off their hard drive to see if their life memories could be retrieved.

The hard drive is a Seagate, model ST31000528AS, a 1 TB SATA 3.0 Gb/s drive.
IMG_20160422_125811

It would not power on at all; my first inclination was that something on the PCB (Printed Circuit Board) had gone awry. First things first, let’s remove the PCB so we can take a look at it. This may require a Torx screwdriver, which most techies will have on hand.
IMG_20160422_195902
IMG_20160422_100855

Now the first place to check is the two diodes on the PCB. Check the resistance of each diode; if the resistance on either is very low, there is a good chance that removing the diode will resurrect your hard drive. The diodes act as circuit protectors (similar to a fuse): when there is a power surge, a diode “takes one for the team” to prevent damage to other circuitry.
IMG_20160422_102131

Notice when I test the first diode, the resistance is fairly high, measuring approximately 48K ohms. This diode is OK.
IMG_20160422_103230
IMG_20160422_103222

However, when I measure the second diode, the resistance is almost nil. This diode is bad.
IMG_20160422_102258
IMG_20160422_103205

Simply desolder this diode, reassemble the PCB to the hard drive, cross your fingers and power it up.
IMG_20160422_105846

If it worked, great! Remember though, going forward you no longer have the circuit protection unless you replace the diode you removed. If for whatever reason there is another power surge you probably won’t be so lucky.

Now go and backup that hard drive so next time this happens you can get a good night’s sleep!

 

Install Dash Cam (Aukey DR-H1) in 2012 Honda Civic (9th Generation)

Recently there’s been a lot of buzz around Dash Cameras with many “interesting” videos popping up all over YouTube. As a techie it’s always cool to fiddle around with new stuff and I wanted to put such a camera into my daily commuter, a 2012 Honda Civic Sedan.

I wasn’t dying to get a dash cam, but it’s one of those techie things that, if it falls in my wheelhouse, I’m going to do no matter what. So I was shopping on Amazon last week and somehow came across a cheap covert dash cam for $79.99 CDN with good reviews, so I thought, hmm, at this price it’s worth a shot. At higher price points I was much more reluctant to pull the trigger, but this definitely seemed like a good value buy.

The camera I stumbled upon is an Aukey DR-H1, a small, well-built little camera. It supports up to 32 GB of micro SD storage and records at 1080p. It doesn’t have any fancy bells and whistles like some other cameras do (GPS, etc.), but I wasn’t going to use those anyway. To be honest I just wanted something that was a “set it and forget it” type of product, just for peace of mind. So let’s get to my install.

What’s in the box?

  • Dash Cam
  • Fuse Box Wiring Power Cable (with video out – which is used to customize settings)
  • Cigarette Lighter/12 Volt Accessory Power Cable
  • Manual + Registration Card (extends warranty by 6 months, to 30 months total)

IMG_20160331_131411
IMG_20160331_132845IMG_20160331_163146

Camera Closeup

It’s very small and covert when installed, seems solid with great build quality. As you can see it uses the 3M double-sided tape, so once you stick it, it should hold solidly.

IMG_20160331_162839IMG_20160331_162850IMG_20160331_162859IMG_20160331_164042

Initial Testing:

I wanted to go the fuse box route for installation; it’s much cleaner, and routing the cable in the Civic took minimal time, maybe 10 minutes max. In my opinion, the supplied manual did not give great directions for installing the camera into the fuse box. As a first-time dash cam installer I thought the camera could just operate on ACC power (ignition 12 V); I didn’t understand why it needed both constant 12 V and ACC (ignition) 12 V, so before installing the camera in the car I did some testing externally to better understand.

My testing concluded that the ACC ignition power wire is basically a normally open switch: when energized, it closes and the camera powers on. It made sense after playing with this, because if the camera worked off ACC power alone it would never shut down cleanly unless it had some internal circuitry/battery. That is, when the car is turned off, power to the camera would be cut immediately, and the camera would not get a chance to shut down gracefully. This was easy to see in testing: when the car was turned off, the camera continued to run for about 3 seconds afterwards.

IMG_20160401_180749
Constant voltage applied; note the dash cam is powered off.

IMG_20160401_180803
Acc voltage applied, note dash cam is now powered on.

Fuse Box in Civic:

First order of business was to find constant 12 V power and ACC 12 V power, so I pulled out the multimeter and found fuse location 10 (constant) and 23 (ACC). There are obviously more possible locations and other spots you could tap, but these worked for me.

IMG_20160331_164759
fuse

IMG_20160401_182319
Here’s the successful test configuration with the supplied wiring.

Prepare Ground Wire and Locate Grounding Location

The only modification I had to make to the supplied wiring was turning the black (ground) wire into a usable ground wire for installation. The process is quite simple: cut the end of the cable and crimp on a more appropriate end.

IMG_20160401_200917IMG_20160401_202347

There are certainly many spots to ground this off, I picked a location that I thought was suitable for this application.

IMG_20160401_200635IMG_20160401_200439

Install the Dash Cam and Run the Wire

Now that we have everything ready to go, find a spot to stick the dash cam. The most common spot is right behind the rear view mirror so it does not obstruct your vision in any way. I chose to go right behind the mirror just on the right hand side.

IMG_20160401_203310

Run the wire… it seems daunting but really it’s rather simple as you will find out. I’ve marked the pictures in red to illustrate where the wire is running.

IMG_20160401_204330IMG_20160401_204335

IMG_20160401_204747IMG_20160401_204847

IMG_20160401_204855IMG_20160401_205252

IMG_20160401_205307

Wire it up

As described earlier, this is wired to a 9th-generation Honda Civic (model year 2012): yellow wire (constant 12 V) to fuse 10, red wire (ACC) to fuse 23, black wire to ground.

IMG_20160402_130451

Configure Settings

Plug the yellow RCA/composite wire into some kind of display. I didn’t have a free TV kicking around, so I ended up making a custom RCA cable from some leftover cables I had lying around. I ran it to my TV inside and used FaceTime to program it… funny, I know, but it worked well and rather quickly. This let me configure a few settings, the most important being date/time. Two other settings of value are the 720p/1080p resolution and the 1-, 3-, or 5-minute clip length.

IMG_20160402_124433

File Size and Recording Capacity

I set my camera to the 5-minute video length setting. I did some rough calculations, and it appears the camera can record a maximum of approximately 300 minutes of footage at 1080p on a 32 GB card. The camera rolls over on its own, so it’s maintenance free.

size
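The rough capacity calculation can be sketched in a few lines of Python. The ~14 Mbps bitrate is my assumption, inferred backwards from the observed ~300 minutes on a 32 GB card, not a published spec for the DR-H1:

```python
# Sketch of the recording-capacity math. The 14.2 Mbps bitrate is an
# assumption inferred from ~300 minutes on a 32 GB card; the camera's
# real bitrate may differ.

def minutes_of_footage(card_gb: float, bitrate_mbps: float) -> float:
    """Approximate minutes of video a card can hold at a given bitrate."""
    total_bits = card_gb * 1e9 * 8               # card capacity in bits
    seconds = total_bits / (bitrate_mbps * 1e6)  # seconds of recording
    return seconds / 60

print(round(minutes_of_footage(32, 14.2)))  # roughly 300 minutes
```

At the 5-minute clip setting, that works out to about 60 clips before the oldest one is overwritten.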

Final Thoughts

All in all it was a fun little project; it didn’t break the bank and it was a good learning experience. Overall the camera is not too bad. At night it’s not the greatest, but as I always say, you get what you pay for.

I’ve taken some video and posted it up. Enjoy, and as a side note, I’m sorry about the slightly distorted sound during the night-time clip; my music was a tad too loud.

Fix graphical desktop artifacts in CrossFire.

Tools:

  • Hawaii Bios Reader
  • Atiflash 4.17
  • DOS boot disk
  • HxD hex editor
  • Hawaii Fan Editor

I scoured the internet for a solution to my long-standing problem with my CrossFire setup, but after much digging my searches yielded no results. The problem: when in CrossFire, the cards would artifact while sitting idle on the desktop. I have the problem documented here.

Inside my computer I have two Gigabyte R9 290X Windforce cards in CrossFire; the exact model is GV-R929XOC-4GD. One uses the F2 BIOS, the other the F11 BIOS. When I game, the temps average about 60-70 degrees Celsius on the GPU cores and about 95-100 degrees on the VRMs; my CPU doesn’t exceed 45 degrees. The cards are at stock clock speeds and both BIOS versions are the same. I recently updated the BIOS on both cards, but that did not fix the issue.

In short, I can do a 2-hour gaming session and everything runs smoothly; then, when I exit to the desktop, I get artifacts: lines across all 3 monitors. As soon as I go into a game again the lines disappear; back on the desktop, they reappear. If I bring up anything graphical, like a web page or YouTube, the lines disappear; if I minimize the browser, they reappear. If I stay on the desktop and disable CrossFire, again the lines immediately disappear.

I initially suspected the CrossFire setup itself. My other suspicion was that, despite both cards being the same make, one has memory chips by Hynix (F11 BIOS) and the other by Elpida (F2 BIOS). I believed the problem had something to do with the memory.

Noteworthy: when running a single card, this artifacting does not occur. It only happens in CrossFire and when the cards are in a low-power (idle) state, or rather when the clocks are dropped to conserve energy.

After much tweaking of the system and performing various tests, it all came down to the memory clock: the memory clocks were being stepped down to almost nothing. The reason I suspected the clocks is that the problem disappeared whenever I went into a graphically intensive application. And the reason I knew it was the memory clock and not the core clock: the core clock would clock up on demand, but the memory clock would not. It had two states, 150 MHz or 1250 MHz, and it only stepped up to 1250 when something graphical was being presented on the desktop or a game was being played. During “power play” mode the cards’ core clock drops to 350 MHz from a potential 1040 MHz, and the memory drops to 150 MHz from 1250 MHz. Mind you, the core can be stepped up on demand, and it does this rather well; the memory, apparently, not so much.

To be edited and flashed, the BIOS files require a *.rom extension. The files from the manufacturer did not have this extension, so I renamed them to include .rom and flashed them using Atiflash; it worked and my cards are running fine.

In order to fix the issue I had to hex edit both cards’ BIOS files and flash them with AtiFlash in DOS. I also disabled ULPS. Although ULPS is not a fix for the issue, I like knowing that when I hop out of a game the fans keep spinning to cool the card down to an acceptable temperature; I don’t like the idea of one card being passively cooled after it has reached 80+ degrees. I essentially edited both cards’ BIOS files to never drop the memory clock, so the memory clock is now always at 1250 MHz, and this fixed the problem. There are other tweaks I made as well: while not necessary, I also edited the core clocks so the core never drops below 500 MHz, with the next steps at 840 MHz and 1040 MHz (changed from 300 MHz, 727 MHz, and 1040 MHz respectively). Below is a screenshot of the PowerPlay profile changes, original on the left and edited on the right.
Capture1

Finally, I also changed my fan profiles and a single temperature profile. Since I raised the core clock slightly and the memory clock completely, I wanted to make sure the card was not running hot, so I raised the fan speeds by 10% and dropped the top temperature threshold by 10° C.

Capture3

The new version of Hawaii Bios Reader (left) can edit the fan profile.

The single temperature profile I was worried about was 90° Celsius/100% fan; I changed it to 80° Celsius/100% fan speed. Then I raised the other fan speeds by 10%: 56% went to 66%, and 25% went to 35%. You can see the changes I made to the fan profile below, as displayed in Hawaii Bios Reader. Note that in older versions of Hawaii Bios Reader only the PowerPlay values could be changed; the fan profiles had to be changed in a hex editor such as HxD, or alternatively with the Hawaii Fan Editor by DDSZ. The new version of Hawaii Bios Reader can edit the fan speeds and temperatures on the Fan profile page directly, so it is no longer necessary to hex edit the ROM file.
Capture2

The last step, after the BIOS was edited, was to flash the file using Atiflash within DOS. Download the boot disk and create a DOS-bootable flash drive. Place the ROM file and Atiflash in the root of the flash drive, boot into DOS, and flash the new BIOS for your card. Remember to do only one card at a time and to power down after each flash. Also flash one bank at a time: I have the original and the new BIOS on each card, and I used the performance bank for the custom BIOS. Atiflash usage is as follows:

atiflash -p 0 biosname.rom
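Beyond the -p (program) switch above, a couple of other common Atiflash switches are worth knowing before you flash. This is a sketch from memory, so verify against the usage text of your Atiflash version:

```
atiflash -i                  List the installed adapters and their numbers
atiflash -s 0 backup.rom     Save (back up) the current BIOS of adapter 0
atiflash -p 0 biosname.rom   Program adapter 0 with the new BIOS
```

Always save a backup of the original BIOS before programming anything.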

With all these changes to the GPU BIOS on both cards, I have now eliminated the desktop artifacts. My idle card temps hover around 50° C, roughly 3-5 degrees higher than with the stock BIOS clocks, and ULPS is disabled. Everything is peachy on the gaming PC.

Here are the two sample ROMs I created for my cards, F2 and F11.

For more detailed information check the below links and sources.

Disabling ULPS: open regedit and search (Edit > Find) for EnableUlps, then change the DWORD value from 1 to 0. Ignore EnableUlps_NA; it does nothing. Keep searching (press F3) through the registry and change every EnableUlps entry you find from 1 to 0, then reboot. Although disabling ULPS is not necessary, I like it because with this feature off the driver does not disable the secondary card after a gaming session, which in turn allows the fans to cool the card properly instead of just shutting it down.
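If you’d rather script the registry change than hunt with F3, something along these lines should flip every EnableUlps value under the display-adapter class key. This is a hedged sketch: the GUID below is the standard display adapter device class, and you should back up your registry before running it.

```powershell
# Sketch: set EnableUlps to 0 on every display adapter instance.
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}"
Get-ChildItem $class -ErrorAction SilentlyContinue | ForEach-Object {
    $props = Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue
    if ($null -ne $props.EnableUlps) {
        Set-ItemProperty -Path $_.PSPath -Name EnableUlps -Value 0
    }
}
```

Reboot afterwards, same as with the manual method.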

Editing the VGA BIOS: I used Hawaii Bios Reader, which is capable of writing a proper BIOS checksum so the card can be flashed. Essentially, in Hawaii Bios Reader I edited the clock frequencies, then changed the fan and temperature profiles with a hex editor (HxD). Be aware that if you use HxD after the Hawaii tool, you will need to open the hex-edited file and resave it in Hawaii so it retains the right checksum for flashing; otherwise the card will not take your custom BIOS.

Sources: 1, 2, 3, 4

PC Gaming accessories: Mouse

I’ve always used Logitech mice, except for a long, long time ago when I had a Dell mouse with one of my first gaming/school PCs; but even prior to that I had used basic Logitech mice. I’ve tried alternatives, but I find most companies don’t have the ergonomics necessary for prolonged gaming sessions. Logitech has done mouse ergonomics well for a long time now and is one of the few companies that still does. I still own my older G5 Laser Mouse; it’s kicking around in one of my drawers as a backup. Somewhere else, in a box stored under my staircase, lies an old MX510 which I picked up at Staples while I was in college. The MX510 replaced my stock Dell mouse; a wireless MX700 replaced that, the G5 Laser replaced that, and the G700 replaced that in turn. Then recently I thought I broke the G700: I slammed it on my desk in a furious moment of rage and frustration, which messed up the scroll wheel. Thinking I had broken the scroll wheel, I replaced the G700 with the wireless, optical G602. Yes, sometimes I rage when I play games… sometimes.

From my personal experience over 2 decades of PC gaming, I can confidently say that Logitech mice are rock solid and have a good ergonomic shape. I’ve tried other mice, but they just don’t cut it for me like the Logitech ones; they lack the comfort and reliability. Logitech mice are comfortable and can take a good beating. Having said that, these mice are not without their flaws; I have not found the perfect mouse yet. Even though this might be somewhat biased by my experience, go ahead and check it out.

Also, for the heck of it, I will toss in a couple of notes about the older mice, just to reminisce a little; but mostly this is a comparison of a couple of Logitech wireless gaming mice. Note that most wireless gaming mice from Logitech are a bit heavier due to the added weight of the batteries. This suits me just fine; I like a heavier mouse for gaming, though a lot of people don’t. If that is the case, a wired mouse would be the way to go.

MX510: Great ergonomic feel; it is a basic optical mouse designed for non-gaming use, though I used it for gaming a lot. It was comfortable and light. It’s still in a box in my home.
418E7H3ZYHL._SX300_

MX700: One of my first forays into wireless/gaming mouse territory. Great ergonomic feel, shaped much like the MX510, but to be honest I wasn’t happy with this mouse. I paid $120+ for it at the time and kept it for less than a year before giving it away. The battery life was terrible and the performance not much better. I understand sacrificing battery life for a wireless experience, but this mouse had very few benefits to owning it.
MX700-930754-0215-R-unit

G5: A solid laser performer. Weight can be adjusted by inserting and removing 1.7 g and 4.5 g weights into a tray that slides in and out of the bottom of the mouse. This was my first laser mouse; I didn’t know what I was missing with optical mice until I bought it. The braided cable shielding proved to be a burden, and I eventually removed it completely: the cable underneath would get trapped and fold through the shielding, and I worried this would tear the wire inside.

G402: Dubbed the Hyperion Fury. A good basic optical mouse, comfortable and good for gaming; light, so smaller hands don’t over-accelerate when aiming or moving. If you are looking for a wired, non-laser gaming mouse, this would probably be it, although the $50+ cost is a little too much. Personally I do not think it’s worth the money.
logitech-g402-hyperion-fury

G602: After having owned laser mice, this optical wireless device lacks the smoothness and precision I get out of a laser sensor. I find the profile of this mouse a little too low as well: since I like to rest my palm on the mouse, and my hands are rather big, my hand starts to cramp up in longer gaming sessions. The battery life is amazing on this device: it takes 2 AA batteries and has a switch on top to toggle between performance and endurance mode. Endurance mode lets it sip juice from one battery at a time and grants 1400 hours of use; performance mode runs at full power for gaming and gives you a lesser 250 hours. I assume the switch changes the polling rate at which the mouse tracks movement across a surface. Personally I find this mouse just OK for gaming; I’m not a fan of optical technology, as any imperfections in your tracking surface, such as grease and dust, will mess with the optical sensor and its accuracy. Other than the sensor, the mouse is fully featured and does not lack functionality, although the construction feels a little flimsy. The software is OK, just OK, but then again I never use the G software with any of my Logitech devices; I don’t much care for macro functionality as I don’t play MMOs, and the software annoys me more than anything else, so it’s never on my system. Since the battery life is rather long, this device omits the USB cable/charging option that the G700/s has. It’s a shame, as the last thing you want to be doing during a gaming session is fumbling for a fresh pair of batteries. One thing to note about this mouse is that the middle mouse button was dead right out of the box. This seems to be a hardware defect, as stated by Logitech to one of their users on the forum, and it appears to plague a lot of devices across this line of mice, so it is not a matter of getting a lemon out of the bunch but rather a hardware design flaw. Also, the G602 has 6 thumb buttons; that’s 4 too many, and since the software is useless, that’s 4 buttons without a function. I haven’t tried it, but perhaps they can be mapped directly in game via key bindings.
g602_1

G700: The G700 is the workhorse of all the mice I have owned. It is solid and takes a beating; trust me, it knows how to take a beating, I’ve abused this mouse. I like the larger profile, and it is very comfortable for larger hands in longer gaming sessions. What the G602 lacks the G700 has: precision and comfort, at the cost of a short battery life. The batteries don’t last nearly as long as they do in the G602; having said that, unless you do 12-hour gaming sessions that will not be an issue, and unlike the G602, the G700 has a micro-USB plug at the front so you can charge the battery and game as you would with a wired mouse.
g700

The G700s is its successor, with minor alterations to the design, such as the sensor and the coating on the mouse, which add better perspiration protection and grip; mostly cosmetic changes over the G700. Worth every penny you pay for this mouse. I have had my G700 for 3+ years and it hasn’t skipped a beat; it’s amazing for gaming and I highly recommend it. I bought one for my buddy’s birthday last year, and he loves it as well.
logitech-g700s-910-003584-rechargeable-gaming-mouse

Mousepad: The surface on which you game makes a huge difference, especially if you are using an optical mouse, not so much with a laser mouse. The optics can be impeded by dirt, dust, or any surface that is not uniform; try using a glass surface with an optical mouse, it simply won’t track. A proper mouse pad not only provides a clean surface but also one with very little friction and resistance to mouse movement; your acceleration will not be impeded and, depending on the type of pad, might actually improve. I prefer hard mouse pads over cloth ones; they are generally a low-resistance surface that aids mouse precision.
mouspad

Mouse software: Do not use the G software with these mice. I noticed that the software resets the mouse values and settings, which was odd, but it did. As soon as I removed the software, the mouse immediately started functioning properly again. That is a huge fail on Logitech’s part. Other than that, I appreciate the mice; they are basic, ergonomic, and for the most part reliable.

Home Media – Part 2 – The Setup

Part 1: The NAS Build

Part 3: The Rip

So what do you do if you want to build an unRAID box?

unRAID boots from a USB flash drive, and the flash drive needs to have a GUID, or Globally Unique Identifier, for licensing purposes. If you end up losing the flash drive, or something happens to it, Lime-Tech is generally pretty good about it: all you need to do is email them, and they may ask for a new GUID and send you a new key. Here is their policy on that. I’ve had two flash drives fail on me, and they were pretty good about giving me a replacement key. Once unRAID boots from the flash drive it mostly runs in memory, and since it’s a stripped-down version of Slackware Linux it doesn’t require a lot of memory to run.

First, grab a 4 GB flash drive; for best results pick one from the Hardware Compatibility page. The key here is that your flash drive needs to have a GUID: some have it and some don’t. Speed is also a factor, as in read and write speed. Alternatively, if you want to know whether your drive is compatible and has a GUID, grab any flash drive, quick-format it to FAT32 with Volume Label UNRAID, unzip the contents of the ZIP file found here to said drive, and boot it on any networked machine. If you’re having trouble booting, follow the drive preparation instructions. Once booted, log into unRAID (username: root, password: <blank>) and type ifconfig to obtain the IP address; then head over to another networked machine and type that IP into a browser, which should take you to the main unRAID page. Here is the getting started page from Lime-Technology, which describes the same thing in greater detail. By default you can also access the unRAID web GUI by typing http://tower instead of http://<unRAID IP address>.

There is also something called the go file, which is located in the /config folder on the root of the flash drive. The go file gets executed at boot, and in it you can put any special instructions or drive mounts that you'd like to run at boot. In my case I mount a drive that is outside of the array; I call it the system drive and it holds all the configurations for my Docker containers. This is not necessary, as the same can be accomplished with the cache drive. Here's what my go file looks like.

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# Make directory and Mount the system drive
mkdir /mnt/system
mount -t reiserfs /dev/disk/by-label/systemdisk /mnt/system
sleep 5

# Install screen and utempter; utempter is necessary for screen to work
installpkg /boot/packages/screen-4.0.3-x86_64-4.txz
installpkg /boot/packages/utempter-1.1.5-x86_64-1.txz

The free version of unRAID (uR) allows you to use 3 drives. The Hardware Compatibility list also provides some standards and minimum requirements for the server hardware; usually an old desktop will do. If you decide to go with an 8- or 25-disk server, you will need to pick up a decent RAID card and a good enclosure.


Plugins vs Docker. In unRAID version 5 and prior, plugins were the go-to way to install applications on your NAS. They were easy to install and worked well, until you installed a new plugin that had, let's say, a newer version of Python in it. If this happened it would break all the plugins that relied on the previous version of Python. I ran into this issue multiple times in both versions 4 and 5 of unRAID. There was no standard, and people wrote plugins the way it suited them. At one point in version 5 I had to go into a plugin and change the Python version it pulled and installed; this broke a minor function of the plugin but made it sort of work. I also had an issue with SQLite, where a plugin was pulling a newer version than another application supported, and finally some plugins would break the web interface.

When version 6 of unRAID entered beta and Docker became a possibility, I jumped for joy. Yes, by nature the Docker equivalent is larger; however, since containers are self-contained applications on top of an OS layer, there is no chance that one application will break another. With Docker you can also have multiple versions of Python, or whatever the prerequisite for an application is. The only problem with Docker is that it is not as easy to implement as plugins; it takes a little know-how to get it up and running. But since I moved everything to Docker I have less downtime on my server, and I'm not remoting into the server to administer it as often. That's a win in my books.

Most applications that you would want on your NAS more than likely already have a Docker container created for them. However, if you want to create your own, or if you're curious about Docker in general, here is a great Docker 101 tutorial video by Ken Cochrane.

I'm not going to tell you how to set up Docker in unRAID or how to put together your server hardware, but what I will do is point you to some really cool articles/blogs that describe how to do so. In my opinion there is no point in reinventing the wheel.

Update: The final version 6 of unRAID includes a Docker container manager, and installing Docker containers is, for the most part, as easy as point and click. A good place to get containers is http://linuxserver.io
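To illustrate how little magic is involved, here is a sketch of starting a linuxserver.io container by hand from the unRAID console. The container choice (Plex), host paths, and port mapping are assumptions for this example; PUID 99 and PGID 100 are unRAID's default nobody:users IDs.

```shell
# Host paths, the published port, and the image are assumptions for this sketch.
# PUID=99 / PGID=100 run the app as unRAID's default nobody:users.
docker run -d \
  --name=plex \
  -e PUID=99 -e PGID=100 \
  -p 32400:32400 \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/media \
  --restart unless-stopped \
  linuxserver/plex
```

The built-in container manager in version 6 essentially builds a command like this for you from a template, which is why point-and-click installs work.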

Over at the Corsair Blog there is a really cool How to Build a PC section. I hope http://www.corsair.com doesn't mind me linking to their site. It's intended to teach you how to plan and build a gaming PC, but the articles are still relevant to building and putting together computer hardware regardless of the desired purpose. If you click the link, make sure you sort by Date (Older – Newer); this way you will start on the first post and can continue on in a logical order.

The Lime-Technology website has a really good tutorial on how to get Docker up and running on your server. Check out their Docker Guide over at http://lime-technology.com/ . Again, I hope they don't mind that I link to their content, as this is a very well written and comprehensive Docker guide.

And last but not least, there is a really good plugin for Docker containers in unRAID. Yes, I said plugin; it's OK to have one. This plugin allows you to view / add / remove unRAID Docker template repositories. The alternative is adding the repositories yourself; you can get these from this Lime-Tech forum post.

To install the plg you need to SSH into your server; I generally use PuTTY from a Windows machine. Once connected, navigate to the directory /boot/config/plugins by typing the following:

cd /boot/config/plugins

Any plg file that is located in this folder will get installed on server boot. Next, download the plg file to the server by typing the following:

wget --no-check-certificate https://raw.githubusercontent.com/Squidly271/repo.update/master/plugins/community.repositories.plg

This will download the plg file. Now you can either reboot the server or install it manually by typing the following:

installplg community.repositories.plg

That's all for now. Once the plugin is installed, you should see a new tab under the Docker heading in the web GUI of unRAID.

If you have any questions or comments, post below.

Next: Part 3 – The Rip