Install Dash Cam (Aukey DR-H1) in 2012 Honda Civic (9th Generation)

Recently there’s been a lot of buzz around dash cameras, with many “interesting” videos popping up all over YouTube. As a techie it’s always cool to fiddle around with new stuff, and I wanted to put such a camera into my daily commuter, a 2012 Honda Civic Sedan.

I wasn’t dying to get a dash cam, but it’s one of those techie things: if it falls in my wheelhouse, I’m going to do it no matter what. So, shopping on Amazon last week, I somehow came across a cheap covert dash cam for $79.99 CDN with good reviews, and I thought, hmm, at this price it’s worth a shot. At higher price points I was much more reluctant to pull the trigger, but this definitely seemed like a good value buy.

The camera I stumbled upon is an Aukey DR-H1, a small, well-built little camera. It supports up to 32 GB of micro SD storage and records at 1080p. It doesn’t have any fancy bells and whistles like some other cameras do (GPS, etc.), but I wasn’t going to use those anyway. To be honest, I just wanted a “set it and forget it” type of product, for peace of mind. So let’s get to my install.

What’s in the box?

  • Dash Cam
  • Fuse Box Wiring Power Cable (with video out – which is used to customize settings)
  • Cigarette Lighter/12 Volt Accessory Power Cable
  • Manual + Registration Card (extends the warranty by 6 months, to 30 months total)


Camera Closeup

It’s very small and covert when installed, and it seems solid with great build quality. It mounts with 3M double-sided tape, so once you stick it, it should hold solidly.


Initial Testing:

I wanted to go the fuse box route for installation; it’s much cleaner, and routing the cable in the Civic took minimal time, maybe 10 minutes at most. In my opinion, the manual did not give great directions for installing the camera into the fuse box. As a first-time dash cam installer I thought the camera could just operate on ACC (ignition) 12 V power, and I didn’t understand why the camera needed both constant 12 V and ACC 12 V, so before installing the camera in the car I did some testing externally to better understand.

My testing led me to conclude that the ACC (ignition) power wire is basically a normally open switch: when energized, it closes and the camera powers on. It made sense after playing with this, because if the camera worked just off ACC power it could never shut down cleanly unless it had some internal circuitry/battery. When the car is turned off, power to the camera would be cut immediately and the camera would not have a chance to shut down gracefully. This was easy to see in testing: when the car was turned off, the camera continued to run for about 3 seconds afterwards.

Constant voltage applied; note the dash cam is powered off.

ACC voltage applied; note the dash cam is now powered on.

Fuse Box in Civic:

First order of business was to find constant 12 V power and ACC 12 V power… so I pulled out the multimeter and found fuse locations 10 (constant) and 23 (ACC). There are obviously more possible locations, and the ability to tap other spots as well, but these worked for me.


Here’s the successful test configuration with the supplied wiring.

Prepare Ground Wire and Locate Grounding Location

The only modification I had to make to the supplied wiring was to turn the black (ground) wire into a usable ground for installation. The process is quite simple: cut the end of the cable and crimp on a more appropriate terminal.


There are certainly many spots to ground this off, I picked a location that I thought was suitable for this application.


Install the Dash Cam and Run the Wire

Now that we have everything ready to go, find a spot to stick the dash cam. The most common spot is right behind the rear view mirror, so it does not obstruct your vision in any way. I chose to go right behind the mirror, just on the right-hand side.


Run the wire… it seems daunting, but it’s really rather simple, as you will find out. I’ve marked the pictures in red to illustrate where the wire is running.


Wire it up

As described earlier, this is wired into a 9th-generation Honda Civic (model year 2012): yellow wire (constant 12 V) to fuse 10, red wire (ACC) to fuse 23, black wire to ground.


Configure Settings

Plug the yellow RCA/composite wire into some kind of display. I didn’t have a free TV kicking around, so I ended up making a custom RCA cable with some leftover cables I had lying around. I ran it to my TV inside and used FaceTime to program it… funny, I know, but it worked well and rather quickly. This allowed me to configure a few settings, the most important being date/time. Two other settings of value are the 720p/1080p resolution and the 1-, 3-, or 5-minute clip length.


File Size and Recording Capacity

I set my camera to the 5-minute video length setting. I did some rough calculations, and it appears the camera can record a maximum of approximately 300 minutes of footage at 1080p. The camera rolls over (overwrites the oldest footage) on its own, so it’s maintenance free.
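As a sanity check on that figure, you can back out the average data rate with a little shell arithmetic. The 32 GB card size and ~300 minutes are from my setup; the derived numbers are ballpark estimates, not manufacturer specs:

```shell
CARD_MB=$((32 * 1024))   # 32 GB card, in MB
MINUTES=300              # approximate total 1080p footage that fits
MB_PER_MIN=$((CARD_MB / MINUTES))
echo "${MB_PER_MIN} MB per minute of footage"           # ~109 MB/min
echo "$((MB_PER_MIN * 8 / 60)) Mbit/s average bitrate"  # ~14 Mbit/s
```

That works out to roughly 14 Mbit/s, which is in the usual range for 1080p dash cam footage.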


Final Thoughts

All in all it was a fun little project; it didn’t break the bank and it was a good learning experience. Overall the camera is not too bad. At night it’s not the greatest, but as I always say, you get what you pay for.

I’ve taken some video and posted it up; enjoy. As a side note, I’m sorry about the slightly distorted sound during the night-time clip; my music was a tad too loud.

Run Mac OS X on Windows 10 Using VMware

I’ve never been a Mac fan, but I do have to say that our family has several Apple products in our home: two iPads and an iPhone, for the kids and my wife. Whether I like to admit it or not, they make a highly polished, quality product.

It had recently been an interest of mine to run Mac OS X on my powerhouse PC at home, but I wanted it to run as a virtual machine. I raked over some sites that stated it was not possible, which I found rather funny; doesn’t the Mac run on Intel hardware nowadays anyhow? Then I stumbled on this video.

It does a good job of showing the basic steps, but it doesn’t explain much along the way, so I figured it would be good to break it down and explain it.

  1. Download this file (approx. 6 GB). Inside it is an archive called Yosemite 10.10 Retail VMware.rar, which needs to be extracted to a location of your choice, preferably an SSD. This .rar file contains VMware-prepped OS X files (.vmx, .vmdk) for use with VMware products.
  2. Install VMware Workstation or VMware Player. I chose the Workstation route since I already had it installed.
  3. Confirm VMware Workstation or VMware Player is installed correctly, and close the program.
  4. Download the latest OS X Unlocker, at the time of writing it is version 2.0.8.
  5. Extract the contents of OS X Unlocker onto your computer. OS X Unlocker essentially patches the installed VMware product so Mac OS X can be installed; it does this by modifying some core VMware system files.
  6. Browse to the folder where you extracted OS X Unlocker and run the following files As Administrator: win-install.cmd and win-update-tools.cmd.
    Note: if something goes wrong or you’d like to restore the original files for your VMware application, you can run win-uninstall.cmd.
  7. Run VMware Workstation or VMware Player and select Open a Virtual Machine.
  8. Select the Mac OS X 10.9.vmx file and select Open.
  9. Go to Edit virtual machine settings, either by right-clicking the Mac OS X 10.9 object in the left side panel or via the tabbed window.
  10. You can keep the default resources or bump them up; I personally bumped mine up to 8 GB and 2 vCPUs. The important option here is Version, on the Options tab, which needs to be set to Mac OS X 10.7. This option is not available by default; the OS X Unlocker we ran earlier exposed it. If for some reason you don’t see it, look at re-running the OS X Unlocker steps; they need to be Run as Administrator.
  11. Now power on the Virtual Machine using Power on this virtual machine or by right clicking and going to Power > Start Up Guest.
  12. The machine will boot up and take you through the OS X setup process; it’s very quick and painless. Once complete, it’s time to install the latest VMware Tools onto the newly created OS X VM. You may have picked up on it when we ran win-update-tools.cmd for OS X Unlocker: it pulled down the latest and greatest for us to mount and install.
  13. Right click on the Mac OS X 10.9 VM on the left side and go to Settings.
  14. Go to CD/DVD, click Browse, and mount the darwin.iso file. Make sure Connected is checked!
  15. The VMware Tools installer should pop right up; just click Install VMware Tools and reboot upon completion.
  16. If you want to take it a step further to improve VM performance, there is a tool called BeamOff included in the file we downloaded in step 1. This tool disables beam synchronization, which in turn improves OS X VM performance.
    • Mount the Beamoff Tool.iso similarly to VMware Tools in the previous step. Alternatively, you can download the BeamOff ZIP and do this yourself if you prefer.
    • Extract the BeamOff application to somewhere on your VM.
    • Go to System Preferences.
    • Go to Users & Groups.
    • Click on your user account, select Login Items, click the +, then browse and select beamoff.
  17. At the time of this writing OS X El Capitan is now available; if you want to apply it, go fetch the update from the App Store and install it!

Hopefully you found this informative, I found it interesting and thought I should share my experience.

Install and Configure OpenVPN on OSMC/Kodi

Let’s face it, Kodi is pretty popular right now; everyone is talking about it. One of the first things I did after installing OSMC on my Raspberry Pi was configure OpenVPN. There is a little bit of work involved, so I figured I’d share what I did to get it up and running!

Log in to OSMC via SSH using PuTTY or your client of choice.

Elevate to Super User.
osmc@KODI:~$ sudo su

Update the software repositories.
root@KODI:/home/osmc# apt-get update

Install OpenVPN.
root@KODI:/home/osmc# apt-get install openvpn

Reboot.
root@KODI:/home/osmc# reboot

Create a folder to put your OpenVPN configuration files in.
osmc@KODI:~$ sudo su
root@KODI:/home/osmc# mkdir vpn-conf

Copy your .ovpn file(s) and your .crt file into /home/osmc/vpn-conf. There are a few ways to copy the files; I personally like to use PSCP. This example uses PSCP from a Windows computer.
C:\temp>pscp c:\temp\ca.crt osmc@192.168.1.100:/home/osmc/vpn-conf

Create a new file that will contain your login credentials for OpenVPN.
root@KODI:/home/osmc# cd vpn-conf
root@KODI:/home/osmc/vpn-conf# vi login.conf

  • Press Insert.
  • Type your username on the first line, press Enter, and type your password on the next line.
  • Press Esc, then type :wq
Username
Password

Now edit the .ovpn file(s) of choice to make sure the login.conf and <ca_file_name>.crt files are referenced correctly.
root@KODI:/home/osmc/vpn-conf# vi <filename>.ovpn

  • Find the following lines that begin with:
    • auth-user-pass
    • ca
  • If they exist, edit them accordingly; if they don’t exist, you will need to add them.
auth-user-pass /home/osmc/vpn-conf/login.conf
ca /home/osmc/vpn-conf/ca.crt
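For context, here is a minimal sketch of how those two lines typically sit inside a client .ovpn file. The remote host, port, and protocol lines are placeholders; the real values come from your VPN provider’s config:

```
client
dev tun
proto udp
remote vpn.example.com 1194
auth-user-pass /home/osmc/vpn-conf/login.conf
ca /home/osmc/vpn-conf/ca.crt
```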

Let’s test out OpenVPN, the service should start and connect successfully after running this command.
root@KODI:/home/osmc/vpn-conf# openvpn /home/osmc/vpn-conf/<filename>.ovpn

Confirm VPN connectivity by using curl; this should retrieve your VPN’d IP address.
root@KODI:/home/osmc/vpn-conf# curl http://checkip.dyndns.org
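The response from checkip.dyndns.org is a small HTML page, so if you only want the bare IP you can filter it. The grep pattern below assumes the page’s usual “Current IP Address: x.x.x.x” format:

```shell
# Pull just the IP address out of the checkip response
curl -s http://checkip.dyndns.org | grep -Eo '[0-9]{1,3}(\.[0-9]{1,3}){3}'
```

Run it once before connecting the VPN and once after; the two addresses should differ.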

If everything checks out and is working so far, it’s time to install the OpenVPN Add-On for Kodi and import a profile. Grab the latest OpenVPN Add-On for Kodi; the quickest way is to use wget right from OSMC. In your web browser, right-click the script.openvpn-x.x.x.zip link and Copy the link address.
root@KODI:/home/osmc/vpn-conf# wget -c <paste_link_addr_here>

From Kodi on your TV, go to Settings -> Add-ons -> Install from zip file

  • Select Install from zip file.
  • Navigate to the ZIP file and select it.
  • In the bottom right corner, Kodi will notify you when the add-on is installed and enabled.

Now the OpenVPN Add-On for Kodi should be installed. Go to Programs > Add-Ons > OpenVPN from Kodi and import your .ovpn files; once complete, try to connect. It should work, since it’s literally just an interface to the OpenVPN service we just installed and configured.

That’s it! …but if you’d like to take it one step further, you can. I personally like to have a certain OpenVPN profile connect at startup of Kodi, and it’s pretty simple to do.

Browse to the userdata folder for Kodi and create an autoexec.py file.
root@KODI:/home/osmc# cd /home/osmc/.kodi/userdata
root@KODI:/home/osmc/.kodi/userdata# vi autoexec.py

  • Type the following, where <profile_name> is the name of the profile you created in the OpenVPN Add-On for Kodi.
import xbmc
xbmc.executebuiltin('XBMC.RunScript(script.openvpn,<profile_name>)')

Now every time you power on your Kodi box, OpenVPN will launch and the VPN profile of choice will connect automatically. Cheers and happy streaming!

Batch Scripting – A Case for Using For

The Batch file has been around for a long, long time, and although it’s an old concept I still find myself dabbling in it from time to time. Say what you want about Batch scripting, but it is a tried-and-true method, and that’s probably why we still see it used today for automating some tasks.

Batch scripting is good and has its time and place, but it also has some drawbacks. I won’t dive into them all in this post, but one of the bigger ones in my mind is how clumsy it is to do any kind of looping effectively.

The For command in Batch conditionally performs a command several times. This works, but its implementation is not especially kind. I thought I would share a snippet that I find handy and use regularly in Batch scripts I write for installing and configuring applications. It uses the aforementioned For command and helps avoid hard-coding a file path.

setlocal enabledelayedexpansion

echo Finding Admin.exe...
c:
cd \
set "path2admin="
for /f "delims=" %%i in ('dir Admin.exe /b /s') do set "path2admin=%%i"
if "%path2admin%"=="" (
    set EXITCODE=1
    set "EXITMESSAGE=Failed to find Admin.exe, exiting..."
    rem !EXITMESSAGE! (delayed expansion) is required here; %EXITMESSAGE% would
    rem expand when the whole parenthesized block is parsed, before the set above runs.
    echo !EXITMESSAGE! && goto end
)

echo Run the Admin.exe at the path found...
"%path2admin%" /p /n
set EXITCODE=%ERRORLEVEL%
set "EXITMESSAGE=An unexpected error occurred, exiting..."
if %EXITCODE% NEQ 0 echo %EXITMESSAGE% && goto end
set EXITCODE=0

:end
rem endlocal and exit share one line so %EXITCODE% is expanded before endlocal clears it
endlocal & exit %EXITCODE%

This code is fairly straightforward: it searches the C: drive for Admin.exe. If Admin.exe is found, the full path is saved to a variable called path2admin, and the script then executes Admin.exe at the location it was found in (e.g. c:\Windows\Temp\Admin.exe). If Admin.exe is not found, the script exits with an exit code of 1.

Converting Virtual Machine disk formats

There are many virtual disk formats: VDI, VMDK, VHD, VHDX, IMG, RAW, HDD, and more. Unfortunately VHD and VHDX are amongst the least popular, but if you’re running a Hyper-V server these are the only formats you can work with.

Don’t fret, there is a way to convert many of these common formats to the Microsoft VHD disk image. Oracle includes a conversion tool, VBoxManage, with their VirtualBox application. VirtualBox is free to download and use; you can find the software here: https://www.virtualbox.org/ .

I found that this works better than the tool Microsoft offers, and I have since successfully converted many formats with VirtualBox to the VHD disk image. Usually I go from an IMG to a VHD file: I capture the drive using dd from within a Linux Mint boot drive and then proceed to convert it. But I have also sometimes set up test VMs in VirtualBox that I later needed to run on a Hyper-V server.
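For reference, the dd capture step looks something like the following. /dev/sda and the output path are examples, so double-check which block device is actually your source disk before running anything:

```shell
# Identify the disk first (lsblk lists block devices and their sizes)
lsblk
# Copy the raw device, byte for byte, into an .img file on attached storage
sudo dd if=/dev/sda of=/mnt/storage/capture.img bs=4M status=progress
```

The resulting .img is the raw image you can then feed to VBoxManage convertdd.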

To perform the conversions have a look at the following two commands.

Install VirtualBox, then open a command prompt and navigate to the VirtualBox installation directory, usually located in C:\Program Files\Oracle\VirtualBox.

Usage:

VBoxManage clonehd  <uuid|inputfile>  <uuid|outputfile>
                    [--format VDI|VMDK|VHD|RAW|<other>]
                    [--variant Standard,Fixed,Split2G,Stream,ESX]
                    [--existing]

A basic command and its output would look something like this.

C:\Program Files\Oracle\VirtualBox>VBoxManage.exe clonehd "c:\VMs\Windows 10\Windows 10.vmdk" "f:\temp\Windows10.vhd" -format vhd
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone hard disk created in format 'vhd'. UUID: 1f6e118a-f0e2-49ed-a352-6b842791cdfa

C:\Program Files\Oracle\VirtualBox>

VHD is a Hyper-V generation 1 format, whereas VHDX is a Hyper-V generation 2 format.

Alternatively if you have a DD captured raw IMG file you can convert it to VHD by using the following command first:

 C:\Program Files\Oracle\VirtualBox>VBoxManage.exe convertdd file.img file.vmdk

…once converted, follow this up with the “clonehd” command, which converts to VHD. Prior to converting to VHD, make an attempt to boot the VMDK in VirtualBox, either normally or via safe mode. The reason is that sometimes the OS will need to run a chkdsk before booting into itself; you should let it run, as this chkdsk will allow the VHD to mount properly in Hyper-V. It seems that either the VMDK format is more forgiving than VHD, or only VirtualBox can fix the conversion errors.

If you’re only looking to mount a volume, and not boot off the virtual disk into an OS, you can try a tool called Disk2vhd. Also, since Windows 7, the backup software built into the OS creates VHD backup sets; that could be an option as well.

Hyper-V replication in a workgroup or across domains using a self-signed certificate

Why would you want a Hyper-V server in a workgroup environment?

Well, if your Domain Controller is a VM, you really don’t want to add the Hyper-V server to the domain, as it will boot before the DC comes up. This type of setup is ripe for domain issues, so we’re left with a server that is only in a workgroup. Also, if you are doing cross-site replication, you might be replicating from/to different domains; this is where self-signed certificate authentication comes into play, as it is domain agnostic.

Kerberos authentication does not work in this setup, so we need to use certificates as a means of authenticating the two servers with each other. The Primary server is where all the VMs are, and the Replica server is where the VMs will be copied to. Hyper-V replication is native to Server 2012 and later, so there are no extra licenses necessary.

What are the steps involved?:

  1. Change the DNS suffix on both the Primary and Replica servers.
  2. Reboot both servers.
  3. Create self-signed certificates on both servers.
  4. Open the Certificates MMC snap-in on the Primary server and export the certificate to a .pfx file.
  5. Copy the export file and RootCA certificate from the Primary to the Replica server.
  6. Import the Primary RootCA certificate file on the Replica server.
  7. Import the .pfx file on the Replica server.
  8. Copy the RootCA certificate from the Replica to the Primary server and import it.
  9. Disable the certificate revocation check on both servers for replication and failover replication.
  10. Set up the Replica server as a replica in Hyper-V.
  11. Start replication of a server on the Primary server.

First we need to change the server names, or rather add a DNS suffix to them. Bring up System Properties in the Control Panel; under the Computer Name tab, click Change. In the Computer Name/Domain Changes window, click More…

In the DNS Suffix and NetBIOS Computer Name dialog, add a Primary DNS suffix, something along the lines of "hypervreplica.local"; it doesn’t matter, call it what you will.

Click OK and save all the changes. Note that you will be required to reboot the server in order for changes to take effect. Do this to both the Primary and Replica server.

Primary is the server where your VMs reside, and Replica is where your VMs will be replicated or copied to.

Next we need to create a self-signed certificate. For this you will need either Visual Studio or the Windows SDK (https://www.microsoft.com/en-us/download/details.aspx?id=8442).

What we really need out of either of these is the makecert.exe file.

If you have VS installed, the makecert.exe file is located under C:\Program Files (x86)\Windows Kits\8.1\bin\x64, or a similar path; the 8.1 will change depending on the version of the Windows SDK you have installed.

Copy the makecert.exe file from here to the primary and the replica servers.

On both those servers create an empty directory somewhere, place the makecert file in there. This is also where we will create and store the self signed certificates.

On the Replica server, open up an elevated command prompt, navigate to the directory where the makecert.exe file is located, and type in the following:

makecert -pe -n "CN=ReplicaRootCA" -ss root -sr LocalMachine -sky signature -r "ReplicaRootCA.cer"

The above command creates a self-signed root certificate for the Replica server with the issuer name "ReplicaRootCA".

Followed by:

makecert -pe -n "CN=replicahostname" -ss my -sr LocalMachine -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "ReplicaRootCA" -is root -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 ReplicaCert.cer

…where replicahostname is replaced by the name of the server, DNS suffix and all, e.g. hostname.domain.local.

Now move over to the Primary server, open up an elevated command prompt, navigate to the folder where makecert.exe is located, and type the following:

makecert -pe -n "CN=PrimaryRootCA" -ss root -sr LocalMachine -sky signature -r "PrimaryRootCA.cer"

Followed by:

makecert -pe -n "CN=primaryhostname" -ss my -sr LocalMachine -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "PrimaryRootCA" -is root -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 PrimaryCert.cer

…where primaryhostname reflects the name of the Primary server with the added DNS suffix. The above four commands will create two files on each server.

On the Primary server, run mmc, click File, and select Add/Remove Snap-in…

Select the Certificates snap-in and click Add >. In the next window select Computer account, click Next >, then select Local computer. Click Finish.

In the Certificates snap-in, expand the Personal store and then click on Certificates.

 

You should have a certificate here with the Primary server name, issued by PrimaryRootCA.

Right click this certificate, select All Tasks and select Export…


This will open the Certificate Export Wizard, when prompted select Yes, export the private key.

For Export File Format, use Personal Information Exchange… (.PFX) and check Include all certificates in the certification path if possible.

On the Security page, check the password box and enter a password you will remember.

Click the Browse button to save the export as a *.pfx file, give it a file name (PrimaryServer.pfx), and click Save.

Double check all your settings on the final page and click Finish.

Copy the PrimaryRootCA.cer and PrimaryServer.pfx files to the Replica server. Put them in the folder where you created your Replica server certificates.

On the Replica server we will now import the .cer and .pfx files. Open up an elevated command prompt and navigate to the file location. Type in the following:

certutil -addstore -f Root "PrimaryRootCA.cer"

The quotes are only necessary if you have spaces or special characters in the file name.

Open up MMC with the Certificates snap-in as before, expand the Personal section, right-click Certificates, and select All Tasks > Import.

The Certificate Import Wizard will open up. Click Next. On the File to Import page click Browse…

You might have to change the file type to view the pfx file.

Navigate to the location of your PrimaryServer.pfx file, select it, and click Open. Click Next. On the next screen, enter the password for the private key. You can mark the key as exportable if you’d like; this means you can export it at a later time if you do not keep a copy of it somewhere. Also check Include all extended properties. Click Next.

Place the certificate in the Personal certificate store. Click Next. On the final page, inspect all the details and make sure they are correct. Finally, click Finish.

*Please be aware that for failover replication you will more than likely need to export the certificate in the .pfx format from both servers, then copy them over and import them on both servers as well. The reason is that replication is only one way, whereas failover replication goes both ways. Something to think about.

Now copy the ReplicaRootCA.cer file over to the Primary server and place it in the folder with all the other certificate files. In an elevated command prompt, add it to the certificate store.

certutil -addstore -f Root "ReplicaRootCA.cer"

Run the following two commands on both servers in an elevated command prompt.

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\FailoverReplication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f

Note that the only difference between the two registry commands is FailoverReplication and Replication. You shouldn’t need to restart either of the servers after these commands.

Next you need to enable the Replica server as the replica.

On the Replica server, open Hyper-V Manager.

Select Hyper-V Replication Configuration. In the configuration window, check Enable this computer as a Replica server, then check Use certificate-based Authentication (HTTPS). It should prompt you to select a certificate; if not, click Select Certificate… and you should have the option to select the certificate you created on the Replica server. Select it and click Apply.

*Note: you will get a prompt about the Windows Firewall. I have mine disabled on all servers, so this message never applied to my setup; however, if you have your firewall turned on, I would recommend adding a rule to allow traffic on port 443.

Now for the real test: go to your Primary Hyper-V server and attempt to enable replication. Open Hyper-V Manager, select a virtual machine, right-click it, and select Enable Replication…

You will be presented with a Before You Begin page, click Next >.

On the Specify Replica Server page, type in the FQDN of the Replica server, for example replicahostname.domain.local, or whatever hostname and DNS suffix you assigned to your Replica server. Click Next >.

On Specify Connection Parameters, make sure certificate-based authentication is selected; Kerberos authentication only works on a domain. You may need to select the proper certificate, which will be the Primary server certificate. Also check Compress the data that is transmitted over the network. If you see a yellow exclamation mark with the text “Could not get configuration details of the specified server.” at the bottom, don’t worry about it; if everything is set up properly it should not impact the replication in any way, shape, or form. Click Next >.

On Choose Replication VHDs, you can pick and choose which virtual disks attached to the VM you want to replicate. Select the storage you want and click Next >.

Configure Replication Frequency. The options here are 30 seconds, 5 minutes, or 15 minutes; depending on how mission critical your data is, choose accordingly. Note that the replication frequency options differ between Server 2012 and Server 2012 R2.

Configure Additional Recovery Points. Here is where you set up how many recovery points you require. You can set up additional hourly recovery points and even use VSS for snapshots. Hourly recovery points provide granularity: not only can you recover from the last replication point, but with this option enabled you can go back hours. You also have the option of VSS snapshots which, from personal experience, can fail. I don’t have experience with VSS on replication, but with VSS on backups it was more often than not the culprit for failed backups; VSS has a tendency to fail, not often, but every once in a while. Either way, I usually only maintain the latest recovery point. Again, the number of recovery points differs between 2012 and 2012 R2: 15 vs. 24.

Pick your poison and click Next >.

Choose Initial Replication Method. These options are self-explanatory; choose your replication method and when to start it. I usually just send it over the network, as I find the impact minimal. You can also start the initial replication at a defined time, perhaps at night when your system is not as busy.

 

*One important thing to note about replication: it creates an .avhdx file. This is a Hyper-V change file, and during the initial replication it can grow quite large. On a normally active system I have observed this file grow to 33% of the size of the original VHDX/VHD file. So be careful and be warned, because if the storage medium runs out of space, the VM will be paused.

Click Next >. Confirm your settings and click Finish. Your replication should now begin.

Fix graphical desktop artifacts in CrossFire

Tools:

  • Hawaii Bios Reader
  • Atiflash 4.17
  • DOS boot disk
  • HxD hex editor
  • Hawaii Fan Editor

I have scoured the internet for a solution to my long-standing problem with my CrossFire setup, but after much digging my searches yielded no results. I noticed that when the cards were in CrossFire they would artifact while sitting idle on the desktop. I have the problem documented here.

Inside my computer I have two R9 290X cards by Gigabyte in CrossFire, the Windforce editions. The exact model is GV-R929XOC-4GD; one uses the F2 BIOS, the other the F11 BIOS. When I game, the temps average about 60-70 degrees Celsius on the GPU cores and about 95-100 degrees on the VRM, and my CPU doesn’t exceed 45 degrees. The cards are at stock clock speeds and both BIOS versions are the same; I recently updated the BIOS on both cards, but that did not fix the issue.

In short, I can do about a 2-hour gaming session and everything runs smoothly; then, when I exit to the desktop, I get artifacts, lines running across all 3 monitors, but as soon as I go into a game again the lines disappear. Back on the desktop, the lines reappear. If I bring up anything graphical, like a web page or YouTube, the lines disappear; if I minimize the browser, they reappear. If I stay on the desktop and disable CrossFire, again the lines immediately disappear.

I initially suspected the CrossFire setup itself. My other suspicion was that, despite both cards being the same make, one has memory chips by Hynix (F11 BIOS) and the other by Elpida (F2 BIOS). I believed the problem was with the memory, or rather something to do with the memory.

Noteworthy: when running only a single card, this artifacting problem does not occur. It only happens in Crossfire when the cards are in a low-power idle state, that is, when the clocks are dropped to conserve energy.

After much tweaking of the system and various tests, it all came down to the memory clock: the clocks on the memory were being stepped down to almost nothing. I suspected the clocks because the problem disappeared whenever I ran a graphically intensive application. I knew it was the memory clock rather than the core clock because the core clock would ramp up on demand but the memory clock would not; it had only two states, 150 MHz or 1250 MHz, and it only jumped to 1250 MHz when something graphical was being presented on the desktop or a game was being played. During "power play" mode the core clock drops from a potential 1040 MHz to 350 MHz, and the memory drops from 1250 MHz to 150 MHz. Mind you, the core can be stepped up on demand and does this rather well; the memory, apparently, not so much.

To be edited and flashed, the BIOS files require a *.rom extension. The files from the manufacturer did not have this extension, so I renamed them to include .rom and flashed them using Atiflash; it worked and my cards are running fine.

In order to fix the issue I had to hex edit both cards' BIOS files and flash them with AtiFlash in DOS. I also disabled ULPS. Although ULPS is not a fix for the issue, I like knowing that when I hop out of a game the fans will keep spinning to cool the card down to an acceptable temperature; I don't like the idea of one card being passively cooled after it has reached 80+ degrees. I essentially edited both cards' BIOS files to never drop the memory clock, so the memory clock is now always at 1250 MHz, and this fixed the problem. There are other tweaks I made to the BIOS as well. While not necessary, I also edited the core clocks: the core now never drops below 500 MHz, the next step up is 840 MHz, and then 1040 MHz. This was changed from 300 MHz, 727 MHz, and 1040 MHz respectively. Below is a screenshot of the PowerPlay profile changes, original on the left and edited on the right. Capture1
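The clock edits above can be summarized as simple data. This is purely an illustrative sketch of the before/after values from my PowerPlay changes, not the actual ROM table layout:

```python
# Illustrative summary of the PowerPlay clock edits (values from the post);
# these dictionaries are NOT the real Hawaii PowerPlay table format.
stock = {"core_mhz": [300, 727, 1040], "mem_mhz": [150, 1250]}
edited = {"core_mhz": [500, 840, 1040], "mem_mhz": [1250, 1250]}

# The fix: the memory clock can no longer drop to 150 MHz at idle.
assert min(edited["mem_mhz"]) == 1250
# Side tweak: the core floor was raised from 300 MHz to 500 MHz.
assert min(edited["core_mhz"]) == 500
print("memory clock pinned at", min(edited["mem_mhz"]), "MHz")
```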

Finally I also changed my fan profiles and a single temp profile. Since I raised the Core clock slightly and the memory clock completely I wanted to make sure that the card was not running hot. So I raised the fan profiles by 10% and dropped the top temperature profile by 10° C.

Capture3

New version of Hawaii Bios reader on left can edit the Fan Profile

The single temperature profile I was worried about was 90° Celsius/100% fan; I changed it to 80° Celsius/100% fan speed. Then I raised the other fan speeds by 10%, so 56% went to 66% and 25% went to 35%. You can see below the changes I made to the fan profile as displayed in Hawaii Bios Reader. Note that although Hawaii Bios Reader can read the fan profiles, those values need to be changed in a hex editor such as HxD; only the PowerPlay values can be changed in Hawaii Bios Reader itself. Alternatively you can use the Hawaii Fan Editor by DDSZ. The new version of Hawaii Bios Reader can now edit the fan speeds and temperatures on the Fan profile page, so it is no longer necessary to hex edit the ROM file. Capture2
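The kind of hex edit I did in HxD boils down to replacing one byte after verifying the old value. This is a hypothetical sketch: the offset and the surrounding bytes here are made up for illustration and are NOT real Hawaii fan-table offsets; always work on a copy of the ROM:

```python
# Hypothetical sketch of a single-byte ROM patch (like an edit in HxD).
# The offset (16) and layout below are invented for illustration only.
def patch_byte(rom: bytes, offset: int, old: int, new: int) -> bytes:
    """Replace one byte, verifying the old value first, as you would in HxD."""
    if rom[offset] != old:
        raise ValueError("unexpected byte at offset: wrong ROM or wrong offset")
    out = bytearray(rom)
    out[offset] = new
    return bytes(out)

# Example: raise a fan-speed byte from 56 (%) to 66 (%) at a made-up offset.
rom = bytes([0x00] * 16 + [56] + [0x00] * 15)
patched = patch_byte(rom, 16, 56, 66)
print(patched[16])  # 66
```

The old-value check is the important habit: it catches patching the wrong offset or the wrong card's ROM before anything gets flashed.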

The last step, after the BIOS was edited, was to flash the file using Atiflash from within DOS. Download the boot disk and create a DOS-bootable flash drive. Place the ROM file and Atiflash in the root of the flash drive. Boot into DOS and flash the new BIOS for your card. Remember to do only one card at a time and to power down after each flash. Also flash one bank at a time; I have the original and the new BIOS on each card, and I used the performance bank to flash the custom BIOS. Atiflash usage is as follows:

atiflash -s 0 backup.rom (saves a backup of adapter 0's current BIOS first)
atiflash -p 0 biosname.rom (programs adapter 0 with the new BIOS)

With all these changes to the GPU BIOS on both cards, I have now eliminated the desktop artifacts. My idle card temps hover around 50° C, roughly 3-5 degrees higher than with the stock BIOS clocks, and ULPS is disabled. Everything is peachy on the gaming PC.

Here are the two sample ROMs I created for my cards, F2 and F11.

For more detailed information check the below links and sources.

Disabling ULPS: Open regedit and search (Edit – Find) for EnableUlps, then change the DWORD value from 1 to 0. Ignore EnableUlps_NA; it does nothing. Keep searching (pressing F3) through the registry and change every EnableUlps entry you find from 1 to 0. Once finished, reboot. Although disabling ULPS is not necessary, I like it because with this feature off the driver does not disable the secondary card after a gaming session, which in turn allows the fans to cool the card properly instead of just shutting it down.
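The registry sweep above is just "set every EnableUlps to 0, leave EnableUlps_NA alone". Here it is simulated with a plain dict so it runs anywhere; the abbreviated key paths are placeholders, not real registry paths, and on Windows you would do this in regedit (or with the winreg module) instead:

```python
# Simulated EnableUlps sweep; a dict stands in for the Windows registry.
# The "..." key prefixes are placeholders, not actual registry paths.
registry = {
    r"...\0000\EnableUlps": 1,
    r"...\0001\EnableUlps": 1,     # one entry per GPU in the driver keys
    r"...\0000\EnableUlps_NA": 1,  # left alone; the post says it does nothing
}

for key in registry:
    if key.endswith("EnableUlps"):  # matches EnableUlps but not EnableUlps_NA
        registry[key] = 0

print(sorted(registry.values()))  # [0, 0, 1]
```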

Editing the VGA BIOS: I used tools such as Hawaii Bios Reader, which is capable of creating a proper BIOS checksum so the card can be flashed. Essentially, I edited the clock frequencies in Hawaii Bios Reader, then changed the Fan and Temperature profiles with a hex editor; I used HxD for that. Be aware that if you use HxD after the Hawaii tool, you will need to open the hex-edited file and re-save it in Hawaii so it retains the right checksum for flashing; otherwise the card will not take your custom BIOS.
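The reason the re-save in Hawaii matters: a legacy VGA/PCI option ROM image must sum to 0 modulo 256, so any hex edit has to be balanced by fixing a checksum byte. Here is a generic sketch of that idea; the toy image and checksum byte position are my own illustration, not the Hawaii ROM layout:

```python
# Generic PCI option ROM checksum fix: all bytes must sum to 0 mod 256.
# The toy image and checksum offset below are illustrative, not Hawaii-specific.
def fix_checksum(rom: bytearray, checksum_offset: int) -> bytearray:
    """Adjust one byte so the whole image sums to 0 modulo 256."""
    rom[checksum_offset] = 0
    rom[checksum_offset] = (-sum(rom)) % 256
    return rom

rom = bytearray([0x55, 0xAA, 0x12, 0x34, 0x00])  # toy image; last byte = checksum
fix_checksum(rom, len(rom) - 1)
print(sum(rom) % 256)  # 0
```

This is what Hawaii Bios Reader effectively does on save, and why a ROM edited only in HxD gets rejected by the card.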

Sources: 1, 2, 3, 4