We implemented a three-node Hyper-V 2012 R2 cluster on HP DL380 G9 servers. Testing went well, but on the afternoon of the first day of go-live we unfortunately experienced some strange issues.
Some virtual machines were randomly losing network connectivity - you could not make a Remote Desktop connection to them or ping them. There was no pattern as to which servers would be affected; they varied between different Hyper-V hosts and Cluster Shared Volumes. Sometimes rebooting the VM or migrating it to another host would resolve the issue, but only temporarily - before long another VM would randomly stop responding.
It turned out the issue was with the network drivers. We were using Broadcom 1-gigabit network adapters for our virtual switch. If you are running the b57nd60a.sys driver at version 16.7 or earlier AND Virtual Machine Queues are enabled in the driver's properties, then this is a known issue (see https://support.microsoft.com/en-gb/kb/2986895).
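You can quickly check both conditions from an elevated PowerShell prompt on each host. A minimal sketch (run as administrator; look for b57nd60a.sys at version 16.7 or earlier, and Enabled = True in the VMQ output):

# List physical adapters with their driver file and version
Get-NetAdapter -Physical | Format-List Name, InterfaceDescription, DriverFileName, DriverVersionString

# Show whether VMQ is currently enabled on each adapter
Get-NetAdapterVmq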
The fix is to do one (or both) of the following:
1) Upgrade the driver to version 16.8 or newer
2) Disable Virtual Machine Queues (VMQ)
To disable VMQ, open the properties of each physical network adapter > Configure > Advanced tab > set 'Virtual Machine Queues' to Disabled.
Then, to avoid confusion, in the settings of each virtual machine, under Network Adapter > Hardware Acceleration, uncheck 'Enable virtual machine queue'.
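If you prefer to script it, the same changes can be made with PowerShell. A sketch only - the adapter names 'NIC1' and 'NIC2' and the VM name 'SERVER01' are examples, so substitute your own:

# Disable VMQ on the physical adapters that back the virtual switch
Disable-NetAdapterVmq -Name "NIC1", "NIC2"

# Match the GUI step above: a VMQ weight of 0 is the same as unchecking
# 'Enable virtual machine queue' on the guest's network adapter
Set-VMNetworkAdapter -VMName "SERVER01" -VmqWeight 0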
Virtual machine queues are not required for 1-gigabit network adapters - they are only recommended for 10-gigabit or faster network adapters.
*** Update ***
Following the above work, we still saw some performance issues in our VM guests, most noticeably applications hanging and slow print spooling.
We resolved this by changing the load balancing mode of the NIC team behind our virtual switch from 'Hyper-V Port' to 'Dynamic'. (Dynamic is an additional NIC team load balancing option in Server 2012 R2 which essentially combines the best parts of the Hyper-V Port and Address Hash modes.)
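For reference, the equivalent PowerShell, assuming the team is a native Windows Server 2012 R2 LBFO team (the team name 'VMTeam' is just an example):

# Check the current teaming and load balancing configuration
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm

# Switch the load balancing algorithm from HyperVPort to Dynamic
Set-NetLbfoTeam -Name "VMTeam" -LoadBalancingAlgorithm Dynamic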