You are measuring the wrong interfaces with your tests.
Any copies of data between VMs, or any Storage vMotions, are going over the 1Gbit NICs and not touching the 10Gbit NICs.
Personally, I would drop all the 1Gbit NICs and run everything over the 10Gb links only, since you are not going to be saturating a 10Gb link anytime soon, even across multiple hosts.
If you want to see SAN throughput, you need to generate actual storage activity (like running Iometer inside a VM).
That creates disk I/O, which is what will use the 10Gb link.
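If you don't have Iometer handy, even a quick script inside the guest will do the job. Here's a rough Python sketch of the idea: it just writes and fsyncs a big file in a loop so the I/O actually lands on the virtual disk (and therefore the SAN). The file path, block size, file cap, and runtime are arbitrary placeholders.

```python
import os
import time

# Rough Iometer stand-in: sequential 1 MiB writes with fsync so the I/O
# actually reaches the virtual disk (and therefore the SAN) instead of
# sitting in the guest page cache. Path, block size, file cap, and
# duration are arbitrary placeholders -- adjust to taste.
TEST_FILE = "io_load.bin"
BLOCK_SIZE = 1024 * 1024          # 1 MiB per write
FILE_CAP = 10 * 1024 ** 3         # rewind at ~10 GiB so the disk doesn't fill
RUN_SECONDS = 300                 # run for 5 minutes

block = os.urandom(BLOCK_SIZE)
total = 0
start = time.time()

with open(TEST_FILE, "wb") as f:
    while time.time() - start < RUN_SECONDS:
        f.write(block)
        f.flush()
        os.fsync(f.fileno())      # push the write out of the guest cache
        total += BLOCK_SIZE
        if f.tell() >= FILE_CAP:  # wrap instead of growing forever
            f.seek(0)

elapsed = time.time() - start
print("wrote ~%.1f GiB, ~%.0f MB/s average" % (total / 1024 ** 3,
                                               total / 1024 ** 2 / elapsed))
```

Watch the 10Gb vmnics in esxtop while that runs and you should see the storage traffic show up.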
We have 7 hosts running 2x10Gb links for everything (SAN, vMotion, management, client network). The only thing I did was set an egress traffic-shaping limit of about 3Gbps on the vMotion port group, because an ESX host will run up to 8 concurrent vMotions at once and could saturate the link, bringing down the storage traffic with it.
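If you'd rather script that limit than click through the UI, here's a rough pyVmomi sketch, assuming a standard vSwitch port group named "vMotion" (on a distributed switch the shaping policy lives on the dvPortgroup instead). The hostname, credentials, port group name, and numbers are all placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- swap in your vCenter/host and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)

for host in view.view:
    net_sys = host.configManager.networkSystem
    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name != "vMotion":            # assumed port group name
            continue
        spec = pg.spec
        if spec.policy is None:
            spec.policy = vim.host.NetworkPolicy()
        shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
        shaping.enabled = True
        # The API takes bits per second here (the client UI shows Kbps);
        # burstSize is in bytes. 3 Gbps average/peak is just an example cap.
        shaping.averageBandwidth = 3 * 10 ** 9
        shaping.peakBandwidth = 3 * 10 ** 9
        shaping.burstSize = 100 * 1024 * 1024
        spec.policy.shapingPolicy = shaping
        net_sys.UpdatePortGroup(pgName=spec.name, portgrp=spec)
        print("capped vMotion egress on %s" % host.name)

Disconnect(si)
```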
With 250 VMs on 7 hosts we actually see very little real traffic on the 10Gb links unless I do a vMotion.
PS - if those 10Gb cards are dual-port and you have two per host, you are running an unsupported configuration: ESX 4.1 only supports four 10Gb NICs, or two 10Gb plus four 1Gbit, or eight 1Gbit (I think).
You cannot disable one of the ports on a dual-port 10Gbit card, so with four physical 10Gb ports plus multiple 1Gbit ports you may end up with random ports working and not working after reboots.
One other thing: VMware is really designed so that multiple systems together can fully utilize any given resource, so running a single VM with a single busy disk doesn't prove much, because one VM typically can't consume all the available resources on its own.
Run Iometer on 20 VMs at once and you will crush that storage link... a single VM, not so much.
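To make the "20 VMs at once" part concrete, here's a rough sketch that kicks off a disk-load job on a bunch of guests in parallel over SSH; it uses fio, but Iometer workers or the write script above would do the same thing. The guest IPs, SSH key access, and the fio arguments are all assumptions.

```python
import subprocess

# Hypothetical guest IPs -- assumes SSH key auth and fio installed in each VM.
VM_HOSTS = ["192.0.2.%d" % i for i in range(1, 21)]   # 20 test VMs

# One sequential-write job per VM; tweak size/runtime to suit your datastores.
FIO_CMD = ("fio --name=crush --rw=write --bs=1M --size=8G "
           "--runtime=300 --time_based --direct=1 --filename=/tmp/fio.bin")

# Launch all 20 at once so the shared storage link sees concurrent load.
procs = [subprocess.Popen(["ssh", host, FIO_CMD]) for host in VM_HOSTS]
for p in procs:
    p.wait()
print("all load jobs finished")
```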