
 Ip problem! (31 Replies, Read 8695 times)
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Hi,
This is the test result on a node with 4x500 GB SSDs in software RAID 10:

VPS (Virtio enabled):

[root@host ~]# dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 14.4261 s, 149 MB/s


Main node:

[root@104 ~]# dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 1.86184 s, 1.2 GB/s


Why is this happening?!

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Can you give us the full output of virsh dumpxml VMID?

,Ashley

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Another test with virtio enabled while creating the VPS:
[attached screenshot]

virsh dumpxml output:


<domain type='kvm' id='12'>
  <name>v1006</name>
  <uuid>8c99136c-09b0-4c17-a53c-9ef7f1c3d177</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>1024</shares>
    <period>100000</period>
    <quota>50000</quota>
    <emulator_period>100000</emulator_period>
    <emulator_quota>50000</emulator_quota>
  </cputune>
  <os>
    <type arch='x86_64' machine='rhel6.6.0'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
  </cpu>
  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/vg/vsv1006-vlx8npfwl6ilystc'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:16:3e:10:94:40'/>
      <source bridge='viifbr0'/>
      <target dev='viifv1006'/>
      <filterref filter='clean-traffic'>
        <parameter name='IP' value='104.237.194.2'/>
      </filterref>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5906' autoport='no' listen='104.237.193.218' keymap='en-us'>
      <listen type='address' address='104.237.193.218'/>
    </graphics>
    <sound model='ich6'>
      <codec type='micro'/>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Your last test of ~420 MB/s is what I would expect. Inside the VM, data is written straight to the array because the cache is set to none, whereas on the node's OS the write goes through the RAM cache first and is flushed to disk later.

You can change this by setting the cache mode to write-back in Virtualizor; however, you then risk data loss if your server were to lose power before all the data is written from RAM to disk.
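For reference, you can confirm the cache mode a VM is currently using from the node with something like the following (plain virsh and grep; the VM name v1006 comes from the dumpxml above):

virsh dumpxml v1006 | grep -i 'cache='
# for this VM it should print:  <driver name='qemu' type='raw' cache='none'/>
# with write-back enabled in Virtualizor the same line would read cache='writeback'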

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 2:01 pm
Your last test of ~420 MB/s is what I would expect. Inside the VM, data is written straight to the array because the cache is set to none, whereas on the node's OS the write goes through the RAM cache first and is flushed to disk later.

You can change this by setting the cache mode to write-back in Virtualizor; however, you then risk data loss if your server were to lose power before all the data is written from RAM to disk.


But I expected more from SSDs in RAID 10. Check this:
[attached screenshot]

It's a VPS on another node running VMware ESXi with 4x3 TB HDDs in RAID 10, and it gets more than half the speed of my SSDs in RAID 10!

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,

SSDs are good at some things but not others, and you're still getting double the throughput of your HDD RAID. SSDs are best at high IOPS and low latency to anywhere on the disk, whereas a standard HDD has seek latency to reach the required section of the platter. What SSDs are they?

Also, you will want to check whether you have caching enabled within VMware, as that could be helping the values you are seeing; KVM/Virtualizor has the cache disabled by default.

,Ashley

Edited by ZXHost : June 12, 2015, 2:39 pm

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
They are new Samsung 840 EVO 500 GB SSDs.

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Reviews and specs of that drive show around 280 MB/s sustained write, and in RAID 10 you can never expect more than twice the speed of a single drive. With overhead, what you're getting sounds about right.
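As a rough sanity check, assuming that ~280 MB/s per-drive figure: a four-drive RAID 10 stripes writes across two mirrored pairs, so the theoretical ceiling is about 2 x 280 = 560 MB/s, and after md and filesystem overhead a result in the 400-450 MB/s range is about what you would expect.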

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 2:50 pm
Reviews and specs of that drive show around 280 MB/s sustained write, and in RAID 10 you can never expect more than twice the speed of a single drive. With overhead, what you're getting sounds about right.


Where did you see that 280 MB/s figure?

Also, please check this: http://serverbear.com/benchmarks/io
How are they getting such huge I/O? By the way, I am using software RAID 10, as I believe that with SSDs a RAID card just adds another point of failure without making much difference overall.


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,

Any online review of that SSD model shows real-world performance of around that amount.

Did you mean to link to a specific benchmark? The link you provided is just a general page.

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 3:17 pm
Hello,

Any online review of that SSD model shows real-world performance of around that amount.

Did you mean to link to a specific benchmark? The link you provided is just a general page.


I meant any of them; they are all above mine. Their prices suggest they are not using some special SSD models, I believe. Do you have any test results from your own servers, by the way?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
All the top results are OpenVZ. That is file-based virtualization rather than KVM, which is image-based, so the I/O benchmark is able to make use of the write-back cache on the system.

The reason you're getting high numbers on the node OS is that write-back caching writes the data to RAM first and flushes it to disk later. If you were to run the benchmark but ask it to write 20 GB instead of 2 GB, you would get a much smaller value, closer to the ~400 MB/s you're seeing in your VM.

Once the RAM cache is exhausted, the system has to start writing the dirty pages (data held in RAM) to disk; the I/O test you are currently running is just being written straight to RAM.

Inside the VM, however, the cache is disabled, so the data is written directly to disk.
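For example, a larger run on the node (a sketch; the count is simply scaled so the total write is roughly 20 GiB, more than the page cache can absorb) would look like:

dd if=/dev/zero of=/tmp/output.img bs=8k count=2560k
# ~21 GB written; once the dirty-page cache fills up, the reported speed
# converges on the real array speed rather than RAM speed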

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
An example on one of our SSD nodes:

dd if=/dev/zero of=/tmp/output.img bs=32k count=256k
262144+0 records in
262144+0 records out
8589934592 bytes (8.6 GB) copied, 5.45408 s, 1.6 GB/s


dd if=/dev/zero of=/tmp/output.img bs=128k count=256k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 43.8186 s, 784 MB/s

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 3:42 pm
An example on one of our SSD nodes:

dd if=/dev/zero of=/tmp/output.img bs=32k count=256k
262144+0 records in
262144+0 records out
8589934592 bytes (8.6 GB) copied, 5.45408 s, 1.6 GB/s


dd if=/dev/zero of=/tmp/output.img bs=128k count=256k
262144+0 records in
262144+0 records out
34359738368 bytes (34 GB) copied, 43.8186 s, 784 MB/s


Check mine:
[attached screenshot]

Why such a big difference? May I know the server specs? Is it Virtualizor + KVM too? What's the storage type, file-based or LVM?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,

Both of those results were from the node itself, showing the difference between cached and uncached writes.

The first result was going straight into RAM; the second was going straight to disk, hence roughly half the performance. The latter is what you're seeing in KVM by default. If you run the two commands on your node's OS, you'll see the difference between using the cache and not.

It is hardware RAID 10 with 4 x 1 TB Samsung 850 Pro SSDs.

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 3:49 pm
Hello,

Both of those results were from the node itself, showing the difference between cached and uncached writes.

The first result was going straight into RAM; the second was going straight to disk, hence roughly half the performance. The latter is what you're seeing in KVM by default. If you run the two commands on your node's OS, you'll see the difference between using the cache and not.

It is hardware RAID 10 with 4 x 1 TB Samsung 850 Pro SSDs.


Can you run a test from inside one of your SSD VPSs too? What storage type do you use, if I may ask?

Off-topic: I want to use thin provisioning, but currently the storage is LVM. To change it to file-based, should I just remove the LVM and use its VG?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,


Please see the attached performance figures from within a KVM VM, using a QCOW file with cache disabled.

8589934592 bytes (8.6 GB) copied, 9.30505 s, 923 MB/s

34359738368 bytes (34 GB) copied, 48.4658 s, 709 MB/s

Obviously higher than yours as I'm using higher-performance SSDs, but as you can see these are the true values and not the cached 1.6 GB/s figure.

Yes, you could do that, however I'd recommend destroying the LVM and just formatting the software RAID as plain ext4, as it removes the extra LVM layer/overhead, which will help with performance.

,Ashley

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 4:14 pm

Yes, you could do that, however I'd recommend destroying the LVM and just formatting the software RAID as plain ext4, as it removes the extra LVM layer/overhead, which will help with performance.

,Ashley


Thank you, but can you tell me how to do that?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Can you post the output of

df -h

and

cat /etc/fstab

from your node.

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[root@104 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        197G  103G  85G  55% /
tmpfs            16G    0  16G  0% /dev/shm
/dev/md0        243M  63M  168M  28% /boot
[root@104 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Jun 11 13:02:35 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=24d6349f-7156-478c-ae91-595ddbc50e29 /                      ext4    defaults        1 1
UUID=3028b7b6-96c4-437b-857a-4eb531d895a3 /boot                  ext2    defaults        1 2
UUID=df908483-511d-4ab0-8430-5986391d0a67 swap                    swap    defaults        0 0
tmpfs                  /dev/shm                tmpfs  defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                  /sys                    sysfs  defaults        0 0
proc                    /proc                  proc    defaults        0 0


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,

Sorry, one last output:

cat /proc/mdstat

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[root@104 ~]# cat /proc/mdstat
Personalities : [raid10] [raid1]
md0 : active raid1 sda1[0] sdc1[1]
      255936 blocks super 1.0 [2/2] [UU]

md127 : active raid10 sdd3[3] sdc5[2] sda5[0] sdb3[1]
      741113856 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md1 : active raid10 sda3[0] sdc3[2] sdb2[1] sdd2[3]
      25149440 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]

md2 : active raid10 sda2[0] sdc2[2] sdd1[3] sdb1[1]
      209584128 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Whatever your LVM volume group name is, replace xxxx with it:

vgchange -a n xxxx

vgremove xxxx

mkfs.ext4 /dev/md127
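Before removing anything, it may be worth confirming what is in the group first (vgs and lvs are the standard LVM listing tools; judging by the /dev/vg/... disk path in the dumpxml earlier, the group here appears to be named vg):

vgs   # list volume groups and their size/free space
lvs   # list logical volumes; make sure no running VPS still uses one
# vgremove will prompt before removing any logical volumes that remain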

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 4:39 pm
Whatever your LVM volume group name is, replace xxxx with it:

vgchange -a n xxxx

vgremove xxxx

mkfs.ext4 /dev/md127


So should I then add /dev/md127 to the storage as "file" type, in QCOW2 format?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
You will need to mount it first.

Do the following

mkdir /vz
mount /dev/md127 /vz

Then add /vz as file-based storage in Virtualizor. Once you're happy that everything is working, you'll need to add an entry to /etc/fstab so it mounts on boot.
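Once mounted, a quick check before adding it in Virtualizor (plain df and mount, nothing Virtualizor-specific; the /vz mount point is just the example used above):

df -h /vz            # should now show /dev/md127 mounted on /vz
mount | grep md127   # confirms the device and that it mounted as ext4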

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Why is the free space less than the total size?
[attached screenshot]

And for the fstab, is this correct:

vms                /dev/md127 /vms      defaults        0 0


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
A percentage of the filesystem is reserved for root; you can disable this by running:

tune2fs -m 0 /dev/md127


The line you will want to add to /etc/fstab is

/dev/md127 /vms                  ext4    defaults        1 2
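To verify both changes (tune2fs -l and mount -a are standard tools; just a quick sketch):

tune2fs -l /dev/md127 | grep -i 'reserved block count'   # should report 0 after tune2fs -m 0
mount -a   # re-reads /etc/fstab; any error printed here means the new line needs fixing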

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[attached screenshot]

It reduced the I/O?!

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
It will do, as a thin-provisioned image has a slight write overhead; that is something you have to live with in exchange for being able to overcommit.

Values of 300+ MB/s are still plenty for any VM and the services hosted on it. With SSDs you will get consistent performance even when loaded with VMs, whereas standard HDDs drop in performance due to the latency of seeking to different areas of the platter for different VMs' data.
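If the thin-provisioning overhead ever becomes a concern, one common qemu-level mitigation (an assumption on my part; Virtualizor may or may not expose this option) is to preallocate the qcow2 metadata when the image is created, which recovers some write speed while keeping the file sparse:

qemu-img create -f qcow2 -o preallocation=metadata /vms/example.qcow2 20G
# /vms/example.qcow2 and the 20G size are placeholder values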

Ip problem!
gauthig
Group: Member
Post Group: Newbie
Posts: 12
Status:
It was hinted at already: the SSDs cannot really perform at the 2.1 GB/s you see. By default, Linux uses RAM and signals to dd that the write completed, even though the data is still in RAM and not yet on the disk; it takes a while longer for the write to actually finish.
The proper way to test a disk with dd is to add "conv=fdatasync". That tells the system to report the actual write speed only after the memory has been synced to disk. Try this on the host and you should see 300-400 MB/s.
So why can't you get the same performance in a VPS? There is an issue with double caching, i.e. a cache in the VPS and then a cache on the host. The virtualization layer tries to take care of this, but not very well. As is always recommended, cache=none is actually the best setting for the virtio driver: it ensures the VPS gets direct access to the array and that whenever the system thinks something is written, it actually is.
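For example, the same test as before with only the sync flag added:

dd if=/dev/zero of=/tmp/output.img bs=8k count=256k conv=fdatasync
# dd now flushes the data to disk before reporting, so the speed shown
# reflects the array rather than the page cache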

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : gauthig June 13, 2015, 3:31 pm
It was hinted at already: the SSDs cannot really perform at the 2.1 GB/s you see. By default, Linux uses RAM and signals to dd that the write completed, even though the data is still in RAM and not yet on the disk; it takes a while longer for the write to actually finish.
The proper way to test a disk with dd is to add "conv=fdatasync". That tells the system to report the actual write speed only after the memory has been synced to disk. Try this on the host and you should see 300-400 MB/s.
So why can't you get the same performance in a VPS? There is an issue with double caching, i.e. a cache in the VPS and then a cache on the host. The virtualization layer tries to take care of this, but not very well. As is always recommended, cache=none is actually the best setting for the virtio driver: it ensures the VPS gets direct access to the array and that whenever the system thinks something is written, it actually is.


Yes, I set the caching to "none" from the start. Do you have any thoughts on hardware vs. software RAID?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hardware RAID is best with SSDs when used with a high-end RAID card.

However, a low-end card will be worse than software RAID.
