
Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 3:49 pm
Hello,

Both of those results were from the node itself, showing you the difference between cached and non-cached writes.

The first result was going straight into RAM; the second was going straight to disk, hence roughly half the performance. This is what you're seeing in KVM by default. If you run the two commands on your node's OS, you'll see the difference between writing with the cache and without it.

It is H/W RAID 10, 4 x 1TB Samsung 850 Pro SSDs.


Can you run a test from your SSD VPS too? And what storage type do you use, if I may ask?

Off-topic: I want to use thin provisioning, but the storage is currently LVM. To change it to file-based, should I just remove the LVM and use its VG?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,


Please see the attached performance results from within a KVM VM, using a QCOW file with cache disabled.

8589934592 bytes (8.6 GB) copied, 9.30505 s, 923 MB/s

34359738368 bytes (34 GB) copied, 48.4658 s, 709 MB/s

Obviously higher than yours, since I'm using higher-performance SSDs, but as you can see these are true values rather than the cached figure of 1.6 GB/s.
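For reference, those byte counts correspond to 8 GiB and 32 GiB test writes. A minimal sketch of the kind of dd invocations that produce output in that form (the exact commands aren't shown in the thread, so the file path and block size are assumptions):

# 8 GiB test write (8589934592 bytes)
dd if=/dev/zero of=/root/ddtest bs=1M count=8192

# 32 GiB test write (34359738368 bytes)
dd if=/dev/zero of=/root/ddtest bs=1M count=32768
rm /root/ddtest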

Yes, you could do that; however, I'd recommend destroying the LVM and just formatting the software RAID as plain ext4, as it removes the extra LVM overhead/layer, which will help performance.

Ashley

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 4:14 pm

Yes, you could do that; however, I'd recommend destroying the LVM and just formatting the software RAID as plain ext4, as it removes the extra LVM overhead/layer, which will help performance.

Ashley


Thank you, but can you tell me how to do that?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Can you post the output of

df -h

and

cat /etc/fstab

from your node.

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[root@104 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        197G  103G  85G  55% /
tmpfs            16G    0  16G  0% /dev/shm
/dev/md0        243M  63M  168M  28% /boot
[root@104 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Jun 11 13:02:35 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=24d6349f-7156-478c-ae91-595ddbc50e29 /                      ext4    defaults        1 1
UUID=3028b7b6-96c4-437b-857a-4eb531d895a3 /boot                  ext2    defaults        1 2
UUID=df908483-511d-4ab0-8430-5986391d0a67 swap                    swap    defaults        0 0
tmpfs                  /dev/shm                tmpfs  defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                  /sys                    sysfs  defaults        0 0
proc                    /proc                  proc    defaults        0 0


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Hello,

Sorry, one last output:

cat /proc/mdstat

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[root@104 ~]# cat /proc/mdstat
Personalities : [raid10] [raid1]
md0 : active raid1 sda1[0] sdc1[1]
      255936 blocks super 1.0 [2/2] [UU]

md127 : active raid10 sdd3[3] sdc5[2] sda5[0] sdb3[1]
      741113856 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md1 : active raid10 sda3[0] sdc3[2] sdb2[1] sdd2[3]
      25149440 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]

md2 : active raid10 sda2[0] sdc2[2] sdd1[3] sdb1[1]
      209584128 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
Replace xxxx with the name of your LVM volume group:

vgchange -a n xxxx

vgremove xxxx

mkfs.ext4 /dev/md127
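If you're not sure what the volume group is called, the standard LVM tools will list it (just a quick check, nothing specific to this setup):

vgs        # one-line summary of each volume group and its size
vgdisplay  # detailed view; the "VG Name" field is what replaces xxxx above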

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Quote From : ZXHost June 12, 2015, 4:39 pm
Replace xxxx with the name of your LVM volume group:

vgchange -a n xxxx

vgremove xxxx

mkfs.ext4 /dev/md127


So should I add "/dev/md127" to storage as "file" in QCOW2 format?

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
You will need to mount it first.

Do the following

mkdir /vz
mount /dev/md127 /vz

Then add /vz as file-based storage in Virtualizor. Once you're happy everything is working, you'll need to add an entry to /etc/fstab so it mounts on boot.
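A quick way to confirm the mount before pointing Virtualizor at it (just standard checks, nothing Virtualizor-specific):

df -h /vz           # should show /dev/md127 (the ~700G array) mounted on /vz
mount | grep md127  # confirms the device, mount point and options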

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
Why is the free space less than the total size?
[image attachment]

And for the fstab, is this correct:

vms                /dev/md127 /vms      defaults        0 0


Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
A percentage is reserved for root; you can disable this by running:

tune2fs -m 0 /dev/md127


The line you will want to add to /etc/fstab is

/dev/md127 /vms                  ext4    defaults        1 2
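If you want to check the entry without rebooting (a quick sanity check, assuming nothing is using the mount yet):

umount /vms   # only if the filesystem was mounted by hand earlier and is idle
mount -a      # mounts everything listed in /etc/fstab; an error here means a bad entry
df -h /vms    # confirm /dev/md127 is back on /vms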

Ip problem!
moka
Group: Member
Post Group: Working Newbie
Posts: 60
Status:
[image attachment]

It reduced the I/O?!

Ip problem!
ZXHost
Group: Member
Post Group: Newbie
Posts: 34
Status:
It will do, as a thin-provisioned image has a slight write overhead; that's something you have to live with in exchange for being able to overcommit storage.

Values of 300+ MB/s are still plenty for any VM and the services hosted on it, and with an SSD you will get consistent performance even when loaded with VMs. Standard HDDs will drop in performance due to the latency of the heads having to move to different areas of the platter for different VMs' data.
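If the overhead bothers you, one common compromise is to create the qcow2 image with metadata preallocation, which keeps the file thin while avoiding most of the allocation cost on first write. This is just an illustrative qemu-img invocation with a placeholder path and size, not something Virtualizor does for you automatically:

qemu-img create -f qcow2 -o preallocation=metadata /vms/example-disk.qcow2 50G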

Ip problem!
gauthig
Group: Member
Post Group: Newbie
Posts: 12
Status:
As was hinted above, the SSD cannot actually perform at the 2.1 GB/s you see. Linux by default writes into RAM and signals to dd that the write has completed, even though the data is still in RAM and not on the disk. It takes a few more ms to complete the write.
The proper way to test a disk with dd is to add "conv=fdatasync". That tells the system to report the actual write speed after memory has been synced to disk. Try this on the host and you should see 300-400 MB/s.
So why can't you get the same performance in a VPS? There is an issue with double caching, i.e. caching in the VPS and then caching again on the host. The VPS tries to take care of this, but not very well. As is always recommended, cache=none is actually best for the virtio driver. It ensures that the VPS gets direct access to the array, and that when the system thinks something has been written, it actually has been.
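For example, on the host (the test file path and size are placeholders):

# reports real disk throughput: the page cache is flushed before dd prints the figure
dd if=/dev/zero of=/root/ddtest bs=1M count=8192 conv=fdatasync
rm /root/ddtest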
