A fairly basic system running striped mirrors on 7.2k RPM SAS disks, not particularly loaded. No deduplication; compression enabled on all datasets. The scrub has been running for 15 days at the speed of a dead snail. Is there some tuning that should be done, or could this be down to faulty hardware?
- Dell R510 with an MD1200 enclosure
- 2x Xeon E5620
- 48 GB RAM
- NexentaStor 3.1.3, Community Edition
Some information:
scan: scrub in progress since Mon Apr 1 19:00:05 2013
171G scanned out of 747G at 141K/s, 1187h40m to go
0 repaired, 22.84% done
config:

        NAME                         STATE     READ WRITE CKSUM
        tank                         ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c7t5000C500414FB2CFd0    ONLINE       0     0     0
            c7t5000C500414FCA57d0    ONLINE       0     0     0
          mirror-1                   ONLINE       0     0     0
            c7t5000C500415C3B1Bd0    ONLINE       0     0     0
            c7t5000C500415C5E4Fd0    ONLINE       0     0     0
          mirror-2                   ONLINE       0     0     0
            c7t5000C500415DC797d0    ONLINE       0     0     0
            c7t5000C500415DC933d0    ONLINE       0     0     0
        logs
          c7t5000A7203006D81Ed0      ONLINE       0     0     0
        cache
          c7t5000A72030068545d0      ONLINE       0     0     0
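The scan and config sections above are from zpool status; the scrub progress line can be re-checked at any time with:

# zpool status tank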
# iostat -en
  ---- errors ---
  s/w h/w  trn  tot  device
    0 8887   0  8887 c2t0d0
    0    0   0     0 c0t395301D6B0C8069Ad0
    0    0   0     0 c7t5000C500415DC933d0
    0    0   0     0 c7t5000A72030068545d0
    0    0   0     0 c7t5000C500415DC797d0
    0    0   0     0 c7t5000C500414FCA57d0
    0    0   0     0 c7t5000C500415C3B1Bd0
    0    0   0     0 c7t5000C500415C5E4Fd0
    0    0   0     0 c7t5000C500414FB2CFd0
    0    0   0     0 c7t5000A7203006D81Ed0
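The 8887 hardware errors on c2t0d0 stand out, especially given the question of faulty hardware. Per-device error detail (vendor, model, and a breakdown of the error counters) can be pulled with iostat's -E flag, e.g.:

# iostat -En c2t0d0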
The spa_last_io changes every time I run this:
# echo "::walk spa | ::print spa_t spa_name spa_last_io spa_scrub_inflight" | mdb -k
spa_name = [ "syspool" ]
spa_last_io = 0x25661402
spa_scrub_inflight = 0
spa_name = [ "tank" ]
spa_last_io = 0x25661f84
spa_scrub_inflight = 0x21
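spa_scrub_inflight is printed in hex, so 0x21 means 33 scrub I/Os in flight on tank. One way to confirm that spa_last_io keeps advancing is simply to re-run the walk in a loop:

# while :; do echo "::walk spa | ::print spa_t spa_name spa_last_io" | mdb -k; sleep 1; done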
Every 5 seconds or so, roughly 20-25 MB/s is written; between those write bursts there are essentially no reads or writes.
                             capacity     operations    bandwidth     latency
pool                       alloc   free   read  write   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----  -----  -----
syspool                     427G   501G      0      0      0      0   0.00   0.00
  c0t395301D6B0C8069Ad0s0   427G   501G      0      0      0      0   0.00   0.00
-------------------------  -----  -----  -----  -----  -----  -----  -----  -----
tank                        903G  1.84T    810  5.21K  1.50M  20.8M   9.42   4.71
  mirror                    301G   627G     22  1.00K  53.0K  3.96M   8.96   3.93
    c7t5000C500414FB2CFd0      -      -     20    244  50.1K  3.97M   6.70   1.14
    c7t5000C500414FCA57d0      -      -     19    242  48.2K  3.97M   7.60   1.12
  mirror                    301G   627G     25   1016  46.8K  4.10M  16.11   5.28
    c7t5000C500415C3B1Bd0      -      -     21    257  41.6K  4.11M   4.63   1.24
    c7t5000C500415C5E4Fd0      -      -     21    255  43.0K  4.11M  16.54   1.15
  mirror                    301G   627G     62    754   119K  3.03M  19.72   3.78
    c7t5000C500415DC797d0      -      -     57    219   114K  3.03M   9.99   1.15
    c7t5000C500415DC933d0      -      -     56    220   119K  3.03M  13.20   1.22
logs                           -      -      -      -      -      -
  c7t5000A7203006D81Ed0     260K  46.5G      0      0      0      0   0.00   0.00
cache                          -      -      -      -      -      -
  c7t5000A72030068545d0    93.1G     8M      0      0      0      0   0.00   0.00
-------------------------  -----  -----  -----  -----  -----  -----  -----  -----
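The table above is zpool iostat -v output sampled at an interval (the latency columns appear to be a NexentaStor addition to the stock command), i.e. something like:

# zpool iostat -v tank 5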
Is iostat telling me that I'm spending more time waiting on disk than I should be? Specifically, the %b column:
# iostat -xe
device    r/s   w/s   kr/s   kw/s  wait  actv  svc_t  %w  %b  s/w  h/w  trn  tot
sd3       5.1  43.9   20.6  643.8   0.0   0.1    2.9   0   5    0    0    0    0
sd4       9.4   1.8  141.1  169.6   0.0   0.0    0.5   0   0    0    0    0    0
sd5       3.1  43.8   15.8  643.8   0.0   0.1    1.4   0   3    0    0    0    0
sd6       5.2  38.1   14.3  494.4   0.0   0.1    3.0   0   7    0    0    0    0
sd7       4.2  40.2   11.1  623.2   0.0   0.1    2.7   0   7    0    0    0    0
sd8       3.6  44.3    9.7  623.2   0.0   0.1    1.5   0   4    0    0    0    0
sd9       2.9  37.4    7.0  494.4   0.0   0.1    1.3   0   2    0    0    0    0
sd10      0.7   0.4    3.4    0.0   0.0   0.0    0.0   0   0    0    0    0    0
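The sdN instance names above are hard to map back to pool devices; adding -n to the same flags prints the cXtYdZ names next to the error counters:

# iostat -xne 5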
Is the latency a bit too high?
# zpool iostat 10 10
               capacity     operations    bandwidth     latency
pool        alloc   free   read  write   read  write   read  write
tank         909G  1.83T     86  2.82K   208K  12.7M  22.68  13.63
----------  -----  -----  -----  -----  -----  -----  -----  -----
tank         909G  1.83T     29    857  42.4K  3.50M  17.86   4.47
----------  -----  -----  -----  -----  -----  -----  -----  -----
tank         909G  1.83T     30    947  46.1K  3.54M  15.55   5.67
Applied a few tweaks, which made little difference: zfs_top_maxinflight set to 127, zfs_scrub_delay to 0, and zfs_scan_idle to 0.
# echo zfs_top_maxinflight/D | mdb -k
zfs_top_maxinflight:
zfs_top_maxinflight:    127
# echo zfs_scrub_delay/D | mdb -k
zfs_scrub_delay:
zfs_scrub_delay:        0
# echo zfs_scan_idle/D | mdb -k
zfs_scan_idle:
zfs_scan_idle:          0
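For completeness, setting them presumably looked something like the following sketch: /W stores a 32-bit word, 0t marks decimal notation, and mdb needs -w to allow kernel writes.

# echo zfs_top_maxinflight/W0t127 | mdb -kw
# echo zfs_scrub_delay/W0t0 | mdb -kw
# echo zfs_scan_idle/W0t0 | mdb -kw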
scan: scrub in progress since Wed Apr 17 20:47:23 2013
1.85G scanned out of 918G at 1.14M/s, 229h36m to go
0 repaired, 0.20% done
Pre-mdb-tweak, note the rather high %b column:
$ iostat -nx -M 5
    r/s    w/s   Mr/s   Mw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
    0.0    0.0    0.0    0.0   0.0   0.0     0.0     0.0   0   0  c2t0d0
    0.0    0.0    0.0    0.0   0.0   0.0     0.0     0.0   0   0  c0t395301D6B0C8069Ad0
   35.2   44.2    0.3    0.7   0.0   0.4     0.0     5.3   0  32  c7t5000C500415DC933d0
   19.8    3.2    0.2    0.0   0.0   0.0     0.0     0.1   0   0  c7t5000A72030068545d0
   31.2   46.2    0.2    0.7   0.0   0.3     0.0     4.4   0  27  c7t5000C500415DC797d0
   30.6   46.8    0.2    0.8   0.0   0.4     0.0     4.6   0  28  c7t5000C500414FCA57d0
   37.6   53.0    0.3    0.8   0.0   0.4     0.0     4.7   0  33  c7t5000C500415C3B1Bd0
   37.6   53.6    0.3    0.8   0.0   0.5     0.0     5.6   0  39  c7t5000C500415C5E4Fd0
   33.2   46.8    0.3    0.8   0.0   0.5     0.0     6.1   0  33  c7t5000C500414FB2CFd0
    0.0    0.0    0.0    0.0   0.0   0.0     0.0     0.0   0   0  c7t5000A7203006D81Ed0
Post-mdb-tweak, note the %b column: the disks are now busy 80-85% of the time.
$ iostat -nx -M 5
    r/s    w/s   Mr/s   Mw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
    0.0    0.0    0.0    0.0   0.0   0.0     0.0     0.0   0   0  c2t0d0
    0.2   27.2    0.0    0.3   0.0   1.0     0.0    35.4   0  18  c0t395301D6B0C8069Ad0
  129.6   20.2    0.9    0.4   0.0   2.9     0.0    19.5   0  85  c7t5000C500415DC933d0
   48.4    4.0    0.4    0.0   0.0   0.0     0.0     0.1   0   1  c7t5000A72030068545d0
  130.4   19.8    0.9    0.4   0.0   3.0     0.0    20.2   0  84  c7t5000C500415DC797d0
  125.8   25.8    0.9    0.5   0.0   2.9     0.0    19.2   0  80  c7t5000C500414FCA57d0
  131.2   24.2    0.9    0.5   0.0   3.1     0.0    20.3   0  83  c7t5000C500415C3B1Bd0
  130.6   25.8    0.9    0.5   0.0   3.5     0.0    22.5   0  88  c7t5000C500415C5E4Fd0
  126.8   28.0    0.9    0.5   0.0   2.8     0.0    18.0   0  79  c7t5000C500414FB2CFd0
    0.2    0.0    0.0    0.0   0.0   0.0     0.0     0.1   0   0  c7t5000A7203006D81Ed0
What does smartctl -A /dev/disk say about each drive? (You may need to install smartctl; I don't know whether it ships with the base install.)
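A sketch of that check against one of the pool disks, once smartmontools is installed; the slice suffix and the -d scsi transport hint are assumptions to adjust for this box:

# smartctl -a -d scsi /dev/rdsk/c7t5000C500414FB2CFd0s0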