Measure IOPS and other metrics with the fio command.
# yum -y install fio
Note that fio version 2 and fio version 3 behave slightly differently. To install a newer fio from EPEL:
# yum install epel-release
# yum --enablerepo=epel-testing install fio
# fio -filename=/tmp/fio2g -direct=1 -rw=write -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio
# fio -filename=/tmp/fio2g -direct=1 -rw=randwrite -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio
# fio -filename=/tmp/fio2g -direct=1 -rw=read -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio
# fio -filename=/tmp/fio2g -direct=1 -rw=randread -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio
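The four commands above differ only in the rw= pattern (sequential write, random write, sequential read, random read), so they can be driven from one loop. A minimal sketch; the DRYRUN guard is an addition here (not from the original commands) so the loop prints the commands by default instead of running a heavy benchmark:

```shell
# Loop over the four access patterns used above.
# DRYRUN=1 (the default, a hypothetical safety switch) only prints the
# commands; run with DRYRUN=0 to actually execute fio.
for rw in write randwrite read randread; do
    cmd="fio -filename=/tmp/fio2g -direct=1 -rw=$rw -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio"
    if [ "${DRYRUN:-1}" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
done
```

Remember to remove /tmp/fio2g (2 GB) afterwards.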
Below is an example of a sequential write run.
With a 4k block size:
iops: 13209
throughput: 52840KB/s
latency: 10msec
* For latency, look at the bucket with the highest percentage.
# fio -filename=/tmp/fio2g -direct=1 -rw=write -bs=4k -size=2G -numjobs=64 -runtime=10 -group_reporting -name=fio
fio: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
fio-2.0.13
Starting 64 processes
Jobs: 64 (f=64): [WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW] [100.0% done] [0K/53930K/0K /s] [0 /13.5K/0 iops] [eta 00m:00s]
fio: (groupid=0, jobs=64): err= 0: pid=55137: Mon Nov 12 17:27:48 2018
  write: io=528872KB, bw=52840KB/s, iops=13209 , runt= 10009msec
    clat (usec): min=38 , max=22072 , avg=4837.10, stdev=1451.12
     lat (usec): min=38 , max=22072 , avg=4837.52, stdev=1451.14
    clat percentiles (usec):
     |  1.00th=[   62],  5.00th=[   73], 10.00th=[ 4768], 20.00th=[ 4832],
     | 30.00th=[ 4896], 40.00th=[ 4896], 50.00th=[ 5024], 60.00th=[ 5088],
     | 70.00th=[ 5216], 80.00th=[ 5472], 90.00th=[ 5792], 95.00th=[ 6048],
     | 99.00th=[ 6816], 99.50th=[ 7392], 99.90th=[11456], 99.95th=[18048],
     | 99.99th=[20864]
    bw (KB/s)  : min=  667, max= 2674, per=1.56%, avg=825.18, stdev=180.35
    lat (usec) : 50=0.04%, 100=6.87%, 250=0.14%, 500=0.01%, 750=0.01%
    lat (msec) : 2=0.01%, 4=0.09%, 10=92.74%, 20=0.07%, 50=0.05%
  cpu          : usr=0.14%, sys=1.26%, ctx=265775, majf=0, minf=1983
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=132218/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=528872KB, aggrb=52839KB/s, minb=52839KB/s, maxb=52839KB/s, mint=10009msec, maxt=10009msec

Disk stats (read/write):
  dm-2: ios=0/130658, merge=0/0, ticks=0/6968, in_queue=6968, util=69.67%, aggrios=0/132282, aggrmerge=0/94, aggrticks=0/7037, aggrin_queue=6990, aggrutil=68.93%
  sda: ios=0/132282, merge=0/94, ticks=0/7037, in_queue=6990, util=68.93%
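The headline numbers (bandwidth, iops) can be pulled out of a saved result with awk. A small sketch; the sample line is the write summary copied from the output above:

```shell
# Extract bw and iops from a fio summary line.
# The sample line is the "write:" summary from the run above.
line='  write: io=528872KB, bw=52840KB/s, iops=13209 , runt= 10009msec'
echo "$line" | awk -F'[=, ]+' '{
    for (i = 1; i <= NF; i++) {
        if ($i == "bw")   print "bw="$(i+1)
        if ($i == "iops") print "iops="$(i+1)
    }
}'
```

In practice, feed the whole saved log through the same awk and filter on the "write:" or "read :" lines first.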
ioengine | How I/O is issued. Default is sync. Available engines: sync, psync, vsync, psyncv, libaio, posixaio, solarisaio, windowsaio, mmap, splice, syslet-rw, sg, null, net, netsplice, cpuio, guasi, rdma, falloc, e4defrag, rbd, gfapi, gfapi_async, libhdfs, mtd, external |
numjobs | Number of jobs (threads/processes) |
fio_config.txt
[global]
ioengine=libaio
iodepth=1
size=1g
direct=1
runtime=60
directory=/tmp
stonewall

[Seq-Read]
bs=1m
rw=read

[Seq-Write]
bs=1m
rw=write

[Rand-Read-512K]
bs=512k
rw=randread

[Rand-Write-512K]
bs=512k
rw=randwrite

[Rand-Read-4K]
bs=4k
rw=randread

[Rand-Write-4K]
bs=4k
rw=randwrite

[Rand-Read-4K-QD32]
iodepth=32
bs=4k
rw=randread

[Rand-Write-4K-QD32]
iodepth=32
bs=4k
rw=randwrite
# fio -f fio_config.txt --output-format=terse > `date +%Y%m%d`
# cat `date +%Y%m%d` | awk -F ';' '{print $3, "\tbw:"($7+$48)/1000 "MB\tiops:"($8+$49)}'
Seq-Read 	bw:102.67MB	iops:100
Seq-Write 	bw:95.681MB	iops:93
Rand-Read-512K 	bw:46.719MB	iops:91
Rand-Write-512K 	bw:56.554MB	iops:110
Rand-Read-4K 	bw:0.728MB	iops:182
Rand-Write-4K 	bw:1.832MB	iops:458
Rand-Read-4K-QD32 	bw:3.88MB	iops:970
Rand-Write-4K-QD32 	bw:1.785MB	iops:446
From there, post-process the terse output with awk or similar.
The man page documents the exact order of the terse output fields:
1 terse_version_3  2 fio_version  3 jobname  4 groupid  5 error
6 read_kb  7 read_bandwidth  8 read_iops  9 read_runtime_ms
10 read_slat_min  11 read_slat_max  12 read_slat_mean  13 read_slat_dev
14 read_clat_min  15 read_clat_max  16 read_clat_mean  17 read_clat_dev
18 read_clat_pct01  19 read_clat_pct02  20 read_clat_pct03  21 read_clat_pct04
22 read_clat_pct05  23 read_clat_pct06  24 read_clat_pct07  25 read_clat_pct08
26 read_clat_pct09  27 read_clat_pct10  28 read_clat_pct11  29 read_clat_pct12
30 read_clat_pct13  31 read_clat_pct14  32 read_clat_pct15  33 read_clat_pct16
34 read_clat_pct17  35 read_clat_pct18  36 read_clat_pct19  37 read_clat_pct20
38 read_tlat_min  39 read_lat_max  40 read_lat_mean  41 read_lat_dev
42 read_bw_min  43 read_bw_max  44 read_bw_agg_pct  45 read_bw_mean  46 read_bw_dev
47 write_kb  48 write_bandwidth  49 write_iops  50 write_runtime_ms
51 write_slat_min  52 write_slat_max  53 write_slat_mean  54 write_slat_dev
55 write_clat_min  56 write_clat_max  57 write_clat_mean  58 write_clat_dev
59 write_clat_pct01  60 write_clat_pct02  61 write_clat_pct03  62 write_clat_pct04
63 write_clat_pct05  64 write_clat_pct06  65 write_clat_pct07  66 write_clat_pct08
67 write_clat_pct09  68 write_clat_pct10  69 write_clat_pct11  70 write_clat_pct12
71 write_clat_pct13  72 write_clat_pct14  73 write_clat_pct15  74 write_clat_pct16
75 write_clat_pct17  76 write_clat_pct18  77 write_clat_pct19  78 write_clat_pct20
79 write_tlat_min  80 write_lat_max  81 write_lat_mean  82 write_lat_dev
83 write_bw_min  84 write_bw_max  85 write_bw_agg_pct  86 write_bw_mean  87 write_bw_dev
88 cpu_user  89 cpu_sys  90 cpu_csw  91 cpu_mjf  92 cpu_minf
93 iodepth_1  94 iodepth_2  95 iodepth_4  96 iodepth_8  97 iodepth_16  98 iodepth_32  99 iodepth_64
100 lat_2us  101 lat_4us  102 lat_10us  103 lat_20us  104 lat_50us  105 lat_100us
106 lat_250us  107 lat_500us  108 lat_750us  109 lat_1000us
110 lat_2ms  111 lat_4ms  112 lat_10ms  113 lat_20ms  114 lat_50ms  115 lat_100ms
116 lat_250ms  117 lat_500ms  118 lat_750ms  119 lat_1000ms  120 lat_2000ms  121 lat_over_2000ms
122 disk_name  123 disk_read_iops  124 disk_write_iops  125 disk_read_merges  126 disk_write_merges
127 disk_read_ticks  128 write_ticks  129 disk_queue_time  130 disk_util
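With that numbering, any field can be cut out of a terse record by position. A sketch with a synthetic record: the jobname and the two write fields (48 write_bandwidth, 49 write_iops) are filled with the Rand-Write-4K numbers from the run earlier; all other fields are zeroed just to hold their positions:

```shell
# Build a synthetic 130-field terse record. Only fields 3, 48 and 49
# carry real-looking values (from the Rand-Write-4K result earlier);
# the rest are zero placeholders.
record=$(awk 'BEGIN {
    for (i = 1; i <= 130; i++) {
        v = "0"
        if (i == 3)  v = "Rand-Write-4K"
        if (i == 48) v = "1832"
        if (i == 49) v = "458"
        printf "%s%s", v, (i < 130 ? ";" : "")
    }
}')
# Pick fields out by the positions documented above.
echo "$record" | awk -F';' '{print $3, "write_bw="$48"KB/s", "write_iops="$49}'
```

The same `-F';'` plus `$N` pattern works on real `--output-format=terse` output; only the field indices come from the list above.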