Several Monitoring Methods
Before professional operations and monitoring tools matured, the following methods were the main ways to monitor the running state of MongoDB.
profile
Profiling has the following levels:
0: profiling is turned off.
1: only slow operations are captured.
2: all operations are captured.
Enabling profiling and setting the level: profiling can be enabled from the mongo shell, or through the profile command in a driver. Once enabled, records are saved in the system.profile collection. Use db.setProfilingLevel to turn it on; the default slow threshold is 100 milliseconds. db.setProfilingLevel accepts two parameters: the first sets the profiling level, the second sets the slow threshold. To check the current profiling level, run db.getProfilingStatus(); to turn profiling off, run db.setProfilingLevel(0).
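A minimal mongo shell sketch of these commands (the 100 ms threshold here is just an illustrative value):
db.setProfilingLevel(1, 100)   // level 1: capture operations slower than 100 ms
db.getProfilingStatus()        // check the current level and threshold
db.setProfilingLevel(0)        // turn profiling off again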
The profiling data can be queried directly from the system.profile collection.
db.system.profile.find()
# or
show profile
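For example, a small sketch that lists the most recent profiled operations slower than 100 ms (millis and ts are standard fields of system.profile documents):
db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 }).limit(5).pretty()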
Mongostat
It reflects the current load on the mongod process.
# n is the refresh interval in seconds
mongostat n
inserts - # of inserts per second (* means replicated op)
query - # of queries per second
update - # of updates per second
delete - # of deletes per second
getmore - # of get mores (cursor batch) per second
command - # of commands per second, on a slave its local|replicated
flushes - # of fsync flushes per second
mapped - amount of data mmaped (total data size) megabytes
vsize - virtual size of process in megabytes
res - resident size of process in megabytes
faults - # of pages faults per sec
locked - name of and percent time for most locked database
idx miss - percent of btree page misses (sampled)
qr|qw - queue lengths for clients waiting (read|write)
ar|aw - active clients (read|write)
netIn - network traffic in - bits
netOut - network traffic out - bits
conn - number of open connections
set - replica set name
repl - replication type
PRI - primary (master)
SEC - secondary
REC - recovering
UNK - unknown
SLV - slave
RTR - mongos process ("router")
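A usage sketch (the host address here is just an example): sample a remote instance every 2 seconds and stop after 10 rows.
# --rowcount limits the number of rows printed; the trailing 2 is the refresh interval in seconds
mongostat --host 192.168.10.69 --rowcount 10 2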
mongotop
It works at the collection level and reflects the time spent on reads and writes.
> mongotop -h 192.168.10.69 2
It returns the following result every 2 seconds.
                          ns    total    read    write    2019-05-09T14:00:55
         ub1405.system.users      0ms     0ms      0ms
       ub1405.system.profile      0ms     0ms      0ms
    ub1405.system.namespaces      0ms     0ms      0ms
       ub1405.system.indexes      0ms     0ms      0ms
         ub1405.WapRecommend      0ms     0ms      0ms
        ub1405.VisitPageInfo      0ms     0ms      0ms
            ub1405.UsageInfo      0ms     0ms      0ms
          ub1405.UpgradeInfo      0ms     0ms      0ms
               ub1405.Switch      0ms     0ms      0ms
mongoperf
It can be used for MongoDB IO stress testing, similar to SQLIOSim for SQL Server.
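A minimal sketch of running it, assuming the tool reads its JSON test configuration from stdin (the field values below are illustrative):
# 4 threads, a 1 GB test file, both reads and writes, regular (non-memory-mapped) file IO
echo "{ nThreads: 4, fileSizeMB: 1024, r: true, w: true, mmf: false }" | mongoperf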
db.serverStatus()
It contains a great deal of information, covering the following sections (a short mongo shell sketch follows the list):
Instance information
Locks
Global lock
Memory usage
Connections
Extra information
Index counters
cursors
Network
Replica set
Replica set operation counters
Operation counters
Asserts
writeBackQueued
Journal (dur) durability
recordStats
Working set (workingSet)
Metrics (metrics)
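Since the output is one large document, individual subsections can be pulled out directly in the mongo shell. A small sketch (these field names follow the classic serverStatus output):
var status = db.serverStatus()
status.connections   // current and available connections
status.opcounters    // insert / query / update / delete / getmore / command counters
status.mem           // resident, virtual, and mapped memory usage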
db.stats()
It reports the storage space used by the database.
{
"db" : "ub1405",
"collections" : 17,
"objects" : 9939344,
"avgObjSize" : 336.2453477815035,
"dataSize" : 3342058180,
"storageSize" : 4501643264,
"numExtents" : 111,
"indexes" : 15,
"indexSize" : 322633136,
"fileSize" : 8519680000,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"ok" : 1
}
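db.stats() also accepts an optional scale argument, so the sizes can be reported in units other than bytes; for example, in megabytes:
db.stats(1024 * 1024)   // dataSize, storageSize, indexSize, etc. reported in MB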
db.collection.stats()
Returns information about a single collection.
{
"ns" : "ub1405.WapRecommend",
"count" : 514,
"size" : 174416,
"avgObjSize" : 339.3307392996109,
"storageSize" : 430080,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=0,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=0,extractor=,format=btree,huffman_key=,huffman_value=,immutable=0,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=),lsm=(auto_throttle=,bloom=,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=0,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:chebian/collection-113--3662637609168705337",
"LSM" : {
"bloom filters in the LSM tree" : 0,
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"total size of bloom filters" : 0,
"sleep for LSM checkpoint throttle" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM merge throttle" : 0
},
"block-manager" : {
"file allocation unit size" : 4096,
"blocks allocated" : 4715,
"checkpoint size" : 11079680,
"allocations requiring file extension" : 180,
"blocks freed" : 1250,
"file magic number" : 120897,
"file major version number" : 1,
"minor version number" : 0,
"file bytes available for reuse" : 704512,
"file size in bytes" : 712704
},
"btree" : {
"btree checkpoint generation" : 2609,
"column-store variable-size deleted values" : 0,
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size RLE encoded values" : 0,
"column-store variable-size leaf pages" : 0,
"pages rewritten by compaction" : 0,
"number of key/value pairs" : 0,
"fixed-record size" : 0,
"maximum tree depth" : 3,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 3276,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 67108864,
"overflow pages" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes read into cache" : 1890440,
"bytes written from cache" : 20939898,
"checkpoint blocked page eviction" : 0,
"unmodified pages evicted" : 0,
"page split during eviction deepened the tree" : 0,
"modified pages evicted" : 57,
"data source pages selected for eviction unable to be evicted" : 0,
"hazard pointer blocked page eviction" : 0,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"in-memory page splits" : 0,
"in-memory page passed criteria to be split" : 0,
"overflow values cached in memory" : 0,
"pages read into cache" : 59,
"pages read into cache requiring lookaside entries" : 0,
"overflow pages read into cache" : 0,
"pages written from cache" : 2366,
"page written requiring lookaside records" : 0,
"pages written requiring in-memory restoration" : 0
},
"compression" : {
"raw compression call failed, no additional data available" : 0,
"raw compression call failed, additional data available" : 0,
"raw compression call succeeded" : 0,
"compressed pages read" : 58,
"compressed pages written" : 1047,
"page written failed to compress" : 0,
"page written was too small to compress" : 1319
},
"cursor" : {
"create calls" : 16,
"insert calls" : 6213,
"bulk-loaded cursor-insert calls" : 0,
"cursor-insert key and value bytes inserted" : 752452,
"next calls" : 112837081,
"prev calls" : 2,
"remove calls" : 21264,
"cursor-remove key bytes removed" : 55218,
"reset calls" : 954717,
"restarted searches" : 0,
"search calls" : 42528,
"search near calls" : 899761,
"truncate calls" : 0,
"update calls" : 0,
"cursor-update value bytes updated" : 0
},
"reconciliation" : {
"dictionary matches" : 0,
"internal page multi-block writes" : 0,
"leaf page multi-block writes" : 1170,
"maximum blocks required for a page" : 0,
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"overflow values written" : 0,
"pages deleted" : 59,
"fast-path pages deleted" : 0,
"page checksum matches" : 16664,
"page reconciliation calls" : 2407,
"page reconciliation calls for eviction" : 57,
"leaf page key bytes discarded using prefix compression" : 0,
"internal page key bytes discarded using suffix compression" : 16682
},
"session" : {
"object compaction" : 0,
"open cursor count" : 5
},
"transaction" : {
"update conflicts" : 0
}
},
"nindexes" : 1,
"totalIndexSize" : 360448,
"indexSizes" : {
"_id_" : 360448
},
"ok" : 1
}
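The same scale argument works at the collection level; a small sketch using the collection from the sample output above:
db.WapRecommend.stats(1024)   // collection and index sizes reported in KB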