In MongoDB (version 3.2.9), a sharded cluster is a way to scale database performance horizontally: the dataset is distributed across multiple shards, with each shard storing only a portion of it. MongoDB guarantees that no data is duplicated between shards, so the union of all shards' data is the complete dataset. Because the data is distributed, the load is spread across the shards as well: each shard handles reads and writes for only part of the data, which makes full use of each shard's system resources and raises the overall throughput of the database system.

The dataset is split into chunks, each containing multiple documents, and the chunks are distributed across the sharded cluster. MongoDB tracks which chunks reside on which shard; this distribution information is the cluster's metadata, stored in the config database on the config servers. A production cluster normally runs 3 config servers, and the config database must be identical on all of them. You can inspect the shard metadata by connecting to the config database through mongos, and the mongo shell provides the sh helper functions to view the cluster metadata safely.

Querying any single shard returns only the subset of a collection stored on that shard, not the whole dataset. The application only needs to connect to mongos and issue its reads and writes there; mongos automatically routes each request to the appropriate shard. In this way mongos makes the sharding implementation transparent to the application: from the application's point of view, it is accessing the complete dataset.
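As a toy illustration of this partitioning (the key ranges below are assumed for the example, not MongoDB's actual chunk boundaries), the disjoint-subsets property can be sketched in Python:

```python
# Toy model of range-based sharding: each shard owns a disjoint
# shard-key range, and the union of all shards is the full dataset.
docs = [{"_id": i} for i in range(100)]

# Hypothetical ranges; real chunk boundaries are chosen by MongoDB.
ranges = {"shard1": (0, 30), "shard2": (30, 70), "shard3": (70, 100)}

shards = {
    name: [d for d in docs if lo <= d["_id"] < hi]
    for name, (lo, hi) in ranges.items()
}

# No document appears on two shards, and nothing is lost:
all_ids = sorted(d["_id"] for part in shards.values() for d in part)
assert all_ids == [d["_id"] for d in docs]
print({name: len(part) for name, part in shards.items()})
```

Each shard holds only its own slice, yet reassembling the slices recovers every document exactly once.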
1. Primary Shard

In a sharded cluster, not every collection is distributed. Only after a collection has been explicitly sharded with sh.shardCollection() is it distributed across different shards. The data of an un-sharded collection is stored only on the primary shard. By default, the primary shard is the shard on which the database was originally created, and it stores all of that database's un-sharded collections. Every database has a primary shard.

Each database in a sharded cluster has a primary shard that holds all the un-sharded collections for that database. Each database has its own primary shard.

For example, suppose a cluster has three shards: shard1, shard2, and shard3, and a database blog is created on shard1. If the database blog is then sharded, MongoDB automatically creates a database blog with the same structure on shard2 and shard3, and the primary shard of blog remains shard1.

Figure: the primary shard of Collection2 is ShardA.

ʹÓà movePrimaryÃüÁî±ä¸üÊý¾Ý¿âĬÈϵÄPrimary shard£¬·Ç·ÖƬ¼¯ºÏ½«»á´Óµ±Ç°shardÒÆ¶¯µ½ÐµÄÖ÷·ÖƬ¡£
db.runCommand( { movePrimary : "test", to : "shard0001" } ) |
ÔÚʹÓÃmovePrimaryÃüÁî±ä¸üÊý¾Ý¿âµÄÖ÷·ÖƬ֮ºó£¬config serverÖеÄÅäÖÃÐÅÏ¢ÊÇ×îеģ¬mongos»º´æµÄÅäÖÃÐÅÏ¢±äµÃ¹ýʱÁË¡£MongoDBÌṩÃüÁflushRouterConfig
Ç¿ÖÆmongos´Óconfig server»ñÈ¡×îеÄÅäÖÃÐÅÏ¢£¬Ë¢ÐÂmongosµÄ»º´æ¡£
db.adminCommand({"flushRouterConfig":1}) |
2. Shard Metadata

Do not view the cluster metadata by connecting to the config servers directly; this data is critical. The safe ways are to connect to the config database through mongos, or to use the sh helper functions (e.g. sh.status()).

Connect to mongos to view the collections in the config database:

1) The shards collection stores the shard information.

The shard's data is stored in the replica set or standalone mongod specified by the host field.

{ "_id" : "shard_name", "host" : "replica_set_name/host:port", "tag" : [shard_tag1, shard_tag2] }
2) The databases collection stores information about every database in the cluster, whether sharded or not.

If sh.enableSharding("db_name") has been run on a database, its partitioned field is true; the primary field names the database's primary shard.

{ "_id" : "test", "primary" : "rs0", "partitioned" : true }
3) The collections collection stores information about every sharded collection (un-sharded collections are not included).

The key field is the collection's shard key.

db.collections.find()
{
    "_id" : "test.foo",
    "lastmodEpoch" : ObjectId("57dcd4899bd7f7111ec15f16"),
    "lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
    "dropped" : false,
    "key" : {
        "_id" : 1
    },
    "unique" : true
}
4) The chunks collection stores the chunk information:

ns: the sharded collection, in the form db_name.collection_name
min and max: the minimum and maximum shard-key values of the chunk
shard: the shard where the chunk resides

db.chunks.find()
{
    "_id" : "test.foo-_id_MinKey",
    "lastmod" : Timestamp(1, 1),
    "lastmodEpoch" : ObjectId("57dcd4899bd7f7111ec15f16"),
    "ns" : "test.foo",
    "min" : {
        "_id" : 1
    },
    "max" : {
        "_id" : 3087
    },
    "shard" : "rs0"
}
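Given chunk documents shaped like the one above, mongos-style routing can be sketched as looking up the chunk whose [min, max) range covers the shard-key value (a simplified model with a made-up second chunk; route() is an illustrative helper, not a MongoDB API):

```python
# Simplified routing table built from config.chunks-style documents.
# The second chunk is hypothetical, added so there is something to route between.
chunks = [
    {"ns": "test.foo", "min": {"_id": 1},    "max": {"_id": 3087}, "shard": "rs0"},
    {"ns": "test.foo", "min": {"_id": 3087}, "max": {"_id": 9000}, "shard": "rs1"},
]

def route(ns, key_value):
    """Return the shard owning the chunk whose [min, max) covers key_value."""
    for c in chunks:
        if c["ns"] == ns and c["min"]["_id"] <= key_value < c["max"]["_id"]:
            return c["shard"]
    raise KeyError("no chunk covers this shard-key value")

print(route("test.foo", 2000))  # a key in [1, 3087) lands on rs0
```

This is why mongos must keep its copy of the chunk metadata fresh: a stale table would route requests to the wrong shard.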
5) The changelog collection records the cluster's operations, including chunk splits and migrations and the addition or removal of shards.

The what field indicates the type of operation, for example:

"what" : "addShard"
"what" : "shardCollection.start"
"what" : "shardCollection.end"
"what" : "multi-split"    (a chunk split)
6) The tags collection records each shard tag and its corresponding shard-key range.

{ "_id" : { "ns" : "records.users", "min" : { "zipcode" : "10001" } }, "ns" : "records.users", "min" : { "zipcode" : "10001" }, "max" : { "zipcode" : "10281" }, "tag" : "NYC" }
7) The settings collection records the balancer state and the chunk size; the default chunk size is 64MB.

{ "_id" : "chunksize", "value" : 64 }
{ "_id" : "balancer", "stopped" : false }
8) The locks collection records distributed locks, which guarantee that only one mongos instance at a time can perform administrative tasks in the cluster. When a mongos acts as the balancer, it acquires a distributed lock by inserting a document into config.locks.

The locks collection stores a distributed lock. This ensures that only one mongos instance can perform administrative tasks on the cluster at once. The mongos acting as balancer takes a lock by inserting a document resembling the following into the locks collection.

{ "_id" : "balancer", "process" : "example.net:40000:1350402818:16807", "state" : 2, "ts" : ObjectId("507daeedf40e1879df62e5f3"), "when" : ISODate("2012-10-16T19:01:01.593Z"), "who" : "example.net:40000:1350402818:16807:Balancer:282475249", "why" : "doing balance round" }
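The mutual exclusion this lock document provides can be modeled with a toy compare-and-set (an in-memory sketch only, not the actual config.locks protocol; DistributedLock is an illustrative name):

```python
class DistributedLock:
    """Toy model: at most one holder at a time, as with config.locks."""

    def __init__(self):
        self.holder = None

    def try_acquire(self, who):
        # Atomic in this single-threaded model; MongoDB enforces the same
        # idea with state fields on the lock document instead.
        if self.holder is None:
            self.holder = who
            return True
        return False

    def release(self, who):
        if self.holder == who:
            self.holder = None

lock = DistributedLock()
print(lock.try_acquire("mongos-a"))  # acquired
print(lock.try_acquire("mongos-b"))  # refused while held
lock.release("mongos-a")
print(lock.try_acquire("mongos-b"))  # now succeeds
```

The point is only the invariant: while one mongos holds the balancer lock, every other mongos's attempt to take it fails.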
3. Removing a Shard

When removing a shard, you must make sure its data is moved to the other shards first: for sharded collections, use the balancer to migrate the chunks; for un-sharded collections, you must change the collection's primary shard.

1) Removing sharded collection data

step1: make sure the balancer is enabled

sh.setBalancerState(true);
step2: migrate all sharded collection data to the other shards

use admin
db.adminCommand({"removeShard":"shard_name"})

The removeShard command migrates the chunks from the shard being removed to the other shards; if the shard holds many chunks, the migration can take a long time.

step3: check the migration status

use admin
db.runCommand( { removeShard: "shard_name" } )

Running removeShard again reports the migration status; the remaining field shows the number of chunks left to migrate.

{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : 42, "dbs" : 1 }, "ok" : 1 }
step4: the migration completes

use admin
db.runCommand( { removeShard: "shard_name" } )
{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "shard_name",
    "ok" : 1
}
2) Removing an un-sharded database

step1: find the un-sharded databases

Un-sharded databases fall into two cases:

the database itself has not been sharded, i.e. sh.enableSharding("db_name") was never run on it; in the config database, its partitioned field is false
the database contains un-sharded collections, i.e. the shard being removed is those collections' primary shard

use config
db.databases.find({$or:[{"partitioned":false},{"primary":"shard_name"}]})

For a database with partitioned=false, all of its data lives on the current shard. For a database with partitioned=true and primary="shard_name", un-sharded collections are stored in it, and the primary shard of those collections must be changed.

step2: change the database's primary shard

db.runCommand( { movePrimary: "db_name", to: "new_shard" })
4. Adding a Shard

Since each shard stores only part of the dataset, to keep the data highly available it is recommended to use a Replica Set as the shard, even if the Replica Set contains only one member. Connect to mongos and add the shard with the sh helper function:

sh.addShard("replica_set_name/host:port")

Using a standalone mongod as a shard is not recommended.
5. Jumbo Chunks

In some situations a chunk keeps growing beyond the chunk size limit and becomes a jumbo chunk. This happens when all the documents in the chunk share the same shard key, so MongoDB cannot split the chunk; if it keeps growing, the chunk distribution becomes unbalanced and the chunk turns into a performance bottleneck.

Chunk migration is also limited: a chunk cannot be moved if it holds more than 250,000 documents, or more than 1.3 times the number obtained by dividing the configured chunk size by the average document size (the default chunk size is 64MB). A chunk that exceeds the limit is marked by MongoDB as a jumbo chunk, and MongoDB cannot migrate it to another shard.
MongoDB cannot move a chunk if the number of documents in the chunk exceeds either 250000 documents or 1.3 times the result of dividing the configured chunk size by the average document size.
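The quoted rule can be expressed as a small check (a sketch: is_migratable is an illustrative name, and the 1KB average document size below is an assumption for the example):

```python
def is_migratable(doc_count, chunk_size_bytes, avg_doc_size_bytes):
    # Per the rule above: a chunk cannot be moved if its document count
    # exceeds 250000, or exceeds 1.3 * (configured chunk size / average
    # document size). Violating either bound blocks the migration.
    limit = min(250_000, 1.3 * chunk_size_bytes / avg_doc_size_bytes)
    return doc_count <= limit

# With the default 64MB chunk size and assumed 1KB average documents,
# the effective cap is 1.3 * 65536 ≈ 85196 documents.
chunk_size = 64 * 1024 * 1024
avg_doc = 1024
print(is_migratable(50_000, chunk_size, avg_doc))   # within the limit
print(is_migratable(90_000, chunk_size, avg_doc))   # over 1.3x the ratio
print(is_migratable(300_000, chunk_size, avg_doc))  # over 250000 documents
```

This also shows why raising the chunk size setting (as in step2 below) makes a previously stuck chunk movable: it raises the ratio-based bound.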
1) Finding jumbo chunks

sh.status() reveals jumbo chunks: a jumbo chunk is followed by the jumbo flag.

{ "x" : 2 } -->> { "x" : 3 } on : shard-a Timestamp(2, 2) jumbo
2) Distributing jumbo chunks

Jumbo chunks cannot be split and cannot be distributed automatically by the balancer; they must be distributed manually.

step1: disable the balancer

sh.setBalancerState(false)

step2: increase the chunk size setting

Since MongoDB will not move a chunk that exceeds the size limit, you must temporarily raise the chunk size setting, and then distribute the jumbo chunks evenly across the cluster.

use config
db.settings.save({"_id":"chunksize","value":1024})
step3: move the jumbo chunk

sh.moveChunk("db_name.collection_name", {sharded_field: "value_in_chunk"}, "new_shard_name")

step4: re-enable the balancer

sh.setBalancerState(true)
step5: refresh the mongos configuration cache

Force mongos to synchronize the configuration from the config servers and refresh its cache.

use admin
db.adminCommand({ flushRouterConfig: 1 } )
6. The Balancer

The balancer role is taken on by a mongos; that is, mongos is responsible not only for routing queries to the appropriate shards but also for balancing the chunks. Normally MongoDB balances the data automatically; you can check the balancer state through config.settings, or with the sh helper function sh.getBalancerState().

If it returns true, the balancer is running and the system is handling the balancing automatically. The sh helper can also disable the balancer:

sh.setBalancerState(false)
The balancer cannot immediately abort a chunk migration that is already in progress. When a mongos becomes the balancer, it acquires the balancer lock; you can inspect it in the config.locks collection:

use config
db.locks.find({"_id":"balancer"})

If state=2, the balancer is active; if state=0, the balancer has been disabled.
The balancing process actually migrates chunks from one shard to another, or first splits a large chunk into smaller ones and then migrates the small chunks to other shards. Both migration and splitting add IO load to the system, so it is best to confine balancing to periods when the system is idle. You can set an active time window for the balancer, restricting chunk splits and migrations to the specified interval.

use config
db.settings.update(
    {"_id":"balancer"},
    {"$set":{"activeWindow":{"start":"23:00","stop":"04:00"}}},
    true
)
What the balancer splits and moves are chunks; it only guarantees that the number of chunks is balanced across the shards. The number of documents in each chunk is not necessarily balanced: some chunks may contain many documents, others few or even none. You should therefore choose the shard key carefully: a field that satisfies the vast majority of queries and also distributes documents evenly is the best choice for a shard key.
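The chunk-count-only balancing described above can be sketched as a greedy round (a toy model with assumed chunk counts; the real balancer's migration thresholds and policy are more involved):

```python
def balance_round(chunk_counts, threshold=2):
    """Greedily move one chunk at a time from the most-loaded shard to the
    least-loaded one until the spread is within the threshold.
    Note: only chunk *counts* are balanced, never the documents inside."""
    counts = dict(chunk_counts)
    moves = []
    while True:
        src = max(counts, key=counts.get)
        dst = min(counts, key=counts.get)
        if counts[src] - counts[dst] <= threshold:
            return counts, moves
        counts[src] -= 1
        counts[dst] += 1
        moves.append((src, dst))

counts, moves = balance_round({"shard1": 10, "shard2": 2, "shard3": 3})
print(counts)       # chunk counts end up roughly even
print(len(moves))   # number of simulated chunk migrations
```

Even after this round the shards can hold very different document counts, which is exactly the shard-key caveat above: the balancer equalizes chunks, not documents.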