Editor's recommendation: This article, from dongkelun, covers sc.defaultParallelism and sc.defaultMinPartitions under various conditions, and the default partition counts when creating and transforming RDDs and DataFrames.
ǰÑÔ
Understanding Spark's partitioning is important for Spark performance tuning. This article summarizes the default number of partitions Spark uses when creating RDDs and DataFrames through various functions. The defaults mainly depend on sc.defaultParallelism, sc.defaultMinPartitions, and the number of HDFS blocks of the file being read; there are also some treacherous cases where the default partition count is 1.
With few partitions, few tasks run in parallel. In the extreme case of a single partition, only one Executor actually does work even if you allocated many, so a large job runs very slowly, as if it had hung. Knowing the default partition count in each situation is therefore essential for tuning, especially for operators whose results come back with a single partition. (I was bitten by this myself: I had allocated plenty of Executors, set the default parallelism, and partitioned the input dataset, yet the result still had only one partition.)
1. About sc.defaultMinPartitions
sc.defaultMinPartitions = min(sc.defaultParallelism, 2)
In other words, sc.defaultMinPartitions can only be 1 or 2: it is 2 when sc.defaultParallelism > 1, and 1 when sc.defaultParallelism = 1.
This formula is defined in the source code (both methods live in the SparkContext class):
def defaultMinPartitions: Int = math.min(defaultParallelism, 2)

def defaultParallelism: Int = {
  assertNotStopped()
  taskScheduler.defaultParallelism
}
2. About sc.defaultParallelism
2.1 sc.defaultParallelism can be set via spark.default.parallelism
2.1.1 Configure in a file
Add a line to spark-defaults.conf (this is on my Windows environment):
spark.default.parallelism=20
Verification: entering sc.defaultParallelism in spark-shell prints 20.

2.1.2 Configure in code
val spark = SparkSession.builder()
  .appName("TestPartitionNums")
  .master("local")
  .config("spark.default.parallelism", 20)
  .getOrCreate()
val sc = spark.sparkContext
println(sc.defaultParallelism)
spark.stop()

2.1.3 Configure via spark-submit
Pass --conf spark.default.parallelism=20:
spark-submit --conf spark.default.parallelism=20 ...
2.2 Default values when spark.default.parallelism is not configured
2.2.1 spark-shell
In spark-shell the value equals the number of CPU cores; for example, my Windows machine has 4 cores, so the value is 4.

On a test machine with 8 cores, the value is 8.

2.2.2 master set to local
Note: passing --master local to spark-shell and calling .master("local") in code give the same result; spark-shell is used as the example here.
When master is local the value is 1; when master is local[n] the value is n, as the sketch below shows.
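If you prefer checking this in code rather than spark-shell, here is a minimal sketch (assuming a local Spark installation; the expected values just follow the rule stated above):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("LocalParallelismCheck")
  .master("local[3]") // with "local" expect 1; with "local[3]" expect 3
  .getOrCreate()
println(spark.sparkContext.defaultParallelism) // prints 3 here
spark.stop()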

2.2.3 master set to local[*] behaves the same as not specifying a master (2.2.1): the value is the number of CPU cores.

2.2.4 master set to yarn
In yarn mode the value is the total number of CPU cores across all allocated Executors, or 2, whichever is larger. For example, with --num-executors 5 --executor-cores 1 the total is 5 cores, so sc.defaultParallelism = max(5, 2) = 5. To test this, comment out the .master line in the code from 2.1.2, package it, and run it with the script below.
test.sh
spark-submit --num-executors $1 --executor-cores 1 --executor-memory 640M \
  --master yarn --class com.dkl.leanring.spark.TestPartitionNums \
  spark-scala_2.11-1.0.jar
I used this approach rather than spark-shell because its screenshots take up less space.
Note that yarn mode uses virtual CPU cores, which can deviate from the physical core count; this presumably depends on the YARN configuration. Judging by the results, the actual number of cores granted also varies slightly between submissions; I have not dug into why.

2.2.5 Standalone and other cluster modes
Since I use yarn mode at work, I cannot verify Standalone and the other modes with screenshots here; according to material online, the defaults should be the same as in yarn mode.
3. Default partitions for HDFS files
This section and the following ones discuss the partitioning of the rdd or dataframe itself; reading an HDFS file does not change the values of sc.defaultParallelism and sc.defaultMinPartitions discussed above.
3.1 sc.textFile()
rdd partition count = max(number of HDFS blocks of the file, sc.defaultMinPartitions)
3.1.1 Testing a large file (more than 2 blocks)
I uploaded a 1.52 GB txt file to HDFS for testing. With the default block size of 128 MB, the file has 13 blocks.


The partition count when reading an HDFS file can be checked with the following code:
val rdd = sc.textFile("hdfs://ambari.master.com/data/egaosu/txt/20180416.txt")
rdd.getNumPartitions
With this method, the rdd's default partition count equals the block count, whether sc.defaultParallelism is larger or smaller than the block count.
Note: it is called the default because textFile can also take a partition count, sc.textFile(path, minPartitions); the second argument specifies the number of partitions.
sc.defaultParallelism greater than the block count:

sc.defaultParallelism less than the block count:

When the partition count is given explicitly, there are two cases: when the argument is larger than the block count, the rdd gets the requested number of partitions; otherwise it gets the block count, as the sketch below shows.
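A minimal sketch of both cases against the 13-block file above (same path as in the read code earlier):

// Requested minPartitions greater than the block count: the argument wins.
sc.textFile("hdfs://ambari.master.com/data/egaosu/txt/20180416.txt", 20).getNumPartitions // 20
// Requested minPartitions smaller than the block count: the block count wins.
sc.textFile("hdfs://ambari.master.com/data/egaosu/txt/20180416.txt", 5).getNumPartitions  // 13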

3.1.2 Testing a small file (exactly 1 block)
In this case the default partition count is sc.defaultMinPartitions.


Changing the HDFS path in the read code above to hdfs://ambari.master.com/tmp/dkl/data.txt confirms this.


When the partition count is given explicitly, the rdd ends up with at least that many partitions; in this test it was either the requested value or the requested value + 1, as in the sketch below.
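A minimal sketch of this observation, using the single-block file above:

// Requesting 3 partitions on a 1-block file: the observed result was 3 or 4.
sc.textFile("hdfs://ambari.master.com/tmp/dkl/data.txt", 3).getNumPartitions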

3.2 spark.read.csv()
Large file (many blocks): df partition count = max(number of HDFS blocks of the file, sc.defaultParallelism)
Small file (1 block in this test): df partition count = 1, independent of sc.defaultParallelism. (A small file generally does not need many partitions anyway; a single partition can be processed quickly.)
3.2.1 Large file
The file is 8.98 GB, with 72 blocks.


Read code:
val df = spark.read.option("header", "true")
  .csv("hdfs://ambari.master.com/data/etc_t/etc_t_consumewaste201801.csv")
Partition count:
1. When sc.defaultParallelism is smaller than the block count, the partition count defaults to the block count: 72.

2. When sc.defaultParallelism is larger than the block count, the partition count defaults to sc.defaultParallelism.

3.2.2 Small file (1 block)
The partition count is 1.

Read code:
val df = spark.read.option("header", "true")
  .csv("hdfs://ambari.master.com/data/etc_t/etc_sale_desc.csv")

3.3 Partition count of a DataFrame created from a Hive table
Looking at the table's HDFS directory, the table's files consist of 10 blocks in total (2 × 5).

Test with the following code:
// Switch database
spark.sql("use route_analysis")
// Read the egaosu table from that database as a df
val df = spark.table("egaosu")
// Print the partition count of the df's underlying rdd
df.rdd.getNumPartitions
Testing shows that when sc.defaultParallelism is larger than the block count, the df's partition count equals sc.defaultParallelism; when it is smaller, the partition count falls somewhere between sc.defaultParallelism and the block count. I have not found the exact allocation strategy documented.



Creating the df with spark.sql("select * from egaosu") gives the same partition counts as above.
These are the ways I most often read HDFS files; for other methods (for example JSON files) you can test the defaults yourself, as I do not have a large enough HDFS file to test with.
4. Default partitions for non-HDFS (local) files
Local files rarely come up in real work; data usually comes from HDFS, relational databases, or collections in code.
4.1 sc.textFile()
4.1.1 Large file
The file is 1142 MB. Testing shows that local files are also divided into block-like slices, at a fixed 32 MB per slice (the 32 MB figure follows the post "Spark RDD的默认分区数（spark 2.1.0）").

So the default should be 36 partitions (1142 / 32 = 35.6875, rounded up).
When the count is given explicitly: if the argument is smaller than the slice count, the partition count is the slice count; if larger, it is the argument. That is, partition count = max(number of block-like slices of the local file, argument).
Read code:
val rdd = sc.textFile("file:///root/dkl/170102.txt")

4.1.2 Small file
The default partition count is sc.defaultMinPartitions.
Create a test file test.txt with a few lines of arbitrary content.


When the partition count is given explicitly, I expected the rdd to get exactly that many partitions. Testing shows it does within a certain range, but beyond some threshold the actual count comes out slightly larger than the requested value. I do not know whether this is a Spark bug or a deliberate strategy; it may simply reflect how Hadoop's FileInputFormat computes input splits, where leftover bytes can produce an extra split.
Read code:
val rdd = sc.textFile("file:///root/dkl/sh/test/test.txt")

4.2 spark.read.csv()
The rule is the same as for HDFS files (see 3.2), with 128 MB blocks. Note this differs from the txt case above, which slices at 32 MB.
4.2.1 Large file
The file is 1081 MB, so there are 9 blocks (1081 / 128, rounded up); partition count = max(number of block-like slices of the local file, sc.defaultParallelism).

Read code:
val df = spark.read.option("header", "true")
  .csv("file:///root/dir/etc_t/etc_t_consumewaste20180614-0616.csv")

4.2.2 Small file
The file is 6 KB with 1 block; the partition count is 1.

Read code:
val df = spark.read.option("header", "true")
  .csv("file:///root/dkl/sh/test/test.csv")

5. Relational databases
A df read from a relational database table has 1 partition. Taking MySQL as an example, I tested with a table of 10 million rows.


For the code to connect Spark to MySQL, and for setting the partition count of such a df, see the post "Spark Sql 连接mysql". A minimal sketch follows.
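In this sketch the JDBC URL, credentials, table and column names are placeholders, not from the original post. The default read comes back with 1 partition, while the standard JDBC options partitionColumn / lowerBound / upperBound / numPartitions split the read into parallel partitions:

// Default JDBC read: a single partition.
val single = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/test")
  .option("user", "root")
  .option("password", "password")
  .option("dbtable", "big_table")
  .load()
println(single.rdd.getNumPartitions) // 1

// Partitioned JDBC read: ranges of the numeric column id are read in parallel.
val partitioned = spark.read.format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/test")
  .option("user", "root")
  .option("password", "password")
  .option("dbtable", "big_table")
  .option("partitionColumn", "id") // must be a numeric column
  .option("lowerBound", "1")
  .option("upperBound", "10000000")
  .option("numPartitions", "10")
  .load()
println(partitioned.rdd.getNumPartitions) // 10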
6. Creating from in-code datasets
6.1 Creating an RDD with sc.parallelize()
The default partition count equals sc.defaultParallelism; when an argument is given, the partition count equals the argument.
6.2 Creating a DataFrame with spark.createDataFrame(data)
When data's length is smaller than sc.defaultParallelism, the partition count equals data's length; otherwise it equals sc.defaultParallelism, as the output of the code below shows.

6.3 Code
Below is the code behind the results above:
package com.dkl.leanring.spark

import org.apache.spark.sql.SparkSession

object TestPartitionNums {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TestPartitionNums")
      .master("local")
      .config("spark.default.parallelism", 8)
      .getOrCreate()
    val sc = spark.sparkContext
    println("Default parallelism: " + sc.defaultParallelism)
    println("sc.parallelize, default partitions: " + sc.parallelize(1 to 30).getNumPartitions)
    println("sc.parallelize, argument greater than sc.defaultParallelism: " +
      sc.parallelize(1 to 30, 100).getNumPartitions)
    println("sc.parallelize, argument less than sc.defaultParallelism: " +
      sc.parallelize(1 to 30, 3).getNumPartitions)
    var data = Seq((1, 2), (1, 2), (1, 2), (1, 2), (1, 2))
    println("spark.createDataFrame, data length less than sc.defaultParallelism, length: " +
      data.length + " partitions: " + spark.createDataFrame(data).rdd.getNumPartitions)
    data = Seq((1, 2), (1, 2), (1, 2), (1, 2), (1, 2), (1, 2),
      (1, 2), (1, 2), (1, 2), (1, 2), (1, 2), (1, 2))
    println("spark.createDataFrame, data length greater than sc.defaultParallelism, length: " +
      data.length + " partitions: " + spark.createDataFrame(data).rdd.getNumPartitions)
    spark.stop()
  }
}
7. Other operators that change the partition count
7.1 Partition count of 1
Several cases where the partition count is 1 (even when the default parallelism is greater than 1) have been covered above:
1. spark.read.csv() reading a small file
2. Reading a relational database table
Those are cases where data loaded from an external source arrives with a single partition. There are also transformations of a df or rdd whose result has a single partition:
1. df.limit(n), as sketched below
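A minimal sketch (df here is any DataFrame with more than one partition):

// limit() collapses the result into a single partition...
val limited = df.limit(1000)
println(limited.rdd.getNumPartitions) // 1
// ...so repartition() afterwards if downstream work should stay parallel.
println(limited.repartition(8).rdd.getNumPartitions) // 8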
7.2 Partition counts other than 1
df.distinct() yields 200 partitions; the 200 matches the default of spark.sql.shuffle.partitions, which sets the partition count after shuffle operations in Spark SQL.
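A minimal sketch showing where the 200 comes from and how to change it (df is any DataFrame):

// spark.sql.shuffle.partitions (default 200) sets the post-shuffle partition count.
spark.conf.set("spark.sql.shuffle.partitions", "50")
println(df.distinct().rdd.getNumPartitions) // 50 instead of 200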

8. Setting the partition count sensibly
Choosing a sensible number of partitions for your cluster and data size is essential for Spark performance tuning. From everything above: you can configure spark.default.parallelism, pass an explicit partition count to the reading APIs, and for the special operators that return 1 partition you can simply repartition() the result, as the combined sketch below shows.
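Putting the knobs together, a minimal sketch (the values are illustrative, not recommendations; the path and table name are the ones used earlier):

// 1. Configure the cluster-wide default parallelism.
val spark = SparkSession.builder()
  .appName("PartitionTuning")
  .config("spark.default.parallelism", 100)
  .getOrCreate()
// 2. Pass an explicit partition count where the API accepts one.
val rdd = spark.sparkContext.textFile("hdfs://ambari.master.com/data/egaosu/txt/20180416.txt", 100)
// 3. repartition() after operators that collapse the result to one partition.
val df = spark.table("egaosu").limit(1000).repartition(100)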
9. Summary
This article first covered sc.defaultParallelism and sc.defaultMinPartitions in various situations, then the partition counts when creating and transforming RDDs and DataFrames. Because Spark has many external data sources, and many methods and operators for creating and transforming RDDs and DataFrames, I only covered the situations I personally use most; it cannot cover everything, and you can test and summarize the other cases yourself. Also note that this article does not analyze the source code; it only summarizes observed rules. For the rules that remain unclear, I may dig into the source to explain why, if I find the time.