Join Operations in MapReduce
 
Author: manny    Source: NetEase blog    Published: 2014-12-26
 

In relational databases, join is an extremely common operation, and its optimization techniques have been refined to an extreme. In big-data environments the same need inevitably arises; for example, data analysis often requires joining data pulled from different sources. Unlike the traditional single-machine model, the MapReduce programming model over distributed storage comes with its own processing strategies and optimizations.

This article briefly introduces the most basic join method in Hadoop, which is also the foundation of many other methods and optimizations. The running example comes from Section 5.2 of Hadoop in Action. Suppose the two tables live in files named Customers and Orders, stored on HDFS in CSV format.

Customers:

1,Stephanie Leung,555-555-5555
2,Edward Kim,123-456-7890
3,Jose Madriz,281-330-8004
4,David Stork,408-555-0000

Orders:

3,A,12.95,02-Jun-2008
1,B,88.25,20-May-2008
2,C,32.00,30-Nov-2007
3,D,25.02,22-Jan-2009

Here the Customer ID is the join key, so the joined result (an inner join: customer 4, who has no orders, does not appear) is:

1,Stephanie Leung,555-555-5555,B,88.25,20-May-2008
2,Edward Kim,123-456-7890,C,32.00,30-Nov-2007
3,Jose Madriz,281-330-8004,A,12.95,02-Jun-2008
3,Jose Madriz,281-330-8004,D,25.02,22-Jan-2009

Recall the main phases of a MapReduce job in Hadoop: reading input splits, the map operation, the shuffle, the reduce operation, and finally writing the output. In essence, MapReduce breaks a large problem into small pieces and processes them separately. The obvious idea, then, is to route the tuples that share a key in the two tables to the same reduce node; the question is how. The answer is to set the key of the map output to the join key (the Customer ID in our example): during the shuffle phase, Hadoop's default partitioner then sends all map outputs with the same key to the same reduce node, where the join can be performed.
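To make the routing concrete, here is a minimal sketch of what the default partitioner does. Hadoop ships this as HashPartitioner; the class below is an illustrative reimplementation against the old mapred API, not the shipped source:

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Illustrative sketch of hash partitioning: equal keys produce equal hash
// codes, so all records carrying the same join key land in the same
// partition and are processed by the same reduce task.
public class HashPartitionerSketch<K, V> implements Partitioner<K, V> {

    public void configure(JobConf job) {
        // No configuration needed for plain hash partitioning.
    }

    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the partition index is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}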

This approach is called a Repartition Join; because the join itself happens in the reduce phase, it is also a Reduce-side Join. The datajoin package under Hadoop's contrib directory implements exactly this method.

The section above introduced the basic idea of the Repartition Join. Practice is the best teacher, and concrete implementations are always full of details. Below we deepen our understanding by analyzing the source code, specifically the datajoin code shipped with Hadoop 0.20.2; other versions may differ, but we set that aside here.

Looking at the source directory, seven classes are implemented:

ArrayListBackIterator.java
DataJoinJob.java
DataJoinMapperBase.java
DataJoinReducerBase.java
JobBase.java
ResetableIterator.java
TaggedMapOutput.java

The source is short and fairly simple, so let us analyze the key points. We discussed above how the map output key is chosen; in the implementation, the map output value also needs attention. When the join is performed at a reduce node, we must know which table each participating tuple came from. The solution is to embed a tag in the map output value; in our example, the tags of the two tables could be customer and order, respectively. (Note: in this particular example, the reduce stage could also infer the source by inspecting each tuple's structure directly.) This is the purpose of TaggedMapOutput.java. As intermediate Hadoop data, it must implement the Writable interface, as shown below:

public abstract class TaggedMapOutput implements Writable {

    protected Text tag;

    public TaggedMapOutput() {
        this.tag = new Text("");
    }

    public Text getTag() {
        return tag;
    }

    public void setTag(Text tag) {
        this.tag = tag;
    }

    // Subclasses expose the wrapped record (e.g. a line of text).
    public abstract Writable getData();

    // Deep copy via a serialization round trip.
    public TaggedMapOutput clone(JobConf job) {
        return (TaggedMapOutput) WritableUtils.clone(this, job);
    }
}

Next, let us look at the relevant methods in DataJoinMapperBase:

protected abstract TaggedMapOutput generateTaggedMapOutput(Object value);  
protected abstract Text generateGroupKey(TaggedMapOutput aRecord);

These two methods must be implemented by subclasses. As mentioned above, the join key of the two tables serves as the map output key; the second method does exactly that, producing a Text key, which the framework simply calls the GroupKey. The map method is therefore straightforward:

public void map(Object key, Object value, OutputCollector output,
                Reporter reporter) throws IOException {
    if (this.reporter == null) {
        this.reporter = reporter;
    }
    addLongValue("totalCount", 1); // job-level counter maintained in JobBase
    TaggedMapOutput aRecord = generateTaggedMapOutput(value);
    if (aRecord == null) {
        addLongValue("discardedCount", 1);
        return;
    }
    Text groupKey = generateGroupKey(aRecord);
    if (groupKey == null) {
        addLongValue("nullGroupKeyCount", 1);
        return;
    }
    // Emit (join key, tagged record); the shuffle groups records by join key.
    output.collect(groupKey, aRecord);
    addLongValue("collectedCount", 1);
}

That covers the map operation; next comes the reduce phase. See the DataJoinReducerBase class, whose reduce method is essentially:

public void reduce(Object key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
    if (this.reporter == null) {
        this.reporter = reporter;
    }
    // Re-group the tagged values by their tag (i.e. by source table).
    SortedMap<Object, ResetableIterator> groups = regroup(key, values, reporter);
    Object[] tags = groups.keySet().toArray();
    ResetableIterator[] groupValues = new ResetableIterator[tags.length];
    for (int i = 0; i < tags.length; i++) {
        groupValues[i] = groups.get(tags[i]);
    }
    // Perform the actual join across the per-table groups.
    joinAndCollect(tags, groupValues, key, output, reporter);
    addLongValue("groupCount", 1);
    for (int i = 0; i < tags.length; i++) {
        groupValues[i].close();
    }
}

Here groups maps each tag to an iterator over its tuples. For example, on the reduce node handling Customer ID 3, tags = {"Customers", "Orders"}, and groupValues holds iterators over {"3,Jose Madriz,281-330-8004"} and {"3,A,12.95,02-Jun-2008", "3,D,25.02,22-Jan-2009"}. Ultimately, the join of the tuples is performed in:

protected abstract TaggedMapOutput combine(Object[] tags, Object[] values);  

This method is implemented by subclasses.
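Conceptually, the framework drives combine() with the cross product of the per-table groups: for every combination of one tuple per tag, combine() is called to merge that combination into one joined record. Below is a minimal, Hadoop-free sketch of that cross product for the Customer ID 3 example; it is plain Java for illustration only, not the actual joinAndCollect source, which handles the general n-table case:

import java.util.Arrays;
import java.util.List;

public class CrossProductSketch {
    public static void main(String[] args) {
        // The tuples sharing join key 3 in the example above.
        List<String> customers = Arrays.asList("3,Jose Madriz,281-330-8004");
        List<String> orders = Arrays.asList(
                "3,A,12.95,02-Jun-2008",
                "3,D,25.02,22-Jan-2009");

        // One joined record per (customer, order) combination.
        for (String c : customers) {
            for (String o : orders) {
                // Drop the duplicated join key from the second tuple,
                // as the book's combine() does with line.split(",", 2).
                String joined = c + "," + o.split(",", 2)[1];
                System.out.println(joined);
            }
        }
    }
}

Running this prints exactly the two joined rows for Customer ID 3 shown earlier.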

Below is one implementation provided in Hadoop in Action. Note how MapClass.generateInputTag derives the tag from the input file name (the part before the first "-"), so the tag identifies the source table:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.contrib.utils.join.DataJoinMapperBase;
import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class DataJoin extends Configured implements Tool {

    public static class MapClass extends DataJoinMapperBase {

        // The tag is the input file name up to the first '-'.
        protected Text generateInputTag(String inputFile) {
            String datasource = inputFile.split("-")[0];
            return new Text(datasource);
        }

        // The GroupKey is the first CSV field: the Customer ID.
        protected Text generateGroupKey(TaggedMapOutput aRecord) {
            String line = ((Text) aRecord.getData()).toString();
            String[] tokens = line.split(",");
            String groupKey = tokens[0];
            return new Text(groupKey);
        }

        protected TaggedMapOutput generateTaggedMapOutput(Object value) {
            TaggedWritable retv = new TaggedWritable((Text) value);
            retv.setTag(this.inputTag);
            return retv;
        }
    }

    public static class Reduce extends DataJoinReducerBase {

        // Merge one tuple per table into a single joined record;
        // returning null discards a combination (inner-join semantics).
        protected TaggedMapOutput combine(Object[] tags, Object[] values) {
            if (tags.length < 2) return null;
            String joinedStr = "";
            for (int i = 0; i < values.length; i++) {
                if (i > 0) joinedStr += ",";
                TaggedWritable tw = (TaggedWritable) values[i];
                String line = ((Text) tw.getData()).toString();
                // Drop the duplicated join key from every tuple.
                String[] tokens = line.split(",", 2);
                joinedStr += tokens[1];
            }
            TaggedWritable retv = new TaggedWritable(new Text(joinedStr));
            retv.setTag((Text) tags[0]);
            return retv;
        }
    }

    public static class TaggedWritable extends TaggedMapOutput {

        private Writable data;

        // No-arg constructor added to the book's listing: Hadoop instantiates
        // Writables by reflection during deserialization (this assumes the
        // wrapped data is always Text, as it is in this job).
        public TaggedWritable() {
            this(new Text(""));
        }

        public TaggedWritable(Writable data) {
            this.tag = new Text("");
            this.data = data;
        }

        public Writable getData() {
            return data;
        }

        public void write(DataOutput out) throws IOException {
            this.tag.write(out);
            this.data.write(out);
        }

        public void readFields(DataInput in) throws IOException {
            this.tag.readFields(in);
            this.data.readFields(in);
        }
    }

    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        JobConf job = new JobConf(conf, DataJoin.class);
        Path in = new Path(args[0]);
        Path out = new Path(args[1]);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);
        job.setJobName("DataJoin");
        job.setMapperClass(MapClass.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormat(TextInputFormat.class);
        job.setOutputFormat(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(TaggedWritable.class);
        job.set("mapred.textoutputformat.separator", ",");
        JobClient.runJob(job);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(),
                                 new DataJoin(),
                                 args);
        System.exit(res);
    }
}

This final part describes how to complete the join on the map side. The reduce-side join discussed above is flexible and is the natural first approach, but its biggest problem is performance: large amounts of intermediate data must travel over the network from map nodes to reduce nodes, so it is relatively inefficient, and much of what is shuffled contributes nothing to the final join result. Now consider a scenario in which one table is so small that it fits entirely in memory. We can then use Hadoop's DistributedCache mechanism to ship the smaller table to every map node, so the join can be completed entirely in the map phase. Intuitively, DistributedCache can be viewed as a global read-only space holding data that needs to be shared; see the Hadoop documentation for details, as we do not discuss it in depth here.

The implementation source is below; the principle is very straightforward:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Hashtable;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

@SuppressWarnings("deprecation")
public class DataJoinDC extends Configured implements Tool {

    private final static String inputa = "hdfs://m100:9000/joinTest/Customers";
    private final static String inputb = "hdfs://m100:9000/joinTest/Orders";
    private final static String output = "hdfs://m100:9000/joinTest/output";

    public static class MapClass extends MapReduceBase
            implements Mapper<Text, Text, Text, Text> {

        // In-memory copy of the smaller table: Customer ID -> rest of the row.
        private Hashtable<String, String> joinData = new Hashtable<String, String>();

        @Override
        public void configure(JobConf conf) {
            try {
                Path[] cacheFiles = DistributedCache.getLocalCacheFiles(conf);
                if (cacheFiles != null && cacheFiles.length > 0) {
                    String line;
                    String[] tokens;
                    BufferedReader joinReader = new BufferedReader(
                            new FileReader(cacheFiles[0].toString()));
                    try {
                        while ((line = joinReader.readLine()) != null) {
                            tokens = line.split(",", 2);
                            joinData.put(tokens[0], tokens[1]);
                        }
                    } finally {
                        joinReader.close();
                    }
                }
            } catch (IOException e) {
                System.err.println("Exception reading DistributedCache: " + e);
            }
        }

        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                        Reporter reporter) throws IOException {
            // for (String t : joinData.keySet()) {
            //     output.collect(new Text(t), new Text(joinData.get(t)));
            // }
            String joinValue = joinData.get(key.toString());
            if (joinValue != null) {
                output.collect(key, new Text(value.toString() + "," + joinValue));
            }
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        DistributedCache.addCacheFile(new Path(inputa).toUri(), conf);
        JobConf job = new JobConf(conf, DataJoinDC.class);
        Path in = new Path(inputb);
        Path out = new Path(output);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);
        job.setJobName("DataJoin with DistributedCache");
        job.setMapperClass(MapClass.class);
        job.setNumReduceTasks(0); // map-only job: the join happens in map()
        job.setInputFormat(KeyValueTextInputFormat.class);
        job.setOutputFormat(TextOutputFormat.class);
        job.set("key.value.separator.in.input.line", ",");
        JobClient.runJob(job);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new DataJoinDC(), args);
        System.exit(res);
    }
}

The code above follows the listing in Hadoop in Action; here I treat the Customers table as the smaller one and place it in the DistributedCache.

One thing needs attention here:

DistributedCache.addCacheFile(new Path(inputa).toUri(), conf);  

This call must come before the job is initialized; otherwise the file cannot be found in map(). The reason is that when the job is initialized, the Configuration object passed in is copied and handed to the JobContext, so anything added to the original Configuration afterwards is invisible to the job.
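A minimal sketch of the correct ordering, using the inputa path from the listing above:

Configuration conf = getConf();
// Correct: register the cache file BEFORE constructing the JobConf,
// because new JobConf(conf, ...) takes a copy of conf; anything added
// to conf after this point is not seen by the running job.
DistributedCache.addCacheFile(new Path(inputa).toUri(), conf);
JobConf job = new JobConf(conf, DataJoinDC.class);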

   