Editor's note: This is an introductory article that explains what Zeppelin is and how its Interpreter and SparkInterpreter work. It comes from 博客园 and was edited and recommended by Anna of 火龙果软件.
Zeppelin Overview
Zeppelin is a web-based notebook for interactive data analytics and visualization. On the back end it can connect to a variety of data-processing engines, such as Spark and Hive, and it supports multiple languages: Scala (Apache Spark), Python (Apache Spark), SparkSQL, Hive, Markdown, Shell, and more. This article focuses on how Interpreter and SparkInterpreter are implemented in Zeppelin.
Installation and Usage
References
How It Works
Interpreter
The most central concept in Zeppelin is the Interpreter. An interpreter is a plug-in that lets users work with a specific language or data-processing backend. Every Interpreter belongs to an InterpreterGroup, and Interpreters in the same InterpreterGroup can reference each other; for example, SparkSqlInterpreter can reference SparkInterpreter to obtain the SparkContext because they belong to the same InterpreterGroup. Interpreters implemented so far include the Spark, Python, SparkSQL, JDBC, Markdown, and Shell interpreters. The diagram below, from the Zeppelin website, illustrates how Interpreters work.

The most important methods in the Interpreter interface are open, close, and interpret; in addition there are cancel, getProgress, completion, and others.
open performs initialization and is called only once. close releases resources and is likewise called only once. interpret runs a piece of code synchronously and returns the result. cancel is an optional method used to abort a running interpret call. getProgress reports interpret's progress as a percentage. completion returns a list of completion candidates based on the cursor position; implementing it enables auto-completion.
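To make the lifecycle above concrete, here is a minimal sketch of such an interface and a toy implementation. This is not Zeppelin's real org.apache.zeppelin.interpreter.Interpreter API; the SimpleInterpreter interface and the EchoInterpreter class below are simplified illustrations of the open/interpret/close contract described in the text.

```java
import java.util.Collections;
import java.util.List;

// Simplified stand-in for the interpreter contract described above.
interface SimpleInterpreter {
    void open();                                      // initialization, called once
    void close();                                     // release resources, called once
    String interpret(String code);                    // run code synchronously, return result
    void cancel();                                    // optional: abort a running interpret
    int getProgress();                                // progress as a 0-100 percentage
    List<String> completion(String buf, int cursor);  // completion candidates at cursor
}

// A toy interpreter that just echoes its input back.
class EchoInterpreter implements SimpleInterpreter {
    private boolean opened = false;

    public void open() { opened = true; }
    public void close() { opened = false; }

    public String interpret(String code) {
        if (!opened) throw new IllegalStateException("open() must be called first");
        return "echo: " + code;  // a real interpreter would evaluate the code here
    }

    public void cancel() { /* nothing to abort in this toy example */ }
    public int getProgress() { return 100; }  // trivially complete
    public List<String> completion(String buf, int cursor) {
        return Collections.emptyList();
    }
}

public class InterpreterDemo {
    public static void main(String[] args) {
        EchoInterpreter it = new EchoInterpreter();
        it.open();
        System.out.println(it.interpret("1 + 1"));  // prints "echo: 1 + 1"
        it.close();
    }
}
```

The same open-once / interpret-many / close-once sequence is what the Zeppelin server drives for each real interpreter.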
SparkInterpreter
In the open method, the SparkContext, SQLContext, and ZeppelinContext are initialized. The currently supported master modes are:
local[*] in local mode
spark://master:7077 in standalone cluster
yarn-client in Yarn client mode
mesos://host:5050 in Mesos cluster
Note that on a Yarn cluster only client mode is supported:
if (isYarnMode()) {
  conf.set("master", "yarn");
  conf.set("spark.submit.deployMode", "client");
}
The interpret method executes code one line at a time (split on \n); under the hood it calls Spark's SparkILoop to execute the lines one by one, much like the spark-shell implementation. A "line" here is a logical line: if the next physical line starts with "." (but not ".." or "./"), it is executed together with the current line. The key code is:
scala.tools.nsc.interpreter.Results.Result res = null;
try {
  res = interpret(incomplete + s);
} catch (Exception e) {
  sc.clearJobGroup();
  out.setInterpreterOutput(null);
  logger.info("Interpreter exception", e);
  return new InterpreterResult(Code.ERROR,
      InterpreterUtils.getMostRelevantMessage(e));
}
r = getResultCode(res);
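The logical-line rule described above (a physical line beginning with "." but not ".." or "./" is joined to the previous line) can be sketched as follows. This is an illustrative reimplementation of the grouping behavior, not Zeppelin's actual code; the LogicalLines class and group method are hypothetical names.

```java
import java.util.ArrayList;
import java.util.List;

public class LogicalLines {
    // Group physical lines into logical lines: a line whose trimmed text starts
    // with "." (but not ".." or "./") is a method-chain continuation and is
    // appended to the previous logical line, mimicking how code is fed to SparkILoop.
    static List<String> group(String code) {
        List<String> out = new ArrayList<>();
        for (String line : code.split("\n")) {
            String t = line.trim();
            boolean continuation = t.startsWith(".")
                    && !t.startsWith("..") && !t.startsWith("./");
            if (continuation && !out.isEmpty()) {
                out.set(out.size() - 1, out.get(out.size() - 1) + "\n" + line);
            } else {
                out.add(line);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String code = "sc.parallelize(1 to 10)\n  .map(_ * 2)\n  .sum()\nprintln(\"done\")";
        // The three chained lines form one logical line; println is a second one.
        for (String logical : group(code)) {
            System.out.println("---\n" + logical);
        }
    }
}
```

This is why a multi-line method chain in a notebook paragraph is evaluated as a single expression rather than failing line by line.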
Key methods of SparkInterpreter
close stops the SparkContext.
cancel simply calls the SparkContext's cancellation API: sc.cancelJobGroup(getJobGroup(context)).
getProgress uses the SparkContext to get the total number of tasks across all stages and the number of completed tasks; the progress is the ratio of completed tasks to total tasks.
Question 1: can multiple SparkContexts exist?
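The getProgress calculation described above (completed tasks divided by total tasks across all stages) can be sketched like this. The Stage record is an illustrative stand-in; the real implementation reads these counts from the SparkContext's job and stage data.

```java
public class ProgressDemo {
    // Illustrative stand-in for per-stage task counts.
    record Stage(int numTasks, int completedTasks) {}

    // Progress = completed tasks / total tasks across all stages, as a percentage.
    static int getProgress(Stage[] stages) {
        int total = 0, completed = 0;
        for (Stage s : stages) {
            total += s.numTasks();
            completed += s.completedTasks();
        }
        if (total == 0) return 0;  // no work scheduled yet
        return (int) (100.0 * completed / total);
    }

    public static void main(String[] args) {
        Stage[] stages = { new Stage(10, 10), new Stage(20, 5) };
        // 15 of 30 tasks done across both stages
        System.out.println(getProgress(stages) + "%");  // prints "50%"
    }
}
```
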
The Interpreter supports three binding modes: 'shared', 'scoped', and 'isolated'. In scoped mode the Spark interpreter creates a separate compiler for each notebook but there is only one SparkContext; in isolated mode a separate SparkContext is created for each notebook.
Question 2: in isolated mode, are the multiple SparkContexts in the same process? When one server process starts multiple Spark interpreters, it starts multiple SparkContexts. Alternatively, a Spark interpreter can be started in a separate JVM.
Zeppelin: Strengths and Weaknesses
Strengths
1. Provides both RESTful and WebSocket interfaces.
2. With the Spark interpreter, users simply program against Spark's API and can manipulate the SparkContext themselves.
3. Users cannot stop the SparkContext on their own, so the SparkContext can stay resident.
4. Ships with many interpreters and is highly extensible; adding your own interpreter is straightforward.
5. Provides several data-visualization modules, making data easy to present.
Weaknesses
1. There is no way to run a Spark job by submitting a jar.
2. Execution is synchronous only, so clients may have to wait a long time.