This article walks through configuring Spark 2.1.0 in standalone mode and submitting a jar with spark-submit. The steps are short and concrete, and by the end you should be able to reproduce them on your own cluster.
Configure spark-env.sh:

export JAVA_HOME=/apps/jdk1.8.0_181
export SPARK_MASTER_HOST=bigdata00
export SPARK_MASTER_PORT=7077

List the worker nodes in the slaves file:

bigdata01
bigdata02
bigdata03

Start the spark shell against the standalone master:

./spark-shell --master spark://bigdata00:7077 --executor-memory 512M

Run a WordCount from the spark shell:

scala> sc.textFile("hdfs://bigdata00:9000/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect

Result:

res3: Array[(String, Int)] = Array((this,1), (is,4), (girl,3), (love,1), (will,1), (day,1), (boreing,1), (my,1), (miss,2), (test,2), (forget,1), (spark,2), (soon,1), (most,1), (that,1), (a,2), (afternonn,1), (i,3), (might,1), (of,1), (today,2), (Good,1), (for,1), (beautiful,1), (time,1), (and,1), (the,5))
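One step the walkthrough skips is actually bringing the standalone cluster up after editing spark-env.sh and slaves. A minimal sketch, run from the Spark installation directory on bigdata00 and assuming Spark is installed at the same path on every node (the sbin scripts are the standard Spark 2.x ones):

sbin/start-all.sh        # starts the master on this host plus one worker per entry in conf/slaves
# or start the pieces individually:
sbin/start-master.sh
sbin/start-slaves.sh

If everything came up, the master web UI (port 8080 by default) should list bigdata01, bigdata02 and bigdata03 as ALIVE workers before you point spark-shell at spark://bigdata00:7077.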
The driver main class:

package hgs.sparkwc

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    // pass the SparkConf in, otherwise setAppName above has no effect
    val context = new SparkContext(conf)
    context.textFile(args(0), 1)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2)
      .saveAsTextFile(args(1))
    context.stop()
  }
}

The pom.xml of the project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>hgs</groupId>
  <artifactId>sparkwc</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <name>sparkwc</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.11.8</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <archive>
            <manifest>
              <!-- the main class run when this jar is executed -->
              <mainClass>hgs.sparkwc.WordCount</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <!-- must be exactly this value -->
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
            <configuration>
              <args>
                <!-- <arg>-make:transitive</arg> -->
                <arg>-dependencyfile</arg>
                <arg>${project.build.directory}/.scala_dependencies</arg>
              </args>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.18.1</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <!-- If you hit classpath issues such as NoClassDefFoundError, ... -->
          <!-- <useManifestOnlyJar>false</useManifestOnlyJar> -->
          <includes>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
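Before submitting, the fat jar has to be built. A minimal sketch of the build step, assuming the pom.xml above sits in the project root; because the make-assembly execution is bound to the package phase, a plain package run already produces the assembly (the exact jar name below is an assumption following the usual artifactId-version-jar-with-dependencies pattern):

mvn clean package
# or invoke the assembly plugin directly, as the next step does:
mvn assembly:assembly
# expected artifact (name assumed from the artifactId/version above):
# target/sparkwc-1.0.0-jar-with-dependencies.jar

The /home/sparkwc.jar used in the submit command further down suggests the jar was simply renamed when it was copied to the server.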
When building the assembly (assembly:assembly) the following error shows up:

scalac error: bad option: '-make:transitive'

The cause is the <arg>-make:transitive</arg> argument in the scala-maven-plugin configuration; commenting out that line (as in the pom above) is enough. The answers found online suggest either deleting <arg>-make:transitive</arg> or adding this dependency:

<dependency>
  <groupId>org.specs2</groupId>
  <artifactId>specs2-junit_${scala.compat.version}</artifactId>
  <version>2.4.16</version>
  <scope>test</scope>
</dependency>

Finally, submit the job on the server:

./spark-submit --master spark://bigdata00:7077 --executor-memory 512M --total-executor-cores 3 /home/sparkwc.jar hdfs://bigdata00:9000/words hdfs://bigdata00:9000/wordsout2
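To verify the job's output after spark-submit finishes, the directory written by saveAsTextFile can be read back with the HDFS CLI; a short sketch using the paths from the submit command above:

hdfs dfs -ls /wordsout2
hdfs dfs -cat /wordsout2/part-*    # one (word,count) tuple per line, ordered by count ascending because of sortBy(_._2)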
That covers the Spark 2.1.0 standalone configuration and how to submit a jar with spark-submit. Hopefully you picked up something useful along the way; for more on this topic, follow the 编程网 featured channel.